Dataset schema: id (string, 179 distinct values) | question (string, 8.75k to 85.9k characters) | answer (dict)
2001.06354
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: Which method for integration peforms better ensemble or consensus dropout fusion with shared parameters? Context: <<<Title>>> Modality-Balanced Models for Visual Dialogue <<<Abstract>>> The Visual Dialog task requires a model to exploit both image and conversational context information to generate the next response to the dialogue. However, via manual analysis, we find that a large number of conversational questions can be answered by only looking at the image without any access to the context history, while others still need the conversation context to predict the correct answers. We demonstrate that due to this reason, previous joint-modality (history and image) models over-rely on and are more prone to memorizing the dialogue history (e.g., by extracting certain keywords or patterns in the context information), whereas image-only models are more generalizable (because they cannot memorize or extract keywords from history) and perform substantially better at the primary normalized discounted cumulative gain (NDCG) task metric which allows multiple correct answers. Hence, this observation encourages us to explicitly maintain two models, i.e., an image-only model and an image-history joint model, and combine their complementary abilities for a more balanced multimodal model. We present multiple methods for this integration of the two models, via ensemble and consensus dropout fusion with shared parameters. Empirically, our models achieve strong results on the Visual Dialog challenge 2019 (rank 3 on NDCG and high balance across metrics), and substantially outperform the winner of the Visual Dialog challenge 2018 on most metrics. <<</Abstract>>> <<<Introduction>>> When we pursue conversations, context is important to keep the topic consistent or to answer questions which are asked by others, since most new utterances are made conditioned on related mentions or topic clues in the previous utterances in the conversation history. However, conversation history is not necessarily needed for all interactions, for instance, someone can change topics during a conversation and can ask a sudden new question which is not related to the context. This is similar to the setup in the Visual Dialog task BIBREF0, in which one agent (say the `asker') keeps asking questions and the other one (say the `answerer') keeps answering the questions based on an image for multiple rounds. The asker can ask a question from the conversation context. Then the answerer should answer the question by considering the conversation history as well as the image information, e.g., if the asker asks a question, “Are they in pots?” (Q4 in Fig. FIGREF1), the answerer should find a clue in the past question-answer pairs “Is there a lot of plants?” - “I only see 2.” (Q3-A3 in Fig. FIGREF1) and figure out what `they' means first to answer the question correctly. On the other hand, some questions in this task are independent of the past conversation history, e.g., “Can you see a building?” (Q8 in Fig. FIGREF1), where the answerer does not need to look at conversation context and can answer the question only based on the image information. We first conduct a manual investigation on the Visual Dialog dataset (VisDial) to figure out how many questions can be answered only with images and how many of them need conversation history to be answered. 
This investigation shows that around 80% of the questions can be answered only with images. Moreover, on the model side, we verify this observation by building a model that uses only images to answer questions. As expected, this image-only model works very well on the primary task metric of NDCG (evaluated on dense annotations which consider multiple similar answers as correct ones with similarity weights on them) without any help from the conversation history (see Table TABREF40). However, we find that the image-only model does not get higher scores on other metrics such as mean reciprocal rank (MRR), recall@k, and mean rank (evaluated on single ground-truth answers). Because the image-only model does not use any conversation-history information, we hypothesize that this scoring behavior might be related to the amount of history information available, and hence we also conduct additional experiments by building an image-history joint model and train it with different lengths of history features. From these experiments, we see a tendency that a model with the less amount of history features gets a higher NDCG score (with lower values for other metrics), whereas a model with more history information has the opposite behavior. Previously, BIBREF1 argued that the Visdial dataset has an answer bias such that a simple model without vision or dialogue history could achieve reasonable results. However, our motivation is different from theirs. The purpose of our paper is to find characteristics of existing multimodal models on the dataset (which are biased towards the language information in the dialogue history), analyze behaviors of these models on different metrics, as well as employ this analysis to build better, less biased models that achieve more balanced scores. Since NDCG measures more of a model's generalization ability (because it allows multiple similar answers), while the other metrics measure a model's preciseness, we interpret the results of these above experiments to mean that a model with more history information tends to predict correct answers by memorizing keywords or patterns in the history while a model with less history information (i.e., the image-only model) is better at generalization by avoiding relying on such exact-match extracted information. We think that an ideal model should have more balanced behavior and scores over all the metrics rather than having higher scores only for a certain metric and such a model could be considered as the one with both preciseness and generalization. To this end, we propose two models, an image-only and an image-history-joint model. We analyze that the answers these two models produce are complementarily good, and better at different metrics. Hence, we integrate these two models (image-only and image-history-joint) in two ways: consensus-dropout-fusion and ensemble. Our final consensus-dropout-fusion ensemble model scores strongly on both NDCG and recall metrics for the VisDial v1.0 test dataset, and these scores outperform the state-of-the-art of the Visual Dialog challenge 2018 on most metrics. Also, our model shows competitive balanced results in the Visual Dialog challenge 2019 (test-std leaderboard rank 3 based on NDCG metric and high balance across metrics). <<</Introduction>>> <<<Related Work>>> <<<Visual Question Answering (VQA)>>> Visual question answering is a task in which a machine is asked to answer a question about an image. 
The recent success of deep neural networks and massive data collection BIBREF2 has made the field more active. One of the most challenging parts of the task is to ground the meaning of text on visual evidence. Co-attention BIBREF3 is proposed to integrate information from different modalities (i.e., image and language) and more advanced approaches have shown good performance BIBREF4, BIBREF5, BIBREF6. A bilinear approach has also been proposed to replace simple addition or concatenation approaches for fusing the two modalities BIBREF7, BIBREF8, BIBREF9, BIBREF10. In our work, we employ multi-modal factorized bilinear pooling (MFB) BIBREF11 to fuse a question and image-history features. <<</Visual Question Answering (VQA)>>> <<<Visual Dialog>>> The Visual Dialog task BIBREF0 can be seen as an extended version of the VQA task, with multiple rounds of sequential question-answer pairs as dialog history, including an image caption, which should be referred to before answering a given question. This conversation history can help a model better predict correct answers by giving direct or indirect clues for the answers, or proper context for co-reference resolution. However, having conversation history also means that a model should extract relevant information from the history and introduces another challenge to the task. Many approaches have been proposed to handle this challenge. BIBREF12 tries to extract the clues from history recursively while BIBREF13 and BIBREF14 employ co-attention to fuse visual, history, and question features. In our work, we employ BIBREF15's approach to fuse visual and history features before they are attended by a question. Our joint model with fused features has much information from history and we find that it is in complementary relation with our image-only model. Thus, we combine the two models to take the most appropriate information from each model to answer questions. <<</Visual Dialog>>> <<</Related Work>>> <<<Models>>> In the Visual Dialog task BIBREF0, two agents interact via natural language with respect to an image. The asker keeps asking about the image given an image caption without seeing the image. The other agent (i.e., answerer) keeps answering the questions by viewing the image. They conduct multiple rounds of conversation accumulating question-answer pairs which are called `history' (Figure FIGREF1). The full history $\textrm {HISTORY}$ consists of question-answer pairs as well as an image caption which describes the given image, such that at a current time point $t$, the previous history is $\textrm {HISTORY}_t = \lbrace C, (Q_{1},A_{1}), (Q_{2},A_{2}), ..., (Q_{t-1},A_{t-1}) \rbrace $, where $C$ is the image caption and $Q_{t-1}$ and $A_{t-1}$ are the question and answer at round $t-1$, respectively. Then, given a new current time-stamp question $Q_t$, the history $\textrm {HISTORY}_t$, and the image, the model has to rank 100 candidate answers from the answerer's perspective. <<<Features>>> Visual Features: For visual features, we use object features which are extracted from an image by using Faster R-CNN BIBREF16. The visual feature, $V_{rcnn} \in \mathbb {R}^{k \times d_{v}}$, is a matrix whose rows correspond to objects, where $k$ is the number of objects (k=36 in our experiment), $d_{v}$ is dimension size of visual feature ($d_{v}$ = 2048 for ResNet backbone). 
Question Features: The word sequence of a question at round $r$, $W_{q_{r}} = \lbrace w_{q_{r}1}, w_{q_{r}2},..., w_{q_{r}T_{q_r}}\rbrace $ is encoded via an LSTM-RNN BIBREF17, and, we take the last hidden state as a question representation: $q_{r} = h_{T_{q_{r}}}^{q_{r}}$, where $T_{q_{r}}$ is the length of the question at round $r$. History Features: History $H_r$ is a history feature at round $r$ encoded from concatenation of a question and a ground truth answer, such that where $T_{a_{r-1}}$ is the length of the answer of round $r-1$, and the length of history at round $r$ is $T_{h_{r}}=T_{q_{r-1}}+T_{a_{r-1}} $. The history $H_r$ is also encoded with an LSTM, We also take the last hidden state as history representation at round $r$: $H_r = h_{T_{h_r}}^{h_r}$. Note that the first history feature $H_1$ comes from the image caption $C$. <<</Features>>> <<<Image-Only Model>>> We first build a model which only uses visual features to answer questions. We employ a state-of-the-art `bottom-up and top-down' approach from BIBREF18, in which we apply the attention mechanism over detected object features. We also adopt the multi-modal factorized bilinear pooling (MFB) method BIBREF11 to calculate attention weights over the visual features with respect to a question feature. From projected visual features and a question feature, we obtain $z \in \mathbb {R}^{k \times d_{m}}$ by applying MFB: where $\textrm {Linear}_{d_v\times d}$ is a linear projection which projects points from a $d_v$-dimension space to a $d$-dimension space. where $M$, $N$ $\in \mathbb {R}^{d_{m} \times d \times m}$ are trainable parameters, $d$ is the dimension of projected visual features and a question feature, $d_m$ is dimension of the fused feature, and $m$ is the number of factors. ${1}_k$ $\in \mathbb {R}^k$ is a vector whose elements are all one. Following BIBREF11, we also apply the power normalization and $\ell _2$ normalization to obtain $\hat{z}_{r}$. After applying linear projection, the softmax operation is applied to get a weight vector $\alpha $: $\alpha _{r} = \textrm {softmax}(L\hat{z}_{r}^{\top })$. We then get a visual representation vector, $v_{r}$ by weighted summing the projected visual features: $v_{r} = \sum _{i=1}^k \alpha _{ri}V_i$, where $L \in \mathbb {R}^{1 \times d_m }$ is trainable parameter, and $V_i$ is the $i$-th row vector of visual feature matrix $V$. The visual representation vector and a question feature vector are combined with element-wise product after linear projection. After one more linear projection, we get the final feature, $f_{v_{r}}^{q_{r}}$ which is further used to rank answers. where $\textrm {fc}_*$ is an fully-connected layer. <<<Answer Selection>>> For each round, there are 100 candidate answers. The $l$-th answer at round $r$, is encoded in the same way as question and history. where $T_{a_{rl}}$ is the length of the $l$-th candidate answer. Scores for each candidate answer are calculated by dot product between fused feature $f_{v_r}^{q_r}$ and each candidate answer representation, $a_{rl}$: $s_{rl} = f_{v_r}^{q_r}\cdot a_{rl}$. <<</Answer Selection>>> <<</Image-Only Model>>> <<<Image-History Joint Model>>> We calculate the similarity matrix, $S_r \in \mathbb {R}^{k \times r} $ between visual and history features following BIBREF15. where $w_s \in \mathbb {R}^{3d}$ is trainable parameter and $H_j$ is the $j$-th row vector of the history feature $H_{1:r}$. 
From the similarity matrix, the new fused history representation is: Similarly, the new fused visual representation is: These fused features are then fed to the MFB module and attended over w.r.t. a question feature, respectively, following the same process as a visual feature in the image-only model. The weighted-summed features are combined with a question feature through element-wise product and concatenated together to produce the integrated representation: where $v_{r}^f$ and $h_{r}^f$ are weighted-sum of fused features with respect to a question feature. Figure FIGREF5 depicts the whole process of the joint model in this section. <<<Round Dropout>>> To prevent the model from over-relying on history information, we propose a novel dropout approach in which some rounds of history features are dropped out (Figure FIGREF17). To be specific, we randomly pick up to 3 rounds of history from entire history except image caption feature and throw them away. where $N_h^r$ is number of history features at round $r$ and $N_D^r$ is the number of history features to drop at round $r$. <<</Round Dropout>>> <<</Image-History Joint Model>>> <<<Combining Image-Only & Image-History Joint Models>>> Since each of our models has different abilities, we exploit their complementary abilities together by combining them in two ways. The first is our novel consensus dropout fusion which integrates the two models in training time. The other way is to build an ensemble model from the two models at test time. <<<Consensus Dropout Fusion>>> In order to integrate the image-only model and the image-history joint model into one model, we propose a novel integration method called consensus dropout fusion. Our consensus dropout fusion is the combination of a consensus method and an instance dropout method (Figure FIGREF23). <<<Consensus>>> We employ a consensus method in which logits from each model are added to produce the final logit following BIBREF19's approach. where $L_{I}$ and $L_{J}$ are the logit from image-only model and image-hitory joint model, respectively, and $L_{IJ}$ is the new logit obtained by adding the two logits. <<</Consensus>>> <<<Instance Dropout>>> To allow the image-only model to have a stronger effect producing more balanced results over all metrics, we apply dropout to instances of the logit of the joint model. To be specific, when we add two logits, we multiply $L_{J}$ by $I_{drop}$, where ${1}_{(N\times R)} \in \mathbb {R}^{(N\times R)}$ and ${1}_{d} \in \mathbb {R}^{d}$ are all-ones vectors of $(N\times R)$ and $d$ dimension, respectively. $N$ is the training batch size and $R$ is the length of rounds of the conversation history. The dropout mask, $\xi $, is calculated following BIBREF20's work. <<</Instance Dropout>>> <<</Consensus Dropout Fusion>>> <<<Ensemble>>> We also integrate our 2 models via an ensemble. We train each model separately and combine them at test time. To be specific, we take logits from the pre-trained models and select the answer with the highest sum of logits. <<</Ensemble>>> <<</Combining Image-Only & Image-History Joint Models>>> <<</Models>>> <<<Experimental Setup>>> <<<Dataset>>> We use the VisDial v1.0 BIBREF0 dataset to train our models, where one example has an image with its caption, 9 question-answer pairs, and follow-up questions and candidate answers for each round. At round $r$, the caption and the previous question-answer pairs become conversational context. 
The whole dataset is split into 123,287/2,000/8,000 images for train/validation/test, respectively. Unlike the images in the train and validation sets, the images in the test set have only one follow-up question and candidate answers and their corresponding conversational context. <<</Dataset>>> <<<Metrics>>> For evaluation, the Visual Dialog task employs four metrics. NDCG is the primary metric of the Visual Dialog Challenge which considers multiple similar answers as correct ones. The other three are MRR, recall@k, and mean rank where they only consider the rank of a single answer. Our experiments show the scores of NDCG and non-NDCG metrics from our image-only and joint models have a trade-off relationship due to their different ability (as shown in Sec.SECREF41) in completing Visual Dialog tasks: the image-only model has a high NDCG and low non-NDCG values while the joint model has a low NDCG and high non-NDCG values. <<</Metrics>>> <<<Training Details>>> In our models, the size of word vectors is 300, the dimension of visual feature is 2048, and hidden size of LSTM units which are used for encoders of questions, context history, and candidate answers is 512. We employ Adam BIBREF21 as the optimizer. We set the initial learning rate to 0.001 and decrease it by 0.0001 per epoch until 8th epoch and decay by 0.5 from 9th epoch on. For round dropout, we set the maximum number of history features to be dropped to 3 and we tune the p value to 0.25 for our instance dropout in the consensus dropout fusion module. Cross-entropy is used to calculate the loss. <<</Training Details>>> <<</Experimental Setup>>> <<<Analysis and Results>>> In this section, we first discuss how many questions are answered only from image and how many of them need image and history jointly to be answered by conducting a manual investigation. We find that a large portion of questions in the VisDial dataset can be answered by only using images. Next, to verify the observation from the manual investigation, we perform a follow-up experiment and find a trade-off relation between the amount of history features and the metric scoring trend of models. We then analyze the answers from two models (image-only and image-history joint model) and show they are in complementary relation. Lastly, we show each model can make up for the other by being combined in consensus dropout fusion or in an ensemble model. <<<Human Evaluation: Is Image Alone Enough?>>> We conduct a human evaluation on image, history, and question. To be specific, we randomly select 100 images (which leads to 1000 questions) from the validation set for the evaluation and count the number of questions which can be answered only with images and the number of questions which need conversation context to be answered (ground-truth answers are provided to check if the answers can be inferred given corresponding questions and images instead of providing all the 100 candidate answers). Two annotators conduct the experiment independently and questions on which both annotators mark as being able to be answered only with images are classified as only-image questions otherwise as need-history questions. The inter-annotation agreement (kappa) is 0.74. As shown in Table TABREF36, around 80% of the questions can be answered only from images. Conversely, this also implies that a model needs conversation context to better perform the task. 
However, as discussed in Sec.SECREF1, using only history is not enough either (only 1% of the questions can be answered) and thus history should be used jointly with images. Note that we consider a question with a pronoun as answerable only with an image if the pronoun can be inferred (co-reference) from the corresponding image (e.g., a question mentions `he' and the image has only one person who is a boy). <<</Human Evaluation: Is Image Alone Enough?>>> <<<Reduced Question-Answer Rounds>>> We next run our joint model with various lengths of history. To be specific, we make our joint model use only k previous history features to answer a question. As shown in Table TABREF40, there is a trade-off between the values of metrics and the number of history features. As the number of history features the joint model uses is increased, the score of NDCG is decreased while other metrics are increased. On the other hand, as the number of history features the joint model uses is decreased the score of NDCG is increased while other metrics are decreased. If we see the Visual Dialog primary task metric of NDCG as a barometer of the model's ability to generalize and the other metrics can be seen as an indicator of preciseness, this means that decreased size of history gives a model the ability of generalization at the cost of preciseness. From this tendency, the image-only model has the highest NDCG score. <<</Reduced Question-Answer Rounds>>> <<<Complementary Relation>>> If the image-only model is good at NDCG, can we exploit its ability by combining it with the joint model? To figure out this possibility, we compare each answer from the image-only model and the joint model. To be specific, for R@1, we list up the correct answers from each model and count answers which are in both sets, i.e., the intersection. From the intersection, we obtain the union of the two sets. For NDCG, there is not one single correct answer. So we roughly calculate the intersection by taking minimum values between the two models' scores and averaging them. As we can see in Table TABREF42, the intersections do not take the entire score of either model for both metrics. This could mean image-only and joint models have room to be improved by combining them together. <<</Complementary Relation>>> <<<Model Combination Results>>> Considering the complementary relation between image-only model and joint model, combining the two models would be a good approach to take the best from the both. So, we integrate these two models via two methods: consensus dropout fusion and ensemble (see Sec.SECREF26). <<<Consensus Dropout Fusion Results>>> As shown in Table TABREF46, consensus dropout fusion improves the score of NDCG by around 1.0 from the score of the joint model while still yielding comparable scores for other metrics. Unlike ensemble way, consensus dropout fusion does not require much increase in the number of model parameters. <<</Consensus Dropout Fusion Results>>> <<<Ensemble Model Results>>> As also shown in Table TABREF46, the ensemble model seems to take the best results from each model. Specifically, the NDCG score of the ensemble model is comparable to that of the image-only model and the scores of other metrics are comparable to those of the image-history joint model. From this experiment, we can confirm that the two models are in complementary relation. 
<<</Ensemble Model Results>>> <<</Model Combination Results>>> <<<Final Visual Dialog Test Results>>> For the evaluation on the test-standard dataset of VisDial v1.0, we try 6 image-only model ensemble and 6 consensus dropout fusion model ensemble. As shown in Table TABREF48, our two models show competitive results compared to the state-of-the-art on the Visual Dialog challenge 2018 (DL-61 was the winner of the Visual Dialog challenge 2018). Specifically, our image-only model shows much higher NDCG score (60.16). On the other hand, our consensus dropout fusion model shows more balanced results over all metrics while still outperforming on most evaluation metrics (NDCG, MRR, R@1, and R@5). Compared to results of the Visual Dialog challenge 2019, our models also show strong results. Although ReDAN+ BIBREF26 and MReaL–BDAI show higher NDCG scores, our consensus dropout fusion model shows more balanced results over metrics while still having a competitive NDCG score compared to DAN BIBREF25, with rank 3 based on NDCG metric and high balance rank based on metric average. <<<Ensemble on More Models>>> We also run an ensemble model from our image-only, joint, and consensus dropout fusion models (6 of each and total 18 models) and evaluate it on the test-standard dataset of the VisDial v1.0. This model's scores (NDCG: 59.90, MRR: 64.05, R@1: 50.28, R@5: 80.95, R@10: 90.60, Mean: 4.00) are in between our image-only ensemble model and our consensus dropout fusion ensemble model, i.e., this ensemble model has a higher NDCG than the consensus dropout fusion ensemble model and higher non-NDCG scores than the image-only ensemble model. This result shows that our image-only, joint, and consensus dropout fusion models make up for each other by being combined in an ensemble model as we expected. <<</Ensemble on More Models>>> <<</Final Visual Dialog Test Results>>> <<</Analysis and Results>>> <<<Ablation Study>>> Round Dropout: As shown in Table TABREF52, our round dropout (see Sec.SECREF24) improves the NDCG score by 1.2. A possible interpretation is that round dropout could help the model avoid from over-fitting to some patterns in the history features by intentionally dropping some of the features in the training session. Consensus Dropout Fusion and Dropout Rate: We run our consensus dropout fusion model (see Sec.SECREF27) with different instance dropout rates to figure out how the dropout rates affect the performance of the model. As shown in Table.TABREF53, as the dropout rate increases the NDCG score is also increased while scores of non-NDCG metrics are decreased. By changing the dropout rate, we can modulate the influence of each model (image-only and joint models) over the combined model. We choose a value of 0.25 for the dropout rate since it yields more balanced scores over all metrics. Ensemble Combination: We try different combinations from image-only and joint models to build ensemble models. The total number of models amounts to 3, i.e., image-only + image-only (I+I), joint + joint (J+J), and image-only + joint (I+J) ensemble models. As shown in Table TABREF54, scores of the I+J ensemble model are comparable to same-kind ensemble models (I+I and J+J). To be specific, for the NDCG metric, the I+J model outperforms the J+J model, while, for other metrics (MRR, recall@k, and mean rank), the I+J model outperforms the I+I model. 
This might imply that the balanced scores (i.e., high scores over all metrics) of the I+J model are from the complementary relation between image-only and image-history joint model. Output Examples: Due to space constraints and no supplementary allowed in AAAI rules, we provide detailed examples in this arxiv version's appendix, showing the coreference and memorization phenomena of the joint image-history model as well as the image-only model's example outputs on image-only questions. Examples of only-image questions, and the ranking lists of the image-history joint and image-only models are also provided. <<</Ablation Study>>> <<<Conclusion>>> We first showed that current multimodal models on the Visual Dialog task over-rely on the dialogue history, and relatedly, image-only and image-history joint models achieve complementary performance gains. Hence, to balance the best abilities from each model, we proposed two ways of combining them: consensus dropout fusion and ensemble. Our consensus dropout fusion and ensemble model achieve strong ranks on multiple leaderboards. Specifically, the models show higher scores than the state-of-the-art results of the Visual Dialog challenge 2018 and more balanced scores than highest ranked results of the Visual Dialog challenge 2019. Given the characteristics of the dataset and current model behaviors, a potential future direction is to combine the power of the two models dynamically, e.g., learn to select a proper model based on the question type. <<</Conclusion>>> <<</Title>>>
{ "references": [ "ensemble model" ], "type": "extractive" }
1910.08210
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: How does propose model model that capture three-way interactions? Context: <<<Title>>> RTFM: Generalising to Novel Environment Dynamics via Reading <<<Abstract>>> Obtaining policies that can generalise to new environments in reinforcement learning is challenging. In this work, we demonstrate that language understanding via a reading policy learner is a promising vehicle for generalisation to new environments. We propose a grounded policy learning problem, Read to Fight Monsters (RTFM), in which the agent must jointly reason over a language goal, relevant dynamics described in a document, and environment observations. We procedurally generate environment dynamics and corresponding language descriptions of the dynamics, such that agents must read to understand new environment dynamics instead of memorising any particular information. In addition, we propose txt2π, a model that captures three-way interactions between the goal, document, and observations. On RTFM, txt2π generalises to new environments with dynamics not seen during training via reading. Furthermore, our model outperforms baselines such as FiLM and language-conditioned CNNs on RTFM. Through curriculum learning, txt2π produces policies that excel on complex RTFM tasks requiring several reasoning and coreference steps. <<</Abstract>>> <<<Introduction>>> Reinforcement learning (RL) has been successful in a variety of areas such as continuous control BIBREF0, dialogue systems BIBREF1, and game-playing BIBREF2. However, RL adoption in real-world problems is limited due to poor sample efficiency and failure to generalise to environments even slightly different from those seen during training. We explore language-conditioned policy learning, where agents use machine reading to discover strategies required to solve a task, thereby leveraging language as a means to generalise to new environments. Prior work on language grounding and language-based RL (see BIBREF3 for a recent survey) are limited to scenarios in which language specifies the goal for some fixed environment dynamics BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, or the dynamics of the environment vary and are presented in language for some fixed goal BIBREF9. In practice, changes to goals and to environment dynamics tend to occur simultaneously—given some goal, we need to find and interpret relevant information to understand how to achieve the goal. That is, the agent should account for variations in both by selectively reading, thereby generalising to environments with dynamics not seen during training. Our contributions are two-fold. First, we propose a grounded policy learning problem that we call (). In , the agent must jointly reason over a language goal, a document that specifies environment dynamics, and environment observations. In particular, it must identify relevant information in the document to shape its policy and accomplish the goal. To necessitate reading comprehension, we expose the agent to ever changing environment dynamics and corresponding language descriptions such that it cannot avoid reading by memorising any particular environment dynamics. We procedurally generate environment dynamics and natural language templated descriptions of dynamics and goals to produced a combinatorially large number of environment dynamics to train and evaluate . Second, we propose to model the joint reasoning problem in . 
We show that generalises to goals and environment dynamics not seen during training, and outperforms previous language-conditioned models such as language-conditioned CNNs and FiLM BIBREF10, BIBREF6 both in terms of sample efficiency and final win-rate on . Through curriculum learning where we adapt trained on simpler tasks to more complex tasks, we obtain agents that generalise to tasks with natural language documents that require five hops of reasoning between the goal, document, and environment observations. Our qualitative analyses show that attends to parts of the document relevant to the goal and environment observations, and that the resulting agents exhibit complex behaviour such as retrieving correct items, engaging correct enemies after acquiring correct items, and avoiding incorrect enemies. Finally, we highlight the complexity of in scaling to longer documents, richer dynamics, and natural language variations. We show that significant improvement in language-grounded policy learning is needed to solve these problems in the future. <<</Introduction>>> <<<Related Work>>> <<<Language-conditioned policy learning.>>> A growing body of research is learning policies that follow imperative instructions. The granularity of instructions vary from high-level instructions for application control BIBREF11 and games BIBREF5, BIBREF6 to step-by-step navigation BIBREF7. In contrast to learning policies for imperative instructions, BIBREF4, BIBREF9 infer a policy for a fixed goal using features extracted from high level strategy descriptions and general information about domain dynamics. Unlike prior work, we study the combination of imperative instructions and descriptions of dynamics. Furthermore, we require that the agent learn to filter out irrelevant information to focus on dynamics relevant to accomplishing the goal. <<</Language-conditioned policy learning.>>> <<<Language grounding.>>> Language grounding refers to interpreting language in a non-linguistic context. Examples of such context include images BIBREF12, games BIBREF13, BIBREF14, robot control BIBREF15, BIBREF16, and navigation BIBREF17. We study language grounding in interactive games similar to BIBREF11, BIBREF5 or BIBREF8, where executable semantics are not provided and the agent must learn through experience. Unlike prior work, we require grounding between an underspecified goal, a document of environment dynamics, and world observations. In addition, we focus on generalisation to not only new goal descriptions but new environments dynamics. <<</Language grounding.>>> <<</Related Work>>> <<<>>> We consider a scenario where the agent must jointly reason over a language goal, relevant environment dynamics specified in a text document, and environment observations. In reading the document, the agent should identify relevant information key to solving the goal in the environment. A successful agent needs to perform this language grounding to generalise to new environments with dynamics not seen during training. To study generalisation via reading, the environment dynamics must differ every episode such that the agent cannot avoid reading by memorising a limited set of dynamics. Consequently, we procedurally generate a large number of unique environment dynamics (e.g. effective(blessed items, poison monsters)), along with language descriptions of environment dynamics (e.g. blessed items are effective against poison monsters) and goals (e.g. Defeat the order of the forest). 
We couple a large, customisable ontology inspired by rogue-like games such as NetHack or Diablo, with natural language templates to create a combinatorially rich set of environment dynamics to learn from and evaluate on. In , the agent is given a document of environment dynamics, observations of the environment, and an underspecified goal instruction. Figure FIGREF3 illustrates an instance of the game. Concretely, we design a set of dynamics that consists of monsters (e.g. wolf, goblin), teams (e.g. Order of the Forest), element types (e.g. fire, poison), item modifiers (e.g. fanatical, arcane), and items (e.g. sword, hammer). When the player is in the same cell with a monster or weapon, the player picks up the item or engages in combat with the monster. The player can possess one item at a time, and drops existing weapons if they pick up a new weapon. A monster moves towards the player with 60% probability, and otherwise moves randomly. The dynamics, the agent's inventory, and the underspecified goal are rendered as text. The game world is rendered as a matrix of text in which each cell describes the entity occupying the cell. We use human-written templates for stating which monsters belong to which team, which modifiers are effective against which element, and which team the agent should defeat (see appendix SECREF13 for details). In order to achieve the goal, the agent must cross-reference relevant information in the document and as well as in the observations. During every episode, we subsample a set of groups, monsters, modifiers, and elements to use. We randomly generate group assignments of which monsters belong to which team and which modifier is effective against which element. A document that consists of randomly ordered statements corresponding to this group assignment is presented to the agent. We sample one element, one team, and a monster from that team (e.g. “fire goblin” from “Order of the forest”) to be the target monster. Additionally, we sample one modifier that beats the element and an item to be the item that defeats the target monster (e.g. “fanatical sword”). Similarly, we sample an element, a team, and a monster from a different team to be the distractor monster (e.g. poison bat), as well as an item that defeats the distractor monster (e.g. arcane hammer). In order to win the game (e.g. Figure FIGREF3), the agent must identify the target team from the goal (e.g. Order of the Forest) identify the monsters that belong to that team (e.g. goblin, jaguar, and lynx) identify which monster is in the world (e.g. goblin), and its element (e.g. fire) identify the modifiers that are effective against this element (e.g. fanatical, shimmering) find which modifier is present (e.g. fanatical), and the item with the modifier (e.g. sword) pick up the correct item (e.g. fanatical sword) engage the correct monster in combat (e.g. fire goblin). If the agent deviates from this trajectory (e.g. does not have correct item before engaging in combat, engages with distractor monster), it cannot defeat the target monster and therefore will lose the game. The agent receives a reward of +1 if it wins the game and -1 otherwise. presents challenges not found in prior work in that it requires a large number of grounding steps in order to solve a task. In order to perform this grounding, the agent must jointly reason over a language goal and document of dynamics, as well as environment observations. 
In addition to the environment, the positions of the target and distractor within the document are randomised—the agent cannot memorise ordering patterns in order to solve the grounding problems, and must instead identify information relevant to the goal and environment at hand. We split environments into train and eval sets. No assignments of monster-team-modifier-element are shared between train and eval to test whether the agent is able to generalise to new environments with dynamics not seen during training via reading. There are more than 2 million train or eval environments without considering the natural language templates, and 200 million otherwise. With random ordering of templates, the number of unique documents exceeds 15 billion. In addition to the main tasks, we also study a simpler formulation called that has a fixed goal. In , the agent must interpret a document that describes the environment dynamics in order to solve the task. Given an set of characters (e.g. a-z), we sample 3 characters and set up a rock-paper-scissors-like dependency graph between the characters (e.g. “a beats b, b beats c, c beats a”). We then spawn a monster in the world with a randomly assigned type (e.g. “b goblin”), as well as an item corresponding to each type (e.g. “a”, “b”, and “c”). The attributes of the agent, monster, and items are set up such that the player must obtain the correct item and then engage the monster in order to win. Any other sequence of actions (e.g. engaging the monster without the correct weapon) results in a loss. The winning policy should then be to first identify the type of monster present, then cross-reference the document to find which item defeats that type, then pick up the item, and finally engage the monster in combat. Figure FIGREF49 shows an instance of . <<</>>> <<<Model>>> We propose the model, which builds representations that capture three-way interactions between the goal, document describing environment dynamics, and environment observations. We begin with definition of the () layer, which forms the core of our model. <<<() layer>>> Feature-wise linear modulation (FiLM), which modulates visual inputs using representations of textual instructions, is an effective method for image captioning BIBREF10 and instruction following BIBREF6. In , the agent must not only filter concepts in the visual domain using language but filter concepts in the text domain using visual observations. To support this, builds codependent representations of text and visual inputs by further incorporating conditional representations of the text given visual observations. Figure FIGREF12 shows the layer. We use upper-case bold letters to denote tensors, lower-case bold letters for vectors, and non-bold letters for scalars. Exact dimensions of these variables are shown in Table TABREF42 in appendix SECREF8. Let $_$ denote a fixed-length $_$-dimensional representation of the text and $_$ the representation of visual inputs with height $H$, width $W$, and $_$ channels. Let $$ denote a convolution layer. Let + and * symbols denote element-wise addition and multiplication operations that broadcast over spatial dimensions. We first modulate visual features using text features: Unlike FiLM, we additionally modulate text features using visual features: The output of the layer consists of the sum of the modulated features $$, as well as a max-pooled summary $$ over this sum across spatial dimensions. 
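As a concrete (if approximate) picture of the layer described above, the following PyTorch sketch shows codependent modulation in both directions: visual features are scaled and shifted by parameters predicted from the text representation, as in standard FiLM, and the text representation is in turn modulated by a pooled summary of the modulated visual features. The layer sizes, the exact form of the modulation, and the choice of mean/max pooling are assumptions for illustration; the paper's precise equations (elided in this extraction) differ in detail.

```python
import torch
import torch.nn as nn

class BiModalFiLM(nn.Module):
    """Sketch of a layer that modulates vision with text and text with vision.
    Assumed shapes: text (B, d_text), visual (B, c_in, H, W)."""
    def __init__(self, d_text, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)
        self.text_to_scale = nn.Linear(d_text, c_out)   # gamma for visual features
        self.text_to_shift = nn.Linear(d_text, c_out)   # beta for visual features
        self.vis_to_scale = nn.Linear(c_out, d_text)    # gamma for text features
        self.vis_to_shift = nn.Linear(c_out, d_text)    # beta for text features

    def forward(self, text, visual):
        v = self.conv(visual)                                  # (B, c_out, H, W)
        gamma_v = self.text_to_scale(text)[:, :, None, None]   # broadcast over H, W
        beta_v = self.text_to_shift(text)[:, :, None, None]
        v_mod = torch.relu((1 + gamma_v) * v + beta_v)         # text -> vision (FiLM)

        v_pool = v_mod.mean(dim=(2, 3))                        # (B, c_out) visual summary
        t_mod = torch.relu((1 + self.vis_to_scale(v_pool)) * text
                           + self.vis_to_shift(v_pool))        # vision -> text

        summary = v_mod.amax(dim=(2, 3))                       # max-pooled spatial summary
        return v_mod, t_mod, summary

# Toy usage: batch of 4, 64-dim text summary, 16-channel 10x10 grid features.
layer = BiModalFiLM(d_text=64, c_in=16, c_out=32)
v_out, t_out, pooled = layer(torch.randn(4, 64), torch.randn(4, 16, 10, 10))
```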
<<</() layer>>> <<<The model>>> We model interactions between observations from the environment, goal, and document using layers. We first encode text inputs using bidirectional LSTMs, then compute summaries using self-attention and conditional summaries using attention. We concatenate text summaries into text features, which, along with visual features, are processed through consecutive layers. In this case of a textual environment, we consider the grid of word embeddings as the visual features for . The final output is further processed by MLPs to compute a policy distribution over actions and a baseline for advantage estimation. Figure FIGREF18 shows the model. Let $_$ denote word embeddings corresponding to the observations from the environment, where $_[:, :, i, j]$ represents the embeddings corresponding to the $_$-word string that describes the objects in location $(i, j)$ in the grid-world. Let $_$, $_$, and $_$ respectively denote the embeddings corresponding to the $_$-word document, the $_$-word inventory, and the $_$-word goal. We first compute a fixed-length summary $_$ of the the goal using a bidirectional LSTM BIBREF18 followed by self-attention BIBREF19, BIBREF20. We abbreviate self-attention over the goal as $_= (_)$. We similarly compute a summary of the inventory as $_= (_(_))$. Next, we represent the document encoding conditioned on the goal using dot-product attention BIBREF21. We abbreviate attention over the document encoding conditioned on the goal summary as $_= {_}{_}$. Next, we build the joint representation of the inputs using successive layers. At each layer, the visual input to the layer is the concatenation of the output of the previous layer with positional features. For each cell, the positional feature $_$ consists of the $x$ and $y$ distance from the cell to the agent's position respectively, normalized by the width and height of the grid-world. The text input is the concatenation of the goal summary, the inventory summary, the attention over the document given the goal, and the attention over the document given the previous visual summary. Let ${a; b}$ denote the feature-wise concatenation of $a$ and $b$. For the $i$th layer, we have $_{\text{-}}(_)$ is another encoding of the document similar to $_$, produced using a separate LSTM, such that the document is encoded differently for attention with the visual features and with the goal. For $i = 0$, we concatenate the bag-of-words embeddings of the grid with positional features as the initial visual features $^{(0)} = {\sum _j_{, j}; _}$. We max pool a linear transform of the initial visual features to compute the initial visual summary $^{(0)} = (_^{(0)} + _)$. Let $$ denote visual summary of the last layer. We compute the policy $$ and baseline $$ as where $_{\rm policy}$ and $_{\rm baseline}$ are 2-layer multi-layer perceptrons with $$ activation. We train using an implementation of IMPALA BIBREF22, which decouples actors from learners and uses V-trace for off-policy correction. Please refer to appendix SECREF10 for details. <<</The model>>> <<</Model>>> <<<Experiments>>> We consider variants of by varying the size of the grid-world ($6\times 6$ vs $10\times 10$), allowing many-to-one group assignments to make disambiguation more difficult (group), allowing dynamic, moving monsters that hunt down the player (dyna), and using natural language templated documents (nl). 
In the absence of many-to-one assignments, the agent does not need to perform steps 3 and 5 in section SECREF3 as there is no need to disambiguate among many assignees, making it easier to identify relevant information. We compare to the FiLM model by BIBREF6 and a language-conditioned residual CNN model. We train on one set of dynamics (e.g. group assignments of monsters and modifiers) and evaluated on a held-out set of dynamics. We also study three variants of . In no_task_attn, the document attention conditioned on the goal utterance ((DISPLAY_FORM26)) is removed and the goal instead represented through self-attention and concatenated with the rest of the text features. In no_vis_attn, we do not attend over the document given the visual output of the previous layer ((DISPLAY_FORM27)), and the document is instead represented through self-attention. In no_text_mod, text modulation using visual features ((DISPLAY_FORM14)) is removed. Please see appendix SECREF9 for model details on our model and baselines, and appendix SECREF10 for training details. <<<Comparison to baselines and ablations>>> We compare to baselines and ablated variants on a simplified variant of in which there are one-to-one group assignments (no group), stationary monsters (no dyna), and no natural language templated descriptions (no nl). Figure FIGREF29 shows that compared to baselines and ablated variants, is more sample efficient and converges to higher performance. Moreover, no ablated variant is able to solve the tasks—it is the combination of ablated features that enables to win consistently. Qualitatively, the ablated variants converge to locally optimum policies in which the agent often picks up a random item and then attacks the correct monster, resulting in a $\sim 50$% win rate. Table FIGREF29 shows that all models, with the exception of the CNN baseline, generalise to new evaluation environments with dynamics and world configurations not seen during training, with outperforming FiLM and the CNN model. We find similar results for , its ablated variants, and baselines on other tasks (see appendix SECREF11 for details). <<</Comparison to baselines and ablations>>> <<<Curriculum learning for complex environments>>> Due to the long sequence of co-references the agent must perform in order to solve the full ($10\times 10$ with moving monsters, many-to-one group assignments, and natural language templated documents) we design a curriculum to facilitate policy learning by starting with simpler variants of . We start with the simplest variant (no group, no dyna, no nl) and then add in an additional dimension of complexity. We repeatedly add more complexity until we obtain $10\times 10$ worlds with moving monsters, many-to-one group assignments and natural language templated descriptions. The performance across the curriculum is shown in Table TABREF32 (see Figure FIGREF58 in appendix SECREF12 for training curves of each stage). We see that curriculum learning is crucial to making progress on , and that initial policy training (first row of Table TABREF32) with additional complexities in any of the dimensions result in significantly worse performance. We take each of the 5 runs after training through the whole curriculum and evaluate them on dynamics not seen during training. Table TABREF33 shows variants of the last stage of the curriculum in which the model was trained on $6\times 6$ versions of the full and in which the model was trained on $10\times 10$ versions of the full . 
We see that models trained on smaller worlds generalise to bigger worlds. Despite curriculum learning, however, performance of the final model trail that of human players, who can consistently solve . This highlights the difficulties of the problem and suggests that there is significant room for improvement in developing better language grounded policy learners. <<<Attention maps.>>> Figure FIGREF34 shows attention conditioned on the goal and on observation summaries produced by intermediate layers. Goal-conditioned attention consistently locates the clause that contains the team the agent is supposed to attack. Intermediate layer attentions focus on regions near modifiers and monsters, particularly those that are present in the observations. These results suggests that attention mechanisms in help identify relevant information in the document. <<</Attention maps.>>> <<<Analysis of trajectories and failure modes.>>> We examine trajectories from well-performing policies (80% win rate) as well as poorly-performing policies (50% win rate) on the full . We find that well-performing policies exhibit a number of consistent behaviours such as identifying the correct item to pick up to fight the target monster, avoiding distractors, and engaging target monsters after acquiring the correct item. In contrast, the poorly-performing policies occasionally pick up the wrong item, causing the agent to lose when engaging with a monster. In addition, it occasionally gets stuck in evading monsters indefinitely, causing the agent to lose when the time runs out. Replays of both policies can be found in GIFs in the supplementary materials. <<</Analysis of trajectories and failure modes.>>> <<</Curriculum learning for complex environments>>> <<</Experiments>>> <<<Conclusion>>> We proposed , a grounded policy learning problem in which the agent must jointly reason over a language goal, relevant dynamics specified in a document, and environment observations. In order to study , we procedurally generated a combinatorially large number of environment dynamics such that the model cannot memorise a set of environment dynamics and must instead generalise via reading. We proposed , a model that captures three-way interactions between the goal, document, and observations, and that generalises to new environments with dynamics not seen during training. outperforms baselines such as FiLM and language-conditioned CNNs. Through curriculum learning, performs well on complex tasks that require several reasoning and coreference steps with natural language templated goals and descriptions of the dynamics. Our work suggests that language understanding via reading is a promising way to learn policies that generalise to new environments. Despite curriculum learning, our best models trail performance of human players, suggesting that there is ample room for improvement in grounded policy learning on complex problems. In addition to jointly learning policies based on external documentation and language goals, we are interested in exploring how to use supporting evidence in external documentation to reason about plans BIBREF23 and induce hierarchical policies BIBREF24, BIBREF25. <<</Conclusion>>> <<</Title>>>
{ "references": [ " We first encode text inputs using bidirectional LSTMs, then compute summaries using self-attention and conditional summaries using attention. We concatenate text summaries into text features, which, along with visual features, are processed through consecutive layers. In this case of a textual environment, we consider the grid of word embeddings as the visual features for . The final output is further processed by MLPs to compute a policy distribution over actions and a baseline for advantage estimation." ], "type": "extractive" }
1908.08593
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: How much is performance improved by disabling attention in certain heads? Context: <<<Title>>> Revealing the Dark Secrets of BERT <<<Abstract>>> BERT-based architectures currently give state-of-the-art performance on many NLP tasks, but little is known about the exact mechanisms that contribute to its success. In the current work, we focus on the interpretation of self-attention, which is one of the fundamental underlying components of BERT. Using a subset of GLUE tasks and a set of handcrafted features-of-interest, we propose the methodology and carry out a qualitative and quantitative analysis of the information encoded by the individual BERT's heads. Our findings suggest that there is a limited set of attention patterns that are repeated across different heads, indicating the overall model overparametrization. While different heads consistently use the same attention patterns, they have varying impact on performance across different tasks. We show that manually disabling attention in certain heads leads to a performance improvement over the regular fine-tuned BERT models. <<</Abstract>>> <<<Introduction>>> Over the past year, models based on the Transformer architecture BIBREF0 have become the de-facto standard for state-of-the-art performance on many natural language processing (NLP) tasks BIBREF1, BIBREF2. Their key feature is the self-attention mechanism that provides an alternative to conventionally used recurrent neural networks (RNN). One of the most popular Transformer-based models is BERT, which learns text representations using a bi-directional Transformer encoder pre-trained on the language modeling task BIBREF2. BERT-based architectures have produced new state-of-the-art performance on a range of NLP tasks of different nature, domain, and complexity, including question answering, sequence tagging, sentiment analysis, and inference. State-of-the-art performance is usually obtained by fine-tuning the pre-trained model on the specific task. In particular, BERT-based models are currently dominating the leaderboards for SQuAD BIBREF3 and GLUE benchmarks BIBREF4. However, the exact mechanisms that contribute to the BERT's outstanding performance still remain unclear. We address this problem through selecting a set of linguistic features of interest and conducting a series of experiments that aim to provide insights about how well these features are captured by BERT. This paper makes the following contributions: We propose the methodology and offer the first detailed analysis of BERT's capacity to capture different kinds of linguistic information by encoding it in its self-attention weights. We present the evidence of BERT's overparametrization and suggest a counter-intuitive yet frustratingly simple way of improving its performance, showing absolute gains of up to 3.2%. <<</Introduction>>> <<<Related work>>> There have been several recent attempts to assess BERT's ability to capture structural properties of language. BIBREF5 demonstrated that BERT consistently assigns higher scores to the correct verb forms as opposed to the incorrect one in a masked language modeling task, suggesting some ability to model subject-verb agreement. BIBREF6 extended this work to using multiple layers and tasks, supporting the claim that BERT's intermediate layers capture rich linguistic information. 
On the other hand, BIBREF7 concluded that LSTMs generalize to longer sequences better, and are more robust with respect to agreement distractors, compared to Transformers. BIBREF8 investigated the transferability of contextualized word representations to a number of probing tasks requiring linguistic knowledge. Their findings suggest that (a) the middle layers of Transformer-based architectures are the most transferable to other tasks, and (b) higher layers of Transformers are not as task specific as the ones of RNNs. BIBREF9 argued that models using self-attention outperform CNN- and RNN-based models on a word sense disambiguation task due to their ability to extract semantic features from text. Our work contributes to the above discussion, but rather than examining representations extracted from different layers, we focus on the understanding of the self-attention mechanism itself, since it is the key feature of Transformer-based models. Another research direction that is relevant to our work is neural network pruning. BIBREF10 showed that widely used complex architectures suffer from overparameterization, and can be significantly reduced in size without a loss in performance. BIBREF5 observed that the smaller version of BERT achieves better scores on a number of syntax-testing experiments than the larger one. BIBREF11 questioned the necessity of computation-heavy neural networks, proving that a simple yet carefully tuned BiLSTM without attention achieves the best or at least competitive results compared to more complex architectures on the document classification task. BIBREF12 presented more evidence of unnecessary complexity of the self-attention mechanism, and proposed a more lightweight and scalable dynamic convolution-based architecture that outperforms the self-attention baseline. These studies suggest a potential direction for future research, and are in good accordance with our observations. <<</Related work>>> <<<Methodology>>> We pose the following research questions: What are the common attention patterns, how do they change during fine-tuning, and how does that impact the performance on a given task? (Sec. SECREF17, SECREF30) What linguistic knowledge is encoded in self-attention weights of the fine-tuned models and what portion of it comes from the pre-trained BERT? (Sec. SECREF25, SECREF34, SECREF36) How different are the self-attention patterns of different heads, and how important are they for a given task? (Sec. SECREF39) The answers to these questions come from a series of experiments with the basic pre-trained or the fine-tuned BERT models, as will be discussed below. All the experiments with the pre-trained BERT were conducted using the model provided with the PyTorch implementation of BERT (bert-base-uncased, 12-layer, 768-hidden, 12-heads, 110M parameters). We chose this smaller version of BERT because it shows competitive, if not better, performance while having fewer layers and heads, which makes it more interpretable. 
We use the following subset of GLUE tasks BIBREF4 for fine-tuning: MRPC: the Microsoft Research Paraphrase Corpus BIBREF13 STS-B: the Semantic Textual Similarity Benchmark BIBREF14 SST-2: the Stanford Sentiment Treebank, two-way classification BIBREF15 QQP: the Quora Question Pairs dataset RTE: the Recognizing Textual Entailment datasets QNLI: Question-answering NLI based on the Stanford Question Answering Dataset BIBREF3 MNLI: the Multi-Genre Natural Language Inference Corpus, matched section BIBREF16 Please refer to the original GLUE paper for details on the QQP and RTE datasets BIBREF4. We excluded two tasks: CoLa and the Winograd Schema Challenge. The latter is excluded due to the small size of the dataset. As for CoLa (the task of predicting linguistic acceptability judgments), GLUE authors report that the human performance is only 66.4, which is explained by the problems with the underlying methodology BIBREF17. Note also that CoLa is not included in the upcoming version of GLUE BIBREF18. All fine-tuning experiments follow the parameters reported in the original study (a batch size of 32 and 3 epochs, see devlin2018bert). In all these experiments, for a given input, we extract self-attention weights for each head in every layer. This results in a 2D float array of shape $L\times L$, where $L$ is the length of an input sequence. We will refer to such arrays as self-attention maps. Analysis of individual self-attention maps allows us to determine which target tokens are attended to the most as the input is processed token by token. We use these experiments to analyze how BERT processes different kinds of linguistic information, including the processing of different parts of speech (nouns, pronouns, and verbs), syntactic roles (objects, subjects), semantic relations, and negation tokens. <<</Methodology>>> <<<Experiments>>> In this section, we present the experiments conducted to address the above research questions. <<<BERT's self-attention patterns>>> Manual inspection of self-attention maps for both basic pre-trained and fine-tuned BERT models suggested that there is a limited set of self-attention maps types that are repeatedly encoded across different heads. Consistently with previous observations, we identified five frequently occurring patterns, examples of which are shown in fig:atttypes: Vertical: mainly corresponds to attention to special BERT tokens [CLS] and [SEP]; Diagonal: formed by the attention to the previous/following tokens; Vertical+Diagonal: a mix of the previous two types, Block: intra-sentence attention for the tasks with two distinct sentences (such as, for example, RTE or MRPC), Heterogeneous: highly variable depending on the specific input and cannot be characterized by a distinct structure. Whereas the attention to the special tokens is important for cross-sentence reasoning, and the attention to the previous/following token comes from language model pre-training, we hypothesize that the last of the listed types is more likely to capture interpretable linguistic features, necessary for language understanding. To get a rough estimate of the percentage of attention heads that may capture linguistically interpretable information, we manually annotated around 400 sample self-attention maps as belonging to one of the five classes. The self-attention maps were obtained by feeding random input examples from selected tasks into the corresponding fine-tuned BERT model. 
This produced a somewhat unbalanced dataset, in which the “Vertical” class accounted for 30% of all samples. We then trained a convolutional neural network with 8 convolutional layers and ReLU activation functions to classify input maps into one of these classes. This model achieved the F1 score of 0.86 on the annotated dataset. We used this classifier to estimate the proportion of different self-attention patterns for the target GLUE tasks using up to 1000 examples (where available) from each validation set. <<<Results>>> fig:attentionbydataset shows that the self-attention map types described above are consistently repeated across different heads and tasks. While a large portion of encoded information corresponds to attention to the previous/following token, to the special tokens, or a mixture of the two (the first three classes), the estimated upper bound on all heads in the “Heterogeneous” category (i.e. the ones that could be informative) varies from 32% (MRPC) to 61% (QQP) depending on the task. We would like to emphasize that this only gives the upper bound on the percentage of attention heads that could potentially capture meaningful structural information beyond adjacency and separator tokens. <<</Results>>> <<</BERT's self-attention patterns>>> <<<Relation-specific heads in BERT>>> In this experiment, our goal was to understand whether different syntactic and semantic relations are captured by self-attention patterns. While a large number of such relations could be investigated, we chose to examine semantic role relations defined in frame semantics, since they can be viewed as being at the intersection of syntax and semantics. Specifically, we focused on whether BERT captures FrameNet's relations between frame-evoking lexical units (predicates) and core frame elements BIBREF19, and whether the links between them produce higher attention weights in certain specific heads. We used pre-trained BERT in these experiments. The data for this experiment comes from FrameNet BIBREF19, a database that contains frame annotations for example sentences for different lexical units. Frame elements correspond to semantic roles for a given frame, for example, “buyer", “seller", and “goods” for the “Commercial_transaction" frame evoked by the words “sell” and “spend” or “topic” and “text” for the “Scrutiny” semantic frame evoked by the verb “address”. fig:framenet shows an example of such annotation. We extracted sample sentences for every lexical unit in the database and identified the corresponding core frame elements. Annotated elements in FrameNet may be rather long, so we considered only the sentences with frame elements of 3 tokens or less. Since each sentences is annotated only for one frame, semantic links from other frames can exist between unmarked elements. We therefore filter out all the sentences longer than 12 tokens, since shorter sentences are less likely to evoke multiple frames. To establish whether BERT attention captures semantic relations that do not simply correspond to the previous/following token, we exclude sentences where the linked objects are less than two tokens apart. This leaves us with 473 annotated sentences. For each of these sentences, we obtain pre-trained BERT's attention weights for each of the 144 heads. For every head, we return the maximum absolute attention weight among those token pairs that correspond to the annotated semantic link contained within a given sentence. We then average the derived scores over all the collected examples. 
This strategy allows us to identify the heads that prioritize the features correlated with frame-semantic relations within a sentence. <<</Relation-specific heads in BERT>>> <<<Change in self-attention patterns after fine-tuning>>> Fine-tuning has a huge effect on performance, and this section attempts to find out why. To study how attention per head changes on average for each of the target GLUE tasks, we calculate cosine similarity between pre-trained and fine-tuned BERT's flattened arrays of attention weights. We average the derived similarities over all the development set examples. To evaluate contribution of pre-trained BERT to overall performance on the tasks, we consider two configurations of weights initialization, namely, pre-trained BERT weights and weights randomly sampled from normal distribution. <<</Change in self-attention patterns after fine-tuning>>> <<<Attention to linguistic features>>> In this experiment, we investigate whether fine-tuning BERT for a given task creates self-attention patterns which emphasize specific linguistic features. In this case, certain kinds of tokens may get high attention weights from all the other tokens in the sentence, producing vertical stripes on the corresponding attention maps (fig:atttypes). We tested this hypothesis by checking whether there are vertical stripe patterns corresponding to certain linguistically interpretable features, and to what extent such features are relevant for solving a given task. In particular, we investigated attention to nouns, verbs, pronouns, subjects, objects, and negation words, and special BERT tokens across the tasks. For every head, we compute the sum of self-attention weights assigned to the token of interest from each input token. Since the weights depend on the number of tokens in the input sequence, this sum is normalized by sequence length. This allows us to aggregate the weights for this feature across different examples. If there are multiple tokens of the same type (e.g. several nouns or negations), we take the maximum value. We disregard input sentences that do not contain a given feature. For each investigated feature, we calculate this aggregated attention score for each head in every layer and build a map in order to detect the heads potentially responsible for this feature. We then compare the obtained maps to the ones derived using the pre-trained BERT model. This comparison enables us to determine if a particular feature is important for a specific task and whether it contributes to some tasks more than to others. <<</Attention to linguistic features>>> <<<Token-to-token attention>>> To complement the experiments in Sec. SECREF34 and SECREF25, in this section, we investigate the attention patterns between tokens in the same sentence, i.e. whether any of the tokens are particularly important while a given token is being processed. We were interested specifically in the verb-subject relation and the noun-pronoun relation. Also, since BERT uses the representation of the [CLS] token in the last layer to make the prediction, we used the features from the experiment in Sec. SECREF34 in order to check if they get higher attention weights while the model is processing the [CLS] token. <<</Token-to-token attention>>> <<<Disabling self-attention heads>>> Since there does seem to be a certain degree of specialization for different heads, we investigated the effects of disabling different heads in BERT and the resulting effects on task performance. 
Since BERT relies heavily on the learned attention weights, we define disabling a head as modifying the attention values of a head to be constant $a = \frac{1}{L}$ for every token in the input sentence, where $L$ is the length of the sentence. Thus, every token receives the same attention, effectively disabling the learned attention patterns while maintaining the information flow of the original model. Note that by using this framework, we can disable an arbitrary number of heads, ranging from a single head per model to the whole layer or multiple layers. <<</Disabling self-attention heads>>> <<</Experiments>>> <<<Discussion>>> In general, our results suggest that even the smaller base BERT model is significantly overparametrized. This is supported by the discovery of repeated self-attention patterns in different heads, as well as the fact that disabling both single and multiple heads is not detrimental to model performance and in some cases even improves it. We found no evidence that attention patterns that are mappable onto core frame-semantic relations actually improve BERT's performance. 2 out of 144 heads that seem to be “responsible" for these relations (see Section SECREF25) do not appear to be important in any of the GLUE tasks: disabling of either one does not lead to a drop of accuracy. This implies that fine-tuned BERT does not rely on this piece of semantic information and prioritizes other features instead. For instance, we noticed that both STS-B and RTE fine-tuned models rely on attention in the same pair of heads (head 1 in the fourth layer, and head 12 in the second layer), as shown in Figure FIGREF37. We manually checked the attention maps in those heads for a set of random inputs, and established that both of them have high weights for words that appear in both sentences of the input examples. This most likely means that word-by-word comparison of the two sentences provides a solid strategy of making a classification prediction for STS-B and RTE. Unfortunately, we were not able to provide a conceptually similar interpretation of heads important for other tasks. <<</Discussion>>> <<<Conclusion>>> In this work, we proposed a set of methods for analyzing self-attention mechanisms of BERT, comparing attention patterns for the pre-trained and fine-tuned versions of BERT. Our most surprising finding is that, although attention is the key BERT's underlying mechanism, the model can benefit from attention "disabling". Moreover, we demonstrated that there is redundancy in the information encoded by different heads and the same patterns get consistently repeated regardless of the target task. We believe that these two findings together suggest a further direction for research on BERT interpretation, namely, model pruning and finding an optimal sub-architecture reducing data repetition. Another direction for future work is to study self-attention patterns in a different language. We think that it would allow to disentangle attention maps potentially encoding linguistic information and heads that use simple heuristics like attending to the following/previous tokens. <<</Conclusion>>> <<</Title>>>
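The head-disabling operation defined in the excerpt above (replacing a head's attention with the constant a = 1/L) can be illustrated with a minimal sketch. This is not the authors' code and is not wired into a specific BERT implementation; the function name, tensor layout, and example values are assumptions for illustration only.

```python
# Sketch only: uniform-attention "disabling" applied to a generic attention
# tensor for one layer. Function name and tensor layout are assumptions.
import torch

def disable_heads(attn_probs: torch.Tensor, heads_to_disable) -> torch.Tensor:
    """attn_probs: (num_heads, L, L) attention probabilities for one layer.
    Returns a copy in which each listed head attends uniformly (a = 1/L)."""
    num_heads, seq_len, _ = attn_probs.shape
    out = attn_probs.clone()
    out[list(heads_to_disable)] = 1.0 / seq_len  # every token receives the same attention
    return out

# Example: disable heads 0 and 5 of a 12-head layer for a 10-token input.
probs = torch.softmax(torch.randn(12, 10, 10), dim=-1)
modified = disable_heads(probs, [0, 5])
assert torch.allclose(modified[0].sum(dim=-1), torch.ones(10))  # rows still sum to 1
```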
{ "references": [ "disabling the first layer in the RTE task gives a significant boost, resulting in an absolute performance gain of 3.2%, this operation vary across tasks" ], "type": "extractive" }
1908.08593
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: In which certain heads was attention disabled in experiments? Context: <<<Title>>> Revealing the Dark Secrets of BERT <<<Abstract>>> BERT-based architectures currently give state-of-the-art performance on many NLP tasks, but little is known about the exact mechanisms that contribute to its success. In the current work, we focus on the interpretation of self-attention, which is one of the fundamental underlying components of BERT. Using a subset of GLUE tasks and a set of handcrafted features-of-interest, we propose the methodology and carry out a qualitative and quantitative analysis of the information encoded by the individual BERT's heads. Our findings suggest that there is a limited set of attention patterns that are repeated across different heads, indicating the overall model overparametrization. While different heads consistently use the same attention patterns, they have varying impact on performance across different tasks. We show that manually disabling attention in certain heads leads to a performance improvement over the regular fine-tuned BERT models. <<</Abstract>>> <<<Introduction>>> Over the past year, models based on the Transformer architecture BIBREF0 have become the de-facto standard for state-of-the-art performance on many natural language processing (NLP) tasks BIBREF1, BIBREF2. Their key feature is the self-attention mechanism that provides an alternative to conventionally used recurrent neural networks (RNN). One of the most popular Transformer-based models is BERT, which learns text representations using a bi-directional Transformer encoder pre-trained on the language modeling task BIBREF2. BERT-based architectures have produced new state-of-the-art performance on a range of NLP tasks of different nature, domain, and complexity, including question answering, sequence tagging, sentiment analysis, and inference. State-of-the-art performance is usually obtained by fine-tuning the pre-trained model on the specific task. In particular, BERT-based models are currently dominating the leaderboards for SQuAD BIBREF3 and GLUE benchmarks BIBREF4. However, the exact mechanisms that contribute to the BERT's outstanding performance still remain unclear. We address this problem through selecting a set of linguistic features of interest and conducting a series of experiments that aim to provide insights about how well these features are captured by BERT. This paper makes the following contributions: We propose the methodology and offer the first detailed analysis of BERT's capacity to capture different kinds of linguistic information by encoding it in its self-attention weights. We present the evidence of BERT's overparametrization and suggest a counter-intuitive yet frustratingly simple way of improving its performance, showing absolute gains of up to 3.2%. <<</Introduction>>> <<<Related work>>> There have been several recent attempts to assess BERT's ability to capture structural properties of language. BIBREF5 demonstrated that BERT consistently assigns higher scores to the correct verb forms as opposed to the incorrect one in a masked language modeling task, suggesting some ability to model subject-verb agreement. BIBREF6 extended this work to using multiple layers and tasks, supporting the claim that BERT's intermediate layers capture rich linguistic information. 
On the other hand, BIBREF7 concluded that LSTMs generalize to longer sequences better, and are more robust with respect to agreement distractors, compared to Transformers. BIBREF8 investigated the transferability of contextualized word representations to a number of probing tasks requiring linguistic knowledge. Their findings suggest that (a) the middle layers of Transformer-based architectures are the most transferable to other tasks, and (b) higher layers of Transformers are not as task specific as the ones of RNNs. BIBREF9 argued that models using self-attention outperform CNN- and RNN-based models on a word sense disambiguation task due to their ability to extract semantic features from text. Our work contributes to the above discussion, but rather than examining representations extracted from different layers, we focus on the understanding of the self-attention mechanism itself, since it is the key feature of Transformer-based models. Another research direction that is relevant to our work is neural network pruning. BIBREF10 showed that widely used complex architectures suffer from overparameterization, and can be significantly reduced in size without a loss in performance. BIBREF5 observed that the smaller version of BERT achieves better scores on a number of syntax-testing experiments than the larger one. BIBREF11 questioned the necessity of computation-heavy neural networks, proving that a simple yet carefully tuned BiLSTM without attention achieves the best or at least competitive results compared to more complex architectures on the document classification task. BIBREF12 presented more evidence of unnecessary complexity of the self-attention mechanism, and proposed a more lightweight and scalable dynamic convolution-based architecture that outperforms the self-attention baseline. These studies suggest a potential direction for future research, and are in good accordance with our observations. <<</Related work>>> <<<Methodology>>> We pose the following research questions: What are the common attention patterns, how do they change during fine-tuning, and how does that impact the performance on a given task? (Sec. SECREF17, SECREF30) What linguistic knowledge is encoded in self-attention weights of the fine-tuned models and what portion of it comes from the pre-trained BERT? (Sec. SECREF25, SECREF34, SECREF36) How different are the self-attention patterns of different heads, and how important are they for a given task? (Sec. SECREF39) The answers to these questions come from a series of experiments with the basic pre-trained or the fine-tuned BERT models, as will be discussed below. All the experiments with the pre-trained BERT were conducted using the model provided with the PyTorch implementation of BERT (bert-base-uncased, 12-layer, 768-hidden, 12-heads, 110M parameters). We chose this smaller version of BERT because it shows competitive, if not better, performance while having fewer layers and heads, which makes it more interpretable. 
We use the following subset of GLUE tasks BIBREF4 for fine-tuning: MRPC: the Microsoft Research Paraphrase Corpus BIBREF13 STS-B: the Semantic Textual Similarity Benchmark BIBREF14 SST-2: the Stanford Sentiment Treebank, two-way classification BIBREF15 QQP: the Quora Question Pairs dataset RTE: the Recognizing Textual Entailment datasets QNLI: Question-answering NLI based on the Stanford Question Answering Dataset BIBREF3 MNLI: the Multi-Genre Natural Language Inference Corpus, matched section BIBREF16 Please refer to the original GLUE paper for details on the QQP and RTE datasets BIBREF4. We excluded two tasks: CoLa and the Winograd Schema Challenge. The latter is excluded due to the small size of the dataset. As for CoLa (the task of predicting linguistic acceptability judgments), GLUE authors report that the human performance is only 66.4, which is explained by the problems with the underlying methodology BIBREF17. Note also that CoLa is not included in the upcoming version of GLUE BIBREF18. All fine-tuning experiments follow the parameters reported in the original study (a batch size of 32 and 3 epochs, see devlin2018bert). In all these experiments, for a given input, we extract self-attention weights for each head in every layer. This results in a 2D float array of shape $L\times L$, where $L$ is the length of an input sequence. We will refer to such arrays as self-attention maps. Analysis of individual self-attention maps allows us to determine which target tokens are attended to the most as the input is processed token by token. We use these experiments to analyze how BERT processes different kinds of linguistic information, including the processing of different parts of speech (nouns, pronouns, and verbs), syntactic roles (objects, subjects), semantic relations, and negation tokens. <<</Methodology>>> <<<Experiments>>> In this section, we present the experiments conducted to address the above research questions. <<<BERT's self-attention patterns>>> Manual inspection of self-attention maps for both basic pre-trained and fine-tuned BERT models suggested that there is a limited set of self-attention maps types that are repeatedly encoded across different heads. Consistently with previous observations, we identified five frequently occurring patterns, examples of which are shown in fig:atttypes: Vertical: mainly corresponds to attention to special BERT tokens [CLS] and [SEP]; Diagonal: formed by the attention to the previous/following tokens; Vertical+Diagonal: a mix of the previous two types, Block: intra-sentence attention for the tasks with two distinct sentences (such as, for example, RTE or MRPC), Heterogeneous: highly variable depending on the specific input and cannot be characterized by a distinct structure. Whereas the attention to the special tokens is important for cross-sentence reasoning, and the attention to the previous/following token comes from language model pre-training, we hypothesize that the last of the listed types is more likely to capture interpretable linguistic features, necessary for language understanding. To get a rough estimate of the percentage of attention heads that may capture linguistically interpretable information, we manually annotated around 400 sample self-attention maps as belonging to one of the five classes. The self-attention maps were obtained by feeding random input examples from selected tasks into the corresponding fine-tuned BERT model. 
This produced a somewhat unbalanced dataset, in which the “Vertical” class accounted for 30% of all samples. We then trained a convolutional neural network with 8 convolutional layers and ReLU activation functions to classify input maps into one of these classes. This model achieved the F1 score of 0.86 on the annotated dataset. We used this classifier to estimate the proportion of different self-attention patterns for the target GLUE tasks using up to 1000 examples (where available) from each validation set. <<<Results>>> fig:attentionbydataset shows that the self-attention map types described above are consistently repeated across different heads and tasks. While a large portion of encoded information corresponds to attention to the previous/following token, to the special tokens, or a mixture of the two (the first three classes), the estimated upper bound on all heads in the “Heterogeneous” category (i.e. the ones that could be informative) varies from 32% (MRPC) to 61% (QQP) depending on the task. We would like to emphasize that this only gives the upper bound on the percentage of attention heads that could potentially capture meaningful structural information beyond adjacency and separator tokens. <<</Results>>> <<</BERT's self-attention patterns>>> <<<Relation-specific heads in BERT>>> In this experiment, our goal was to understand whether different syntactic and semantic relations are captured by self-attention patterns. While a large number of such relations could be investigated, we chose to examine semantic role relations defined in frame semantics, since they can be viewed as being at the intersection of syntax and semantics. Specifically, we focused on whether BERT captures FrameNet's relations between frame-evoking lexical units (predicates) and core frame elements BIBREF19, and whether the links between them produce higher attention weights in certain specific heads. We used pre-trained BERT in these experiments. The data for this experiment comes from FrameNet BIBREF19, a database that contains frame annotations for example sentences for different lexical units. Frame elements correspond to semantic roles for a given frame, for example, “buyer", “seller", and “goods” for the “Commercial_transaction" frame evoked by the words “sell” and “spend” or “topic” and “text” for the “Scrutiny” semantic frame evoked by the verb “address”. fig:framenet shows an example of such annotation. We extracted sample sentences for every lexical unit in the database and identified the corresponding core frame elements. Annotated elements in FrameNet may be rather long, so we considered only the sentences with frame elements of 3 tokens or less. Since each sentences is annotated only for one frame, semantic links from other frames can exist between unmarked elements. We therefore filter out all the sentences longer than 12 tokens, since shorter sentences are less likely to evoke multiple frames. To establish whether BERT attention captures semantic relations that do not simply correspond to the previous/following token, we exclude sentences where the linked objects are less than two tokens apart. This leaves us with 473 annotated sentences. For each of these sentences, we obtain pre-trained BERT's attention weights for each of the 144 heads. For every head, we return the maximum absolute attention weight among those token pairs that correspond to the annotated semantic link contained within a given sentence. We then average the derived scores over all the collected examples. 
This strategy allows us to identify the heads that prioritize the features correlated with frame-semantic relations within a sentence. <<</Relation-specific heads in BERT>>> <<<Change in self-attention patterns after fine-tuning>>> Fine-tuning has a huge effect on performance, and this section attempts to find out why. To study how attention per head changes on average for each of the target GLUE tasks, we calculate cosine similarity between pre-trained and fine-tuned BERT's flattened arrays of attention weights. We average the derived similarities over all the development set examples. To evaluate contribution of pre-trained BERT to overall performance on the tasks, we consider two configurations of weights initialization, namely, pre-trained BERT weights and weights randomly sampled from normal distribution. <<</Change in self-attention patterns after fine-tuning>>> <<<Attention to linguistic features>>> In this experiment, we investigate whether fine-tuning BERT for a given task creates self-attention patterns which emphasize specific linguistic features. In this case, certain kinds of tokens may get high attention weights from all the other tokens in the sentence, producing vertical stripes on the corresponding attention maps (fig:atttypes). We tested this hypothesis by checking whether there are vertical stripe patterns corresponding to certain linguistically interpretable features, and to what extent such features are relevant for solving a given task. In particular, we investigated attention to nouns, verbs, pronouns, subjects, objects, and negation words, and special BERT tokens across the tasks. For every head, we compute the sum of self-attention weights assigned to the token of interest from each input token. Since the weights depend on the number of tokens in the input sequence, this sum is normalized by sequence length. This allows us to aggregate the weights for this feature across different examples. If there are multiple tokens of the same type (e.g. several nouns or negations), we take the maximum value. We disregard input sentences that do not contain a given feature. For each investigated feature, we calculate this aggregated attention score for each head in every layer and build a map in order to detect the heads potentially responsible for this feature. We then compare the obtained maps to the ones derived using the pre-trained BERT model. This comparison enables us to determine if a particular feature is important for a specific task and whether it contributes to some tasks more than to others. <<</Attention to linguistic features>>> <<<Token-to-token attention>>> To complement the experiments in Sec. SECREF34 and SECREF25, in this section, we investigate the attention patterns between tokens in the same sentence, i.e. whether any of the tokens are particularly important while a given token is being processed. We were interested specifically in the verb-subject relation and the noun-pronoun relation. Also, since BERT uses the representation of the [CLS] token in the last layer to make the prediction, we used the features from the experiment in Sec. SECREF34 in order to check if they get higher attention weights while the model is processing the [CLS] token. <<</Token-to-token attention>>> <<<Disabling self-attention heads>>> Since there does seem to be a certain degree of specialization for different heads, we investigated the effects of disabling different heads in BERT and the resulting effects on task performance. 
Since BERT relies heavily on the learned attention weights, we define disabling a head as modifying the attention values of a head to be constant $a = \frac{1}{L}$ for every token in the input sentence, where $L$ is the length of the sentence. Thus, every token receives the same attention, effectively disabling the learned attention patterns while maintaining the information flow of the original model. Note that by using this framework, we can disable an arbitrary number of heads, ranging from a single head per model to the whole layer or multiple layers. <<</Disabling self-attention heads>>> <<</Experiments>>> <<<Discussion>>> In general, our results suggest that even the smaller base BERT model is significantly overparametrized. This is supported by the discovery of repeated self-attention patterns in different heads, as well as the fact that disabling both single and multiple heads is not detrimental to model performance and in some cases even improves it. We found no evidence that attention patterns that are mappable onto core frame-semantic relations actually improve BERT's performance. 2 out of 144 heads that seem to be “responsible" for these relations (see Section SECREF25) do not appear to be important in any of the GLUE tasks: disabling of either one does not lead to a drop of accuracy. This implies that fine-tuned BERT does not rely on this piece of semantic information and prioritizes other features instead. For instance, we noticed that both STS-B and RTE fine-tuned models rely on attention in the same pair of heads (head 1 in the fourth layer, and head 12 in the second layer), as shown in Figure FIGREF37. We manually checked the attention maps in those heads for a set of random inputs, and established that both of them have high weights for words that appear in both sentences of the input examples. This most likely means that word-by-word comparison of the two sentences provides a solid strategy of making a classification prediction for STS-B and RTE. Unfortunately, we were not able to provide a conceptually similar interpretation of heads important for other tasks. <<</Discussion>>> <<<Conclusion>>> In this work, we proposed a set of methods for analyzing self-attention mechanisms of BERT, comparing attention patterns for the pre-trained and fine-tuned versions of BERT. Our most surprising finding is that, although attention is the key BERT's underlying mechanism, the model can benefit from attention "disabling". Moreover, we demonstrated that there is redundancy in the information encoded by different heads and the same patterns get consistently repeated regardless of the target task. We believe that these two findings together suggest a further direction for research on BERT interpretation, namely, model pruning and finding an optimal sub-architecture reducing data repetition. Another direction for future work is to study self-attention patterns in a different language. We think that it would allow to disentangle attention maps potentially encoding linguistic information and heads that use simple heuristics like attending to the following/previous tokens. <<</Conclusion>>> <<</Title>>>
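A minimal sketch of the cosine-similarity comparison described in the "Change in self-attention patterns after fine-tuning" excerpt above, assuming attention weights for the same input have already been extracted from the pre-trained and fine-tuned models; the function name and tensor layout are assumptions.

```python
# Sketch only: per-head cosine similarity between pre-trained and fine-tuned
# attention maps for the same input. Names and layout are assumptions.
import torch
import torch.nn.functional as F

def head_cosine_similarity(attn_pre: torch.Tensor, attn_ft: torch.Tensor) -> torch.Tensor:
    """attn_pre, attn_ft: (layers, heads, L, L) weights from the two models.
    Returns a (layers, heads) matrix of cosine similarities over flattened maps."""
    layers, heads, L, _ = attn_pre.shape
    return F.cosine_similarity(attn_pre.reshape(layers, heads, L * L),
                               attn_ft.reshape(layers, heads, L * L), dim=-1)

pre = torch.softmax(torch.randn(12, 12, 8, 8), dim=-1)
ft = torch.softmax(torch.randn(12, 12, 8, 8), dim=-1)
print(head_cosine_similarity(pre, ft).shape)  # torch.Size([12, 12]); then average over examples
```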
{ "references": [ "single head,disabling a whole layer, that is, all 12 heads in a given layer" ], "type": "extractive" }
1908.08593
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What handcrafter features-of-interest are used? Context: <<<Title>>> Revealing the Dark Secrets of BERT <<<Abstract>>> BERT-based architectures currently give state-of-the-art performance on many NLP tasks, but little is known about the exact mechanisms that contribute to its success. In the current work, we focus on the interpretation of self-attention, which is one of the fundamental underlying components of BERT. Using a subset of GLUE tasks and a set of handcrafted features-of-interest, we propose the methodology and carry out a qualitative and quantitative analysis of the information encoded by the individual BERT's heads. Our findings suggest that there is a limited set of attention patterns that are repeated across different heads, indicating the overall model overparametrization. While different heads consistently use the same attention patterns, they have varying impact on performance across different tasks. We show that manually disabling attention in certain heads leads to a performance improvement over the regular fine-tuned BERT models. <<</Abstract>>> <<<Introduction>>> Over the past year, models based on the Transformer architecture BIBREF0 have become the de-facto standard for state-of-the-art performance on many natural language processing (NLP) tasks BIBREF1, BIBREF2. Their key feature is the self-attention mechanism that provides an alternative to conventionally used recurrent neural networks (RNN). One of the most popular Transformer-based models is BERT, which learns text representations using a bi-directional Transformer encoder pre-trained on the language modeling task BIBREF2. BERT-based architectures have produced new state-of-the-art performance on a range of NLP tasks of different nature, domain, and complexity, including question answering, sequence tagging, sentiment analysis, and inference. State-of-the-art performance is usually obtained by fine-tuning the pre-trained model on the specific task. In particular, BERT-based models are currently dominating the leaderboards for SQuAD BIBREF3 and GLUE benchmarks BIBREF4. However, the exact mechanisms that contribute to the BERT's outstanding performance still remain unclear. We address this problem through selecting a set of linguistic features of interest and conducting a series of experiments that aim to provide insights about how well these features are captured by BERT. This paper makes the following contributions: We propose the methodology and offer the first detailed analysis of BERT's capacity to capture different kinds of linguistic information by encoding it in its self-attention weights. We present the evidence of BERT's overparametrization and suggest a counter-intuitive yet frustratingly simple way of improving its performance, showing absolute gains of up to 3.2%. <<</Introduction>>> <<<Related work>>> There have been several recent attempts to assess BERT's ability to capture structural properties of language. BIBREF5 demonstrated that BERT consistently assigns higher scores to the correct verb forms as opposed to the incorrect one in a masked language modeling task, suggesting some ability to model subject-verb agreement. BIBREF6 extended this work to using multiple layers and tasks, supporting the claim that BERT's intermediate layers capture rich linguistic information. 
On the other hand, BIBREF7 concluded that LSTMs generalize to longer sequences better, and are more robust with respect to agreement distractors, compared to Transformers. BIBREF8 investigated the transferability of contextualized word representations to a number of probing tasks requiring linguistic knowledge. Their findings suggest that (a) the middle layers of Transformer-based architectures are the most transferable to other tasks, and (b) higher layers of Transformers are not as task specific as the ones of RNNs. BIBREF9 argued that models using self-attention outperform CNN- and RNN-based models on a word sense disambiguation task due to their ability to extract semantic features from text. Our work contributes to the above discussion, but rather than examining representations extracted from different layers, we focus on the understanding of the self-attention mechanism itself, since it is the key feature of Transformer-based models. Another research direction that is relevant to our work is neural network pruning. BIBREF10 showed that widely used complex architectures suffer from overparameterization, and can be significantly reduced in size without a loss in performance. BIBREF5 observed that the smaller version of BERT achieves better scores on a number of syntax-testing experiments than the larger one. BIBREF11 questioned the necessity of computation-heavy neural networks, proving that a simple yet carefully tuned BiLSTM without attention achieves the best or at least competitive results compared to more complex architectures on the document classification task. BIBREF12 presented more evidence of unnecessary complexity of the self-attention mechanism, and proposed a more lightweight and scalable dynamic convolution-based architecture that outperforms the self-attention baseline. These studies suggest a potential direction for future research, and are in good accordance with our observations. <<</Related work>>> <<<Methodology>>> We pose the following research questions: What are the common attention patterns, how do they change during fine-tuning, and how does that impact the performance on a given task? (Sec. SECREF17, SECREF30) What linguistic knowledge is encoded in self-attention weights of the fine-tuned models and what portion of it comes from the pre-trained BERT? (Sec. SECREF25, SECREF34, SECREF36) How different are the self-attention patterns of different heads, and how important are they for a given task? (Sec. SECREF39) The answers to these questions come from a series of experiments with the basic pre-trained or the fine-tuned BERT models, as will be discussed below. All the experiments with the pre-trained BERT were conducted using the model provided with the PyTorch implementation of BERT (bert-base-uncased, 12-layer, 768-hidden, 12-heads, 110M parameters). We chose this smaller version of BERT because it shows competitive, if not better, performance while having fewer layers and heads, which makes it more interpretable. 
We use the following subset of GLUE tasks BIBREF4 for fine-tuning: MRPC: the Microsoft Research Paraphrase Corpus BIBREF13 STS-B: the Semantic Textual Similarity Benchmark BIBREF14 SST-2: the Stanford Sentiment Treebank, two-way classification BIBREF15 QQP: the Quora Question Pairs dataset RTE: the Recognizing Textual Entailment datasets QNLI: Question-answering NLI based on the Stanford Question Answering Dataset BIBREF3 MNLI: the Multi-Genre Natural Language Inference Corpus, matched section BIBREF16 Please refer to the original GLUE paper for details on the QQP and RTE datasets BIBREF4. We excluded two tasks: CoLa and the Winograd Schema Challenge. The latter is excluded due to the small size of the dataset. As for CoLa (the task of predicting linguistic acceptability judgments), GLUE authors report that the human performance is only 66.4, which is explained by the problems with the underlying methodology BIBREF17. Note also that CoLa is not included in the upcoming version of GLUE BIBREF18. All fine-tuning experiments follow the parameters reported in the original study (a batch size of 32 and 3 epochs, see devlin2018bert). In all these experiments, for a given input, we extract self-attention weights for each head in every layer. This results in a 2D float array of shape $L\times L$, where $L$ is the length of an input sequence. We will refer to such arrays as self-attention maps. Analysis of individual self-attention maps allows us to determine which target tokens are attended to the most as the input is processed token by token. We use these experiments to analyze how BERT processes different kinds of linguistic information, including the processing of different parts of speech (nouns, pronouns, and verbs), syntactic roles (objects, subjects), semantic relations, and negation tokens. <<</Methodology>>> <<<Experiments>>> In this section, we present the experiments conducted to address the above research questions. <<<BERT's self-attention patterns>>> Manual inspection of self-attention maps for both basic pre-trained and fine-tuned BERT models suggested that there is a limited set of self-attention maps types that are repeatedly encoded across different heads. Consistently with previous observations, we identified five frequently occurring patterns, examples of which are shown in fig:atttypes: Vertical: mainly corresponds to attention to special BERT tokens [CLS] and [SEP]; Diagonal: formed by the attention to the previous/following tokens; Vertical+Diagonal: a mix of the previous two types, Block: intra-sentence attention for the tasks with two distinct sentences (such as, for example, RTE or MRPC), Heterogeneous: highly variable depending on the specific input and cannot be characterized by a distinct structure. Whereas the attention to the special tokens is important for cross-sentence reasoning, and the attention to the previous/following token comes from language model pre-training, we hypothesize that the last of the listed types is more likely to capture interpretable linguistic features, necessary for language understanding. To get a rough estimate of the percentage of attention heads that may capture linguistically interpretable information, we manually annotated around 400 sample self-attention maps as belonging to one of the five classes. The self-attention maps were obtained by feeding random input examples from selected tasks into the corresponding fine-tuned BERT model. 
This produced a somewhat unbalanced dataset, in which the “Vertical” class accounted for 30% of all samples. We then trained a convolutional neural network with 8 convolutional layers and ReLU activation functions to classify input maps into one of these classes. This model achieved the F1 score of 0.86 on the annotated dataset. We used this classifier to estimate the proportion of different self-attention patterns for the target GLUE tasks using up to 1000 examples (where available) from each validation set. <<<Results>>> fig:attentionbydataset shows that the self-attention map types described above are consistently repeated across different heads and tasks. While a large portion of encoded information corresponds to attention to the previous/following token, to the special tokens, or a mixture of the two (the first three classes), the estimated upper bound on all heads in the “Heterogeneous” category (i.e. the ones that could be informative) varies from 32% (MRPC) to 61% (QQP) depending on the task. We would like to emphasize that this only gives the upper bound on the percentage of attention heads that could potentially capture meaningful structural information beyond adjacency and separator tokens. <<</Results>>> <<</BERT's self-attention patterns>>> <<<Relation-specific heads in BERT>>> In this experiment, our goal was to understand whether different syntactic and semantic relations are captured by self-attention patterns. While a large number of such relations could be investigated, we chose to examine semantic role relations defined in frame semantics, since they can be viewed as being at the intersection of syntax and semantics. Specifically, we focused on whether BERT captures FrameNet's relations between frame-evoking lexical units (predicates) and core frame elements BIBREF19, and whether the links between them produce higher attention weights in certain specific heads. We used pre-trained BERT in these experiments. The data for this experiment comes from FrameNet BIBREF19, a database that contains frame annotations for example sentences for different lexical units. Frame elements correspond to semantic roles for a given frame, for example, “buyer", “seller", and “goods” for the “Commercial_transaction" frame evoked by the words “sell” and “spend” or “topic” and “text” for the “Scrutiny” semantic frame evoked by the verb “address”. fig:framenet shows an example of such annotation. We extracted sample sentences for every lexical unit in the database and identified the corresponding core frame elements. Annotated elements in FrameNet may be rather long, so we considered only the sentences with frame elements of 3 tokens or less. Since each sentences is annotated only for one frame, semantic links from other frames can exist between unmarked elements. We therefore filter out all the sentences longer than 12 tokens, since shorter sentences are less likely to evoke multiple frames. To establish whether BERT attention captures semantic relations that do not simply correspond to the previous/following token, we exclude sentences where the linked objects are less than two tokens apart. This leaves us with 473 annotated sentences. For each of these sentences, we obtain pre-trained BERT's attention weights for each of the 144 heads. For every head, we return the maximum absolute attention weight among those token pairs that correspond to the annotated semantic link contained within a given sentence. We then average the derived scores over all the collected examples. 
This strategy allows us to identify the heads that prioritize the features correlated with frame-semantic relations within a sentence. <<</Relation-specific heads in BERT>>> <<<Change in self-attention patterns after fine-tuning>>> Fine-tuning has a huge effect on performance, and this section attempts to find out why. To study how attention per head changes on average for each of the target GLUE tasks, we calculate cosine similarity between pre-trained and fine-tuned BERT's flattened arrays of attention weights. We average the derived similarities over all the development set examples. To evaluate contribution of pre-trained BERT to overall performance on the tasks, we consider two configurations of weights initialization, namely, pre-trained BERT weights and weights randomly sampled from normal distribution. <<</Change in self-attention patterns after fine-tuning>>> <<<Attention to linguistic features>>> In this experiment, we investigate whether fine-tuning BERT for a given task creates self-attention patterns which emphasize specific linguistic features. In this case, certain kinds of tokens may get high attention weights from all the other tokens in the sentence, producing vertical stripes on the corresponding attention maps (fig:atttypes). We tested this hypothesis by checking whether there are vertical stripe patterns corresponding to certain linguistically interpretable features, and to what extent such features are relevant for solving a given task. In particular, we investigated attention to nouns, verbs, pronouns, subjects, objects, and negation words, and special BERT tokens across the tasks. For every head, we compute the sum of self-attention weights assigned to the token of interest from each input token. Since the weights depend on the number of tokens in the input sequence, this sum is normalized by sequence length. This allows us to aggregate the weights for this feature across different examples. If there are multiple tokens of the same type (e.g. several nouns or negations), we take the maximum value. We disregard input sentences that do not contain a given feature. For each investigated feature, we calculate this aggregated attention score for each head in every layer and build a map in order to detect the heads potentially responsible for this feature. We then compare the obtained maps to the ones derived using the pre-trained BERT model. This comparison enables us to determine if a particular feature is important for a specific task and whether it contributes to some tasks more than to others. <<</Attention to linguistic features>>> <<<Token-to-token attention>>> To complement the experiments in Sec. SECREF34 and SECREF25, in this section, we investigate the attention patterns between tokens in the same sentence, i.e. whether any of the tokens are particularly important while a given token is being processed. We were interested specifically in the verb-subject relation and the noun-pronoun relation. Also, since BERT uses the representation of the [CLS] token in the last layer to make the prediction, we used the features from the experiment in Sec. SECREF34 in order to check if they get higher attention weights while the model is processing the [CLS] token. <<</Token-to-token attention>>> <<<Disabling self-attention heads>>> Since there does seem to be a certain degree of specialization for different heads, we investigated the effects of disabling different heads in BERT and the resulting effects on task performance. 
Since BERT relies heavily on the learned attention weights, we define disabling a head as modifying the attention values of a head to be constant $a = \frac{1}{L}$ for every token in the input sentence, where $L$ is the length of the sentence. Thus, every token receives the same attention, effectively disabling the learned attention patterns while maintaining the information flow of the original model. Note that by using this framework, we can disable an arbitrary number of heads, ranging from a single head per model to the whole layer or multiple layers. <<</Disabling self-attention heads>>> <<</Experiments>>> <<<Discussion>>> In general, our results suggest that even the smaller base BERT model is significantly overparametrized. This is supported by the discovery of repeated self-attention patterns in different heads, as well as the fact that disabling both single and multiple heads is not detrimental to model performance and in some cases even improves it. We found no evidence that attention patterns that are mappable onto core frame-semantic relations actually improve BERT's performance. 2 out of 144 heads that seem to be “responsible" for these relations (see Section SECREF25) do not appear to be important in any of the GLUE tasks: disabling of either one does not lead to a drop of accuracy. This implies that fine-tuned BERT does not rely on this piece of semantic information and prioritizes other features instead. For instance, we noticed that both STS-B and RTE fine-tuned models rely on attention in the same pair of heads (head 1 in the fourth layer, and head 12 in the second layer), as shown in Figure FIGREF37. We manually checked the attention maps in those heads for a set of random inputs, and established that both of them have high weights for words that appear in both sentences of the input examples. This most likely means that word-by-word comparison of the two sentences provides a solid strategy of making a classification prediction for STS-B and RTE. Unfortunately, we were not able to provide a conceptually similar interpretation of heads important for other tasks. <<</Discussion>>> <<<Conclusion>>> In this work, we proposed a set of methods for analyzing self-attention mechanisms of BERT, comparing attention patterns for the pre-trained and fine-tuned versions of BERT. Our most surprising finding is that, although attention is the key BERT's underlying mechanism, the model can benefit from attention "disabling". Moreover, we demonstrated that there is redundancy in the information encoded by different heads and the same patterns get consistently repeated regardless of the target task. We believe that these two findings together suggest a further direction for research on BERT interpretation, namely, model pruning and finding an optimal sub-architecture reducing data repetition. Another direction for future work is to study self-attention patterns in a different language. We think that it would allow to disentangle attention maps potentially encoding linguistic information and heads that use simple heuristics like attending to the following/previous tokens. <<</Conclusion>>> <<</Title>>>
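A minimal sketch of the per-head feature-attention score described in the "Attention to linguistic features" excerpt above: sum the attention that all input tokens assign to a token of interest, normalize by sequence length, and take the maximum over multiple tokens of the same type. Names and tensor layout are assumptions for illustration.

```python
# Sketch only: per-head attention-to-feature score for a single example.
# attn is (layers, heads, L, L); feature_positions are indices of tokens carrying
# the feature of interest (e.g., all nouns, or the negation word).
import torch

def feature_attention_score(attn: torch.Tensor, feature_positions):
    L = attn.shape[-1]
    per_position = torch.stack(
        # column p holds the attention every input token pays to token p;
        # sum it and normalize by sequence length, as in the excerpt above
        [attn[..., :, p].sum(dim=-1) / L for p in feature_positions], dim=-1
    )                                       # (layers, heads, num_feature_tokens)
    return per_position.max(dim=-1).values  # max over multiple tokens of the same type

# Per-example scores would then be aggregated over the dataset and compared
# against the same map computed with the pre-trained model.
```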
{ "references": [ "nouns,verbs,pronouns,subjects,objects,negation words,special BERT tokens" ], "type": "extractive" }
1908.08593
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What subset of GLUE tasks is used? Context: <<<Title>>> Revealing the Dark Secrets of BERT <<<Abstract>>> BERT-based architectures currently give state-of-the-art performance on many NLP tasks, but little is known about the exact mechanisms that contribute to its success. In the current work, we focus on the interpretation of self-attention, which is one of the fundamental underlying components of BERT. Using a subset of GLUE tasks and a set of handcrafted features-of-interest, we propose the methodology and carry out a qualitative and quantitative analysis of the information encoded by the individual BERT's heads. Our findings suggest that there is a limited set of attention patterns that are repeated across different heads, indicating the overall model overparametrization. While different heads consistently use the same attention patterns, they have varying impact on performance across different tasks. We show that manually disabling attention in certain heads leads to a performance improvement over the regular fine-tuned BERT models. <<</Abstract>>> <<<Introduction>>> Over the past year, models based on the Transformer architecture BIBREF0 have become the de-facto standard for state-of-the-art performance on many natural language processing (NLP) tasks BIBREF1, BIBREF2. Their key feature is the self-attention mechanism that provides an alternative to conventionally used recurrent neural networks (RNN). One of the most popular Transformer-based models is BERT, which learns text representations using a bi-directional Transformer encoder pre-trained on the language modeling task BIBREF2. BERT-based architectures have produced new state-of-the-art performance on a range of NLP tasks of different nature, domain, and complexity, including question answering, sequence tagging, sentiment analysis, and inference. State-of-the-art performance is usually obtained by fine-tuning the pre-trained model on the specific task. In particular, BERT-based models are currently dominating the leaderboards for SQuAD BIBREF3 and GLUE benchmarks BIBREF4. However, the exact mechanisms that contribute to the BERT's outstanding performance still remain unclear. We address this problem through selecting a set of linguistic features of interest and conducting a series of experiments that aim to provide insights about how well these features are captured by BERT. This paper makes the following contributions: We propose the methodology and offer the first detailed analysis of BERT's capacity to capture different kinds of linguistic information by encoding it in its self-attention weights. We present the evidence of BERT's overparametrization and suggest a counter-intuitive yet frustratingly simple way of improving its performance, showing absolute gains of up to 3.2%. <<</Introduction>>> <<<Related work>>> There have been several recent attempts to assess BERT's ability to capture structural properties of language. BIBREF5 demonstrated that BERT consistently assigns higher scores to the correct verb forms as opposed to the incorrect one in a masked language modeling task, suggesting some ability to model subject-verb agreement. BIBREF6 extended this work to using multiple layers and tasks, supporting the claim that BERT's intermediate layers capture rich linguistic information. 
On the other hand, BIBREF7 concluded that LSTMs generalize to longer sequences better, and are more robust with respect to agreement distractors, compared to Transformers. BIBREF8 investigated the transferability of contextualized word representations to a number of probing tasks requiring linguistic knowledge. Their findings suggest that (a) the middle layers of Transformer-based architectures are the most transferable to other tasks, and (b) higher layers of Transformers are not as task specific as the ones of RNNs. BIBREF9 argued that models using self-attention outperform CNN- and RNN-based models on a word sense disambiguation task due to their ability to extract semantic features from text. Our work contributes to the above discussion, but rather than examining representations extracted from different layers, we focus on the understanding of the self-attention mechanism itself, since it is the key feature of Transformer-based models. Another research direction that is relevant to our work is neural network pruning. BIBREF10 showed that widely used complex architectures suffer from overparameterization, and can be significantly reduced in size without a loss in performance. BIBREF5 observed that the smaller version of BERT achieves better scores on a number of syntax-testing experiments than the larger one. BIBREF11 questioned the necessity of computation-heavy neural networks, proving that a simple yet carefully tuned BiLSTM without attention achieves the best or at least competitive results compared to more complex architectures on the document classification task. BIBREF12 presented more evidence of unnecessary complexity of the self-attention mechanism, and proposed a more lightweight and scalable dynamic convolution-based architecture that outperforms the self-attention baseline. These studies suggest a potential direction for future research, and are in good accordance with our observations. <<</Related work>>> <<<Methodology>>> We pose the following research questions: What are the common attention patterns, how do they change during fine-tuning, and how does that impact the performance on a given task? (Sec. SECREF17, SECREF30) What linguistic knowledge is encoded in self-attention weights of the fine-tuned models and what portion of it comes from the pre-trained BERT? (Sec. SECREF25, SECREF34, SECREF36) How different are the self-attention patterns of different heads, and how important are they for a given task? (Sec. SECREF39) The answers to these questions come from a series of experiments with the basic pre-trained or the fine-tuned BERT models, as will be discussed below. All the experiments with the pre-trained BERT were conducted using the model provided with the PyTorch implementation of BERT (bert-base-uncased, 12-layer, 768-hidden, 12-heads, 110M parameters). We chose this smaller version of BERT because it shows competitive, if not better, performance while having fewer layers and heads, which makes it more interpretable. 
We use the following subset of GLUE tasks BIBREF4 for fine-tuning: MRPC: the Microsoft Research Paraphrase Corpus BIBREF13 STS-B: the Semantic Textual Similarity Benchmark BIBREF14 SST-2: the Stanford Sentiment Treebank, two-way classification BIBREF15 QQP: the Quora Question Pairs dataset RTE: the Recognizing Textual Entailment datasets QNLI: Question-answering NLI based on the Stanford Question Answering Dataset BIBREF3 MNLI: the Multi-Genre Natural Language Inference Corpus, matched section BIBREF16 Please refer to the original GLUE paper for details on the QQP and RTE datasets BIBREF4. We excluded two tasks: CoLa and the Winograd Schema Challenge. The latter is excluded due to the small size of the dataset. As for CoLa (the task of predicting linguistic acceptability judgments), GLUE authors report that the human performance is only 66.4, which is explained by the problems with the underlying methodology BIBREF17. Note also that CoLa is not included in the upcoming version of GLUE BIBREF18. All fine-tuning experiments follow the parameters reported in the original study (a batch size of 32 and 3 epochs, see devlin2018bert). In all these experiments, for a given input, we extract self-attention weights for each head in every layer. This results in a 2D float array of shape $L\times L$, where $L$ is the length of an input sequence. We will refer to such arrays as self-attention maps. Analysis of individual self-attention maps allows us to determine which target tokens are attended to the most as the input is processed token by token. We use these experiments to analyze how BERT processes different kinds of linguistic information, including the processing of different parts of speech (nouns, pronouns, and verbs), syntactic roles (objects, subjects), semantic relations, and negation tokens. <<</Methodology>>> <<<Experiments>>> In this section, we present the experiments conducted to address the above research questions. <<<BERT's self-attention patterns>>> Manual inspection of self-attention maps for both basic pre-trained and fine-tuned BERT models suggested that there is a limited set of self-attention maps types that are repeatedly encoded across different heads. Consistently with previous observations, we identified five frequently occurring patterns, examples of which are shown in fig:atttypes: Vertical: mainly corresponds to attention to special BERT tokens [CLS] and [SEP]; Diagonal: formed by the attention to the previous/following tokens; Vertical+Diagonal: a mix of the previous two types, Block: intra-sentence attention for the tasks with two distinct sentences (such as, for example, RTE or MRPC), Heterogeneous: highly variable depending on the specific input and cannot be characterized by a distinct structure. Whereas the attention to the special tokens is important for cross-sentence reasoning, and the attention to the previous/following token comes from language model pre-training, we hypothesize that the last of the listed types is more likely to capture interpretable linguistic features, necessary for language understanding. To get a rough estimate of the percentage of attention heads that may capture linguistically interpretable information, we manually annotated around 400 sample self-attention maps as belonging to one of the five classes. The self-attention maps were obtained by feeding random input examples from selected tasks into the corresponding fine-tuned BERT model. 
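The experiments above rely on extracting, for every input, one $L\times L$ self-attention map per head and layer. The paper used the PyTorch implementation of BERT; the sketch below uses the Hugging Face transformers API as a stand-in (our assumption, not the authors' code) to obtain maps in that shape.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

def self_attention_maps(sentence: str) -> torch.Tensor:
    """Return self-attention maps of shape (num_layers, num_heads, L, L)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.attentions is a tuple with one (1, 12, L, L) tensor per layer
    return torch.stack(outputs.attentions).squeeze(1)

maps = self_attention_maps("The cake was not good at all.")
print(maps.shape)  # (12, 12, L, L), where L includes [CLS] and [SEP]
```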
This produced a somewhat unbalanced dataset, in which the “Vertical” class accounted for 30% of all samples. We then trained a convolutional neural network with 8 convolutional layers and ReLU activation functions to classify input maps into one of these classes. This model achieved the F1 score of 0.86 on the annotated dataset. We used this classifier to estimate the proportion of different self-attention patterns for the target GLUE tasks using up to 1000 examples (where available) from each validation set. <<<Results>>> fig:attentionbydataset shows that the self-attention map types described above are consistently repeated across different heads and tasks. While a large portion of encoded information corresponds to attention to the previous/following token, to the special tokens, or a mixture of the two (the first three classes), the estimated upper bound on all heads in the “Heterogeneous” category (i.e. the ones that could be informative) varies from 32% (MRPC) to 61% (QQP) depending on the task. We would like to emphasize that this only gives the upper bound on the percentage of attention heads that could potentially capture meaningful structural information beyond adjacency and separator tokens. <<</Results>>> <<</BERT's self-attention patterns>>> <<<Relation-specific heads in BERT>>> In this experiment, our goal was to understand whether different syntactic and semantic relations are captured by self-attention patterns. While a large number of such relations could be investigated, we chose to examine semantic role relations defined in frame semantics, since they can be viewed as being at the intersection of syntax and semantics. Specifically, we focused on whether BERT captures FrameNet's relations between frame-evoking lexical units (predicates) and core frame elements BIBREF19, and whether the links between them produce higher attention weights in certain specific heads. We used pre-trained BERT in these experiments. The data for this experiment comes from FrameNet BIBREF19, a database that contains frame annotations for example sentences for different lexical units. Frame elements correspond to semantic roles for a given frame, for example, “buyer", “seller", and “goods” for the “Commercial_transaction" frame evoked by the words “sell” and “spend” or “topic” and “text” for the “Scrutiny” semantic frame evoked by the verb “address”. fig:framenet shows an example of such annotation. We extracted sample sentences for every lexical unit in the database and identified the corresponding core frame elements. Annotated elements in FrameNet may be rather long, so we considered only the sentences with frame elements of 3 tokens or less. Since each sentences is annotated only for one frame, semantic links from other frames can exist between unmarked elements. We therefore filter out all the sentences longer than 12 tokens, since shorter sentences are less likely to evoke multiple frames. To establish whether BERT attention captures semantic relations that do not simply correspond to the previous/following token, we exclude sentences where the linked objects are less than two tokens apart. This leaves us with 473 annotated sentences. For each of these sentences, we obtain pre-trained BERT's attention weights for each of the 144 heads. For every head, we return the maximum absolute attention weight among those token pairs that correspond to the annotated semantic link contained within a given sentence. We then average the derived scores over all the collected examples. 
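A sketch of the head-scoring procedure just described: for every annotated sentence, each head is scored with the maximum absolute attention weight over the token pairs that realize the annotated semantic link, and the scores are then averaged over all 473 sentences. Treating the link as symmetric (checking both directions of attention) is our assumption; the names are ours.

```python
import numpy as np

def relation_score_per_head(attention_maps, linked_pairs):
    """Score each head by the annotated frame-semantic link in one sentence.

    attention_maps: array of shape (num_layers, num_heads, L, L).
    linked_pairs: list of (i, j) token-index pairs forming the annotated link.
    Returns an array of shape (num_layers, num_heads).
    """
    candidates = []
    for i, j in linked_pairs:
        candidates.append(np.abs(attention_maps[:, :, i, j]))
        candidates.append(np.abs(attention_maps[:, :, j, i]))  # symmetric check (assumption)
    return np.max(np.stack(candidates), axis=0)

def average_relation_map(annotated_examples):
    """Average per-head scores over all annotated FrameNet sentences."""
    scores = [relation_score_per_head(m, pairs) for m, pairs in annotated_examples]
    return np.mean(np.stack(scores), axis=0)
```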
This strategy allows us to identify the heads that prioritize the features correlated with frame-semantic relations within a sentence. <<</Relation-specific heads in BERT>>> <<<Change in self-attention patterns after fine-tuning>>> Fine-tuning has a huge effect on performance, and this section attempts to find out why. To study how attention per head changes on average for each of the target GLUE tasks, we calculate cosine similarity between pre-trained and fine-tuned BERT's flattened arrays of attention weights. We average the derived similarities over all the development set examples. To evaluate contribution of pre-trained BERT to overall performance on the tasks, we consider two configurations of weights initialization, namely, pre-trained BERT weights and weights randomly sampled from normal distribution. <<</Change in self-attention patterns after fine-tuning>>> <<<Attention to linguistic features>>> In this experiment, we investigate whether fine-tuning BERT for a given task creates self-attention patterns which emphasize specific linguistic features. In this case, certain kinds of tokens may get high attention weights from all the other tokens in the sentence, producing vertical stripes on the corresponding attention maps (fig:atttypes). We tested this hypothesis by checking whether there are vertical stripe patterns corresponding to certain linguistically interpretable features, and to what extent such features are relevant for solving a given task. In particular, we investigated attention to nouns, verbs, pronouns, subjects, objects, and negation words, and special BERT tokens across the tasks. For every head, we compute the sum of self-attention weights assigned to the token of interest from each input token. Since the weights depend on the number of tokens in the input sequence, this sum is normalized by sequence length. This allows us to aggregate the weights for this feature across different examples. If there are multiple tokens of the same type (e.g. several nouns or negations), we take the maximum value. We disregard input sentences that do not contain a given feature. For each investigated feature, we calculate this aggregated attention score for each head in every layer and build a map in order to detect the heads potentially responsible for this feature. We then compare the obtained maps to the ones derived using the pre-trained BERT model. This comparison enables us to determine if a particular feature is important for a specific task and whether it contributes to some tasks more than to others. <<</Attention to linguistic features>>> <<<Token-to-token attention>>> To complement the experiments in Sec. SECREF34 and SECREF25, in this section, we investigate the attention patterns between tokens in the same sentence, i.e. whether any of the tokens are particularly important while a given token is being processed. We were interested specifically in the verb-subject relation and the noun-pronoun relation. Also, since BERT uses the representation of the [CLS] token in the last layer to make the prediction, we used the features from the experiment in Sec. SECREF34 in order to check if they get higher attention weights while the model is processing the [CLS] token. <<</Token-to-token attention>>> <<<Disabling self-attention heads>>> Since there does seem to be a certain degree of specialization for different heads, we investigated the effects of disabling different heads in BERT and the resulting effects on task performance. 
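A sketch of the aggregation described in the attention-to-linguistic-features experiment above: for each head, sum the attention received by the token of interest from every input token, normalize by sequence length, take the maximum over multiple tokens of the same type, and skip sentences that lack the feature. The row-to-column convention (row = attending token) is our assumption.

```python
import numpy as np

def feature_attention_score(attention_maps, feature_positions):
    """Per-head attention mass flowing into tokens of one linguistic type.

    attention_maps: array (num_layers, num_heads, L, L); entry [..., i, j]
    is the attention paid by token i to token j.
    feature_positions: indices of the tokens of interest (e.g. all negations).
    Returns an array of shape (num_layers, num_heads), or None if absent.
    """
    if not feature_positions:
        return None  # sentences without the feature are disregarded
    L = attention_maps.shape[-1]
    per_token = [attention_maps[:, :, :, j].sum(axis=-1) / L
                 for j in feature_positions]
    return np.max(np.stack(per_token), axis=0)  # several tokens of a type: keep the max

def feature_map_over_dataset(examples):
    """Average the per-head scores over all examples containing the feature."""
    scores = [s for s in (feature_attention_score(m, pos) for m, pos in examples)
              if s is not None]
    return np.mean(np.stack(scores), axis=0)
```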
Since BERT relies heavily on the learned attention weights, we define disabling a head as modifying the attention values of a head to be constant $a = \frac{1}{L}$ for every token in the input sentence, where $L$ is the length of the sentence. Thus, every token receives the same attention, effectively disabling the learned attention patterns while maintaining the information flow of the original model. Note that by using this framework, we can disable an arbitrary number of heads, ranging from a single head per model to a whole layer or multiple layers. <<</Disabling self-attention heads>>> <<</Experiments>>> <<<Discussion>>> In general, our results suggest that even the smaller base BERT model is significantly overparametrized. This is supported by the discovery of repeated self-attention patterns in different heads, as well as the fact that disabling both single and multiple heads is not detrimental to model performance and in some cases even improves it. We found no evidence that attention patterns that are mappable onto core frame-semantic relations actually improve BERT's performance. Two of the 144 heads that seem to be “responsible” for these relations (see Section SECREF25) do not appear to be important in any of the GLUE tasks: disabling either one does not lead to a drop in accuracy. This implies that fine-tuned BERT does not rely on this piece of semantic information and prioritizes other features instead. For instance, we noticed that both STS-B and RTE fine-tuned models rely on attention in the same pair of heads (head 1 in the fourth layer, and head 12 in the second layer), as shown in Figure FIGREF37. We manually checked the attention maps in those heads for a set of random inputs, and established that both of them have high weights for words that appear in both sentences of the input examples. This most likely means that word-by-word comparison of the two sentences provides a solid strategy for making a classification prediction for STS-B and RTE. Unfortunately, we were not able to provide a conceptually similar interpretation of the heads important for other tasks. <<</Discussion>>> <<<Conclusion>>> In this work, we proposed a set of methods for analyzing self-attention mechanisms of BERT, comparing attention patterns for the pre-trained and fine-tuned versions of BERT. Our most surprising finding is that, although attention is BERT's key underlying mechanism, the model can benefit from attention “disabling”. Moreover, we demonstrated that there is redundancy in the information encoded by different heads and the same patterns get consistently repeated regardless of the target task. We believe that these two findings together suggest a further direction for research on BERT interpretation, namely, model pruning and finding an optimal sub-architecture that reduces this redundancy. Another direction for future work is to study self-attention patterns in a different language. We think that it would allow us to disentangle attention maps that potentially encode linguistic information from heads that use simple heuristics like attending to the following/previous tokens. <<</Conclusion>>> <<</Title>>>
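The single-head ablation results discussed above can be gathered with a simple driver like the one below; `evaluate` and `run_with_disabled_heads` are hypothetical stand-ins for a task-specific scorer and a model wrapper that applies the uniform-attention substitution inside the chosen heads, so this is a sketch of the experimental loop rather than the authors' implementation.

```python
import itertools

NUM_LAYERS, NUM_HEADS = 12, 12  # BERT-base

def single_head_ablation(model, dev_set, evaluate, run_with_disabled_heads):
    """Measure the accuracy change from disabling each head in isolation."""
    baseline = evaluate(model, dev_set)
    deltas = {}
    for layer, head in itertools.product(range(NUM_LAYERS), range(NUM_HEADS)):
        ablated = run_with_disabled_heads(model, {(layer, head)})
        deltas[(layer, head)] = evaluate(ablated, dev_set) - baseline
    return baseline, deltas  # positive deltas mean disabling the head helps
```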
{ "references": [ "MRPC,STS-B,SST-2,QQP,RTE,QNLI,MNLI" ], "type": "extractive" }
1911.02711
Please answer the following question with yes or no based on the given text. You only need to output 'Yes' or 'No' without any additional explanation. Question: Do they predict the sentiment of the review summary? Context: <<<Title>>> Exploring Hierarchical Interaction Between Review and Summary for Better Sentiment Analysis <<<Abstract>>> Sentiment analysis provides a useful overview of customer review contents. Many review websites allow a user to enter a summary in addition to a full review. It has been shown that jointly predicting the review summary and the sentiment rating benefits both tasks. However, these methods consider the integration of review and summary information in an implicit manner, which limits their performance to some extent. In this paper, we propose a hierarchically-refined attention network for better exploiting multi-interaction between a review and its summary for sentiment analysis. In particular, the representation of a review is layer-wise refined by attention over the summary representation. Empirical results show that our model can better make use of user-written summaries for review sentiment analysis, and is also more effective compared to existing methods when the user summary is replaced with summary generated by an automatic summarization system. <<</Abstract>>> <<<Introduction>>> Sentiment analysis BIBREF0, BIBREF1 is a fundamental task in natural language processing. In particular, sentiment analysis of user reviews has wide applicationsBIBREF2, BIBREF3, BIBREF4, BIBREF5. In many review websites such as Amazon and IMDb, the user is allowed to give a summary in addition to their review. Summaries usually contain more abstract information about the review. As shown in Figure FIGREF3, two screenshots of reviews were taken from Amazon and IMDb websites, respectively. The user-written summaries of these reviews can be highly indicative of the final polarity. As a result, it is worth considering them together with the review itself for making sentiment classification. To this end, some recent work BIBREF6, BIBREF7 exploits joint modeling. The model structure can be illustrated by Figure FIGREF4. In particular, given a review input, a model is trained to simultaneously predict the sentiment and summary. As a result, both summary information and review information are integrated in the review encoder through back-propagation training. However, one limitation of this method is that it does not explicitly encode a summary during test time. One solution, as shown in Figure FIGREF4, is to train a separate summary generator, which learns to predict a summary given a review. This allows a sentiment classifier to simultaneously encode the review and its summary, before making a prediction using both representations. One further advantage of this model is that it can make use of a user-given summary if it is available with the review, which is the case for the review websites shown in Figure 1. We therefore investigate such a model. One limitation of this method, however, is that it does not capture interaction of review and summary information as thoroughly as the method shown in Figure FIGREF4, since the review and the summary are encoded using two separate encoders. To address this issue, we further investigate a joint encoder for review and summary, which is demonstrated in Figure FIGREF4. 
The model works by jointly encoding the review and the summary in a multi-layer structure, incrementally updating the representation of the review by consulting the summary representation at each layer. As shown in Figure FIGREF5, our model consists of a summary encoder, a hierarchically-refined review encoder and an output layer. The review encoder is composed of multiple attention layers, each consisting of a sequence encoding layer and an attention inference layer. Summary information is integrated into the representation of the review content at each attention layer, thus, a more abstract review representation is learned in subsequent layers based on a lower-layer representation. This mechanism allows the summary to better guide the representation of the review in a bottom-up manner for improved sentiment classification. We evaluate our proposed model on the SNAP (Stanford Network Analysis Project) Amazon review datasets BIBREF8, which contain not only reviews and ratings, but also golden summaries. In scenarios where there is no user-written summary for a review, we use pointer-generator network BIBREF9 to generate abstractive summaries. Empirical results show that our model significantly outperforms all strong baselines, including joint modeling, separate encoder and joint encoder methods. In addition, our model achieves new state-of-the-art performance, attaining 2.1% (with generated summary) and 4.8% (with golden summary) absolutely improvements compared to the previous best method on SNAP Amazon review benchmark. <<</Introduction>>> <<<Related Work>>> The majority of recent sentiment analysis models are based on either convolutional or recurrent neural networks to encode sequences BIBREF10, BIBREF11. In particular, attention-based models have been widely explored, which assign attention weights to hidden states to generate a representation of the input sequence. A hierarchical model with two levels of attention mechanisms was proposed for document classification BIBREF12. Self-attention mechanism has also been used in sentiment analysis BIBREF13, BIBREF14. However, BIBREF15 empirically showed that self-attention mechanism does not consistently agree with the most salient features, which means that self-attention models may suffer from attending on explicit but irrelevant sentimental words. Rationales were also introduced to sentiment analysis task. BIBREF16 proposed a unsupervised latent model that selects a rationale and then uses the rationale for sentiment analysis. A rationale-augmented CNN model BIBREF17 was proposed, which regards golden rationales as additional input and uses the probability as rationale-level attention weights to generate the final representation for text classification. There has also been work focusing on joint summarization and sentiment classification BIBREF6, BIBREF7, whose general structures are illustrated in Figure FIGREF4. These models can predict sentiment label and summary simultaneously. However, they do not encode summaries explicitly during test time, which makes their performance be limited to some extent. <<</Related Work>>> <<<Method>>> In this section, we introduce our proposed model in details. We first give the problem formulation, followed by an overview of the proposed model, and explain each layer of our model in details, before finally giving the loss function and training methods. 
<<<Problem Formulation>>> The input to our task is a pair $(X^w, X^s)$, where $X^w = x^w_1, x^w_2, ..., x^w_n$ is a summary and $X^s = x^s_1, x^s_2,...,x^s_m$ is a review, the task is to predict the sentiment label $y \in [1, 5]$, where 1 denotes the most negative sentiment and 5 denotes the most positive sentiment. $n$ and $m$ denote the size of the review and summary in the number of words, respectively. The training set is $D=\lbrace (X^w_i, X^s_i, y_i)\rbrace |_{i=1}^M$ where $M$ is the total number of training examples. <<</Problem Formulation>>> <<<Model Overview>>> Figure FIGREF5 gives the architecture of the proposed model, which consists of three modules: a summary encoder, a hierarchically-refined review encoder and an output layer. The summary encoder encodes the summary into a hidden state matrix. The review encoder consists of several layers for representing $\mathbf {x}^w$, each containing a sequence encoding sublayer and an attention inference sublayer. The sequence encoding sublayer encodes the review text as a word sequence. The attention inference layer acts as a key component, which takes the hidden states from both the original review and the summary as input calculating dot-product attention weights for original review under additional supervision from summary information. Multi-head attention BIBREF18 as well as residual connection are also adopted. The output layer predicts the potential sentiment label according to hidden states from the previous layer. <<</Model Overview>>> <<<Summary Encoder>>> Input for the summary encoder is a sequence of summary word representations $\mathbf {x}^s = \mathbf {x}^s_1, \mathbf {x}^s_2, ..., \mathbf {x}^s_m = \lbrace emb(x_1^s), ..., emb(x_m^s)\rbrace $, where $emb$ denotes a word embedding lookup table. Word representations are fed into a standard BiLSTM. We adopt a standard LSTM formulation, where a sequence of hidden states $\mathbf {h}_t$ are calculated from a sequence of $\mathbf {x}_t$($t \in [1,...,m]$). A forward left-to-right LSTM layer and a backward right-to-left LSTM yield a sequence of forward hidden states $\lbrace {\stackrel{\rightarrow }{\mathbf {h}_1^s}},...,{\stackrel{\rightarrow }{\mathbf {h}_n^s}}\rbrace $ and a sequence of backward hidden states $\lbrace {\stackrel{\leftarrow }{\mathbf {h}_1^s}},...,{\stackrel{\leftarrow }{\mathbf {h}_n^s}}\rbrace $, respectively. The two hidden states are concatenated to form a final representation: We then apply an average-pooling operation over the hidden and take $\mathbf {h}^s = avg\_pooling(\mathbf {h}^s_1, \mathbf {h}^s_2,...,\mathbf {h}^s_n)$ as the final representation of summary text. <<</Summary Encoder>>> <<<Hierarchically-Refined Review Encoder>>> The hierarchically-refined review encoder consists of several review encoder layers, each of which is composed of a sequence encoding layer and an attention inference layer. <<<Sequence Encoding Layer>>> Given a review $\mathbf {x}^w = \lbrace emb(x_1^w),...,emb(x_n^w)\rbrace $, another BiLSTM is adopted (the same equation with different parameters compared to the one used in the summary encoder), deriving a sequence of review hidden states $\mathbf {H}^w=\lbrace \mathbf {h}^w_1, \mathbf {h}^w_2,...,\mathbf {h}^s_n \rbrace $. 
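A minimal sketch of the summary encoder described above: word embeddings fed to a BiLSTM, forward and backward states concatenated, and average pooling producing the summary vector $\mathbf{h}^s$. The 300-dimensional embeddings and hidden size of 256 per direction follow the experimental settings reported later in the paper; the remaining details are our assumptions.

```python
import torch
import torch.nn as nn

class SummaryEncoder(nn.Module):
    """BiLSTM over summary word embeddings followed by average pooling."""

    def __init__(self, vocab_size, emb_dim=300, hidden_size=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_size,
                              batch_first=True, bidirectional=True)

    def forward(self, summary_ids):            # (batch, m)
        emb = self.embedding(summary_ids)      # (batch, m, emb_dim)
        states, _ = self.bilstm(emb)           # (batch, m, 2 * hidden_size)
        return states.mean(dim=1)              # h^s: (batch, 2 * hidden_size)

encoder = SummaryEncoder(vocab_size=30000)
h_s = encoder(torch.randint(0, 30000, (8, 12)))  # batch of 8 summaries, 12 tokens each
```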
<<</Sequence Encoding Layer>>> <<<Attention Inference Layer>>> In the attention inference layer, we model the dependencies between the original review and the summary with multi-head dot-product attention.Each head produces an attention matrix $\mathbf {\alpha } \in \mathbb {R}^{d_h \times 1}$ consisting of a set of similarity scores between the hidden state of each token of the review text and the summary representation. The hidden state outputs are calculated by where $\mathbf {W}_i^Q \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$, $\mathbf {W}_i^K \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$ and $\mathbf {W}_i^V \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$ are model parameters. $Q$, $K$ and $V$ represent Query, Key and Value, respectively. $k$ is the number of parallel heads and $i \in [1,k]$ indicates which head is being processed. Following BIBREF18, we adopt a residual connection around each attention inference layer, followed by layer normalization BIBREF19 : $\mathbf {H}$ is then fed to the subsequent sequence encoding layer as input, if any. According to the equations of standard LSTM and Equation DISPLAY_FORM13, tokens of the original review that are the most relevant to the summary are focused on more by consulting summary representation. The hidden states $\mathbf {H}^{w,s}$ are thus a representation matrix of the review text that encompass key features of summary representation. Multi-head attention mechanism ensures that multi-faced semantic dependency features can be captured during the process, which is beneficial for scenarios where several key points exist in one review. Note also that our design of the review encoding part of the hierarchically-refined attention network is similar to the Transformer architecture in the use of multi-head attention, residual connection and layer normalization BIBREF18. However, our experiments show that bi-directional LSTM works better compared to self-attention network as a basic layer structure. This may result from the fact that Transformer requires a larger amount of training data for the most effectiveness. <<</Attention Inference Layer>>> <<</Hierarchically-Refined Review Encoder>>> <<<Output Layer>>> Finally, global average pooling is applied after the previous layer, and then followed by a classifier layer: where $\hat{y}$ is the predicted sentiment label; $\mathbf {W}$ and $\mathbf {b}$ are parameters to be learned. <<</Output Layer>>> <<<Training>>> Given a dataset $D={\lbrace (X^w_t,X^s_t,y_t)\rbrace }|^{|T|}_{t=1}$, our model can be trained by minimizing the cross-entropy loss between where $\mathbf {p}^{y_t}$ denotes the value of the label in $\mathbf {p}$ that corresponds to $y_t$. <<</Training>>> <<</Method>>> <<<Experiments>>> We compare our model with several strong baselines and previous state-of-the-art methods, investigating its main effects. <<<Datasets>>> We empirically compare different methods using Amazon SNAP Review Dataset BIBREF20, which is a part of Stanford Network Analysis Project. The raw dataset consists of around 34 millions Amazon reviews in different domains, such as books, games, sports and movies. Each review mainly contains a product ID, a piece of user information, a plain text review, a review summary and an overall sentiment rating which ranges from 1 to 5. The statistics of our adopted dataset is shown in Table TABREF20. 
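A sketch of the output layer and training objective described above: global average pooling over the refined review states, a linear classifier over the five rating classes, and a cross-entropy loss against the gold rating. The 512-dimensional hidden size (two concatenated 256-dimensional LSTM directions) is our reading of the experimental settings, not an explicitly stated value.

```python
import torch
import torch.nn as nn

class OutputLayer(nn.Module):
    """Global average pooling over review states, then a 5-way rating classifier."""

    def __init__(self, hidden_size=512, num_classes=5):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, review_states):          # (batch, n, hidden_size)
        pooled = review_states.mean(dim=1)     # global average pooling
        return self.classifier(pooled)         # scores over ratings 1..5

output_layer = OutputLayer()
criterion = nn.CrossEntropyLoss()
logits = output_layer(torch.randn(8, 40, 512))
gold_ratings = torch.randint(1, 6, (8,))       # ratings in [1, 5]
loss = criterion(logits, gold_ratings - 1)     # shift to class indices 0..4
```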
For fair comparison with previous work, we adopt the same partitions used by previous work BIBREF6, BIBREF7, which is, for each domain, the first 1000 samples are taken as the development set, the following 1000 samples as the test set, and the rest as the training set. <<</Datasets>>> <<<Experimental Settings>>> We use GloVe BIBREF22 300-dimensional embeddings as pretrained word vectors. A LSTM hidden size of 256 and four heads for multi-head attention mechanism are adopted. We use Adam BIBREF23 to optimize our model, with an initial learning rate of 0.0003, a decay rate of 0.97, momentum parameters $\beta _1 = 0.9$, $\beta _2 = 0.999$, and $\epsilon = 1 \times 10^{-8}$. The dropout rate is set depending on the size of each dataset, which is 0.5 for both Toys & Games and Sports & Outdoors and 0.2 for Movies & TV. We conduct experiments with both golden summaries and generated summaries. For generating automatic-decoded summaries, we train a pointer-generator network (PG-Net) with coverage mechanism BIBREF9, which is a specially designed sequence-to-sequence attention-based model that can generate the summary by copying words from the text document or generating words from a fixed vocabulary set at the same time. We generally follow the experimental settings in the original paper except for some minor adjustments specially made for our datasets. Noted that in our work PG-Net can be replaced by any other summarization model. <<</Experimental Settings>>> <<<Baselines>>> <<<HSSC @!START@BIBREF6@!END@.>>> This model adopts encoder parameter sharing for jointly sentiment classification and summarization. It predicts the sentiment label using a highway layer, concatenating the hidden state in summary decoder and the original text representation in encoder. <<</HSSC @!START@BIBREF6@!END@.>>> <<<SAHSSC @!START@BIBREF7@!END@.>>> This work also adopts encoder parameter sharing for jointly sentiment classification and summarization. They use two separate BiLSTMs with self-attention mechanism for generating review and summary representations. <<</SAHSSC @!START@BIBREF7@!END@.>>> <<<BiLSTM+Pooling.>>> For this baseline, we use a BiLSTM with hidden sizes of 256 in both directions, and average pooling across all hidden states to form the representation. This method serves as a naive baseline for making use of both review and summary in sentiment classification. It can also be used to compare the effectiveness of the review itself, the summary itself and the combination of both when used as inputs to the problem. <<</BiLSTM+Pooling.>>> <<<BiLSTM+Self-attention @!START@BIBREF13@!END@.>>> This baseline uses a BiLSTM with hidden size of 256 in both directions. On the top of BiLSTM, self-attention is used to provide a set of summation weight vectors for the final representation. This method is conceptually simple yet gives the state-of-the-art results for many classification and text matching tasks. Its main difference to our model lies in the fact that attention is performed only in the top hidden layer in this method, yet in every layer in ours. <<</BiLSTM+Self-attention @!START@BIBREF13@!END@.>>> <<<BiLSTM+Hard Attention>>> To demonstrate the efficiency of our model structure, we also adopt hard attention BIBREF24 for comparison, which is supervised using an extractive summarization objective. In particular, words in the original review that match to the corresponding summary are treated as the summary in their original order. 
In the case of Figure FIGREF3, the extractive summaries for the review are “James Cameron's Titanic is easily the most overrated film in history”, which corresponds to the user-written summary “James Cameron's 1997 Titanic is easily the most overrated film in history!”. The model also calculates another loss between attention weights and extractive summary labels, so that the hard attention weights are trained to strictly follow the extractive summary. For baselines that adopt the separate encoder structure, we generally calculate the representations of review and summary separately with two encoders that hold their own parameters, and then concatenate the two representations alongside the hidden-size dimension. For the joint encoder baselines, we first concatenate the review and summary text, and then encode the concatenated text with one single encoder. <<</BiLSTM+Hard Attention>>> <<</Baselines>>> <<<Development Experiments>>> We use the Toys & Games development set to investigate different key configurations of our model. The results are shown in Table TABREF29. <<<Self-attention Baseline>>> We compare different numbers of BiLSTM layers and hidden sizes in BiLSTM self-attention. As can be seen, with more layers a stacked BiLSTM with larger hidden sizes does not give better results compared to a hidden size of 256 either. <<</Self-attention Baseline>>> <<<Hidden Size>>> We see an evident improvement of our model when the hidden size increases from 128 to 256. However, the improvement becomes relatively small compared to a large increase in the number of parameters when the hidden size is further increased to 360. Therefore, we adopt 256 as the hidden size in our experiments. <<</Hidden Size>>> <<<Number of Layers>>> As Table TABREF29 shows, the accuracy increases when increasing layer numbers from 1 to 2. More layers do not increase the accuracy on development set. We thus set 2 as the number of review encoder layers in the experiments. The best performing model size is comparable to that of the BiLSTM self-attention, demonstrating that the number of parameters is not the key factor to models' performance. <<</Number of Layers>>> <<</Development Experiments>>> <<<Results>>> Table TABREF34 and Table TABREF35 show the final results. Our model outperforms all the baseline models and the top-performing models with both generated summary and golden summary, for all the three datasets. In the scenario where golden summaries are used, BiLSTM+self-attention performs the best among all the baselines, which shows that attention is a useful way to integrate summary and review information. Hard-attention receives more supervision information compared with soft-attention, by supervision signals from extractive summaries. However, it underperforms the soft attention model, which indicates that the most salient words for making sentiment classification may not strictly overlap with extractive summaries. This justifies the importance of user written or automatic-generated summary. A comparison between models that use summary information and those that do not use summary information shows that the review summary is useful for sentiment classification. In addition, the same models work consistently better when the user written gold summary is used compared to a system generated summary, which is intuitively reasonable since the current state-of-the-art abstractive summarization models are far from perfect. 
Interestingly, as shown in the second section of the table, the gold summary itself does not lead to better sentiment accuracy compared with the review itself, which shows that summaries better serve as auxiliary information sources to review contents. With both gold summaries and automatic-generated summaries, our model gives better results as compared to BiLSTM+self-attention. The latter integrates information from reviews and summaries only in the top representation layer, which is also the standard practice in question answering BIBREF25 and machine translation BIBREF26 models. In contrast, our model integrates summary information into the review representation in each layer, thereby allowing the integrated representation to be hierarchically refined, leading to more abstract hidden states. Finally, the fact that with gold summary, our baseline and final models outperforms the state-of-the-art methods by jointly training shows the importance of making use of user written summaries when they are available. Even with system summary, out models still outperforms HSSC and SAHSSC, showing that our network is more effective than parameter sharing under the same setting without input summaries. <<<Review Length>>> Figure FIGREF37 consists of line graphs on the accuracy of BiLSTM+self-attention, BiLSTM+pooling and our model against the review length. As the review length increases, the performance of all models decreases. BiLSTM+self-attention does not outperform BiLSTM+pooling on long text. Our method gives better results compared to two baseline models for long reviews, demonstrating that our model is effective for capturing long-term dependency. This is likely because hierarchically-refined attention maintains the most salient information while ignoring the redundant parts of the original review text. Our model can thus be more robust when review has irrelevant sentimental words, which usually exists in larger reviews such as the example in Figure FIGREF3. The hierarchical architecture allows the lower layers to encode local information, while the higher layers can capture long-term dependency and thus better encode global information. <<</Review Length>>> <<<Case Study>>> Our model has a natural advantage of interpretability thanks to the use of attention inference layer. We visualize the hierarchically-refined attention of two samples from the test set of Toys & Games. We also visualize self-attention distribution for fair comparison. To make the visualizations clear and to avoid confusion, we choose to visualize the most salient parts, by rescaling all attention weights into an interval of $[0, 100]$ and adopting 50 as a threshold for attention visualization, showing only attention weights $\ge 50$. As shown in Figure FIGREF38, the example with generated summary has 5 stars as its golden rating score. The summary text is “fun for the whole new game in all ages ! ! ! fun ! ! !", which suggests that the game is (1) fun (from word “fun") and (2) not difficult to learn (from phrase “all ages"). It can be seen that both the self-attention model and the first layer of our model attend to the strongly positive phrase “quite fun", which is relevant to the word “fun" in the summary. In comparisons the second layer attends to the phrase “much easier", which is relevant to the phrase “in all ages" in the summary. This verifies our model's effectiveness of leveraging abstractive summary information. Figure FIGREF38 illustrates a 5-star-rating example with golden summary. 
The summary text is “Favorite Game to Teach to Newbies". As shown in the heatmap, self-attention can only attend to some general sentimental words, such as “hard", “fun", “immensely" and “most", which deviates from the main idea of the document text. In comparison, the first layer of our model attends to phrases like “easy to teach", which is a perfect match of the phrase “teach to newbies" in the summary. This shows that the shallow sequence inference layer can learn direct similarity matching information under the supervision of summarization. In addition, the second layer of our model attends to phrases including “would recommend this to anyone", which links to “easy to teach" and “Teach to Newbies", showing that the deeper sequence inference layer of our model can learn potential connections between the review and the summary. <<</Case Study>>> <<</Results>>> <<</Experiments>>> <<<Conclusion>>> We investigated a hierarchically-refined attention network for better sentiment prediction. Our model allows multi-interaction between summary and review representation in a hierarchical manner. Empirical results show that the proposed method outperforms all strong baselines and previous work and achieves new state-of-the-art performance on SNAP Amazon Review dataset. <<</Conclusion>>> <<</Title>>>
{ "references": [ "No" ], "type": "boolean" }
1911.02711
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: Which review dataset do they use? Context: <<<Title>>> Exploring Hierarchical Interaction Between Review and Summary for Better Sentiment Analysis <<<Abstract>>> Sentiment analysis provides a useful overview of customer review contents. Many review websites allow a user to enter a summary in addition to a full review. It has been shown that jointly predicting the review summary and the sentiment rating benefits both tasks. However, these methods consider the integration of review and summary information in an implicit manner, which limits their performance to some extent. In this paper, we propose a hierarchically-refined attention network for better exploiting multi-interaction between a review and its summary for sentiment analysis. In particular, the representation of a review is layer-wise refined by attention over the summary representation. Empirical results show that our model can better make use of user-written summaries for review sentiment analysis, and is also more effective compared to existing methods when the user summary is replaced with summary generated by an automatic summarization system. <<</Abstract>>> <<<Introduction>>> Sentiment analysis BIBREF0, BIBREF1 is a fundamental task in natural language processing. In particular, sentiment analysis of user reviews has wide applicationsBIBREF2, BIBREF3, BIBREF4, BIBREF5. In many review websites such as Amazon and IMDb, the user is allowed to give a summary in addition to their review. Summaries usually contain more abstract information about the review. As shown in Figure FIGREF3, two screenshots of reviews were taken from Amazon and IMDb websites, respectively. The user-written summaries of these reviews can be highly indicative of the final polarity. As a result, it is worth considering them together with the review itself for making sentiment classification. To this end, some recent work BIBREF6, BIBREF7 exploits joint modeling. The model structure can be illustrated by Figure FIGREF4. In particular, given a review input, a model is trained to simultaneously predict the sentiment and summary. As a result, both summary information and review information are integrated in the review encoder through back-propagation training. However, one limitation of this method is that it does not explicitly encode a summary during test time. One solution, as shown in Figure FIGREF4, is to train a separate summary generator, which learns to predict a summary given a review. This allows a sentiment classifier to simultaneously encode the review and its summary, before making a prediction using both representations. One further advantage of this model is that it can make use of a user-given summary if it is available with the review, which is the case for the review websites shown in Figure 1. We therefore investigate such a model. One limitation of this method, however, is that it does not capture interaction of review and summary information as thoroughly as the method shown in Figure FIGREF4, since the review and the summary are encoded using two separate encoders. To address this issue, we further investigate a joint encoder for review and summary, which is demonstrated in Figure FIGREF4. The model works by jointly encoding the review and the summary in a multi-layer structure, incrementally updating the representation of the review by consulting the summary representation at each layer. 
As shown in Figure FIGREF5, our model consists of a summary encoder, a hierarchically-refined review encoder and an output layer. The review encoder is composed of multiple attention layers, each consisting of a sequence encoding layer and an attention inference layer. Summary information is integrated into the representation of the review content at each attention layer, thus, a more abstract review representation is learned in subsequent layers based on a lower-layer representation. This mechanism allows the summary to better guide the representation of the review in a bottom-up manner for improved sentiment classification. We evaluate our proposed model on the SNAP (Stanford Network Analysis Project) Amazon review datasets BIBREF8, which contain not only reviews and ratings, but also golden summaries. In scenarios where there is no user-written summary for a review, we use pointer-generator network BIBREF9 to generate abstractive summaries. Empirical results show that our model significantly outperforms all strong baselines, including joint modeling, separate encoder and joint encoder methods. In addition, our model achieves new state-of-the-art performance, attaining 2.1% (with generated summary) and 4.8% (with golden summary) absolutely improvements compared to the previous best method on SNAP Amazon review benchmark. <<</Introduction>>> <<<Related Work>>> The majority of recent sentiment analysis models are based on either convolutional or recurrent neural networks to encode sequences BIBREF10, BIBREF11. In particular, attention-based models have been widely explored, which assign attention weights to hidden states to generate a representation of the input sequence. A hierarchical model with two levels of attention mechanisms was proposed for document classification BIBREF12. Self-attention mechanism has also been used in sentiment analysis BIBREF13, BIBREF14. However, BIBREF15 empirically showed that self-attention mechanism does not consistently agree with the most salient features, which means that self-attention models may suffer from attending on explicit but irrelevant sentimental words. Rationales were also introduced to sentiment analysis task. BIBREF16 proposed a unsupervised latent model that selects a rationale and then uses the rationale for sentiment analysis. A rationale-augmented CNN model BIBREF17 was proposed, which regards golden rationales as additional input and uses the probability as rationale-level attention weights to generate the final representation for text classification. There has also been work focusing on joint summarization and sentiment classification BIBREF6, BIBREF7, whose general structures are illustrated in Figure FIGREF4. These models can predict sentiment label and summary simultaneously. However, they do not encode summaries explicitly during test time, which makes their performance be limited to some extent. <<</Related Work>>> <<<Method>>> In this section, we introduce our proposed model in details. We first give the problem formulation, followed by an overview of the proposed model, and explain each layer of our model in details, before finally giving the loss function and training methods. <<<Problem Formulation>>> The input to our task is a pair $(X^w, X^s)$, where $X^w = x^w_1, x^w_2, ..., x^w_n$ is a summary and $X^s = x^s_1, x^s_2,...,x^s_m$ is a review, the task is to predict the sentiment label $y \in [1, 5]$, where 1 denotes the most negative sentiment and 5 denotes the most positive sentiment. 
$n$ and $m$ denote the size of the review and summary in the number of words, respectively. The training set is $D=\lbrace (X^w_i, X^s_i, y_i)\rbrace |_{i=1}^M$ where $M$ is the total number of training examples. <<</Problem Formulation>>> <<<Model Overview>>> Figure FIGREF5 gives the architecture of the proposed model, which consists of three modules: a summary encoder, a hierarchically-refined review encoder and an output layer. The summary encoder encodes the summary into a hidden state matrix. The review encoder consists of several layers for representing $\mathbf {x}^w$, each containing a sequence encoding sublayer and an attention inference sublayer. The sequence encoding sublayer encodes the review text as a word sequence. The attention inference layer acts as a key component, which takes the hidden states from both the original review and the summary as input calculating dot-product attention weights for original review under additional supervision from summary information. Multi-head attention BIBREF18 as well as residual connection are also adopted. The output layer predicts the potential sentiment label according to hidden states from the previous layer. <<</Model Overview>>> <<<Summary Encoder>>> Input for the summary encoder is a sequence of summary word representations $\mathbf {x}^s = \mathbf {x}^s_1, \mathbf {x}^s_2, ..., \mathbf {x}^s_m = \lbrace emb(x_1^s), ..., emb(x_m^s)\rbrace $, where $emb$ denotes a word embedding lookup table. Word representations are fed into a standard BiLSTM. We adopt a standard LSTM formulation, where a sequence of hidden states $\mathbf {h}_t$ are calculated from a sequence of $\mathbf {x}_t$($t \in [1,...,m]$). A forward left-to-right LSTM layer and a backward right-to-left LSTM yield a sequence of forward hidden states $\lbrace {\stackrel{\rightarrow }{\mathbf {h}_1^s}},...,{\stackrel{\rightarrow }{\mathbf {h}_n^s}}\rbrace $ and a sequence of backward hidden states $\lbrace {\stackrel{\leftarrow }{\mathbf {h}_1^s}},...,{\stackrel{\leftarrow }{\mathbf {h}_n^s}}\rbrace $, respectively. The two hidden states are concatenated to form a final representation: We then apply an average-pooling operation over the hidden and take $\mathbf {h}^s = avg\_pooling(\mathbf {h}^s_1, \mathbf {h}^s_2,...,\mathbf {h}^s_n)$ as the final representation of summary text. <<</Summary Encoder>>> <<<Hierarchically-Refined Review Encoder>>> The hierarchically-refined review encoder consists of several review encoder layers, each of which is composed of a sequence encoding layer and an attention inference layer. <<<Sequence Encoding Layer>>> Given a review $\mathbf {x}^w = \lbrace emb(x_1^w),...,emb(x_n^w)\rbrace $, another BiLSTM is adopted (the same equation with different parameters compared to the one used in the summary encoder), deriving a sequence of review hidden states $\mathbf {H}^w=\lbrace \mathbf {h}^w_1, \mathbf {h}^w_2,...,\mathbf {h}^s_n \rbrace $. <<</Sequence Encoding Layer>>> <<<Attention Inference Layer>>> In the attention inference layer, we model the dependencies between the original review and the summary with multi-head dot-product attention.Each head produces an attention matrix $\mathbf {\alpha } \in \mathbb {R}^{d_h \times 1}$ consisting of a set of similarity scores between the hidden state of each token of the review text and the summary representation. 
The hidden state outputs are calculated by where $\mathbf {W}_i^Q \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$, $\mathbf {W}_i^K \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$ and $\mathbf {W}_i^V \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$ are model parameters. $Q$, $K$ and $V$ represent Query, Key and Value, respectively. $k$ is the number of parallel heads and $i \in [1,k]$ indicates which head is being processed. Following BIBREF18, we adopt a residual connection around each attention inference layer, followed by layer normalization BIBREF19 : $\mathbf {H}$ is then fed to the subsequent sequence encoding layer as input, if any. According to the equations of standard LSTM and Equation DISPLAY_FORM13, tokens of the original review that are the most relevant to the summary are focused on more by consulting summary representation. The hidden states $\mathbf {H}^{w,s}$ are thus a representation matrix of the review text that encompass key features of summary representation. Multi-head attention mechanism ensures that multi-faced semantic dependency features can be captured during the process, which is beneficial for scenarios where several key points exist in one review. Note also that our design of the review encoding part of the hierarchically-refined attention network is similar to the Transformer architecture in the use of multi-head attention, residual connection and layer normalization BIBREF18. However, our experiments show that bi-directional LSTM works better compared to self-attention network as a basic layer structure. This may result from the fact that Transformer requires a larger amount of training data for the most effectiveness. <<</Attention Inference Layer>>> <<</Hierarchically-Refined Review Encoder>>> <<<Output Layer>>> Finally, global average pooling is applied after the previous layer, and then followed by a classifier layer: where $\hat{y}$ is the predicted sentiment label; $\mathbf {W}$ and $\mathbf {b}$ are parameters to be learned. <<</Output Layer>>> <<<Training>>> Given a dataset $D={\lbrace (X^w_t,X^s_t,y_t)\rbrace }|^{|T|}_{t=1}$, our model can be trained by minimizing the cross-entropy loss between where $\mathbf {p}^{y_t}$ denotes the value of the label in $\mathbf {p}$ that corresponds to $y_t$. <<</Training>>> <<</Method>>> <<<Experiments>>> We compare our model with several strong baselines and previous state-of-the-art methods, investigating its main effects. <<<Datasets>>> We empirically compare different methods using Amazon SNAP Review Dataset BIBREF20, which is a part of Stanford Network Analysis Project. The raw dataset consists of around 34 millions Amazon reviews in different domains, such as books, games, sports and movies. Each review mainly contains a product ID, a piece of user information, a plain text review, a review summary and an overall sentiment rating which ranges from 1 to 5. The statistics of our adopted dataset is shown in Table TABREF20. For fair comparison with previous work, we adopt the same partitions used by previous work BIBREF6, BIBREF7, which is, for each domain, the first 1000 samples are taken as the development set, the following 1000 samples as the test set, and the rest as the training set. <<</Datasets>>> <<<Experimental Settings>>> We use GloVe BIBREF22 300-dimensional embeddings as pretrained word vectors. A LSTM hidden size of 256 and four heads for multi-head attention mechanism are adopted. 
We use Adam BIBREF23 to optimize our model, with an initial learning rate of 0.0003, a decay rate of 0.97, momentum parameters $\beta _1 = 0.9$, $\beta _2 = 0.999$, and $\epsilon = 1 \times 10^{-8}$. The dropout rate is set depending on the size of each dataset: 0.5 for both Toys & Games and Sports & Outdoors, and 0.2 for Movies & TV. We conduct experiments with both golden summaries and generated summaries. For generating automatically-decoded summaries, we train a pointer-generator network (PG-Net) with coverage mechanism BIBREF9, which is a specially designed sequence-to-sequence attention-based model that can generate the summary by copying words from the text document or generating words from a fixed vocabulary set at the same time. We generally follow the experimental settings in the original paper except for some minor adjustments made specifically for our datasets. Note that in our work PG-Net can be replaced by any other summarization model. <<</Experimental Settings>>> <<<Baselines>>> <<<HSSC @!START@BIBREF6@!END@.>>> This model adopts encoder parameter sharing for joint sentiment classification and summarization. It predicts the sentiment label using a highway layer, concatenating the hidden state of the summary decoder and the original text representation from the encoder. <<</HSSC @!START@BIBREF6@!END@.>>> <<<SAHSSC @!START@BIBREF7@!END@.>>> This work also adopts encoder parameter sharing for joint sentiment classification and summarization. They use two separate BiLSTMs with a self-attention mechanism for generating review and summary representations. <<</SAHSSC @!START@BIBREF7@!END@.>>> <<<BiLSTM+Pooling.>>> For this baseline, we use a BiLSTM with hidden sizes of 256 in both directions, and average pooling across all hidden states to form the representation. This method serves as a naive baseline for making use of both review and summary in sentiment classification. It can also be used to compare the effectiveness of the review itself, the summary itself and the combination of both when used as inputs to the problem. <<</BiLSTM+Pooling.>>> <<<BiLSTM+Self-attention @!START@BIBREF13@!END@.>>> This baseline uses a BiLSTM with a hidden size of 256 in both directions. On top of the BiLSTM, self-attention is used to provide a set of summation weight vectors for the final representation. This method is conceptually simple yet gives state-of-the-art results for many classification and text matching tasks. Its main difference from our model lies in the fact that attention is performed only on the top hidden layer in this method, yet in every layer in ours. <<</BiLSTM+Self-attention @!START@BIBREF13@!END@.>>> <<<BiLSTM+Hard Attention>>> To demonstrate the effectiveness of our model structure, we also adopt hard attention BIBREF24 for comparison, which is supervised using an extractive summarization objective. In particular, words in the original review that match the corresponding summary are treated as the extractive summary, in their original order. In the case of Figure FIGREF3, the extractive summary for the review is “James Cameron's Titanic is easily the most overrated film in history”, which corresponds to the user-written summary “James Cameron's 1997 Titanic is easily the most overrated film in history!”. The model also calculates an additional loss between the attention weights and the extractive summary labels, so that the hard attention weights are trained to strictly follow the extractive summary.
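One simple way to construct the extractive supervision used by this hard-attention baseline is sketched below: review tokens that also occur in the user-written summary are labeled 1 (in their original order) and all other tokens 0, and these labels can then supervise the attention weights. The whitespace tokenization and the case-insensitive exact-match rule are simplifying assumptions, not necessarily the authors' exact matching procedure.

def extractive_labels(review_tokens, summary_tokens):
    """Return a 0/1 label per review token: 1 if the token also appears in the summary."""
    summary_vocab = {tok.lower() for tok in summary_tokens}
    return [1 if tok.lower() in summary_vocab else 0 for tok in review_tokens]

# Example based on Figure FIGREF3 (abbreviated):
review = "James Cameron 's Titanic is easily the most overrated film in history".split()
summary = "James Cameron 's 1997 Titanic is easily the most overrated film in history !".split()
print(extractive_labels(review, summary))   # -> [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]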
For baselines that adopt the separate encoder structure, we calculate the representations of the review and the summary separately with two encoders that hold their own parameters, and then concatenate the two representations along the hidden-size dimension. For the joint encoder baselines, we first concatenate the review and summary text, and then encode the concatenated text with a single encoder. <<</BiLSTM+Hard Attention>>> <<</Baselines>>> <<<Development Experiments>>> We use the Toys & Games development set to investigate different key configurations of our model. The results are shown in Table TABREF29. <<<Self-attention Baseline>>> We compare different numbers of BiLSTM layers and hidden sizes in BiLSTM self-attention. As can be seen, a stacked BiLSTM with more layers does not give better results, and a larger hidden size does not give better results compared to a hidden size of 256 either. <<</Self-attention Baseline>>> <<<Hidden Size>>> We see an evident improvement of our model when the hidden size increases from 128 to 256. However, the improvement becomes relatively small compared to the large increase in the number of parameters when the hidden size is further increased to 360. Therefore, we adopt 256 as the hidden size in our experiments. <<</Hidden Size>>> <<<Number of Layers>>> As Table TABREF29 shows, the accuracy increases when increasing the number of layers from 1 to 2. More layers do not increase the accuracy on the development set. We thus set 2 as the number of review encoder layers in the experiments. The size of the best-performing model is comparable to that of BiLSTM self-attention, demonstrating that the number of parameters is not the key factor in model performance. <<</Number of Layers>>> <<</Development Experiments>>> <<<Results>>> Table TABREF34 and Table TABREF35 show the final results. Our model outperforms all the baseline models and the top-performing previous models with both generated summaries and golden summaries, for all three datasets. In the scenario where golden summaries are used, BiLSTM+self-attention performs the best among all the baselines, which shows that attention is a useful way to integrate summary and review information. Hard attention receives more supervision than soft attention, through supervision signals from extractive summaries. However, it underperforms the soft attention model, which indicates that the most salient words for sentiment classification may not strictly overlap with extractive summaries. This justifies the importance of user-written or automatically generated summaries. A comparison between models that use summary information and those that do not shows that the review summary is useful for sentiment classification. In addition, the same models work consistently better when the user-written gold summary is used compared to a system-generated summary, which is intuitively reasonable since current state-of-the-art abstractive summarization models are far from perfect. Interestingly, as shown in the second section of the table, the gold summary itself does not lead to better sentiment accuracy compared with the review itself, which shows that summaries serve better as auxiliary information sources to review contents. With both gold summaries and automatically generated summaries, our model gives better results compared to BiLSTM+self-attention.
The latter integrates information from reviews and summaries only in the top representation layer, which is also the standard practice in question answering BIBREF25 and machine translation BIBREF26 models. In contrast, our model integrates summary information into the review representation in each layer, thereby allowing the integrated representation to be hierarchically refined, leading to more abstract hidden states. Finally, the fact that, with gold summaries, our baseline and final models outperform the state-of-the-art joint-training methods shows the importance of making use of user-written summaries when they are available. Even with system summaries, our models still outperform HSSC and SAHSSC, showing that our network is more effective than parameter sharing under the same setting without input summaries. <<<Review Length>>> Figure FIGREF37 shows line graphs of the accuracy of BiLSTM+self-attention, BiLSTM+pooling and our model against the review length. As the review length increases, the performance of all models decreases. BiLSTM+self-attention does not outperform BiLSTM+pooling on long text. Our method gives better results compared to the two baseline models for long reviews, demonstrating that our model is effective at capturing long-term dependencies. This is likely because hierarchically-refined attention maintains the most salient information while ignoring the redundant parts of the original review text. Our model can thus be more robust when the review has irrelevant sentimental words, which usually exist in longer reviews such as the example in Figure FIGREF3. The hierarchical architecture allows the lower layers to encode local information, while the higher layers can capture long-term dependencies and thus better encode global information. <<</Review Length>>> <<<Case Study>>> Our model has a natural advantage in interpretability thanks to the use of the attention inference layer. We visualize the hierarchically-refined attention of two samples from the test set of Toys & Games. We also visualize the self-attention distribution for fair comparison. To make the visualizations clear and to avoid confusion, we choose to visualize the most salient parts, by rescaling all attention weights into an interval of $[0, 100]$ and adopting 50 as a threshold for attention visualization, showing only attention weights $\ge 50$. As shown in Figure FIGREF38, the example with a generated summary has 5 stars as its golden rating score. The summary text is “fun for the whole new game in all ages ! ! ! fun ! ! !”, which suggests that the game is (1) fun (from the word “fun”) and (2) not difficult to learn (from the phrase “all ages”). It can be seen that both the self-attention model and the first layer of our model attend to the strongly positive phrase “quite fun”, which is relevant to the word “fun” in the summary. In comparison, the second layer attends to the phrase “much easier”, which is relevant to the phrase “in all ages” in the summary. This verifies our model's effectiveness in leveraging abstractive summary information. Figure FIGREF38 illustrates a 5-star-rating example with a golden summary. The summary text is “Favorite Game to Teach to Newbies”. As shown in the heatmap, self-attention can only attend to some general sentimental words, such as “hard”, “fun”, “immensely” and “most”, which deviates from the main idea of the document text. In comparison, the first layer of our model attends to phrases like “easy to teach”, which is a perfect match of the phrase “teach to newbies” in the summary.
This shows that the shallow sequence inference layer can learn direct similarity matching information under the supervision of summarization. In addition, the second layer of our model attends to phrases including “would recommend this to anyone", which links to “easy to teach" and “Teach to Newbies", showing that the deeper sequence inference layer of our model can learn potential connections between the review and the summary. <<</Case Study>>> <<</Results>>> <<</Experiments>>> <<<Conclusion>>> We investigated a hierarchically-refined attention network for better sentiment prediction. Our model allows multi-interaction between summary and review representation in a hierarchical manner. Empirical results show that the proposed method outperforms all strong baselines and previous work and achieves new state-of-the-art performance on SNAP Amazon Review dataset. <<</Conclusion>>> <<</Title>>>
{ "references": [ "SNAP (Stanford Network Analysis Project)" ], "type": "extractive" }
1910.13890
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What are the three languages studied in the paper? Context: <<<Title>>> A Latent Morphology Model for Open-Vocabulary Neural Machine Translation <<<Abstract>>> Translation into morphologically-rich languages challenges neural machine translation (NMT) models with extremely sparse vocabularies where atomic treatment of surface forms is unrealistic. This problem is typically addressed by either pre-processing words into subword units or performing translation directly at the level of characters. The former is based on word segmentation algorithms optimized using corpus-level statistics with no regard to the translation task. The latter learns directly from translation data but requires rather deep architectures. In this paper, we propose to translate words by modeling word formation through a hierarchical latent variable model which mimics the process of morphological inflection. Our model generates words one character at a time by composing two latent representations: a continuous one, aimed at capturing the lexical semantics, and a set of (approximately) discrete features, aimed at capturing the morphosyntactic function, which are shared among different surface forms. Our model achieves better accuracy in translation into three morphologically-rich languages than conventional open-vocabulary NMT methods, while also demonstrating a better generalization capacity under low to mid-resource settings. <<</Abstract>>> <<<Introduction>>> Neural machine translation (NMT) systems are conventionally trained based on the approach of maximizing the log-likelihood on a training corpus in order to learn distributed representations of words according to their sentence context, which is highly demanding in terms of training data as well as the network capacity. Under conditions of lexical sparsity, which may include the cases when the amount of training examples is insufficient to observe words in different context, and particularly in translation of morphologically-rich languages, where the same word can have exponentially many different surface realizations due to syntactic conditions, which are often rarely or ever observed in any set of collected examples, the model may suffer in learning accurate representations of words. The standard approach to overcome this limitation is to replace the word representations in the model with subword units that are shared among words, which are, in principle, more reliable as they are observed more frequently in varying context BIBREF0, BIBREF1. One drawback related to this approach, however, is that the estimation of the subword vocabulary relies on word segmentation methods optimized using corpus-dependent statistics, disregarding any linguistic notion and the translation objective, which may result in morphological errors during splitting, resulting in subword units that are semantically ambiguous as they might be used in far too many lexical contexts BIBREF2. Moreover, the words are generated predicting multiple subword units, which makes generalizing to unseen word forms more difficult, where some of the subword units that could be used to reconstruct a given word may be unlikely in the given context. 
To alleviate the sub-optimal effects of using explicit segmentation and generalize better to new morphological forms, recent studies explored the idea of extending the same approach to model translation directly at the level of characters BIBREF3, BIBREF4, which, in turn, have demonstrated the requirement of using comparably deeper networks, as the network would then need to learn longer distance grammatical dependencies BIBREF5. In this paper, we explore the benefit of explicitly modeling variations in the surface forms of words using methods from deep latent variable modeling in order to improve the translation accuracy in low-resource and morphologically-rich languages. Latent variable models allow us to inject inductive biases relevant to the task, which, in our case, is word formation, and we believe that follows a certain hierarchical procedure. Our model translates words one character at a time based on word representations learned compositionally from sub-lexical components, which are parameterized by a hierarchical latent variable model mimicking the process of morphological inflection, consisting of a continuous-space dense vector capturing the lexical semantics, and a set of (approximately) discrete features, representing the morphosyntactic role of the word in a given sentence. Each word representation during decoding is reformulated based on the shared latent morphological features, aiding in learning more reliable representations of words under sparse settings by generalizing across their different surface forms. We evaluate our method in translating English into three morphologically-rich languages each with a distinct morphological typology: Arabic, Czech and Turkish, and show that our model is able to obtain better translation accuracy and generalization capacity than conventional approaches to open-vocabulary NMT. <<</Introduction>>> <<<Evaluation>>> <<<Models>>> We evaluate our model by comparing it in machine translation against three baselines which constitute the conventional open-vocabulary NMT methods, including architectures using atomic parameterization either with subword units segmented with BPE BIBREF0 or characters, and the hierarchical parameterization method employed for generating all words in the output. We implement all architectures using Pytorch BIBREF6 within the OpenNMT-py framework BIBREF7. <<</Models>>> <<<Data and Languages>>> In order to evaluate our model we design two sets of experiments. The experiments in §SECREF8 aim to evaluate different methods under low-resource settings, for languages with different morphological typology. We model the machine translation task from English into three languages with distinct morphological characteristics: Arabic (templatic), Czech (fusional), and Turkish (agglutinative). We use the TED Talks corpora BIBREF8 for training the NMT models for these experiments. In §SECREF10, we conduct more experiments in Turkish to demonstrate the case of increased data sparsity using multi-domain training corpora, where we extend the training set using corpora from EU Bookshop BIBREF9, Global Voices, Gnome, Tatoeba, Ubuntu BIBREF10, KDE4 BIBREF11, Open Subtitles BIBREF12 and SETIMES BIBREF13. The statistical characteristics of the training sets are given in Tables TABREF16 and TABREF17. We use the official evaluation sets of the IWSLT for validating and testing the accuracy of the models. 
In order to increase the number of unknown and rare words in the evaluation sets we measure accuracy on large test sets combining evaluation sets from many years (Table TABREF18 presents the evaluation sets used for development and testing). The accuracy of each model output is measured using BLEU BIBREF15 and chrF3 BIBREF16 metrics, whereas the significance of the improvements are computed using bootstrap hypothesis testing BIBREF17. <<</Data and Languages>>> <<<Training Settings>>> All models are implemented using gated recurrent units (GRU) BIBREF18, and have a single-layer bi-RNN encoder. The source sides of the data used for training all NMT models, and the target sides of the data used in training the subword-level NMT models are segmented using BPE with 16,000 merge rules. We implement all decoders using a comparable number of GRU parameters, including 3-layer stacked-GRU subword and character-level decoders, where the attention is computed after the 1st layer BIBREF19 and a 3-layer hierarchical decoder which implements the attention mechanism after the 2nd layer. All models use an embedding dimension and GRU size of 512. The latent morphology model uses the same hierarchical GRU architecture, where the middle layer is augmented using 4 multi-layer perceptrons with 256 hidden units. We use a lemma vector dimension of 150, 10 inflectional features (See §SECREF21 for experiments conducted to tune the feature dimensions) and set the regularization constant to $\rho =0.4$. All models are trained using the Adam optimizer BIBREF20 with a batch size of 100, dropout rate of 0.2, learning rate of 0.0004 and learning rate decay of 0.8, applied when the perplexity does not decrease at a given epoch. Translations are generated with beam search with a beam size of 5, where the hierarchical models implement the hierarchical beam search BIBREF21. <<</Training Settings>>> <<<Results>>> <<<The Effect of Morphological Typology>>> The experiment results given in Table TABREF9 shows the performance of each model in translating English into Arabic, Czech and Turkish. In Turkish, the most sparse target language in our benchmark, using character-based decoding shows to be more advantageous compared to the subword-level and hierarchical models, due to the fact that reduced granularity in the vocabulary units might aid in better predicting words under conditions of high data sparsity. In Arabic, on the other hand, using a hierarchical decoding model shows to be advantageous compared to the character-level decoder, as it might be useful in better learning syntactic dependencies, whereas it also outperforms the subword-level decoder. Using the latent morphology model provides improvements of 0.51 and 0.30 BLEU points in Arabic and Turkish over the best performing baselines, respectively. The fact that our model can efficiently work in both Arabic and Turkish suggests that it can handle the generation of both concatenative and non-concatenative morphological transformations. The results in the English-to-Czech translation direction do not indicate a specific advantage of using either method for generating fusional morphology, where morphemes are already optimized at the surface level, although our model is still able to achieve translation accuracy comparable to the character-level model. 
<<</The Effect of Morphological Typology>>> <<<The Effect of Data Size>>> The experiment conducted in the English-to-Turkish translation direction by increasing the amount of training data with multi-domain corpora demonstrates a more challenging case, where there is a greater possibility of observing rare words, either in the form of morphological inflections due to the complex agglutinative morphology of Turkish, or ambiguous terminology raising from the multi-domain characteristics. In this experiment, the character-level model experiences a drop in performance and its accuracy is much lower than the subword-level one, suggesting that its capacity cannot cope with the increased amount of sparsity. Empirical results suggest that with increased capacity, character-level models carry the potential to reach comparable performance to subword-level models BIBREF4. Our model reaches a much larger improvement of 0.82 BLEU points over the subword-level and 2.54 BLEU points over the character-level decoders, suggesting that it could make use of the increased sparsity in learning more accurate representations. <<</The Effect of Data Size>>> <<<Predicting Unseen Words>>> In addition to general evaluation using automatic metrics, we perform a more focused analysis to illustrate the performance of different methods in predicting unseen words. We sample the sentences from the development sets which contain out-of-vocabulary words, and compute the average perplexity per character on these sentences using different NMT models, as suggested by BIBREF22. In general, the highest perplexities are obtained using the subword-based model, suggesting that generating unseen words using subword units is indeed increasing the difficulty of prediction, compared to the character-level which obtains the lowest perplexity. This result indicates that increased granularity aids in reducing the uncertainty during prediction. Similar to the results in §SECREF8, in Czech the values are almost comparable. Due to its stochastic nature, our model yields higher perplexity values compared to the hierarchical model, whereas the values range between subword and character-based models, possibly finding an optimal level of granularity between the two solutions. <<</Predicting Unseen Words>>> <<<Feature Variations>>> In order to understand whether the latent inflectional features in fact capture information about variations related to morphological transformations, we try generating different surface forms of the same lemma by assigning different values to the inflectional features. We use the latent morphology model based decoder to translate the English word `go', and after sampling the lemma, we fix its value and vary the values of the inflectional features at random positions for generating different outputs. Table TABREF14 presents different sets of feature values and the corresponding outputs generated by the decoder. The model generates different surface forms for different sets of features, confirming that latent variables encode information related to the infinitive form of the verb, as well as its formality conditions, prepositions, person, number and tense. We also observe that many trials based on different feature combinations may result in the same outputs, although some feature values may not be set in a single-word context. Varying the features individually does not necessarily yield distinct changes in the output, suggesting that some features may act jointly in determining the word form. 
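The probing procedure above can be sketched as follows: the sampled lemma vector is held fixed, individual (approximately binary) inflectional features are flipped, and the character-level decoder regenerates the surface form for each trial. The decoder interface and helper names below are hypothetical, and the 150-dimensional lemma and 10 inflectional features simply mirror the training settings; this is an illustrative sketch, not the authors' code.

import torch

def vary_inflections(char_decoder, lemma, features, positions):
    """Regenerate surface forms of one lemma under perturbed inflectional features.

    char_decoder: assumed callable mapping (lemma, features) -> decoded character string
    lemma:        fixed continuous lemma vector, e.g. a tensor of shape (150,)
    features:     base inflectional feature vector, e.g. a tensor of shape (10,) in [0, 1]
    positions:    feature indices to flip, one trial per index
    """
    trials = []
    for pos in positions:
        probe = features.clone()
        probe[pos] = 1.0 - probe[pos]              # flip one approximately-discrete feature
        surface = char_decoder(lemma, probe)       # decode characters with the new features
        trials.append((pos, probe.tolist(), surface))
    return trials

# Hypothetical usage for the English source word "go":
# lemma, features = sample_latents(encoder_state)          # assumed sampling helpers
# for pos, feats, form in vary_inflections(decoder, lemma, features, range(10)):
#     print(pos, feats, form)                              # e.g. different Turkish surface forms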
<<</Feature Variations>>> <<</Results>>> <<</Evaluation>>> <<<Conclusion>>> In this paper we presented a novel decoding architecture for NMT employing a hierarchical latent variable model to promote sparsity in lexical representations, which demonstrated promising applicability to morphologically-rich and low-resource languages. Our model generates words one character at a time by composing two latent features representing their lemmas and inflectional features. We evaluate our model against conventional open-vocabulary NMT solutions, such as subword and character-level decoding methods, in translating English into three morphologically-rich languages with different morphological typologies under low- to mid-resource settings. Our results show that our model can significantly outperform subword-level NMT models, while demonstrating better capacity than character-level models in coping with increased amounts of data sparsity. We also conduct ablation studies on the effect of feature variations on the predictions, which show that, despite being completely unsupervised, our model can in fact capture morphosyntactic information and generalize to different surface forms of words. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Arabic, Czech and Turkish" ], "type": "extractive" }
1909.01492
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: Which dataset do they use? Context: <<<Title>>> Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation <<<Abstract>>> Neural networks are part of many contemporary NLP systems, yet their empirical successes come at the price of vulnerability to adversarial attacks. Previous work has used adversarial training and data augmentation to partially mitigate such brittleness, but these are unlikely to find worst-case adversaries due to the complexity of the search space arising from discrete text perturbations. In this work, we approach the problem from the opposite direction: to formally verify a system's robustness against a predefined class of adversarial attacks. We study text classification under synonym replacements or character flip perturbations. We propose modeling these input perturbations as a simplex and then using Interval Bound Propagation -- a formal model verification method. We modify the conventional log-likelihood training objective to train models that can be efficiently verified, which would otherwise come with exponential search complexity. The resulting models show only little difference in terms of nominal accuracy, but have much improved verified accuracy under perturbations and come with an efficiently computable formal guarantee on worst case adversaries. <<</Abstract>>> <<<Introduction>>> Deep models have been shown to be vulnerable against adversarial input perturbations BIBREF0, BIBREF1. Small, semantically invariant input alterations can lead to drastic changes in predictions, leading to poor performance on adversarially chosen samples. Recent work BIBREF2, BIBREF3, BIBREF4 also exposed the vulnerabilities of neural NLP models, e.g. with small character perturbations BIBREF5 or paraphrases BIBREF6, BIBREF7. These adversarial attacks highlight often unintuitive model failure modes and present a challenge to deploying NLP models. Common attempts to mitigate the issue are adversarial training BIBREF5 and data augmentation BIBREF3, BIBREF8, which lead to improved accuracy on adversarial examples. However, this might cause a false sense of security, as there is generally no guarantee that stronger adversaries could not circumvent defenses to find other successful attacks BIBREF9, BIBREF10, BIBREF11. Rather than continuing the race with adversaries, formal verification BIBREF12, BIBREF13, BIBREF14 offers a different approach: it aims at providing provable guarantees to a given model specification. In the case of adversarial robustness, such a specification can be formulated as prediction consistency under any altered – but semantically invariant – input change. In this paper, we study verifiable robustness, i.e., providing a certificate that for a given network and test input, no attack or perturbation under the specification can change predictions, using the example of text classification tasks, Stanford Sentiment Treebank (SST) BIBREF15 and AG News BIBREF16. The specification against which we verify is that a text classification model should preserve its prediction under character (or synonym) substitutions in a character (or word) based model. We propose modeling these input perturbations as a simplex and then using Interval Bound Propagation (IBP) BIBREF17, BIBREF18, BIBREF19 to compute worst case bounds on specification satisfaction, as illustrated in Figure FIGREF1. 
Since these bounds can be computed efficiently, we can furthermore derive an auxiliary objective for models to become verifiable. The resulting classifiers are efficiently verifiable and improve robustness on adversarial examples, while maintaining comparable performance in terms of nominal test accuracy. The contributions of this paper are twofold: To the best of our knowledge, this paper is the first to introduce verification and verifiable training for neural networks in natural language processing (§SECREF3). Through a series of experiments (§SECREF4), we demonstrate (a) the effectiveness of modeling input perturbations as a simplex and using simplex bounds with IBP for training and testing, (b) the weakness of adversarial training under exhaustive verification, (c) the effects of perturbation space on the performance of different methods, and (d) the impact of using GloVe and counter-fitted embeddings on the IBP verification bounds. <<</Introduction>>> <<<Related Work>>> <<<Adversarial Examples in NLP.>>> Creating adversarial examples for NLP systems requires identifying semantically invariant text transformations to define an input perturbation space. In this paper, given our specification, we study word- and character-level HotFlip attacks BIBREF5 – which consist of character and synonym replacements – on text classification tasks. We compare our verifiable approach to other defenses including adversarial training BIBREF20 and data augmentation BIBREF8, BIBREF3. Note that some existing adversarial perturbations such as syntactically controlled paraphrasing BIBREF7, exploiting backtranslation systems BIBREF6, or using targeted keyword attack BIBREF21 are beyond the specification in this paper. <<</Adversarial Examples in NLP.>>> <<<Formal Verification of Neural Networks.>>> Formal verification provides a provable guarantee that models are consistent with a specification for all possible model inputs. Previous work can be categorised into complete methods that use Mixed-Integer Programming (MIP) BIBREF22, BIBREF23 or Satisfiability Modulo Theory (SMT) BIBREF14, BIBREF24, and incomplete methods that solve a convex relaxation of the verification problem BIBREF25, BIBREF26, BIBREF27. Complete methods perform exhaustive enumeration to find the worst case. Hence, complete methods are expensive and difficult to scale, though they provide exact robustness bounds. Incomplete methods provide loose robustness bounds, but can be more scalable and used inside the training loop for training models to be robust and verifiable BIBREF28, BIBREF26, BIBREF19, BIBREF17. Our work is the first to extend incomplete verification to text classification, considering input perturbations on a simplex and minimising worst case bounds to adversarial attacks in text classification. We highlight that the verification of neural networks is an extremely challenging task, and that scaling complete and incomplete methods to large models remains an open challenge. <<</Formal Verification of Neural Networks.>>> <<<Representations of Combinatorial Spaces.>>> Word lattices and hypergraphs are data structures that have often been used to efficiently represent and process exponentially large numbers of sentences without exhaustively enumerating them. Applications include automatic speech recognition (ASR) output rescoring BIBREF29, machine translation of ASR outputs BIBREF30, paraphrase variants BIBREF31, and word segmentation alternatives BIBREF32. 
The specifications used to characterise the space of adversarial attacks are likewise a compact representation, and the algorithms discussed below operate on them without exhaustive enumeration. <<</Representations of Combinatorial Spaces.>>> <<</Related Work>>> <<<Methodology>>> We assume a fixed initial vector representation $\mathbf {z} _0$ of a given input sentence $z$ (e.g. the concatenation of pretrained word embeddings) and use a neural network model, i.e. a series of differentiable transformations $h_k$: where $\mathbf {z} _k$ is the vector of activations in the $k$-th layer and the final output $\mathbf {z} _K$ consists of the logits for each class. Typically each $h_k$ will be an affine transformation followed by an activation function (e.g. ReLU or sigmoid). The affine transformation can be a convolution (with the inputs and outputs having an implied 2D structure) of a vector of activations at each point in a sequence; in what follows these activations will be concatenated along the sequence to form a vector $\mathbf {z} _k$. <<<Verification>>> Verification is the process of examining whether the output of a model satisfies a given specification. Formally, this means establishing whether the following holds true for a given normal model input $\mathbf {x} _0$: $\forall \mathbf {z} _0 \in \mathcal {X}_\mathrm {in}(\mathbf {x} _0):~ \mathbf {z} _K \in \mathcal {X}_\mathrm {out}$, where $\mathcal {X}_\mathrm {out}$ characterizes a constraint on the outputs, and $\mathcal {X}_\mathrm {in}(\mathbf {x} _0)$ defines a neighbourhood of $\mathbf {x} _0$ throughout which the constraint should be satisfied. In our concrete use case, we consider a specification of robustness against adversarial attacks which are defined by bounded input perturbations (synonym flips up to $\delta $ words, or character flips up to $\delta $ characters) of the original sentence $x$. The attack space $\mathcal {X}_\mathrm {in} (\mathbf {x} _0)$ is the set of vector representations (embeddings) of all such perturbed sentences. Denoting by $z_{K,y}$ the logit of label $y$, we formulate the output constraint that for all classes $y: z_{K,y_\textrm {true}} \ge z_{K,y}$. This specification establishes that the prediction of all perturbed sentences $\mathbf {z} _0 \in \mathcal {X}_\mathrm {in}(\mathbf {x} _0)$ should correspond to the correct label $y_\textrm {true}$. This specification may equivalently be formulated as a set of half-space constraints on the logits: for each class $y$ where $\mathbf {e}_{i}$ is a one-hot vector with 1 in the $i$-th position. In other words, the true class logit should be greater or equal than those for all other classes $y$, which means the prediction remains constant. <<</Verification>>> <<<Verification as Optimisation>>> Verifying the specification in Eq. (DISPLAY_FORM10) can be done by solving the following constrained optimisation problem to find the input that would most strongly violate it: where $\mathbf {c} $ is a vector with entries $c_y = 1$, $c_{y_\textrm {true}} = -1$ and 0 everywhere else. If the optimal value of the above optimisation problem is smaller than 0, then the specification in Eq. (DISPLAY_FORM10) is satisfied, otherwise a counter-example has been found. In our case, this corresponds to a successful adversarial attack. 
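When the perturbation set is finite and small, the optimisation in Eq. (DISPLAY_FORM12) can be solved exactly by enumeration: evaluate every perturbed input and check whether the largest competing logit ever exceeds the true-class logit. The sketch below assumes a generic PyTorch classifier that maps input representations to logits; it corresponds to the exhaustive (oracle) check discussed later, not to the IBP relaxation.

import torch

@torch.no_grad()
def exhaustive_verify(model, perturbed_inputs, y_true):
    """Brute-force check of the specification over a finite perturbation set.

    perturbed_inputs: tensor of shape (num_perturbations, ...), the representations z_0
                      of all perturbed sentences, including the unperturbed one.
    Returns (verified, worst_margin); margin < 0 certifies robustness for this example.
    """
    logits = model(perturbed_inputs)                  # (num_perturbations, num_classes)
    rivals = logits.clone()
    rivals[:, y_true] = float("-inf")                 # exclude the true class
    worst_margin = (rivals.max(dim=1).values - logits[:, y_true]).max()   # max of c^T z_K
    return bool(worst_margin < 0), worst_margin.item()

Because the number of valid perturbations grows exponentially with $\delta $, this check quickly becomes infeasible, which is precisely what motivates the simplex relaxation and IBP below.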
<<</Verification as Optimisation>>> <<<Modeling Input Perturbations using Simplices>>> In the interests of computational feasibility, we will actually attempt to verify the specification on a larger, but more tractable input perturbation space $\bar{\mathcal {X}}_\mathrm {in} \supseteq \mathcal {X}_\mathrm {in}$. Any data point that is verifiable on this larger input perturbation space is necessarily verifiable with respect to the original specification. In the domain of image classification, $\mathcal {X}_\mathrm {in}$ is often modeled as an $L_\infty $-ball, corresponding to input perturbations in which each pixel may be independently varied within a small interval. However, using such interval bounds is unsuitable for our situation of perturbations consisting of a small number $\delta $ of symbol substitutions. Although we could construct an axis-aligned bounding box $\bar{\mathcal {X}}_\mathrm {in}$ in embedding space that encompasses all of $\mathcal {X}_\mathrm {in}$, it would over-approximate the perturbation space to such an extent that it would contain perturbations where all symbols in the sentence have been substituted simultaneously. To remedy this, we propose a tighter over-approximation in the form of a `simplex' in embedding space. We first define this for the special case $\delta =1$, in which $\mathcal {X}_\mathrm {in} = \lbrace \mathbf {x} _0\rbrace \cup \lbrace \mathbf {p} ^{(m)}_0 : 1\le m\le M\rbrace $ consists of the representations of all $M$ sentences $p^{(m)}$ derived from $x$ by performing a single synonym (or character) substitution, together with the unperturbed sentence $x$ itself. In this case we define $\bar{\mathcal {X}}_\mathrm {in}$ to be the convex hull $\mathcal {S}_1$ of $\mathcal {X}_\mathrm {in}$. Note we are not considering contextual embeddings BIBREF33 here. Each `vertex' $\mathbf {p} ^{(m)}_0$ is a sequence of embedding vectors that differs from $\mathbf {x} _0$ at only one word (or character) position. For a larger perturbation radius $\delta >1$, the cardinality of $\mathcal {X}_\mathrm {in}$ grows exponentially, so manipulating its convex hull becomes infeasible. However, dilating $\mathcal {S}_1$ centered at $\mathbf {x} _0$, scaling it up by a factor of $\delta $, yields a simplex $\mathcal {S}_\delta $ with $M+1$ vertices that contains $\mathcal {X}_\mathrm {in}$. More formally, we define a region in the input embedding space based on the $M$ `elementary' perturbations $\lbrace \mathbf {p} ^{(m)}_0: m = 1 \ldots M\rbrace $ of $\mathbf {x} _0$ defined earlier for the $\delta =1$ case. For perturbations of up to $\delta $ substitutions, we define $\bar{\mathcal {X}}_\mathrm {in}(\mathbf {x} _0)$ as the convex hull of $\lbrace \mathbf {z} ^{(m)}_0: m = 0 \ldots M\rbrace $, where $\mathbf {z} ^{(0)}_0=\mathbf {x} _0$ denotes the original (unperturbed) sentence representation and, for $m\ge 1$, $\mathbf {z} ^{(m)}_0 = \mathbf {x} _0+\delta \cdot (\mathbf {p} ^{(m)}_0-\mathbf {x} _0)$. The convex hull is an over-approximation of $\mathcal {X}_\mathrm {in}(\mathbf {x} _0)$: it contains the representations of all sentences derived from $x$ by performing up to $\delta $ substitutions at distinct word (or character) positions. <<</Modeling Input Perturbations using Simplices>>> <<<Interval Bound Propagation>>> To estimate the optimal value of the problem (DISPLAY_FORM12), given an input $\mathbf {z} _0$, we can propagate the upper/lower bounds on the activations $\mathbf {z} _k$ of each layer using interval arithmetic BIBREF17. 
We begin by computing interval bounds on the first layer's activations. Recall that any input $\mathbf {z} _0 \in \mathcal {X}_\mathrm {in}$ will lie within the convex hull of certain vertices $\lbrace \mathbf {z} ^{(m)}_0: m = 0 \ldots M\rbrace $. Then, assuming that the first layer $h_1$ is an affine transformation (e.g. linear or convolutional) followed by a monotonic activation function, the lower and upper bounds on the components $z_{1,i}$ of the first layer's activations $\mathbf {z} _1$ are as follows: Note that these bounds are efficient to compute (by passing each perturbation $\mathbf {z} ^{(m)}_0$ through the first layer); in particular there is no need to compute the convex hull polytope. For subsequent layers $k>1$, the bounds on the components $z_{k,i}$ of $\mathbf {z} _k$ are: The above optimisation problems can be solved in closed form quickly for affine layers and monotonic activation functions, as illustrated in IBP. Finally, the lower and upper bounds of the output logits $\mathbf {z} _K$ can be used to construct an upper bound on the solution of (DISPLAY_FORM12): <<<Verifiable Training.>>> The upper bound in (DISPLAY_FORM17) is fast to compute (only requires two forward passes for upper and lower bounds through the network). Hence, we can define a loss to optimise models such that the models are trained to be verifiable. Solving (DISPLAY_FORM17) is equivalent to finding the worst-case logit difference, and this is achieved when the logit of the true class is equal to its lower bound, and all other logits equal to their upper bounds. Concretely, for each class $y \ne y_\textrm {true} $: $\hat{\mathbf {z}}_{K,y}(\delta ) = \overline{\mathbf {z}}_{K,y} (\delta ) $, and $\hat{\mathbf {z}}_{K,y_\textrm {true}}(\delta ) = \underline{\mathbf {z}}_{K,y_\textrm {true}} (\delta ) $. The training loss can then be formulated as where $\ell $ is the cross-entropy loss, $\kappa $ a hyperparameter that controls the relative weights between the classification loss $L_\textrm {normal}$ and specification loss $L_\textrm {spec}$. If $\delta = 0$ then $\mathbf {z} _K = \hat{\mathbf {z}}_K(\delta )$, and thus $L$ reduces to a standard classification loss. Empirically, we found that a curriculum-based training, starting with $\kappa $=1 and linearly decreasing to 0.25, is effective for verifiable training. <<</Verifiable Training.>>> <<</Interval Bound Propagation>>> <<</Methodology>>> <<<Experiments>>> We conduct verification experiments on two text classification datasets, Stanford Sentiment Treebank (SST) BIBREF15 and AG News corpus, processed in BIBREF16. We focus on word-level and character-level experiments on SST and character-level experiments on AG News. Our specification is that models should preserve their prediction against up to $\delta $ synonym substitutions or character typos, respectively. <<<A Motivating Example>>> We provide an example from Table TABREF29 to highlight different evaluation metrics and training methods. Given a sentence, “you ' ve seen them a million times .”, that is predicted correctly (called Nominal Accuracy) by a classification model, we want to further examine whether the model is robust against character typos (e.g., up to $\delta =3$ typos) to this example. One way is to use some heuristic to search for a valid example with up to 3 typos that can change the prediction the most (called adversarial example). We evaluate the model using this adversarial example and report the performance (called Adversarial Accuracy). 
However, even if the adversarial example is predicted correctly, one can still ask: is the model truly robust against any typos (up to 3) to this example? In order to have a certificate that the prediction will not change under any $\delta =3$ character typos (called verifiably robust), we could in theory exhaustively search over all possible cases and check whether any of the predictions is changed (called Oracle Accuracy). If we only allow a character to be replaced by another character nearby on the keyboard, already for this short sentence we need to exhaustively search over 2,951 possible perturbations. To avoid this combinatorial growth, we can instead model all possible perturbations using the proposed simplex bounds and propagate the bounds through IBP at the cost of two forward passes. Following Eq. (DISPLAY_FORM12), we can check whether this example can be verified to be robust against all perturbations (called IBP-Verified Accuracy). There are also a number of ways in which the training procedure can be enhanced to improve the verifiable robustness of a model against typos to the sentence. The baseline is to train the model with the original/normal sentence directly (called Normal Training). Another way is to randomly sample typo sentences among the 2,951 possible perturbations and add these sentences to the training data (called Data Augmentation Training). Yet another way is to find, at each training iteration, the adversarial example among the (subset of) 2,951 possible perturbations that can change the prediction the most; we then use the adversarial example alongside the training example (called Adversarial Training). Finally, as simplex bounds with IBP is efficient to run, we can train a model to be verifiable by minimising Eq. (DISPLAY_FORM19) (called Verifiable Training). <<</A Motivating Example>>> <<<Baselines>>> In this section we detail our baseline models. <<<Adversarial Training.>>> In adversarial training BIBREF34, BIBREF20, the goal is to optimise the following saddle point problem: where the inner maximisation problem is to find an adversarial perturbation $\mathbf {z} _0\in \mathcal {X}_\mathrm {in}(\mathbf {x} _0)$ that can maximise the loss. In the inner maximisation problem, we use HotFlip BIBREF5 with perturbation budget $\delta $ to find the adversarial example. The outer minimisation problem aims to update model parameters such that the adversarial risk of (DISPLAY_FORM24) is minimised. To balance between the adversarial robustness and nominal accuracy, we use an interpolation weight of 0.5 between the original cross-entropy loss and the adversarial risk. <<</Adversarial Training.>>> <<<Data Augmentation Training.>>> In the data augmentation setup, we randomly sample a valid perturbation $z$ with perturbation budget $\delta $ from a normal input $x$, and minimise the cross-entropy loss given the perturbed sample $z$ (denoted as data augmentation loss). We also set the interpolation weight between the data augmentation loss and the original normal cross-entropy loss to 0.5. <<</Data Augmentation Training.>>> <<<Normal Training.>>> In normal training, we use the likelihood-based training using the normal training input $x$. <<</Normal Training.>>> <<</Baselines>>> <<<Setup>>> We use a shallow convolutional network with a small number of fully-connected layers for SST and AG News experiments. The detailed model architectures and hyperparameter details are introduced in the supplementary material. 
Although we use shallow models for ease of verifiable training, our nominal accuracy is on par with previous work such as BIBREF15 (85.4%) and BIBREF35 (84.3%) in SST and BIBREF16 (87.18%) in AG News. During training, we set the maximum number of perturbations to $\delta =3$, and evaluate performance with the maximum number of perturbations from $\delta =1$ to 6 at test time. For word-level experiments, we construct the synonym pairs using the PPDB database BIBREF36 and filter the synonyms with fine-grained part-of-speech tags using Spacy BIBREF37. For character-level experiments, we use synthetic keyboard typos from BIBREF3, and allow one possible alteration per character that is adjacent to it on an American keyboard. The allowable input perturbation space is much larger than for word-level synonym substitutions, as shown in Table TABREF48. <<</Setup>>> <<<Evaluation Metrics>>> We use the following four metrics to evaluate our models: i) test set accuracy (called Acc.), ii) adversarial test accuracy (called Adv. Acc.), which uses samples generated by HotFlip attacks on the original test examples, iii) verifiable accuracy under IBP verification (called IBP-verified), that is, the ratio of test samples for which IBP can verify that the specification is not violated, and iv) exhaustively verified accuracy (called Oracle), computed by enumerating all possible perturbations given the perturbation budget $\delta $, where a sample is verifiably robust if the prediction is unchanged under all valid perturbations. <<</Evaluation Metrics>>> <<<Results>>> Table TABREF28 shows the results of IBP training and baseline models under $\delta =3$ and $\delta =2$ perturbations on SST and AG News, respectively. Figures FIGREF31 and FIGREF36 show the character- and word-level results with $\delta $ between 1 and 6 under four metrics on the SST test set; similar figures for SST word-level (adversarial training, data augmentation) models and AG News dataset can be found in the supplementary material. <<<Oracle Accuracy and Adversarial Accuracy.>>> In Table TABREF28, comparing adversarial accuracy with exhaustive verification accuracy (oracle), we observe that although adversarial training is effective at defending against HotFlip attacks (74.9 / 76.8 / 85.5%), the oracle adversarial accuracy under exhaustive testing (25.8 / 74.6 / 81.6%) is much lower in SST-character / SST-word / AG-character level, respectively. For illustration, we show some concrete adversarial examples from the HotFlip attack in Table TABREF29. For some samples, even though the model is robust with respect to HotFlip attacks, its predictions are incorrect for stronger adversarial examples obtained using the exhaustive verification oracle. This underscores the need for verification, as robustness with respect to suboptimal adversarial attacks alone might give a false sense of security. <<</Oracle Accuracy and Adversarial Accuracy.>>> <<<Effectiveness of Simplex Bounds with IBP.>>> Rather than sampling individual points from the perturbation space, IBP training covers the full space at once. The resulting models achieve the highest exhaustively verified accuracy at the cost of only moderate deterioration in nominal accuracy (Table TABREF28). At test time, IBP allows for constant-time verification with arbitrary $\delta $, whereas exhaustive verification requires evaluation over an exponentially growing search space. 
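As a concrete picture of the two-forward-pass verification just mentioned, the sketch below propagates simplex bounds through a small feed-forward classifier: first-layer bounds are taken element-wise over the (dilated) simplex vertices, the next layer uses interval arithmetic for an affine transform, and the final test is the worst-case logit difference. The two-layer fully-connected architecture and variable names are illustrative assumptions following the equations in the methodology section; they are not the authors' model, which uses convolutional layers.

import torch
import torch.nn.functional as F

@torch.no_grad()
def ibp_verify(layer1, layer2, vertices, y_true):
    """vertices: (M+1, d) rows holding x_0 and the dilated perturbations
    z^(m) = x_0 + delta * (p^(m) - x_0); layer1, layer2: torch.nn.Linear layers."""
    # Layer 1: push every simplex vertex through the affine map + ReLU, then take min/max.
    h = F.relu(layer1(vertices))                       # (M+1, hidden)
    lower, upper = h.min(dim=0).values, h.max(dim=0).values

    # Output layer: interval arithmetic for an affine transform on [lower, upper].
    W, b = layer2.weight, layer2.bias                  # (classes, hidden), (classes,)
    mid, rad = (lower + upper) / 2, (upper - lower) / 2
    z_lo = W @ mid + b - W.abs() @ rad                 # lower bounds on the logits
    z_hi = W @ mid + b + W.abs() @ rad                 # upper bounds on the logits

    # Worst case: true-class logit at its lower bound, every rival at its upper bound.
    rivals = z_hi.clone()
    rivals[y_true] = float("-inf")
    margin = (rivals.max() - z_lo[y_true]).item()
    return margin < 0, margin                          # margin < 0 => verifiably robust

The same worst-case logits can be plugged into the cross-entropy term of the verifiable training objective, which is how the bound enters training.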
<<</Effectiveness of Simplex Bounds with IBP.>>> <<<Perturbation Space Size.>>> In Table TABREF28, when the perturbation space is larger (SST character-level vs. SST word-level), (a) across models, there is a larger gap in adversarial accuracy and true robustness (oracle); (b) the difference in oracle robustness between IBP and adversarial training is even larger (73.1% vs. 25.8% and 76.5% vs. 74.6%). <<</Perturbation Space Size.>>> <<<Perturbation Budget.>>> In Figures FIGREF31 and FIGREF36, we compare normal training, adversarial training, data augmentation, and verifiable training models with four metrics under various perturbation budgets on the SST dataset. Overall, as the perturbation budget increases, the adversarial accuracy, oracle accuracy, and IBP-verified accuracy decrease. We can observe that even for large perturbation budgets, verifiably trained models are still able to verify a sizable number of samples. Again, although adversarial accuracy flattens for larger perturbation budgets in the word level experiments, oracle verification can further find counterexamples to change the prediction. Note that exhaustive verification becomes intractable with large perturbation sizes. <<</Perturbation Budget.>>> <<<Computational Cost of Exhaustive Verification.>>> The perturbation space in NLP problems is discrete and finite, and a valid option to verify the specification is to exhaustively generate predictions for all $\mathbf {z} _0 \in \mathcal {X}_\mathrm {in} (\mathbf {x} _0)$, and then check if at least one does not match the correct label. Conversely, such an exhaustive (oracle) approach can also identify the strongest possible attack. But the size of $\mathcal {X}_\mathrm {in}$ grows exponentially with $\delta $, and exhaustive verification quickly becomes prohibitively expensive. In Table TABREF48, we show the maximum perturbation space size in the SST and AG News test set for different perturbation radii $\delta $. This number grows exponentially as $\delta $ increases. To further illustrate this, Figure FIGREF49 shows the number of forward passes required to verify a given proportion of the SST test set for an IBP-trained model using exhaustive verification and IBP verification. IBP reaches verification levels comparable to an exhaustive verification oracle, but requires only two forward passes to verify any sample – one pass for computing the upper, and one for the lower bounds. Exhaustive verification, on the other hand, requires several orders of magnitude more forward passes, and there is a tail of samples with extremely large attack spaces. <<</Computational Cost of Exhaustive Verification.>>> <<</Results>>> <<<Counter-Fitted Embeddings>>> As shown in Figures FIGREF31 and FIGREF36, although IBP can verify arbitrary networks in theory, the verification bound is very loose except for models trained to be IBP-verifiable. One possible reason is the potentially large volume of the perturbation simplex. Since representations of substitution words/characters are not necessarily close to those of synonyms/typos in embedding space, the vertices of the simplex could be far apart, and thus cover a large area in representation space. Therefore, when propagating the interval bounds through the network, the interval bounds become too loose and fail to verify most of the examples if the models are not specifically trained. 
To test this hypothesis, we follow BIBREF38 and use fine-tuned GloVe embeddings trained to respect linguistic constraints; these representations (called counter-fitted embeddings) force synonyms to be closer and antonyms to be farther apart using word pairs from the PPDB database BIBREF36 and WordNet BIBREF39. We repeat the word level experiments with these counter-fitted embeddings, Figures FIGREF36 and FIGREF36 show the experimental results. We observe that IBP verified accuracy is now substantially higher across models, especially for $\delta =1, 2, 3$. The examples which IBP can verify increase by up to 33.2% when using the counter-fitted embeddings (normal training, $\delta =1$). Moreover, adversarial and exhaustively verified accuracy are also improved, at the cost of a mild deterioration in nominal test accuracy. The IBP-trained model also further improves both its oracle accuracy and IBP verified accuracy. These results validate our hypothesis that reducing the simplex volume via soft linguistic constraints can provide even tighter bounds for IBP, resulting in larger proportions of verifiable samples. <<</Counter-Fitted Embeddings>>> <<</Experiments>>> <<<Discussion>>> Our experiments indicate that adversarial attacks are not always the worst adversarial inputs, which can only be revealed via verification. On the other hand, exhaustive verification is computationally very expensive. Our results show that using the proposed simplex bounds with IBP can verify a sizable amount of test samples, and can be considered a potent verification method in an NLP context. We note however two limitations within the scope of this work: i) limited model depth: we only investigated models with few layers. IBP bounds are likely to become looser as the number of layers increases. ii) limited model types: we only studied models with CNN and fully connected layers. We focused on the HotFlip attack to showcase specification verification in the NLP context, with the goal of understanding factors that impact its effectiveness (e.g. the perturbation space volume, see Section SECREF50). It is worth noting that symbol substitution is general enough to encompass other threat models such as lexical entailment perturbations BIBREF40, and could potentially be extended to the addition of pre/postfixes BIBREF2, BIBREF41. Interesting directions of future work include: tightening IBP bounds to allow applicability to deeper models, investigating bound propagation in other types of neural architectures (e.g. those based on recurrent networks or self-attention), and exploring other forms of specifications in NLP. <<</Discussion>>> <<<Conclusion>>> We introduced formal verification of text classification models against synonym and character flip perturbations. Through experiments, we demonstrated the effectiveness of the proposed simplex bounds with IBP both during training and testing, and found weaknesses of adversarial training compared with exhaustive verification. Verifiably trained models achieve the highest exhaustive verification accuracy on SST and AG News. IBP verifies models in constant time, which is exponentially more efficient than naive verification via exhaustive search. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Stanford Sentiment Treebank (SST) BIBREF15 and AG News BIBREF16" ], "type": "extractive" }
1908.06006
Please answer the following question with yes or no based on the given text. You only need to output 'Yes' or 'No' without any additional explanation. Question: Do they compare to other models appart from HAN? Context: <<<Title>>> Bidirectional Context-Aware Hierarchical Attention Network for Document Understanding <<<Abstract>>> The Hierarchical Attention Network (HAN) has made great strides, but it suffers a major limitation: at level 1, each sentence is encoded in complete isolation. In this work, we propose and compare several modifications of HAN in which the sentence encoder is able to make context-aware attentional decisions (CAHAN). Furthermore, we propose a bidirectional document encoder that processes the document forwards and backwards, using the preceding and following sentences as context. Experiments on three large-scale sentiment and topic classification datasets show that the bidirectional version of CAHAN outperforms HAN everywhere, with only a modest increase in computation time. While results are promising, we expect the superiority of CAHAN to be even more evident on tasks requiring a deeper understanding of the input documents, such as abstractive summarization. Code is publicly available. <<</Abstract>>> <<<Introduction>>> Recently, hierarchical architectures have become ubiquitous in NLP. They have been applied to a wide variety of tasks such as language modeling and generation BIBREF0, BIBREF1, neural machine translation (NMT) BIBREF2, summarization BIBREF3, sentiment and topic classification BIBREF4, BIBREF5, and spoken language understanding BIBREF6, BIBREF7, to cite only a few examples. All hierarchical architectures capitalize on the same intuitive idea that the representation of the input text should be learned in a bottom-up fashion by using a different encoder at each granularity level (e.g., words, sentences, paragraphs), where the encoder at level $l+1$ takes as input the output of the encoder at level $l$. One of the earliest and most influential examples is the Hierarchical Attention Network (HAN) of BIBREF5 (see Fig. FIGREF6 and section SECREF2). It is a two-level architecture, where at level 1, each sentence in the document is separately encoded by the same sentence encoder, resulting in a sequence of sentence vectors. That sequence is then processed at level 2 by the document encoder which returns a single vector representing the entire document. The sentence and document encoders are both self-attentional bidirectional Recurrent Neural Networks (RNNs), with different parameters. <<<Observed problem>>> HAN was highly successful and established new state of the art on six large-scale sentiment and topic classification datasets. However, it has a major weakness: at level 1, each sentence is encoded in isolation. That is, while producing the representation of a given sentence in the document, HAN completely ignores the other sentences. This lack of communication is obviously suboptimal. For example, in Fig. FIGREF2, the same highly negative feature (“terrible value”) has been repeated at the beginning of each sentence in the document. Because it encodes each sentence independently, HAN has no choice but to spend most of its attentional budget on the most salient feature every time. As a result, HAN neglects the other aspects of the document. On the other hand, CAHAN is informed about the context, and thus quickly stops spending attention weight on the same highly negative pattern, knowing that is has already been covered. 
CAHAN is then able to cover the other topics in the document (“seafood”,“scallops” and “mussels”; “entree” and “appetizer”; triple negation in the fourth sentence). As another example, consider the edge case of a document containing the same sentence repeated several times, as shown in Fig. FIGREF3. With HAN, the exact same embedding is produced for each instantiation of the sentence, as a result of the context-blind self-attention mechanism always making the same alignment decisions. However, the context-aware sentence encoder of CAHAN allows it to extract complementary, rather than redundant information, from each instantiation of the sentence. This results in better coverage (“reasonably priced”, “arrived late”), in a richer document representation, and ultimately in a more accurate prediction (positive instead of very positive). One may argue that in basic HAN, the document encoder at level 2 already does capture some notion of context, by assigning importance scores to sentences. However, at level 2, the sentence vectors have already been formed, and it is too late to modify them. Since the document encoder can only rank the sentence representations, it cannot address issues like high redundancy. In that case, important subtopics or details in the document will not be covered, no matter sentence scores. <<</Observed problem>>> <<<Context-aware HAN>>> In this work, we propose and evaluate several modifications of the HAN architecture that allow the sentence encoder at level 1 to make its attentional decisions based on contextual information, allowing it to learn richer document representations. Another significant contribution is the introduction of a bidirectional version of the document encoder, where one RNN processes the document forwards, using the preceding sentences as context, and another one processes it backwards, using the following sentences as context. The remainder of this paper is structured as follows. We start by formally introducing basic HAN (section SECREF2), we then explain our contributions (section SECREF3), and detail our experimental setup (section SECREF4). Finally, we interpret our results and list areas of future development (sections SECREF5 and SECREF7). Related work is reviewed in section SECREF6. <<</Context-aware HAN>>> <<</Introduction>>> <<<HAN>>> The baseline HAN model as introduced by BIBREF5 is shown in Fig. FIGREF6 along with our modifications (disregard the bold lines for the baseline). The sentence and document encoders, used respectively at level 1 and level 2, have different parameters but share the exact same architecture. Thus, in what follows, we only describe the sentence encoder in detail. <<<Notation>>> Next, we use boldface upper case for tensors, upper case for matrices, boldface lower case for vectors, and lower case for scalars. We define a document $\mathbf {X} \in \mathbb {R}^{N \times T_i \times d}$ as a sequence of $N$ sentences $(S_1, \dots , S_N)$. Each sentence $S_i$ is a sequence of $T_i$ $d$-dimensional word vectors $(\mathbf {x}_{i1}, \dots , \mathbf {x}_{iT_i}) \in \mathbb {R}^{T_i \times d}$. <<</Notation>>> <<<Sentence encoder>>> First, the sentence-level bidirectional RNN $f_s$ processes the input sentence $S_i$ and returns a sequence of $T_i$ $2d_s$-dimensional hidden states $(\mathbf {h}_{i1},\dots , \mathbf {h}_{iT_i}) \in \mathbb {R}^{T_i \times 2d_s}$. 
$f_s$ is composed of two non-stacking RNNs $\overrightarrow{f_s}$ and $\overleftarrow{f_s}$ with Gated Recurrent Units BIBREF8, respectively parsing $S_i$ from left to right and right to left: $\overrightarrow{f_s}$ and $\overleftarrow{f_s}$ have the same hidden layer dimensionality $d_s$, but different parameters. At each time step $t$, the word annotations they return are concatenated, producing $2d_s$-dimensional annotations that summarize the immediate context surrounding each word: Then, a self-attention mechanism computes the representation $\mathbf {s}_i$ of sentence $S_i$ as a weighted sum of its word annotations: Where the vector of attentional coefficients $\mathbf {\alpha }$ is a softmax-normalized version of the alignment vector $\mathbf {e}$, which itself is obtained by passing the word annotations through a dense layer (parameterized by $W_s \in \mathbb {R}^{2d_s\times 2d_s}$) and comparing the output with a trainable vector $\mathbf {u}_s \in \mathbb {R}^{2d_s}$: $\mathbf {u}_s$ is initialized randomly. It can be interpreted as a “super-word” whose vector contains the ideal combination of latent topics, on average. The closest the annotation of a word is to this ideal representation, the more attention that word will be given. The sentence encoder is applied to all sentences in document $\mathbf {X}$, producing a sequence of $N$ sentence vectors $(\mathbf {s_1},\dots ,\mathbf {s_N}) \in \mathbb {R}^{N\times 2d_s}$. <<</Sentence encoder>>> <<<Document encoder>>> The document encoder is a self-attentional bidirectional GRU-RNN, like the sentence encoder, but it has different parameters. The dimensionality of its hidden states is $2d_d$. The document encoder is applied only once, to the sequence of sentence vectors, to produce the sequence of sentence annotations $(\mathbf {h}_{1}, \dots , \mathbf {h}_{N})$. Then, a self-attention layer outputs the final document vector. <<</Document encoder>>> <<</HAN>>> <<<Proposed architecture: CAHAN>>> As was previously explained, each sentence is encoded independently by HAN, without considering any kind of contextual information. To solve this issue, we inject a context vector $\mathbf {c_i}$ into the self-attention mechanism, to guide the model during the computation of the word alignment coefficients. In effect, Eq. DISPLAY_FORM12 becomes: We propose two approaches for computing $\mathbf {c_i}$, namely CAHAN-SUM and CAHAN-RNN, shown as the two bolded connections in Fig. FIGREF6. <<<Summed context (CAHAN-SUM)>>> We introduce two settings, (1) left-to-right and bidirectional. Whenever there is no preceding/following sentence, i.e., at the beginning/end of a document, the context vector is initialized with zeroes. <<<Left-to-right (LR)>>> In the LR case, the context vector is computed as the sum of the preceding sentence representations: <<</Left-to-right (LR)>>> <<<Bidirectional (BI)>>> In the BI case, we compute two context vectors, respectively by summing the representations of the sentences preceding and following the current sentence $S_i$. These two vectors are passed to two identical context-aware self-attention mechanisms (Eq. DISPLAY_FORM14) with different parameters. The resulting forward and backward sentence representations are then processed respectively by the forward and backward RNNs of the document encoder at level 2, and the resulting annotations are concatenated to produce the final sentence annotations. 
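As a concrete reading of the sentence encoder just described, here is a compact PyTorch sketch: a bidirectional GRU over word vectors followed by self-attention against the trainable "super-word" $\mathbf{u}_s$. This is an illustrative re-implementation, not the authors' released code; the sizes follow the hyperparameters reported later ($d=200$ word vectors, $d_s=50$), and the CAHAN variants additionally inject a projected context vector $W_c\mathbf{c}_i$ inside the tanh (see the gated sketch further below).

```python
# Minimal sketch of the HAN sentence encoder (bi-GRU + self-attention), assumed sizes.
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    def __init__(self, d_word=200, d_s=50):
        super().__init__()
        self.rnn = nn.GRU(d_word, d_s, bidirectional=True, batch_first=True)
        self.W_s = nn.Linear(2 * d_s, 2 * d_s)           # dense layer over word annotations
        self.u_s = nn.Parameter(torch.randn(2 * d_s))    # trainable "super-word" vector

    def forward(self, words):                            # words: (batch, T_i, d_word)
        h, _ = self.rnn(words)                           # word annotations (batch, T_i, 2*d_s)
        e = torch.tanh(self.W_s(h)) @ self.u_s           # alignment scores (batch, T_i)
        alpha = torch.softmax(e, dim=1)                  # attentional coefficients
        return (alpha.unsqueeze(-1) * h).sum(dim=1)      # sentence vector s_i (batch, 2*d_s)

enc = SentenceEncoder()
print(enc(torch.randn(3, 7, 200)).shape)                 # torch.Size([3, 100])
```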
CAHAN-SUM was inspired by the coverage vectors of seq2seq architectures, which have been shown very effective in addressing under(over)-translation in NMT BIBREF9, and repetition in summarization BIBREF10. Such coverage vectors are typically computed as the sum, over all previous decoder steps, of the attention distribution over the source tokens. However, in our case, we cannot keep track of the attention distribution history, since sentences are unique and cannot be aligned. This is why we work with sentence representations instead. <<</Bidirectional (BI)>>> <<<Centroid version (@!START@$\mu $@!END@)>>> $\overrightarrow{\mathbf {c}_i}$, as defined by Eq. DISPLAY_FORM17, grows larger in magnitude as $i$ increases (the sum has more and more terms), which can blur the alignment decisions for the sentences at the end of a document (LR case), or both at the end and beginning of a document, when reading forwards and backwards (BI case). Therefore, we also experiment with a centroid, rather than sum, context vector: <<</Centroid version (@!START@$\mu $@!END@)>>> <<</Summed context (CAHAN-SUM)>>> <<<Recurrent Context (CAHAN-RNN)>>> Here, we capitalize on the capability of RNNs, especially when equipped with LSTM or GRU units, to keep track of information over long time periods. We simply use as context vector the document encoder annotation at the preceding/following time step. That is, we have, in the LR case: By design, $\mathbf {h}_{i-1}$ summarizes the entire history $(\mathbf {s_1},\dots ,\mathbf {s_{i-1}})$ of sentence vectors, with a preference for the most recent time steps. If the sequence is very long though, even a GRU-RNN will eventually forget about the first elements. However, for the relatively short documents we experiment with (see Table TABREF29), we can assume the annotations of the document encoder to faithfully represent the entire sequence. <<</Recurrent Context (CAHAN-RNN)>>> <<<Gated context>>> In NMT, BIBREF11 introduced a gating mechanism to allow the decoder to balance the contribution of the source and target information in generating the next word. The same idea can be found in numerous other NMT studies, e.g., BIBREF2, BIBREF12, BIBREF13. Inspired by this line of research, we propose a modification of Eq. DISPLAY_FORM14 to let our model explicitly decide how much contextual information it should take into account in making its alignment decisions: $\mathbf {\lambda }$ is produced by a trainable mechanism taking as input the word annotations and the context vector: The sigmoid activation ensures that $\mathbf {\lambda }$ plays a filtering role, by squashing all its entries to $[0,1]$. The gate gives more expressiveness to the attention mechanism. Indeed, contextual information should not always be given the same importance, depending on the situation. E.g., when most of the document has been processed, context is likely to be very important, in order to limit redundancy and increase coverage. However, at the beginning of a document, or in the case of a very short or focused sentence, context might not be useful as only one single topic might be extractable from the sentence anyways. From an optimization perspective, $\mathbf {\lambda }$ also has the desirable effect of regulating the magnitude of the context vector, preventing it from pushing the tanh to regions of very small gradient. This is especially useful with CAHAN-SUM, as in that case, $\mathbf {c}_i$ gets large towards the end/beginning of documents (forwards/backwards reading). 
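To make the gated context mechanism above concrete, the following sketch extends the attention step of the previous snippet. The paper references Eq. DISPLAY_FORM14 and DISPLAY_FORM25 without reproducing them here, so the exact functional form below is an assumption (one plausible reading): $\mathbf{\lambda}_t = \sigma(W_{\lambda_1}\mathbf{h}_{it} + W_{\lambda_2}\mathbf{c}_i)$ filters the projected context before it enters the tanh.

```python
# Hedged sketch of gated, context-aware alignment scoring (assumed equation form).
import torch
import torch.nn as nn

d = 100                                                  # 2*d_s, per the reported hyperparameters
W_s, W_c = nn.Linear(d, d), nn.Linear(d, d)
W_l1, W_l2 = nn.Linear(d, d), nn.Linear(d, d)
u_s = nn.Parameter(torch.randn(d))

def gated_alignment(h, c):
    """h: (T, d) word annotations; c: (d,) context vector; returns (T,) attention weights."""
    lam = torch.sigmoid(W_l1(h) + W_l2(c))               # entries in [0, 1]: how much context to use
    e = torch.tanh(W_s(h) + lam * W_c(c)) @ u_s          # gated, context-aware alignment scores
    return torch.softmax(e, dim=0)

alpha = gated_alignment(torch.randn(12, d), torch.randn(d))
print(alpha.shape, float(alpha.sum()))                   # torch.Size([12]) 1.0
```

Besides regulating the magnitude of $\mathbf{c}_i$, the sigmoid gate lets the model suppress context entirely for focused sentences, which matches the motivation given above.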
<<</Gated context>>> <<<Complexity and sequentiality>>> Assuming that $d \sim 2d_s$ and that $d_s \sim d_d$, which holds in practice under reasonable settings, all matrix multiplications in the network have similar complexity, of order of magnitude $\mathcal {O}(d^2)$. Moreover, since we use GRU-RNNs, there are 6 matrix multiplication per encoder. This number is doubled, as we use bidirectional RNNs. Finally, the two self-attention mechanisms, one at each level, add two multiplications. Therefore, in the HAN baseline architecture, there are a total of 26 matrix multiplications (13 at each level). To that, CAHAN-SUM and CAHAN-RNN simply add one matrix multiplication ($W_c\mathbf {c}_i$ in Eq. DISPLAY_FORM14) in the LR case and two in the BI case. This corresponds to negligible 4% and 8% increases in total computational cost. On top of that, gating adds two multiplications in the LR case ($W_{\lambda _1}\mathbf {h}_{it}$ and $W_{\lambda _2}\mathbf {c}_i$ in Eq. DISPLAY_FORM25) and four in the BI case. All in all, this represents three and six extra multiplications compared to basic HAN, resp. in the LR and BI cases. Again, this corresponds to small increases in computational cost, of 11.5% and 23%, respectively. However, with CAHAN-SUM, the representations of the preceding/following sentences are now required before computing the current sentence representation. With CAHAN-RNN, one even has to wait until the level 2 RNN has processed the preceding/following sentence vectors before being able to encode the current sentence. Therefore, the sentence encoding process, which was parallelizable with basic HAN due to independence, has now become a sequential process. This is why in practice, we observe slightly greater runtime increases, in the range 5-22% (see Table TABREF43). <<</Complexity and sequentiality>>> <<</Proposed architecture: CAHAN>>> <<<Experimental setup>>> <<<Datasets>>> We evaluate the quality of the document embeddings learned by the different variants of CAHAN and the HAN baseline on three of the large-scale document classification datasets introduced by BIBREF14 and used in the original HAN paper BIBREF5. They fall into two categories: topic classification (Yahoo) and fine-grained sentiment analysis (Amazon, Yelp). Dataset statistics are shown in Table TABREF29. Classes are perfectly balanced, for all datasets. <<</Datasets>>> <<<Model configuration>>> This subsection describes the preprocessing and hyperparameter setting we used. <<<Preprocessing and word embeddings>>> For preprocessing (and the HAN baseline), we used the publicly available implementation of BIBREF15, which closely follows the description and details given in the original HAN paper BIBREF5. More precisely, on each dataset, we randomly split the training set into training (90%) and validation (10%). Documents are then tokenized into sentences and sentences are tokenized into tokens. The tokens appearing less than 5 times in the corpus are replaced with a special UNK token. Finally, we pre-train our own word vectors with word2vec BIBREF16 on the training and validation splits. <<</Preprocessing and word embeddings>>> <<<Hyperparameters>>> We do not tune any hyperparameter except the learning rate (see subsection SECREF35). We set the hidden layer dimensionality of the two RNN encoders to $d_s=50$ and $d_d=50$. Thus, the word annotations, sentence vectors, sentence annotations and document vector all have size 100. 
With regularization in mind, we set the dimensionality of the word embeddings to $d=200$ on the very large datasets (Amazon and Yahoo!) and to $d=100$ on Yelp, as shown in Table TABREF29. We also use a greater batch size of 128 on the large datasets, versus 64 on Yelp. <<</Hyperparameters>>> <<</Model configuration>>> <<<Training details>>> We zero-pad sentences and documents. Like in BIBREF5, to make the most out of each batch, we ensure they are as dense as possible by using a bucketing strategy. More precisely, we build each batch so that it contains documents of approximately the same size, in number of sentences. For regularization, we use dropout BIBREF17 with a rate of 0.5 at each layer. For classification, the document vectors are passed to a dense layer with softmax activation, whose dimensionality is equal to the number of categories to be predicted. Initialization has a significant impact on performance. To make sure the differences we measure are due to differences in the models and not in initial condition, we use the same initialization weights for each model. <<<SGD with cyclical learning rate>>> To minimize the categorical cross-entropy loss, we use the stochastic gradient descent optimizer with a triangular cyclical learning rate schedule and opposite triangular momentum schedule BIBREF18, BIBREF19. Following the authors' recommendations, we use a fixed $[0.85,0.95]$ momentum range, while for the learning rate, we perform a range test on the validation set, for each model, searching the $[0.001,3]$ range. With a triangular schedule, the learning rate linearly increases for a certain number of iterations (half-cycle), and then linearly decreases back to its initial value during the second half of the cycle. Cycles are repeated until training ends. High learning rate values make training faster, by allowing large updates and the use of greater batch sizes while keeping the amount of regularization constant. Also, the cyclical schedule injects beneficial stochastic noise to the gradient updates, which improves generalization BIBREF20. We use cycles of 12 epochs, and an early stopping strategy, monitoring the test loss, with a patience of slightly more than one cycle. We set the maximum number of epochs for all models to 50. <<</SGD with cyclical learning rate>>> <<</Training details>>> <<</Experimental setup>>> <<<Results>>> As can be seen in Table TABREF37, the best version of CAHAN (SUM-BI-$\Sigma $) consistently outperforms the HAN baseline, which shows that taking contextual information into account helps producing better document representations. Also, the two unidirectional variants (LR) slightly underperform the baseline and are clearly inferior to BI, which illustrates the value added by processing the document forwards and backwards, using preceding and following sentences as context. <<<Summing vs. averaging>>> In the unidirectional case, it is surprising to note that CAHAN-SUM-LR-$\mu $ is slightly better than CAHAN-SUM-LR-$\Sigma $, i.e., the centroid-based context vector (Eq. DISPLAY_FORM20) is better than the sum-based one (Eq. DISPLAY_FORM17). Indeed, from an information theory standpoint, it should be the opposite, as summing keeps track of all information whereas averaging is lossy. We hypothesize that towards the end of a document, the sum-based context vector grows large in magnitude, which perturbs the alignment decisions and deteriorates the quality of the sentence vectors. 
On the other hand, the centroid-based vector, which has constant magnitude, does not suffer from this issue. We further hypothesize that this issue is attenuated in the bidirectional case (CAHAN-SUM-BI-$\mu $ and CAHAN-SUM-BI-$\Sigma $ are on par) due to a counterbalancing phenomenon. Indeed, the last sentences processed by the left-to-right encoder are the first ones processed by the right-to-left encoder. Therefore, through concatenation, the overall quality of the sentence embeddings stays constant. <<</Summing vs. averaging>>> <<<Gating>>> As expected, gating improves performance, especially for the $\Sigma $ variants of CAHAN-SUM (and especially the LR ones). To be noted are significant boosts of 0.45 and 0.24 in accuracy respectively for CAHAN-SUM-LR-$\Sigma $ and CAHAN-SUM-BI-$\Sigma $ on Yelp. On Amazon, gating also offers CAHAN-SUM-LR-$\Sigma $ a nice 0.27 improvement. These positive results give a clue that regulating the magnitude of the context vector $\mathbf {c}_i$ is indeed beneficial. Nevertheless, gating also improves the performance of the $\mu $ variants of CAHAN, which do not suffer from the context vector magnitude issue. This shows that gating is also helpful via giving more expressiveness to the model. For instance, on Amazon, gating boosts the performance of CAHAN-SUM-BI-$\mu $ by 0.12. It is interesting to note that overall, gating is mostly effective on Yelp and Amazon. We attribute this to the difference in task. Sentiment analysis may rely more on contextual information than topic classification. <<</Gating>>> <<<CAHAN-RNN-BI>>> The consistently bad performance of CAHAN-RNN-BI is to be noted. This was unexpected, as an equivalent approach was used by BIBREF6 for dialogue act classification, with significant improvements. We hypothesize that in our case, CAHAN-RNN-BI is not effective because, unlike utterances in a speech transcription, sentences in a document are not ordered in a temporal fashion. In other words, sentences far away from the current sentence are not necessarily less relevant than closer sentences. Thus, considering each sentence equally is better than imposing an implicit time-decay via a RNN. <<</CAHAN-RNN-BI>>> <<<Runtimes>>> We compare the average runtime per iteration of some variants of CAHAN to that of basic HAN in Table TABREF43. For CAHAN-SUM-$\Sigma $, we observe that the unidirectional variant (LR) is 5.7% slower than basic HAN (37 vs. 35ms per iteration), whereas the bidirectional variant (BI) is 23% slower (43 vs. 35 ms). When gating, these number increase to 14.3% and 37% (40 and 48ms vs. 35ms). These differences are not far from our theoretical expectations (see subsection SECREF26), especially for LR. Indeed, recall that based on matrix multiplication counts, we had forecasted increases of 4% and 8% (11.5% and 23% when using gating), respectively for LR and BI. The gap for BI can be explained by a probable bottleneck in the implementation. CAHAN-RNN adds the same number of matrix multiplications as CAHAN-SUM, so we should in principle observe the same increases. However, as was explained in subsection SECREF26, with CAHAN-RNN we have to wait until the level 2 RNN has processed the preceding or preceding/following sentence vectors (LR or BI case) before being able to encode the current sentence. This explains the extra-time needed (40 vs. 37ms and 49 vs. 43ms). <<</Runtimes>>> <<</Results>>> <<<Related work>>> In what follows, we provide a review of the relevant literature. 
One should note that by context, in this paper, we do not refer to the intra-sentence or internal context vector of seq2seq encoders BIBREF21, BIBREF11, BIBREF13. Rather, we refer to the cross-sentence, external, or document-level context. A few studies only have focused on developing models that take that type of context into account. Most of these studies originate from NMT. We briefly describe them next. BIBREF2 obtain a global context vector by feeding a fixed number of the previous source sentences to HAN. They then compare two ways of injecting it into the encoder-decoder model. First, they propose a warm-start approach, in which the encoder and/or decoder hidden states are initialized with the context vector. Second, they experiment with an auxiliary strategy in which the intra-sentence context vector of the encoder is concatenated with the global context vector and passed either (i) directly to the decoder, or (ii) after going through a filtering gate. However, unlike our mechanism and that of BIBREF11, BIBREF12, BIBREF13, which all feature two coupled gates, the mechanism of BIBREF2 has only one gate. All strategies proposed by BIBREF2 significantly improve performance, but first place is reached by a combination of the warm-start and gated techniques. BIBREF22 use an approach similar to the auxiliary approach of BIBREF2, but they compute the context vector only from the sentence immediately preceding the current source sentence. They then pass it to a dedicated encoder featuring a customized attention mechanism. BIBREF12 and BIBREF23 both extend the Transformer architecture BIBREF24 with a context encoder featuring self-attentional and feed-forward layers. Then, BIBREF12 combine the context representation with the source representation produced by the basic Transformer encoder via a gating mechanism. They do not modify the decoder part of the Transformer. BIBREF23 go one step further by passing the contextual information both to the encoder and the decoder. In both cases, they add a self-attention mechanism over the context representation. For the decoder though, they also replace the residual connection after the context self-attention with a gating mechanism, to limit the influence of the context information on the source information. One piece of work closely related to our study is BIBREF3. The authors also use a hierarchical attention architecture, where at level 1, each paragraph of a document is encoded by a dedicated encoder. All encoders share the same stacking bi-RNN architecture. Moreover, they communicate at each layer to produce context-aware annotations of the words in their paragraphs. More precisely, at a given layer of the stacking RNN, a given encoder is passed the average of the representations learned by the other encoders at the corresponding layer (like with CAHAN-SUM-$\mu $). This context vector is then combined with the hidden states and passed as input to the upper RNN layer. At level 2, the top RNN layer annotations are passed to a word attention mechanism followed by a paragraph attention mechanism. A major difference with our work is that the authors combine the encoder with a decoder, to perform abstractive summarization of long documents, whereas we only focus on the encoding part. The word and paragraph attentional decisions at level 2 are thus made by the decoder. Another significant difference is that the authors use reinforcement learning for training, instead of SGD. Context-aware models have also been proposed in other NLP domains. 
E.g., for spoken language understanding, BIBREF7 prepend and append the current utterance with two special word vectors respectively summarizing the $C$ preceding and following utterances (respectively), where $C$ is a hyperparameter. This indirectly initializes the hidden states of the left-to-right and right-to-left components of a bidirectional RNN, like with the warm-start approach of BIBREF2. On the other hand, BIBREF6 rely on a mechanism equivalent to LR-CAHAN-RNN. They find that it significantly boosts dialogue act classification accuracy. As discussed in section SECREF5, we hypothesize that CAHAN-RNN is not effective in our application because sentences in a document are not ordered in a temporal manner. <<</Related work>>> <<<Discussion and next steps>>> While bidirectional CAHAN-SUM systematically outperforms HAN, margins are modest. We attribute this to the fact that the datasets used in our experiments contain short documents (see Table TABREF29) featuring simple sentences. Thus, the superior expressiveness of CAHAN is not able to show. To address this issue, we plan in future work to experiment on datasets featuring long documents containing complex sentences. Moreover, the tasks of sentiment and topic classification do not require a deep understanding of the input documents. Even if a given document contains some complex sentences with multiple clauses and subtopics, capturing the polarity of only one simple, unambiguous sentence or pattern may be enough to accurately predict the category of the entire document (e.g., “hated”, “loved”, “definitely recommends”, “will never come back”, etc.). Thus, we hypothesize that when trained to solve such tasks, CAHAN does not learn to use its context-aware capabilities to the fullest extent. One solution, and promising area of future work, would consist in explicitly giving CAHAN knowledge about coverage, diversity, and redundancy. This could be done by modifying the sentence attention mechanism and/or by adding a term to the loss. Another natural next step is to experiment on tasks requiring a deeper understanding of text, such as end-to-end abstractive summarization. Some other ideas for improvement include combining CAHAN-SUM with CAHAN-RNN, and/or the mean and centroid vectors; for CAHAN-SUM, obtaining the centroid vector through a trainable mechanism rather than via pooling; and experimenting with a trainable matrix (instead of vector) in the self-attention at both level 1 and level 2, like in BIBREF25. Finally, the context vector could be seen as an external, general summary of the document, and be pre-computed offline by a dedicated encoder. <<</Discussion and next steps>>> <<<Conclusion>>> In this paper, we proposed several modifications of the HAN architecture that make the sentence encoder context-aware (CAHAN). Results show that taking context into account is beneficial. Specifically, the bidirectional version of the document encoder, that processes the documents forwards and backwards, using the preceding and following sentences as context, outperforms the HAN baseline on all datasets and is superior to the undirectional variant. Moreover, the computational overhead is small. Experiments on tasks requiring a deeper understanding of the input documents should better highlight the superiority of CAHAN. <<</Conclusion>>> <<</Title>>>
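As a small illustration of the bucketing strategy mentioned in the training details above, the sketch below groups documents with a similar number of sentences into batches so that zero-padding per batch stays small. The exact batching logic of the released implementation may differ; treat this as an assumed simplification.

```python
# Assumed simplification of length-based bucketing for batching documents.
def bucketed_batches(documents, batch_size=64):
    """documents: list of lists of sentences. Yields batches of similar-length documents."""
    ordered = sorted(documents, key=len)                 # group by number of sentences
    for start in range(0, len(ordered), batch_size):
        yield ordered[start:start + batch_size]          # each batch is then zero-padded

docs = [["s1"], ["s1", "s2", "s3"], ["s1", "s2"], ["s1", "s2", "s3", "s4"]]
for batch in bucketed_batches(docs, batch_size=2):
    print([len(d) for d in batch])                       # [1, 2] then [3, 4]
```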
{ "references": [ "No" ], "type": "boolean" }
1908.06006
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What are the datasets used Context: <<<Title>>> Bidirectional Context-Aware Hierarchical Attention Network for Document Understanding <<<Abstract>>> The Hierarchical Attention Network (HAN) has made great strides, but it suffers a major limitation: at level 1, each sentence is encoded in complete isolation. In this work, we propose and compare several modifications of HAN in which the sentence encoder is able to make context-aware attentional decisions (CAHAN). Furthermore, we propose a bidirectional document encoder that processes the document forwards and backwards, using the preceding and following sentences as context. Experiments on three large-scale sentiment and topic classification datasets show that the bidirectional version of CAHAN outperforms HAN everywhere, with only a modest increase in computation time. While results are promising, we expect the superiority of CAHAN to be even more evident on tasks requiring a deeper understanding of the input documents, such as abstractive summarization. Code is publicly available. <<</Abstract>>> <<<Introduction>>> Recently, hierarchical architectures have become ubiquitous in NLP. They have been applied to a wide variety of tasks such as language modeling and generation BIBREF0, BIBREF1, neural machine translation (NMT) BIBREF2, summarization BIBREF3, sentiment and topic classification BIBREF4, BIBREF5, and spoken language understanding BIBREF6, BIBREF7, to cite only a few examples. All hierarchical architectures capitalize on the same intuitive idea that the representation of the input text should be learned in a bottom-up fashion by using a different encoder at each granularity level (e.g., words, sentences, paragraphs), where the encoder at level $l+1$ takes as input the output of the encoder at level $l$. One of the earliest and most influential examples is the Hierarchical Attention Network (HAN) of BIBREF5 (see Fig. FIGREF6 and section SECREF2). It is a two-level architecture, where at level 1, each sentence in the document is separately encoded by the same sentence encoder, resulting in a sequence of sentence vectors. That sequence is then processed at level 2 by the document encoder which returns a single vector representing the entire document. The sentence and document encoders are both self-attentional bidirectional Recurrent Neural Networks (RNNs), with different parameters. <<<Observed problem>>> HAN was highly successful and established new state of the art on six large-scale sentiment and topic classification datasets. However, it has a major weakness: at level 1, each sentence is encoded in isolation. That is, while producing the representation of a given sentence in the document, HAN completely ignores the other sentences. This lack of communication is obviously suboptimal. For example, in Fig. FIGREF2, the same highly negative feature (“terrible value”) has been repeated at the beginning of each sentence in the document. Because it encodes each sentence independently, HAN has no choice but to spend most of its attentional budget on the most salient feature every time. As a result, HAN neglects the other aspects of the document. On the other hand, CAHAN is informed about the context, and thus quickly stops spending attention weight on the same highly negative pattern, knowing that is has already been covered. 
CAHAN is then able to cover the other topics in the document (“seafood”,“scallops” and “mussels”; “entree” and “appetizer”; triple negation in the fourth sentence). As another example, consider the edge case of a document containing the same sentence repeated several times, as shown in Fig. FIGREF3. With HAN, the exact same embedding is produced for each instantiation of the sentence, as a result of the context-blind self-attention mechanism always making the same alignment decisions. However, the context-aware sentence encoder of CAHAN allows it to extract complementary, rather than redundant information, from each instantiation of the sentence. This results in better coverage (“reasonably priced”, “arrived late”), in a richer document representation, and ultimately in a more accurate prediction (positive instead of very positive). One may argue that in basic HAN, the document encoder at level 2 already does capture some notion of context, by assigning importance scores to sentences. However, at level 2, the sentence vectors have already been formed, and it is too late to modify them. Since the document encoder can only rank the sentence representations, it cannot address issues like high redundancy. In that case, important subtopics or details in the document will not be covered, no matter sentence scores. <<</Observed problem>>> <<<Context-aware HAN>>> In this work, we propose and evaluate several modifications of the HAN architecture that allow the sentence encoder at level 1 to make its attentional decisions based on contextual information, allowing it to learn richer document representations. Another significant contribution is the introduction of a bidirectional version of the document encoder, where one RNN processes the document forwards, using the preceding sentences as context, and another one processes it backwards, using the following sentences as context. The remainder of this paper is structured as follows. We start by formally introducing basic HAN (section SECREF2), we then explain our contributions (section SECREF3), and detail our experimental setup (section SECREF4). Finally, we interpret our results and list areas of future development (sections SECREF5 and SECREF7). Related work is reviewed in section SECREF6. <<</Context-aware HAN>>> <<</Introduction>>> <<<HAN>>> The baseline HAN model as introduced by BIBREF5 is shown in Fig. FIGREF6 along with our modifications (disregard the bold lines for the baseline). The sentence and document encoders, used respectively at level 1 and level 2, have different parameters but share the exact same architecture. Thus, in what follows, we only describe the sentence encoder in detail. <<<Notation>>> Next, we use boldface upper case for tensors, upper case for matrices, boldface lower case for vectors, and lower case for scalars. We define a document $\mathbf {X} \in \mathbb {R}^{N \times T_i \times d}$ as a sequence of $N$ sentences $(S_1, \dots , S_N)$. Each sentence $S_i$ is a sequence of $T_i$ $d$-dimensional word vectors $(\mathbf {x}_{i1}, \dots , \mathbf {x}_{iT_i}) \in \mathbb {R}^{T_i \times d}$. <<</Notation>>> <<<Sentence encoder>>> First, the sentence-level bidirectional RNN $f_s$ processes the input sentence $S_i$ and returns a sequence of $T_i$ $2d_s$-dimensional hidden states $(\mathbf {h}_{i1},\dots , \mathbf {h}_{iT_i}) \in \mathbb {R}^{T_i \times 2d_s}$. 
$f_s$ is composed of two non-stacking RNNs $\overrightarrow{f_s}$ and $\overleftarrow{f_s}$ with Gated Recurrent Units BIBREF8, respectively parsing $S_i$ from left to right and right to left: $\overrightarrow{f_s}$ and $\overleftarrow{f_s}$ have the same hidden layer dimensionality $d_s$, but different parameters. At each time step $t$, the word annotations they return are concatenated, producing $2d_s$-dimensional annotations that summarize the immediate context surrounding each word: Then, a self-attention mechanism computes the representation $\mathbf {s}_i$ of sentence $S_i$ as a weighted sum of its word annotations: Where the vector of attentional coefficients $\mathbf {\alpha }$ is a softmax-normalized version of the alignment vector $\mathbf {e}$, which itself is obtained by passing the word annotations through a dense layer (parameterized by $W_s \in \mathbb {R}^{2d_s\times 2d_s}$) and comparing the output with a trainable vector $\mathbf {u}_s \in \mathbb {R}^{2d_s}$: $\mathbf {u}_s$ is initialized randomly. It can be interpreted as a “super-word” whose vector contains the ideal combination of latent topics, on average. The closest the annotation of a word is to this ideal representation, the more attention that word will be given. The sentence encoder is applied to all sentences in document $\mathbf {X}$, producing a sequence of $N$ sentence vectors $(\mathbf {s_1},\dots ,\mathbf {s_N}) \in \mathbb {R}^{N\times 2d_s}$. <<</Sentence encoder>>> <<<Document encoder>>> The document encoder is a self-attentional bidirectional GRU-RNN, like the sentence encoder, but it has different parameters. The dimensionality of its hidden states is $2d_d$. The document encoder is applied only once, to the sequence of sentence vectors, to produce the sequence of sentence annotations $(\mathbf {h}_{1}, \dots , \mathbf {h}_{N})$. Then, a self-attention layer outputs the final document vector. <<</Document encoder>>> <<</HAN>>> <<<Proposed architecture: CAHAN>>> As was previously explained, each sentence is encoded independently by HAN, without considering any kind of contextual information. To solve this issue, we inject a context vector $\mathbf {c_i}$ into the self-attention mechanism, to guide the model during the computation of the word alignment coefficients. In effect, Eq. DISPLAY_FORM12 becomes: We propose two approaches for computing $\mathbf {c_i}$, namely CAHAN-SUM and CAHAN-RNN, shown as the two bolded connections in Fig. FIGREF6. <<<Summed context (CAHAN-SUM)>>> We introduce two settings, (1) left-to-right and bidirectional. Whenever there is no preceding/following sentence, i.e., at the beginning/end of a document, the context vector is initialized with zeroes. <<<Left-to-right (LR)>>> In the LR case, the context vector is computed as the sum of the preceding sentence representations: <<</Left-to-right (LR)>>> <<<Bidirectional (BI)>>> In the BI case, we compute two context vectors, respectively by summing the representations of the sentences preceding and following the current sentence $S_i$. These two vectors are passed to two identical context-aware self-attention mechanisms (Eq. DISPLAY_FORM14) with different parameters. The resulting forward and backward sentence representations are then processed respectively by the forward and backward RNNs of the document encoder at level 2, and the resulting annotations are concatenated to produce the final sentence annotations. 
CAHAN-SUM was inspired by the coverage vectors of seq2seq architectures, which have been shown very effective in addressing under(over)-translation in NMT BIBREF9, and repetition in summarization BIBREF10. Such coverage vectors are typically computed as the sum, over all previous decoder steps, of the attention distribution over the source tokens. However, in our case, we cannot keep track of the attention distribution history, since sentences are unique and cannot be aligned. This is why we work with sentence representations instead. <<</Bidirectional (BI)>>> <<<Centroid version (@!START@$\mu $@!END@)>>> $\overrightarrow{\mathbf {c}_i}$, as defined by Eq. DISPLAY_FORM17, grows larger in magnitude as $i$ increases (the sum has more and more terms), which can blur the alignment decisions for the sentences at the end of a document (LR case), or both at the end and beginning of a document, when reading forwards and backwards (BI case). Therefore, we also experiment with a centroid, rather than sum, context vector: <<</Centroid version (@!START@$\mu $@!END@)>>> <<</Summed context (CAHAN-SUM)>>> <<<Recurrent Context (CAHAN-RNN)>>> Here, we capitalize on the capability of RNNs, especially when equipped with LSTM or GRU units, to keep track of information over long time periods. We simply use as context vector the document encoder annotation at the preceding/following time step. That is, we have, in the LR case: By design, $\mathbf {h}_{i-1}$ summarizes the entire history $(\mathbf {s_1},\dots ,\mathbf {s_{i-1}})$ of sentence vectors, with a preference for the most recent time steps. If the sequence is very long though, even a GRU-RNN will eventually forget about the first elements. However, for the relatively short documents we experiment with (see Table TABREF29), we can assume the annotations of the document encoder to faithfully represent the entire sequence. <<</Recurrent Context (CAHAN-RNN)>>> <<<Gated context>>> In NMT, BIBREF11 introduced a gating mechanism to allow the decoder to balance the contribution of the source and target information in generating the next word. The same idea can be found in numerous other NMT studies, e.g., BIBREF2, BIBREF12, BIBREF13. Inspired by this line of research, we propose a modification of Eq. DISPLAY_FORM14 to let our model explicitly decide how much contextual information it should take into account in making its alignment decisions: $\mathbf {\lambda }$ is produced by a trainable mechanism taking as input the word annotations and the context vector: The sigmoid activation ensures that $\mathbf {\lambda }$ plays a filtering role, by squashing all its entries to $[0,1]$. The gate gives more expressiveness to the attention mechanism. Indeed, contextual information should not always be given the same importance, depending on the situation. E.g., when most of the document has been processed, context is likely to be very important, in order to limit redundancy and increase coverage. However, at the beginning of a document, or in the case of a very short or focused sentence, context might not be useful as only one single topic might be extractable from the sentence anyways. From an optimization perspective, $\mathbf {\lambda }$ also has the desirable effect of regulating the magnitude of the context vector, preventing it from pushing the tanh to regions of very small gradient. This is especially useful with CAHAN-SUM, as in that case, $\mathbf {c}_i$ gets large towards the end/beginning of documents (forwards/backwards reading). 
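The magnitude issue raised just above can be seen directly by computing the CAHAN-SUM context vectors. The sketch below is an illustrative re-implementation, not the authors' code: it builds the forward and backward contexts as cumulative sums of the preceding/following sentence vectors, with the centroid ($\mu$) variant dividing by the number of summed sentences so that its norm stays roughly constant along the document.

```python
# Illustrative CAHAN-SUM context vectors (sum vs. centroid), assumed shapes.
import torch

def cahan_sum_contexts(S, centroid=False):
    """S: (N, d) sentence vectors of one document. Returns forward and backward contexts."""
    N, d = S.shape
    zeros = torch.zeros(1, d, dtype=S.dtype)
    fwd = torch.cat([zeros, S.cumsum(dim=0)[:-1]])                   # sum over sentences j < i
    bwd = torch.cat([S.flip(0).cumsum(dim=0)[:-1].flip(0), zeros])   # sum over sentences j > i
    if centroid:
        counts = torch.arange(N, dtype=S.dtype).clamp(min=1).unsqueeze(1)
        fwd, bwd = fwd / counts, bwd / counts.flip(0)                # average instead of sum
    return fwd, bwd

S = torch.randn(8, 100)                                              # 8 toy sentence vectors
for centroid in (False, True):
    fwd, _ = cahan_sum_contexts(S, centroid=centroid)
    print("centroid" if centroid else "sum", fwd.norm(dim=1).round())  # sum norms grow along the doc
```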
<<</Gated context>>> <<<Complexity and sequentiality>>> Assuming that $d \sim 2d_s$ and that $d_s \sim d_d$, which holds in practice under reasonable settings, all matrix multiplications in the network have similar complexity, of order of magnitude $\mathcal {O}(d^2)$. Moreover, since we use GRU-RNNs, there are 6 matrix multiplication per encoder. This number is doubled, as we use bidirectional RNNs. Finally, the two self-attention mechanisms, one at each level, add two multiplications. Therefore, in the HAN baseline architecture, there are a total of 26 matrix multiplications (13 at each level). To that, CAHAN-SUM and CAHAN-RNN simply add one matrix multiplication ($W_c\mathbf {c}_i$ in Eq. DISPLAY_FORM14) in the LR case and two in the BI case. This corresponds to negligible 4% and 8% increases in total computational cost. On top of that, gating adds two multiplications in the LR case ($W_{\lambda _1}\mathbf {h}_{it}$ and $W_{\lambda _2}\mathbf {c}_i$ in Eq. DISPLAY_FORM25) and four in the BI case. All in all, this represents three and six extra multiplications compared to basic HAN, resp. in the LR and BI cases. Again, this corresponds to small increases in computational cost, of 11.5% and 23%, respectively. However, with CAHAN-SUM, the representations of the preceding/following sentences are now required before computing the current sentence representation. With CAHAN-RNN, one even has to wait until the level 2 RNN has processed the preceding/following sentence vectors before being able to encode the current sentence. Therefore, the sentence encoding process, which was parallelizable with basic HAN due to independence, has now become a sequential process. This is why in practice, we observe slightly greater runtime increases, in the range 5-22% (see Table TABREF43). <<</Complexity and sequentiality>>> <<</Proposed architecture: CAHAN>>> <<<Experimental setup>>> <<<Datasets>>> We evaluate the quality of the document embeddings learned by the different variants of CAHAN and the HAN baseline on three of the large-scale document classification datasets introduced by BIBREF14 and used in the original HAN paper BIBREF5. They fall into two categories: topic classification (Yahoo) and fine-grained sentiment analysis (Amazon, Yelp). Dataset statistics are shown in Table TABREF29. Classes are perfectly balanced, for all datasets. <<</Datasets>>> <<<Model configuration>>> This subsection describes the preprocessing and hyperparameter setting we used. <<<Preprocessing and word embeddings>>> For preprocessing (and the HAN baseline), we used the publicly available implementation of BIBREF15, which closely follows the description and details given in the original HAN paper BIBREF5. More precisely, on each dataset, we randomly split the training set into training (90%) and validation (10%). Documents are then tokenized into sentences and sentences are tokenized into tokens. The tokens appearing less than 5 times in the corpus are replaced with a special UNK token. Finally, we pre-train our own word vectors with word2vec BIBREF16 on the training and validation splits. <<</Preprocessing and word embeddings>>> <<<Hyperparameters>>> We do not tune any hyperparameter except the learning rate (see subsection SECREF35). We set the hidden layer dimensionality of the two RNN encoders to $d_s=50$ and $d_d=50$. Thus, the word annotations, sentence vectors, sentence annotations and document vector all have size 100. 
With regularization in mind, we set the dimensionality of the word embeddings to $d=200$ on the very large datasets (Amazon and Yahoo!) and to $d=100$ on Yelp, as shown in Table TABREF29. We also use a greater batch size of 128 on the large datasets, versus 64 on Yelp. <<</Hyperparameters>>> <<</Model configuration>>> <<<Training details>>> We zero-pad sentences and documents. Like in BIBREF5, to make the most out of each batch, we ensure they are as dense as possible by using a bucketing strategy. More precisely, we build each batch so that it contains documents of approximately the same size, in number of sentences. For regularization, we use dropout BIBREF17 with a rate of 0.5 at each layer. For classification, the document vectors are passed to a dense layer with softmax activation, whose dimensionality is equal to the number of categories to be predicted. Initialization has a significant impact on performance. To make sure the differences we measure are due to differences in the models and not in initial condition, we use the same initialization weights for each model. <<<SGD with cyclical learning rate>>> To minimize the categorical cross-entropy loss, we use the stochastic gradient descent optimizer with a triangular cyclical learning rate schedule and opposite triangular momentum schedule BIBREF18, BIBREF19. Following the authors' recommendations, we use a fixed $[0.85,0.95]$ momentum range, while for the learning rate, we perform a range test on the validation set, for each model, searching the $[0.001,3]$ range. With a triangular schedule, the learning rate linearly increases for a certain number of iterations (half-cycle), and then linearly decreases back to its initial value during the second half of the cycle. Cycles are repeated until training ends. High learning rate values make training faster, by allowing large updates and the use of greater batch sizes while keeping the amount of regularization constant. Also, the cyclical schedule injects beneficial stochastic noise to the gradient updates, which improves generalization BIBREF20. We use cycles of 12 epochs, and an early stopping strategy, monitoring the test loss, with a patience of slightly more than one cycle. We set the maximum number of epochs for all models to 50. <<</SGD with cyclical learning rate>>> <<</Training details>>> <<</Experimental setup>>> <<<Results>>> As can be seen in Table TABREF37, the best version of CAHAN (SUM-BI-$\Sigma $) consistently outperforms the HAN baseline, which shows that taking contextual information into account helps producing better document representations. Also, the two unidirectional variants (LR) slightly underperform the baseline and are clearly inferior to BI, which illustrates the value added by processing the document forwards and backwards, using preceding and following sentences as context. <<<Summing vs. averaging>>> In the unidirectional case, it is surprising to note that CAHAN-SUM-LR-$\mu $ is slightly better than CAHAN-SUM-LR-$\Sigma $, i.e., the centroid-based context vector (Eq. DISPLAY_FORM20) is better than the sum-based one (Eq. DISPLAY_FORM17). Indeed, from an information theory standpoint, it should be the opposite, as summing keeps track of all information whereas averaging is lossy. We hypothesize that towards the end of a document, the sum-based context vector grows large in magnitude, which perturbs the alignment decisions and deteriorates the quality of the sentence vectors. 
On the other hand, the centroid-based vector, which has constant magnitude, does not suffer from this issue. We further hypothesize that this issue is attenuated in the bidirectional case (CAHAN-SUM-BI-$\mu $ and CAHAN-SUM-BI-$\Sigma $ are on par) due to a counterbalancing phenomenon. Indeed, the last sentences processed by the left-to-right encoder are the first ones processed by the right-to-left encoder. Therefore, through concatenation, the overall quality of the sentence embeddings stays constant. <<</Summing vs. averaging>>> <<<Gating>>> As expected, gating improves performance, especially for the $\Sigma $ variants of CAHAN-SUM (and especially the LR ones). To be noted are significant boosts of 0.45 and 0.24 in accuracy respectively for CAHAN-SUM-LR-$\Sigma $ and CAHAN-SUM-BI-$\Sigma $ on Yelp. On Amazon, gating also offers CAHAN-SUM-LR-$\Sigma $ a nice 0.27 improvement. These positive results give a clue that regulating the magnitude of the context vector $\mathbf {c}_i$ is indeed beneficial. Nevertheless, gating also improves the performance of the $\mu $ variants of CAHAN, which do not suffer from the context vector magnitude issue. This shows that gating is also helpful via giving more expressiveness to the model. For instance, on Amazon, gating boosts the performance of CAHAN-SUM-BI-$\mu $ by 0.12. It is interesting to note that overall, gating is mostly effective on Yelp and Amazon. We attribute this to the difference in task. Sentiment analysis may rely more on contextual information than topic classification. <<</Gating>>> <<<CAHAN-RNN-BI>>> The consistently bad performance of CAHAN-RNN-BI is to be noted. This was unexpected, as an equivalent approach was used by BIBREF6 for dialogue act classification, with significant improvements. We hypothesize that in our case, CAHAN-RNN-BI is not effective because, unlike utterances in a speech transcription, sentences in a document are not ordered in a temporal fashion. In other words, sentences far away from the current sentence are not necessarily less relevant than closer sentences. Thus, considering each sentence equally is better than imposing an implicit time-decay via a RNN. <<</CAHAN-RNN-BI>>> <<<Runtimes>>> We compare the average runtime per iteration of some variants of CAHAN to that of basic HAN in Table TABREF43. For CAHAN-SUM-$\Sigma $, we observe that the unidirectional variant (LR) is 5.7% slower than basic HAN (37 vs. 35ms per iteration), whereas the bidirectional variant (BI) is 23% slower (43 vs. 35 ms). When gating, these number increase to 14.3% and 37% (40 and 48ms vs. 35ms). These differences are not far from our theoretical expectations (see subsection SECREF26), especially for LR. Indeed, recall that based on matrix multiplication counts, we had forecasted increases of 4% and 8% (11.5% and 23% when using gating), respectively for LR and BI. The gap for BI can be explained by a probable bottleneck in the implementation. CAHAN-RNN adds the same number of matrix multiplications as CAHAN-SUM, so we should in principle observe the same increases. However, as was explained in subsection SECREF26, with CAHAN-RNN we have to wait until the level 2 RNN has processed the preceding or preceding/following sentence vectors (LR or BI case) before being able to encode the current sentence. This explains the extra-time needed (40 vs. 37ms and 49 vs. 43ms). <<</Runtimes>>> <<</Results>>> <<<Related work>>> In what follows, we provide a review of the relevant literature. 
One should note that by context, in this paper, we do not refer to the intra-sentence or internal context vector of seq2seq encoders BIBREF21, BIBREF11, BIBREF13. Rather, we refer to the cross-sentence, external, or document-level context. A few studies only have focused on developing models that take that type of context into account. Most of these studies originate from NMT. We briefly describe them next. BIBREF2 obtain a global context vector by feeding a fixed number of the previous source sentences to HAN. They then compare two ways of injecting it into the encoder-decoder model. First, they propose a warm-start approach, in which the encoder and/or decoder hidden states are initialized with the context vector. Second, they experiment with an auxiliary strategy in which the intra-sentence context vector of the encoder is concatenated with the global context vector and passed either (i) directly to the decoder, or (ii) after going through a filtering gate. However, unlike our mechanism and that of BIBREF11, BIBREF12, BIBREF13, which all feature two coupled gates, the mechanism of BIBREF2 has only one gate. All strategies proposed by BIBREF2 significantly improve performance, but first place is reached by a combination of the warm-start and gated techniques. BIBREF22 use an approach similar to the auxiliary approach of BIBREF2, but they compute the context vector only from the sentence immediately preceding the current source sentence. They then pass it to a dedicated encoder featuring a customized attention mechanism. BIBREF12 and BIBREF23 both extend the Transformer architecture BIBREF24 with a context encoder featuring self-attentional and feed-forward layers. Then, BIBREF12 combine the context representation with the source representation produced by the basic Transformer encoder via a gating mechanism. They do not modify the decoder part of the Transformer. BIBREF23 go one step further by passing the contextual information both to the encoder and the decoder. In both cases, they add a self-attention mechanism over the context representation. For the decoder though, they also replace the residual connection after the context self-attention with a gating mechanism, to limit the influence of the context information on the source information. One piece of work closely related to our study is BIBREF3. The authors also use a hierarchical attention architecture, where at level 1, each paragraph of a document is encoded by a dedicated encoder. All encoders share the same stacking bi-RNN architecture. Moreover, they communicate at each layer to produce context-aware annotations of the words in their paragraphs. More precisely, at a given layer of the stacking RNN, a given encoder is passed the average of the representations learned by the other encoders at the corresponding layer (like with CAHAN-SUM-$\mu $). This context vector is then combined with the hidden states and passed as input to the upper RNN layer. At level 2, the top RNN layer annotations are passed to a word attention mechanism followed by a paragraph attention mechanism. A major difference with our work is that the authors combine the encoder with a decoder, to perform abstractive summarization of long documents, whereas we only focus on the encoding part. The word and paragraph attentional decisions at level 2 are thus made by the decoder. Another significant difference is that the authors use reinforcement learning for training, instead of SGD. Context-aware models have also been proposed in other NLP domains. 
E.g., for spoken language understanding, BIBREF7 prepend and append the current utterance with two special word vectors summarizing the $C$ preceding and following utterances, respectively, where $C$ is a hyperparameter. This indirectly initializes the hidden states of the left-to-right and right-to-left components of a bidirectional RNN, as with the warm-start approach of BIBREF2. On the other hand, BIBREF6 rely on a mechanism equivalent to LR-CAHAN-RNN. They find that it significantly boosts dialogue act classification accuracy. As discussed in section SECREF5, we hypothesize that CAHAN-RNN is not effective in our application because sentences in a document are not ordered in a temporal manner. <<</Related work>>> <<<Discussion and next steps>>> While bidirectional CAHAN-SUM systematically outperforms HAN, margins are modest. We attribute this to the fact that the datasets used in our experiments contain short documents (see Table TABREF29) featuring simple sentences. Thus, the superior expressiveness of CAHAN has little opportunity to show. To address this issue, we plan in future work to experiment on datasets featuring long documents containing complex sentences. Moreover, the tasks of sentiment and topic classification do not require a deep understanding of the input documents. Even if a given document contains some complex sentences with multiple clauses and subtopics, capturing the polarity of only one simple, unambiguous sentence or pattern may be enough to accurately predict the category of the entire document (e.g., “hated”, “loved”, “definitely recommends”, “will never come back”, etc.). Thus, we hypothesize that when trained to solve such tasks, CAHAN does not learn to use its context-aware capabilities to the fullest extent. One solution, and a promising area of future work, would be to explicitly give CAHAN knowledge about coverage, diversity, and redundancy. This could be done by modifying the sentence attention mechanism and/or by adding a term to the loss. Another natural next step is to experiment on tasks requiring a deeper understanding of text, such as end-to-end abstractive summarization. Some other ideas for improvement include combining CAHAN-SUM with CAHAN-RNN, and/or the mean and centroid vectors; for CAHAN-SUM, obtaining the centroid vector through a trainable mechanism rather than via pooling; and experimenting with a trainable matrix (instead of a vector) in the self-attention at both level 1 and level 2, like in BIBREF25. Finally, the context vector could be seen as an external, general summary of the document, and be pre-computed offline by a dedicated encoder. <<</Discussion and next steps>>> <<<Conclusion>>> In this paper, we proposed several modifications of the HAN architecture that make the sentence encoder context-aware (CAHAN). Results show that taking context into account is beneficial. Specifically, the bidirectional version of the document encoder, which processes the documents forwards and backwards, using the preceding and following sentences as context, outperforms the HAN baseline on all datasets and is superior to the unidirectional variant. Moreover, the computational overhead is small. Experiments on tasks requiring a deeper understanding of the input documents should better highlight the superiority of CAHAN. <<</Conclusion>>> <<</Title>>>
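To make the summing, centroid, and gating behaviour discussed above concrete, here is a minimal sketch of how a CAHAN-SUM-style context vector could be built from already-encoded sentence vectors and rescaled by a gate. It is an illustrative reconstruction, not the authors' code: the embedding dimension, parameter names, and the single-gate formula are assumptions (the paper's mechanism uses two coupled gates).

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # sentence-embedding dimension (illustrative)

def context_vector(prev_sents, mode="sum"):
    """CAHAN-SUM-style context for the current sentence, built from the sentence
    vectors already produced by the document encoder.
    mode="sum"      -> Sigma variant: magnitude grows with the number of sentences
    mode="centroid" -> mu variant: constant magnitude (mean of the vectors)"""
    if not prev_sents:
        return np.zeros(DIM)
    stacked = np.stack(prev_sents)
    return stacked.sum(axis=0) if mode == "sum" else stacked.mean(axis=0)

def gate(h, c, W_h, W_c):
    """A sigmoid gate that rescales the context vector before it is combined with
    the current hidden state h (one plausible, simplified form of the gating idea)."""
    lam = 1.0 / (1.0 + np.exp(-(W_h @ h + W_c @ c)))  # element-wise values in (0, 1)
    return lam * c

# Toy left-to-right pass over a 5-sentence document.
W_h, W_c, sent_vecs = rng.normal(size=(DIM, DIM)), rng.normal(size=(DIM, DIM)), []
for i in range(5):
    h_i = rng.normal(size=DIM)                      # stand-in for the sentence encoder output
    c_sum = context_vector(sent_vecs, "sum")        # norm grows with i
    c_mu = context_vector(sent_vecs, "centroid")    # norm stays roughly constant
    print(i, round(float(np.linalg.norm(c_sum)), 2),
          round(float(np.linalg.norm(c_mu)), 2),
          round(float(np.linalg.norm(gate(h_i, c_sum, W_h, W_c))), 2))
    sent_vecs.append(h_i)
```

The printed norms show the summed context growing with the number of preceding sentences while the centroid stays bounded, which is the magnitude issue the gate is meant to regulate.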
{ "references": [ "large-scale document classification datasets introduced by BIBREF14" ], "type": "extractive" }
1909.02776
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What dataset is used for this task? Context: <<<Title>>> Features in Extractive Supervised Single-document Summarization: Case of Persian News <<<Abstract>>> Text summarization has been one of the most challenging areas of research in NLP. Much effort has been made to overcome this challenge by using either the abstractive or extractive methods. Extractive methods are more popular, due to their simplicity compared with the more elaborate abstractive methods. In extractive approaches, the system will not generate sentences. Instead, it learns how to score sentences within the text by using some textual features and subsequently selecting those with the highest-rank. Therefore, the core objective is ranking and it highly depends on the document. This dependency has been unnoticed by many state-of-the-art solutions. In this work, the features of the document are integrated into vectors of every sentence. In this way, the system becomes informed about the context, increases the precision of the learned model and consequently produces comprehensive and brief summaries. <<</Abstract>>> <<<Introduction>>> From the early days of artificial intelligence, automatically summarizing a text was an interesting task for many researchers. Followed by the advance of the World Wide Web and the advent of concepts such as Social networks, Big Data, and Cloud computing among others, text summarization became a crucial task in many applications BIBREF0, BIBREF1, BIBREF2. For example, it is essential, in many search engines and text retrieval systems to display a portion of each result entry which is representative of the whole text BIBREF3, BIBREF4. It is also becoming essential for managers and the general public to gain the gist of news and articles immediately, in order to save time, while being inundated with information on social media BIBREF5. Researchers have approached this challenge from various perspectives and have obtained some promising results BIBREF6, BIBREF7. However, this area continues to present more research challenges and has a long path to maturity. One method of investigating this challenge, is (supervised) extractive summarization. Extractive implementations use a ranking mechanism and select top-n-ranked sentences as the summary BIBREF8. Sentences of a document are represented as vectors of features. Using summarization corpora, a rank will be assigned to each sentence, based on its presence in several human-written summaries (golden summaries). The system should then learn how to use those features to predict the rank of sentences in any given text. Various machine learning approaches such as regression and classification algorithms are used to perform the ranking task BIBREF9, BIBREF10. As far as our knowledge goes, in all current implementations, sets of sentence vectors of every document are merged together to compose a larger set, which is then passed to the learning model as a matrix. In this approach, the locality of ranks is disregarded. In other words, the rank of sentences is highly relative to the context and document. A sentence might be ranked high in one document while being ranked lower in another. As a result, merging sentences of a whole dataset into a matrix removes document boundaries and a main source of information will be lost. 
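To illustrate the conventional setup criticized here, the sketch below stacks sentence feature vectors from every document into a single global training matrix, so document boundaries disappear. The feature definitions and target values are simplified placeholders, not the paper's implementation.

```python
import numpy as np

def sentence_features(sentence, position):
    """A few simple, document-unaware features (placeholders for illustration)."""
    words = sentence.split()
    return [1.0 / (position + 1),                                  # ordinal position as 1/k
            float(len(words)),                                     # raw length in words
            sum(w.isdigit() for w in words) / max(len(words), 1)]  # numeric-token ratio

def build_global_matrix(documents, targets):
    """Merge every sentence of every document into one matrix X and vector y.
    Once stacked, a row no longer carries any information about its document."""
    X, y = [], []
    for doc, doc_targets in zip(documents, targets):
        for i, (sent, t) in enumerate(zip(doc, doc_targets)):
            X.append(sentence_features(sent, i))
            y.append(t)
    return np.array(X), np.array(y)

docs = [["A short news story begins here.", "It has only 2 sentences."],
        ["A longer report starts with context.", "Then it adds more details.",
         "Numbers like 42 appear here.", "Finally it concludes."]]
tgts = [[0.9, 0.2], [0.8, 0.4, 0.6, 0.3]]       # hypothetical ranks from golden summaries
X, y = build_global_matrix(docs, tgts)
print(X.shape, y.shape)                         # (6, 3) (6,) -- one row per sentence
```

A regressor fitted on X and y treats rows from different documents as interchangeable, which is exactly the loss of locality addressed next.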
We addressed this issue by taking certain features of documents into account, such as their length and topical category, in addition to some new sentence features that also reflect document properties. Thus, more information will be provided to the model, and ranking could be done with respect to local features of the document. Our experiments show that this rectification leads to improvement in both the performance of the learned model and the quality of produced summaries. We also present a new baseline for the evaluation of extractive text summarizers which can be used to measure the performance of any summarizing method more accurately. The remainder of this paper is organized as follows. (Section SECREF2) reviews related works. (Section SECREF3) presents the proposed method and evaluation measures. (Section SECREF5) discusses how the experiments are set up. The results are discussed in (Section SECREF5), and finally (Section SECREF6) concludes the paper. <<</Introduction>>> <<<Related works>>> Text summarization has been widely studied by both academic and enterprise disciplines. Text summarization methods may be classified into different types. Based on input type, there are single-document BIBREF11, BIBREF12 vs multi-document summarization methods BIBREF13, BIBREF14, BIBREF15. Based on language, there are mono-lingual, bilingual and multi-lingual methods BIBREF16. There are also “query focused” methods in which a summary relevant to a given query is produced BIBREF17. From the perspective of procedure, however, there are two main approaches: abstractive vs extractive BIBREF18. Abstractive approaches try to generate a new short text based on the concepts understood from the original text BIBREF19. This usually requires a full pass through the NLP pipeline and is faced with many complexities and challenges BIBREF20. The abstractive approach relies on linguistic methods to examine and interpret the text in order to find new concepts and expressions. The output is a new shorter text which consists of the most important information from the original text document BIBREF8. Extractive approaches, on the other hand, select a few sentences from the document based on some measures in order to place them in a summary BIBREF8. A broad range of methods has been examined in this approach, including graph-based BIBREF8, BIBREF21, unsupervised BIBREF21, BIBREF22 and supervised (corpus-based) methods BIBREF9, BIBREF23, BIBREF24. In supervised methods, training data is generally needed to select important content from the documents. In these methods, the problem is usually reduced to a classification or regression problem, and machine learning techniques are applied to a dataset of documents and their gold summaries, represented by some features. Support Vector Machines (SVM) BIBREF25 and neural networks BIBREF26 are among the more popular sentence classification algorithms. The key step in extractive summarization is to determine the importance of sentences in the document BIBREF27. Previous studies examine the ordinal position of sentences BIBREF28, BIBREF29, the length of sentences BIBREF9, the ratio of nouns, verbs, adjectives, and adverbs BIBREF30, the ratio of numerical entities BIBREF31, BIBREF32, and cue words BIBREF28.
Gupta and Lehal in their survey of text summarization techniques list the following groups of features: content-based, title-based, location-based, length-based, proper noun and upper-case word-based, font-based, specific phrase-based, and features based on sentence similarity to other sentences in a text BIBREF8. Previous studies use different sentence features such as terms from keywords/key phrases, terms from user queries, frequency of words, and position of words/sentences for text summarization BIBREF33. However, in most cases, selection and weighting of features are an important matter of debate. Some works have been carried out with respect to this BIBREF34, but none, to the best of our knowledge, has shown that target attribute is highly related to the scope of the document. It is occasionally mentioned but not included in practice. For instance, Ferreira et al studied various combinations of sentence scoring methods on three types of documents in BIBREF6 and BIBREF31 and concluded that the weight of features varies, dependent on the properties of context: “the effectiveness of sentence scoring methods for automatic extractive text summarization algorithms depends on the kind of text one wants to summarize, the length of documents, the kind of language used, and their structure.”. JY Yeh et al in BIBREF35 utilized a Genetic Algorithm (GA) to find the weight of features for calculating sentence scores. However, their following statement implies that performance of weights is generally dependent to genre, that could be seen as a feature of context: “It cannot be guaranteed that the score function whose feature weights are obtained by GA definitely performs well for the test corpus; nevertheless, if the genre of the test corpus is close to that of the training corpus, we can make a prediction that the score function will work well.” BIBREF35. Berenjkoub et al studied the effectiveness of various subsets of features in summarization of distinct sections of scientific papers BIBREF36. They showed that some features work well only in some specific portion of text, for example, on the abstract section, while others perform better on the methodology section. This could be considered to be a consequence of differences in the structure and context of each section. All the above studies imply the significance of document context in ranking. Nevertheless, it has not been given enough attention in the NLP community, and even sometimes is neglected. For instance, authors in BIBREF30 suggest the use of a wide range of various features. Among these, seventeen part-of-speech based sentences features have been introduced, all of which are sentence-normalized, but not document-normalized, i.e. they count the ratio of a syntactic unit e.g. verbs, divided by the number of words in a sentence. Such features do not consider the total number of those units, e.g. verbs, in the whole document. Our work contributes to this line of research and includes document features in the learning and ranking processes. <<</Related works>>> <<<Incorporating Document Features>>> As a way to investigate the need for document features in sentence ranking (as explained in the introduction and related works), we introduced several document-level features and incorporated them in the summarization process. These features are listed under subsection (SECREF4). 
Although stages of our method do not differ from general supervised extractive summarization, the whole process is explained in order to clarify the method of investigation. Every supervised summarization has two phases. The first is the “Learning Phase”, a corpus of ideal summaries is used to train the system how to rank sentences. The second is the “Summarization Phase”, where the system applies its learning gained from the first phase, in order to rank the sentences of a new given text. A process of selection is then performed to form a summary. Each of these phases has several intricacies which are briefly described in the following sections. <<<Learning Phase>>> The input to this phase is a dataset of documents, each of which is associated with several human-written summaries. The output is a learned model with a good level of accuracy that is able to reliably predict the rank of sentences, in almost the same way that a human may rank them. To accomplish this, it is necessary to first perform normalization and transform various forms of phrases into their canonical form. Then, every text should be tokenized to sentences, and further tokenized to words. Another prerequisite is to remove stop words. The following subtasks should be carried out next. <<<Feature Extraction>>> Foremost, it is necessary to represent each sentence with those features that have the most distinguishing effect on the prediction of the rank. Many features have been examined in the literature. We entitle some as “document-aware” because they do implicitly represent some information about a document. However, other features have been used, that say nothing about the document in which they appeared. We call them “document-unaware”. In the previous sections, we argued that this lack of information might be misleading for the system, especially when we train it with sample sentences from different documents. Thus, we modified some document-unaware features and derived new features that cover document properties. We also examined the effect of incorporating explicit features of a document into vectors of its sentences. The following sub-sections describe the features mentioned above in more detail. <<<Document-unaware Features>>> Ordinal position: It is shown that inclusion of sentence, in summary, is relevant to its position in the document or even in a paragraph. Intuitively, sentences at the beginning or the end of a text are more likely to be included in the summary. Depending on how it is defined, this feature might be document-unaware or not. For example, in BIBREF29 and BIBREF37 it is defined as $\frac{5}{5}$ for the first sentence, $\frac{4}{5}$ for the second, and so on to $\frac{1}{5}$ for fifth and zero for remaining sentences. In another research conducted by Wong et al. BIBREF9, it is defined as $\frac{1}{sentence\ number}$. With such a definition, we may have several sentences, for example, with position=$\frac{1}{5}$ in the training set, these may not have the same sense of position. While a sentence position=$\frac{1}{5}$ means “among the firsts” in a document with 40 sentences, it has a totally different meaning of “in the middle”, in another document containing 10 sentences. Thus, a useful feature formula should involve differences of documents which may change the meaning of information within it. In our experiments, we used the definition of BIBREF9. A document-aware version of position will be introduced in (SECREF6). 
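A small numeric example makes the ambiguity of the $\frac{1}{sentence\ number}$ definition concrete; the helper below is purely illustrative and not taken from the paper's code.

```python
def ordinal_position(sentence_number):
    """Document-unaware position feature: 1 / sentence_number."""
    return 1.0 / sentence_number

# The same feature value can describe very different situations:
for total_sentences in (40, 10, 5):
    value = ordinal_position(5)                 # the fifth sentence of each document
    relative = 5 / total_sentences              # where that sentence actually sits
    print(f"document with {total_sentences:2d} sentences: "
          f"feature = {value:.2f}, relative position = {relative:.2f}")
```

The feature equals 0.20 in all three cases, yet the fifth sentence is near the beginning of a 40-sentence document and at the very end of a 5-sentence one; this is the ambiguity that the document-aware variant in (SECREF6) removes.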
Length of sentence: the intuition behind this feature is that sentences of too long or too short length are less likely to be included in the summary. Like sentence position, this feature is also subject to the wrong definition that makes it document-unaware. For example, in BIBREF9 it is defined as a number of words in a sentence. Such a definition does not take into account that a sentence with, say 15 words may be considered long if all other sentences of document have fewer words. Another sentence with the same number of words may be regarded as short, because other sentences in that document have more than 15 words. This might occur due to different writing styles. However, we included this in our experiments to compare its effect with that of its document-aware counterpart, which will be listed in (SECREF6). The Ratio of Nouns: is defined in BIBREF30 as the number of nouns divided by total number of words in the sentence, after stop-words are removed. Three other features, Ratio of Verbs, Ratio of Adjectives, and Ratio of Adverbs are defined in the same manner and proved to have a positive effect on ranking performance. From our perspective, however, a sentence with a ratio of nouns =0.5, for example, in a document containing many nouns, must be discriminated in the training set from another sentence with the same ratio of nouns, that appeared in another document having fewer nouns. This feature does not represent how many nouns are there in the document, which is important in sentence ranking. The same discussion goes on to justify the need to consider the number of verbs, adjectives, and adverbs in the document. The impact of these features is examined in our experiments and compared to that of their document-aware counterparts. The Ratio of Numerical entities: assuming that sentences containing more numerical data are probably giving us more information, this feature may help us in ranking BIBREF31, BIBREF32. For calculation, we count the occurrences of numbers and digits proportional to the length of sentence. This feature must be less weighted if almost all sentences of a document have numerical data. However, it does not count numbers and digits in other sentences of the document. Cue Words: if a sentence contains special phrases such as “in conclusion”, “overall”, “to summarize”, “in a nutshell” and so forth, its selection as a part of the summary is more probable than others. The number of these phrases is counted for this feature. <<</Document-unaware Features>>> <<<Document-aware Features>>> Cosine position: As mentioned in (SECREF5) a good definition of position should take into account document length. A well-known formula used in the literature BIBREF38, BIBREF7 is in which index is an integer representing the order of sentences and T is the total number of sentences in document. This feature ranges from 0 to 1, the closer to the beginning or to the end, the higher value this feature will take. $\alpha $ is a tuning parameter. As it increases, the value of this feature will be distributed more equally over sentences. In this manner, equal values of this feature in the training set represent a uniform notion of position in a document, so it becomes document-aware. Relative Length: the intuition behind this feature is explained in (SECREF5). A discussion went there that a simple count of words does not take into account that a sentence with a certain number of words may be considered long or short, based on the other sentences appeared the document. 
Taking this into consideration, we divided the number of words in the sentence by the average length of sentences in the document. More formally, the formula is $RelativeLength(s_i) = \frac{|s_i|}{\frac{1}{n}\sum _{j=1}^{n}|s_j|}$, in which $n$ is the number of sentences in the document, $s_i$ is the i’th sentence of it, and $|s_i|$ denotes the number of words in $s_i$. Values greater than 1 could be interpreted as long and vice versa. TF-ISF: this feature counts the frequency of terms in a document and assigns higher values to sentences having more frequent terms. It also discounts terms which appear in more sentences. Since it is well explained in the literature, we have not included its details and formula, which can be found in references BIBREF34 and BIBREF39. Nonetheless, the aspect that matters in our discussion is that both frequency and inverse sentence frequency are terms which involve properties of context, and consequently are document-aware. POS features: Here we introduce another way to include the ratio of part of speech (POS) units in features and keep them document-normalized. To do this, the number of occurrences of each POS unit should be divided by the number of them in the document, instead of the number occurring in the sentence. The formal definition of the new document-aware features is as follows: $Ratio_{pos}(s_i) = \frac{Count_{pos}(s_i)}{Count_{pos}(D)}$, where $Count_{pos}(\cdot )$ counts the occurrences of the given POS unit (noun, verb, adjective or adverb) in sentence $s_i$ or in the whole document $D$. <<</Document-aware Features>>> <<<Explicit Document Features>>> In order to further investigate how effective document-specific features are in sentence ranking, we defined several features for documents. These features are then calculated for each document and repeated in the feature vector of every sentence of that document. Their formal definition is described below and their effect is examined in the results and discussion section (SECREF5): Document sentences: An important property of a document that affects summarization is the total number of sentences participating in sentence ranking. As this number grows, a summarizer should be more selective and precise. Also, some sentence features, such as cue words, may be weighted more for longer documents. In addition, the main contextual information is probably more distributed over sentences. In such a case even lower values of other features should be considered important. Document words: the number of words in the document is another notion of document length. Since the number of sentences alone is not enough to represent document length, this feature should also be considered. Topical category: different topics such as political, economic, etc. have different writing styles and this might affect sentence ranking. For instance, numerical entities may appear more in economic or sport reports than in religious or social news. Therefore, the weight of this attribute should vary based on a document’s category, so it needs to be included. An overview of our feature set is represented by example in Figure FIGREF15. Column ID is just for enumeration and column Target is explained in the next section. <<</Explicit Document Features>>> <<</Feature Extraction>>> <<<Target Assignment>>> Every feature vector needs a target value from which the system should learn how to rank sentences. The value of the target is usually determined based on the golden summaries. If a sentence is included in a majority of human-written extracts, its target is near 1. In contrast, it would be closer to 0 if the sentence could not be found in any human-made summaries. In some datasets, like the one we used, golden summaries are not absolutely extractive, and they are not composed of exact copies of sentences in the original text.
In such cases, a measure of similarity between the sentence whose target we are looking for, and each ideal summaries’ sentence will be calculated. This results in real values between 0 and 1 for this attribute. Section (SECREF4) includes more details about target assignment. <<</Target Assignment>>> <<<Training Model>>> Since target attribute values vary between zero and one, we opted to use regression methods for the learning task. To build a training and a test set, a global matrix is composed in which every row corresponds to a sentence in the corpus and each column corresponds to a feature. The last column is for target attribute which will be omitted in the test set. It might be required to perform scaling on certain columns, depending on its corresponding feature and range of values. In cases where the dataset is large, the total number of sentences which are not included in golden summaries, and consequently have lower targets, is many times larger than the number of included sentences. This might lead the regression bias toward lower target values. To avoid this, dataset balancing is needed. That is to leave aside a portion of not included sentences and not to feed them to learner model. Lastly, in this phase, the regression model should be fitted on training set and be evaluated on a test set as described in sections (SECREF4) and (SECREF5). <<</Training Model>>> <<</Learning Phase>>> <<<Summarization Phase>>> Having acquired a model that can precisely rank sentences, we can apply it to any new given text and use ranked sentences in order to create a summary. This summarization process could also be executed on dataset texts, in order to evaluate how precisely our method resembles human-written summaries. In this section, we briefly describe the summarization process. The evaluation process is explained in section (SECREF22). <<<Sentence Ranking>>> In comparison with learning phase, in which a global matrix was used, this time a local matrix is composed whose rows correspond with the sentences of the input text. If during learning, any scaling was performed on features, they should be carried out here in the same manner. The matrix is then fed to the regressor obtained in the previous phase, and a rank value between zero and one will be predicted for each sentence. <<</Sentence Ranking>>> <<<Sentence Selection>>> By sorting sentences based on their ranks, the most appropriate sentences for being included in summary will be determined. To preserve readability, however, it is important to place them in the summary in the same order they appeared in the input document. Another consideration is the cut-off length. How many of the top sentences should we select for summary? The answer should be as simple as a constant number, a percentage of total sentences, or it could be determined by more advanced heuristics. We allowed cut-off length to be an input parameter. This allows us, in the evaluation phase, to produce summaries of dataset documents in the same length as golden summaries. This makes the comparison more equitable. <<</Sentence Selection>>> <<</Summarization Phase>>> <<<Evaluation Measures>>> In this section, some measures are described to evaluate the performance of both phases explained in the previous section: the learning phase and summarization phase. The former is evaluated using common regression metrics such as mean square error (MSE) and coefficient of determination (R2). 
The latter is carried out using ROUGE which is a well-known metric for evaluating summarization systems. Mean Square Error (MSE) is the average of squared errors in all estimated targets. An ideal regressor tends to make this measure as near as possible to zero. Though, an exact zero for MSE is not desirable, because it is suspected to be due to over fitting. The coefficient of determination is another metric for evaluating how well a regression model is fitted to data. It ranges from $-\infty $ to 1. As it approaches 1, “goodness-of-fit” is increased, while negative values show that the mean of data is a better estimator for target BIBREF40. ROUGE is proposed in BIBREF41 as an evaluation metric for summaries. It matches n-grams in both system produced summaries and reference summaries and returns the percentage of matching in terms of precision, recall and f-measure. There is a variety of ROUGE family metrics, namely ROUGE-1, ROUGE-2, and ROUGE-L. In ROUGE-1 the overlap of 1-grams, each word, is calculated. In ROUGE-2 the bigrams are considered as units of comparison. The ROUGE-L uses the Longest Common Subsequence (LCS) to measure resemblance. Nevertheless, we found that ROUGE assessments are always relatively high, even for a summary that is produced perfunctorily. Hence, we also designed a random summarizer that selects random sentences for the summary, and evaluated it by ROUGE. This could be used as a baseline for comparison. <<</Evaluation Measures>>> <<</Incorporating Document Features>>> <<<Experiments>>> Two experiments were set up to verify our hypothesis: “sentence ranking is highly dependent to document, and features must also represent context”. The first experiment involves document-unaware features (listed in section SECREF5) alongside TF-ISF. In the second experiment, document-aware features were used instead of document-unaware ones. We also set up a random summarizer based on a random regressor that acts as a baseline for comparisons. More details are recorded in section (SECREF25). A good experimental study should be as reproducible as possible. Here we explain the technical details that are more specific to our dataset, to allow the interested user to set up the same experiments for further research. <<<Dataset>>> We used the Pasokh dataset BIBREF42 that contains 100 Persian news documents each of which is associated with 5 summaries. Each summary consists of several sentences of the original text, selected by a human expert. Some sentences are slightly modified and are not, therefore, an exact copy of any original sentences. Documents are categorized into six categories such as political, economic and so on. The length of documents ranges from 4 to 156 sentences. Overall, it has about 2,500 sentences. <<</Dataset>>> <<<Extracting Features and Scaling>>> All features introduced in section SECREF4 are calculated. Pre-processing, sentence and word tokenization, stop words removal, and part of speech tagging is performed using the Hazm library BIBREF43. The majority of features have a range between zero and one. Other features are passed to a min-max scaler to transform into the same range. For the category feature which is nominal, the one-hot-encoding method applied and six flag features used instead. <<</Extracting Features and Scaling>>> <<</Experiments>>> <<<Results and Discussion>>> In section (SECREF22) MSE, R2 and ROUGE scores are remarked as evaluation measures. The results of our experiments are reported below in terms of these measures. 
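The random summarizer mentioned above as a baseline can be sketched in a few lines; keeping the selected sentences in their original order and matching the golden-summary length follow the descriptions in the text, while the function name and interface are assumptions.

```python
import random

def random_summary(sentences, summary_length, seed=None):
    """Baseline summarizer: pick `summary_length` sentences at random, then restore
    their original order so the output remains readable."""
    rng = random.Random(seed)
    k = min(summary_length, len(sentences))
    chosen = sorted(rng.sample(range(len(sentences)), k=k))
    return [sentences[i] for i in chosen]

doc = ["Sentence one.", "Sentence two.", "Sentence three.",
       "Sentence four.", "Sentence five."]
print(random_summary(doc, summary_length=2, seed=13))
```

Scoring such summaries with ROUGE gives the floor against which the learned models are compared below.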
For better comparison, we also ran another experiment in which the random regressor was used for ranking sentences and producing summaries. Table TABREF28 shows and compares the MSE and R2 reported from these experiments. The results show that in experiment 2, the mean squared error is reduced and the R2 score is increased. This means that using document-aware features leads to a more accurate learned model, supporting our hypothesis about the relationship between document features and target ranks. ROUGE scores are displayed separately in terms of precision, recall and f-measure in Figures FIGREF29 to FIGREF31 respectively. F-measure scores are displayed in Figure FIGREF29, comparing ROUGE-1, ROUGE-2 and ROUGE-L. Figures FIGREF30 and FIGREF31 allow comparison of precision and recall scores. The higher values gained in experiment 2 confirm that document-aware features perform better than unaware features. These results are also interpretable from the viewpoint of entropy-based decision tree methods. In the learning phase, the impurity of features is measured over the whole dataset, and features having higher information gain take place in the upper levels of the tree. But in the summarization phase, in which decisions have to be made within a single document, the impurity of those features may be low, leading to less effective decisions and lower precision. By incorporating document features, we help the model use different features (thus different trees) for different documents. Another insight gained from these charts is that a random summarizer resulted in scores of more than 50% in all measures, and without using document-aware features, the model achieves only a small improvement over a random summarizer. <<</Results and Discussion>>> <<<Conclusion>>> This paper has discussed that in supervised extractive summarization, we cannot learn to rank by considering dataset sentences as independent training examples. The ranks of sentences within a document are dependent on each other. To overcome this issue, we suggested incorporating document features explicitly in the feature vector of sentences. We also suggested using features that take into account the properties of the document. We named this kind of feature document-aware. The conducted experiments demonstrated the benefit of adding explicit document features, as well as document-aware features, in both model precision and summary quality. For future work, more document-aware features can be examined. It is also possible to run the same experiments on an English (or any other language) dataset, if available. Another direction for study is measuring the degree of entropy difference between the whole dataset and single documents, in a standard dataset. Our source code is hosted on GitHub and is published for later reference, further experiments and reproducing results. A web interface and a Telegram bot are also implemented as demos. <<</Conclusion>>> <<</Title>>>
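Putting the ranking and selection steps of the summarization phase together, a minimal end-to-end sketch with a stand-in regressor and stand-in features might look like the following; the feature definitions here are simplified illustrations (relative length plus a simple normalized position), not the paper's full feature set.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def features(sentence, index, doc):
    """Two stand-in document-aware features: relative length and a simple
    document-normalized position (a placeholder for the paper's cosine position)."""
    avg_len = sum(len(s.split()) for s in doc) / len(doc)
    return [len(sentence.split()) / avg_len, (index + 1) / len(doc)]

def summarize(doc, regressor, cutoff):
    """Summarization phase: build the local matrix for one document, predict a rank
    per sentence, keep the top `cutoff` sentences, and emit them in original order."""
    X_local = np.array([features(s, i, doc) for i, s in enumerate(doc)])
    ranks = regressor.predict(X_local)
    top = sorted(np.argsort(ranks)[::-1][:cutoff])
    return [doc[i] for i in top]

# Tiny illustrative training step (learning phase) followed by summarization.
train_doc = ["Short lead.", "A much longer supporting detail sentence follows here.",
             "Closing remark."]
train_targets = [0.9, 0.3, 0.6]                               # hypothetical ranks
X = np.array([features(s, i, train_doc) for i, s in enumerate(train_doc)])
reg = LinearRegression().fit(X, np.array(train_targets))

new_doc = ["Opening statement of the article.",
           "An elaborate middle sentence with quite a few extra words in it.",
           "A brief end."]
print(summarize(new_doc, reg, cutoff=2))
```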
{ "references": [ "the Pasokh dataset BIBREF42 " ], "type": "extractive" }
1909.02776
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What features of the document are integrated into vectors of every sentence? Context: <<<Title>>> Features in Extractive Supervised Single-document Summarization: Case of Persian News <<<Abstract>>> Text summarization has been one of the most challenging areas of research in NLP. Much effort has been made to overcome this challenge by using either the abstractive or extractive methods. Extractive methods are more popular, due to their simplicity compared with the more elaborate abstractive methods. In extractive approaches, the system will not generate sentences. Instead, it learns how to score sentences within the text by using some textual features and subsequently selecting those with the highest-rank. Therefore, the core objective is ranking and it highly depends on the document. This dependency has been unnoticed by many state-of-the-art solutions. In this work, the features of the document are integrated into vectors of every sentence. In this way, the system becomes informed about the context, increases the precision of the learned model and consequently produces comprehensive and brief summaries. <<</Abstract>>> <<<Introduction>>> From the early days of artificial intelligence, automatically summarizing a text was an interesting task for many researchers. Followed by the advance of the World Wide Web and the advent of concepts such as Social networks, Big Data, and Cloud computing among others, text summarization became a crucial task in many applications BIBREF0, BIBREF1, BIBREF2. For example, it is essential, in many search engines and text retrieval systems to display a portion of each result entry which is representative of the whole text BIBREF3, BIBREF4. It is also becoming essential for managers and the general public to gain the gist of news and articles immediately, in order to save time, while being inundated with information on social media BIBREF5. Researchers have approached this challenge from various perspectives and have obtained some promising results BIBREF6, BIBREF7. However, this area continues to present more research challenges and has a long path to maturity. One method of investigating this challenge, is (supervised) extractive summarization. Extractive implementations use a ranking mechanism and select top-n-ranked sentences as the summary BIBREF8. Sentences of a document are represented as vectors of features. Using summarization corpora, a rank will be assigned to each sentence, based on its presence in several human-written summaries (golden summaries). The system should then learn how to use those features to predict the rank of sentences in any given text. Various machine learning approaches such as regression and classification algorithms are used to perform the ranking task BIBREF9, BIBREF10. As far as our knowledge goes, in all current implementations, sets of sentence vectors of every document are merged together to compose a larger set, which is then passed to the learning model as a matrix. In this approach, the locality of ranks is disregarded. In other words, the rank of sentences is highly relative to the context and document. A sentence might be ranked high in one document while being ranked lower in another. As a result, merging sentences of a whole dataset into a matrix removes document boundaries and a main source of information will be lost. 
We addressed this issue by taking certain features of documents into account, such as their length and topical category, in addition to some new sentence features that also reflect document properties. Thus, more information will be provided to the model, and ranking could be done with respect to local features of the document. Our experiments show that this rectification leads to improvement in both the performance of the learned model and the quality of produced summaries. We also present a new baseline for the evaluation of extractive text summarizers which can be used to measure the performance of any summarizing method more accurately. The remainder of this paper is organized as follows. (Section SECREF2) reviews related works. (Section SECREF3) presents the proposed method and evaluation measures. (Section SECREF5) discusses how the experiments are set up. The results are discussed in (Section SECREF5), and finally (Section SECREF6) concludes the paper. <<</Introduction>>> <<<Related works>>> Text summarization has been widely studied by both academic and enterprise disciplines. Text summarization methods may be classified into different types. Based on input type, there are single-document BIBREF11, BIBREF12 vs multi-document summarization methods BIBREF13, BIBREF14, BIBREF15. Based on language, there are mono-lingual, bilingual and multi-lingual methods BIBREF16. There are also “query focused” methods in which a summary relevant to a given query is produced BIBREF17. From the perspective of procedure, however, there are two main approaches: abstractive vs extractive BIBREF18. Abstractive approaches try to generate a new short text based on the concepts understood from the original text BIBREF19. This usually requires a full pass through the NLP pipeline and is faced with many complexities and challenges BIBREF20. The abstractive approach relies on linguistic methods to examine and interpret the text in order to find new concepts and expressions. The output is a new shorter text which consists of the most important information from the original text document BIBREF8. Extractive approaches, on the other hand, select a few sentences from the document based on some measures in order to place them in a summary BIBREF8. A broad range of methods has been examined in this approach, including graph-based BIBREF8, BIBREF21, unsupervised BIBREF21, BIBREF22 and supervised (corpus-based) methods BIBREF9, BIBREF23, BIBREF24. In supervised methods, training data is generally needed to select important content from the documents. In these methods, the problem is usually reduced to a classification or regression problem, and machine learning techniques are applied to a dataset of documents and their gold summaries, represented by some features. Support Vector Machines (SVM) BIBREF25 and neural networks BIBREF26 are among the more popular sentence classification algorithms. The key step in extractive summarization is to determine the importance of sentences in the document BIBREF27. Previous studies examine the ordinal position of sentences BIBREF28, BIBREF29, the length of sentences BIBREF9, the ratio of nouns, verbs, adjectives, and adverbs BIBREF30, the ratio of numerical entities BIBREF31, BIBREF32, and cue words BIBREF28.
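For readers unfamiliar with these classic sentence-scoring features, a minimal sketch of how a few of them are typically computed is shown below; the POS tags are supplied by hand, and the cue-word list is an assumed example, so this illustrates the general definitions rather than any particular cited system.

```python
CUE_WORDS = {"in conclusion", "overall", "to summarize", "in a nutshell"}  # assumed list

def classic_features(sentence, pos_tags):
    """Sentence-normalized (document-unaware) features from the literature.
    `pos_tags` holds one coarse tag per token, e.g. produced by any POS tagger."""
    tokens = sentence.lower().split()
    n = max(len(tokens), 1)
    return {
        "ratio_nouns":   sum(t == "NOUN" for t in pos_tags) / n,
        "ratio_verbs":   sum(t == "VERB" for t in pos_tags) / n,
        "ratio_numeric": sum(tok.isdigit() for tok in tokens) / n,
        "cue_words":     sum(cue in sentence.lower() for cue in CUE_WORDS),
        "length":        len(tokens),
    }

sentence = "Overall , the 3 plans cut costs"
tags = ["ADV", "PUNCT", "DET", "NUM", "NOUN", "VERB", "NOUN"]
print(classic_features(sentence, tags))
```

All of these values are normalized by the sentence alone, which is the document-unaware behaviour examined later in the paper.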
Gupta and Lehal in their survey of text summarization techniques list the following groups of features: content-based, title-based, location-based, length-based, proper noun and upper-case word-based, font-based, specific phrase-based, and features based on sentence similarity to other sentences in a text BIBREF8. Previous studies use different sentence features such as terms from keywords/key phrases, terms from user queries, frequency of words, and position of words/sentences for text summarization BIBREF33. However, in most cases, selection and weighting of features are an important matter of debate. Some works have been carried out with respect to this BIBREF34, but none, to the best of our knowledge, has shown that target attribute is highly related to the scope of the document. It is occasionally mentioned but not included in practice. For instance, Ferreira et al studied various combinations of sentence scoring methods on three types of documents in BIBREF6 and BIBREF31 and concluded that the weight of features varies, dependent on the properties of context: “the effectiveness of sentence scoring methods for automatic extractive text summarization algorithms depends on the kind of text one wants to summarize, the length of documents, the kind of language used, and their structure.”. JY Yeh et al in BIBREF35 utilized a Genetic Algorithm (GA) to find the weight of features for calculating sentence scores. However, their following statement implies that performance of weights is generally dependent to genre, that could be seen as a feature of context: “It cannot be guaranteed that the score function whose feature weights are obtained by GA definitely performs well for the test corpus; nevertheless, if the genre of the test corpus is close to that of the training corpus, we can make a prediction that the score function will work well.” BIBREF35. Berenjkoub et al studied the effectiveness of various subsets of features in summarization of distinct sections of scientific papers BIBREF36. They showed that some features work well only in some specific portion of text, for example, on the abstract section, while others perform better on the methodology section. This could be considered to be a consequence of differences in the structure and context of each section. All the above studies imply the significance of document context in ranking. Nevertheless, it has not been given enough attention in the NLP community, and even sometimes is neglected. For instance, authors in BIBREF30 suggest the use of a wide range of various features. Among these, seventeen part-of-speech based sentences features have been introduced, all of which are sentence-normalized, but not document-normalized, i.e. they count the ratio of a syntactic unit e.g. verbs, divided by the number of words in a sentence. Such features do not consider the total number of those units, e.g. verbs, in the whole document. Our work contributes to this line of research and includes document features in the learning and ranking processes. <<</Related works>>> <<<Incorporating Document Features>>> As a way to investigate the need for document features in sentence ranking (as explained in the introduction and related works), we introduced several document-level features and incorporated them in the summarization process. These features are listed under subsection (SECREF4). 
Although stages of our method do not differ from general supervised extractive summarization, the whole process is explained in order to clarify the method of investigation. Every supervised summarization has two phases. The first is the “Learning Phase”, a corpus of ideal summaries is used to train the system how to rank sentences. The second is the “Summarization Phase”, where the system applies its learning gained from the first phase, in order to rank the sentences of a new given text. A process of selection is then performed to form a summary. Each of these phases has several intricacies which are briefly described in the following sections. <<<Learning Phase>>> The input to this phase is a dataset of documents, each of which is associated with several human-written summaries. The output is a learned model with a good level of accuracy that is able to reliably predict the rank of sentences, in almost the same way that a human may rank them. To accomplish this, it is necessary to first perform normalization and transform various forms of phrases into their canonical form. Then, every text should be tokenized to sentences, and further tokenized to words. Another prerequisite is to remove stop words. The following subtasks should be carried out next. <<<Feature Extraction>>> Foremost, it is necessary to represent each sentence with those features that have the most distinguishing effect on the prediction of the rank. Many features have been examined in the literature. We entitle some as “document-aware” because they do implicitly represent some information about a document. However, other features have been used, that say nothing about the document in which they appeared. We call them “document-unaware”. In the previous sections, we argued that this lack of information might be misleading for the system, especially when we train it with sample sentences from different documents. Thus, we modified some document-unaware features and derived new features that cover document properties. We also examined the effect of incorporating explicit features of a document into vectors of its sentences. The following sub-sections describe the features mentioned above in more detail. <<<Document-unaware Features>>> Ordinal position: It is shown that inclusion of sentence, in summary, is relevant to its position in the document or even in a paragraph. Intuitively, sentences at the beginning or the end of a text are more likely to be included in the summary. Depending on how it is defined, this feature might be document-unaware or not. For example, in BIBREF29 and BIBREF37 it is defined as $\frac{5}{5}$ for the first sentence, $\frac{4}{5}$ for the second, and so on to $\frac{1}{5}$ for fifth and zero for remaining sentences. In another research conducted by Wong et al. BIBREF9, it is defined as $\frac{1}{sentence\ number}$. With such a definition, we may have several sentences, for example, with position=$\frac{1}{5}$ in the training set, these may not have the same sense of position. While a sentence position=$\frac{1}{5}$ means “among the firsts” in a document with 40 sentences, it has a totally different meaning of “in the middle”, in another document containing 10 sentences. Thus, a useful feature formula should involve differences of documents which may change the meaning of information within it. In our experiments, we used the definition of BIBREF9. A document-aware version of position will be introduced in (SECREF6). 
Length of sentence: the intuition behind this feature is that sentences of too long or too short length are less likely to be included in the summary. Like sentence position, this feature is also subject to the wrong definition that makes it document-unaware. For example, in BIBREF9 it is defined as a number of words in a sentence. Such a definition does not take into account that a sentence with, say 15 words may be considered long if all other sentences of document have fewer words. Another sentence with the same number of words may be regarded as short, because other sentences in that document have more than 15 words. This might occur due to different writing styles. However, we included this in our experiments to compare its effect with that of its document-aware counterpart, which will be listed in (SECREF6). The Ratio of Nouns: is defined in BIBREF30 as the number of nouns divided by total number of words in the sentence, after stop-words are removed. Three other features, Ratio of Verbs, Ratio of Adjectives, and Ratio of Adverbs are defined in the same manner and proved to have a positive effect on ranking performance. From our perspective, however, a sentence with a ratio of nouns =0.5, for example, in a document containing many nouns, must be discriminated in the training set from another sentence with the same ratio of nouns, that appeared in another document having fewer nouns. This feature does not represent how many nouns are there in the document, which is important in sentence ranking. The same discussion goes on to justify the need to consider the number of verbs, adjectives, and adverbs in the document. The impact of these features is examined in our experiments and compared to that of their document-aware counterparts. The Ratio of Numerical entities: assuming that sentences containing more numerical data are probably giving us more information, this feature may help us in ranking BIBREF31, BIBREF32. For calculation, we count the occurrences of numbers and digits proportional to the length of sentence. This feature must be less weighted if almost all sentences of a document have numerical data. However, it does not count numbers and digits in other sentences of the document. Cue Words: if a sentence contains special phrases such as “in conclusion”, “overall”, “to summarize”, “in a nutshell” and so forth, its selection as a part of the summary is more probable than others. The number of these phrases is counted for this feature. <<</Document-unaware Features>>> <<<Document-aware Features>>> Cosine position: As mentioned in (SECREF5) a good definition of position should take into account document length. A well-known formula used in the literature BIBREF38, BIBREF7 is in which index is an integer representing the order of sentences and T is the total number of sentences in document. This feature ranges from 0 to 1, the closer to the beginning or to the end, the higher value this feature will take. $\alpha $ is a tuning parameter. As it increases, the value of this feature will be distributed more equally over sentences. In this manner, equal values of this feature in the training set represent a uniform notion of position in a document, so it becomes document-aware. Relative Length: the intuition behind this feature is explained in (SECREF5). A discussion went there that a simple count of words does not take into account that a sentence with a certain number of words may be considered long or short, based on the other sentences appeared the document. 
Taking this into consideration, we divided the number of words in the sentence by the average length of sentences in the document. More formally, the formula is $RelativeLength(s_i) = \frac{|s_i|}{\frac{1}{n}\sum _{j=1}^{n}|s_j|}$, in which $n$ is the number of sentences in the document, $s_i$ is the i’th sentence of it, and $|s_i|$ denotes the number of words in $s_i$. Values greater than 1 could be interpreted as long and vice versa. TF-ISF: this feature counts the frequency of terms in a document and assigns higher values to sentences having more frequent terms. It also discounts terms which appear in more sentences. Since it is well explained in the literature, we have not included its details and formula, which can be found in references BIBREF34 and BIBREF39. Nonetheless, the aspect that matters in our discussion is that both frequency and inverse sentence frequency are terms which involve properties of context, and consequently are document-aware. POS features: Here we introduce another way to include the ratio of part of speech (POS) units in features and keep them document-normalized. To do this, the number of occurrences of each POS unit should be divided by the number of them in the document, instead of the number occurring in the sentence. The formal definition of the new document-aware features is as follows: $Ratio_{pos}(s_i) = \frac{Count_{pos}(s_i)}{Count_{pos}(D)}$, where $Count_{pos}(\cdot )$ counts the occurrences of the given POS unit (noun, verb, adjective or adverb) in sentence $s_i$ or in the whole document $D$. <<</Document-aware Features>>> <<<Explicit Document Features>>> In order to further investigate how effective document-specific features are in sentence ranking, we defined several features for documents. These features are then calculated for each document and repeated in the feature vector of every sentence of that document. Their formal definition is described below and their effect is examined in the results and discussion section (SECREF5): Document sentences: An important property of a document that affects summarization is the total number of sentences participating in sentence ranking. As this number grows, a summarizer should be more selective and precise. Also, some sentence features, such as cue words, may be weighted more for longer documents. In addition, the main contextual information is probably more distributed over sentences. In such a case even lower values of other features should be considered important. Document words: the number of words in the document is another notion of document length. Since the number of sentences alone is not enough to represent document length, this feature should also be considered. Topical category: different topics such as political, economic, etc. have different writing styles and this might affect sentence ranking. For instance, numerical entities may appear more in economic or sport reports than in religious or social news. Therefore, the weight of this attribute should vary based on a document’s category, so it needs to be included. An overview of our feature set is represented by example in Figure FIGREF15. Column ID is just for enumeration and column Target is explained in the next section. <<</Explicit Document Features>>> <<</Feature Extraction>>> <<<Target Assignment>>> Every feature vector needs a target value from which the system should learn how to rank sentences. The value of the target is usually determined based on the golden summaries. If a sentence is included in a majority of human-written extracts, its target is near 1. In contrast, it would be closer to 0 if the sentence could not be found in any human-made summaries. In some datasets, like the one we used, golden summaries are not absolutely extractive, and they are not composed of exact copies of sentences in the original text.
In such cases, a measure of similarity between the sentence whose target we are looking for, and each ideal summaries’ sentence will be calculated. This results in real values between 0 and 1 for this attribute. Section (SECREF4) includes more details about target assignment. <<</Target Assignment>>> <<<Training Model>>> Since target attribute values vary between zero and one, we opted to use regression methods for the learning task. To build a training and a test set, a global matrix is composed in which every row corresponds to a sentence in the corpus and each column corresponds to a feature. The last column is for target attribute which will be omitted in the test set. It might be required to perform scaling on certain columns, depending on its corresponding feature and range of values. In cases where the dataset is large, the total number of sentences which are not included in golden summaries, and consequently have lower targets, is many times larger than the number of included sentences. This might lead the regression bias toward lower target values. To avoid this, dataset balancing is needed. That is to leave aside a portion of not included sentences and not to feed them to learner model. Lastly, in this phase, the regression model should be fitted on training set and be evaluated on a test set as described in sections (SECREF4) and (SECREF5). <<</Training Model>>> <<</Learning Phase>>> <<<Summarization Phase>>> Having acquired a model that can precisely rank sentences, we can apply it to any new given text and use ranked sentences in order to create a summary. This summarization process could also be executed on dataset texts, in order to evaluate how precisely our method resembles human-written summaries. In this section, we briefly describe the summarization process. The evaluation process is explained in section (SECREF22). <<<Sentence Ranking>>> In comparison with learning phase, in which a global matrix was used, this time a local matrix is composed whose rows correspond with the sentences of the input text. If during learning, any scaling was performed on features, they should be carried out here in the same manner. The matrix is then fed to the regressor obtained in the previous phase, and a rank value between zero and one will be predicted for each sentence. <<</Sentence Ranking>>> <<<Sentence Selection>>> By sorting sentences based on their ranks, the most appropriate sentences for being included in summary will be determined. To preserve readability, however, it is important to place them in the summary in the same order they appeared in the input document. Another consideration is the cut-off length. How many of the top sentences should we select for summary? The answer should be as simple as a constant number, a percentage of total sentences, or it could be determined by more advanced heuristics. We allowed cut-off length to be an input parameter. This allows us, in the evaluation phase, to produce summaries of dataset documents in the same length as golden summaries. This makes the comparison more equitable. <<</Sentence Selection>>> <<</Summarization Phase>>> <<<Evaluation Measures>>> In this section, some measures are described to evaluate the performance of both phases explained in the previous section: the learning phase and summarization phase. The former is evaluated using common regression metrics such as mean square error (MSE) and coefficient of determination (R2). 
The latter is carried out using ROUGE which is a well-known metric for evaluating summarization systems. Mean Square Error (MSE) is the average of squared errors in all estimated targets. An ideal regressor tends to make this measure as near as possible to zero. Though, an exact zero for MSE is not desirable, because it is suspected to be due to over fitting. The coefficient of determination is another metric for evaluating how well a regression model is fitted to data. It ranges from $-\infty $ to 1. As it approaches 1, “goodness-of-fit” is increased, while negative values show that the mean of data is a better estimator for target BIBREF40. ROUGE is proposed in BIBREF41 as an evaluation metric for summaries. It matches n-grams in both system produced summaries and reference summaries and returns the percentage of matching in terms of precision, recall and f-measure. There is a variety of ROUGE family metrics, namely ROUGE-1, ROUGE-2, and ROUGE-L. In ROUGE-1 the overlap of 1-grams, each word, is calculated. In ROUGE-2 the bigrams are considered as units of comparison. The ROUGE-L uses the Longest Common Subsequence (LCS) to measure resemblance. Nevertheless, we found that ROUGE assessments are always relatively high, even for a summary that is produced perfunctorily. Hence, we also designed a random summarizer that selects random sentences for the summary, and evaluated it by ROUGE. This could be used as a baseline for comparison. <<</Evaluation Measures>>> <<</Incorporating Document Features>>> <<<Experiments>>> Two experiments were set up to verify our hypothesis: “sentence ranking is highly dependent to document, and features must also represent context”. The first experiment involves document-unaware features (listed in section SECREF5) alongside TF-ISF. In the second experiment, document-aware features were used instead of document-unaware ones. We also set up a random summarizer based on a random regressor that acts as a baseline for comparisons. More details are recorded in section (SECREF25). A good experimental study should be as reproducible as possible. Here we explain the technical details that are more specific to our dataset, to allow the interested user to set up the same experiments for further research. <<<Dataset>>> We used the Pasokh dataset BIBREF42 that contains 100 Persian news documents each of which is associated with 5 summaries. Each summary consists of several sentences of the original text, selected by a human expert. Some sentences are slightly modified and are not, therefore, an exact copy of any original sentences. Documents are categorized into six categories such as political, economic and so on. The length of documents ranges from 4 to 156 sentences. Overall, it has about 2,500 sentences. <<</Dataset>>> <<<Extracting Features and Scaling>>> All features introduced in section SECREF4 are calculated. Pre-processing, sentence and word tokenization, stop words removal, and part of speech tagging is performed using the Hazm library BIBREF43. The majority of features have a range between zero and one. Other features are passed to a min-max scaler to transform into the same range. For the category feature which is nominal, the one-hot-encoding method applied and six flag features used instead. <<</Extracting Features and Scaling>>> <<</Experiments>>> <<<Results and Discussion>>> In section (SECREF22) MSE, R2 and ROUGE scores are remarked as evaluation measures. The results of our experiments are reported below in terms of these measures. 
For better comparison, we also ran another experiment in which the random regressor was used for ranking sentences and producing summaries. Table TABREF28 compares the MSE and R2 values reported from these experiments. The results show that in experiment 2, the mean squared error is reduced and the R2 score is increased. This means that using document-aware features leads to a more accurate learned model, confirming our hypothesis about the relationship between document features and target ranks. ROUGE scores are displayed separately in terms of precision, recall and f-measure in Figures FIGREF29 to FIGREF31 respectively. F-measure scores are displayed in Figure FIGREF29, comparing ROUGE-1, ROUGE-2 and ROUGE-L. Figures FIGREF30 and FIGREF31 allow comparison of precision and recall scores. The higher values gained in experiment 2 confirm that document-aware features perform better than unaware features. These results can also be interpreted from the viewpoint of entropy-based decision-tree methods. In the learning phase, the impurity of features is measured over the whole dataset, and features with higher information gain are placed in the upper levels of the tree. But in the summarization phase, in which decisions have to be made within a single document, the impurity of those features may be low, causing less effective decisions and lower precision. By incorporating document features, we help the model use different features (and thus different trees) for different documents. Another insight gained from these charts is that a random summarizer resulted in scores of more than 50% in all measures, and that without document-aware features, the model achieves only a small improvement over a random summarizer. <<</Results and Discussion>>> <<<Conclusion>>> This paper has argued that in supervised extractive summarization, we cannot learn to rank by considering dataset sentences as independent training examples. The ranks of sentences within a document depend on each other. To overcome this issue, we suggested incorporating document features explicitly in the feature vector of sentences. We also suggested using features that take the properties of the document into account; we named this kind of features document-aware. The conducted experiments demonstrated the benefit of adding explicit document features, as well as document-aware features, in terms of both model precision and summary quality. For future work, more document-aware features can be examined. It is also possible to run the same experiments on an English (or any other language) dataset, if available. Another direction for study is measuring the degree of entropy difference between the whole dataset and single documents, in a standard dataset. Our source code is hosted on GitHub and is published for later reference, further experiments and reproduction of results. A web interface and a Telegram bot are also implemented as demos. <<</Conclusion>>> <<</Title>>>
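To make the document-aware features described above concrete, here is a minimal illustrative sketch (not the authors' code); the data layout, with each sentence given as a list of (word, POS) pairs, and the function names are assumptions for illustration only:

```python
# Illustrative sketch of document-aware features (Relative Length and
# document-normalized POS ratios). Tokenization and POS tagging are assumed
# to be done upstream: `doc` is a list of sentences, each a list of
# (word, pos) tuples.

def relative_length(doc, i):
    """Words in sentence i divided by the average sentence length in the document."""
    lengths = [len(sent) for sent in doc]
    avg_len = sum(lengths) / len(lengths)
    return lengths[i] / avg_len  # > 1 means "long" for this particular document

def doc_normalized_pos_ratio(doc, i, pos_tag):
    """Count of a POS tag in sentence i divided by its count in the whole document."""
    in_sentence = sum(1 for _, tag in doc[i] if tag == pos_tag)
    in_document = sum(1 for sent in doc for _, tag in sent if tag == pos_tag)
    return in_sentence / in_document if in_document else 0.0

# Tiny example document: two sentences of (word, POS) pairs.
doc = [
    [("sales", "N"), ("grew", "V"), ("rapidly", "ADV")],
    [("the", "DET"), ("report", "N"), ("cites", "V"), ("three", "NUM"), ("figures", "N")],
]
print(relative_length(doc, 0))                # 3 / 4 = 0.75 -> shorter than average
print(doc_normalized_pos_ratio(doc, 1, "N"))  # 2 of the document's 3 nouns -> 0.666...
```

The point of the sketch is only that both features are normalized by document-level counts, so the same raw sentence can receive different feature values in different documents.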
{ "references": [ "Ordinal position,Length of sentence,The Ratio of Nouns,The Ratio of Numerical entities,Cue Words,Cosine position,Relative Length,TF-ISF,POS features,Document sentences,Document words,Topical category,Ratio of Verbs, Ratio of Adjectives, and Ratio of Adverbs" ], "type": "extractive" }
1909.02776
Please answer the following question with yes or no based on the given text. You only need to output 'Yes' or 'No' without any additional explanation. Question: Is new approach tested against state of the art? Context: <<<Title>>> Features in Extractive Supervised Single-document Summarization: Case of Persian News <<<Abstract>>> Text summarization has been one of the most challenging areas of research in NLP. Much effort has been made to overcome this challenge by using either the abstractive or extractive methods. Extractive methods are more popular, due to their simplicity compared with the more elaborate abstractive methods. In extractive approaches, the system will not generate sentences. Instead, it learns how to score sentences within the text by using some textual features and subsequently selecting those with the highest-rank. Therefore, the core objective is ranking and it highly depends on the document. This dependency has been unnoticed by many state-of-the-art solutions. In this work, the features of the document are integrated into vectors of every sentence. In this way, the system becomes informed about the context, increases the precision of the learned model and consequently produces comprehensive and brief summaries. <<</Abstract>>> <<<Introduction>>> From the early days of artificial intelligence, automatically summarizing a text was an interesting task for many researchers. Followed by the advance of the World Wide Web and the advent of concepts such as Social networks, Big Data, and Cloud computing among others, text summarization became a crucial task in many applications BIBREF0, BIBREF1, BIBREF2. For example, it is essential, in many search engines and text retrieval systems to display a portion of each result entry which is representative of the whole text BIBREF3, BIBREF4. It is also becoming essential for managers and the general public to gain the gist of news and articles immediately, in order to save time, while being inundated with information on social media BIBREF5. Researchers have approached this challenge from various perspectives and have obtained some promising results BIBREF6, BIBREF7. However, this area continues to present more research challenges and has a long path to maturity. One method of investigating this challenge, is (supervised) extractive summarization. Extractive implementations use a ranking mechanism and select top-n-ranked sentences as the summary BIBREF8. Sentences of a document are represented as vectors of features. Using summarization corpora, a rank will be assigned to each sentence, based on its presence in several human-written summaries (golden summaries). The system should then learn how to use those features to predict the rank of sentences in any given text. Various machine learning approaches such as regression and classification algorithms are used to perform the ranking task BIBREF9, BIBREF10. As far as our knowledge goes, in all current implementations, sets of sentence vectors of every document are merged together to compose a larger set, which is then passed to the learning model as a matrix. In this approach, the locality of ranks is disregarded. In other words, the rank of sentences is highly relative to the context and document. A sentence might be ranked high in one document while being ranked lower in another. As a result, merging sentences of a whole dataset into a matrix removes document boundaries and a main source of information will be lost. 
We addressed this issue by taking certain features of documents into account, such as its length, topical category and so on in addition to some new sentence features that also reflect document properties. Thus, more information will be provided to the model, and ranking could be done with respect to local features of the document. Our experiments show that this rectification leads to improvement in both the performance of the learned model and the quality of produced summaries. We also represent a new baseline for the evaluation of extractive text summarizers which can be used to measure the performance of any summarizing method more accurately. The remainder of this paper is organized as follows. (Section SECREF2) reviews related works. (Section SECREF3) presents the proposed method and evaluation measures. (Section SECREF5) discusses how the experiments are set up. The results are discussed in (Section SECREF5), and finally (Section SECREF6) concludes the paper. <<</Introduction>>> <<<Related works>>> Text summarization has been widely studied by both academic and enterprise disciplines. Text summarization methods may be classified into different types. Based on input type, there are single-document BIBREF11, BIBREF12 vs multi-document summarization methods BIBREF13, BIBREF14, BIBREF15. Based on language, there are mono-lingual, bilingual and multi-lingual methods BIBREF16. There are also “query focused” methods in which a summary relevant to a given query is produced BIBREF17. From the perspective of procedure, however, there are two main approaches: abstractive vs extractive BIBREF18. Abstractive approaches try to generate a new short text based on the concepts understood from the original text BIBREF19. This usually requires a full pass through NLP pipeline and is faced with many complexities and challenges BIBREF20. The abstractive approach relies on linguistic methods to examine and interpret the text in order to find new concepts and expressions. The output is a new shorter text which consists of the most important information from the original text document BIBREF8. Extractive approaches, on the other hand, select a few sentences from the document based on some measures in order to place them in a summary BIBREF8. A broad range of methods has been examined in this approach, including graph-based BIBREF8, BIBREF21, unsupervised BIBREF21, BIBREF22 and supervised (corpus-based) methods BIBREF9, BIBREF23, BIBREF24. In supervised methods, training data is generally needed to select important content from the documents. In these methods, usually, the problem is reduced to a classification or regression problem, and machine learning techniques applied to the dataset of documents and their gold summaries represented by some features. Support Vector Machines (SVM) BIBREF25 and neural networks BIBREF26 are more popular sentence classification algorithms. The key step in extractive summarization is to determine the importance of sentences in the document BIBREF27. Previous studies examine the ordinal position of sentences BIBREF28, BIBREF29, length of sentences BIBREF9, the ratio of nouns, the Ratio of Verbs, Ratio of Adjectives, Ratio of Adverbs BIBREF30, the Ratio of Numerical entities BIBREF31, BIBREF32 and Cue Words BIBREF28. 
Gupta and Lehal in their survey of text summarization techniques list the following groups of features: content-based, title-based, location-based, length-based, proper noun and upper-case word-based, font-based, specific phrase-based, and features based on sentence similarity to other sentences in a text BIBREF8. Previous studies use different sentence features such as terms from keywords/key phrases, terms from user queries, frequency of words, and position of words/sentences for text summarization BIBREF33. However, in most cases, selection and weighting of features are an important matter of debate. Some works have been carried out with respect to this BIBREF34, but none, to the best of our knowledge, has shown that target attribute is highly related to the scope of the document. It is occasionally mentioned but not included in practice. For instance, Ferreira et al studied various combinations of sentence scoring methods on three types of documents in BIBREF6 and BIBREF31 and concluded that the weight of features varies, dependent on the properties of context: “the effectiveness of sentence scoring methods for automatic extractive text summarization algorithms depends on the kind of text one wants to summarize, the length of documents, the kind of language used, and their structure.”. JY Yeh et al in BIBREF35 utilized a Genetic Algorithm (GA) to find the weight of features for calculating sentence scores. However, their following statement implies that performance of weights is generally dependent to genre, that could be seen as a feature of context: “It cannot be guaranteed that the score function whose feature weights are obtained by GA definitely performs well for the test corpus; nevertheless, if the genre of the test corpus is close to that of the training corpus, we can make a prediction that the score function will work well.” BIBREF35. Berenjkoub et al studied the effectiveness of various subsets of features in summarization of distinct sections of scientific papers BIBREF36. They showed that some features work well only in some specific portion of text, for example, on the abstract section, while others perform better on the methodology section. This could be considered to be a consequence of differences in the structure and context of each section. All the above studies imply the significance of document context in ranking. Nevertheless, it has not been given enough attention in the NLP community, and even sometimes is neglected. For instance, authors in BIBREF30 suggest the use of a wide range of various features. Among these, seventeen part-of-speech based sentences features have been introduced, all of which are sentence-normalized, but not document-normalized, i.e. they count the ratio of a syntactic unit e.g. verbs, divided by the number of words in a sentence. Such features do not consider the total number of those units, e.g. verbs, in the whole document. Our work contributes to this line of research and includes document features in the learning and ranking processes. <<</Related works>>> <<<Incorporating Document Features>>> As a way to investigate the need for document features in sentence ranking (as explained in the introduction and related works), we introduced several document-level features and incorporated them in the summarization process. These features are listed under subsection (SECREF4). 
Although stages of our method do not differ from general supervised extractive summarization, the whole process is explained in order to clarify the method of investigation. Every supervised summarization has two phases. The first is the “Learning Phase”, a corpus of ideal summaries is used to train the system how to rank sentences. The second is the “Summarization Phase”, where the system applies its learning gained from the first phase, in order to rank the sentences of a new given text. A process of selection is then performed to form a summary. Each of these phases has several intricacies which are briefly described in the following sections. <<<Learning Phase>>> The input to this phase is a dataset of documents, each of which is associated with several human-written summaries. The output is a learned model with a good level of accuracy that is able to reliably predict the rank of sentences, in almost the same way that a human may rank them. To accomplish this, it is necessary to first perform normalization and transform various forms of phrases into their canonical form. Then, every text should be tokenized to sentences, and further tokenized to words. Another prerequisite is to remove stop words. The following subtasks should be carried out next. <<<Feature Extraction>>> Foremost, it is necessary to represent each sentence with those features that have the most distinguishing effect on the prediction of the rank. Many features have been examined in the literature. We entitle some as “document-aware” because they do implicitly represent some information about a document. However, other features have been used, that say nothing about the document in which they appeared. We call them “document-unaware”. In the previous sections, we argued that this lack of information might be misleading for the system, especially when we train it with sample sentences from different documents. Thus, we modified some document-unaware features and derived new features that cover document properties. We also examined the effect of incorporating explicit features of a document into vectors of its sentences. The following sub-sections describe the features mentioned above in more detail. <<<Document-unaware Features>>> Ordinal position: It is shown that inclusion of sentence, in summary, is relevant to its position in the document or even in a paragraph. Intuitively, sentences at the beginning or the end of a text are more likely to be included in the summary. Depending on how it is defined, this feature might be document-unaware or not. For example, in BIBREF29 and BIBREF37 it is defined as $\frac{5}{5}$ for the first sentence, $\frac{4}{5}$ for the second, and so on to $\frac{1}{5}$ for fifth and zero for remaining sentences. In another research conducted by Wong et al. BIBREF9, it is defined as $\frac{1}{sentence\ number}$. With such a definition, we may have several sentences, for example, with position=$\frac{1}{5}$ in the training set, these may not have the same sense of position. While a sentence position=$\frac{1}{5}$ means “among the firsts” in a document with 40 sentences, it has a totally different meaning of “in the middle”, in another document containing 10 sentences. Thus, a useful feature formula should involve differences of documents which may change the meaning of information within it. In our experiments, we used the definition of BIBREF9. A document-aware version of position will be introduced in (SECREF6). 
Length of sentence: the intuition behind this feature is that sentences of too long or too short length are less likely to be included in the summary. Like sentence position, this feature is also subject to the wrong definition that makes it document-unaware. For example, in BIBREF9 it is defined as a number of words in a sentence. Such a definition does not take into account that a sentence with, say 15 words may be considered long if all other sentences of document have fewer words. Another sentence with the same number of words may be regarded as short, because other sentences in that document have more than 15 words. This might occur due to different writing styles. However, we included this in our experiments to compare its effect with that of its document-aware counterpart, which will be listed in (SECREF6). The Ratio of Nouns: is defined in BIBREF30 as the number of nouns divided by total number of words in the sentence, after stop-words are removed. Three other features, Ratio of Verbs, Ratio of Adjectives, and Ratio of Adverbs are defined in the same manner and proved to have a positive effect on ranking performance. From our perspective, however, a sentence with a ratio of nouns =0.5, for example, in a document containing many nouns, must be discriminated in the training set from another sentence with the same ratio of nouns, that appeared in another document having fewer nouns. This feature does not represent how many nouns are there in the document, which is important in sentence ranking. The same discussion goes on to justify the need to consider the number of verbs, adjectives, and adverbs in the document. The impact of these features is examined in our experiments and compared to that of their document-aware counterparts. The Ratio of Numerical entities: assuming that sentences containing more numerical data are probably giving us more information, this feature may help us in ranking BIBREF31, BIBREF32. For calculation, we count the occurrences of numbers and digits proportional to the length of sentence. This feature must be less weighted if almost all sentences of a document have numerical data. However, it does not count numbers and digits in other sentences of the document. Cue Words: if a sentence contains special phrases such as “in conclusion”, “overall”, “to summarize”, “in a nutshell” and so forth, its selection as a part of the summary is more probable than others. The number of these phrases is counted for this feature. <<</Document-unaware Features>>> <<<Document-aware Features>>> Cosine position: As mentioned in (SECREF5) a good definition of position should take into account document length. A well-known formula used in the literature BIBREF38, BIBREF7 is in which index is an integer representing the order of sentences and T is the total number of sentences in document. This feature ranges from 0 to 1, the closer to the beginning or to the end, the higher value this feature will take. $\alpha $ is a tuning parameter. As it increases, the value of this feature will be distributed more equally over sentences. In this manner, equal values of this feature in the training set represent a uniform notion of position in a document, so it becomes document-aware. Relative Length: the intuition behind this feature is explained in (SECREF5). A discussion went there that a simple count of words does not take into account that a sentence with a certain number of words may be considered long or short, based on the other sentences appeared the document. 
Taking this into consideration, we divided the number of words in the sentence by the average length of sentences in the document. More formally, the formula is $\text{RelativeLength}(s_i) = \frac{|s_i|}{\frac{1}{n}\sum _{j=1}^{n}|s_j|}$, in which $n$ is the number of sentences in the document, $s_i$ is the i’th sentence of it, and $|s_j|$ denotes the number of words in $s_j$. Values greater than 1 could be interpreted as long and values below 1 as short. TF-ISF: this feature counts the frequency of terms in a document and assigns higher values to sentences having more frequent terms. It also discounts terms which appear in more sentences. Since it is well explained in the literature, we have not included its details and formula, which can be found in references BIBREF34 and BIBREF39. Nonetheless, the aspect that matters in our discussion is that both frequency and inverse sentence frequency are terms which involve properties of context, and consequently are document-aware. POS features: Here we introduce another way to include the ratio of part of speech (POS) units in features and keep them document-normalized. To do this, the number of occurrences of each POS unit in a sentence should be divided by the number of occurrences of that unit in the whole document, instead of the number occurring in the sentence alone. The formal definition of the new document-aware features is as follows: $\text{RatioOfNouns}(s_i) = \frac{\#\text{nouns in } s_i}{\#\text{nouns in the document}}$, and likewise for verbs, adjectives, and adverbs. <<</Document-aware Features>>> <<<Explicit Document Features>>> In order to further investigate how effective document-specific features are in sentence ranking, we defined several features for documents. These features are then calculated for each document and repeated in the feature vector of every sentence of that document. Their formal definitions are given below, and their effect is examined in the results and discussion section (SECREF5): Document sentences: An important property of a document that affects summarization is the total number of sentences participating in sentence ranking. As this number grows, a summarizer should be more selective and precise. Also, some sentence features, such as cue words, may be weighted more heavily for longer documents. In addition, the main contextual information is probably more distributed over sentences. In such a case, even lower values of other features should be considered important. Document words: the number of words in the document is another notion of document length. Since the number of sentences alone is not enough to represent document length, this feature should also be considered. Topical category: different topics such as political, economic, etc. have different writing styles, and this might affect sentence ranking. For instance, numerical entities may appear more in economic or sport reports than in religious or social news. Therefore the weight of this attribute should vary based on a document’s category, so it needs to be included. An overview of our feature set is illustrated by example in figure FIGREF15. Column ID is just for enumeration and column Target is explained in the next section. <<</Explicit Document Features>>> <<</Feature Extraction>>> <<<Target Assignment>>> Every feature vector needs a target value from which the system should learn how to rank sentences. The value of the target is usually determined based on golden summaries. If a sentence is included in a majority of human-written extracts, its target is near 1. In contrast, it would be closer to 0 if the sentence could not be found in any human-made summaries. In some datasets, like the one we used, golden summaries are not absolutely extractive, and they are not composed of exact copies of sentences in the original text.
In such cases, a measure of similarity between the sentence whose target we are looking for, and each ideal summaries’ sentence will be calculated. This results in real values between 0 and 1 for this attribute. Section (SECREF4) includes more details about target assignment. <<</Target Assignment>>> <<<Training Model>>> Since target attribute values vary between zero and one, we opted to use regression methods for the learning task. To build a training and a test set, a global matrix is composed in which every row corresponds to a sentence in the corpus and each column corresponds to a feature. The last column is for target attribute which will be omitted in the test set. It might be required to perform scaling on certain columns, depending on its corresponding feature and range of values. In cases where the dataset is large, the total number of sentences which are not included in golden summaries, and consequently have lower targets, is many times larger than the number of included sentences. This might lead the regression bias toward lower target values. To avoid this, dataset balancing is needed. That is to leave aside a portion of not included sentences and not to feed them to learner model. Lastly, in this phase, the regression model should be fitted on training set and be evaluated on a test set as described in sections (SECREF4) and (SECREF5). <<</Training Model>>> <<</Learning Phase>>> <<<Summarization Phase>>> Having acquired a model that can precisely rank sentences, we can apply it to any new given text and use ranked sentences in order to create a summary. This summarization process could also be executed on dataset texts, in order to evaluate how precisely our method resembles human-written summaries. In this section, we briefly describe the summarization process. The evaluation process is explained in section (SECREF22). <<<Sentence Ranking>>> In comparison with learning phase, in which a global matrix was used, this time a local matrix is composed whose rows correspond with the sentences of the input text. If during learning, any scaling was performed on features, they should be carried out here in the same manner. The matrix is then fed to the regressor obtained in the previous phase, and a rank value between zero and one will be predicted for each sentence. <<</Sentence Ranking>>> <<<Sentence Selection>>> By sorting sentences based on their ranks, the most appropriate sentences for being included in summary will be determined. To preserve readability, however, it is important to place them in the summary in the same order they appeared in the input document. Another consideration is the cut-off length. How many of the top sentences should we select for summary? The answer should be as simple as a constant number, a percentage of total sentences, or it could be determined by more advanced heuristics. We allowed cut-off length to be an input parameter. This allows us, in the evaluation phase, to produce summaries of dataset documents in the same length as golden summaries. This makes the comparison more equitable. <<</Sentence Selection>>> <<</Summarization Phase>>> <<<Evaluation Measures>>> In this section, some measures are described to evaluate the performance of both phases explained in the previous section: the learning phase and summarization phase. The former is evaluated using common regression metrics such as mean square error (MSE) and coefficient of determination (R2). 
The latter is carried out using ROUGE which is a well-known metric for evaluating summarization systems. Mean Square Error (MSE) is the average of squared errors in all estimated targets. An ideal regressor tends to make this measure as near as possible to zero. Though, an exact zero for MSE is not desirable, because it is suspected to be due to over fitting. The coefficient of determination is another metric for evaluating how well a regression model is fitted to data. It ranges from $-\infty $ to 1. As it approaches 1, “goodness-of-fit” is increased, while negative values show that the mean of data is a better estimator for target BIBREF40. ROUGE is proposed in BIBREF41 as an evaluation metric for summaries. It matches n-grams in both system produced summaries and reference summaries and returns the percentage of matching in terms of precision, recall and f-measure. There is a variety of ROUGE family metrics, namely ROUGE-1, ROUGE-2, and ROUGE-L. In ROUGE-1 the overlap of 1-grams, each word, is calculated. In ROUGE-2 the bigrams are considered as units of comparison. The ROUGE-L uses the Longest Common Subsequence (LCS) to measure resemblance. Nevertheless, we found that ROUGE assessments are always relatively high, even for a summary that is produced perfunctorily. Hence, we also designed a random summarizer that selects random sentences for the summary, and evaluated it by ROUGE. This could be used as a baseline for comparison. <<</Evaluation Measures>>> <<</Incorporating Document Features>>> <<<Experiments>>> Two experiments were set up to verify our hypothesis: “sentence ranking is highly dependent to document, and features must also represent context”. The first experiment involves document-unaware features (listed in section SECREF5) alongside TF-ISF. In the second experiment, document-aware features were used instead of document-unaware ones. We also set up a random summarizer based on a random regressor that acts as a baseline for comparisons. More details are recorded in section (SECREF25). A good experimental study should be as reproducible as possible. Here we explain the technical details that are more specific to our dataset, to allow the interested user to set up the same experiments for further research. <<<Dataset>>> We used the Pasokh dataset BIBREF42 that contains 100 Persian news documents each of which is associated with 5 summaries. Each summary consists of several sentences of the original text, selected by a human expert. Some sentences are slightly modified and are not, therefore, an exact copy of any original sentences. Documents are categorized into six categories such as political, economic and so on. The length of documents ranges from 4 to 156 sentences. Overall, it has about 2,500 sentences. <<</Dataset>>> <<<Extracting Features and Scaling>>> All features introduced in section SECREF4 are calculated. Pre-processing, sentence and word tokenization, stop words removal, and part of speech tagging is performed using the Hazm library BIBREF43. The majority of features have a range between zero and one. Other features are passed to a min-max scaler to transform into the same range. For the category feature which is nominal, the one-hot-encoding method applied and six flag features used instead. <<</Extracting Features and Scaling>>> <<</Experiments>>> <<<Results and Discussion>>> In section (SECREF22) MSE, R2 and ROUGE scores are remarked as evaluation measures. The results of our experiments are reported below in terms of these measures. 
For better comparison, we also ran another experiment in which the random regressor was used for ranking sentences and producing summaries. Table TABREF28 compares the MSE and R2 values reported from these experiments. The results show that in experiment 2, the mean squared error is reduced and the R2 score is increased. This means that using document-aware features leads to a more accurate learned model, confirming our hypothesis about the relationship between document features and target ranks. ROUGE scores are displayed separately in terms of precision, recall and f-measure in Figures FIGREF29 to FIGREF31 respectively. F-measure scores are displayed in Figure FIGREF29, comparing ROUGE-1, ROUGE-2 and ROUGE-L. Figures FIGREF30 and FIGREF31 allow comparison of precision and recall scores. The higher values gained in experiment 2 confirm that document-aware features perform better than unaware features. These results can also be interpreted from the viewpoint of entropy-based decision-tree methods. In the learning phase, the impurity of features is measured over the whole dataset, and features with higher information gain are placed in the upper levels of the tree. But in the summarization phase, in which decisions have to be made within a single document, the impurity of those features may be low, causing less effective decisions and lower precision. By incorporating document features, we help the model use different features (and thus different trees) for different documents. Another insight gained from these charts is that a random summarizer resulted in scores of more than 50% in all measures, and that without document-aware features, the model achieves only a small improvement over a random summarizer. <<</Results and Discussion>>> <<<Conclusion>>> This paper has argued that in supervised extractive summarization, we cannot learn to rank by considering dataset sentences as independent training examples. The ranks of sentences within a document depend on each other. To overcome this issue, we suggested incorporating document features explicitly in the feature vector of sentences. We also suggested using features that take the properties of the document into account; we named this kind of features document-aware. The conducted experiments demonstrated the benefit of adding explicit document features, as well as document-aware features, in terms of both model precision and summary quality. For future work, more document-aware features can be examined. It is also possible to run the same experiments on an English (or any other language) dataset, if available. Another direction for study is measuring the degree of entropy difference between the whole dataset and single documents, in a standard dataset. Our source code is hosted on GitHub and is published for later reference, further experiments and reproduction of results. A web interface and a Telegram bot are also implemented as demos. <<</Conclusion>>> <<</Title>>>
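As a rough, hedged sketch of the learning phase described above (global feature matrix, dataset balancing, regression, and MSE/R2 evaluation), the following uses pandas and scikit-learn; the choice of regressor, the column names, and the balancing thresholds are assumptions, since the paper only states that a regression model is fitted on a balanced matrix:

```python
# Hedged sketch of the learning phase. Assumes a pandas DataFrame `df` in which
# each row is a sentence's feature vector and the column "target" is its rank.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

def balance(df, low_thresh=0.1, keep_ratio=0.5, seed=0):
    """Keep only a portion of low-target sentences so they do not dominate training."""
    low = df[df["target"] < low_thresh].sample(frac=keep_ratio, random_state=seed)
    high = df[df["target"] >= low_thresh]
    return pd.concat([low, high]).sample(frac=1.0, random_state=seed)  # shuffle

def train_and_evaluate(df):
    df = balance(df)
    X, y = df.drop(columns=["target"]), df["target"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    # Any regressor could be plugged in here; a random forest is just one example.
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    return model, mean_squared_error(y_te, pred), r2_score(y_te, pred)
```

At summarization time the same fitted model would score the local matrix of a single document, and the top-ranked sentences would be emitted in their original order up to the cut-off length.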
{ "references": [ "No" ], "type": "boolean" }
1909.09018
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What are all machine learning approaches compared in this work? Context: <<<Title>>> Corporate IT-Support Help-Desk Process Hybrid-Automation Solution with Machine Learning Approach <<<Abstract>>> Comprehensive IT support teams in large scale organizations require more man power for handling engagement and requests of employees from different channels on a 24×7 basis. Automated email technical queries help desk is proposed to have instant real-time quick solutions and email categorisation. Email topic modelling with various machine learning, deep-learning approaches are compared with different features for a scalable, generalised solution along with sure-shot static rules. Email's title, body, attachment, OCR text, and some feature engineered custom features are given as input elements. XGBoost cascaded hierarchical models, Bi-LSTM model with word embeddings perform well showing 77.3 overall accuracy For the real world corporate email data set. By introducing the thresholding techniques, the overall automation system architecture provides 85.6 percentage of accuracy for real world corporate emails. Combination of quick fixes, static rules, ML categorization as a low cost inference solution reduces 81 percentage of the human effort in the process of automation and real time implementation. <<</Abstract>>> <<<Introduction>>> In an organization, the Information Technology (IT) support help desk operation is an important unit which handles the IT services of a business. Many large scale organizations would have a comprehensive IT support team to handle engagement and requests with employees on a 24$\times $7 basis. As any routinized tasks, most processes of the support help desk unit are considered repetitive in nature BIBREF0. Some may occur on a daily basis and others may occur more frequently. Many support engineers and agent would spend time on these repetitive task such as entering information to an application, resetting passwords, unlocking applications, creating credentials, activating services, preparing documentation, etc. The industry has now come realize that many repetitive business processes and tasks can be automated by using Robotic Process Automation (RPA) bots or robotic processes automotive software bots BIBREF1. The idea is to take the repetitive workload and hand it over to the RPA bots so that the employees could focus on more value adding tasks and decision making to the organization. The RPA bot would also help to reduce the human errors and make processes more efficient, which would finally intent results in cost saving and productivity increase. Our proposed automated approach is not only focused on automating repetitive tasks but also looking at historical data, enabling IT support desk process to identify unforeseen insights and patterns. Analyzing the data from various sources such as email communications, service request information generated from support ticketing applications and even conversational data from chats has helped us to identify the type of Service Requests (SR) raised and their respective solutions, as well as fixes done by the support agents. This approach has helped us create a classification model to identify the issue types and provide quick fixes and resolutions from the collected data. 
<<</Introduction>>> <<<Related Work>>> WrÃblewska has conducted a project on the topic of RPA of unstructured data which was focused on building an Artificial Intelligence (AI) system dedicated to tasks regarding the processing of formal documents used in different kinds of business procedures BIBREF2. His approach was introduced to automate the debt collecting process. Possible applications of Machine Learning (ML) methods to improve the efficacy of these processes were described. In the case study done by Aguirre, it was concluded that companies should consider RPA to be more suitable for high volume standardized tasks that are rule-driven, with no requirement for subjective judgement, creativity or interpretation skills BIBREF3. Back office business processes such as accounts payable, accounts receivable, billing, travel and expenses, fixed assets and human resource administration are good candidates for RPA. Extreme multi-class and multi-label text classification problems are solved by the methodology named Hierarchical Label Set Expansion (HLSE) BIBREF4. This paper presents the deep Learning architecture devoted to text classification, in which the data labels are regularized, the hierarchical label set is defined and different word embeddings are used BIBREF3, BIBREF5, BIBREF6. The traditional model performed better than the the deep learning models for 8,841 emails collected over 3 years, because this particular classification task carried out by Haoran may not require the ordered sequence representation of tokens that deep learning models provide BIBREF7. This paper claims that a bagged voting model surpasses the performance of any individual models. In their survey, Kamran and other researchers analyzed text feature extractions BIBREF8, BIBREF9, dimentionality reduction methods, existing algorithms and techniques, evaluation methods and limitations BIBREF6 and advantages based on applications. Paramesh et al and Seongwook et al compare the different classification algorithms such as multinomial naive bayes logistic regression, K-Nearest neighbour and Support Vector Machines (SVM) on real-world IT infrastructure ticket classifier system data, using different evaluation metrics in their research BIBREF10, BIBREF11. They claimed that SVM to have performed well on all the data samples. Random forest (RF) or naive bayes (NB) performed best in terms of correctly uncovering human intuitions. Hartmann et al and his team present in their study that RF exhibits high performance in sentiment classification research done on 41 social media data sets covering major social media platforms, where the SVM never outperforms the RF BIBREF12. Cognitive RPA is efficiently undertaken as a low cost solution with Microsoft Azure Language Understanding Intelligent Service (LUIS) BIBREF8 and Azure machine learning studio. Section III of this paper elaborates the process of automation. The section IV explains about the email classification approach, and the section V illustrates the results and their respective analysis. Finally, section VI contains the conclusion of the results. <<</Related Work>>> <<<Method>>> We are proposing a hybrid-process automation, in which we are introducing the automation architecture while adopting the manual process methodology. Incoming emails, that cannot be classified or understood by the knowledge base of the automation system will be sent for manual classification solution. 
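A minimal, hypothetical sketch of the hybrid routing idea stated here (automated handling first, manual classification as the fallback), with the stages detailed in the following subsections; the objects and method names are placeholders, not the deployed system:

```python
# Hypothetical dispatch logic for the hybrid help-desk automation: quick fixes,
# then static keyword rules, then the ML classifier; anything the bot cannot
# handle confidently is routed to the human technical coordinator.

def handle_email(email, quick_fix_bot, static_rules, ml_classifier, threshold=0.8):
    # 1. Try an instant quick fix (e.g., a LUIS-style intent match).
    fix = quick_fix_bot.try_resolve(email)       # returns None if no intent matches
    if fix is not None:
        return ("quick_fix", fix)

    # 2. Try sure-shot static keyword rules.
    category = static_rules.match(email)         # returns None if no rule fires
    if category is not None:
        return ("static_rule", category)

    # 3. Fall back to the ML model, but accept only confident predictions.
    category, confidence = ml_classifier.predict(email)
    if confidence >= threshold:
        return ("ml_model", category)

    # 4. Otherwise, hand over to manual classification.
    return ("manual", None)
```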
<<<Manual Process>>> Providing technical support for large firms around the world has many challenges such as coordinating a vast amounts of mails and matching experts with employees who are in need of that expertise. When a technical issue is raised from a base level employee who works with applications, it is sent to the middle level and then to the higher level management of the respective regional branches throughout the hierarchical business architecture. Once it is approved by the branch manager, the issue email is forwarded to the technical coordinator to categorize the issue based on the priority level and technical requirements. Technical coordinator is responsible for the issues raised from the regional branches all over the world. Each regional branch is given a unique name such as New York, Sydney, London, Beijing and Toronto mentioned as Category1 (cat1). Category1 is identified by looking at the email address of the sender. Each regional branch has different plant applications that need different experts' consultation. Plant applications such as SAP, Darwin and infrastructure are mentioned as Category2 (cat2). The possible plot of the issue emails such as computer, manufacturing, userID, userunlock, financial, planning, purchasing issue generated by employees working in various plant applications across various regions are mentioned as Category3. Mapping table is created with the plants placed in the regional offices and the issues created by the plants. Category1, Category2, Category3 contains 84, 8 and 77 unique categories to be classified. Table I shows some examples for each categories. Once all three categories are finalized by the technical coordinator, email tickets will be created and assigned to the admin-groups. Respective technical people in the admin-groups will provide consultancy and solve the issues. Not only one technician can handle issues assigned to many different admin groups allocated to him, but also particular admin category can be handled by many technicians as a group as well. <<</Manual Process>>> <<<Proposed Automation System>>> In addition to replacing the technical coordinator role with AI bot to classify the raised email-issue tickets for respective admin groups, we propose instant quick fixes for some emails in an automated manner. High level workflow is described in Fig. 1. The AI bot has three main stages Quick fixes Static rules Email classifier All the incoming mails are preprocessed for better quality of inputs. Signatures, greetings, Uniform Resource Locators (URL) are removed. Key body is extracted from the forwarded mails by digging deep into the mail contents. If an email contains attachments, Optical Character Recognition (OCR) is used to extract the text contents from the attachments. <<<Quickfixes>>> Microsoft LUIS is used for instant quick fixes to provide solution based on prioritized emails. Fig. 2 shows the bot framework LUIS architecture that handles the quick fixes. Quick fixes are trained with most occurring samples that need quick solutions. LUIS is a model that artificial intelligence applications use to predict the intention of phrases spoke. There are 3 main key phases categorized as defining phase, training phase and publishing phase. Natural language is extremely flexible with LUIS. Intents are the type of defined words that are supported by utterances. An action the user wants to perform can be defined by an intent. Fig. 3 elaborates the intent matching breakdown mechanism. Entities are identified form the sentences. 
Suitable entity will be selected for generating tickets. If an incoming email is identified with the matched intent, cat1, cat2, cat3 will be allocated. Tickets will be created for admin-groups. The issue will be solved using automated messages through a chat bot solution. If the issue is solved, then the ticket will be closed by the quick fixes. If it is too complicated for the knowledge of the BOT then it creates ticket for adminGroup for the assistance of consultants. The emails identified by static rules and keywords are classified with the highest accuracy. The knowledge base of static rules and keywords are gathered using feature engineering and the insights from the technical coordinator. Remaining emails are sent to a complex ensemble machine learning model to be classified. Different types of emails are treated in a different way for efficient execution and to reduce the error. <<</Quickfixes>>> <<<First mail>>> Fig. 4 shows the flow of email categorization response for new incoming emails. If an incoming mail is a fresh new mail, it is initially subjected to cleaning. OCR will extract the texts from the attachment depending on the attachments' availability. Cat1 is assigned according to the knowledge of the database and sender details. According to the priority, emails are passed through LUIS. Thereafter if LUIS fails to solve the issue ML model will assign the cat2, cat3, Admin group for ticket creation. <<</First mail>>> <<<Forwarded mail>>> If incoming mail is a continuation of previous email, it is directly handled by LUIS question and answer self automated support. Then it follows the normal procedure of categorization. Fig. 5 clearly illustrates the flow. Fig. 6 explains the overall architecture. Static rules are mentioned as T-codes. Every categorized mails has to be assigned to respective consultant denoted as assignTo. <<</Forwarded mail>>> <<</Proposed Automation System>>> <<</Method>>> <<<Email classifier using machine learning>>> <<<Preprocessing>>> Preprocessing is necessary to increase the accuracy of a text classification model, because it avoids the classification model focusing attention on unwanted sentences and intents. Emails are fed into Microsoft-Bot services. It handles the headers and outputs the processed channel-data in JavaScript Object Notation (JSON) format. The channel data summarizes the information such like sender, receiver, body, subject and important metadata. Regular expression (regex) can be used for searching strings by defining a search pattern. Regex findings are created to remove unwanted words from the channel data queries for further processing of the emails. OCR has to be accurate in detecting text in an image. Microsoft-OCR is used for text recognition of this automation process. It extracts the recognized characters into a machine-usable character stream. Accuracy of the text recognition depends on the image quality such as blurry images, small text size, complex background, shadows and handwritten text. Since most of the image attachments are computer generated images and screen shots of error messages, Microsoft-OCR capabilities fits for the use case. 260000 emails are taken from past history. Extracted preprocessed data from Microsoft-Bot and OCR services are saved as Comma-separated Values (CSV) files. It is further processed before feeding to machine learning model. Unwanted words are removed from the context using nltk library stopwords and manually collected stopwords. URLs, punctuation marks are removed. 
Every word is tokenized, lemmatized and normalized, i.e. title, body, OCR, from, to, CC, Cat1, Cat2, and Cat3. <<</Preprocessing>>> <<<Feature selection>>> Since the sender and receiver varies with time because of new employees' arrivals and old employees' resignations. In order to handle this fluctuating situation, To, CC, From columns are dropped from the input data. Cat1 is known from the email address. Cat2, Cat3 for specific cat1 is described in the table1. Cat2 and Cat3 are merged and defined as target category for classification. Nearly 180 custom features are created based on the plant's availability and region mapping. It is embedded to understand the availability of plant and the issue for the given region denoted as Unique-Category. Based on mapping table (extension of table1), custom features ensures that whether the plant application (cat2) and the technical issue (cat3) belongs to the regional plant (cat1). By the analysis made from the existing samples and from the human semantic knowledge of the technical coordinator, it is sensed that not only the title of the email is enough to predict the category, but also the attachment and body play a major role. <<</Feature selection>>> <<<Machine learning approach>>> Even though labelled data set was provided, initially unsupervised learning algorithm K-Nearest Neighbor (KNN) clustering was applied to the data set to observe the possibility of clusters BIBREF13. Since number of unique categories of the target field (Unique-Cat) is 77, there are many common words between categories. It is too confusing and not showing promising categories and accuracies. Multi class multi label classification supervised algorithms such as random forest, XGBoost are used as benchmarks. <<<Random forest>>> Random Forest is a bagging Algorithm, an ensemble learning method for classification that operates by constructing a multitude of decision trees at training time and outputting the class that has highest mean majority vote of the classesBIBREF14. <<</Random forest>>> <<<XGBoost>>> XGBoost is a decision-tree-based ensemble Machine Learning algorithm that uses a gradient boosting framework. It is used commonly in the classification problems involving unstructured dataBIBREF5. <<</XGBoost>>> <<<Hierarchical Model>>> Since the number of target labels are high, achieving the higher accuracy is difficult, while keeping all the categories under same feature selection method. Some categories performs well with lower TF-IDF vectorizing range and higher n grams features even though they showed lower accuracy in the overall single model. Therefore, hierarchical machine learning models are built to classify 31 categories in the first classification model and remaining categories are named as low-accu and predicted as one category. In the next model, predicted low-accu categories are again classified into 47 categories. Comparatively this hierarchical model works well since various feature selection methods are used for various categoriesBIBREF5. <<</Hierarchical Model>>> <<</Machine learning approach>>> <<<Deep learning approach>>> <<<LSTM>>> Long short term memory is an artificial neural network architecture which outperforms most of the machine learning algorithms. In the deep learning approach, feature selection is done in neurons weight matrix by itself. Bidirectional long short term memory (LSTM) is used with glove word embedding to predict the categoriesBIBREF15. 
<<</LSTM>>> <<<BERT>>> Even though BERT is the state-of-the-art model, for the considered data set it did not achieve the required gain in accuracy for the expected automation BIBREF16. When we consider the commercial model for inference, having a dedicated Kubernetes cluster with high-performance computing is costly. So complex models that demand high computation power are not considered a better solution. <<</BERT>>> <<</Deep learning approach>>> <<<Threshold Selection>>> In order to classify only high-confidence emails, thresholds are defined for each of the 73 categories. For an incoming email, the probability of assigning each category is calculated. The best category is selected based on the maximum of those 73 probabilities. Thresholding decisions are made by looking at the overall F-score. For low-accuracy categories (accuracy less than 75 percent), a higher threshold level is set. For middle-accuracy categories (accuracy less than 90 percent), the minimum probability of correctly classified samples is taken. Higher-accuracy categories (accuracy greater than 90 percent) are left with a threshold of 0, so all their incoming emails are classified. The thresholding technique acts as a bottleneck that decreases the number of samples classified by the autonomous process, but it increases the accuracy of the classified samples. The proposed thresholds satisfy both the expected reduction in manual workload and the required accuracy. In this paper, random forest, XGBoost, LSTM, and bidirectional LSTM with embeddings are analyzed with different input features. Complex deep-learning models such as transformers are not used, in order to keep the inference solution low-cost. The train and test sets are split 80:20. Precision, recall, and F-score are taken as evaluation metrics. <<</Threshold Selection>>> <<</Email classifier using machine learning>>> <<<Results and Analysis>>> Automation of quick email replies for technical queries increases the overall efficiency of day-to-day processes by 3 percent. Even though replacing the manual human email assigner entirely with the AI bot is not possible, the automation ML model handles 61 percent of incoming emails correctly, which substantially reduces daily human effort. For generalization purposes, the email's title, body, and attachments are used to increase accuracy, while sender, receiver, and carbon-copy information are ignored. Table II shows the accuracy percentages for different models with different feature selection methods. An accuracy of 77.3 percent was obtained without any thresholding techniques for the 73-class multi-class, multi-label classification problem. With threshold adjustments for each category, it was increased to 85.6 percent. Increasing the threshold values reduces the number of mails classified by the ML model. It is necessary for the ML model to handle only a limited number of high-confidence emails in order to ensure the promised accuracy levels. Feature engineering for custom feature selection and hierarchical cascade modelling increase the accuracy of the XGBoost machine learning model to match that of the LSTM models. By cascading model1 (mod1), with 83.2 accuracy for 31 classes, and model2 (mod2), with 71.1 accuracy for the 47 low-accuracy classes, the overall hierarchical model exhibited 76.5 accuracy. All accuracy figures refer to the F-score. Selected keywords were used as static rules for accurate classification. Since the accuracy is considerably satisfactory for the automation process, the system was deployed.
The incorrectly classified mails are handled manually after proper notification by the technical consultant. Fig. 7 shows the emails classified by the ML model, the static rules and the manual process on a daily basis. Incoming emails per day vary between 30 and 120. The figure clearly illustrates the effect of retraining: after 10 April, both the percentage of emails classified per day and the accuracy increased. Fig. 8 shows the average monthly analysis of incoming mails after each retraining. The average number of incoming mails is calculated as 1467 per month over a 4-month period. Initial training was done in August 2018 with 170,000 samples, and the model was able to classify nearly 50 percent of incoming emails. After the second retraining in January 2019 with 200,000 samples, the model classified 58 percent of incoming mails per month. A third retraining was done in April 2019 with 260,000 samples, after which nearly 61 percent of incoming mails were handled by the ML model. Nearly 20 percent of incoming emails were handled by static rules. The automation bot was thus shown to handle 81 percent of the total incoming mails per month, including ML and static rules, leading to efficient human-machine interaction, instant problem solving and a faster process. <<</Results and Analysis>>> <<<Conclusion>>> Quick fixes from the Microsoft LUIS bot framework provide instant solutions for the raised email queries. Input text features of emails such as the title, body and attachment OCR text, together with the feature-engineered custom features, perform well on the considered real-world email data set. Sure-shot static rules and a hierarchical machine learning model with statistically calculated thresholds enhance the accuracy of the overall system to an acceptable level. A bidirectional LSTM with word embeddings is finally implemented together with the thresholding techniques. Less complex machine learning models lead to low-cost virtual machine solutions for serving. The Robotic Process Automation architecture reduces the human effort of the email support desk by 81 percent while having a reasonable accuracy of 85.6 percent. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Feature selection,Random forest,XGBoost,Hierarchical Model" ], "type": "extractive" }
1911.03154
Please answer the following question with yes or no based on the given text. You only need to output 'Yes' or 'No' without any additional explanation. Question: Has there been previous work on SNMT? Context: <<<Title>>> How to Do Simultaneous Translation Better with Consecutive Neural Machine Translation? <<<Abstract>>> Despite the success of neural machine translation (NMT), simultaneous neural machine translation (SNMT), the task of translating in real time before a full sentence has been observed, remains challenging due to the syntactic structure difference and simultaneity requirements. In this paper, we propose a general framework to improve simultaneous translation with a pretrained consecutive neural machine translation (CNMT) model. Our framework contains two parts: prefix translation that utilizes a pretrained CNMT model to better translate source prefixes and a stopping criterion that determines when to stop the prefix translation. Experiments on three translation corpora and two language pairs show the efficacy of the proposed framework on balancing the quality and latency in simultaneous translation. <<</Abstract>>> <<<Introduction>>> Simultaneous translation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, the task of producing a partial translation of a sentence before the whole input sentence ends, is useful in many scenarios including outbound tourism, international summit and multilateral negotiations. Different from the consecutive translation in which translation quality alone matters, simultaneous translation trades off between translation quality and latency. The syntactic structure difference between the source and target language makes simultaneous translation more challenging. For example, when translating from a verb-final (SOV) language (e.g., Japanese) to a verb-media (SVO) language (e.g., English), the verb appears much later in the source sequence than in the target language. Some premature translations can lead to significant loss in quality BIBREF5. Recently, a number of researchers have endeavored to explore methods for simultaneous translation in the context of NMT BIBREF6, BIBREF7, BIBREF8, BIBREF9. Some of them propose sophisticated training frameworks explicitly designed for simultaneous translation BIBREF5, BIBREF10. These approaches are either memory inefficient during training BIBREF5 or hard to implement BIBREF10. Others utilize a full-sentence base model to perform simultaneous translation by modifications to the encoder and the decoding process. To match the incremental source context, they replace the bidirectional encoder with a left-to-right encoder BIBREF3, BIBREF11, BIBREF4, BIBREF12 or recompute the encoder hidden states BIBREF13. On top of that, heuristic algorithms BIBREF3, BIBREF14 or a READ/WRITE model trained with reinforcement learning BIBREF11, BIBREF4, BIBREF12 or supervised learning BIBREF13 are used to decide, at every step, whether to wait for the next source token or output a target token. However, these models either cannot directly use a pretrained vanilla CNMT model with bidirectional encoder as the base model or work in a sub-optimal way in the decoding stage. In this paper, we study the problem of how to do simultaneous translation better with a pretrained vanilla CNMT model. We formulate simultaneous translation as two nested loops: an outer loop that updates input buffer with newly observed source tokens and an inner loop that translates source tokens in the buffer updated at each outer step. 
For the outer loop, the input buffer can be updated by an ASR system with an arbitrary update schedule. For the inner loop, we perform prefix translation using the pretrained CNMT model with dynamically built encoder and decoder hidden states. We also design two novel stopping criteria for the inner loop: Length and EOS (LE) controller that stops with heuristics, and Trainable (TN) controller that learns to stop with a better quality and latency balance. We evaluate our method on IWSLT16 German-English (DE-EN) translation in both directions, WMT15 English-German (EN-DE) translation in both directions, and NIST Chinese-to-English (ZH$\rightarrow $EN) translation. The result shows our method consistently improves over the de-facto baselines, and achieves low latency and reasonable BLEU scores. <<</Introduction>>> <<<Background>>> Given a set of source–target sentence pairs $\left\langle \mathbf {x}_m,\mathbf {y}^*_m\right\rangle _{m=1}^M$, a consecutive NMT model can be trained by maximizing the log-likelihood of the target sentence from its entire source side context: where $\phi $ is a set of model parameters. At inference time, the NMT model first encodes a source language sentence $\mathbf {x}=\lbrace x_1,...,x_{T_\eta }\rbrace $ with its encoder and passes the encoded representations $\mathbf {h}=\lbrace h_1,...,h_{T_\eta }\rbrace $ to a greedy decoder. Then the greedy decoder generates a translated sentence in the target language by sequentially choosing the most likely token at each step $t$: The distribution of next target word is defined as: where $z_{t}$ is the decoder hidden state at position $t$. In consecutive NMT, once obtained, the encoder hidden states $\mathbf {h}$ and the decoder hidden state $z_t$ are not updated anymore and will be reused during the entire decoding process. <<</Background>>> <<<Simultaneous NMT>>> In SNMT, we receive streaming input tokens, and learn to translate them in real-time. We formulate simultaneous translation as two nested loops: the outer loop that updates an input buffer with newly observed source tokens and the inner loop that translates source tokens in the buffer updated at each outer step. More precisely, suppose at the end of an outer step $s-1$, the input buffer is $\mathbf {x}^{s-1} = \lbrace x_1, ..., x_{\eta \left[ s-1\right]}\rbrace $, and the output buffer is $\mathbf {y}^{s-1} = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $. Then at outer step $s$, the system translates with the following steps: The system observes $c_s > 0$ new source tokens and updates the input buffer to be $\mathbf {x}^{s} = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ where $\eta \left[ s\right]=\eta \left[ s-1\right]+c_s$. Then, the system starts inner loop translation and writes $w_s>=0$ target tokens to the output buffer. The output buffer is updated to be $\mathbf {y}^{s} = \lbrace y_1, ..., y_{\tau \left[ s\right]}\rbrace $ where $\tau \left[ s\right]=\tau \left[ s-1\right]+w_s$. The simultaneous decoding process continues until no more source tokens are added in the outer loop. We define the last outer step as the terminal outer step $S$, and other outer steps as non-terminal outer steps. For the outer loop, we make no assumption about the value of $c_s$, while all previous work assumes $c_s=1$. This setting is more realistic because a) increasing $c_s$ can reduce the number of outer steps, thus reducing computation cost; b) in a real speech translation application, an ASR system may generate multiple tokens at a time. 
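As a rough illustration of this two-loop formulation (and of the state rebuilding described in the next subsection), here is a minimal Python-style sketch; `source_stream`, `rebuild_states`, `greedy_step`, `init_decoder`, `decode_step` and `stop_criterion` are hypothetical stand-ins for the ASR input, the pretrained CNMT model interface and the stopping controllers introduced later, not an actual fairseq-py API:

```python
EOS = "</s>"  # end-of-sentence symbol

def rebuild_states(model, src_prefix, tgt_prefix):
    """Recompute all encoder states for the current source prefix and force-decode
    the committed target prefix, instead of reusing stale states from earlier steps."""
    enc_states = model.encode(src_prefix)
    dec_states = model.init_decoder(enc_states)
    for token in tgt_prefix:
        dec_states = model.decode_step(enc_states, dec_states, token)
    return enc_states, dec_states

def simultaneous_decode(model, source_stream, stop_criterion):
    """Outer loop grows the source buffer; the inner loop extends the translation."""
    src_buffer, tgt_buffer = [], []
    for new_tokens, is_last in source_stream:       # outer step s with c_s new tokens
        src_buffer.extend(new_tokens)
        enc, dec = rebuild_states(model, src_buffer, tgt_buffer)
        while True:                                  # inner loop: prefix translation
            token, dec = model.greedy_step(enc, dec)
            if token == EOS:
                if is_last:
                    return tgt_buffer                # terminal outer step: finished
                break                                # otherwise wait for more source
            tgt_buffer.append(token)
            if not is_last and stop_criterion(len(src_buffer), len(tgt_buffer)):
                break                                # LE/TN controller says: stop
    return tgt_buffer                                # a real system would also cap length
```

The point of the sketch is that both loops operate on the unmodified pretrained CNMT model; only the buffers, the rebuilt states and the stopping decision change between outer steps.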
For the inner loop, we adapt a pretrained vanilla CNMT model to perform partial translation with two important concerns: Prefix translation: given a source prefix $\mathbf {x}^s = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and a target prefix $\mathbf {y}^s_{\tau \left[ s-1\right]} = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $, how to predict the remaining target tokens? Stopping criterion: since the NMT model is trained with full sentences, how to design the stopping criterion for it when translating partial source sentences? <<<Prefix Translation>>> At an outer step $s$, given encoder hidden states $\mathbf {h}^s$ for source prefix $\mathbf {x}^s= \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and decoder hidden states $\mathbf {z}_{\tau \left[ s\right]-1}^s$ for target prefix $\mathbf {y}_{\tau \left[ s-1\right]}^s= \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $, we perform prefix translation sequentially with a greedy decoder: where $t$ starts from $t=\tau \left[ s-1\right]+1$. The prefix translation terminates when a stopping criterion is met, yielding a translation $\mathbf {y}^s = \lbrace y_1, ..., y_{\tau \left[ s\right]}\rbrace $. However, a major problem comes from the above translation method: how can we obtain the encoder hidden states $\mathbf {h}^s$ and decoder hidden states $\mathbf {z}_{\tau \left[ s\right]-1}^s$ at the beginning of prefix translation? In CNMT, the encoder hidden states and previous decoder hidden states are reused at each decoding time step. Different from CNMT, SNMT is fed with an incremental source side context. On the encoder side, we can address this by either reusing previous encoder hidden states BIBREF3, BIBREF4, BIBREF14, BIBREF12: or dynamically re-building all encoder hidden states BIBREF5: On the decoder side, since the encoder hidden states have been updated from $\mathbf {h}^{s-1}$ to $\mathbf {h}^s$, we can choose to reuse previous decoder hidden states BIBREF3, BIBREF4, BIBREF14, BIBREF5: or rebuild all previous decoder hidden states from the current encoder hidden states $\mathbf {h}^s$ with force decoding: To better predict the remaining target tokens, we rebuild all encoder and decoder hidden states following Eq. DISPLAY_FORM11 and DISPLAY_FORM13 at the beginning of prefix translation. This strategy ensures that all encoder and decoder hidden states are obtained by attending to the same source tokens, which is consistent with how encoder and decoder hidden states are computed at training time. Besides, these attended source tokens constitute all of the source context available at the current time. Compared with using Eq. DISPLAY_FORM10 or DISPLAY_FORM12, our method can potentially better utilize the available source context. <<</Prefix Translation>>> <<<Stopping Criterion>>> In consecutive NMT, a decoding algorithm such as greedy decoding or beam search terminates when the translator predicts an EOS token or the length of the translation meets a predefined threshold: where $\text{maxlen}$, $u$ and $v$ are all hyper-parameters. In fairseq-py, they are set to $\text{maxlen}=+\infty $, $u=0$ and $v=200$ at inference time by default. The decoding for most source sentences terminates when the translator predicts the EOS token. In simultaneous decoding, since we use an NMT model pretrained on full sentences to translate partial source sentences, it tends to predict EOS when the source context has been fully translated. However, such a strategy could be too aggressive for simultaneous translation. Fig. FIGREF18 shows such an example.
At outer step 2, the translator predicts “you EOS", emitting the target token “you". However, “you" is not the expected translation for “你" in the context of “你好。". The right decision is that prefix translation at outer step 2 should stop without emitting any words. To alleviate such problems and do better simultaneous translation with a pretrained CNMT model, we propose two novel stopping criteria for prefix translation. <<<Length and EOS Control>>> In consecutive translation, the decoding process stops mainly when predicting EOS. In contrast, for prefix translation at a non-terminal outer step, we use both length and EOS to stop the prefix translation process. We achieve this by setting the hyper-parameters in Eq. DISPLAY_FORM15 as $\text{maxlen}=+\infty $, $u=1$ and $v=-d$, where $d$ is a non-negative integer. The hyper-parameter $d$ determines the translation latency of the system. More specifically, before prefix translation at outer step $s$, we have source prefix $\mathbf {x}^s = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and target prefix $\mathbf {y}_{\tau \left[ s-1\right]}^s = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $. Prefix translation terminates at inner step $w_s$ when predicting an EOS token or satisfying: We call this stopping criterion the Length and EOS (LE) stopping controller. <<</Length and EOS Control>>> <<<Learning When to Stop>>> Although simple and easy to implement, the LE controller lacks the capability to learn the optimal timing with which to stop prefix translation. Therefore, we design a small trainable network called the Trainable (TN) stopping controller to learn when to stop prefix translation at non-terminal outer steps. Fig. FIGREF22 shows the illustration. At each inner decoding step $k$ of a non-terminal outer step $s$, the TN controller utilizes a stochastic policy $\pi _\theta $ parameterized by a neural network to make the binary decision on whether to stop translation at the current stage: where $z_{\tau \left[ s-1\right]+k}^s$ is the current decoder hidden state. The prefix translation stops if the TN controller predicts $a_{\tau \left[ s-1\right]+k}=1$. The controller function $f_\theta $ can take on a variety of forms, and for simplicity we implement it with a feedforward network with two hidden layers, followed by a softmax layer. To train the TN controller, we freeze the NMT model with its pretrained parameters, and optimize the TN network with policy gradient for reward maximization $\mathcal {J}= \mathbb {E}_{\pi _{\theta }}(\sum _{t=1}^{T_\tau } r_t )$. With a trained TN controller, prefix translation stops at inner decoding step $w_s$ when predicting an EOS token or satisfying: In the following, we describe the details of the reward function and the training with policy gradient. <<<Reward>>> To trade off between translation quality and latency, we define the reward function at inner decoding step $k$ of outer step $s$ as: where $t=\tau \left[ s-1\right]+k$, and $r_t^Q$ and $r_t^D$ are rewards related to quality and delay, respectively. $\alpha \ge 0$ is a hyper-parameter that we adjust to balance the trade-off between translation quality and delay. Similar to BIBREF4, we utilize sentence-level BLEU BIBREF15, BIBREF16 with reward shaping BIBREF17 as the reward for quality: where is the intermediate reward. Note that the higher the values of BLEU are, the more rewards the TN controller receives.
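A compact sketch of the two controllers follows. This is a hedged illustration: the LE condition $|\mathbf{y}| \ge |\mathbf{x}| - d$ is inferred from the stated hyper-parameters $u=1$ and $v=-d$, and the hidden sizes of the TN network are assumptions (the text only fixes two hidden layers followed by a softmax over the stop/continue actions):

```python
import torch
import torch.nn as nn

def le_should_stop(num_src_tokens, num_tgt_tokens, d):
    """LE controller: with u = 1 and v = -d, stop once |y| >= |x| - d (inferred)."""
    return num_tgt_tokens >= num_src_tokens - d

class TNController(nn.Module):
    """TN controller: stop/continue policy pi_theta over the current decoder state."""
    def __init__(self, hidden_dim, controller_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, controller_dim), nn.ReLU(),
            nn.Linear(controller_dim, controller_dim), nn.ReLU(),
            nn.Linear(controller_dim, 2),            # actions: 0 = continue, 1 = stop
        )

    def forward(self, decoder_state):
        # Probabilities of continuing vs. stopping prefix translation.
        return torch.softmax(self.net(decoder_state), dim=-1)
```

During training only the TN parameters receive gradients; the pretrained CNMT model that produces `decoder_state` stays frozen, as described above.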
Following BIBREF4, BIBREF5, we use average lagging (AL) as the reward for latency: where $l(t)$ is the number of observed source tokens when generating the $t$-th target token, $t_e= \mathop {\rm argmin}_{t}{(l(t)=|\mathbf {x}|)}$ denotes the earliest point when the system observes the full source sentence, $\lambda =\frac{|\mathbf {y}|}{|\mathbf {x}|}$ represents the target-to-source length ratio and $d^* \ge 0$ is a hyper-parameter called the target delay that indicates the desired system latency. Note that the lower the values of AL are, the more rewards the TN controller receives. <<</Reward>>> <<<Policy Gradient>>> We train the TN controller with policy gradient BIBREF18, and the gradients are: where $R_t=\sum _{i=t}^{T_\tau } r_i$ is the cumulative future reward for the current decision. We can adopt any sampling approach to estimate the expected gradient. In our experiments, we randomly sample multiple action trajectories from the current policy $\pi _{\theta }$ and estimate the gradient with the collected accumulated rewards. We try variance reduction by subtracting a baseline average reward, estimated by a linear regression model, from $R_t$, and find that it does not help to improve the performance. Therefore, we just normalize the reward in each mini-batch without using a baseline reward, for simplicity. <<</Policy Gradient>>> <<</Learning When to Stop>>> <<</Stopping Criterion>>> <<</Simultaneous NMT>>> <<<Experiments>>> <<<Settings>>> <<<Dataset>>> We compare our approach with the baselines on WMT15 German-English (DE-EN) translation in both directions. This is also the most widely used dataset to evaluate SNMT's performance BIBREF3, BIBREF4, BIBREF5, BIBREF10, BIBREF13. To further evaluate our approach's efficacy in trading off translation quality and latency on another language pair and on spoken language, we also conduct experiments with the proposed LE and TN methods on NIST Chinese-to-English (ZH$\rightarrow $EN) translation and IWSLT16 German-English (DE-EN) translation in both directions. For WMT15, we use newstest2014 for validation and newstest2015 for test. For NIST, we use MT02 for validation, and MT05, MT06, MT08 for test. For IWSLT16, we use tst13 for validation and tst14 for test. Table TABREF32 shows the details. All the data is tokenized and segmented into subword symbols using byte-pair encoding BIBREF19 to restrict the size of the vocabulary. We use 40,000 joint merge operations on WMT15, and 24,000 on IWSLT16. For NIST, we use 30,000 merge operations for the source and target sides separately. Unless explicitly mentioned, we simulate the simultaneous translation scenario at inference time with these datasets by assuming that the system observes one new source token at each outer step, i.e., $c_s=1$. <<</Dataset>>> <<<Pretrained NMT Model>>> We use Transformer BIBREF8 trained with maximum likelihood estimation as the pretrained CNMT model and implement our method based on fairseq-py. We follow the setting in transformer_iwslt_de_en for the IWSLT16 dataset, and transformer_wmt_en_de for the WMT15 and NIST datasets. Fairseq-py adds an EOS token to all source sentences during training and inference. Therefore, to be consistent with the CNMT model implemented with fairseq-py, we also add an EOS token at the end of the source prefix for prefix translation. <<</Pretrained NMT Model>>> <<<TN Controller>>> To train the TN controller, we use a mini-batch size of 8,16,16 and sample 5,10,10 trajectories for each sentence pair in a batch for IWSLT16, WMT15 and NIST, respectively.
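As a concrete reference for the latency metric used here, the standard average lagging definition from the cited work can be computed as in the sketch below; it follows the symbols $l(t)$, $t_e$ and $\lambda$ defined in the text, while the paper's exact delay reward (which additionally involves the target delay $d^*$) is not reproduced:

```python
def average_lagging(l, src_len, tgt_len):
    """Average lagging (AL); l[t-1] is the number of source tokens that had been
    read when the t-th target token (1-based) was emitted."""
    lam = tgt_len / src_len                          # lambda = |y| / |x|
    # t_e: earliest decoding step at which the full source sentence has been read
    t_e = next(t for t in range(1, tgt_len + 1) if l[t - 1] == src_len)
    return sum(l[t - 1] - (t - 1) / lam for t in range(1, t_e + 1)) / t_e

# Example: a schedule that is always 3 tokens behind on a 6-token sentence pair.
# average_lagging([3, 4, 5, 6, 6, 6], src_len=6, tgt_len=6) == 3.0
```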
We set the number of newly observed source tokens at each outer step to be 1 during the training for simplicity. We set $\alpha $ to be $0.04$, and $d^*$ to be $2,5,8$. All our TN controllers are trained with policy gradient using the Adam optimizer BIBREF20 with 30,000 updates. We select the last model as our final TN controller. <<</TN Controller>>> <<<Baseline>>> We compare our model against three baselines that utilize a pretrained CNMT model to perform simultaneous translation: test_time_waitk: the test-time waitk simultaneous decoding algorithm proposed in BIBREF5, i.e., using a full-sentence model but decoding it with a waitk policy. We report the results when $k=1,3,5,7,9$. SL: the SL model proposed in BIBREF13, which learns an adaptive READ/WRITE policy from oracle READ/WRITE sequences generated with heuristics. We report the results for $\rho =0.65,0.6,0.55,0.5,0.45,0.4$. BIBREF4: the adaptation of BIBREF4's two-stage full-sentence model + reinforcement learning on Transformer by BIBREF5. We report the results when using $CW=2,5,8$ as the target delay. We report the results with $d=0,2,4,6,8$ for our proposed LE method and $d^*=2,5,8$ for our proposed TN method. For all baselines, we cite the results reported in BIBREF13. Since they did not mention the details of data preprocessing, we cannot compare the BLEU and AL scores directly with theirs. Therefore, we normalize the BLEU and AL scores with their corresponding upper bounds, i.e., the BLEU and AL scores obtained when the pretrained Transformer performs standard greedy decoding (Greedy).
We observe that if reusing previous encoder hidden states (encoder), the translation fails. We ascribe this to the discrepancy between training and decoding for the encoder. We also observe that when $d=0,2$, reusing decoder hidden states (decoder) obtains negative AL. To analyze this, we plot the translation-to-reference length ratio versus AL curve with the right Y axis and X axis. It shows that with decoder, the decoding process stops too early and generates translations that are too short. Therefore, to avoid this problem and to be consistent with the training process of the CNMT model, it is important to dynamically rebuild all encoder/decoder hidden states for prefix translation. Since we make no assumption about $c_s$, i.e., the number of newly observed source tokens at each outer step, we test the effect of different $c_s$ in this section. Fig. FIGREF43 shows the results with the LE and TN controllers on the test set of WMT15 EN$\rightarrow $DE translation. We observe that as $c_s$ increases, both LE and TN tend to improve in quality and worsen in latency. When $c_s=1$, the LE controller obtains the best balance between quality and latency. In contrast, the TN controller obtains a similar quality and latency balance with different $c_s$, demonstrating that the TN controller successfully learns the right timing to stop regardless of the input update schedule. We also analyze the TN controller's adaptability by monitoring the initial delay, i.e., the number of observed source tokens before emitting the first target token, on the test set of WMT15 EN$\rightarrow $DE translation, as shown in Fig. FIGREF52. $d^*$ is the target delay measured with AL (used in Eq. DISPLAY_FORM29). It demonstrates that the TN controller has a lot of variance in its initial delay. The distribution of the initial delay changes with the target delay: with a higher target delay, the average initial delay is larger. For most sentences, the initial delay is within $1-7$. In speech translation, listeners are also concerned with long silences during which no translation occurs. Following BIBREF4, BIBREF5, we use Consecutive Wait (CW) to measure this: Fig. FIGREF54 shows the BLEU-vs-CW plots for our two proposed algorithms. The TN controller has a higher CW than the LE controller. This is because the TN controller prefers to update the output buffer in consecutive bursts (e.g., it often produces $w_s$ as $0\ 0\ 0\ 0\ 3\ 0\ 0\ 0\ 0\ 0\ 5\ 0\ 0\ 0\ 0\ 4\ ...$) while the LE controller often updates its output buffer following the input buffer (e.g., it often produces $w_s$ as $0\ 0\ 0\ 0\ 1\ 1\ 1\ 1\ 1\ 1\ ...$ when $d=4$). Although larger than that of LE, the CW for TN ($< 6$) is acceptable for most speech translation scenarios.
<<</Translation Examples>>> <<</Experiments>>> <<<Related Work>>> A number of works in simultaneous translation divide the translation process into two stages. A segmentation component first divides the incoming text into segments, and then each segment is translated by a translator independently or with previous context. The segmentation boundaries can be predicted by prosodic pauses detected in speech BIBREF0, BIBREF21, linguistic cues BIBREF22, BIBREF23, or a classifier based on alignment information BIBREF24, BIBREF25 and translation accuracy BIBREF1, BIBREF2, BIBREF26. Some authors have recently endeavored to perform simultaneous translation in the context of NMT. BIBREF3, BIBREF14, BIBREF5 introduce a manually designed criterion to control when to translate. BIBREF11, BIBREF4, BIBREF12 extend the criterion into a trainable agent in a reinforcement learning framework. However, these works either develop sophisticated training frameworks explicitly designed for simultaneous translation BIBREF5 or fail to use a pretrained consecutive NMT model in an optimal way BIBREF3, BIBREF14, BIBREF11, BIBREF4, BIBREF12, BIBREF13. In contrast, our work is significantly different from theirs in the way we use a pretrained consecutive NMT model to perform simultaneous translation and in the design of the two stopping criteria. <<</Related Work>>> <<<Conclusion>>> We have presented a novel framework for improving simultaneous translation with a pretrained consecutive NMT model. The basic idea is to translate the partial source sentence with the pretrained consecutive NMT model and stop the translation with two novel stopping criteria. Extensive experiments demonstrate that our method outperforms the state-of-the-art baselines in balancing translation quality and latency. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Yes" ], "type": "boolean" }
1911.03154
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: Which languages do they experiment on? Context: <<<Title>>> How to Do Simultaneous Translation Better with Consecutive Neural Machine Translation? <<<Abstract>>> Despite the success of neural machine translation (NMT), simultaneous neural machine translation (SNMT), the task of translating in real time before a full sentence has been observed, remains challenging due to the syntactic structure difference and simultaneity requirements. In this paper, we propose a general framework to improve simultaneous translation with a pretrained consecutive neural machine translation (CNMT) model. Our framework contains two parts: prefix translation that utilizes a pretrained CNMT model to better translate source prefixes and a stopping criterion that determines when to stop the prefix translation. Experiments on three translation corpora and two language pairs show the efficacy of the proposed framework on balancing the quality and latency in simultaneous translation. <<</Abstract>>> <<<Introduction>>> Simultaneous translation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, the task of producing a partial translation of a sentence before the whole input sentence ends, is useful in many scenarios including outbound tourism, international summit and multilateral negotiations. Different from the consecutive translation in which translation quality alone matters, simultaneous translation trades off between translation quality and latency. The syntactic structure difference between the source and target language makes simultaneous translation more challenging. For example, when translating from a verb-final (SOV) language (e.g., Japanese) to a verb-media (SVO) language (e.g., English), the verb appears much later in the source sequence than in the target language. Some premature translations can lead to significant loss in quality BIBREF5. Recently, a number of researchers have endeavored to explore methods for simultaneous translation in the context of NMT BIBREF6, BIBREF7, BIBREF8, BIBREF9. Some of them propose sophisticated training frameworks explicitly designed for simultaneous translation BIBREF5, BIBREF10. These approaches are either memory inefficient during training BIBREF5 or hard to implement BIBREF10. Others utilize a full-sentence base model to perform simultaneous translation by modifications to the encoder and the decoding process. To match the incremental source context, they replace the bidirectional encoder with a left-to-right encoder BIBREF3, BIBREF11, BIBREF4, BIBREF12 or recompute the encoder hidden states BIBREF13. On top of that, heuristic algorithms BIBREF3, BIBREF14 or a READ/WRITE model trained with reinforcement learning BIBREF11, BIBREF4, BIBREF12 or supervised learning BIBREF13 are used to decide, at every step, whether to wait for the next source token or output a target token. However, these models either cannot directly use a pretrained vanilla CNMT model with bidirectional encoder as the base model or work in a sub-optimal way in the decoding stage. In this paper, we study the problem of how to do simultaneous translation better with a pretrained vanilla CNMT model. We formulate simultaneous translation as two nested loops: an outer loop that updates input buffer with newly observed source tokens and an inner loop that translates source tokens in the buffer updated at each outer step. 
For the outer loop, the input buffer can be updated by an ASR system with an arbitrary update schedule. For the inner loop, we perform prefix translation using the pretrained CNMT model with dynamically built encoder and decoder hidden states. We also design two novel stopping criteria for the inner loop: Length and EOS (LE) controller that stops with heuristics, and Trainable (TN) controller that learns to stop with a better quality and latency balance. We evaluate our method on IWSLT16 German-English (DE-EN) translation in both directions, WMT15 English-German (EN-DE) translation in both directions, and NIST Chinese-to-English (ZH$\rightarrow $EN) translation. The result shows our method consistently improves over the de-facto baselines, and achieves low latency and reasonable BLEU scores. <<</Introduction>>> <<<Background>>> Given a set of source–target sentence pairs $\left\langle \mathbf {x}_m,\mathbf {y}^*_m\right\rangle _{m=1}^M$, a consecutive NMT model can be trained by maximizing the log-likelihood of the target sentence from its entire source side context: where $\phi $ is a set of model parameters. At inference time, the NMT model first encodes a source language sentence $\mathbf {x}=\lbrace x_1,...,x_{T_\eta }\rbrace $ with its encoder and passes the encoded representations $\mathbf {h}=\lbrace h_1,...,h_{T_\eta }\rbrace $ to a greedy decoder. Then the greedy decoder generates a translated sentence in the target language by sequentially choosing the most likely token at each step $t$: The distribution of next target word is defined as: where $z_{t}$ is the decoder hidden state at position $t$. In consecutive NMT, once obtained, the encoder hidden states $\mathbf {h}$ and the decoder hidden state $z_t$ are not updated anymore and will be reused during the entire decoding process. <<</Background>>> <<<Simultaneous NMT>>> In SNMT, we receive streaming input tokens, and learn to translate them in real-time. We formulate simultaneous translation as two nested loops: the outer loop that updates an input buffer with newly observed source tokens and the inner loop that translates source tokens in the buffer updated at each outer step. More precisely, suppose at the end of an outer step $s-1$, the input buffer is $\mathbf {x}^{s-1} = \lbrace x_1, ..., x_{\eta \left[ s-1\right]}\rbrace $, and the output buffer is $\mathbf {y}^{s-1} = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $. Then at outer step $s$, the system translates with the following steps: The system observes $c_s > 0$ new source tokens and updates the input buffer to be $\mathbf {x}^{s} = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ where $\eta \left[ s\right]=\eta \left[ s-1\right]+c_s$. Then, the system starts inner loop translation and writes $w_s>=0$ target tokens to the output buffer. The output buffer is updated to be $\mathbf {y}^{s} = \lbrace y_1, ..., y_{\tau \left[ s\right]}\rbrace $ where $\tau \left[ s\right]=\tau \left[ s-1\right]+w_s$. The simultaneous decoding process continues until no more source tokens are added in the outer loop. We define the last outer step as the terminal outer step $S$, and other outer steps as non-terminal outer steps. For the outer loop, we make no assumption about the value of $c_s$, while all previous work assumes $c_s=1$. This setting is more realistic because a) increasing $c_s$ can reduce the number of outer steps, thus reducing computation cost; b) in a real speech translation application, an ASR system may generate multiple tokens at a time. 
For the inner loop, we adapt a pretrained vanilla CNMT model to perform partial translation with two important concerns: Prefix translation: given a source prefix $\mathbf {x}^s = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and a target prefix $\mathbf {y}^s_{\tau \left[ s-1\right]} = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $, how to predict the remaining target tokens? Stopping criterion: since the NMT model is trained with full sentences, how to design the stopping criterion for it when translating partial source sentences? <<<Prefix Translation>>> At an outer step $s$, given encoder hidden states $\mathbf {h}^s$ for source prefix $\mathbf {x}^s= \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and decoder hidden states $\mathbf {z}_{\tau \left[ s\right]-1}^s$ for target prefix $\mathbf {y}_{\tau \left[ s-1\right]}^s= \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $, we perform prefix translation sequentially with a greedy decoder: where $t$ starts from $t=\tau \left[ s-1\right]+1$. The prefix translation terminates when a stopping criterion is met, yielding a translation $\mathbf {y}^s = \lbrace y_1, ..., y_{\tau \left[ s\right]}\rbrace $. However, a major problem comes from the above translation method: how can we obtain the encoder hidden states $\mathbf {h}^s$ and decoder hidden states $\mathbf {z}_{\tau \left[ s\right]-1}^s$ at the beginning of prefix translation? In CNMT, the encoder hidden states and previous decoder hidden states are reused at each decoding time step. Different from CNMT, SNMT is fed with an incremental source side context. On the encoder side, we can address this by either reusing previous encoder hidden states BIBREF3, BIBREF4, BIBREF14, BIBREF12: or dynamically re-building all encoder hidden states BIBREF5: On the decoder side, since the encoder hidden states have been updated from $\mathbf {h}^{s-1}$ to $\mathbf {h}^s$, we can choose to reuse previous decoder hidden states BIBREF3, BIBREF4, BIBREF14, BIBREF5: or rebuild all previous decoder hidden states from the current encoder hidden states $\mathbf {h}^s$ with force decoding: To better predict the remaining target tokens, we rebuild all encoder and decoder hidden states following Eq. DISPLAY_FORM11 and DISPLAY_FORM13 at the beginning of prefix translation. This strategy ensures that all encoder and decoder hidden states are obtained by attending to the same source tokens, which is consistent with how encoder and decoder hidden states are computed at training time. Besides, these attended source tokens constitute all of the source context available at the current time. Compared with using Eq. DISPLAY_FORM10 or DISPLAY_FORM12, our method can potentially better utilize the available source context. <<</Prefix Translation>>> <<<Stopping Criterion>>> In consecutive NMT, a decoding algorithm such as greedy decoding or beam search terminates when the translator predicts an EOS token or the length of the translation meets a predefined threshold: where $\text{maxlen}$, $u$ and $v$ are all hyper-parameters. In fairseq-py, they are set to $\text{maxlen}=+\infty $, $u=0$ and $v=200$ at inference time by default. The decoding for most source sentences terminates when the translator predicts the EOS token. In simultaneous decoding, since we use an NMT model pretrained on full sentences to translate partial source sentences, it tends to predict EOS when the source context has been fully translated. However, such a strategy could be too aggressive for simultaneous translation. Fig. FIGREF18 shows such an example.
At outer step 2, the translator predicts “you EOS", emitting the target token “you". However, “you" is not the expected translation for “你" in the context of “你好。". The right decision is that prefix translation at outer step 2 should stop without emitting any words. To alleviate such problems and do better simultaneous translation with a pretrained CNMT model, we propose two novel stopping criteria for prefix translation. <<<Length and EOS Control>>> In consecutive translation, the decoding process stops mainly when predicting EOS. In contrast, for prefix translation at a non-terminal outer step, we use both length and EOS to stop the prefix translation process. We achieve this by setting the hyper-parameters in Eq. DISPLAY_FORM15 as $\text{maxlen}=+\infty $, $u=1$ and $v=-d$, where $d$ is a non-negative integer. The hyper-parameter $d$ determines the translation latency of the system. More specifically, before prefix translation at outer step $s$, we have source prefix $\mathbf {x}^s = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and target prefix $\mathbf {y}_{\tau \left[ s-1\right]}^s = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $. Prefix translation terminates at inner step $w_s$ when predicting an EOS token or satisfying: We call this stopping criterion the Length and EOS (LE) stopping controller. <<</Length and EOS Control>>> <<<Learning When to Stop>>> Although simple and easy to implement, the LE controller lacks the capability to learn the optimal timing with which to stop prefix translation. Therefore, we design a small trainable network called the Trainable (TN) stopping controller to learn when to stop prefix translation at non-terminal outer steps. Fig. FIGREF22 shows the illustration. At each inner decoding step $k$ of a non-terminal outer step $s$, the TN controller utilizes a stochastic policy $\pi _\theta $ parameterized by a neural network to make the binary decision on whether to stop translation at the current stage: where $z_{\tau \left[ s-1\right]+k}^s$ is the current decoder hidden state. The prefix translation stops if the TN controller predicts $a_{\tau \left[ s-1\right]+k}=1$. The controller function $f_\theta $ can take on a variety of forms, and for simplicity we implement it with a feedforward network with two hidden layers, followed by a softmax layer. To train the TN controller, we freeze the NMT model with its pretrained parameters, and optimize the TN network with policy gradient for reward maximization $\mathcal {J}= \mathbb {E}_{\pi _{\theta }}(\sum _{t=1}^{T_\tau } r_t )$. With a trained TN controller, prefix translation stops at inner decoding step $w_s$ when predicting an EOS token or satisfying: In the following, we describe the details of the reward function and the training with policy gradient. <<<Reward>>> To trade off between translation quality and latency, we define the reward function at inner decoding step $k$ of outer step $s$ as: where $t=\tau \left[ s-1\right]+k$, and $r_t^Q$ and $r_t^D$ are rewards related to quality and delay, respectively. $\alpha \ge 0$ is a hyper-parameter that we adjust to balance the trade-off between translation quality and delay. Similar to BIBREF4, we utilize sentence-level BLEU BIBREF15, BIBREF16 with reward shaping BIBREF17 as the reward for quality: where is the intermediate reward. Note that the higher the values of BLEU are, the more rewards the TN controller receives.
Following BIBREF4, BIBREF5, we use average lagging (AL) as the reward for latency: where $l(t)$ is the number of observed source tokens when generating the $t$-th target token, $t_e= \mathop {\rm argmin}_{t}{(l(t)=|\mathbf {x}|)}$ denotes the earliest point when the system observes the full source sentence, $\lambda =\frac{|\mathbf {y}|}{|\mathbf {x}|}$ represents the target-to-source length ratio and $d^* \ge 0$ is a hyper-parameter called the target delay that indicates the desired system latency. Note that the lower the values of AL are, the more rewards the TN controller receives. <<</Reward>>> <<<Policy Gradient>>> We train the TN controller with policy gradient BIBREF18, and the gradients are: where $R_t=\sum _{i=t}^{T_\tau } r_i$ is the cumulative future reward for the current decision. We can adopt any sampling approach to estimate the expected gradient. In our experiments, we randomly sample multiple action trajectories from the current policy $\pi _{\theta }$ and estimate the gradient with the collected accumulated rewards. We try variance reduction by subtracting a baseline average reward, estimated by a linear regression model, from $R_t$, and find that it does not help to improve the performance. Therefore, we just normalize the reward in each mini-batch without using a baseline reward, for simplicity. <<</Policy Gradient>>> <<</Learning When to Stop>>> <<</Stopping Criterion>>> <<</Simultaneous NMT>>> <<<Experiments>>> <<<Settings>>> <<<Dataset>>> We compare our approach with the baselines on WMT15 German-English (DE-EN) translation in both directions. This is also the most widely used dataset to evaluate SNMT's performance BIBREF3, BIBREF4, BIBREF5, BIBREF10, BIBREF13. To further evaluate our approach's efficacy in trading off translation quality and latency on another language pair and on spoken language, we also conduct experiments with the proposed LE and TN methods on NIST Chinese-to-English (ZH$\rightarrow $EN) translation and IWSLT16 German-English (DE-EN) translation in both directions. For WMT15, we use newstest2014 for validation and newstest2015 for test. For NIST, we use MT02 for validation, and MT05, MT06, MT08 for test. For IWSLT16, we use tst13 for validation and tst14 for test. Table TABREF32 shows the details. All the data is tokenized and segmented into subword symbols using byte-pair encoding BIBREF19 to restrict the size of the vocabulary. We use 40,000 joint merge operations on WMT15, and 24,000 on IWSLT16. For NIST, we use 30,000 merge operations for the source and target sides separately. Unless explicitly mentioned, we simulate the simultaneous translation scenario at inference time with these datasets by assuming that the system observes one new source token at each outer step, i.e., $c_s=1$. <<</Dataset>>> <<<Pretrained NMT Model>>> We use Transformer BIBREF8 trained with maximum likelihood estimation as the pretrained CNMT model and implement our method based on fairseq-py. We follow the setting in transformer_iwslt_de_en for the IWSLT16 dataset, and transformer_wmt_en_de for the WMT15 and NIST datasets. Fairseq-py adds an EOS token to all source sentences during training and inference. Therefore, to be consistent with the CNMT model implemented with fairseq-py, we also add an EOS token at the end of the source prefix for prefix translation. <<</Pretrained NMT Model>>> <<<TN Controller>>> To train the TN controller, we use a mini-batch size of 8,16,16 and sample 5,10,10 trajectories for each sentence pair in a batch for IWSLT16, WMT15 and NIST, respectively.
We set the number of newly observed source tokens at each outer step to be 1 during the training for simplicity. We set $\alpha $ to be $0.04$, and $d^*$ to be $2,5,8$. All our TN controllers are trained with policy gradient using the Adam optimizer BIBREF20 with 30,000 updates. We select the last model as our final TN controller. <<</TN Controller>>> <<<Baseline>>> We compare our model against three baselines that utilize a pretrained CNMT model to perform simultaneous translation: test_time_waitk: the test-time waitk simultaneous decoding algorithm proposed in BIBREF5, i.e., using a full-sentence model but decoding it with a waitk policy. We report the results when $k=1,3,5,7,9$. SL: the SL model proposed in BIBREF13, which learns an adaptive READ/WRITE policy from oracle READ/WRITE sequences generated with heuristics. We report the results for $\rho =0.65,0.6,0.55,0.5,0.45,0.4$. BIBREF4: the adaptation of BIBREF4's two-stage full-sentence model + reinforcement learning on Transformer by BIBREF5. We report the results when using $CW=2,5,8$ as the target delay. We report the results with $d=0,2,4,6,8$ for our proposed LE method and $d^*=2,5,8$ for our proposed TN method. For all baselines, we cite the results reported in BIBREF13. Since they did not mention the details of data preprocessing, we cannot compare the BLEU and AL scores directly with theirs. Therefore, we normalize the BLEU and AL scores with their corresponding upper bounds, i.e., the BLEU and AL scores obtained when the pretrained Transformer performs standard greedy decoding (Greedy).
We observe that if reusing previous encoder hidden states (encoder), the translation fails. We ascribe this to the discrepancy between training and decoding for the encoder. We also observe that when $d=0,2$, reusing decoder hidden states (decoder) obtains negative AL. To analyze this, we plot the translation-to-reference length ratio versus AL curve with the right Y axis and X axis. It shows that with decoder, the decoding process stops too early and generates translations that are too short. Therefore, to avoid this problem and to be consistent with the training process of the CNMT model, it is important to dynamically rebuild all encoder/decoder hidden states for prefix translation. Since we make no assumption about $c_s$, i.e., the number of newly observed source tokens at each outer step, we test the effect of different $c_s$ in this section. Fig. FIGREF43 shows the results with the LE and TN controllers on the test set of WMT15 EN$\rightarrow $DE translation. We observe that as $c_s$ increases, both LE and TN tend to improve in quality and worsen in latency. When $c_s=1$, the LE controller obtains the best balance between quality and latency. In contrast, the TN controller obtains a similar quality and latency balance with different $c_s$, demonstrating that the TN controller successfully learns the right timing to stop regardless of the input update schedule. We also analyze the TN controller's adaptability by monitoring the initial delay, i.e., the number of observed source tokens before emitting the first target token, on the test set of WMT15 EN$\rightarrow $DE translation, as shown in Fig. FIGREF52. $d^*$ is the target delay measured with AL (used in Eq. DISPLAY_FORM29). It demonstrates that the TN controller has a lot of variance in its initial delay. The distribution of the initial delay changes with the target delay: with a higher target delay, the average initial delay is larger. For most sentences, the initial delay is within $1-7$. In speech translation, listeners are also concerned with long silences during which no translation occurs. Following BIBREF4, BIBREF5, we use Consecutive Wait (CW) to measure this: Fig. FIGREF54 shows the BLEU-vs-CW plots for our two proposed algorithms. The TN controller has a higher CW than the LE controller. This is because the TN controller prefers to update the output buffer in consecutive bursts (e.g., it often produces $w_s$ as $0\ 0\ 0\ 0\ 3\ 0\ 0\ 0\ 0\ 0\ 5\ 0\ 0\ 0\ 0\ 4\ ...$) while the LE controller often updates its output buffer following the input buffer (e.g., it often produces $w_s$ as $0\ 0\ 0\ 0\ 1\ 1\ 1\ 1\ 1\ 1\ ...$ when $d=4$). Although larger than that of LE, the CW for TN ($< 6$) is acceptable for most speech translation scenarios.
<<</Translation Examples>>> <<</Experiments>>> <<<Related Work>>> A number of works in simultaneous translation divide the translation process into two stages. A segmentation component first divides the incoming text into segments, and then each segment is translated by a translator independently or with previous context. The segmentation boundaries can be predicted by prosodic pauses detected in speech BIBREF0, BIBREF21, linguistic cues BIBREF22, BIBREF23, or a classifier based on alignment information BIBREF24, BIBREF25 and translation accuracy BIBREF1, BIBREF2, BIBREF26. Some authors have recently endeavored to perform simultaneous translation in the context of NMT. BIBREF3, BIBREF14, BIBREF5 introduce a manually designed criterion to control when to translate. BIBREF11, BIBREF4, BIBREF12 extend the criterion into a trainable agent in a reinforcement learning framework. However, these works either develop sophisticated training frameworks explicitly designed for simultaneous translation BIBREF5 or fail to use a pretrained consecutive NMT model in an optimal way BIBREF3, BIBREF14, BIBREF11, BIBREF4, BIBREF12, BIBREF13. In contrast, our work is significantly different from theirs in the way we use a pretrained consecutive NMT model to perform simultaneous translation and in the design of the two stopping criteria. <<</Related Work>>> <<<Conclusion>>> We have presented a novel framework for improving simultaneous translation with a pretrained consecutive NMT model. The basic idea is to translate the partial source sentence with the pretrained consecutive NMT model and stop the translation with two novel stopping criteria. Extensive experiments demonstrate that our method outperforms the state-of-the-art baselines in balancing translation quality and latency. <<</Conclusion>>> <<</Title>>>
{ "references": [ "German,English,Chinese" ], "type": "extractive" }
1911.03154
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What corpora is used? Context: <<<Title>>> How to Do Simultaneous Translation Better with Consecutive Neural Machine Translation? <<<Abstract>>> Despite the success of neural machine translation (NMT), simultaneous neural machine translation (SNMT), the task of translating in real time before a full sentence has been observed, remains challenging due to the syntactic structure difference and simultaneity requirements. In this paper, we propose a general framework to improve simultaneous translation with a pretrained consecutive neural machine translation (CNMT) model. Our framework contains two parts: prefix translation that utilizes a pretrained CNMT model to better translate source prefixes and a stopping criterion that determines when to stop the prefix translation. Experiments on three translation corpora and two language pairs show the efficacy of the proposed framework on balancing the quality and latency in simultaneous translation. <<</Abstract>>> <<<Introduction>>> Simultaneous translation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, the task of producing a partial translation of a sentence before the whole input sentence ends, is useful in many scenarios including outbound tourism, international summit and multilateral negotiations. Different from the consecutive translation in which translation quality alone matters, simultaneous translation trades off between translation quality and latency. The syntactic structure difference between the source and target language makes simultaneous translation more challenging. For example, when translating from a verb-final (SOV) language (e.g., Japanese) to a verb-media (SVO) language (e.g., English), the verb appears much later in the source sequence than in the target language. Some premature translations can lead to significant loss in quality BIBREF5. Recently, a number of researchers have endeavored to explore methods for simultaneous translation in the context of NMT BIBREF6, BIBREF7, BIBREF8, BIBREF9. Some of them propose sophisticated training frameworks explicitly designed for simultaneous translation BIBREF5, BIBREF10. These approaches are either memory inefficient during training BIBREF5 or hard to implement BIBREF10. Others utilize a full-sentence base model to perform simultaneous translation by modifications to the encoder and the decoding process. To match the incremental source context, they replace the bidirectional encoder with a left-to-right encoder BIBREF3, BIBREF11, BIBREF4, BIBREF12 or recompute the encoder hidden states BIBREF13. On top of that, heuristic algorithms BIBREF3, BIBREF14 or a READ/WRITE model trained with reinforcement learning BIBREF11, BIBREF4, BIBREF12 or supervised learning BIBREF13 are used to decide, at every step, whether to wait for the next source token or output a target token. However, these models either cannot directly use a pretrained vanilla CNMT model with bidirectional encoder as the base model or work in a sub-optimal way in the decoding stage. In this paper, we study the problem of how to do simultaneous translation better with a pretrained vanilla CNMT model. We formulate simultaneous translation as two nested loops: an outer loop that updates input buffer with newly observed source tokens and an inner loop that translates source tokens in the buffer updated at each outer step. 
For the outer loop, the input buffer can be updated by an ASR system with an arbitrary update schedule. For the inner loop, we perform prefix translation using the pretrained CNMT model with dynamically built encoder and decoder hidden states. We also design two novel stopping criteria for the inner loop: Length and EOS (LE) controller that stops with heuristics, and Trainable (TN) controller that learns to stop with a better quality and latency balance. We evaluate our method on IWSLT16 German-English (DE-EN) translation in both directions, WMT15 English-German (EN-DE) translation in both directions, and NIST Chinese-to-English (ZH$\rightarrow $EN) translation. The result shows our method consistently improves over the de-facto baselines, and achieves low latency and reasonable BLEU scores. <<</Introduction>>> <<<Background>>> Given a set of source–target sentence pairs $\left\langle \mathbf {x}_m,\mathbf {y}^*_m\right\rangle _{m=1}^M$, a consecutive NMT model can be trained by maximizing the log-likelihood of the target sentence from its entire source side context: where $\phi $ is a set of model parameters. At inference time, the NMT model first encodes a source language sentence $\mathbf {x}=\lbrace x_1,...,x_{T_\eta }\rbrace $ with its encoder and passes the encoded representations $\mathbf {h}=\lbrace h_1,...,h_{T_\eta }\rbrace $ to a greedy decoder. Then the greedy decoder generates a translated sentence in the target language by sequentially choosing the most likely token at each step $t$: The distribution of next target word is defined as: where $z_{t}$ is the decoder hidden state at position $t$. In consecutive NMT, once obtained, the encoder hidden states $\mathbf {h}$ and the decoder hidden state $z_t$ are not updated anymore and will be reused during the entire decoding process. <<</Background>>> <<<Simultaneous NMT>>> In SNMT, we receive streaming input tokens, and learn to translate them in real-time. We formulate simultaneous translation as two nested loops: the outer loop that updates an input buffer with newly observed source tokens and the inner loop that translates source tokens in the buffer updated at each outer step. More precisely, suppose at the end of an outer step $s-1$, the input buffer is $\mathbf {x}^{s-1} = \lbrace x_1, ..., x_{\eta \left[ s-1\right]}\rbrace $, and the output buffer is $\mathbf {y}^{s-1} = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $. Then at outer step $s$, the system translates with the following steps: The system observes $c_s > 0$ new source tokens and updates the input buffer to be $\mathbf {x}^{s} = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ where $\eta \left[ s\right]=\eta \left[ s-1\right]+c_s$. Then, the system starts inner loop translation and writes $w_s>=0$ target tokens to the output buffer. The output buffer is updated to be $\mathbf {y}^{s} = \lbrace y_1, ..., y_{\tau \left[ s\right]}\rbrace $ where $\tau \left[ s\right]=\tau \left[ s-1\right]+w_s$. The simultaneous decoding process continues until no more source tokens are added in the outer loop. We define the last outer step as the terminal outer step $S$, and other outer steps as non-terminal outer steps. For the outer loop, we make no assumption about the value of $c_s$, while all previous work assumes $c_s=1$. This setting is more realistic because a) increasing $c_s$ can reduce the number of outer steps, thus reducing computation cost; b) in a real speech translation application, an ASR system may generate multiple tokens at a time. 
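To make the two-nested-loop decoding procedure above more concrete, here is a minimal Python sketch of the control flow. It is an illustration only: the `encode`, `force_decode`, `greedy_step` and `should_stop` callables are hypothetical placeholders standing in for the pretrained CNMT model and the stopping controller, and do not correspond to any function in the paper or in fairseq-py.

```python
from typing import Callable, List, Sequence, Tuple

EOS = "</s>"

def simultaneous_decode(
    chunks: Sequence[List[str]],                 # outer-loop updates: c_s new source tokens per step
    encode: Callable[[List[str]], object],       # rebuilds encoder states h^s from the current prefix
    force_decode: Callable[[object, List[str]], object],   # rebuilds decoder states for y^{s-1}
    greedy_step: Callable[[object, object], Tuple[str, object]],  # one greedy decoding step
    should_stop: Callable[[object, int, int], bool],              # stopping controller (LE or TN)
) -> List[str]:
    """Outer loop: grow the input buffer with the c_s tokens observed at each
    outer step.  Inner loop: extend the translation of the current prefix
    until EOS or the stopping controller fires (non-terminal steps only)."""
    x_buf: List[str] = []  # input buffer  x^s
    y_buf: List[str] = []  # output buffer y^s
    for s, new_tokens in enumerate(chunks):
        x_buf.extend(new_tokens)              # observe c_s new source tokens
        terminal = (s == len(chunks) - 1)     # terminal outer step S
        enc = encode(x_buf)                   # dynamically rebuild all encoder states
        dec = force_decode(enc, y_buf)        # dynamically rebuild all decoder states
        while True:                           # prefix translation (inner loop)
            token, dec = greedy_step(enc, dec)
            if token == EOS:
                break
            if not terminal and should_stop(dec, len(x_buf), len(y_buf)):
                break                         # stop without emitting this token
            y_buf.append(token)
    return y_buf
```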
For the inner loop, we adapt a pretrained vanilla CNMT model to perform partial translation with two important concerns: Prefix translation: given a source prefix $\mathbf {x}^s = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and a target prefix $\mathbf {y}^s_{\tau \left[ s-1\right]} = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $, how to predict the remaining target tokens? Stopping criterion: since the NMT model is trained with full sentences, how to design the stopping criterion for it when translating partial source sentences? <<<Prefix Translation>>> At an outer step $s$, given encoder hidden states $\mathbf {h}^s$ for source prefix $\mathbf {x}^s= \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and decoder hidden states $\mathbf {z}_{\tau \left[ s\right]-1}^s$ for target prefix $\mathbf {y}_{\tau \left[ s-1\right]}^s= \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $, we perform prefix translation sequentially with a greedy decoder: where $t$ starts from $t=\tau \left[ s-1\right]+1$. The prefix translation terminates when a stopping criterion is met, yielding a translation $\mathbf {y}^s = \lbrace y_1, ..., y_{\tau \left[ s\right]}\rbrace $. However, a major problem arises with the above translation method: how can we obtain the encoder hidden states $\mathbf {h}^s$ and decoder hidden states $\mathbf {z}_{\tau \left[ s\right]-1}^s$ at the beginning of prefix translation? In CNMT, the encoder hidden states and previous decoder hidden states are reused at each decoding time step. Different from CNMT, SNMT is fed with an incremental source-side context. On the encoder side, we can address this by either reusing previous encoder hidden states BIBREF3, BIBREF4, BIBREF14, BIBREF12: or dynamically re-building all encoder hidden states BIBREF5: On the decoder side, since the encoder hidden states have been updated from $\mathbf {h}^{s-1}$ to $\mathbf {h}^s$, we can choose to reuse previous decoder hidden states BIBREF3, BIBREF4, BIBREF14, BIBREF5: or rebuild all previous decoder hidden states from the current encoder hidden states $\mathbf {h}^s$ with force decoding: To better predict the remaining target tokens, we rebuild all encoder and decoder hidden states following Eq. DISPLAY_FORM11 and DISPLAY_FORM13 at the beginning of prefix translation. This strategy ensures that all encoder and decoder hidden states are obtained by attending to the same source tokens, which is consistent with how encoder and decoder hidden states are computed at training time. Moreover, these attainable source tokens are all of the source context available at the current time. Compared with using Eq. DISPLAY_FORM10 or DISPLAY_FORM12, our method can potentially better utilize the available source context. <<</Prefix Translation>>> <<<Stopping Criterion>>> In consecutive NMT, a decoding algorithm such as greedy decoding or beam search terminates when the translator predicts an EOS token or the length of the translation meets a predefined threshold: where $\text{maxlen}$, $u$ and $v$ are all hyper-parameters. In fairseq-py, they are set to $\text{maxlen}=+\infty $, $u=0$ and $v=200$ at inference time by default. The decoding for most source sentences terminates when the translator predicts the EOS token. In simultaneous decoding, since we use an NMT model pretrained on full sentences to translate partial source sentences, it tends to predict EOS when the source context has been fully translated. However, such a strategy could be too aggressive for simultaneous translation. Fig. FIGREF18 shows such an example.
At outer step 2, the translator predicts “you EOS”, emitting the target token “you”. However, “you” is not the expected translation for “你” in the context of “你好。”. The right decision is that prefix translation at outer step 2 should stop without emitting any words. To alleviate such problems and do better simultaneous translation with a pretrained CNMT model, we propose two novel stopping criteria for prefix translation. <<<Length and EOS Control>>> In consecutive translation, the decoding process stops mainly when predicting EOS. In contrast, for prefix translation at non-terminal outer steps, we use both length and EOS to stop the prefix translation process. We achieve this by setting the hyper-parameters in Eq. DISPLAY_FORM15 to $\text{maxlen}=+\infty $, $u=1$ and $v=-d$, where $d$ is a non-negative integer. The hyper-parameter $d$ determines the translation latency of the system. More specifically, before prefix translation at outer step $s$, we have source prefix $\mathbf {x}^s = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and target prefix $\mathbf {y}_{\tau \left[ s-1\right]}^s = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $. Prefix translation terminates at inner step $w_s$ when predicting an EOS token or satisfying: We call this stopping criterion the Length and EOS (LE) stopping controller. <<</Length and EOS Control>>> <<<Learning When to Stop>>> Although simple and easy to implement, the LE controller lacks the capability to learn the optimal timing with which to stop prefix translation. Therefore, we design a small trainable network, called the Trainable (TN) stopping controller, to learn when to stop prefix translation at non-terminal outer steps. Fig. FIGREF22 shows the illustration. At each inner decoding step $k$ of non-terminal outer step $s$, the TN controller utilizes a stochastic policy $\pi _\theta $ parameterized by a neural network to make the binary decision on whether to stop translation at the current stage: where $z_{\tau \left[ s-1\right]+k}^s$ is the current decoder hidden state. The prefix translation stops if the TN controller predicts $a_{\tau \left[ s-1\right]+k}=1$. The controller function $f_\theta $ can take on a variety of forms, and for simplicity we implement it with a feedforward network with two hidden layers, followed by a softmax layer. To train the TN controller, we freeze the NMT model with pretrained parameters, and optimize the TN network with policy gradient for reward maximization $\mathcal {J}= \mathbb {E}_{\pi _{\theta }}(\sum _{t=1}^{T_\tau } r_t )$. With a trained TN controller, prefix translation stops at inner decoding step $w_s$ when predicting an EOS token or satisfying: In the following, we describe the details of the reward function and of training with policy gradient. <<<Reward>>> To trade off between translation quality and latency, we define the reward function at inner decoding step $k$ of outer step $s$ as: where $t=\tau \left[ s-1\right]+k$, and $r_t^Q$ and $r_t^D$ are rewards related to quality and delay, respectively. $\alpha \ge 0$ is a hyper-parameter that we adjust to balance the trade-off between translation quality and delay. Similar to BIBREF4, we utilize sentence-level BLEU BIBREF15, BIBREF16 with reward shaping BIBREF17 as the reward for quality: where is the intermediate reward. Note that the higher the values of BLEU are, the more rewards the TN controller receives.
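Before moving on to the latency part of the reward, here is a small illustration of the LE rule defined earlier in this passage. With $\text{maxlen}=+\infty$, $u=1$ and $v=-d$, the length condition (whose displayed equation is omitted in this excerpt) reduces to stopping once the target prefix is within $d$ tokens of the observed source prefix; the sketch below is a reading of that rule, not code from the paper.

```python
def le_should_stop(num_src_tokens: int,
                   num_tgt_tokens: int,
                   d: int,
                   predicted_eos: bool,
                   terminal_outer_step: bool) -> bool:
    """Length-and-EOS (LE) stopping rule for prefix translation.

    At non-terminal outer steps, stop on EOS or when the target prefix
    length reaches num_src_tokens - d (i.e. u = 1, v = -d).  At the
    terminal outer step only EOS (or maxlen) ends decoding.
    """
    if predicted_eos:
        return True
    if terminal_outer_step:
        return False
    return num_tgt_tokens >= num_src_tokens - d


# With d = 2 and 7 observed source tokens, prefix translation for the current
# outer step stops once 5 target tokens have been produced.
assert le_should_stop(7, 5, d=2, predicted_eos=False, terminal_outer_step=False)
```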
Following BIBREF4, BIBREF5, we use average lagging (AL) as the reward for latency: where $l(t)$ is the number of observed source tokens when generating the $t$-th target token, $t_e= \mathop {\rm argmin}_{t}{(l(t)=|\mathbf {x}|)}$ denotes the earliest point when the system observes the full source sentence, $\lambda =\frac{|\mathbf {y}|}{|\mathbf {x}|}$ represents the target-to-source length ratio and $d^* \ge 0$ is a hyper-parameter called the target delay that indicates the desired system latency. Note that the lower the values of AL are, the more rewards the TN controller receives. <<</Reward>>> <<<Policy Gradient>>> We train the TN controller with policy gradient BIBREF18, and the gradients are: where $R_t=\sum _{i=t}^{T_\tau } r_i$ is the cumulative future reward for the current decision. We can adopt any sampling approach to estimate the expected gradient. In our experiments, we randomly sample multiple action trajectories from the current policy $\pi _{\theta }$ and estimate the gradient with the collected accumulated reward. We also try a variance reduction technique that subtracts a baseline average reward, estimated by a linear regression model, from $R_t$, and find that it does not improve performance. Therefore, we simply normalize the reward in each mini-batch without using a baseline reward. <<</Policy Gradient>>> <<</Learning When to Stop>>> <<</Stopping Criterion>>> <<</Simultaneous NMT>>> <<<Experiments>>> <<<Settings>>> <<<Dataset>>> We compare our approach with the baselines on WMT15 German-English (DE-EN) translation in both directions. This is also the most widely used dataset to evaluate SNMT's performance BIBREF3, BIBREF4, BIBREF5, BIBREF10, BIBREF13. To further evaluate our approach's efficacy in trading off translation quality and latency on another language pair and on spoken language, we also conduct experiments with the proposed LE and TN methods on NIST Chinese-to-English (ZH$\rightarrow $EN) translation and IWSLT16 German-English (DE-EN) translation in both directions. For WMT15, we use newstest2014 for validation and newstest2015 for test. For NIST, we use MT02 for validation, and MT05, MT06, MT08 for test. For IWSLT16, we use tst13 for validation and tst14 for test. Table TABREF32 shows the details. All the data is tokenized and segmented into subword symbols using byte-pair encoding BIBREF19 to restrict the size of the vocabulary. We use 40,000 joint merge operations on WMT15, and 24,000 on IWSLT16. For NIST, we use 30,000 merge operations for the source and target sides separately. Unless explicitly mentioned otherwise, we simulate the simultaneous translation scenario at inference time with these datasets by assuming that the system observes one new source token at each outer step, i.e., $c_s=1$. <<</Dataset>>> <<<Pretrained NMT Model>>> We use the Transformer BIBREF8 trained with maximum likelihood estimation as the pretrained CNMT model and implement our method based on fairseq-py. We follow the setting in transformer_iwslt_de_en for the IWSLT16 dataset, and transformer_wmt_en_de for the WMT15 and NIST datasets. Fairseq-py adds an EOS token to all source sentences during training and inference. Therefore, to be consistent with the CNMT model implemented with fairseq-py, we also add an EOS token at the end of the source prefix for prefix translation. <<</Pretrained NMT Model>>> <<<TN Controller>>> To train the TN controller, we use a mini-batch size of 8, 16, 16 and sample 5, 10, 10 trajectories for each sentence pair in a batch for IWSLT16, WMT15 and NIST, respectively.
We set the number of newly observed source tokens at each outer step to be 1 during the training for simplicity. We set $\alpha $ to be $0.04$, and $d^*$ to be $2,5,8$. All our TN controllers are trained with policy gradient using Adam optimizer BIBREF20 with 30,000 updates. We select the last model as our final TN controller. <<</TN Controller>>> <<<Baseline>>> We compare our model against three baselines that utilize a pretrained CNMT model to perform simultaneous translation: test_time_waitk: the test-time waitk simultaneous decoding algorithm proposed in BIBREF5, i.e., using a full-sentence model but decoding it with a waitk policy. We report the results when $k=1,3,5,7,9$. SL: the SL model proposed in BIBREF13, which learns an adaptive READ/WRITE policy from oracle READ/WRITE sequences generated with heuristics. We report the results $\rho =0.65,0.6,0.55,0.5,0.45,0.4$. BIBREF4: the adaptation of BIBREF4's two-staged full-sentence model + reinforcement learning on Transformer by BIBREF5. We report the results when using $CW=2,5,8$ as the target delay. We report the result with $d=0,2,4,6,8$ for our proposed LE method and $d^*=2,5,8$ for our proposed TN method. For all baselines, we cite the results reported in BIBREF13. Since they did not mention the details of data preprocessing, we cannot compare the BLEU and AL scores directly with theirs. Therefore, we normalize the BLEU and AL scores with its corresponding upper bound, i.e. the BLEU and AL scores obtained when the pretrained Transformer performs standard greedy decoding (Greedy). <<</Baseline>>> <<</Settings>>> <<<Results>>> We compare our method with the baselines on the test set of WMT15 EN$\rightarrow $DE and DE$\rightarrow $EN translation tasks. Fig. FIGREF40 shows the result. The points closer to the upper left corner indicate better overall performance, namely low latency and high quality. In all these figures, we observe that, as latency increases, all methods improve in quality. The TN stopping controller significantly outperforms all the baseline systems in both translation tasks, demonstrating that it indeed learns the appropriate timing to stop prefix translation. The LE controller outperforms the baselines on WMT15 EN$\rightarrow $DE translation at high latency region and performs similarly or worse on other cases. We show the model's efficacy in trading off quality and latency on other language pair and spoken language in Fig. FIGREF41. The TN controller obtains better performance on all translation tasks, especially at the low latency region. For example, on IWSLT16 EN$\rightarrow $ DE translation, it is +$2.5$ to +$3.3$ BLEU ahead of the LE method. TN also obtains promising translation quality with acceptable latency: with a lag of $<7$ tokens, TN obtains 96.95%, 97.20% and 94.03% BLEU with respect to consecutive greedy decoding for IWSLT16 EN$\rightarrow $DE, IWSLT16 DE$\rightarrow $EN and NIST ZH$\rightarrow $EN translation, respectively. <<</Results>>> <<<Analyze>>> We analyze the effect of different ways (Eq. DISPLAY_FORM10-DISPLAY_FORM13) to obtain the encoder and decoder hidden states at the beginning of prefix translation with the LE controller. Fig. FIGREF42 shows the result. We try three variants: a) dynamically rebuild all encoder/decoder hidden states (none); b) reuse decoder hidden states and rebuild all encoder hidden states (decoder); c) reuse previous encoder hidden states and rebuild all decoder hidden states (encoder). The left Y axis and X axis show BLEU-vs-AL curve. 
We observe that if reusing previous encoder hidden states (encoder), the translation fails. We ascribe this to the discrepancy between training and decoding for the encoder. We also observe that when $d=0,2$, reusing decoder hidden states (decoder) obtains negative AL. To analyze this, we plot the translation-to-reference length ratio versus AL curve with the right Y axis and X axis. It shows that with decoder, the decoding process stops too early and generates translations that are too short. Therefore, to avoid such problems and to be consistent with the training process of the CNMT model, it is important to dynamically rebuild all encoder/decoder hidden states for prefix translation. Since we make no assumption about $c_s$, i.e., the number of newly observed source tokens at each outer step, we test the effect of different $c_s$ in this section. Fig. FIGREF43 shows the result with the LE and TN controllers on the test set of WMT15 EN$\rightarrow $DE translation. We observe that as $c_s$ increases, both LE and TN tend to improve in quality and worsen in latency. When $c_s=1$, the LE controller obtains the best balance between quality and latency. In contrast, the TN controller obtains a similar quality and latency balance with different $c_s$, demonstrating that the TN controller successfully learns the right timing to stop regardless of the input update schedule. We also analyze the TN controller's adaptability by monitoring the initial delay, i.e., the number of observed source tokens before emitting the first target token, on the test set of WMT15 EN$\rightarrow $DE translation, as shown in Fig. FIGREF52. $d^*$ is the target delay measured with AL (used in Eq. DISPLAY_FORM29). It demonstrates that the TN controller has a lot of variance in its initial delay. The distribution of the initial delay changes with different target delays: with a higher target delay, the average initial delay is larger. For most sentences, the initial delay is within $1-7$. In speech translation, listeners are also concerned with long silences during which no translation occurs. Following BIBREF4, BIBREF5, we use Consecutive Wait (CW) to measure this: Fig. FIGREF54 shows the BLEU-vs-CW plots for our two proposed algorithms. The TN controller has higher CW than the LE controller. This is because the TN controller prefers to update the output buffer in consecutive bursts (e.g., it often produces $w_s$ as $0\ 0\ 0\ 0\ 3\ 0\ 0\ 0\ 0\ 0\ 5\ 0\ 0\ 0\ 0\ 4\ ...$) while the LE controller often updates its output buffer following the input buffer (e.g., it often produces $w_s$ as $0\ 0\ 0\ 0\ 1\ 1\ 1\ 1\ 1\ 1\ ...$ when $d=4$). Although larger than that of LE, the CW for TN ($< 6$) is acceptable for most speech translation scenarios. <<</Analyze>>> <<<Translation Examples>>> Fig. FIGREF55 shows three translation examples with the LE and TN controllers on the test sets of NIST ZH$\rightarrow $EN and WMT15 EN$\rightarrow $DE translation. In a manual inspection of these examples and others, we find that the TN controller learns a conservative timing for stopping prefix translation. For example, in example 2, our method outputs the translation “wu bangguo attended the signing ceremony” when observing “吴邦国 出席 签字 仪式 并”, instead of the more radical translation “wu bangguo attended the signing ceremony and”. Such a strategy helps to alleviate the problem of premature translation, i.e., translating before observing enough future context.
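For readers who want to recompute the latency numbers used throughout this analysis, below is a small sketch of the standard Average Lagging (AL) computation, following the definition of $l(t)$, $t_e$ and $\lambda$ quoted earlier. It covers only the AL metric itself; the exact latency reward with target delay $d^*$ and the CW statistic are not reproduced here.

```python
from typing import Sequence

def average_lagging(l: Sequence[int], src_len: int, tgt_len: int) -> float:
    """Average Lagging.  l[t-1] is the number of source tokens that had been
    observed when the t-th target token was generated (t is 1-indexed).

        AL = (1 / t_e) * sum_{t=1}^{t_e} ( l(t) - (t - 1) / lam ),

    where lam = tgt_len / src_len and t_e is the earliest decoding step at
    which the full source sentence has been observed."""
    lam = tgt_len / src_len
    # first step where the whole source has been read (assumed to exist)
    t_e = next(t for t, obs in enumerate(l, start=1) if obs == src_len)
    return sum(l[t - 1] - (t - 1) / lam for t in range(1, t_e + 1)) / t_e


# A wait-2-like schedule on a 5-token source and 5-token target:
print(average_lagging([2, 3, 4, 5, 5], src_len=5, tgt_len=5))  # -> 2.0
```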
<<</Translation Examples>>> <<</Experiments>>> <<<Related Work>>> A number of works in simultaneous translation divide the translation process into two stages. A segmentation component first divides the incoming text into segments, and then each segment is translated by a translator independently or with previous context. The segmentation boundaries can be predicted by prosodic pauses detected in speech BIBREF0, BIBREF21, linguistic cues BIBREF22, BIBREF23, or a classifier based on alignment information BIBREF24, BIBREF25 and translation accuracy BIBREF1, BIBREF2, BIBREF26. Some authors have recently endeavored to perform simultaneous translation in the context of NMT. BIBREF3, BIBREF14, BIBREF5 introduce a manually designed criterion to control when to translate. BIBREF11, BIBREF4, BIBREF12 extend the criterion into a trainable agent in a reinforcement learning framework. However, these works either develop sophisticated training frameworks explicitly designed for simultaneous translation BIBREF5 or fail to use a pretrained consecutive NMT model in an optimal way BIBREF3, BIBREF14, BIBREF11, BIBREF4, BIBREF12, BIBREF13. In contrast, our work differs significantly from theirs in the way it uses a pretrained consecutive NMT model to perform simultaneous translation and in the design of the two stopping criteria. <<</Related Work>>> <<<Conclusion>>> We have presented a novel framework for improving simultaneous translation with a pretrained consecutive NMT model. The basic idea is to translate partial source sentences with the pretrained consecutive NMT model and to stop the translation with two novel stopping criteria. Extensive experiments demonstrate that our method outperforms the state-of-the-art baselines in balancing translation quality and latency. <<</Conclusion>>> <<</Title>>>
{ "references": [ "IWSLT16,WMT15,NIST" ], "type": "extractive" }
1909.05360
Please answer the following question with yes or no based on the given text. You only need to output 'Yes' or 'No' without any additional explanation. Question: Is this the first paper to propose a joint model for event and temporal relation extraction? Context: <<<Title>>> Joint Event and Temporal Relation Extraction with Shared Representations and Structured Prediction <<<Abstract>>> We propose a joint event and temporal relation extraction model with shared representation learning and structured prediction. The proposed method has two advantages over existing work. First, it improves event representation by allowing the event and relation modules to share the same contextualized embeddings and neural representation learner. Second, it avoids error propagation in the conventional pipeline systems by leveraging structured inference and learning methods to assign both the event labels and the temporal relation labels jointly. Experiments show that the proposed method can improve both event extraction and temporal relation extraction over state-of-the-art systems, with the end-to-end F1 improved by 10% and 6.8% on two benchmark datasets respectively. <<</Abstract>>> <<<Introduction>>> The extraction of temporal relations among events is an important natural language understanding (NLU) task that can benefit many downstream tasks such as question answering, information retrieval, and narrative generation. The task can be modeled as building a graph for a given text, whose nodes represent events and edges are labeled with temporal relations correspondingly. Figure FIGREF1 illustrates such a graph for the text shown therein. The nodes assassination, slaughtered, rampage, war, and Hutu are the candidate events, and different types of edges specify different temporal relations between them: assassination is BEFORE rampage, rampage INCLUDES slaughtered, and the relation between slaughtered and war is VAGUE. Since “Hutu” is actually not an event, a system is expected to annotate the relations between “Hutu” and all other nodes in the graph as NONE (i.e., no relation). As far as we know, all existing systems treat this task as a pipeline of two separate subtasks, i.e., event extraction and temporal relation classification, and they also assume that gold events are given when training the relation classifier BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Specifically, they built end-to-end systems that extract events first and then predict temporal relations between them (Fig. FIGREF1). In these pipeline models, event extraction errors will propagate to the relation classification step and cannot be corrected afterwards. Our first contribution is the proposal of a joint model that extracts both events and temporal relations simultaneously (see Fig. FIGREF1). The motivation is that if we train the relation classifier with NONE relations between non-events, then it will potentially have the capability of correcting event extraction mistakes. For instance in Fig. FIGREF1, if the relation classifier predicts NONE for (Hutu, war) with a high confidence, then this is a strong signal that can be used by the event classifier to infer that at least one of them is not an event. Our second contribution is that we improve event representations by sharing the same contextualized embeddings and neural representation learner between the event extraction and temporal relation extraction modules for the first time. 
On top of the shared embeddings and neural representation learner, the proposed model produces a graph-structured output representing all the events and relations in the given sentences. A valid graph prediction in this context should satisfy two structural constraints. First, the temporal relation should always be NONE between two non-events or between one event and one non-event. Second, for those temporal relations among events, no loops should exist due to the transitive property of time (e.g., if A is before B and B is before C, then A must be before C). The validity of a graph is guaranteed by solving an integer linear programming (ILP) optimization problem with those structural constraints, and our joint model is trained by structural support vector machines (SSVM) in an end-to-end fashion. Results show that, according to the end-to-end $F_1$ score for temporal relation extraction, the proposed method improves CAEVO BIBREF3 by 10% on TB-Dense, and improves CogCompTime BIBREF6 by 6.8% on MATRES. We further show ablation studies to confirm that the proposed joint model with shared representations and structured learning is very effective for this task. <<</Introduction>>> <<<Related Work>>> In this section we briefly summarize the existing work on event extraction and temporal relation extraction. To the best of our knowledge, there is no prior work on joint event and relation extraction, so we will review joint entity and relation extraction works instead. Existing event extraction methods in the temporal relation domain, as in the TempEval3 workshop BIBREF2, all use conventional machine learning models (logistic regression, SVM, or Max-entropy) with hand-engineered features (e.g., ClearTK BIBREF7 and NavyTime BIBREF8). While other domains have shown progress on event extraction using neural methods BIBREF9, BIBREF10, BIBREF11, recent progress in the temporal relation domain is focused more on the setting where gold events are provided. Therefore, we first show the performance of a neural event extractor on this task, although it is not our main contribution. Early attempts on temporal relation extraction use local pair-wise classification with hand-engineered features BIBREF12, BIBREF0, BIBREF13, BIBREF14. Later efforts, such as ClearTK BIBREF7, UTTime BIBREF15, NavyTime BIBREF8, and CAEVO BIBREF3 improve earlier work with better linguistic and syntactic rules. BIBREF16, BIBREF4, BIBREF17 explore structured learning for this task, and more recently, neural methods have also been shown effective BIBREF18, BIBREF19, BIBREF20, BIBREF5. In practice, we need to extract both events and those temporal relations among them from raw text. All the works above treat this as two subtasks that are solved in a pipeline. To the best of our knowledge, there has been no existing work on joint event-temporal relation extraction. However, the idea of “joint” has been studied for entity-relation extraction in many works. BIBREF21 frame their joint model as table filling tasks, map tabular representation into sequential predictions with heuristic rules, and construct global loss to compute the best joint predictions. BIBREF22 define a global structure for joint entity and relation extraction, encode local and global features based on domain and linguistic knowledge. and leverage beam-search to find global optimal assignments for entities and relations. BIBREF23 leverage LSTM architectures to jointly predict both entity and relations, but fall short on ensuring prediction consistency. 
BIBREF24 combine the benefits of both neural networks and global optimization with beam search. Motivated by these works, we propose an end-to-end trainable neural structured support vector machine (neural SSVM) model to simultaneously extract events and their relations from text and ensure the global structure via ILP constraints. Next, we will describe our proposed method in detail. <<</Related Work>>> <<<Joint Event-Relation Extraction Model>>> In this section we first provide an overview of our neural SSVM model, and then describe each component in our framework in detail (i.e., the multi-tasking neural scoring module, and how inference and learning are performed). We denote the set of all possible relation labels (including NONE) as $\mathcal {R}$, all event candidates (both events and non-events) as $\mathcal {E}$, and all relation candidates as $\mathcal {E}\mathcal {E}$. <<<Neural SSVM>>> Our neural SSVM adapts the SSVM loss as: where $\bar{S}^n_{\mathcal {E}} = S(\hat{y}^n_\mathcal {E}; x^n) - S(y^n_\mathcal {E};x^n)$ and $\bar{S}^n_{\mathcal {R}} = S(\hat{y}^n_\mathcal {R}; x^n) - S(y^n_\mathcal {R};x^n)$; $\Phi $ denotes model parameters, $n$ indexes instances, $M^n = |\mathcal {E}|^n + |\mathcal {E}\mathcal {E}|^n$ denotes the total number of events $|\mathcal {E}|^n$ and relations $|\mathcal {E}\mathcal {E}|^n$ in instance $n$. $y^n,\hat{y}^n$ denote the gold and predicted global assignments of events and relations for instance $n$—each of which consists of either one-hot vectors representing true and predicted relation labels $y_{\mathcal {R}}^n, \hat{y}_{\mathcal {R}}^n \in \lbrace 0, 1\rbrace ^{|\mathcal {E}\mathcal {E}|}$, or entity labels $y_{\mathcal {E}}^n, \hat{y}_{\mathcal {E}}^n \in \lbrace 0, 1\rbrace ^{|\mathcal {E}|}$. A maximum a posteriori probability (MAP) inference is needed to find $\hat{y}^n$, which we formulate as an integer linear programming (ILP) problem and describe in more detail in Section SECREF12. $\Delta (y^n, \hat{y}^n)$ is a distance measure between the gold and the predicted assignments; we simply use the Hamming distance. $C$ and $C_{\mathcal {E}}$ are hyper-parameters to balance the losses between events, relations and the regularizer, and $S(y^n_\mathcal {E};x^n), S(y^n_\mathcal {R};x^n)$ are scoring functions, which we learn with a multi-tasking neural architecture. The intuition behind the SSVM loss is that it requires the score of the gold output structure $y^n$ to be greater than the score of the best output structure under the current model $\hat{y}^n$ by a margin $\Delta (y^n, \hat{y}^n)$, or else there will be some loss. The training objective is to minimize the loss. The major difference between our neural SSVM and the traditional SSVM model is the scoring function. Traditional SSVM uses a linear function over hand-crafted features to compute the scores, whereas we propose to use a recurrent neural network to estimate the scoring function and train the entire architecture end-to-end. <<</Neural SSVM>>> <<<Multi-Tasking Neural Scoring Function>>> The recurrent neural network (RNN) architecture has been widely adopted by prior temporal extraction work to encode context information BIBREF18, BIBREF19, BIBREF20. Motivated by these works, we adopt an RNN-based scoring function for both event and relation prediction in order to learn features in a data-driven way and capture long-term contexts in the input. In Fig. FIGREF6, we skip the input layer for simplicity.
The bottom layer corresponds to contextualized word representations denoted as $v_k$. We use ($i, j$) $\in \mathcal {E}\mathcal {E}$ to denote a candidate relation and $i \in \mathcal {E}$ to indicate a candidate event in the input sentences of length N. We fix word embeddings computed by a pre-trained BERT-base model BIBREF27. They are then fed into a BiLSTM layer to further encode task-specific contextual information. Both event and relation tasks share this layer. The event scorer is illustrated by the left two branches following the BiLSTM layer. We simply concatenate both forward and backward hidden vectors to encode the context of each token. As for the relation scorer shown in the right branches, for each pair ($i,j$) we take the forward and backward hidden vectors corresponding to them, $f_i, b_i, f_j, b_j$, and concatenate them with linguistic features as in previous event relation prediction research. We denote linguistic features as $L_{i,j}$ and only use simple features provided in the original datasets: token distance, tense, and polarity of events. Finally, all hidden vectors and linguistic features are concatenated to form the input to compute the probability of being an event or a softmax distribution over all possible relation labels—which we refer to as the RNN-based scoring function in the following sections. <<</Multi-Tasking Neural Scoring Function>>> <<<MAP Inference>>> A MAP inference is needed both during training to obtain $\hat{y}^n$ in the loss function (Equation DISPLAY_FORM8), as well as during the test time to get globally coherent assignments. We formulate the inference problem as an ILP problem. The inference framework is established by constructing a global objective function using scores from local scorers and imposing several global constraints: 1) one-label assignment, 2) event-relation consistency, and 3) symmetry and transitivity as in BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF4. <<<Objective Function>>> The objective function of the global inference is to find the global assignment that has the highest probability under the current model, as specified in Equation DISPLAY_FORM14: where $y^e_k$ is a binary indicator of whether the $k$-th candidate is an event or not, and $y^r_{i,j}$ is a binary indicator specifying whether the global prediction of the relation between $(i,j)$ is $r \in \mathcal {R}$. $S(y^e_k,x), \forall e \in \lbrace 0, 1\rbrace $ and $S(y^r_{i,j},x), \forall r \in \mathcal {R}$ are the scoring functions obtained from the event and relation scoring functions, respectively. The output of the global inference $\bf {\hat{y}}$ is a collection of optimal label assignments for all events and relation candidates in a fixed context. $C_{\mathcal {E}}$ is a hyper-parameter controlling weights between relation and event. The constraint that follows immediately from the objective function is that the global inference should only assign one label for all entities and relations. <<</Objective Function>>> <<<Constraints>>> We introduce several additional constraints to ensure the resulting optimal output graph forms a valid and plausible event graph. <<<Event-Relation Consistency.>>> Event and relation prediction consistency is defined with the following property: a pair of input tokens have a positive temporal relation if and only if both tokens are events. The following global constraints will satisfy this property, where $e^P_i$ denotes an event and $e^N_i$ denotes a non-event token. 
$r^P_{i,j}$ indicates positive relations: BEFORE, AFTER, SIMULTANEOUS, INCLUDES, IS_INCLUDED, VAGUE and $r^N_{i,j}$ indicates a negative relation, i.e., NONE. A formal proof of this property can be found in Appendix A. <<</Event-Relation Consistency.>>> <<<Symmetry and Transitivity Constraint.>>> We also explore the symmetry and transitivity constraints of relations. They are specified as follows: Intuitively, the symmetry constraint forces two pairs of events with flipped orders to have reversed relations. For example, if $r_{i,j}$ = BEFORE, then $r_{j,i}$ = AFTER. The transitivity constraint requires that if the ($i,j$), ($j,k$) and ($i,k$) pairs exist in the graph, the label (relation) prediction of the ($i,k$) pair has to fall into the transitivity set specified by the ($i,j$) and ($j,k$) pairs. The full transitivity table can be found in BIBREF25. <<</Symmetry and Transitivity Constraint.>>> <<</Constraints>>> <<</MAP Inference>>> <<<Learning>>> We begin by experimenting with optimizing the SSVM loss directly, but model performance degrades. Therefore, we develop a two-stage learning approach which first trains a pipeline version of the joint model without feedback from global constraints. In other words, the local neural scoring functions are optimized with a cross-entropy loss using gold events and relation candidates that are constructed directly from the outputs of the event model. During the second stage, we switch to the global SSVM loss function in Equation DISPLAY_FORM8 and re-optimize the network to adjust for global properties. We will provide more details in Section SECREF4. <<</Learning>>> <<</Joint Event-Relation Extraction Model>>> <<<Implementation Details>>> In this section we describe the implementation details of the baselines and our four models to build an end-to-end event temporal relation extraction system, with an emphasis on the structured joint model. In Section SECREF6 we will compare and contrast them and show why our proposed structured joint model works the best. <<<Baselines>>> We run two event and relation extraction systems, CAEVO BIBREF3 and CogCompTime BIBREF6, on TB-Dense and MATRES, respectively. These two methods both leverage conventional learning algorithms (i.e., MaxEnt and averaged perceptron, respectively) based on manually designed features to obtain separate models for events and temporal relations, and conduct end-to-end relation extraction as a pipeline. Note that BIBREF3 does not report event and end-to-end temporal relation extraction performance, so we calculate the scores based on our implementation. <<</Baselines>>> <<<End-to-End Event Temporal Relation Extraction>>> <<<Single-Task Model.>>> The most basic way to build an end-to-end system is to train separate event detection and relation prediction models with gold labels, as we mentioned in our introduction. In other words, the BiLSTM layer is not shared as in Fig. FIGREF6. During evaluation and test time, we use the outputs from the event detection model to construct relation candidates and apply the relation prediction model to make the final prediction. <<</Single-Task Model.>>> <<<Multi-Task Model.>>> This is the same as the single-task model except that the BiLSTM layer is now shared for both the event and relation tasks. Note that neither the single-task nor the multi-task model is trained to tackle the NONE relation directly. They both rely on the predictions of the event model to annotate relations as either positive pairs or NONE.
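Stepping back to the MAP inference described earlier in this section, the sketch below shows how the one-label, event-relation consistency and symmetry constraints could be written as an ILP with the Gurobi Python API (the solver named later in the implementation details). The toy label set, the random stand-in scores and the particular linear encodings of the constraints are illustrative assumptions rather than the paper's exact formulation, and the transitivity constraints are omitted for brevity.

```python
import itertools
import random
import gurobipy as gp
from gurobipy import GRB

# Toy instance: 3 candidate tokens, a reduced relation label set plus NONE.
tokens = [0, 1, 2]
pairs = list(itertools.permutations(tokens, 2))
labels = ["BEFORE", "AFTER", "NONE"]
reverse = {"BEFORE": "AFTER", "AFTER": "BEFORE", "NONE": "NONE"}

# Local scores S(y^e_k, x) and S(y^r_{i,j}, x) would come from the neural
# scorers; random numbers stand in for them here.
random.seed(0)
ev_score = {k: random.uniform(-1, 1) for k in tokens}  # score of "token k is an event"
rel_score = {(i, j, lab): random.uniform(-1, 1) for (i, j) in pairs for lab in labels}

m = gp.Model("joint_event_relation_inference")
e = {k: m.addVar(vtype=GRB.BINARY, name=f"e_{k}") for k in tokens}
r = {(i, j, lab): m.addVar(vtype=GRB.BINARY, name=f"r_{i}_{j}_{lab}")
     for (i, j) in pairs for lab in labels}

# Objective: total score of the chosen global assignment.
m.setObjective(
    gp.quicksum(ev_score[k] * e[k] for k in tokens)
    + gp.quicksum(rel_score[i, j, lab] * r[i, j, lab]
                  for (i, j) in pairs for lab in labels),
    GRB.MAXIMIZE,
)

for (i, j) in pairs:
    # One-label assignment: exactly one relation label per candidate pair.
    m.addConstr(gp.quicksum(r[i, j, lab] for lab in labels) == 1)
    # Event-relation consistency: a positive label requires both arguments to
    # be events, and two events must not be labelled NONE.
    pos = gp.quicksum(r[i, j, lab] for lab in labels if lab != "NONE")
    m.addConstr(pos <= e[i])
    m.addConstr(pos <= e[j])
    m.addConstr(e[i] + e[j] - 1 <= pos)
    # Symmetry: flipping the pair reverses the label.
    for lab in labels:
        m.addConstr(r[i, j, lab] == r[j, i, reverse[lab]])

m.optimize()
print({k: int(v.X) for k, v in e.items()})  # global event assignment
```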
<<</Multi-Task Model.>>> <<<Pipeline Joint Model.>>> This shares the same architecture as the multi-task model, except that during training, we use the predictions of the event model to construct relation candidates to train the relation model. This strategy will generate NONE pairs during training if one argument of the relation candidate is not an event. These NONE pairs will help the relation model to distinguish negative relations from positive ones, and thus become more robust to event prediction errors. We train this model with gold events and relation candidates during the first several epochs in order to obtain a relatively accurate event model and switch to a pipeline version afterwards, inspired by BIBREF23. <<</Pipeline Joint Model.>>> <<<Structured Joint Model.>>> This is described in detail in Section SECREF3. However, we experience difficulties in training the model with the SSVM loss from scratch. This is due to the large number of non-event tokens, which the model is not capable of distinguishing in the beginning. We thus adopt a two-stage learning procedure where we take the best pipeline joint model and re-optimize it with the SSVM loss. To restrict the search space for events in the ILP inference of the SSVM loss, we use the predicted probabilities from the event detection model to filter out non-events, since the event model has a strong performance, as shown in Section SECREF6. Note that this is very different from the pipeline model, where events are first predicted and relations are constructed with predicted events. Here, we only leverage an additional hyper-parameter $T_{evt}$ to filter out highly unlikely event candidates. Both event and relation labels are assigned simultaneously during the global inference with ILP, as specified in Section SECREF12. We also filter out tokens with POS tags that do not appear in the training set, as most of the events are either nouns or verbs in TB-Dense, and all events are verbs in MATRES. <<</Structured Joint Model.>>> <<<Hyper-Parameters.>>> All single-task, multi-task and pipeline joint models are trained by minimizing the cross-entropy loss. We observe that model performance varies significantly with the dropout ratio, the hidden layer dimensions of the BiLSTM model and the entity weight in the loss function (with the relation weight fixed at 1.0). We leverage a pre-trained BERT model to compute word embeddings, and all MLP scoring functions have one hidden layer. In the SSVM loss function, we fix the value of $C = 1$, but fine-tune $C_\mathcal {E}$ in the objective function in Equation DISPLAY_FORM14. Hyper-parameters are chosen using a standard development set for TB-Dense and a random holdout set based on an 80/20 split of training data for MATRES. To solve the ILP in the inference process, we leverage an off-the-shelf solver provided by the Gurobi optimizer; i.e., the best solutions from the Gurobi optimizer are inputs to the global training. The best combination of hyper-parameters can be found in Table 9 in our appendix. <<</Hyper-Parameters.>>> <<</End-to-End Event Temporal Relation Extraction>>> <<</Implementation Details>>> <<<Experimental Setup>>> In this section we first provide a brief overview of temporal relation data and describe the specific datasets used in this paper. We also explain the evaluation metrics at the end. <<<Temporal Relation Data>>> Temporal relation corpora such as TimeBank BIBREF32 and RED BIBREF33 facilitate the research in temporal relation extraction. The common issue in these corpora is missing annotations.
Collecting densely annotated temporal relation corpora with all events and relations fully annotated is reported to be a challenging task as annotators could easily overlook some facts BIBREF34, BIBREF35, BIBREF3, BIBREF4, which made both modeling and evaluation extremely difficult in previous event temporal relation research. The TB-Dense dataset mitigates this issue by forcing annotators to examine all pairs of events within the same or neighboring sentences, and it has been widely evaluated on this task BIBREF3, BIBREF4, BIBREF19, BIBREF5. Recent data construction efforts such as MATRES BIBREF25 further enhance the data quality by using a multi-axis annotation scheme and adopting a start-point of events to improve inter-annotator agreements. We use TB-Dense and MATRES in our experiments and briefly summarize the data statistics in Table TABREF33. <<</Temporal Relation Data>>> <<<Evaluation Metrics>>> To be consistent with previous research, we adopt two different evaluation metrics. The first one is the standard micro-average scores. For densely annotated data, the micro-average metric should share the same precision, recall and F1 scores. However, since our joint model includes NONE pairs, we follow the convention of IE tasks and exclude them from evaluation. The second one is similar except that we exclude both NONE and VAGUE pairs following BIBREF6. Please refer to Figure 4 in the appendix for a visualizations of the two metrics. <<</Evaluation Metrics>>> <<</Experimental Setup>>> <<<Results and Analysis>>> The main results of this paper can be found in Table TABREF34. All best-recall and F1 scores are achieved by our structured joint model, and the results outperform the baseline systems by 10.0% and 6.8% on end-to-end relation extraction per F1 scores and 3.5% and 2.6% on event extraction per F1 scores. The best precision score for the TB-Dense dataset is achieved by CAEVO, which indicates that the linguistic rule-based system can make highly precise predictions by being conservative. Table TABREF35 shows a more detailed analysis, in which we can see that our single-task models with BERT embeddings and a BiLSTM encoder already outperform the baseline systems on end-to-end relation extraction tasks by 4.9% and 4.4% respectively. In the following sections we discuss step-by-step improvement by adopting multi-task, pipeline joint, and structured joint models on end-to-end relation extraction, event extraction, and relation extraction on gold event pairs. <<<End-to-End Relation Extraction>>> <<<TB-Dense.>>> The improvements over the single-task model per F1 score are 4.1% and 4.2% for the multi-task and pipeline joint model respectively. This indicates that the pipeline joint model is helpful only marginally. Table TABREF46 shows that the structured joint model improves both precision and recall scores for BEFORE and AFTER and achieves the best end-to-end relation extraction performance at 49.4%—which outperforms the baseline system by 10.0% and the single-task model by 5.1%. <<</TB-Dense.>>> <<<MATRES.>>> Compared to the single-task model, the multi-task model improves F1 scores by 1.5%, while the pipeline joint model improves F1 scores by 1.3%—which means that pipeline joint training does not bring any gains for MATRES. The structured joint model reaches the best end-to-end F1 score at 59.6%, which outperforms the baseline system by 6.8% and the single-task model by 2.4%. 
We speculate that the gains come from the joint model's ability to help deal with NONE pairs, since recall scores for BEFORE and AFTER increase by 1.5% and 1.1%, respectively (Table 10 in our appendix). <<</MATRES.>>> <<</End-to-End Relation Extraction>>> <<<Event Extraction>>> <<</Event Extraction>>> <<<Relation Extraction with Gold Events>>> <<</Relation Extraction with Gold Events>>> <<<Discussion>>> <<<Label Imbalance.>>> One way to mitigate the label imbalance issue is to increase the sample weights for small classes during model training. We investigate the impact of class weights by refitting our single-task model with larger weights on INCLUDES, IS_INCLUDED and VAGUE in the cross-entropy loss. Figure FIGREF50 shows that increasing class weights up to 4 times can significantly improve the F1 scores of the INCLUDES and IS_INCLUDED classes with a decrease of less than 2% in the overall F1 score. The performance of INCLUDES and IS_INCLUDED eventually degrades when the class weights are too large. These results seem to suggest that more labeled data is needed in order to improve the performance on these two classes and on the overall model. For SIMULTANEOUS, our model does not make any correct predictions on either TB-Dense or MATRES even when increasing the class weight up to 10 times, which implies that SIMULTANEOUS could be a hard temporal relation to predict in general. <<</Label Imbalance.>>> <<<Global Constraints.>>> In Table TABREF51 we conduct an ablation study to understand the contributions from the event-relation prediction consistency constraint and the temporal relation transitivity constraint for the structured joint model. As we can see, the event-relation consistency constraint helps improve the F1 scores by 0.9% and 1% for TB-Dense and MATRES, respectively, but the gain from using transitivity is either non-existent or marginal. We hypothesize two potential reasons: 1) we leverage BERT contextualized embeddings as word representations, which could already capture transitivity in the input context; 2) NONE pairs could make the transitivity rule less useful, as positive pairs can be predicted as NONE and the transitivity rule does not apply to NONE pairs. <<</Global Constraints.>>> <<<Error Analysis.>>> By comparing gold and predicted labels for events and temporal relations and examining predicted probabilities for events, we identified three major sources of mistakes made by our structured model, as illustrated in Table TABREF57 with examples. <<</Error Analysis.>>> <<<Type 1.>>> Both events in Ex 1 are assigned low scores by the event module ($\ll 0.01$). Although the structured joint model is designed to predict events and relations jointly, we leverage the event module to filter out tokens with scores lower than a threshold. Consequently, some true events can be mistakenly predicted as non-events, and the relation pairs including them are automatically assigned NONE. <<</Type 1.>>> <<<Type 2.>>> In Ex 2 the event module assigns high scores to the tokens happened (0.97) and according (0.89), but according is not an event. When the structured model performs inference jointly, the decision will weigh heavily towards assigning 1 (event) to both tokens. With the event-relation consistency constraint, this pair is highly likely to be predicted as having a positive temporal relation. Nearly all mistakes made in this category follow the same pattern illustrated by this example.
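As a concrete illustration of the class-reweighting experiment described under Label Imbalance above, the snippet below shows the standard way to up-weight rare relation classes in a cross-entropy loss with PyTorch. The label inventory and the 4x factor echo the ablation described there, but the exact weights and loss implementation used by the authors are not given here; this is a minimal sketch under those assumptions.

```python
import torch
import torch.nn as nn

# TB-Dense-style relation inventory (plus NONE for the joint setting).
LABELS = ["BEFORE", "AFTER", "INCLUDES", "IS_INCLUDED", "SIMULTANEOUS", "VAGUE", "NONE"]

# Up-weight the rare classes; 4x mirrors the setting that worked best above.
weights = torch.ones(len(LABELS))
for rare in ("INCLUDES", "IS_INCLUDED", "VAGUE"):
    weights[LABELS.index(rare)] = 4.0

criterion = nn.CrossEntropyLoss(weight=weights)

# Fake logits from the relation scorer for a batch of two candidate pairs.
logits = torch.randn(2, len(LABELS))
gold = torch.tensor([LABELS.index("INCLUDES"), LABELS.index("BEFORE")])
# Examples whose gold label is a rare class are weighted 4x in the loss.
print(criterion(logits, gold).item())
```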
<<</Type 2.>>> <<<Type 3.>>> The existence of VAGUE makes temporal relation prediction challenging as it can be easily confused with other temporal relations, as shown in Ex 3. This challenge is compounded with NONE in our end-to-end extraction task. Type 1 and Type 2 errors suggest that building a stronger event detection module will be helpful for both event and temporal relation extraction tasks. To improve the performance on VAGUE pairs, we could either build a stronger model that incorporates both contextual information and commonsense knowledge or create datasets with annotations that better separate VAGUE from other positive temporal relations. <<</Type 3.>>> <<</Discussion>>> <<</Results and Analysis>>> <<<Conclusion>>> In this paper we investigate building an end-to-end event temporal relation extraction system. We propose a novel neural structured prediction model with joint representation learning to make predictions on events and relations simultaneously; this can avoid error propagation in previous pipeline systems. Experiments and comparative studies on two benchmark datasets show that the proposed model is effective for end-to-end event temporal relation extraction. Specifically, we improve the performances of previously published systems by 10% and 6.8% on the TB-Dense and MATRES datasets, respectively. Future research can focus on creating more robust structured constraints between events and relations, especially considering event types, to improve the quality of global assignments using ILP. Since a better event model is generally helpful for relation extraction, another promising direction would be to incorporate multiple datasets to enhance the performance of our event extraction systems. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Yes" ], "type": "boolean" }
1909.05360
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What datasets were used for this work? Context: <<<Title>>> Joint Event and Temporal Relation Extraction with Shared Representations and Structured Prediction <<<Abstract>>> We propose a joint event and temporal relation extraction model with shared representation learning and structured prediction. The proposed method has two advantages over existing work. First, it improves event representation by allowing the event and relation modules to share the same contextualized embeddings and neural representation learner. Second, it avoids error propagation in the conventional pipeline systems by leveraging structured inference and learning methods to assign both the event labels and the temporal relation labels jointly. Experiments show that the proposed method can improve both event extraction and temporal relation extraction over state-of-the-art systems, with the end-to-end F1 improved by 10% and 6.8% on two benchmark datasets respectively. <<</Abstract>>> <<<Introduction>>> The extraction of temporal relations among events is an important natural language understanding (NLU) task that can benefit many downstream tasks such as question answering, information retrieval, and narrative generation. The task can be modeled as building a graph for a given text, whose nodes represent events and edges are labeled with temporal relations correspondingly. Figure FIGREF1 illustrates such a graph for the text shown therein. The nodes assassination, slaughtered, rampage, war, and Hutu are the candidate events, and different types of edges specify different temporal relations between them: assassination is BEFORE rampage, rampage INCLUDES slaughtered, and the relation between slaughtered and war is VAGUE. Since “Hutu” is actually not an event, a system is expected to annotate the relations between “Hutu” and all other nodes in the graph as NONE (i.e., no relation). As far as we know, all existing systems treat this task as a pipeline of two separate subtasks, i.e., event extraction and temporal relation classification, and they also assume that gold events are given when training the relation classifier BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Specifically, they built end-to-end systems that extract events first and then predict temporal relations between them (Fig. FIGREF1). In these pipeline models, event extraction errors will propagate to the relation classification step and cannot be corrected afterwards. Our first contribution is the proposal of a joint model that extracts both events and temporal relations simultaneously (see Fig. FIGREF1). The motivation is that if we train the relation classifier with NONE relations between non-events, then it will potentially have the capability of correcting event extraction mistakes. For instance in Fig. FIGREF1, if the relation classifier predicts NONE for (Hutu, war) with a high confidence, then this is a strong signal that can be used by the event classifier to infer that at least one of them is not an event. Our second contribution is that we improve event representations by sharing the same contextualized embeddings and neural representation learner between the event extraction and temporal relation extraction modules for the first time. 
On top of the shared embeddings and neural representation learner, the proposed model produces a graph-structured output representing all the events and relations in the given sentences. A valid graph prediction in this context should satisfy two structural constraints. First, the temporal relation should always be NONE between two non-events or between one event and one non-event. Second, for those temporal relations among events, no loops should exist due to the transitive property of time (e.g., if A is before B and B is before C, then A must be before C). The validity of a graph is guaranteed by solving an integer linear programming (ILP) optimization problem with those structural constraints, and our joint model is trained by structural support vector machines (SSVM) in an end-to-end fashion. Results show that, according to the end-to-end $F_1$ score for temporal relation extraction, the proposed method improves CAEVO BIBREF3 by 10% on TB-Dense, and improves CogCompTime BIBREF6 by 6.8% on MATRES. We further show ablation studies to confirm that the proposed joint model with shared representations and structured learning is very effective for this task. <<</Introduction>>> <<<Related Work>>> In this section we briefly summarize the existing work on event extraction and temporal relation extraction. To the best of our knowledge, there is no prior work on joint event and relation extraction, so we will review joint entity and relation extraction works instead. Existing event extraction methods in the temporal relation domain, as in the TempEval3 workshop BIBREF2, all use conventional machine learning models (logistic regression, SVM, or Max-entropy) with hand-engineered features (e.g., ClearTK BIBREF7 and NavyTime BIBREF8). While other domains have shown progress on event extraction using neural methods BIBREF9, BIBREF10, BIBREF11, recent progress in the temporal relation domain is focused more on the setting where gold events are provided. Therefore, we first show the performance of a neural event extractor on this task, although it is not our main contribution. Early attempts on temporal relation extraction use local pair-wise classification with hand-engineered features BIBREF12, BIBREF0, BIBREF13, BIBREF14. Later efforts, such as ClearTK BIBREF7, UTTime BIBREF15, NavyTime BIBREF8, and CAEVO BIBREF3 improve earlier work with better linguistic and syntactic rules. BIBREF16, BIBREF4, BIBREF17 explore structured learning for this task, and more recently, neural methods have also been shown effective BIBREF18, BIBREF19, BIBREF20, BIBREF5. In practice, we need to extract both events and those temporal relations among them from raw text. All the works above treat this as two subtasks that are solved in a pipeline. To the best of our knowledge, there has been no existing work on joint event-temporal relation extraction. However, the idea of “joint” has been studied for entity-relation extraction in many works. BIBREF21 frame their joint model as table filling tasks, map tabular representation into sequential predictions with heuristic rules, and construct global loss to compute the best joint predictions. BIBREF22 define a global structure for joint entity and relation extraction, encode local and global features based on domain and linguistic knowledge. and leverage beam-search to find global optimal assignments for entities and relations. BIBREF23 leverage LSTM architectures to jointly predict both entity and relations, but fall short on ensuring prediction consistency. 
BIBREF24 combine the benefits of both neural networks and global optimization with beam search. Motivated by these works, we propose an end-to-end trainable neural structured support vector machine (neural SSVM) model to simultaneously extract events and their relations from text and ensure the global structure via ILP constraints. Next, we will describe our proposed method in detail. <<</Related Work>>> <<<Joint Event-Relation Extraction Model>>> In this section we first provide an overview of our neural SSVM model, and then describe each component in our framework in detail (i.e., the multi-tasking neural scoring module, and how inference and learning are performed). We denote the set of all possible relation labels (including NONE) as $\mathcal {R}$, all event candidates (both events and non-events) as $\mathcal {E}$, and all relation candidates as $\mathcal {E}\mathcal {E}$. <<<Neural SSVM>>> Our neural SSVM adapts the SSVM loss as: where $\bar{S}^n_{\mathcal {E}} = S(\hat{y}^n_\mathcal {E}; x^n) - S(y^n_\mathcal {E};x^n)$ and $\bar{S}^n_{\mathcal {R}} = S(\hat{y}^n_\mathcal {R}; x^n) - S(y^n_\mathcal {R};x^n)$; $\Phi $ denotes model parameters, $n$ indexes instances, and $M^n = |\mathcal {E}|^n + |\mathcal {E}\mathcal {E}|^n$ denotes the total number of event candidates $|\mathcal {E}|^n$ and relation candidates $|\mathcal {E}\mathcal {E}|^n$ in instance $n$. $y^n,\hat{y}^n$ denote the gold and predicted global assignments of events and relations for instance $n$—each of which consists of one-hot vectors representing the true and predicted relation labels $y_{\mathcal {R}}^n, \hat{y}_{\mathcal {R}}^n \in \lbrace 0, 1\rbrace ^{|\mathcal {E}\mathcal {E}|}$ and entity labels $y_{\mathcal {E}}^n, \hat{y}_{\mathcal {E}}^n \in \lbrace 0, 1\rbrace ^{|\mathcal {E}|}$. A maximum a posteriori probability (MAP) inference is needed to find $\hat{y}^n$, which we formulate as an integer linear programming (ILP) problem and describe in more detail in Section SECREF12. $\Delta (y^n, \hat{y}^n)$ is a distance measure between the gold and the predicted assignments; we simply use the Hamming distance. $C$ and $C_{\mathcal {E}}$ are hyper-parameters that balance the losses between events, relations, and the regularizer, and $S(y^n_\mathcal {E};x^n), S(y^n_\mathcal {R};x^n)$ are scoring functions, which we learn with a multi-tasking neural architecture. The intuition behind the SSVM loss is that it requires the score of the gold output structure $y^n$ to be greater than the score of the best output structure under the current model, $\hat{y}^n$, by a margin of $\Delta (y^n, \hat{y}^n)$, or else some loss is incurred. The training objective is to minimize this loss. The major difference between our neural SSVM and the traditional SSVM model is the scoring function. A traditional SSVM uses a linear function over hand-crafted features to compute the scores, whereas we propose to use a recurrent neural network to estimate the scoring function and train the entire architecture end-to-end. <<</Neural SSVM>>> <<<Multi-Tasking Neural Scoring Function>>> The recurrent neural network (RNN) architecture has been widely adopted by prior temporal extraction work to encode context information BIBREF18, BIBREF19, BIBREF20. Motivated by these works, we adopt an RNN-based scoring function for both event and relation prediction in order to learn features in a data-driven way and capture long-term contexts in the input. In Fig. FIGREF6, we skip the input layer for simplicity.
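To make the margin-based objective above concrete, here is a minimal single-instance sketch (tensor names and shapes are illustrative assumptions, the $C$ weighting across instances and the regularizer on $\Phi $ are omitted, and the MAP assignment $\hat{y}^n$ is assumed to come from the ILP inference described in Section SECREF12; this is not the authors' implementation):

```python
import torch

def ssvm_instance_loss(event_scores, rel_scores,
                       y_event, y_rel, y_hat_event, y_hat_rel, c_e=1.0):
    """Structured hinge loss of the general form above for a single instance.

    event_scores: (num_tokens, 2)         local scores S(y_E; x) for non-event/event
    rel_scores:   (num_pairs, num_labels) local scores S(y_R; x), NONE included
    y_*: gold label indices; y_hat_*: MAP label indices from the ILP inference
    """
    def assignment_score(scores, labels):
        # Score of a global assignment = sum of the local scores it selects.
        return scores.gather(1, labels.unsqueeze(1)).sum()

    # Hamming distance Delta(y, y_hat) between gold and predicted assignments.
    delta = (y_event != y_hat_event).sum() + (y_rel != y_hat_rel).sum()

    s_bar_e = assignment_score(event_scores, y_hat_event) - assignment_score(event_scores, y_event)
    s_bar_r = assignment_score(rel_scores, y_hat_rel) - assignment_score(rel_scores, y_rel)

    m = y_event.numel() + y_rel.numel()  # M^n: number of event + relation candidates
    return torch.clamp(delta.float() + c_e * s_bar_e + s_bar_r, min=0.0) / m
```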
The bottom layer corresponds to contextualized word representations denoted as $v_k$. We use ($i, j$) $\in \mathcal {E}\mathcal {E}$ to denote a candidate relation and $i \in \mathcal {E}$ to indicate a candidate event in the input sentences of length N. We fix word embeddings computed by a pre-trained BERT-base model BIBREF27. They are then fed into a BiLSTM layer to further encode task-specific contextual information. Both event and relation tasks share this layer. The event scorer is illustrated by the left two branches following the BiLSTM layer. We simply concatenate both forward and backward hidden vectors to encode the context of each token. As for the relation scorer shown in the right branches, for each pair ($i,j$) we take the forward and backward hidden vectors corresponding to them, $f_i, b_i, f_j, b_j$, and concatenate them with linguistic features as in previous event relation prediction research. We denote linguistic features as $L_{i,j}$ and only use simple features provided in the original datasets: token distance, tense, and polarity of events. Finally, all hidden vectors and linguistic features are concatenated to form the input to compute the probability of being an event or a softmax distribution over all possible relation labels—which we refer to as the RNN-based scoring function in the following sections. <<</Multi-Tasking Neural Scoring Function>>> <<<MAP Inference>>> A MAP inference is needed both during training to obtain $\hat{y}^n$ in the loss function (Equation DISPLAY_FORM8), as well as during the test time to get globally coherent assignments. We formulate the inference problem as an ILP problem. The inference framework is established by constructing a global objective function using scores from local scorers and imposing several global constraints: 1) one-label assignment, 2) event-relation consistency, and 3) symmetry and transitivity as in BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF4. <<<Objective Function>>> The objective function of the global inference is to find the global assignment that has the highest probability under the current model, as specified in Equation DISPLAY_FORM14: where $y^e_k$ is a binary indicator of whether the $k$-th candidate is an event or not, and $y^r_{i,j}$ is a binary indicator specifying whether the global prediction of the relation between $(i,j)$ is $r \in \mathcal {R}$. $S(y^e_k,x), \forall e \in \lbrace 0, 1\rbrace $ and $S(y^r_{i,j},x), \forall r \in \mathcal {R}$ are the scoring functions obtained from the event and relation scoring functions, respectively. The output of the global inference $\bf {\hat{y}}$ is a collection of optimal label assignments for all events and relation candidates in a fixed context. $C_{\mathcal {E}}$ is a hyper-parameter controlling weights between relation and event. The constraint that follows immediately from the objective function is that the global inference should only assign one label for all entities and relations. <<</Objective Function>>> <<<Constraints>>> We introduce several additional constraints to ensure the resulting optimal output graph forms a valid and plausible event graph. <<<Event-Relation Consistency.>>> Event and relation prediction consistency is defined with the following property: a pair of input tokens have a positive temporal relation if and only if both tokens are events. The following global constraints will satisfy this property, where $e^P_i$ denotes an event and $e^N_i$ denotes a non-event token. 
$r^P_{i,j}$ indicates positive relations: BEFORE, AFTER, SIMULTANEOUS, INCLUDES, IS_INCLUDED, VAGUE, and $r^N_{i,j}$ indicates the negative relation, i.e., NONE. A formal proof of this property can be found in Appendix A. <<</Event-Relation Consistency.>>> <<<Symmetry and Transitivity Constraint.>>> We also explore the symmetry and transitivity constraints of relations. They are specified as follows: Intuitively, the symmetry constraint forces two pairs of events with flipped orders to have reversed relations. For example, if $r_{i,j}$ = BEFORE, then $r_{j,i}$ = AFTER. The transitivity constraint requires that if the ($i,j$), ($j,k$) and ($i,k$) pairs exist in the graph, the label (relation) prediction of the ($i,k$) pair has to fall into the transitivity set specified by the ($i,j$) and ($j,k$) pairs. The full transitivity table can be found in BIBREF25. <<</Symmetry and Transitivity Constraint.>>> <<</Constraints>>> <<</MAP Inference>>> <<<Learning>>> We begin by experimenting with optimizing the SSVM loss directly, but model performance degrades. Therefore, we develop a two-stage learning approach which first trains a pipeline version of the joint model without feedback from global constraints. In other words, the local neural scoring functions are optimized with cross-entropy loss using gold events and relation candidates that are constructed directly from the outputs of the event model. During the second stage, we switch to the global SSVM loss function in Equation DISPLAY_FORM8 and re-optimize the network to adjust for global properties. We will provide more details in Section SECREF4. <<</Learning>>> <<</Joint Event-Relation Extraction Model>>> <<<Implementation Details>>> In this section we describe implementation details of the baselines and of our four models for building an end-to-end event temporal relation extraction system, with an emphasis on the structured joint model. In Section SECREF6 we will compare and contrast them and show why our proposed structured joint model works the best. <<<Baselines>>> We run two event and relation extraction systems, CAEVO BIBREF3 and CogCompTime BIBREF6, on TB-Dense and MATRES, respectively. These two methods both leverage conventional learning algorithms (i.e., MaxEnt and averaged perceptron, respectively) based on manually designed features to obtain separate models for events and temporal relations, and conduct end-to-end relation extraction as a pipeline. Note that BIBREF3 does not report event and end-to-end temporal relation extraction performances, so we calculate the scores with our own implementation. <<</Baselines>>> <<<End-to-End Event Temporal Relation Extraction>>> <<<Single-Task Model.>>> The most basic way to build an end-to-end system is to train separate event detection and relation prediction models with gold labels, as we mentioned in our introduction. In other words, the BiLSTM layer is not shared as in Fig. FIGREF6. During evaluation and test time, we use the outputs from the event detection model to construct relation candidates and apply the relation prediction model to make the final prediction. <<</Single-Task Model.>>> <<<Multi-Task Model.>>> This is the same as the single-task model except that the BiLSTM layer is now shared between the event and relation tasks. Note that neither the single-task nor the multi-task model is trained to tackle the NONE relation directly. They both rely on the predictions of the event model to annotate relations as either positive pairs or NONE.
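The shared representation learner at the heart of the multi-task setup (and, without the sharing, of the single-task setup) can be sketched as follows; module names, dimensions, and the handling of the linguistic features are illustrative assumptions rather than the authors' code:

```python
import torch
import torch.nn as nn

class SharedScorer(nn.Module):
    """Sketch of the multi-task scorer: fixed BERT embeddings feed a BiLSTM that is
    shared by the event and relation heads; the single-task variant would simply use
    two separate BiLSTMs. Dimensions are illustrative."""
    def __init__(self, emb_dim=768, hidden=128, num_rel_labels=7, feat_dim=3):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.event_head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2))
        self.rel_head = nn.Sequential(
            nn.Linear(4 * hidden + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_rel_labels))   # e.g., six temporal relations + NONE

    def forward(self, bert_emb, pair_idx, ling_feats):
        # bert_emb: (1, seq_len, emb_dim) fixed contextualized word embeddings
        h, _ = self.bilstm(bert_emb)             # shared BiLSTM layer
        event_scores = self.event_head(h[0])     # (seq_len, 2) token-level event scores
        i, j = pair_idx[:, 0], pair_idx[:, 1]    # candidate pairs (i, j)
        pair_repr = torch.cat([h[0, i], h[0, j], ling_feats], dim=-1)
        rel_scores = self.rel_head(pair_repr)    # (num_pairs, num_rel_labels)
        return event_scores, rel_scores
```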
<<</Multi-Task Model.>>> <<<Pipeline Joint Model.>>> This shares the same architecture as the multi-task model, except that during training, we use the predictions of the event model to construct relation candidates to train the relation model. This strategy will generate NONE pairs during training if one argument of a relation candidate is not an event. These NONE pairs will help the relation model to distinguish negative relations from positive ones, and thus become more robust to event prediction errors. We train this model with gold events and relation candidates during the first several epochs in order to obtain a relatively accurate event model, and switch to a pipeline version afterwards, inspired by BIBREF23. <<</Pipeline Joint Model.>>> <<<Structured Joint Model.>>> This is described in detail in Section SECREF3. However, we experience difficulties in training the model with the SSVM loss from scratch. This is due to the large number of non-event tokens, which the model is not capable of distinguishing in the beginning. We thus adopt a two-stage learning procedure where we take the best pipeline joint model and re-optimize it with the SSVM loss. To restrict the search space for events in the ILP inference of the SSVM loss, we use the predicted probabilities from the event detection model to filter out non-events, since the event model has a strong performance, as shown in Section SECREF6. Note that this is very different from the pipeline model, where events are first predicted and relations are constructed with the predicted events. Here, we only leverage an additional hyper-parameter $T_{evt}$ to filter out highly unlikely event candidates. Both event and relation labels are assigned simultaneously during the global inference with ILP, as specified in Section SECREF12. We also filter out tokens with POS tags that do not appear in the training set, as most of the events are either nouns or verbs in TB-Dense, and all events are verbs in MATRES. <<</Structured Joint Model.>>> <<<Hyper-Parameters.>>> All single-task, multi-task and pipeline joint models are trained by minimizing the cross-entropy loss. We observe that model performance varies significantly with the dropout ratio, the hidden layer dimensions of the BiLSTM model, and the entity weight in the loss function (with the relation weight fixed at 1.0). We leverage a pre-trained BERT model to compute word embeddings, and all MLP scoring functions have one hidden layer. In the SSVM loss function, we fix the value of $C = 1$, but fine-tune $C_\mathcal {E}$ in the objective function in Equation DISPLAY_FORM14. Hyper-parameters are chosen using a standard development set for TB-Dense and a random holdout set based on an 80/20 split of the training data for MATRES. To solve the ILP in the inference process, we leverage an off-the-shelf solver provided by the Gurobi optimizer; i.e., the best solutions from the Gurobi optimizer are inputs to the global training. The best combination of hyper-parameters can be found in Table 9 in our appendix. <<</Hyper-Parameters.>>> <<</End-to-End Event Temporal Relation Extraction>>> <<</Implementation Details>>> <<<Experimental Setup>>> In this section we first provide a brief overview of temporal relation data and describe the specific datasets used in this paper. We also explain the evaluation metrics at the end. <<<Temporal Relation Data>>> Temporal relation corpora such as TimeBank BIBREF32 and RED BIBREF33 facilitate research in temporal relation extraction. The common issue in these corpora is missing annotations.
Collecting densely annotated temporal relation corpora with all events and relations fully annotated is reported to be a challenging task as annotators could easily overlook some facts BIBREF34, BIBREF35, BIBREF3, BIBREF4, which made both modeling and evaluation extremely difficult in previous event temporal relation research. The TB-Dense dataset mitigates this issue by forcing annotators to examine all pairs of events within the same or neighboring sentences, and it has been widely evaluated on this task BIBREF3, BIBREF4, BIBREF19, BIBREF5. Recent data construction efforts such as MATRES BIBREF25 further enhance the data quality by using a multi-axis annotation scheme and adopting a start-point of events to improve inter-annotator agreements. We use TB-Dense and MATRES in our experiments and briefly summarize the data statistics in Table TABREF33. <<</Temporal Relation Data>>> <<<Evaluation Metrics>>> To be consistent with previous research, we adopt two different evaluation metrics. The first one is the standard micro-average scores. For densely annotated data, the micro-average metric should share the same precision, recall and F1 scores. However, since our joint model includes NONE pairs, we follow the convention of IE tasks and exclude them from evaluation. The second one is similar except that we exclude both NONE and VAGUE pairs following BIBREF6. Please refer to Figure 4 in the appendix for a visualizations of the two metrics. <<</Evaluation Metrics>>> <<</Experimental Setup>>> <<<Results and Analysis>>> The main results of this paper can be found in Table TABREF34. All best-recall and F1 scores are achieved by our structured joint model, and the results outperform the baseline systems by 10.0% and 6.8% on end-to-end relation extraction per F1 scores and 3.5% and 2.6% on event extraction per F1 scores. The best precision score for the TB-Dense dataset is achieved by CAEVO, which indicates that the linguistic rule-based system can make highly precise predictions by being conservative. Table TABREF35 shows a more detailed analysis, in which we can see that our single-task models with BERT embeddings and a BiLSTM encoder already outperform the baseline systems on end-to-end relation extraction tasks by 4.9% and 4.4% respectively. In the following sections we discuss step-by-step improvement by adopting multi-task, pipeline joint, and structured joint models on end-to-end relation extraction, event extraction, and relation extraction on gold event pairs. <<<End-to-End Relation Extraction>>> <<<TB-Dense.>>> The improvements over the single-task model per F1 score are 4.1% and 4.2% for the multi-task and pipeline joint model respectively. This indicates that the pipeline joint model is helpful only marginally. Table TABREF46 shows that the structured joint model improves both precision and recall scores for BEFORE and AFTER and achieves the best end-to-end relation extraction performance at 49.4%—which outperforms the baseline system by 10.0% and the single-task model by 5.1%. <<</TB-Dense.>>> <<<MATRES.>>> Compared to the single-task model, the multi-task model improves F1 scores by 1.5%, while the pipeline joint model improves F1 scores by 1.3%—which means that pipeline joint training does not bring any gains for MATRES. The structured joint model reaches the best end-to-end F1 score at 59.6%, which outperforms the baseline system by 6.8% and the single-task model by 2.4%. 
We speculate that the gains come from the joint model's ability to help deal with NONE pairs, since recall scores for BEFORE and AFTER increase by 1.5% and 1.1% respectively (Table 10 in our appendix). <<</MATRES.>>> <<</End-to-End Relation Extraction>>> <<<Event Extraction>>> <<</Event Extraction>>> <<<Relation Extraction with Gold Events>>> <<</Relation Extraction with Gold Events>>> <<<Discussion>>> <<<Label Imbalance.>>> One way to mitigate the label imbalance issue is to increase the sample weights for small classes during model training. We investigate the impact of class weights by refitting our single-task model with larger weights on INCLUDES, IS_INCLUDED and VAGUE in the cross-entropy loss. Figure FIGREF50 shows that increasing the class weights up to 4 times can significantly improve the F1 scores of the INCLUDES and IS_INCLUDED classes with a decrease of less than 2% in the overall F1 score. The performance on INCLUDES and IS_INCLUDED eventually degrades when the class weights are too large. These results seem to suggest that more labels are needed in order to improve the performance on both of these two classes and the overall model. For SIMULTANEOUS, our model does not make any correct predictions on either TB-Dense or MATRES even when increasing the class weight up to 10 times, which implies that SIMULTANEOUS could be a hard temporal relation to predict in general. <<</Label Imbalance.>>> <<<Global Constraints.>>> In Table TABREF51 we conduct an ablation study to understand the contributions of the event-relation prediction consistency constraint and the temporal relation transitivity constraint to the structured joint model. As we can see, the event-relation consistency constraint helps improve the F1 scores by 0.9% and 1% for TB-Dense and MATRES, respectively, but the gain from using transitivity is either non-existent or marginal. We hypothesize two potential reasons: 1) we leveraged BERT contextualized embeddings as word representations, which could already capture transitivity from the input context; 2) NONE pairs could make the transitivity rule less useful, as positive pairs can be predicted as NONE and the transitivity rule does not apply to NONE pairs. <<</Global Constraints.>>> <<<Error Analysis.>>> By comparing gold and predicted labels for events and temporal relations and examining the predicted probabilities for events, we identified three major sources of mistakes made by our structured model, as illustrated in Table TABREF57 with examples. <<</Error Analysis.>>> <<<Type 1.>>> Both events in Ex 1 are assigned low scores by the event module ($<< 0.01$). Although the structured joint model is designed to predict events and relations jointly, we leverage the event module to filter out tokens with scores lower than a threshold. Consequently, some true events can be mistakenly predicted as non-events, and the relation pairs including them are automatically assigned NONE. <<</Type 1.>>> <<<Type 2.>>> In Ex 2 the event module assigns high scores to the tokens happened (0.97) and according (0.89), but according is not an event. When the structured model makes inference jointly, the decision will weigh heavily towards assigning 1 (event) to both tokens. With the event-relation consistency constraint, this pair is highly likely to be predicted as having a positive temporal relation. Nearly all mistakes made in this category follow the same pattern illustrated by this example.
<<</Type 2.>>> <<<Type 3.>>> The existence of VAGUE makes temporal relation prediction challenging as it can be easily confused with other temporal relations, as shown in Ex 3. This challenge is compounded with NONE in our end-to-end extraction task. Type 1 and Type 2 errors suggest that building a stronger event detection module will be helpful for both event and temporal relation extraction tasks. To improve the performance on VAGUE pairs, we could either build a stronger model that incorporates both contextual information and commonsense knowledge or create datasets with annotations that better separate VAGUE from other positive temporal relations. <<</Type 3.>>> <<</Discussion>>> <<</Results and Analysis>>> <<<Conclusion>>> In this paper we investigate building an end-to-end event temporal relation extraction system. We propose a novel neural structured prediction model with joint representation learning to make predictions on events and relations simultaneously; this can avoid error propagation in previous pipeline systems. Experiments and comparative studies on two benchmark datasets show that the proposed model is effective for end-to-end event temporal relation extraction. Specifically, we improve the performances of previously published systems by 10% and 6.8% on the TB-Dense and MATRES datasets, respectively. Future research can focus on creating more robust structured constraints between events and relations, especially considering event types, to improve the quality of global assignments using ILP. Since a better event model is generally helpful for relation extraction, another promising direction would be to incorporate multiple datasets to enhance the performance of our event extraction systems. <<</Conclusion>>> <<</Title>>>
{ "references": [ "TB-Dense, MATRES" ], "type": "extractive" }
2003.12738
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What baselines other than standard transformers are used in experiments? Context: <<<Title>>> Variational Transformers for Diverse Response Generation <<<Abstract>>> Despite the great promise of Transformers in many sequence modeling tasks (e.g., machine translation), their deterministic nature hinders them from generalizing to high entropy tasks such as dialogue response generation. Previous work proposes to capture the variability of dialogue responses with a recurrent neural network (RNN)-based conditional variational autoencoder (CVAE). However, the autoregressive computation of the RNN limits the training efficiency. Therefore, we propose the Variational Transformer (VT), a variational self-attentive feed-forward sequence model. The VT combines the parallelizability and global receptive field of the Transformer with the variational nature of the CVAE by incorporating stochastic latent variables into Transformers. We explore two types of the VT: 1) modeling the discourse-level diversity with a global latent variable; and 2) augmenting the Transformer decoder with a sequence of fine-grained latent variables. Then, the proposed models are evaluated on three conversational datasets with both automatic metrics and human evaluation. The experimental results show that our models improve standard Transformers and other baselines in terms of diversity, semantic relevance, and human judgment. <<</Abstract>>> <<<Introduction>>> Convolutional and fully-attentional feed-forward architectures, such as Transformers BIBREF0, have emerged as effective alternatives to RNNs BIBREF1 in a wide range of NLP tasks. These architectures remove the computational temporal dependency during training and effectively address the long-standing vanishing gradients problem of recurrent models by processing all inputs simultaneously. Notably, Transformers apply a fully-attentional strategy, where each token in the sequence is informed by other tokens via a self-attention mechanism. This acts as an effective global receptive field across the whole sequence, which is absent in RNNs. Despite the powerful modeling capability of Transformers, they often fail to model the one-to-many relation in dialogue response generation tasks BIBREF2 due to their deterministic nature. As a result, they generate dull and generic responses (e.g., “I am not sure”), especially with greedy and beam search, which are widely used in other sequence modeling tasks. There have been attempts to generate diverse and informative dialogue responses by incorporating latent variable(s) into the RNN encoder-decoder architecture. In particular, BIBREF2 adapt a conditional variational autoencoder (CVAE) to capture discourse-level variations of dialogue, while BIBREF3 and BIBREF4 integrate latent variables into the hidden states of the RNN decoder. However, the inherently sequential computation of the aforementioned models limits their efficiency for large-scale training. In this paper, we introduce the Variational Transformer (VT), a variational self-attentive feed-forward sequence model, to address the aforementioned issues. The VT combines the parallelizability and global receptive field of the Transformer with the variational nature of the CVAE by incorporating stochastic latent variables into Transformers. We explore two types of the VT: 1) the Global Variational Transformer (GVT), and 2) the Sequential Variational Transformer (SVT).
The GVT is an extension of the CVAE in BIBREF2, which models the discourse-level diversity with a global latent variable, while the SVT, inspired by variational autoregressive models BIBREF3, BIBREF4, incorporates a sequence of latent variables into the decoding process by using a novel variational decoder layer. Unlike previous approaches BIBREF2, BIBREF3, BIBREF4, the SVT uses Non-causal Multi-head Attention, which attends to future tokens for computing posterior latent variables, instead of using an additional encoder. The proposed VT architectures integrate stochastic latent variables into Transformers. The experimental results on three conversational datasets demonstrate that our models can generate more informative and coherent responses. <<</Introduction>>> <<<Related work>>> <<<Neural Conversational Models>>> Conversational systems have been widely studied BIBREF5, BIBREF6, BIBREF7, BIBREF8. Compared to rule-based systems BIBREF5, BIBREF6, sequence-to-sequence conversation models achieve superior performance in terms of scalable training and generalization ability BIBREF7. However, it has been pointed out that encoder-decoder models tend to generate generic and repetitive responses like “I am sorry” BIBREF9. To address this issue, there have been three main lines of work. The first adds additional information (e.g., persona) as input to guide the model to generate more informative responses BIBREF10, BIBREF11. The second modifies the learning objective to promote more diverse generation BIBREF9, and the third integrates stochastic latent variables into Seq2Seq models by using the CVAE framework BIBREF12, BIBREF2. Our work falls within this third line, introducing a novel model, the Variational Transformer, to improve dialogue response generation. <<</Neural Conversational Models>>> <<<Conditional Variational Autoencoders>>> Many works have attempted to combine CVAEs with encoder-decoder architectures for sequence generation tasks. BIBREF13 propose a variational encoder-decoder model for neural machine translation, while BIBREF14 apply variational recurrent neural networks (VRNN) BIBREF15 to text summarization. BIBREF2 and BIBREF16 explore incorporating meta features into the CVAE framework in dialogue response generation tasks. BIBREF3 and BIBREF4 propose variational autoregressive decoders which are enhanced by highly multi-modal latent variables to capture the high variability in dialogue responses. BIBREF17 further augment variational autoregressive decoders with dynamic memory networks to improve generation quality. We unify the previously successful ideas of the CVAE, and explore combinations of the CVAE and the Transformer. <<</Conditional Variational Autoencoders>>> <<<Fully Attentional Networks>>> Taking advantage of the parallel-in-time structure and global receptive field, Transformers BIBREF0 have recently been shown to achieve impressive results on various sequence modeling tasks. Based on this, several follow-up models have been presented. The Image Transformer BIBREF18 has been proposed for image generation, while the MultiModel BIBREF19 integrates convolution, attention and sparsely-gated mixture-of-expert blocks into a single deep-learning model for simultaneously learning multiple tasks from various domains. BIBREF20 proposed a fully attentional mixture-of-expert model (MoEL) for empathetic dialogue modeling.
The Universal Transformer BIBREF1 incorporates the recurrent inductive bias of RNNs into the standard Transformer, and achieves better results on a wide range of algorithmic and language understanding tasks. BIBREF21 introduce the Latent Transformer (LT) for non-autoregressive machine translation. During training, the LT first autoencodes a target sequence into a shorter sequence of discrete latent variables. Then a parallel decoder decodes the target using the discrete latent variables and the input sequence. Different from the LT BIBREF21, the VT generates continuous latent variables during the decoding process. <<</Fully Attentional Networks>>> <<</Related work>>> <<<Preliminaries>>> <<<Conditional Variational Autoencoder for Dialogue Generation>>> The CVAE framework BIBREF22 represents a dyadic conversation via three random variables: the input condition $c$, including the conversation context and meta features (meta features can be ignored when not available); a latent variable $z$; and the target response $x$. A CVAE can be efficiently trained with Stochastic Gradient Variational Bayes (SGVB) BIBREF23 by maximizing the variational lower bound of $x$ given $c$, according to: The typical CVAE consists of a prior network $p_{\theta }(z | c)$, which is used to approximate $p(z | c)$, a recognition network $p_{\phi }(z | c, x)$, which is used to approximate the posterior distribution $q(z | c, x)$, and a decoder $p_{\theta }(x | z, c)$, which is used to approximate $p(x | z, c)$. By assuming that $z$ follows a multivariate Gaussian distribution with a diagonal covariance matrix, the evidence lower bound (ELBO) can be written as where $\mathcal {L}_{REC}$ denotes the reconstruction loss and $\mathcal {L}_{KL}$ denotes the Kullback-Leibler (KL) divergence between the posterior and the prior. In dialogue generation tasks, previous works BIBREF2, BIBREF16 apply RNN encoders (with GRU or LSTM cells) to encode dialogue contexts and responses separately. The condition $c$ is represented by the concatenation of the last hidden state of the context encoder and the meta features (e.g., topic, emotion), while the response $x$ is represented by the last hidden state of the response encoder. Then the prior network $p_{\theta }(z | c)$ and the recognition network $p_{\phi }(z | c, x)$, parameterized by multi-layer perceptrons (MLPs), are applied to approximate the means and the log variances of the prior latent distribution $\mathcal {N}\left(z ; \mu ^{\prime }, \sigma ^{\prime 2} \mathbf {I}\right)$ and the posterior latent distribution $\mathcal {N}\left(z ; \mu , \sigma ^{2} \mathbf {I}\right)$. With the reparameterization trick BIBREF23, we can obtain samples of the prior latent variable (for testing) from $\mathcal {N}\left(z ; \mu ^{\prime }, \sigma ^{\prime 2} \mathbf {I}\right)$ and samples of the posterior latent variable (for training) from $\mathcal {N}\left(z ; \mu , \sigma ^{2} \mathbf {I}\right)$. Finally, an RNN decoder uses $z$ and $c$ as the initial state to predict the response $x$. The vanishing latent variable problem BIBREF24 is a common issue in RNN-based CVAEs. That is, the powerful autoregressive RNN decoder first learns to ignore the latent variable, and decodes the response by conditioning only on the previous tokens. Thus the latent variable fails to encode meaningful information, and the CVAE deteriorates into a seq2seq model. To alleviate this issue, KL annealing BIBREF24 and the bag-of-word loss BIBREF2 have been proposed, and have shown effectiveness in various dialogue tasks BIBREF2, BIBREF16.
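As an illustration of the prior/recognition networks and the reparameterization step described above, here is a minimal sketch (single linear layers stand in for the MLPs, and all names and shapes are assumptions rather than the authors' code):

```python
import torch
import torch.nn as nn

class GaussianLatent(nn.Module):
    """Sketch of a CVAE latent module: a prior network p(z|c), a recognition
    network q(z|c,x), the reparameterization trick, and the KL term of the ELBO."""
    def __init__(self, cond_dim, resp_dim, latent_dim):
        super().__init__()
        self.prior_net = nn.Linear(cond_dim, 2 * latent_dim)             # -> mu', log sigma'^2
        self.recog_net = nn.Linear(cond_dim + resp_dim, 2 * latent_dim)  # -> mu,  log sigma^2

    @staticmethod
    def sample(mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)       # z = mu + sigma * eps

    def forward(self, c, x=None):
        p_mu, p_logvar = self.prior_net(c).chunk(2, dim=-1)
        if x is None:                               # test time: sample from the prior
            return self.sample(p_mu, p_logvar), torch.zeros(())
        q_mu, q_logvar = self.recog_net(torch.cat([c, x], dim=-1)).chunk(2, dim=-1)
        z = self.sample(q_mu, q_logvar)             # training: sample from the posterior
        # KL( q(z|c,x) || p(z|c) ) for diagonal Gaussians, i.e. the L_KL term of the ELBO
        kl = 0.5 * (p_logvar - q_logvar
                    + (q_logvar.exp() + (q_mu - p_mu) ** 2) / p_logvar.exp() - 1).sum(-1)
        return z, kl
```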
<<</Conditional Variational Autoencoder for Dialogue Generation>>> <<<CVAE with Transformer>>> The aforementioned RNN-based CVAE framework integrates the latent variable into the initial state of the RNN decoder, while in the Transformer it is more flexible to incorporate the latent variable embedding into the first input token of the decoder to generate the initial state. The overall architecture of the GVT is depicted in Figure FIGREF9. Different from RNNs, the Transformer encoder maps an input sequence of symbol representations to a sequence of contextualized representations BIBREF0. In order to get fixed-dimension representations of the response and context, we add a special token $CLS$ at the beginning of the input sequence, as in BERT BIBREF25, to compute the weighted sum of the output representations via self-attention. Thus the output representation of the token $CLS$ is considered as the representation of the whole sequence. Then we introduce a recognition network and a prior network to compute the posterior and prior latent variables, as in BIBREF2, BIBREF16. We add the latent variable sample $z$ and meta features $m$ (which can be ignored when not available) into $e_{SOS}$, the embedding of the start-of-sequence token $SOS$: Finally, the Transformer decoder decodes the response $x$ sequentially while attending to the new embedding $e^{\prime }_{SOS}$ of the token $SOS$, which carries the latent information. This design enhances the CVAE framework with a global receptive field, and each position of the GVT can directly access the latent information via the multi-head self-attention mechanism. However, we still observe that the GVT suffers from the vanishing latent variable problem, as the RNN-based CVAE does, because the decoder can bypass the latent information by paying less attention to the $SOS$ token. Hence, we apply KL annealing and the bag-of-word auxiliary loss $\mathcal {L}_{bow}$, as in BIBREF2, BIBREF16, to preserve the useful information of the latent variable. Therefore, the learning objective of the GVT is defined as follows: <<</CVAE with Transformer>>> <<</Preliminaries>>> <<<Sequential Variational Transformer>>> In order to augment the capacity of the latent variable with multi-modal distributions and to better utilize the latent information, we further explore incorporating a sequence of latent variables into the decoding process. We introduce the Sequential Variational Transformer (SVT) with a novel variational decoder layer which generates latent variables for each position: $z=\left(z_{1}, \dots , z_{T}\right)$. Similar to BIBREF3, we interpret the latent variables as a generation plan for the future sequence. Unlike previous CVAE models which use an extra encoder to encode the response separately BIBREF2, BIBREF16 or use a backward RNN to encode the future sequence for each time step BIBREF3, BIBREF4, the SVT uses Non-causal Multi-head Attention, which leaks future information to the recognition network for computing the posterior latent variables. As shown in Figure FIGREF13, the SVT shares the same encoder as the standard Transformer BIBREF0, while its decoder consists of a variational decoder layer followed by a stack of $N$ standard Transformer decoder layers. The variational decoder layer has two paths for computing the posterior and prior latent variables, respectively. We denote them as the Posterior Path and the Prior Path.
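Before detailing the two paths, the two conditioning strategies of the GVT and the SVT can be contrasted with a short sketch (a simplification with assumed tensor names; in particular, folding $z$ and $m$ into $e_{SOS}$ is shown as addition purely for illustration):

```python
import torch

def gvt_condition(e_sos, z, m=None):
    """GVT-style conditioning (illustrative): a single global latent z (and the meta
    feature embedding m, when available) is folded into the start-of-sequence
    embedding, which every decoding position can then attend to."""
    return e_sos + z if m is None else e_sos + z + m    # e'_SOS

def svt_condition(observed_repr, z_seq):
    """SVT-style conditioning (illustrative): one latent z_t per position is
    concatenated with the observed representation before the position-wise FFN of
    the variational decoder layer described below."""
    return torch.cat([observed_repr, z_seq], dim=-1)     # (batch, T, d_model + d_latent)
```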
<<<Prior Path>>> The Prior Path (solid line in Figure FIGREF13) has a masked multi-head self-attention sub-layer which performs causal attention on the shifted response, followed by a multi-head attention sub-layer which performs encoder-decoder attention over the context encoder. The last sub-layer is composed of an MLP prior network, which approximates a sequence of prior latent variables, one for each position, and a Position-wise Feed-Forward Network (FFN), which fuses the latent information $z$ with the observed information representation $o^P$ obtained before the prior network (shown in Figure FIGREF13). Specifically, we concatenate $o^P$ with $z$ as the input to the FFN, and the FFN passes the fused representation to the next layer. As in BIBREF0, in the variational decoder layer, each sub-layer is followed by a residual connection and layer normalization. That is, the output of each sub-layer is $LayerNorm(x + Sublayer(x))$. We decompose the response $x$ as $x = \left(x_1, \cdots , x_T\right)$ and the latent variable $z$ as $z=\left(z_{1}, \dots , z_{T}\right)$. The prior model produces the latent variable $z_t$ at each position by conditioning not only on the input condition $c$ (the concatenation of context and meta features), but also on the observed response tokens $x_{1:t-1}$. By assuming that $z_t$ follows a multivariate Gaussian distribution, the prior model becomes: where <<</Prior Path>>> <<<Posterior Path>>> The only difference between the Posterior Path (dashed line in Figure FIGREF13) and the Prior Path is that the mask is removed from the masked multi-head attention. Thus the masked (causal) multi-head attention becomes non-causal multi-head attention, which allows each position to attend to the subsequent positions. Then, the second multi-head attention sub-layer (which shares the same weights with the prior path) performs posterior attention on the encoder and passes the posterior observed information $o_R$ to the recognition network. The recognition network produces the posterior latent variable for each position $z_t$ as: where During training, the Posterior Path guides the learning of the Prior Path via a KL divergence constraint: In the training phase, the posterior latent variables from Equation DISPLAY_FORM17 are passed to the FFN, while in the testing phase the Posterior Path is blocked and the posterior latent variables are replaced with the prior latent variables from Equation DISPLAY_FORM15. During the decoding process, each response token $x_t$ is generated by conditioning on the observed response tokens $x_{1:t-1}$, the latent variables $z_{1:t}$, and the input condition $c$. The decoding process of the SVT is: <<</Posterior Path>>> <<<Auxiliary Loss>>> As we expect the latent variables to be a generation plan for the future sequence, we inject such a bias into the latent variables by using an auxiliary loss: Sequential-Bag-of-Word (SBOW), which was proposed by BIBREF4. The idea of the SBOW auxiliary objective is to sequentially predict the bag of succeeding target words $x_{t:T}$ by using the latent variable $z_t$. In our case, the prediction of the succeeding words also leverages the observed information $c$ and $x_{1:t-1}$. Thus the auxiliary loss at each position is computed by: where $f_{aux}$ is a feed-forward neural network with a softmax output.
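A small sketch of how such an auxiliary objective can be computed is shown below (shapes, padding handling, and the averaging scheme are illustrative assumptions, not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def sbow_loss(aux_logits, response_ids, pad_id=0):
    """Sketch of the Sequential-Bag-of-Word objective: at each position t, the output
    of f_aux (driven by z_t and the observed information) scores the bag of
    succeeding gold tokens x_{t:T}.

    aux_logits:   (T, vocab_size) logits produced by f_aux at each position
    response_ids: (T,) gold response token ids
    """
    T = response_ids.size(0)
    log_probs = F.log_softmax(aux_logits, dim=-1)
    total = aux_logits.new_zeros(())
    for t in range(T):
        bag = response_ids[t:]
        bag = bag[bag != pad_id]                      # bag of succeeding words x_{t:T}
        if bag.numel() > 0:
            total = total - log_probs[t, bag].mean()  # negative log-likelihood of the bag
    return total / T
```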
<<</Auxiliary Loss>>> <<<Learning>>> The evidence lower bound (ELBO) objective of the SVT is the sum of the reconstruction loss $\mathcal {L}_{REC}(t)$ and the Kullback-Leibler divergence loss $\mathcal {L}_{KL}(t)$ at each position: We regularize the ELBO learning objective with an auxiliary loss $\mathcal {L}_{sbow}$ to enhance the expressiveness of the latent variables. Therefore, the final learning objective is formulated as follows: where, <<</Learning>>> <<</Sequential Variational Transformer>>> <<<Experiments>>> <<<Dataset>>> We evaluate the proposed models on three conversational datasets: MojiTalk BIBREF16, PersonaChat BIBREF11, and Empathetic-Dialogues BIBREF26. <<<MojiTalk>>> This dataset consists of 596,959 post and response pairs from Twitter. Each response is labeled by one emoji which indicates the response emotion. There are 64 emoji labels in total, with an unbalanced distribution. We use the preprocessed data and vocabulary released by BIBREF16 and follow the same train/validation/test split. <<</MojiTalk>>> <<<PersonaChat & Empathetic-Dialogues>>> These are one-to-one multi-turn conversation datasets. In PersonaChat (Persona), the conversations revolve around personas, which are established by four to six persona sentences. In Empathetic-Dialogues (ED), the conversations are mostly about a situation that happened to one of the speakers, while the other speaker tries to understand the feeling and reply accordingly. Both datasets are about modeling social skills, and the goal is to make users more engaged. Therefore, we combine the train/validation/test sets of the two datasets. <<</PersonaChat & Empathetic-Dialogues>>> <<</Dataset>>> <<<Baselines>>> We compare the proposed models with the following baselines: <<<Seq2Seq.>>> An attention-based sequence-to-sequence model with the emoji vector as additional input, as described in MojiTalk BIBREF16. <<</Seq2Seq.>>> <<<CVAE.>>> An RNN-based conditional variational autoencoder for dialogue response generation BIBREF16, which uses a multivariate Gaussian latent variable to model the response and concatenates it with the last hidden state of the encoder as the initial state of the decoder. KL annealing, an early stopping strategy, and the bag-of-word auxiliary loss are applied during training. We use the implementation released by BIBREF16. <<</CVAE.>>> <<<Transformer.>>> A Transformer BIBREF0 trained with a Maximum Likelihood Estimation (MLE) objective, which can be considered the base model for both the GVT and the SVT. <<</Transformer.>>> <<</Baselines>>> <<<Hyper-parameters and Training Setup>>> We use a 4-layer Transformer as our base model. The hidden size is set to 300 everywhere, and the word embeddings are initialized with the 300-dimensional pre-trained GloVe embeddings for both encoder and decoder. The multi-head attention sub-layers are made up of 4 attention heads, each with an embedding dimension of 64. The size of the latent variable is 300. The recognition network and the prior network are parameterized by 3-layer MLPs with a hidden dimension of 512. Following the training setup of BIBREF16, we first train our baseline Transformer model with the MLE objective and use it to initialize its counterparts in both the GVT and the SVT. Then the models are trained end-to-end with the Adam optimizer with an initial learning rate of $2\times 10^{-4}$. KL annealing and the early stopping strategy are applied as in BIBREF16. At test time, we use a greedy decoding strategy for all models.
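For quick reference, the setup above can be summarized as a single configuration sketch (the dictionary format and field names are ours; the values are restated from the text, and anything not stated, e.g. batch size, is deliberately omitted):

```python
# Training setup of the base Transformer, GVT, and SVT as described above.
vt_config = {
    "num_layers": 4,                    # 4-layer Transformer base model
    "hidden_size": 300,                 # hidden size used everywhere
    "word_embedding": "GloVe-300d",     # pre-trained, for both encoder and decoder
    "num_heads": 4,
    "head_dim": 64,
    "latent_size": 300,
    "prior_recognition_net": {"type": "MLP", "layers": 3, "hidden": 512},
    "init_from": "MLE-pretrained base Transformer",
    "optimizer": "Adam",
    "initial_lr": 2e-4,
    "kl_annealing": True,
    "early_stopping": True,
    "decoding": "greedy",
}
```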
<<</Hyper-parameters and Training Setup>>> <<<Automatic Evaluation>>> <<<PPL & KLD.>>> The evaluation metrics include Perplexity (PPL) and the Kullback-Leibler divergence between the posterior and prior (KLD). A well-trained model should achieve a low reconstruction perplexity and a small but non-trivial KL distance BIBREF27. <<</PPL & KLD.>>> <<<Diversity.>>> To measure the generation diversity, we calculate Dist-1, Dist-2, and Dist-3, the ratio of the number of distinct n-grams (unigrams, bigrams, and trigrams) over the total number of n-grams. A higher distinct n-gram ratio indicates more diverse generation. <<</Diversity.>>> <<<Embeddings Similarity.>>> This metric computes the cosine similarity between the sentence embedding of a generated sequence and that of the ground-truth response. In our experiments, we introduce two different ways to represent sentence embeddings. The first is $\textbf {EMB}_\textbf {FT}$ BIBREF28, which calculates the average of the word embeddings in a sentence using FastText BIBREF29 embeddings trained on Common Crawl and Wikipedia data. We use FastText embeddings instead of other pre-trained word embeddings because they can handle the out-of-vocabulary issue. However, representing a sentence by simply taking the average of its word embeddings ignores the context information. Therefore, we propose to use a pre-trained language model, BERT BIBREF25, to compute contextualized sentence representations. Specifically, we use a pre-trained BERT to encode a generated sentence and a ground-truth response, and average the output representations of each to obtain the sentence embeddings. We denote such a contextualized sentence embedding as $\textbf {EMB}_\textbf {BERT}$. <<</Embeddings Similarity.>>> <<</Automatic Evaluation>>> <<<Human Evaluation>>> In the human evaluation, we prepare multiple-choice questions for human evaluators, where the answer options are the generation results of the five models (Seq2Seq, CVAE, Transformer, GVT, and SVT). We first randomly sample 100 dialogues and their corresponding responses from our models and the baselines. For each response, we assign three human annotators to select the most coherent (on-topic) response to the context (multiple answers are allowed). In addition, annotators also need to choose the best response correlated with the given emoji label in MojiTalk and the most engaging response in PersonaChat and Empathetic-Dialogues. If there is no response that satisfies the evaluators, they can choose “all answers are bad”, which means that none of the answers is chosen. We compute the rate at which each model is chosen to quantify generation quality with respect to the human standard. <<</Human Evaluation>>> <<</Experiments>>> <<<Results>>> <<<Quantitative Analysis>>> The automatic evaluation results are shown in Table TABREF35. Transformer-based models have significantly lower perplexity than RNN-based models, which indicates that the global receptive field provided by multi-head self-attention boosts the modeling capacity. However, the deterministic Seq2Seq and Transformer models tend to generate generic responses, which leads to low diversity scores. Meanwhile, incorporating a stochastic latent variable into both models (CVAE and GVT) promotes more diverse generation results and boosts the diversity scores Dist-1, Dist-2, and Dist-3. Compared to the baseline models, the GVT achieves a relatively lower reconstruction PPL, which suggests that the global latent variable contains rich latent information (e.g., topic) for response generation.
Meanwhile, the sequential latent variables of the SVT encode fine-grained latent information and further improve the reconstruction PPL. On the other hand, the SVT achieves the highest scores on the two semantic relevance-oriented metrics, $\textbf {EMB}_\textbf {FT}$ and $\textbf {EMB}_\textbf {BERT}$, on the MojiTalk dataset, while on the combined dataset of Persona and ED we observe a performance drop of the SVT compared to the other models. This is because both Persona and ED are well designed and have lower entropy than MojiTalk, which was collected from Twitter. We hypothesize that the sequential latent variables have no advantage in terms of similarity to a single, fixed "gold response" when modeling low-entropy responses. Indeed, in open-domain dialogue response generation, automatic metrics are not always aligned with human judgment BIBREF28. In contrast, the human evaluation results reported in Table TABREF35 demonstrate that the generations of the SVT are closer to the human standard in terms of coherence, invoked emotion and engagedness. <<</Quantitative Analysis>>> <<<Qualitative Analysis>>> Table TABREF42 compares the generations of the proposed models with the baselines given the same contexts. We observe that Seq2Seq and the vanilla Transformer tend to generate generic and repetitive responses (e.g., "i am not sure") in MojiTalk because their deterministic structure fails to capture the variability in dialogue responses. By incorporating stochastic latent variables, the CVAE and GVT can generate more diverse responses, but their responses are sometimes digressive (e.g., example 5). Interestingly, the GVT and SVT generalize the topic beyond the context, which makes the dialogue more engaging (e.g., example 4). In general, the SVT is able to generate more coherent and informative responses. <<</Qualitative Analysis>>> <<</Results>>> <<<Conclusion>>> This paper introduces the Variational Transformer (VT), a variational self-attentive feed-forward sequence model that combines the global receptive field of a Transformer with the variational nature of a CVAE. We propose two types of the VT: 1) the Global Variational Transformer (GVT), which incorporates a global latent variable as an additional input to the Transformer decoder; and 2) the Sequential Variational Transformer (SVT), which generates latent variables for each position during the decoding process. Quantitative and qualitative experimental results show that our models outperform the baselines in terms of diversity, semantic relevance, and human judgment. In future work, we will utilize pre-trained language models BIBREF30 as the backbone to strengthen the language model of the VT for better generation. <<</Conclusion>>> <<</Title>>>
{ "references": [ "attention-based sequence-to-sequence model ,CVAE" ], "type": "extractive" }
2003.12738
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What three conversational datasets are used for evaluation? Context: <<<Title>>> Variational Transformers for Diverse Response Generation <<<Abstract>>> Despite the great promise of Transformers in many sequence modeling tasks (e.g., machine translation), their deterministic nature hinders them from generalizing to high entropy tasks such as dialogue response generation. Previous work proposes to capture the variability of dialogue responses with a recurrent neural network (RNN)-based conditional variational autoencoder (CVAE). However, the autoregressive computation of the RNN limits the training efficiency. Therefore, we propose the Variational Transformer (VT), a variational self-attentive feed-forward sequence model. The VT combines the parallelizability and global receptive field of the Transformer with the variational nature of the CVAE by incorporating stochastic latent variables into Transformers. We explore two types of the VT: 1) modeling the discourse-level diversity with a global latent variable; and 2) augmenting the Transformer decoder with a sequence of fine-grained latent variables. Then, the proposed models are evaluated on three conversational datasets with both automatic metric and human evaluation. The experimental results show that our models improve standard Transformers and other baselines in terms of diversity, semantic relevance, and human judgment. <<</Abstract>>> <<<Introduction>>> Convolutional and fully-attentional feed-forward architectures, such as Transformers BIBREF0, have emerged as effective alternatives to RNNs BIBREF1 in wide range of NLP tasks. These architectures remove the computational temporal dependency during the training and effectively address the long-standing vanishing gradients problem of recurrent models by processing all inputs simultaneously. Notably, transformers apply a fully attention strategy, where each token in the sequence is informed by other tokens via a self-attention mechanism. It acts as an effectively global receptive field across the whole sequences which absence in RNNs. Despite the powerful modeling capability of trasnformers, they often fail to model one-to-many relation in dialogue response generation tasks BIBREF2 due to their deterministic nature. As a result, they generate dull and generic response (e.g., “I am not sure"), especially with greedy and beam search, which are widely used in other sequence modeling tasks. There have been attempts to generate diverse and informative dialogue responses by incorporating latent variable(s) into the RNN encoder-decoder architecture. In particular BIBREF2 adapt a conditional variational autoencoder (CVAE) to capture discourse-level variations of dialogue, while BIBREF3 and BIBREF4 integrates latent variables in the hidden states of the RNN decoder. However, the inherently sequential computation of aforementioned models limit the efficiency for large scale training. In this paper, we introduce the Variational Transformer (VT) a variational self-attentive feed-forward sequence model to address the aforementioned issues. The VT combine the parallelizability and global receptive field of the transformer with the variational nature of CVAE by incorporating stochastic latent variables into transformers. We explore two types of VT: 1) Global Variational Transformer (GVT), and 2) Sequential Variational Transformer. 
The GVT is the extension of CVAE in BIBREF2, which modeling the discourse-level diversity with a global latent variable, While SVT, inspired by variational autoregressive models BIBREF3, BIBREF4, incorporates a sequence of latent variables into decoding process by using a novel variational decoder layer. Unlike previous approaches BIBREF2, BIBREF3, BIBREF4, SVT uses Non-causal Multi-head Attention, which attend to future tokens for computing posterior latent variables instead of using an additional encoder. The proposed VT architectures integrate stochastic latent variables into Transformers. The experimental results on a three conversation dataset demonstrate that our models can generate more informative and coherent responses. <<</Introduction>>> <<<Related work>>> <<<Neural Conversational Models>>> Conversational systems has been widely studied BIBREF5, BIBREF6, BIBREF7, BIBREF8. Compare to rule-based systems BIBREF5, BIBREF6, sequence-to-sequence conversation models achieve superior performance in terms of scalable training and generalization ability BIBREF7. However, it has been pointed out that encoder-decoder models tend to generate generic and repetitive responses like “I am sorry" BIBREF9. To address this issue, there have been three main lines of work. The first is adding additional information (e.g., persona) as input to guild model generate more informative responses BIBREF10, BIBREF11. The second modifies the learning objective to promote more diverse generation BIBREF9, and the third integrates stochastic latent variables into Seq2Seq models by using the CVAE framework BIBREF12, BIBREF2. Our work comes within this third line introducing a novel model, the Variational Transformer, to improve dialogue response generation. <<</Neural Conversational Models>>> <<<Conditional Variational Autoencoders>>> Many works have attempted to combine CVAEs with encoder-decoder architectures for sequence generation tasks. BIBREF13 propose a variational encoder-decoder model for neural machine translation, while BIBREF14 apply variational recurrent neural networks (VRNN) BIBREF15 for text summarization. BIBREF2 and BIBREF16 explore incorporating meta features into CVAE framework in dialogue response generation tasks. BIBREF3 and BIBREF4 propose variational autoregressive decoders which enhanced by highly multi-modal latent variables to capture the high variability in dialogue responses. BIBREF17 further augment variational autoregressive decoders with dynamic memory networks for improving generation quality. We unify the previous successful ideas of CVAE, and explore the combinations of CVAE and Transformer. <<</Conditional Variational Autoencoders>>> <<<Fully Attentional Networks>>> Taking advantage of the parallel-in-time structure and global receptive field, Transformers BIBREF0 have recently been shown to achieve impressive results on various sequence modeling tasks. Based on this, several follow-up models have been presented. The Image Transformer BIBREF18 has been proposed for image generation, while the MultiModel BIBREF19 integrates convolution, attention and sparsely-gated mixture-of-expert blocks into a single deep-learning model for simultaneously learning multiple tasks from various domains. BIBREF20 proposed a fully attentional mixture-of-expert model (MoEL) for empathetic dialogue modeling. 
The Universal Transformer BIBREF1 incorporates the recurrent inductive bias of RNNs into the standard Transformer, and achieves better result on a wide range of algorithmic and language understanding tasks. BIBREF21 introduce the Latent Transformer (LT) for non-autoregressive machine translation. During training, the LT first autoencodes a target sequence into a shorter sequence discrete latent variables. Then a parallel decoder decodes the target using discrete latent variables and an input sequence. Different from the LT BIBREF21, the VT generates continuous latent variables during the decoding process. <<</Fully Attentional Networks>>> <<</Related work>>> <<<Preliminaries>>> <<<Conditional Variational Autoencoder for Dialogue Generation>>> The CVAE framework BIBREF22 represents a dyadic conversation via three random variables: the input condition $c$, including conversation context and meta features (meta features can be ignored when not available); a latent variable $z$; and the target response $x$. A CVAE can be efficiently trained with Stochastic Gradient Variational Bayes (SGVB) BIBREF23 by maximizing the variational lower bound of $x$ given c, according to: The typical CVAE consists of a prior network $p_{\theta }(z | c)$, which is used to approximate $p(z | c)$, a recognition network $p_{\phi }(z | c, x)$, which is used to approximate posterior distribution $q(z | c, x)$, and a decoder $p_{\theta }(x | z, c)$, which is used to approximate $p(x | z, c)$. By assuming z follows multivariate Gaussian distribution with a diagonal co-variance matrix, the evidence lower bound (ELBO) can be written as where $\mathcal {L}_{REC}$ denotes the reconstruction loss and $\mathcal {L}_{KL}$ denotes the Kullback-Leibler (KL) divergence between the posterior and prior. In dialogue generation tasks, previous works BIBREF2, BIBREF16 apply RNN encoders (with GRU or LSTM cell) to encode dialogue contexts and responses separately. The condition $c$ is represented by the concatenation of the last hidden state of the context encoder and the meta features (e.g., topic, emotion), while the response $x$ is represented by the last hidden state of response encoder. Then the prior network $p_{\theta }(z | c)$ and the recognition network $p_{\phi }(z | c, x)$ parameterized by multi-layer perceptrons (MLPs) are applied to approximate the means and the log variances of the prior latent distribution $\mathcal {N}\left(z ; \mu ^{\prime }, \sigma ^{\prime 2} \mathbf {I}\right)$ and posterior latent distribution $\mathcal {N}\left(z ; \mu , \sigma ^{2} \mathbf {I}\right)$. With the reparameterization trick BIBREF23, we can obtain samples of the prior latent variable (for testing) from $\mathcal {N}\left(z ; \mu ^{\prime }, \sigma ^{\prime 2} \mathbf {I}\right)$ and samples of the posterior latent variable (for training) from $\mathcal {N}\left(z ; \mu , \sigma ^{2} \mathbf {I}\right)$. Finally, an RNN decoder use $z$ and $c$ as the initial state to predicts the response $x$. The vanishing latent variable problem BIBREF24 is a common issue in RNN-based CVAEs. That is, the powerful autoregressive RNN decoder first learns to ignore the latent variable, and decodes the response by only condition on the previous tokens. Thus the latent variable fails to encode the meaningful information, and the CVAE deteriorates to seq2seq model. To alleviate this issue, KL annealing BIBREF24 and bag-of-word loss BIBREF2 have been proposed, and have shown effectiveness in various dialogue tasks BIBREF2, BIBREF16. 
<<</Conditional Variational Autoencoder for Dialogue Generation>>> <<<CVAE with Transformer>>> The aforementioned RNN-based CVAE framework integrates the latent variable into the initial state of the RNN decoder, while in the Transformer it is more flexible to incorporate the latent variable embedding into the first input token of the decoder to generate the initial state. The overall architecture of the GVT is depicted in Figure FIGREF9. Different from RNNs, the Transformer encoder maps an input sequence of symbol representations to a sequence of contextualized representations BIBREF0. In order to get fixed-dimension representations of the response and context, we add a special token $CLS$ at the beginning of the input sequence, as in BERT BIBREF25, to compute the weighted sum of the output representations via self-attention. Thus the output representation of the token $CLS$ is considered as the representation of the whole sequence. Then we introduce a recognition network and a prior network to compute the posterior latent variable and prior latent variable as in BIBREF2, BIBREF16. We add the latent variable sample $z$ and meta features $m$ (which can be ignored when not available) into $e_{SOS}$, the embedding of the start-of-sequence token $SOS$, to obtain a new embedding $e^{\prime }_{SOS}$. Finally, the transformer decoder decodes the response $x$ sequentially while attending to the new embedding $e^{\prime }_{SOS}$ of token $SOS$ with latent information. This design enhances the CVAE framework with the global receptive field, and each position of the GVT can directly access the latent information via the multi-head self-attention mechanism. However, we still observe that the GVT suffers from the vanishing latent variable problem, as the RNN-based CVAE does, because the decoder can bypass the latent information by paying less attention to the $SOS$ token. Hence, we apply KL annealing and the bag-of-word auxiliary loss $\mathcal {L}_{bow}$ as in BIBREF2, BIBREF16 to preserve the useful information of the latent variable. Therefore, the learning objective of the GVT is the ELBO objective augmented with the bag-of-word auxiliary loss $\mathcal {L}_{bow}$. <<</CVAE with Transformer>>> <<</Preliminaries>>> <<<Sequential Variational Transformer>>> In order to augment the capacity of the latent variable with multi-modal distributions and to better utilize the latent information, we further explore incorporating a sequence of latent variables into the decoding process. We introduce the Sequential Variational Transformer (SVT) with a novel variational decoder layer which generates latent variables for each position: $z=\left(z_{1}, \dots , z_{T}\right)$. Similar to BIBREF3, we interpret the latent variables as a generation plan for the future sequence. Unlike previous CVAE models which use an extra encoder to encode the response separately BIBREF2, BIBREF16 or use a backward RNN to encode the future sequence for each time step BIBREF3, BIBREF4, the SVT uses a Non-causal Multi-head Attention which leaks future information to the recognition network for computing the posterior latent variables. As shown in Figure FIGREF13, the SVT shares the same encoder as the standard Transformer BIBREF0, while its decoder consists of a variational decoder layer followed by a stack of $N$ standard Transformer decoder layers. The variational decoder layer has two paths for computing the posterior latent variable and the prior latent variable, respectively. We denote them as the Posterior Path and the Prior Path.
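Before detailing the two paths, the sketch below contrasts how the two variants described above consume their latent variables: the GVT folds a single latent sample (plus an optional meta-feature embedding) into the $SOS$ token embedding, while the SVT fuses one latent per position with the observed representation through a position-wise feed-forward layer before the stack of standard decoder layers. The simple additive injection, the two-layer FFN, and the assumption that latent and model dimensions match are illustrative choices, not the exact implementation.

```python
import torch
import torch.nn as nn

def gvt_inject_latent(e_sos, z, m=None):
    """GVT-style injection (sketch): a single latent sample z (and optional meta
    embedding m) is added to the start-of-sequence embedding, so every decoder
    position can reach the latent information by attending to that token."""
    return e_sos + z + (m if m is not None else torch.zeros_like(z))

class PositionwiseFusion(nn.Module):
    """SVT-style fusion (sketch): one latent per position, concatenated with the
    observed representation o_t and fused by a position-wise feed-forward layer."""
    def __init__(self, d_model):
        super().__init__()
        self.ffn = nn.Sequential(
            nn.Linear(2 * d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, d_model),
        )

    def forward(self, observed, latents):  # both shaped (batch, T, d_model)
        return self.ffn(torch.cat([observed, latents], dim=-1))
```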
<<<Prior Path>>> The Prior Path (solid line in Figure FIGREF13) has a masked multi-head self-attention sub-layer which performs causal attention on the shifted response, followed by a multi-head attention sub-layer which performs encoder-decoder attention on the context encoder. The last sub-layer is composed of an MLP prior network, which approximates a prior latent variable for each position, and a Position-wise Feed-Forward Network (FFN), which fuses the latent information $z$ with the observed information representation $o^P$ produced before the prior network (shown in Figure FIGREF13). Specifically, we concatenate $o^P$ with $z$ as the input to the FFN, and the FFN passes the fused representation to the next layer. As in BIBREF0, in the variational decoder layer each sub-layer is followed by a residual connection and layer normalization. That is, the output of each sub-layer is $LayerNorm(x + Sublayer(x))$. We decompose the response $x$ as $x = \left(x_1, \cdots , x_T\right)$ and the latent variable $z$ as $z=\left(z_{1}, \dots , z_{T}\right)$. The prior model produces latent variables at each position $z_t$ by not only conditioning on the input condition $c$ (the concatenation of context and meta features), but also conditioning on the observed response tokens $x_{1:t-1}$. By assuming $z_t$ follows a multivariate Gaussian distribution, the prior model becomes $p_{\theta }\left(z \mid x, c\right)=\prod _{t=1}^{T} p_{\theta }\left(z_{t} \mid x_{1:t-1}, c\right)$, where $p_{\theta }\left(z_{t} \mid x_{1:t-1}, c\right)=\mathcal {N}\left(z_{t} ; \mu ^{\prime }_{t}, \sigma ^{\prime 2}_{t} \mathbf {I}\right)$ with the mean and log variance computed by the MLP prior network from $o^{P}$ at position $t$. <<</Prior Path>>> <<<Posterior Path>>> The only difference between the Posterior Path (dashed line in Figure FIGREF13) and the Prior Path is that the mask is removed from the masked multi-head attention. Thus the masked (causal) multi-head attention becomes non-causal multi-head attention, which allows each position to attend to the subsequent positions. Then, the second multi-head attention sub-layer (sharing the same weights with the Prior Path) performs posterior attention on the encoder and passes the posterior observed information $o_R$ to the recognition network. The recognition network produces the posterior latent variable for each position $z_t$ as $p_{\phi }\left(z_{t} \mid x, c\right)=\mathcal {N}\left(z_{t} ; \mu _{t}, \sigma ^{2}_{t} \mathbf {I}\right)$, with the mean and log variance computed by the recognition network from $o_{R}$ at position $t$. During training, the Posterior Path guides the learning of the Prior Path via a KL divergence constraint $\mathcal {L}_{KL}=\sum _{t=1}^{T} \mathrm {KL}\left(p_{\phi }\left(z_{t} \mid x, c\right) \,\Vert \, p_{\theta }\left(z_{t} \mid x_{1:t-1}, c\right)\right)$. In the training phase, the posterior latent variables are passed to the FFN, while in the testing phase the Posterior Path is blocked and the posterior latent variables are replaced with the prior latent variables defined above. During the decoding process, each response token $x_t$ is generated by conditioning on the observed response tokens $x_{1:t-1}$, the latent variables $z_{1:t}$, and the input condition $c$. The decoding process of the SVT is therefore $p_{\theta }\left(x \mid z, c\right)=\prod _{t=1}^{T} p_{\theta }\left(x_{t} \mid z_{1:t}, x_{1:t-1}, c\right)$. <<</Posterior Path>>> <<<Auxiliary Loss>>> As we expect the latent variables to be a generation plan for the future sequence, we inject such a bias into the latent variables by using an auxiliary loss, Sequential-Bag-of-Word (SBOW), which was proposed by BIBREF4. The idea of the SBOW auxiliary objective is to sequentially predict the bag of succeeding target words $x_{t:T}$ by using the latent variable $z_t$. In our case, the succeeding-word prediction also leverages the observed information $c$ and $x_{1:t-1}$. Thus the auxiliary loss at each position is $\mathcal {L}_{sbow}(t)=-\sum _{k=t}^{T} \log f_{aux}\left(x_{k} \mid z_{t}, x_{1:t-1}, c\right)$, where $f_{aux}$ is a feed-forward neural network with a softmax output.
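A possible way to compute such a Sequential-Bag-of-Word term is sketched below: at each position $t$, a feed-forward network with a softmax output scores the whole vocabulary, and the negative log-likelihood of every succeeding target token is accumulated. Combining the latent with the decoder state by concatenation, the padding mask, and the averaging scheme are assumptions made for illustration rather than the exact formulation of the paper.

```python
import torch
import torch.nn.functional as F

def sbow_loss(z, dec_states, targets, f_aux, pad_id=0):
    """Sequential bag-of-words auxiliary loss (sketch).

    z, dec_states: (B, T, d) latent samples and observed decoder states.
    targets: (B, T) response token ids.
    f_aux: module mapping (B, T, 2*d) -> (B, T, V) vocabulary logits.
    """
    logp = F.log_softmax(f_aux(torch.cat([z, dec_states], dim=-1)), dim=-1)
    B, T = targets.shape
    loss = torch.zeros((), device=targets.device)
    for t in range(T):
        succeeding = targets[:, t:]                      # bag of words x_{t:T}
        mask = (succeeding != pad_id).float()
        # negative log-likelihood of every succeeding token under position t's softmax
        nll = -logp[:, t, :].gather(1, succeeding) * mask
        loss = loss + nll.sum() / mask.sum().clamp(min=1.0)
    return loss / T
```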
<<</Auxiliary Loss>>> <<<Learning>>> The evidence lower bound (ELBO) objective of the SVT is the sum of the reconstruction loss $\mathcal {L}_{REC}(t)$ and the Kullback-Leibler divergence loss $\mathcal {L}_{KL}(t)$ at each position: $\mathcal {L}_{ELBO}=\sum _{t=1}^{T}\left(\mathcal {L}_{REC}(t)+\mathcal {L}_{KL}(t)\right)$. We regularize the ELBO learning objective with an auxiliary loss $\mathcal {L}_{sbow}$ to enhance the expressiveness of the latent variables. Therefore, the final learning objective is formulated as $\mathcal {L}=\mathcal {L}_{ELBO}+\mathcal {L}_{sbow}$, where $\mathcal {L}_{sbow}=\sum _{t=1}^{T}\mathcal {L}_{sbow}(t)$. <<</Learning>>> <<</Sequential Variational Transformer>>> <<<Experiments>>> <<<Dataset>>> We evaluate the proposed models on three conversation datasets: MojiTalk BIBREF16, PersonaChat BIBREF11, and Empathetic-Dialogues BIBREF26. <<<MojiTalk>>> dataset consists of 596,959 post and response pairs from Twitter. Each response is labeled by one emoji which indicates the response emotion. There are 64 emoji labels in total with an unbalanced distribution. We use the preprocessed data and vocabulary released by BIBREF16 and follow the same split of train/validation/test sets. <<</MojiTalk>>> <<<PersonaChat & Empathetic-Dialogues>>> are one-to-one multi-turn conversation datasets. In PersonaChat (Persona), the conversations revolve around personas which are established by four to six persona sentences. In Empathetic-Dialogues (ED), the conversations are mostly about a situation that happened to one of the speakers, while the other speaker tries to understand the feeling and reply accordingly. Both datasets are about modeling social skills and the goal is to make conversations more engaging for users. Therefore, we combine the train/validation/test sets of the two datasets. <<</PersonaChat & Empathetic-Dialogues>>> <<</Dataset>>> <<<Baselines>>> We compare the proposed models with the following baselines: <<<Seq2Seq.>>> An attention-based sequence-to-sequence model with the emoji vector as additional input, as described in MojiTalk BIBREF16. <<</Seq2Seq.>>> <<<CVAE.>>> An RNN-based conditional variational autoencoder for dialogue response generation BIBREF16, which uses a multivariate Gaussian latent variable to model the response and concatenates it with the last hidden state of the encoder as the initial state of the decoder. KL annealing, an early stopping strategy and the bag-of-word auxiliary loss are applied during training. We use the implementation released by BIBREF16. <<</CVAE.>>> <<<Transformer.>>> A Transformer BIBREF0 trained with a Maximum Likelihood Estimation (MLE) objective, which can be considered as the base model for both the GVT and the SVT. <<</Transformer.>>> <<</Baselines>>> <<<Hyper-parameters and Training Setup>>> We use a 4-layer Transformer as our base model. The hidden size is set to 300 everywhere, and the word embeddings are initialized with the 300-dimensional pre-trained GloVe embeddings for both encoder and decoder. The multi-head attention sub-layers are made up of 4 attention heads, each with embedding dimension 64. The size of the latent variable is 300. The recognition network and the prior network are parameterized by 3-layer MLPs with a hidden dimension of 512. Following the training setup of BIBREF16, we first train our baseline Transformer model with the MLE objective and use it to initialize its counterparts in both the GVT and the SVT. Then the models are trained end-to-end with the Adam optimizer with an initial learning rate of $2\times 10^{-4}$. KL annealing and an early stopping strategy are applied as in BIBREF16. At test time, we use a greedy decoding strategy for all models.
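For reference, the setup above can be summarized as a configuration sketch; the numeric values are taken directly from the text, while the dictionary layout and key names are purely illustrative.

```python
# Training configuration as stated in the text (structure and keys are illustrative).
TRAINING_SETUP = {
    "base_model": {"layers": 4, "hidden_size": 300, "heads": 4, "head_dim": 64},
    "word_embeddings": "300-dimensional pre-trained GloVe (encoder and decoder)",
    "latent_size": 300,
    "prior_and_recognition_nets": {"type": "MLP", "layers": 3, "hidden_size": 512},
    "initialization": "baseline Transformer pre-trained with the MLE objective",
    "optimizer": {"name": "Adam", "initial_lr": 2e-4},
    "training_tricks": ["KL annealing", "early stopping"],
    "decoding": "greedy",
}
```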
<<</Hyper-parameters and Training Setup>>> <<<Automatic Evaluation>>> <<<PPL & KLD.>>> The evaluation metrics include Perplexity (PPL) and the Kullback-Leibler divergence between the posterior and prior (KLD). A well-trained model should achieve a low reconstruction loss and a small but non-trivial KL distance BIBREF27. <<</PPL & KLD.>>> <<<Diversity.>>> To measure the generation diversity, we calculate Dist-1, Dist-2, and Dist-3, the ratio of the number of distinct n-grams (unigrams, bigrams, and trigrams) over the total number of n-grams. A higher distinct n-gram ratio indicates more diverse generation. <<</Diversity.>>> <<<Embeddings Similarity.>>> This metric computes the cosine similarity between the sentence embedding of a generated sequence and that of the ground-truth response. In our experiments, we introduce two different ways to represent sentence embeddings. The first is $\textbf {EMB}_\textbf {FT}$ BIBREF28, which calculates the average of word embeddings in a sentence using FastText BIBREF29, which is trained with Common Crawl and Wikipedia data. We use FastText embeddings instead of other pre-trained word embeddings because they can handle the out-of-vocabulary issue. However, representing a sentence by simply taking the average of word embeddings ignores the context information. Therefore, we propose to use a pre-trained language model, BERT BIBREF25, to compute the contextualized sentence representation. Specifically, we use a pre-trained BERT to encode a generated sentence and a ground-truth response, and average the output representations of each to obtain their sentence embeddings. We denote such contextualized sentence embeddings as $\textbf {EMB}_\textbf {BERT}$. <<</Embeddings Similarity.>>> <<</Automatic Evaluation>>> <<<Human Evaluation>>> In the human evaluation, we prepare multiple-choice questions for human evaluators, and the answers are the generation results from the five models (Seq2Seq, CVAE, Transformer, GVT, and SVT). We first randomly sample 100 dialogues and their corresponding responses from our models and the baselines. For each response, we assign three human annotators to select the most coherent (on-topic) response to the context (multiple answers are allowed). In addition, annotators also need to choose the best response correlated to the given emoji label in MojiTalk and the most engaging response in PersonaChat and Empathetic-Dialogues. If there is no response that satisfies the evaluators, they can choose “all answers are bad", which means none of the answers is chosen. We compute the rate at which each model is chosen to quantify generation quality with respect to the human standard. <<</Human Evaluation>>> <<</Experiments>>> <<<Results>>> <<<Quantitative Analysis>>> The automatic evaluation results are shown in Table TABREF35. Transformer-based models have significantly lower perplexity compared to RNN-based models, which indicates that the global receptive field provided by multi-head self-attention boosts the modeling capacity. However, the deterministic Seq2Seq and Transformer models tend to generate generic responses, which leads to low diversity scores. Meanwhile, incorporating a stochastic latent variable into both models (CVAE and GVT) promotes more diverse generation results and boosts the diversity scores Dist-1, Dist-2, and Dist-3. Compared to the baseline models, the GVT achieves a relatively lower reconstruction PPL, which suggests that the global latent variable contains rich latent information (e.g., topic) for response generation.
Meanwhile, the sequential latent variables of the SVT encode fine-grained latent information and further improve the reconstruction PPL. On the other hand, the SVT achieves the highest scores on the two semantic relevance-oriented metrics, $\textbf {EMB}_\textbf {FT}$ and $\textbf {EMB}_\textbf {BERT}$, on the MojiTalk dataset, while on the combined dataset of Persona and ED we observe a performance drop of the SVT compared to other models. This is because both Persona and ED are well designed and have lower entropy than MojiTalk, which was collected from Twitter. We hypothesize that the sequential latent variables have no advantage in terms of similarity to a single, fixed "gold response" when modeling low-entropy responses. Indeed, in open-domain dialogue response generation, automatic metrics are not always aligned with human judgment BIBREF28. In contrast, the human evaluation results reported in Table TABREF35 demonstrate that the generations of the SVT are closer to the human standard in terms of coherence, invoked emotion and engagingness. <<</Quantitative Analysis>>> <<<Qualitative Analysis>>> Table TABREF42 compares the generations of the proposed models with the baselines given the same contexts. We observe that the Seq2Seq and vanilla Transformer models tend to generate generic and repetitive responses (e.g., i am not sure) in MojiTalk because their deterministic structure fails to capture the variability in dialogue responses. By incorporating stochastic latent variables, the CVAE and GVT can generate more diverse responses, but their responses are sometimes digressive (e.g., example 5). Interestingly, the GVT and SVT generalize the topic beyond the context, which makes the dialogue more engaging (e.g., example 4). In general, the SVT is able to generate more coherent and informative responses. <<</Qualitative Analysis>>> <<</Results>>> <<<Conclusion>>> This paper introduces the Variational Transformer (VT), a variational self-attentive feed-forward sequence model that combines the global receptive field of a Transformer with the variational nature of a CVAE. We propose two types of the VT: 1) the Global Variational Transformer (GVT), which incorporates a global latent variable as additional input to the Transformer decoder; and 2) the Sequential Variational Transformer (SVT), which generates latent variables for each position during the decoding process. Quantitative and qualitative experimental results show that our models outperform the baselines in terms of diversity, semantic relevance, and human judgment. In future work, we will utilize pre-trained language models BIBREF30 as the backbone to strengthen the language model of the VT for better generation. <<</Conclusion>>> <<</Title>>>
{ "references": [ "MojiTalk ,PersonaChat ,Empathetic-Dialogues" ], "type": "extractive" }
1909.03544
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What previous approaches did this method outperform? Context: <<<Title>>> Czech Text Processing with Contextual Embeddings: POS Tagging, Lemmatization, Parsing and NER <<<Abstract>>> Contextualized embeddings, which capture appropriate word meaning depending on context, have recently been proposed. We evaluate two methods for precomputing such embeddings, BERT and Flair, on four Czech text processing tasks: part-of-speech (POS) tagging, lemmatization, dependency parsing and named entity recognition (NER). The first three tasks, POS tagging, lemmatization and dependency parsing, are evaluated on two corpora: the Prague Dependency Treebank 3.5 and the Universal Dependencies 2.3. The named entity recognition (NER) is evaluated on the Czech Named Entity Corpus 1.1 and 2.0. We report state-of-the-art results for the above mentioned tasks and corpora. <<</Abstract>>> <<<Introduction>>> Recently, a novel way of computing word embeddings has been proposed. Instead of computing one word embedding for each word, which sums over all its occurrences and ignores the appropriate word meaning in various contexts, the contextualized embeddings are computed for each word occurrence, taking into account the whole sentence. Three ways of computing such contextualized embeddings have been proposed: ELMo BIBREF0, BERT BIBREF1 and Flair BIBREF2, along with precomputed models. Peters et al. (2018) BIBREF0 obtain the proposed embeddings, called ELMo, from internal states of a deep bidirectional language model, pretrained on a large corpus. Akbik et al. (2018) BIBREF2 introduced Flair, contextualized word embeddings obtained from internal states of a character-level bidirectional language model, thus significantly increasing the state of the art of POS tagging, chunking and NER tasks. Last, but not least, Devlin et al. (2018) BIBREF1 employ a Transformer BIBREF3 to compute contextualized embeddings from preceding and following context at the same time, at the cost of increased processing costs. The new BERT embeddings achieved state-of-the-art results in eleven natural language tasks. Using two of these methods, for which precomputed models for Czech are available, namely BERT and Flair, we present our models for four NLP tasks: part-of-speech (POS) tagging, lemmatization, dependency parsing and named entity recognition (NER). Adding the contextualized embeddings as optional inputs in strong artificial neural network baselines, we report state-of-the-art results in these four tasks. <<</Introduction>>> <<<Related Work>>> As for the Prague Dependency Treebank (PDT) BIBREF4, most of the previous works are non-neural systems, with the exception of BIBREF5, who hold the state of the art for Czech POS tagging and lemmatization, achieved with a recurrent neural network (RNN) using end-to-end trainable word embeddings and character-level word embeddings. Otherwise, Spoustová et al. (2009) BIBREF6 used an averaged perceptron for POS tagging. For parsing the PDT, Holan and Zabokrtský (2006) BIBREF7 and Novák and Žabokrtský (2007) BIBREF8 used a combination of non-neural parsing techniques. In the multilingual shared task CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9, raw text is processed and the POS tagging, lemmatization and dependency parsing are evaluated on the Universal Dependencies (UD) BIBREF10. Czech is one of the 57 evaluated languages.
Interestingly, all 26 participant systems employed artificial neural networks in some way. Of these, 3 participant systems used (a slightly modified variant of) the only newly presented contextualized embeddings, called ELMo BIBREF0, most notably one of the shared task winners BIBREF11. BERT and Flair were not available at the time. For the Czech NER, Straková et al. (2016) BIBREF12 use an artificial neural network with word- and character-level word embeddings to perform NER on the Czech Named Entity Corpus (CNEC) BIBREF13, BIBREF14, BIBREF15. <<</Related Work>>> <<<Datasets>>> <<<Prague Dependency Treebank 3.5>>> The Prague Dependency Treebank 3.5 BIBREF4 is a 2018 edition of the core Prague Dependency Treebank. The Prague Dependency Treebank 3.5 contains the same texts as the previous versions since 2.0, and is divided into train, dtest, and etest subparts, where dtest is used as a development set and etest as a test set. The dataset consists of several layers – the morphological m-layer is the largest and contains morphological annotations (POS tags and lemmas), the analytical a-layer contains labeled dependency trees, and the t-layer is the smallest and contains tectogrammatical trees. The statistics of PDT 3.5 sizes are presented in Table TABREF7. A detailed description of the morphological system can be found in BIBREF16, and a specification of the syntactic annotations has been presented in BIBREF17. We note that in PDT, lemmas with the same word form are disambiguated using a number suffix – for example, English lemmas for the word forms can (noun) and can (verb) would be annotated as can-1 and can-2. In evaluation, we compute: POS tagging accuracy, lemmatization accuracy, unlabeled attachment score (UAS), and labeled attachment score (LAS). <<</Prague Dependency Treebank 3.5>>> <<<Universal Dependencies>>> The Universal Dependencies project BIBREF10 seeks to develop cross-linguistically consistent treebank annotation of morphology and syntax for many languages. We evaluate the Czech PDT treebank of UD 2.3 BIBREF18, which is an automated conversion of the PDT 3.5 a-layer to the Universal Dependencies annotation. The original POS tags are used to generate UPOS (universal POS tags), XPOS (language-specific POS tags, in this case the original PDT tags), and Feats (universal morphological features). The UD lemmas are the raw textual lemmas, so the discriminative numeric suffix of PDT is dropped. The dependency trees are converted according to the UD guidelines, adapting both the unlabeled trees and the dependency labels. To compute the evaluation scores, we use the official CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9 evaluation script, which produces the following metrics: UPOS – universal POS tags accuracy, XPOS – language-specific POS tags accuracy, UFeats – universal subset of morphological features accuracy, Lemmas – lemmatization accuracy, UAS – unlabeled attachment score, LAS – labeled attachment score, MLAS – morphology-aware LAS, and BLEX – bi-lexical dependency score. <<</Universal Dependencies>>> <<<Czech Named Entity Corpus>>> The Czech Named Entity Corpus 1.1 BIBREF13, BIBREF14 is a corpus of $5\,868$ Czech sentences with manually annotated $33\,662$ Czech named entities, classified according to a two-level hierarchy of 62 named entities.
The Czech Named Entity Corpus 2.0 BIBREF15 contains $8\,993$ Czech sentences with manually annotated $35\,220$ Czech named entities, classified according to a two-level hierarchy of 46 named entities. We evaluate the NER task with the official CNEC evaluation script. Similarly to previous literature BIBREF13, BIBREF12 etc., the script only evaluates the first round annotation classes for the CNEC 1.1. For the CNEC 2.0, the script evaluates all annotated classes. <<</Czech Named Entity Corpus>>> <<</Datasets>>> <<<Neural Architectures>>> All our neural architectures are recurrent neural networks (RNNs). The POS tagging, lemmatization and dependency parsing are performed with UDPipe 2.0 (Section SECREF16) and NER is performed with our new sequence-to-sequence model (Section SECREF36). <<<POS Tagging, Lemmatization, and Dependency Parsing>>> We perform POS tagging, lemmatization and dependency parsing using UDPipe 2.0 BIBREF19, one of the three winning systems of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9 and an overall winner of The 2018 Shared Task on Extrinsic Parser Evaluation BIBREF20. An overview of this architecture is presented in Figure FIGREF17 and the full details of the architecture and the training procedure are available in BIBREF19. <<<POS Tagging and Lemmatization>>> The tagger employs a standard bi-LSTM architecture. After embedding input words, three bidirectional LSTM BIBREF21 layers are performed, followed by softmax output layers for POS tags and lemmas. While a classification output layer is natural for POS tags, we also apply it to lemmatization and generate lemmas by classifying the input words into lemma generation rules, therefore considering lemmatization as another tagging task. We construct a lemma generation rule from a given form and lemma as follows: We start by finding the longest continuous substring of the form and the lemma. If it is empty, we use the lemma itself as the class. If there is a common substring of the form and the lemma, we compute the shortest edit script converting the prefix of the form into the prefix of the lemma, and the shortest edit script converting the suffix of the form to the suffix of the lemma. The edit scripts permit the operations delete_current_char and insert_char(c). All above operations are performed case insensitively. To indicate correct casing of the lemma, we consider the lemma to be a concatenation of segments, where each segment is composed of either a sequence of lowercase characters, or a sequence of uppercase characters. We represent the lemma casing by encoding the beginning of every such segment, where the offsets in the first half of the lemma are computed relatively to the start of the lemma, and the offsets in the second half of the lemma are computed relatively to the end of the lemma. <<</POS Tagging and Lemmatization>>> <<<Dependency Parsing>>> The dependency parsing is again predicted using the UDPipe 2.0 architecture. After embedding input words, three bidirectional LSTM BIBREF21 layers are again performed, followed by a biaffine attention layer BIBREF22 producing labeled dependency trees. In our evaluation we do not utilize gold POS tags and lemmas on the test set for dependency parsing. Instead, we consider three ways of employing them during parsing: not using them at all; adding predicted POS tags and lemmas on input; or performing joint training of POS tags, lemmatization, and dependency parsing.
In this case, we share the first two bidirectional LSTM layers between the tagger and the parser. <<</Dependency Parsing>>> <<<Input Embeddings>>> In our baseline model, we use end-to-end word embeddings and also character-level word embeddings (bidirectional GRUs, BIBREF23, BIBREF24, BIBREF25 of dimension 256) trained specifically for the task. Our architecture can optionally employ the following additional inputs: pretrained word embeddings (WE): For the PDT experiments, we generate the word embeddings with word2vec on a concatenation of large raw Czech corpora available from the LINDAT/CLARIN repository. For UD Czech, we use FastText word embeddings BIBREF27 of dimension 300, which we pretrain on Czech Wikipedia using segmentation and tokenization trained from the UD data. BERT BIBREF1: Pretrained contextual word embeddings of dimension 768 from the Base model. We average the last four layers of the BERT model to produce the embeddings. Because BERT utilizes word pieces, we decompose UD words into appropriate subwords and then average the generated embeddings over subwords belonging to the same word. Flair BIBREF2: Pretrained contextual word embeddings of dimension 4096. <<</Input Embeddings>>> <<<POS Tags and Lemmas Decoding>>> Optionally, we employ the morphological dictionary MorfFlex BIBREF28 during decoding. If the morphological dictionary is used, it may produce analyses for an input word as (POS tag, lemma) pairs. If any are generated, we choose the pair with maximum likelihood given by both the POS tag and lemmatization model. <<</POS Tags and Lemmas Decoding>>> <<</POS Tagging, Lemmatization, and Dependency Parsing>>> <<<Named Entity Recognition>>> We use a novel approach BIBREF29 for nested named entity recognition (NER) to capture the nested entities in the Czech Named Entity Corpus. The nested entities are encoded in a sequence and the problem of nested NER is then viewed as a sequence-to-sequence (seq2seq) problem, in which the input sequence consists of the input tokens (forms) and the output sequence of the linearized entity labels. The system is an encoder-decoder architecture. The encoder is a bi-directional LSTM and the decoder is an LSTM. The encoded labels are predicted one by one by the decoder, until the decoder outputs the "<eow>" (end of word) label and moves to the next token. We use hard attention on the word whose label(s) is being predicted. We train the network using the lazy variant of the Adam optimizer BIBREF30, which only updates accumulators for variables that appear in the current batch, with parameters $\beta _1=0.9$ and $\beta _2=0.98$. We use mini-batches of size 8. As regularization, we apply dropout with rate $0.5$, and the word dropout replaces $20\%$ of words by the unknown token to force the network to rely more on context. We did not perform any complex hyperparameter search. In this model, we use the following word- and character-level word embeddings: pretrained word embeddings: We use the FastText BIBREF27 word embeddings of dimension 300 from the publicly available Czech model. end-to-end word embeddings: We embed the input forms and lemmas (256 dimensions) and POS tags (one-hot). end-to-end character-level word embeddings: We use bidirectional GRUs BIBREF23, BIBREF24 of dimension 128 in line with BIBREF25: we represent every Unicode character with a vector of dimension 128, and concatenate GRU outputs for forward and reversed word characters.
Optionally, we add the BERT BIBREF1 and the Flair BIBREF2 contextualized embeddings in the same way as in UDPipe 2.0 (Section SECREF16). <<</Named Entity Recognition>>> <<</Neural Architectures>>> <<<Results>>> <<<POS Tagging and Lemmatization on PDT 3.5>>> The POS tagging and lemmatization results are presented in Table TABREF44. The word2vec word embeddings (WE) considerably increase performance compared to the baseline, especially in POS tagging. When only Flair embeddings are added to the baseline, we also observe an improvement, but not as high. We hypothesise that the lower performance (in contrast with the results reported in BIBREF2) is caused by the size of the training data, because we train the word2vec WE on a considerably larger dataset than the Czech Flair model. However, when WE and Flair embeddings are combined, performance moderately increases, demonstrating that the two embedding methods produce at least partially complementary representations. The BERT embeddings alone bring the highest improvement in performance. Furthermore, their combination with WE or Flair again yields a performance increase. The best results are achieved by exploiting all three embedding methods, substantially exceeding state-of-the-art results. Utilization of the morphological dictionary improves prediction accuracy. However, as the performance of a model itself increases, the gains obtained by the morphological dictionary diminish – for a model without any pretrained embeddings, the morphological dictionary improves POS tagging and lemmatization by $0.43\%$ and $0.45\%$, respectively, while the best performing model gains only $0.11\%$ and $0.23\%$. <<</POS Tagging and Lemmatization on PDT 3.5>>> <<<Dependency Parsing on PDT 3.5>>> The evaluation of the contextualized embedding methods, as well as various ways of POS tag utilization, is presented in Table TABREF44. Without POS tags and lemmas, the Flair embeddings bring only a slight improvement in dependency parsing when added to WE. In contrast, employing BERT embeddings results in substantial gains, increasing UAS and LAS by 1.6% and 2.1%. A combination of BERT and Flair embeddings does not result in any performance improvement, demonstrating that the BERT syntactic representations encompass the Flair embeddings. When introducing POS tags and lemmas predicted by the best model from Section SECREF43 as inputs for dependency parsing, the performance increases only slightly. A better way of exploiting POS tags and lemmas is achieved in a joint model, which predicts POS tags, lemmas, and dependency trees simultaneously. Again, BERT embeddings bring significant improvements, but in contrast to syntax parsing only, adding Flair embeddings to BERT results in a moderate gain – we hypothesise that the increase is due to the complementary morphological information present in Flair embeddings (cf. Section SECREF43). Note that the joint model achieves better parsing accuracy than the one given gold POS tags and lemmas on input. However, the POS tags and lemmas predicted by the joint model are of slightly lower quality compared to a standalone tagger with the best configuration from Section SECREF43. Table TABREF44 compares our best model with state-of-the-art results on PDT 2.0 (note that some of the related work used only a subset of PDT 2.0 and/or utilized gold morphological annotation). To the best of our knowledge, research on PDT parsing was performed mostly in the first decade of this century, therefore even our baseline model substantially surpasses previous works.
Our best model with contextualized embeddings achieves nearly 50% error reduction both in UAS and LAS. <<</Dependency Parsing on PDT 3.5>>> <<<POS Tagging, Lemmatization and Dependency Parsing on Universal Dependencies>>> Table TABREF47 shows the performance of analyzed embedding methods in a joint model performing POS tagging, lemmatization, and dependency parsing on Czech PDT UD 2.3 treebank. This treebank is derived from PDT 3.5 a-layer, with original POS tags kept in XPOS, and the dependency trees and lemmas modified according to UD guidelines. We observe that the word2vec WEs perform similarly to Flair embeddings in this setting. Our hypothesis is that the word2vec WEs performance loss (compared to WEs in Section SECREF43) is caused by using a considerably smaller raw corpus to pretrain the WEs (Czech Wikipedia with 785M words, compared to 4G words used in Section SECREF43), due to licensing reasons. BERT embeddings once more deliver the highest improvement, especially in dependency parsing, and our best model employs all three embedding methods. In the previous ablation experiments, we used the gold segmentation and tokenization in the Czech PDT UD 2.3 treebank. For comparison with state of the art, Czech PDT UD 2.2 treebank without gold segmentation and tokenization is used in evaluation, according to the CoNLL 2018 shared task training and evaluation protocol. Our system reuses segmentation and tokenization produced by UDPipe 2.0 in the CoNLL 2018 shared task and surpasses previous works substantially in all metrics (bottom part of Table TABREF47). Comparing the results with a joint tagging and parsing PDT 3.5 model from Table TABREF7, we observe that the XPOS results are nearly identical as expected. Lemmatization on the UD treebank is performed without the discriminative numeric suffixes (see Section SECREF3) and therefore reaches better performance. Both UAS and LAS are also better on the UD treebank, which we assume is caused by the different annotation scheme. <<</POS Tagging, Lemmatization and Dependency Parsing on Universal Dependencies>>> <<</Results>>> <<<Conclusion>>> We have presented an evaluation of two contextualized embeddings methods, namely BERT and Flair. By utilizing these embeddings as input to deep neural networks, we have achieved state-of-the-art results in several Czech text processing tasks, namely in POS tagging, lemmatization, dependency parsing and named entity recognition. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Table TABREF44,Table TABREF44,Table TABREF47,Table TABREF47" ], "type": "extractive" }
1909.03544
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What data is used to build the embeddings? Context: <<<Title>>> Czech Text Processing with Contextual Embeddings: POS Tagging, Lemmatization, Parsing and NER <<<Abstract>>> Contextualized embeddings, which capture appropriate word meaning depending on context, have recently been proposed. We evaluate two methods for precomputing such embeddings, BERT and Flair, on four Czech text processing tasks: part-of-speech (POS) tagging, lemmatization, dependency parsing and named entity recognition (NER). The first three tasks, POS tagging, lemmatization and dependency parsing, are evaluated on two corpora: the Prague Dependency Treebank 3.5 and the Universal Dependencies 2.3. The named entity recognition (NER) is evaluated on the Czech Named Entity Corpus 1.1 and 2.0. We report state-of-the-art results for the above mentioned tasks and corpora. <<</Abstract>>> <<<Introduction>>> Recently, a novel way of computing word embeddings has been proposed. Instead of computing one word embedding for each word, which sums over all its occurrences and ignores the appropriate word meaning in various contexts, the contextualized embeddings are computed for each word occurrence, taking into account the whole sentence. Three ways of computing such contextualized embeddings have been proposed: ELMo BIBREF0, BERT BIBREF1 and Flair BIBREF2, along with precomputed models. Peters et al. (2018) BIBREF0 obtain the proposed embeddings, called ELMo, from internal states of a deep bidirectional language model, pretrained on a large corpus. Akbik et al. (2018) BIBREF2 introduced Flair, contextualized word embeddings obtained from internal states of a character-level bidirectional language model, thus significantly increasing the state of the art of POS tagging, chunking and NER tasks. Last, but not least, Devlin et al. (2018) BIBREF1 employ a Transformer BIBREF3 to compute contextualized embeddings from preceding and following context at the same time, at the cost of increased processing costs. The new BERT embeddings achieved state-of-the-art results in eleven natural language tasks. Using two of these methods, for which precomputed models for Czech are available, namely BERT and Flair, we present our models for four NLP tasks: part-of-speech (POS) tagging, lemmatization, dependency parsing and named entity recognition (NER). Adding the contextualized embeddings as optional inputs in strong artificial neural network baselines, we report state-of-the-art results in these four tasks. <<</Introduction>>> <<<Related Work>>> As for the Prague Dependency Treebank (PDT) BIBREF4, most of the previous works are non-neural systems, with the exception of BIBREF5, who hold the state of the art for Czech POS tagging and lemmatization, achieved with a recurrent neural network (RNN) using end-to-end trainable word embeddings and character-level word embeddings. Otherwise, Spoustová et al. (2009) BIBREF6 used an averaged perceptron for POS tagging. For parsing the PDT, Holan and Zabokrtský (2006) BIBREF7 and Novák and Žabokrtský (2007) BIBREF8 used a combination of non-neural parsing techniques. In the multilingual shared task CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9, raw text is processed and the POS tagging, lemmatization and dependency parsing are evaluated on the Universal Dependencies (UD) BIBREF10. Czech is one of the 57 evaluated languages.
Interestingly, all 26 participant systems employed artificial neural networks in some way. Of these, 3 participant systems used (a slightly modified variant of) the only newly presented contextualized embeddings, called ELMo BIBREF0, most notably one of the shared task winners BIBREF11. BERT and Flair were not available at the time. For the Czech NER, Straková et al. (2016) BIBREF12 use an artificial neural network with word- and character-level word embeddings to perform NER on the Czech Named Entity Corpus (CNEC) BIBREF13, BIBREF14, BIBREF15. <<</Related Work>>> <<<Datasets>>> <<<Prague Dependency Treebank 3.5>>> The Prague Dependency Treebank 3.5 BIBREF4 is a 2018 edition of the core Prague Dependency Treebank. The Prague Dependency Treebank 3.5 contains the same texts as the previous versions since 2.0, and is divided into train, dtest, and etest subparts, where dtest is used as a development set and etest as a test set. The dataset consists of several layers – the morphological m-layer is the largest and contains morphological annotations (POS tags and lemmas), the analytical a-layer contains labeled dependency trees, and the t-layer is the smallest and contains tectogrammatical trees. The statistics of PDT 3.5 sizes are presented in Table TABREF7. A detailed description of the morphological system can be found in BIBREF16, and a specification of the syntactic annotations has been presented in BIBREF17. We note that in PDT, lemmas with the same word form are disambiguated using a number suffix – for example, English lemmas for the word forms can (noun) and can (verb) would be annotated as can-1 and can-2. In evaluation, we compute: POS tagging accuracy, lemmatization accuracy, unlabeled attachment score (UAS), and labeled attachment score (LAS). <<</Prague Dependency Treebank 3.5>>> <<<Universal Dependencies>>> The Universal Dependencies project BIBREF10 seeks to develop cross-linguistically consistent treebank annotation of morphology and syntax for many languages. We evaluate the Czech PDT treebank of UD 2.3 BIBREF18, which is an automated conversion of the PDT 3.5 a-layer to the Universal Dependencies annotation. The original POS tags are used to generate UPOS (universal POS tags), XPOS (language-specific POS tags, in this case the original PDT tags), and Feats (universal morphological features). The UD lemmas are the raw textual lemmas, so the discriminative numeric suffix of PDT is dropped. The dependency trees are converted according to the UD guidelines, adapting both the unlabeled trees and the dependency labels. To compute the evaluation scores, we use the official CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9 evaluation script, which produces the following metrics: UPOS – universal POS tags accuracy, XPOS – language-specific POS tags accuracy, UFeats – universal subset of morphological features accuracy, Lemmas – lemmatization accuracy, UAS – unlabeled attachment score, LAS – labeled attachment score, MLAS – morphology-aware LAS, and BLEX – bi-lexical dependency score. <<</Universal Dependencies>>> <<<Czech Named Entity Corpus>>> The Czech Named Entity Corpus 1.1 BIBREF13, BIBREF14 is a corpus of $5\,868$ Czech sentences with manually annotated $33\,662$ Czech named entities, classified according to a two-level hierarchy of 62 named entities.
The Czech Named Entity Corpus 2.0 BIBREF15 contains $8\,993$ Czech sentences with manually annotated $35\,220$ Czech named entities, classified according to a two-level hierarchy of 46 named entities. We evaluate the NER task with the official CNEC evaluation script. Similarly to previous literature BIBREF13, BIBREF12 etc., the script only evaluates the first round annotation classes for the CNEC 1.1. For the CNEC 2.0, the script evaluates all annotated classes. <<</Czech Named Entity Corpus>>> <<</Datasets>>> <<<Neural Architectures>>> All our neural architectures are recurrent neural networks (RNNs). The POS tagging, lemmatization and dependency parsing are performed with UDPipe 2.0 (Section SECREF16) and NER is performed with our new sequence-to-sequence model (Section SECREF36). <<<POS Tagging, Lemmatization, and Dependency Parsing>>> We perform POS tagging, lemmatization and dependency parsing using UDPipe 2.0 BIBREF19, one of the three winning systems of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9 and an overall winner of The 2018 Shared Task on Extrinsic Parser Evaluation BIBREF20. An overview of this architecture is presented in Figure FIGREF17 and the full details of the architecture and the training procedure are available in BIBREF19. <<<POS Tagging and Lemmatization>>> The tagger employs a standard bi-LSTM architecture. After embedding input words, three bidirectional LSTM BIBREF21 layers are performed, followed by softmax output layers for POS tags and lemmas. While a classification output layer is natural for POS tags, we also apply it to lemmatization and generate lemmas by classifying the input words into lemma generation rules, therefore considering lemmatization as another tagging task. We construct a lemma generation rule from a given form and lemma as follows: We start by finding the longest continuous substring of the form and the lemma. If it is empty, we use the lemma itself as the class. If there is a common substring of the form and the lemma, we compute the shortest edit script converting the prefix of the form into the prefix of the lemma, and the shortest edit script converting the suffix of the form to the suffix of the lemma. The edit scripts permit the operations delete_current_char and insert_char(c). All above operations are performed case insensitively. To indicate correct casing of the lemma, we consider the lemma to be a concatenation of segments, where each segment is composed of either a sequence of lowercase characters, or a sequence of uppercase characters. We represent the lemma casing by encoding the beginning of every such segment, where the offsets in the first half of the lemma are computed relatively to the start of the lemma, and the offsets in the second half of the lemma are computed relatively to the end of the lemma. <<</POS Tagging and Lemmatization>>> <<<Dependency Parsing>>> The dependency parsing is again predicted using the UDPipe 2.0 architecture. After embedding input words, three bidirectional LSTM BIBREF21 layers are again performed, followed by a biaffine attention layer BIBREF22 producing labeled dependency trees. In our evaluation we do not utilize gold POS tags and lemmas on the test set for dependency parsing. Instead, we consider three ways of employing them during parsing: not using them at all; adding predicted POS tags and lemmas on input; or performing joint training of POS tags, lemmatization, and dependency parsing.
In this case, we share the first two bidirectional LSTM layers between the tagger and the parser. <<</Dependency Parsing>>> <<<Input Embeddings>>> In our baseline model, we use end-to-end word embeddings and also character-level word embeddings (bidirectional GRUs, BIBREF23, BIBREF24, BIBREF25 of dimension 256) trained specifically for the task. Our architecture can optionally employ the following additional inputs: pretrained word embeddings (WE): For the PDT experiments, we generate the word embeddings with word2vec on a concatenation of large raw Czech corpora available from the LINDAT/CLARIN repository. For UD Czech, we use FastText word embeddings BIBREF27 of dimension 300, which we pretrain on Czech Wikipedia using segmentation and tokenization trained from the UD data. BERT BIBREF1: Pretrained contextual word embeddings of dimension 768 from the Base model. We average the last four layers of the BERT model to produce the embeddings. Because BERT utilizes word pieces, we decompose UD words into appropriate subwords and then average the generated embeddings over subwords belonging to the same word. Flair BIBREF2: Pretrained contextual word embeddings of dimension 4096. <<</Input Embeddings>>> <<<POS Tags and Lemmas Decoding>>> Optionally, we employ the morphological dictionary MorfFlex BIBREF28 during decoding. If the morphological dictionary is used, it may produce analyses for an input word as (POS tag, lemma) pairs. If any are generated, we choose the pair with maximum likelihood given by both the POS tag and lemmatization model. <<</POS Tags and Lemmas Decoding>>> <<</POS Tagging, Lemmatization, and Dependency Parsing>>> <<<Named Entity Recognition>>> We use a novel approach BIBREF29 for nested named entity recognition (NER) to capture the nested entities in the Czech Named Entity Corpus. The nested entities are encoded in a sequence and the problem of nested NER is then viewed as a sequence-to-sequence (seq2seq) problem, in which the input sequence consists of the input tokens (forms) and the output sequence of the linearized entity labels. The system is an encoder-decoder architecture. The encoder is a bi-directional LSTM and the decoder is an LSTM. The encoded labels are predicted one by one by the decoder, until the decoder outputs the "<eow>" (end of word) label and moves to the next token. We use hard attention on the word whose label(s) is being predicted. We train the network using the lazy variant of the Adam optimizer BIBREF30, which only updates accumulators for variables that appear in the current batch, with parameters $\beta _1=0.9$ and $\beta _2=0.98$. We use mini-batches of size 8. As regularization, we apply dropout with rate $0.5$, and the word dropout replaces $20\%$ of words by the unknown token to force the network to rely more on context. We did not perform any complex hyperparameter search. In this model, we use the following word- and character-level word embeddings: pretrained word embeddings: We use the FastText BIBREF27 word embeddings of dimension 300 from the publicly available Czech model. end-to-end word embeddings: We embed the input forms and lemmas (256 dimensions) and POS tags (one-hot). end-to-end character-level word embeddings: We use bidirectional GRUs BIBREF23, BIBREF24 of dimension 128 in line with BIBREF25: we represent every Unicode character with a vector of dimension 128, and concatenate GRU outputs for forward and reversed word characters.
Optionally, we add the BERT BIBREF1 and the Flair BIBREF2 contextualized embeddings in the same way as in UDPipe 2.0 (Section SECREF16). <<</Named Entity Recognition>>> <<</Neural Architectures>>> <<<Results>>> <<<POS Tagging and Lemmatization on PDT 3.5>>> The POS tagging and lemmatization results are presented in Table TABREF44. The word2vec word embeddings (WE) considerably increase performance compared to the baseline, especially in POS tagging. When only Flair embeddings are added to the baseline, we also observe an improvement, but not as high. We hypothesise that the lower performance (in contrast with the results reported in BIBREF2) is caused by the size of the training data, because we train the word2vec WE on a considerably larger dataset than the Czech Flair model. However, when WE and Flair embeddings are combined, performance moderately increases, demonstrating that the two embedding methods produce at least partially complementary representations. The BERT embeddings alone bring the highest improvement in performance. Furthermore, their combination with WE or Flair again yields a performance increase. The best results are achieved by exploiting all three embedding methods, substantially exceeding state-of-the-art results. Utilization of the morphological dictionary improves prediction accuracy. However, as the performance of a model itself increases, the gains obtained by the morphological dictionary diminish – for a model without any pretrained embeddings, the morphological dictionary improves POS tagging and lemmatization by $0.43\%$ and $0.45\%$, respectively, while the best performing model gains only $0.11\%$ and $0.23\%$. <<</POS Tagging and Lemmatization on PDT 3.5>>> <<<Dependency Parsing on PDT 3.5>>> The evaluation of the contextualized embedding methods, as well as various ways of POS tag utilization, is presented in Table TABREF44. Without POS tags and lemmas, the Flair embeddings bring only a slight improvement in dependency parsing when added to WE. In contrast, employing BERT embeddings results in substantial gains, increasing UAS and LAS by 1.6% and 2.1%. A combination of BERT and Flair embeddings does not result in any performance improvement, demonstrating that the BERT syntactic representations encompass the Flair embeddings. When introducing POS tags and lemmas predicted by the best model from Section SECREF43 as inputs for dependency parsing, the performance increases only slightly. A better way of exploiting POS tags and lemmas is achieved in a joint model, which predicts POS tags, lemmas, and dependency trees simultaneously. Again, BERT embeddings bring significant improvements, but in contrast to syntax parsing only, adding Flair embeddings to BERT results in a moderate gain – we hypothesise that the increase is due to the complementary morphological information present in Flair embeddings (cf. Section SECREF43). Note that the joint model achieves better parsing accuracy than the one given gold POS tags and lemmas on input. However, the POS tags and lemmas predicted by the joint model are of slightly lower quality compared to a standalone tagger with the best configuration from Section SECREF43. Table TABREF44 compares our best model with state-of-the-art results on PDT 2.0 (note that some of the related work used only a subset of PDT 2.0 and/or utilized gold morphological annotation). To the best of our knowledge, research on PDT parsing was performed mostly in the first decade of this century, therefore even our baseline model substantially surpasses previous works.
Our best model with contextualized embeddings achieves nearly 50% error reduction both in UAS and LAS. <<</Dependency Parsing on PDT 3.5>>> <<<POS Tagging, Lemmatization and Dependency Parsing on Universal Dependencies>>> Table TABREF47 shows the performance of analyzed embedding methods in a joint model performing POS tagging, lemmatization, and dependency parsing on Czech PDT UD 2.3 treebank. This treebank is derived from PDT 3.5 a-layer, with original POS tags kept in XPOS, and the dependency trees and lemmas modified according to UD guidelines. We observe that the word2vec WEs perform similarly to Flair embeddings in this setting. Our hypothesis is that the word2vec WEs performance loss (compared to WEs in Section SECREF43) is caused by using a considerably smaller raw corpus to pretrain the WEs (Czech Wikipedia with 785M words, compared to 4G words used in Section SECREF43), due to licensing reasons. BERT embeddings once more deliver the highest improvement, especially in dependency parsing, and our best model employs all three embedding methods. In the previous ablation experiments, we used the gold segmentation and tokenization in the Czech PDT UD 2.3 treebank. For comparison with state of the art, Czech PDT UD 2.2 treebank without gold segmentation and tokenization is used in evaluation, according to the CoNLL 2018 shared task training and evaluation protocol. Our system reuses segmentation and tokenization produced by UDPipe 2.0 in the CoNLL 2018 shared task and surpasses previous works substantially in all metrics (bottom part of Table TABREF47). Comparing the results with a joint tagging and parsing PDT 3.5 model from Table TABREF7, we observe that the XPOS results are nearly identical as expected. Lemmatization on the UD treebank is performed without the discriminative numeric suffixes (see Section SECREF3) and therefore reaches better performance. Both UAS and LAS are also better on the UD treebank, which we assume is caused by the different annotation scheme. <<</POS Tagging, Lemmatization and Dependency Parsing on Universal Dependencies>>> <<</Results>>> <<<Conclusion>>> We have presented an evaluation of two contextualized embeddings methods, namely BERT and Flair. By utilizing these embeddings as input to deep neural networks, we have achieved state-of-the-art results in several Czech text processing tasks, namely in POS tagging, lemmatization, dependency parsing and named entity recognition. <<</Conclusion>>> <<</Title>>>
{ "references": [ "large raw Czech corpora available from the LINDAT/CLARIN repository,Czech Wikipedia" ], "type": "extractive" }
1909.12642
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What is the performance of the model for the German sub-task A? Context: <<<Title>>> HateMonitors: Language Agnostic Abuse Detection in Social Media <<<Abstract>>> Reducing hateful and offensive content in online social media poses a dual problem for the moderators. On the one hand, rigid censorship on social media cannot be imposed. On the other, the free flow of such content cannot be allowed. Hence, we require an efficient abusive language detection system to detect such harmful content in social media. In this paper, we present our machine learning model, HateMonitor, developed for Hate Speech and Offensive Content Identification in Indo-European Languages (HASOC), a shared task at FIRE 2019. We have used a Gradient Boosting model, along with BERT and LASER embeddings, to make the system language agnostic. Our model came first in the German sub-task A. We have also made our model public at this https URL. <<</Abstract>>> <<<Introduction>>> In social media, abusive language denotes a text which contains any form of unacceptable language in a post or a comment. Abusive language can be divided into hate speech, offensive language and profanity. Hate speech is a derogatory comment that hurts an entire group in terms of ethnicity, race or gender. Offensive language is similar to a derogatory comment, but it is targeted towards an individual. Profanity refers to any use of unacceptable language without a specific target. While profanity is the least threatening, hate speech has the most detrimental effect on society. Social media moderators are having a hard time combating the rampant spread of hate speech, as it is closely related to the other forms of abusive language. The evolution of new slang and multilingualism further add to the complexity. Recently, there has been a sharp rise in hate speech-related incidents in India, the lynchings being a clear indication BIBREF1. Arun et al. BIBREF1 suggest that hate speech in India is very complicated, as people are not directly spreading hate but are spreading misinformation against a particular community. Hence, it has become imperative to study hate speech in Indian languages. For the first time, a shared task on abusive content detection has been released for the Hindi language at HASOC 2019. This will fuel hate speech and offensive language research for Indian languages. The inclusion of datasets for the English and German languages will give a performance comparison for the detection of abusive content in high- and low-resource languages. In this paper, we focus on the detection of multilingual hate speech written in Hindi, English, and German, and describe our submission (HateMonitors) for the HASOC competition at FIRE 2019. Our system concatenates two types of sentence embeddings to represent each tweet and uses machine learning models for classification. <<</Introduction>>> <<<Related works>>> Analyzing abusive language in social media is a daunting task. Waseem et al. BIBREF2 categorize abusive language into two sub-classes – hate speech and offensive language. In their analysis, classifying abusive language into these two subtypes is more challenging due to the correlation between offensive language and hate speech BIBREF3. Nobata et al. BIBREF4 use predefined language elements and embeddings to train a regression model.
With the introduction of better classification models BIBREF5, BIBREF6 and newer features BIBREF7, BIBREF3, BIBREF8, the research in hate and offensive speech detection has gained momentum. Silva et al. BIBREF9 performed a large scale study to understand the target of such hate speech on two social media platforms: Twitter and Whisper. These target could be the Refugees and Immigrants BIBREF10, Jews BIBREF11, BIBREF12 and Muslims BIBREF13, BIBREF14. People could become the target of hate speech based on Nationality BIBREF15, sex BIBREF16, BIBREF17, and gender BIBREF18, BIBREF19 as well. Public expressions of hate speech affects the devaluation of minority members BIBREF20, the exclusion of minorities from the society BIBREF21, and tend to diffuse through the network at a faster rate BIBREF22. One of the key issues with the current state of the hate and offensive language research is that the majority of the research is dedicated to the English language on BIBREF23. Few researchers have tried to solve the problem of abusive language in other languages BIBREF10, BIBREF24, but the works are mostly monolingual. Any online social media platform contains people of different ethnicity, which results in the spread of information in multiple languages. Hence, a robust classifier is needed, which can deal with abusive language in the multilingual domain. Several shared tasks like HASOC BIBREF0, HaSpeeDe BIBREF25, GermEval BIBREF26, AMI BIBREF27, HatEval BIBREF28 have focused on detection of abusive text in multiple languages recently. <<</Related works>>> <<<Dataset and Task description>>> The dataset at HASOC 2019 were given in three languages: Hindi, English, and German. Dataset in Hindi and English had three subtasks each, while German had only two subtasks. We participated in all the tasks provided by the organisers and decided to develop a single model that would be language agnostic. We used the same model architecture for all the three languages. <<<Datasets>>> We present the statistics for HASOC dataset in Table TABREF5. From the table, we can observe that the dataset for the German language is highly unbalanced, English and Hindi are more or less balanced for sub-task A. For sub-task B German dataset is balanced but others are unbalanced. For sub-task C both the datasets are highly unbalanced. <<</Datasets>>> <<<Tasks>>> Sub-task A consists of building a binary classification model which can predict if a given piece of text is hateful and offensive (HOF) or not (NOT). A data point is annotated as HOF if it contains any form of non-acceptable language such as hate speech, aggression, profanity. Each of the three languages had this subtask. Sub-task B consists of building a multi-class classification model which can predict the three different classes in the data points annotated as HOF: Hate speech (HATE), Offensive language (OFFN), and Profane (PRFN). Again all three languages have this sub-task. Sub-task C consists of building a binary classification model which can predict the type of offense: Targeted (TIN) and Untargeted (UNT). Sub-task C was not conducted for the German dataset. <<</Tasks>>> <<</Dataset and Task description>>> <<<System Description>>> In this section, we will explain the details about our system, which comprises of two sub-parts- feature generation and model selection. Figure FIGREF15 shows the architecture of our system. <<<Feature Generation>>> <<<Preprocessing:>>> We preprocess the tweets before performing the feature extraction. 
The following steps were followed: We remove all the URLs. Convert text to lowercase. This step was not applied to the Hindi language since Devanagari script does not have lowercase and uppercase characters. We did not normalize the mentions in the text as they could potentially reveal important information for the embeddings encoders. Any numerical figure was normalized to a string `number'. We did not remove any punctuation and stop-words since the context of the sentence might get lost in such a process. Since we are using sentence embedding, it is essential to keep the context of the sentence intact. <<</Preprocessing:>>> <<<Feature vectors:>>> The preprocessed posts are then used to generate features for the classifier. For our model, we decided to generate two types of feature vector: BERT Embeddings and LASER Embeddings. For each post, we generate the BERT and LASER Embedding, which are then concatenated and fed as input to the final classifier. Multilingual BERT embeddings: Bidirectional Encoder Representations from Transformers(BERT) BIBREF29 has played a key role in the advancement of natural language processing domain (NLP). BERT is a language model which is trained to predict the masked words in a sentence. To generate the sentence embedding for a post, we take the mean of the last 11 layers (out of 12) to get a sentence vector with length of 768. LASER embeddings: Researchers at Facebook released a language agnostic sentence embeddings representations (LASER) BIBREF30, where the model jointly learns on 93 languages. The model takes the sentence as input and produces a vector representation of length 1024. The model is able to handle code mixing as well BIBREF31. We pass the preprocessed sentences through each of these embedding models and got two separate sentence representation. Further, we concatenate the embeddings into one single feature vector of length 1792, which is then passed to the final classification model. <<</Feature vectors:>>> <<</Feature Generation>>> <<<Our Model>>> The amount of data in each category was insufficient to train a deep learning model. Building such deep models would lead to overfitting. So, we resorted to using simpler models such as SVM and Gradient boosted trees. Gradient boosted trees BIBREF32 are often the choice for systems where features are pre-extracted from the raw data. In the category of gradient boosted trees, Light Gradient Boosting Machine (LGBM) BIBREF33 is considered one of the most efficient in terms of memory footprint. Moreover, it has been part of winning solutions of many competition . Hence, we used LGBM as model for the downstream tasks in this competition. <<</Our Model>>> <<</System Description>>> <<<Results>>> The performance of our models across different languages for sub-task A are shown in table TABREF19. Our model got the first position in the German sub-task with a macro F1 score of 0.62. The results of sub-task B and sub-task C is shown in table TABREF20 and TABREF21 respectively. <<</Results>>> <<<Discussion>>> In the results of subtask A, models are mainly affected by imbalance of the dataset. The training dataset of Hindi dataset was more balanced than English or German dataset. Hence, the results were around 0.78. As the dataset in German language was highly imbalanced, the results drops to 0.62. In subtask B, the highest F1 score reached was by the profane class for each language in table TABREF20. 
The model got confused between OFFN, HATE and PRFN labels which suggests that these models are not able to capture the context in the sentence. The subtask C was again a case of imbalanced dataset as targeted(TIN) label gets the highest F1 score in table TABREF21. <<</Discussion>>> <<<Conclusion>>> In this shared task, we experimented with zero-shot transfer learning on abusive text detection with pre-trained BERT and LASER sentence embeddings. We use an LGBM model to train the embeddings to perform downstream task. Our model for German language got the first position. The results provided a strong baseline for further research in multilingual hate speech. We have also made the models public for use by other researchers. <<</Conclusion>>> <<</Title>>>
{ "references": [ "macro F1 score of 0.62" ], "type": "extractive" }
1909.12642
Please answer the following question with yes or no based on the given text. You only need to output 'Yes' or 'No' without any additional explanation. Question: Is the model tested for language identification? Context: <<<Title>>> HateMonitors: Language Agnostic Abuse Detection in Social Media <<<Abstract>>> Reducing hateful and offensive content in online social media pose a dual problem for the moderators. On the one hand, rigid censorship on social media cannot be imposed. On the other, the free flow of such content cannot be allowed. Hence, we require efficient abusive language detection system to detect such harmful content in social media. In this paper, we present our machine learning model, HateMonitor, developed for Hate Speech and Offensive Content Identification in Indo-European Languages (HASOC), a shared task at FIRE 2019. We have used a Gradient Boosting model, along with BERT and LASER embeddings, to make the system language agnostic. Our model came at First position for the German sub-task A. We have also made our model public at this https URL . <<</Abstract>>> <<<Introduction>>> In social media, abusive language denotes a text which contains any form of unacceptable language in a post or a comment. Abusive language can be divided into hate speech, offensive language and profanity. Hate speech is a derogatory comment that hurts an entire group in terms of ethnicity, race or gender. Offensive language is similar to derogatory comment, but it is targeted towards an individual. Profanity refers to any use of unacceptable language without a specific target. While profanity is the least threatening, hate speech has the most detrimental effect on the society. Social media moderators are having a hard time in combating the rampant spread of hate speech as it is closely related to the other forms of abusive language. The evolution of new slangs and multilingualism, further adding to the complexity. Recently, there has been a sharp rise in hate speech related incidents in India, the lynchings being the clear indication BIBREF1. Arun et al. BIBREF1 suggests that hate speech in India is very complicated as people are not directly spreading hate but are spreading misinformation against a particular community. Hence, it has become imperative to study hate speech in Indian language. For the first time, a shared task on abusive content detection has been released for Hindi language at HASOC 2019. This will fuel the hate speech and offensive language research for Indian languages. The inclusion of datasets for English and German language will give a performance comparison for detection of abusive content in high and low resource language. In this paper, we focus on the detection of multilingual hate speech detection that are written in Hindi, English, and German and describe our submission (HateMonitors) for HASOC at FIRE 2019 competition. Our system concatenates two types of sentence embeddings to represent each tweet and use machine learning models for classification. <<</Introduction>>> <<<Related works>>> Analyzing abusive language in social media is a daunting task. Waseem et al. BIBREF2 categorizes abusive language into two sub-classes – hate speech and offensive language. In their analysis of abusive language, Classifying abusive language into these two subtypes is more challenging due to the correlation between offensive language and hate speech BIBREF3. Nobata et al. BIBREF4 uses predefined language element and embeddings to train a regression model. 
With the introduction of better classification models BIBREF5, BIBREF6 and newer features BIBREF7, BIBREF3, BIBREF8, the research in hate and offensive speech detection has gained momentum. Silva et al. BIBREF9 performed a large scale study to understand the target of such hate speech on two social media platforms: Twitter and Whisper. These target could be the Refugees and Immigrants BIBREF10, Jews BIBREF11, BIBREF12 and Muslims BIBREF13, BIBREF14. People could become the target of hate speech based on Nationality BIBREF15, sex BIBREF16, BIBREF17, and gender BIBREF18, BIBREF19 as well. Public expressions of hate speech affects the devaluation of minority members BIBREF20, the exclusion of minorities from the society BIBREF21, and tend to diffuse through the network at a faster rate BIBREF22. One of the key issues with the current state of the hate and offensive language research is that the majority of the research is dedicated to the English language on BIBREF23. Few researchers have tried to solve the problem of abusive language in other languages BIBREF10, BIBREF24, but the works are mostly monolingual. Any online social media platform contains people of different ethnicity, which results in the spread of information in multiple languages. Hence, a robust classifier is needed, which can deal with abusive language in the multilingual domain. Several shared tasks like HASOC BIBREF0, HaSpeeDe BIBREF25, GermEval BIBREF26, AMI BIBREF27, HatEval BIBREF28 have focused on detection of abusive text in multiple languages recently. <<</Related works>>> <<<Dataset and Task description>>> The dataset at HASOC 2019 were given in three languages: Hindi, English, and German. Dataset in Hindi and English had three subtasks each, while German had only two subtasks. We participated in all the tasks provided by the organisers and decided to develop a single model that would be language agnostic. We used the same model architecture for all the three languages. <<<Datasets>>> We present the statistics for HASOC dataset in Table TABREF5. From the table, we can observe that the dataset for the German language is highly unbalanced, English and Hindi are more or less balanced for sub-task A. For sub-task B German dataset is balanced but others are unbalanced. For sub-task C both the datasets are highly unbalanced. <<</Datasets>>> <<<Tasks>>> Sub-task A consists of building a binary classification model which can predict if a given piece of text is hateful and offensive (HOF) or not (NOT). A data point is annotated as HOF if it contains any form of non-acceptable language such as hate speech, aggression, profanity. Each of the three languages had this subtask. Sub-task B consists of building a multi-class classification model which can predict the three different classes in the data points annotated as HOF: Hate speech (HATE), Offensive language (OFFN), and Profane (PRFN). Again all three languages have this sub-task. Sub-task C consists of building a binary classification model which can predict the type of offense: Targeted (TIN) and Untargeted (UNT). Sub-task C was not conducted for the German dataset. <<</Tasks>>> <<</Dataset and Task description>>> <<<System Description>>> In this section, we will explain the details about our system, which comprises of two sub-parts- feature generation and model selection. Figure FIGREF15 shows the architecture of our system. <<<Feature Generation>>> <<<Preprocessing:>>> We preprocess the tweets before performing the feature extraction. 
The following steps were followed: We remove all the URLs. Convert text to lowercase. This step was not applied to the Hindi language since Devanagari script does not have lowercase and uppercase characters. We did not normalize the mentions in the text as they could potentially reveal important information for the embeddings encoders. Any numerical figure was normalized to a string `number'. We did not remove any punctuation and stop-words since the context of the sentence might get lost in such a process. Since we are using sentence embedding, it is essential to keep the context of the sentence intact. <<</Preprocessing:>>> <<<Feature vectors:>>> The preprocessed posts are then used to generate features for the classifier. For our model, we decided to generate two types of feature vector: BERT Embeddings and LASER Embeddings. For each post, we generate the BERT and LASER Embedding, which are then concatenated and fed as input to the final classifier. Multilingual BERT embeddings: Bidirectional Encoder Representations from Transformers(BERT) BIBREF29 has played a key role in the advancement of natural language processing domain (NLP). BERT is a language model which is trained to predict the masked words in a sentence. To generate the sentence embedding for a post, we take the mean of the last 11 layers (out of 12) to get a sentence vector with length of 768. LASER embeddings: Researchers at Facebook released a language agnostic sentence embeddings representations (LASER) BIBREF30, where the model jointly learns on 93 languages. The model takes the sentence as input and produces a vector representation of length 1024. The model is able to handle code mixing as well BIBREF31. We pass the preprocessed sentences through each of these embedding models and got two separate sentence representation. Further, we concatenate the embeddings into one single feature vector of length 1792, which is then passed to the final classification model. <<</Feature vectors:>>> <<</Feature Generation>>> <<<Our Model>>> The amount of data in each category was insufficient to train a deep learning model. Building such deep models would lead to overfitting. So, we resorted to using simpler models such as SVM and Gradient boosted trees. Gradient boosted trees BIBREF32 are often the choice for systems where features are pre-extracted from the raw data. In the category of gradient boosted trees, Light Gradient Boosting Machine (LGBM) BIBREF33 is considered one of the most efficient in terms of memory footprint. Moreover, it has been part of winning solutions of many competition . Hence, we used LGBM as model for the downstream tasks in this competition. <<</Our Model>>> <<</System Description>>> <<<Results>>> The performance of our models across different languages for sub-task A are shown in table TABREF19. Our model got the first position in the German sub-task with a macro F1 score of 0.62. The results of sub-task B and sub-task C is shown in table TABREF20 and TABREF21 respectively. <<</Results>>> <<<Discussion>>> In the results of subtask A, models are mainly affected by imbalance of the dataset. The training dataset of Hindi dataset was more balanced than English or German dataset. Hence, the results were around 0.78. As the dataset in German language was highly imbalanced, the results drops to 0.62. In subtask B, the highest F1 score reached was by the profane class for each language in table TABREF20. 
The model got confused between OFFN, HATE and PRFN labels which suggests that these models are not able to capture the context in the sentence. The subtask C was again a case of imbalanced dataset as targeted(TIN) label gets the highest F1 score in table TABREF21. <<</Discussion>>> <<<Conclusion>>> In this shared task, we experimented with zero-shot transfer learning on abusive text detection with pre-trained BERT and LASER sentence embeddings. We use an LGBM model to train the embeddings to perform downstream task. Our model for German language got the first position. The results provided a strong baseline for further research in multilingual hate speech. We have also made the models public for use by other researchers. <<</Conclusion>>> <<</Title>>>
{ "references": [ "No" ], "type": "boolean" }
1909.12642
Please answer the following question with yes or no based on the given text. You only need to output 'Yes' or 'No' without any additional explanation. Question: Is the model compared to a baseline model? Context: <<<Title>>> HateMonitors: Language Agnostic Abuse Detection in Social Media <<<Abstract>>> Reducing hateful and offensive content in online social media pose a dual problem for the moderators. On the one hand, rigid censorship on social media cannot be imposed. On the other, the free flow of such content cannot be allowed. Hence, we require efficient abusive language detection system to detect such harmful content in social media. In this paper, we present our machine learning model, HateMonitor, developed for Hate Speech and Offensive Content Identification in Indo-European Languages (HASOC), a shared task at FIRE 2019. We have used a Gradient Boosting model, along with BERT and LASER embeddings, to make the system language agnostic. Our model came at First position for the German sub-task A. We have also made our model public at this https URL . <<</Abstract>>> <<<Introduction>>> In social media, abusive language denotes a text which contains any form of unacceptable language in a post or a comment. Abusive language can be divided into hate speech, offensive language and profanity. Hate speech is a derogatory comment that hurts an entire group in terms of ethnicity, race or gender. Offensive language is similar to derogatory comment, but it is targeted towards an individual. Profanity refers to any use of unacceptable language without a specific target. While profanity is the least threatening, hate speech has the most detrimental effect on the society. Social media moderators are having a hard time in combating the rampant spread of hate speech as it is closely related to the other forms of abusive language. The evolution of new slangs and multilingualism, further adding to the complexity. Recently, there has been a sharp rise in hate speech related incidents in India, the lynchings being the clear indication BIBREF1. Arun et al. BIBREF1 suggests that hate speech in India is very complicated as people are not directly spreading hate but are spreading misinformation against a particular community. Hence, it has become imperative to study hate speech in Indian language. For the first time, a shared task on abusive content detection has been released for Hindi language at HASOC 2019. This will fuel the hate speech and offensive language research for Indian languages. The inclusion of datasets for English and German language will give a performance comparison for detection of abusive content in high and low resource language. In this paper, we focus on the detection of multilingual hate speech detection that are written in Hindi, English, and German and describe our submission (HateMonitors) for HASOC at FIRE 2019 competition. Our system concatenates two types of sentence embeddings to represent each tweet and use machine learning models for classification. <<</Introduction>>> <<<Related works>>> Analyzing abusive language in social media is a daunting task. Waseem et al. BIBREF2 categorizes abusive language into two sub-classes – hate speech and offensive language. In their analysis of abusive language, Classifying abusive language into these two subtypes is more challenging due to the correlation between offensive language and hate speech BIBREF3. Nobata et al. BIBREF4 uses predefined language element and embeddings to train a regression model. 
With the introduction of better classification models BIBREF5, BIBREF6 and newer features BIBREF7, BIBREF3, BIBREF8, the research in hate and offensive speech detection has gained momentum. Silva et al. BIBREF9 performed a large scale study to understand the target of such hate speech on two social media platforms: Twitter and Whisper. These target could be the Refugees and Immigrants BIBREF10, Jews BIBREF11, BIBREF12 and Muslims BIBREF13, BIBREF14. People could become the target of hate speech based on Nationality BIBREF15, sex BIBREF16, BIBREF17, and gender BIBREF18, BIBREF19 as well. Public expressions of hate speech affects the devaluation of minority members BIBREF20, the exclusion of minorities from the society BIBREF21, and tend to diffuse through the network at a faster rate BIBREF22. One of the key issues with the current state of the hate and offensive language research is that the majority of the research is dedicated to the English language on BIBREF23. Few researchers have tried to solve the problem of abusive language in other languages BIBREF10, BIBREF24, but the works are mostly monolingual. Any online social media platform contains people of different ethnicity, which results in the spread of information in multiple languages. Hence, a robust classifier is needed, which can deal with abusive language in the multilingual domain. Several shared tasks like HASOC BIBREF0, HaSpeeDe BIBREF25, GermEval BIBREF26, AMI BIBREF27, HatEval BIBREF28 have focused on detection of abusive text in multiple languages recently. <<</Related works>>> <<<Dataset and Task description>>> The dataset at HASOC 2019 were given in three languages: Hindi, English, and German. Dataset in Hindi and English had three subtasks each, while German had only two subtasks. We participated in all the tasks provided by the organisers and decided to develop a single model that would be language agnostic. We used the same model architecture for all the three languages. <<<Datasets>>> We present the statistics for HASOC dataset in Table TABREF5. From the table, we can observe that the dataset for the German language is highly unbalanced, English and Hindi are more or less balanced for sub-task A. For sub-task B German dataset is balanced but others are unbalanced. For sub-task C both the datasets are highly unbalanced. <<</Datasets>>> <<<Tasks>>> Sub-task A consists of building a binary classification model which can predict if a given piece of text is hateful and offensive (HOF) or not (NOT). A data point is annotated as HOF if it contains any form of non-acceptable language such as hate speech, aggression, profanity. Each of the three languages had this subtask. Sub-task B consists of building a multi-class classification model which can predict the three different classes in the data points annotated as HOF: Hate speech (HATE), Offensive language (OFFN), and Profane (PRFN). Again all three languages have this sub-task. Sub-task C consists of building a binary classification model which can predict the type of offense: Targeted (TIN) and Untargeted (UNT). Sub-task C was not conducted for the German dataset. <<</Tasks>>> <<</Dataset and Task description>>> <<<System Description>>> In this section, we will explain the details about our system, which comprises of two sub-parts- feature generation and model selection. Figure FIGREF15 shows the architecture of our system. <<<Feature Generation>>> <<<Preprocessing:>>> We preprocess the tweets before performing the feature extraction. 
The following steps were followed: We remove all the URLs. Convert text to lowercase. This step was not applied to the Hindi language since Devanagari script does not have lowercase and uppercase characters. We did not normalize the mentions in the text as they could potentially reveal important information for the embeddings encoders. Any numerical figure was normalized to a string `number'. We did not remove any punctuation and stop-words since the context of the sentence might get lost in such a process. Since we are using sentence embedding, it is essential to keep the context of the sentence intact. <<</Preprocessing:>>> <<<Feature vectors:>>> The preprocessed posts are then used to generate features for the classifier. For our model, we decided to generate two types of feature vector: BERT Embeddings and LASER Embeddings. For each post, we generate the BERT and LASER Embedding, which are then concatenated and fed as input to the final classifier. Multilingual BERT embeddings: Bidirectional Encoder Representations from Transformers(BERT) BIBREF29 has played a key role in the advancement of natural language processing domain (NLP). BERT is a language model which is trained to predict the masked words in a sentence. To generate the sentence embedding for a post, we take the mean of the last 11 layers (out of 12) to get a sentence vector with length of 768. LASER embeddings: Researchers at Facebook released a language agnostic sentence embeddings representations (LASER) BIBREF30, where the model jointly learns on 93 languages. The model takes the sentence as input and produces a vector representation of length 1024. The model is able to handle code mixing as well BIBREF31. We pass the preprocessed sentences through each of these embedding models and got two separate sentence representation. Further, we concatenate the embeddings into one single feature vector of length 1792, which is then passed to the final classification model. <<</Feature vectors:>>> <<</Feature Generation>>> <<<Our Model>>> The amount of data in each category was insufficient to train a deep learning model. Building such deep models would lead to overfitting. So, we resorted to using simpler models such as SVM and Gradient boosted trees. Gradient boosted trees BIBREF32 are often the choice for systems where features are pre-extracted from the raw data. In the category of gradient boosted trees, Light Gradient Boosting Machine (LGBM) BIBREF33 is considered one of the most efficient in terms of memory footprint. Moreover, it has been part of winning solutions of many competition . Hence, we used LGBM as model for the downstream tasks in this competition. <<</Our Model>>> <<</System Description>>> <<<Results>>> The performance of our models across different languages for sub-task A are shown in table TABREF19. Our model got the first position in the German sub-task with a macro F1 score of 0.62. The results of sub-task B and sub-task C is shown in table TABREF20 and TABREF21 respectively. <<</Results>>> <<<Discussion>>> In the results of subtask A, models are mainly affected by imbalance of the dataset. The training dataset of Hindi dataset was more balanced than English or German dataset. Hence, the results were around 0.78. As the dataset in German language was highly imbalanced, the results drops to 0.62. In subtask B, the highest F1 score reached was by the profane class for each language in table TABREF20. 
The model got confused between OFFN, HATE and PRFN labels which suggests that these models are not able to capture the context in the sentence. The subtask C was again a case of imbalanced dataset as targeted(TIN) label gets the highest F1 score in table TABREF21. <<</Discussion>>> <<<Conclusion>>> In this shared task, we experimented with zero-shot transfer learning on abusive text detection with pre-trained BERT and LASER sentence embeddings. We use an LGBM model to train the embeddings to perform downstream task. Our model for German language got the first position. The results provided a strong baseline for further research in multilingual hate speech. We have also made the models public for use by other researchers. <<</Conclusion>>> <<</Title>>>
{ "references": [ "No" ], "type": "boolean" }
2003.00639
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: How does framework automatically chooses different curricula at the evolving learning process according to the learning status of the neural dialogue generation model? Context: <<<Title>>> Learning from Easy to Complex: Adaptive Multi-curricula Learning for Neural Dialogue Generation <<<Abstract>>> Current state-of-the-art neural dialogue systems are mainly data-driven and are trained on human-generated responses. However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly. The noise and uneven complexity of query-response pairs impede the learning efficiency and effects of the neural dialogue generation models. What is more, so far, there are no unified dialogue complexity measurements, and the dialogue complexity embodies multiple aspects of attributes---specificity, repetitiveness, relevance, etc. Inspired by human behaviors of learning to converse, where children learn from easy dialogues to complex ones and dynamically adjust their learning progress, in this paper, we first analyze five dialogue attributes to measure the dialogue complexity in multiple perspectives on three publicly available corpora. Then, we propose an adaptive multi-curricula learning framework to schedule a committee of the organized curricula. The framework is established upon the reinforcement learning paradigm, which automatically chooses different curricula at the evolving learning process according to the learning status of the neural dialogue generation model. Extensive experiments conducted on five state-of-the-art models demonstrate its learning efficiency and effectiveness with respect to 13 automatic evaluation metrics and human judgments. <<</Abstract>>> <<<Introduction>>> Teaching machines to converse with humans naturally and engagingly is a fundamentally interesting and challenging problem in AI research. Many contemporary state-of-the-art approaches BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 for dialogue generation follow the data-driven paradigm: trained on a plethora of query-response pairs, the model attempts to mimic human conversations. As a data-driven approach, the quality of generated responses in neural dialogue generation heavily depends on the training data. As such, in order to train a robust and well-behaved model, most works obtain large-scale query-response pairs by crawling human-generated conversations from publicly available sources such as OpenSubtitles BIBREF7. However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly BIBREF8. Table TABREF1 shows samples drawn from OpenSubtitles BIBREF7, which contains millions of human-human conversations converted from movie transcripts. The response of the third sample “Yurakutei kikuhiko.” looks quite strange in terms of the given query, while the first sample is clearly easier to learn. The noise and uneven complexity of query-response pairs impede the learning efficiency and effects of the neural dialogue generation models. Babies learn to speak by first imitating easy and exact utterances repeatedly taught by their patient parents. As children grow up, they learn grade by grade, from simple conversations to more complex ones. 
Inspired by such human behaviors of learning to converse, in this paper, we introduce curriculum learning to bring the neural dialogue model with easy-to-complex learning curriculum, where the model first learns from easy conversations and then gradually manages more complicated dialogues. Nevertheless, organizing a curriculum with increasing difficulty faces insurmountable obstacles: 1) automatic evaluation of dialogue complexity is a non-trivial task. BIBREF9 defined the difficulty for the training examples with respect to the sentence length and word rarity in neural machine translation. BIBREF10 expressed the difficulty regarding the value of the objective function. So far, there is no unified approach in measuring dialogue complexity. 2) Unlike the single metric of complexity in other tasks, dialogue complexity embodies multiple aspects of attributes BIBREF11—the specificity and repetitiveness of the response, the relevance between the query and the response, etc. As such, in this paper, we study the dialogue distributions along five aspects of attributes to gather multiple perspectives on dialogue complexity, resulting with five curricula accordingly. Conventional curriculum learning organizes the training samples into one curriculum, whereas we employ multiple curricula for dialogue learning. Enlightened by the phenomenon that children usually adjust the learning focus of multiple curricula dynamically in order to acquire a good mark, we further propose an adaptive multi-curricula learning framework, established upon the reinforcement learning paradigm, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model. Detailed analysis and experiments demonstrate that the proposed framework effectively increases the learning efficiency and gains better performances on five state-of-the-art dialogue generation models regarding three publicly available conversational corpora. Code for this work is available on https://github.com/hengyicai/Adaptive_Multi-curricula_Learning_for_Dialog. <<</Introduction>>> <<<Curriculum Plausibility>>> Intuitively, a well-organized curriculum should provide the model learning with easy dialogues first, and then gradually increase the curriculum difficulty. However, currently, there is no unified approach for dialogue complexity evaluation, where the complexity involves multiple aspects of attributes. In this paper, we prepare the syllabus for dialogue learning with respect to five dialogue attributes. To ensure the universality and general applicability of the curriculum, we perform an in-depth investigation on three publicly available conversation corpora, PersonaChat BIBREF12, DailyDialog BIBREF13 and OpenSubtitles BIBREF7, consisting of 140 248, 66 594 and 358 668 real-life conversation samples, respectively. <<<Conversational Attributes>>> <<<Specificity>>> A notorious problem for neural dialogue generation model is that the model is prone to generate generic responses. The most unspecific responses are easy to learn, but are short and meaningless, while the most specific responses, consisting of too many rare words, are too difficult to learn, especially at the initial learning stage. Following BIBREF11, we measure the specificity of the response in terms of each word $w$ using Normalized Inverse Document Frequency (NIDF, ranging from 0 to 1): where $\text{IDF}(w)=\log {\frac{N_r}{N_w}}$. 
$N_r$ is the number of responses in the training set and $N_w$ is the number of those responses that contain $w$. $\text{idf}_{min}$ and $\text{idf}_{max}$ are the minimum and maximum IDFs, taken over all words in the vocabulary. The specificity of a response $r$ is measured as the mean NIDF of the words in $r$. <<</Specificity>>> <<<Repetitiveness>>> Repetitive responses are easy to generate in current auto-regressive response decoding, where response generation loops frequently, whereas diverse and informative responses are much more complicated for neural dialogue generation. We measure the repetitiveness of a response $r$ as: where $I(\cdot )$ is an indicator function that takes the value 1 when $w_i \in \lbrace w_0, \cdots , w_{i-1}\rbrace $ is true and 0 otherwise. <<</Repetitiveness>>> <<<Query-relatedness>>> A conversation is considered to be coherent if the response correlates well with the given query. For example, given a query “I like to paint”, the response “What kind of things do you paint?” is more relevant and easier to learn than another loosely-coupled response “Do you have any pets?”. Following previous work BIBREF14, we measure the query-relatedness using the cosine similarities between the query and its corresponding response in the embedding space: $\textit {cos\_sim}(\textit {sent\_emb}(c), \textit {sent\_emb}(r))$, where $c$ is the query and $r$ is the response. The sentence embedding is computed by taking the average word embedding weighted by the smooth inverse frequency $\textit {sent\_emb}(e)=\frac{1}{|e|}\sum _{w\in {}e}\frac{0.001}{0.001 + p(w)}emb(w)$ of words BIBREF15, where $emb(w)$ and $p(w)$ are the embedding and the probability of word $w$ respectively. <<</Query-relatedness>>> <<<Continuity>>> A coherent response not only responds to the given query, but also triggers the next utterance. An interactive conversation is carried out for multiple rounds and a response in the current turn also acts as the query in the next turn. As such, we introduce the continuity metric, which is similar to the query-relatedness metric, to assess the continuity of a response $r$ with respect to the subsequent utterance $u$, by measuring the cosine similarities between them. <<</Continuity>>> <<<Model Confidence>>> Despite the heuristic dialogue attributes, we further introduce the model confidence as an attribute, which distinguishes the easy-learnt samples from the under-learnt samples in terms of the model learning ability. A pretrained neural dialogue generation model assigns a relatively higher confidence probability for the easy-learnt samples than the under-learnt samples. Inspired by BIBREF16, BIBREF17, we employ the negative loss value of a dialogue sample under the pretrained model as the model confidence measure, indicating whether a sampled response is easy to be generated. Here we choose the attention-based sequence-to-sequence architecture with a cross-entropy objective as the underlying dialogue model. <<</Model Confidence>>> <<</Conversational Attributes>>> <<<Dialogue Analysis>>> <<<Distributions among Attributes>>> The distributions of the data samples regarding the aforementioned five attributes are shown in Figure FIGREF11. Although the attribute score distributions on three corpora are similar, they also have disparities: 1) Outliers frequently appear among all the distributions, which exhibits the uneven dialogue complexity. 
2) In terms of query-relatedness and continuity, to our surprise, the medians of the two distributions on PersonaChat are obviously smaller than the corresponding distributions on DailyDialog and OpenSubtitles. PersonaChat is manually created by crowd-sourcing, while DailyDialog and OpenSubtitles are collected from almost real-life conversations. 3) With respect to the model confidence (the negative loss value), the median of PersonaChat is relatively smaller, which illustrates that it is more difficult for the neural dialogue generation model to learn from PersonaChat. <<</Distributions among Attributes>>> <<<Attributes Independence>>> So far, we have analyzed five dialogue attributes. A question might be raised that how well the proposed attributes correlate with each other. To validate the correlations of these conversation attributes, we summarize the statistics of the Kendall $\tau $ correlations for each dataset in Table TABREF12. We find that these attributes, in general, show little correlations with each other. This partially validates that dialogue complexity involves multiple perspectives. <<</Attributes Independence>>> <<</Dialogue Analysis>>> <<</Curriculum Plausibility>>> <<<Curriculum Dialogue Learning>>> We propose an adaptive multi-curricula learning framework to accelerate dialogue learning and improve the performance of the neural dialogue generation model. <<<Single Curriculum Dialogue Learning>>> We first illustrate how a dialogue generation model exploits the curriculum by taking single curriculum dialogue learning as an example, where the curriculum is arranged by sorting each sample in the dialogue training set $\mathcal {D}_{train}$ according to one attribute. Then, at training time step $t$, a batch of training examples is sampled from the top $f(t)$ portions of the total sorted training samples, where the progressing function $f(t)$ determines the learning rate of the curriculum. Following BIBREF9, we define the progressing function $f(t)$ as $f(t)\triangleq min(1, \sqrt{t\frac{1-c_0^2}{T} + c_0^2})$, where $c_0 > 0$ is set to 0.01 and $T$ is the duration of curriculum learning. At the early stage of the training process, the neural dialogue generation model learns from the samples drawing from the front part of the curriculum. As the advance of the curriculum, the difficulty gradually increases, as more complex training examples appear. After training $T$ batches, each batch of training instances is drawn from the whole training set, which is same as the conventional training procedure without a curriculum. <<</Single Curriculum Dialogue Learning>>> <<<Adaptive Multi-curricula Learning>>> Dialogue complexity consists of multi-perspectives of attributes. We extend the naive single curriculum learning into the multi-curricula setting, where we provide the neural dialogue generation model with five different learning curricula, and each curriculum is prepared by ordering the training set in terms of the corresponding attribute metric accordingly. Scheduling multiple curricula in the same learning pace is obviously inappropriate. Enlightened by the phenomenon that children usually adjust the learning progress of multiple curricula dynamically in order to acquire a good mark, we further introduce an adaptive multi-curricula learning framework, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model. 
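As a concrete illustration of the single-curriculum procedure described above, the sketch below sorts a toy training set by one attribute score and draws each mini-batch from the easiest $f(t)$ portion, with $f(t)=\min(1, \sqrt{t\,(1-c_0^2)/T + c_0^2})$ and $c_0=0.01$; the attribute scores, batch size and $T$ used here are placeholders rather than settings from the paper.

```python
import math
import random

def progress(t: int, T: int, c0: float = 0.01) -> float:
    # f(t) = min(1, sqrt(t * (1 - c0^2) / T + c0^2)), as defined above.
    return min(1.0, math.sqrt(t * (1 - c0 ** 2) / T + c0 ** 2))

def sample_batch(sorted_samples, t, T, batch_size=4):
    """Draw a mini-batch from the easiest f(t) fraction of the curriculum."""
    cutoff = max(batch_size, int(progress(t, T) * len(sorted_samples)))
    return random.sample(sorted_samples[:cutoff], batch_size)

# Toy curriculum: (dialogue id, difficulty score) pairs, easiest first.
curriculum = sorted([(i, random.random()) for i in range(100)], key=lambda x: x[1])

T = 50   # duration of curriculum learning (placeholder)
for t in [1, 25, 50, 80]:
    batch = sample_batch(curriculum, t, T)
    print(f"step {t:>3}: f(t)={progress(t, T):.2f}, batch ids={[i for i, _ in batch]}")
```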
The adaptive multi-curricula learning framework is established upon the reinforcement learning (RL) paradigm. Figure FIGREF18 illustrates the overall learning process. The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs. A positive reward is expected if a multi-curricula scheduling action $a_t$ brings improvements on the model's performance, and the current mini-batch of training samples is drawn consulting with the scheduling action $a_t$. The neural dialogue generation model learns from those mini-batches, resulting with a new learning status $s_{t+1}$. The adaptive multi-curricula learning framework is optimized to maximize the reward. Such learning process loops continuously until the performance of the neural dialogue generation model converges. More specifically, the learning status of the dialogue model is represented as the state. Similar to other curriculum learning framework BIBREF18, BIBREF19, the learning status consists of several features, including the passed mini-batch number, the average historical training loss, the loss value on the training data, the margin value of predicted probabilities and the last validation metric values. To enable the proposed framework to be aware of the learning progress $\varrho _i$ regarding each attribute $i$, we also exploit $\varrho =\lbrace \varrho _0, \varrho _1, \cdots , \varrho _{k-1}\rbrace $ for state representations, where $k$ stands for the number of curricula, here $k=5$, and $\varrho _i$ can be simply measured as the learning steps on the attribute $i$. The multi-curricula learning framework samples a scheduling action $a_t$ per step by its policy $\Phi _\theta (a|s)$ with parameters $\theta $ to be learnt, and the scheduling action $a_t \in \lbrace 0, 1, \cdots , k-1\rbrace $ chooses one of the curricula. Then, a mini-batch of dialogue instances is sampled from the top $f(\varrho _i)$ portions of the chosen curriculum. The dialogue model is validated every $\Gamma $ training steps and the curriculum policy is updated at $\Gamma $-round intervals according to a reward $m_\Gamma $. To accelerate the neural dialogue learning, $m_\Gamma $ is defined as the ratio of two consecutive performance deviations on a held-out validation set: $m_\Gamma =\frac{\delta _{\Gamma }}{\delta _{\Gamma _{\text{prev}}}} - 1$. The performance deviation $\delta _{\Gamma }$ is calculated in terms of 13 automatic evaluation metrics $\lbrace \xi _1, \xi _2, \cdots , \xi _{13}\rbrace $ used in the experiments: where $\xi _i^{\Gamma }$ is the evaluation score of metric $i$ computed at the current validation turn and $\xi _i^{\Gamma _{\text{prev}}}$ is computed at the previous validation turn. Each score is normalized into $[0,1]$. The curriculum policy is trained by maximizing the expected reward: $J(\theta )=\mathbb {E}_{\Phi _\theta (a|s)}[M(s,a)]$, where $M(s,a)$ is the state-action value function. Since $M(s,a)$ is non-differentiable w.r.t. $\theta $, in this work, we use REINFORCE BIBREF20, a likelihood ratio policy gradient algorithm to optimize $J(\theta )$ based on the gradient: where $v_t$ is the sampled estimation of reward $M(s_t, a_t)$ from one episode execution of the policy $\Phi _\theta (a|s)$. 
In our implementation, $v_t$ is computed as the terminal reward $m_\Gamma $. <<</Adaptive Multi-curricula Learning>>> <<</Curriculum Dialogue Learning>>> <<<Experiments>>> <<<Experiment Settings>>> We perform experiments using the following state-of-the-art models: (1) SEQ2SEQ: a sequence-to-sequence model with attention mechanisms BIBREF21, (2) CVAE: a conditional variational auto-encoder model with KL-annealing and a BOW loss BIBREF2, (3) Transformer: an encoder-decoder architecture relying solely on attention mechanisms BIBREF22, (4) HRED: a generalized sequence-to-sequence model with the hierarchical RNN encoder BIBREF23, (5) DialogWAE: a conditional Wasserstein auto-encoder, which models the distribution of data by training a GAN within the latent variable space BIBREF6. We adopt several standard metrics widely used in existing works to measure the performance of dialogue generation models, including BLEU BIBREF24, embedding-based metrics (Average, Extrema, Greedy and Coherence) BIBREF25, BIBREF26, entropy-based metrics (Ent-{1,2}) BIBREF0 and distinct metrics (Dist-{1,2,3} and Intra-{1,2,3}) BIBREF1, BIBREF6. <<</Experiment Settings>>> <<<Implementation and Reproducibility>>> Our experiments are performed using ParlAI BIBREF27. Regarding model implementations, we employ a 2-layer bidirectional LSTM as the encoder and a unidirectional one as the decoder for the SEQ2SEQ and CVAE. The hidden size is set to 512, and the latent size is set to 64 for CVAE. For the Transformer, the hidden size, attention heads and number of hidden layers are set to 512, 8 and 6, respectively. In terms of HRED and DialogWAE, the utterance encoder is a bidirectional GRU with 512 hidden units in each direction. The context encoder and decoder are both GRUs with 512 hidden units. Regarding the curriculum length $T$, we set its value in the following manner: we train the baseline model using the vanilla training procedure and compute the number of training steps it takes to reach approximately 110% of its final loss value. We then set $T$ to this value. Each model is trained using two protocols: the vanilla training procedure without using any curriculum and our proposed adaptive multi-curricula learning procedure, keeping other configurations the same. <<</Implementation and Reproducibility>>> <<<Overall Performance and Human Evaluation>>> The automatic evaluation results of our proposed multi-curricula learning framework and the comparison models are listed in Table TABREF21. Compared with the vanilla training procedure, our curriculum learning framework 1) brings solid improvements for all the five dialogue models regarding almost all the evaluation metrics, 2) achieves competitive performance across three datasets, affirming the superiority and general applicability of our proposed framework. We also notice that the relative improvements of Distinct on OpenSubtitles are much larger (up to 122.46%) than the other two experiment datasets. We conjecture that the OpenSubtitles, with extremely uneven-complexity dialogue samples, benefits more from the multi-curricula learning paradigm. We conduct a human evaluation to validate the effectiveness of the proposed multi-curricula learning framework. We employ the DailyDialog as the evaluation corpus since it is closer to our daily conversations and easier for humans to make the judgment. 
We randomly sampled 100 cases from the test set and compared the generated responses of the models trained with the vanilla learning procedure and the multi-curricula learning framework. Three annotators, who have no knowledge about which system the response is from, are then required to evaluate among win (response$_1$ is better), loss (response$_2$ is better) and tie (they are equally good or bad) independently, considering four aspects: coherence, logical consistency, fluency and diversity. Cases with different rating results are counted as “tie”. Table TABREF25 reveals the results of the subjective evaluation. We observe that our multi-curricula learning framework outperforms the vanilla training method on all the five dialogue models and the kappa scores indicate that the annotators came to a fair agreement in the judgment. We checked the cases on which the vanilla training method loses to our multi-curricula learning method and found that the vanilla training method usually leads to irrelevant, generic and repetitive responses, while our method effectively alleviates such defects. <<</Overall Performance and Human Evaluation>>> <<<Model Analysis>>> <<<Single vs Multi-curricula>>> To further glean the insights regarding the effects of the five conversational attributes on the proposed learning framework, we conduct the ablation test using the SEQ2SEQ model by only exploiting a single attribute during the curriculum learning. Table TABREF26 reports the ablation test results on the DailyDialog. We observe that the curriculum learning leads to consistent performance improvements, even with one single conversational attribute. When applying the multi-curricula learning method to the model, we observe the nearly best performance. <<</Single vs Multi-curricula>>> <<<Effects of Adaptive Multi-curricula Learning>>> Adaptive multi-curricula learning enables the model to choose different curricula at different learning stages according to the learning status of the underlying model. As shown in Table TABREF27, we notice the performance drops when replacing the RL-based curriculum policy with the random policy, indicating that choosing different curricula according to the learning status of the model benefits the model training. When training the model with anti-curriculum learning, i.e., feeding examples to the model in the complex-to-easy manner, we also observe consistent performance decreases, affirming the effectiveness of the easy-to-complex learning manner. <<</Effects of Adaptive Multi-curricula Learning>>> <<<Learning Efficiency>>> Figure FIGREF28 shows comparative results when training the SEQ2SEQ model on DailyDialog with different training protocols. As shown in Figure FIGREF28, our training method accelerates the learning effectively and consistently outperforms the baseline by a large margin in most cases. <<</Learning Efficiency>>> <<<Multi-curricula Learning Route>>> To glean insights on how the proposed adaptive multi-curricula learning framework performs, we present the choosing curriculum distributions $\pi (a_t|s_t)$ during the model learning in Figure FIGREF29. We notice that the model focuses more on the curriculum of “query-relatedness” at the initial learning stage. As the learning proceeds, the model gradually turns its attention to other curricula. At the final stage, the model pays more attention to the “model confidence” curriculum. Such dynamic learning route is quite similar to the human learning behavior. 
<<</Multi-curricula Learning Route>>> <<<Examples with Different Learning Frequencies>>> As shown in Table TABREF30, the most frequently learnt examples are comprehensively far better than those seldom learnt examples, which exhibits the effectiveness of the adaptive multi-curricula learning framework. <<</Examples with Different Learning Frequencies>>> <<</Model Analysis>>> <<</Experiments>>> <<<Related Work>>> Neural dialogue generation. Neural generation models for dialogue, despite their ubiquity in current research, are still far from the real-world applications. Previous approaches enhancing neural dialogue generation models mainly focus on the learning systems by incorporating extra information to the dialogue models such as relevant dialogue history BIBREF5, topics BIBREF28, emotions BIBREF3, out-sourcing knowledge BIBREF4 or exemplars BIBREF29. Latent variables BIBREF0, BIBREF2 also benefit the model with more diverse response generations. In contrast with the previous researches, which pay most attention to the underlying dialogue models, in this work, we concentrate on the dialogue learning process and investigate how the performance of existing dialogue models can be improved on the conversation corpora with varying levels of complexity, by simply adapting the training protocols. BIBREF30 attributed the generic/uninteresting responses to the high-entropy utterances in the training set and proposed to improve dataset quality through data filtering. Though straightforward, the filtering threshold need be carefully chosen to prevent the data size decreasing too much. BIBREF8, BIBREF31 proposed to investigate instance weighting into dialogue systems. However, it is difficult to accurately define the “weight” of an example in conversation systems, since the dialogue data is of high diversity and complexity. Our proposed adaptive multi-curricula learning framework, concentrating on different curricula at evolving learning process according to the learning status of the underlying model, enables dialogue systems gradually proceed from easy to more complex samples in training and thus efficiently improves the response quality. Curriculum learning in NLP. BIBREF18 examined curriculum learning and demonstrated empirically that such curriculum approaches indeed help decrease training times and sometimes even improve generalization. BIBREF32 managed curriculum learning as an optimization problem. Curriculum learning has also been applied to many NLP tasks. To name a few, BIBREF10 applied self-paced learning for neural question answering. BIBREF33 proposed a curriculum learning based natural answer generation framework, dealing with low-quality QA-pairs first and then gradually learn more complete answers. BIBREF34 proposed curriculum pointer-generator networks for reading comprehension over long narratives. BIBREF9 applied curriculum learning for neural machine translation (NMT), aiming to reduce the need for specialized training heuristics and boost the performance of existing NMT systems. In our work, instead of organizing the curriculum only from a single aspect, we provide an adaptive multi-curricula dialogue learning framework, grounding our analysis on five conversation attributes regarding the dialogue complexity. <<</Related Work>>> <<<Conclusion>>> In this paper, we propose an adaptive multi-curricula dialogue learning framework, to enable the dialogue models to gradually proceed from easy samples to more complex ones in training. 
We first define and analyze five conversational attributes regarding the complexity and easiness of dialogue samples, and then present an adaptive multi-curricula learning framework, which chooses different curricula at different training stages according to the learning status of the model. Extensive experiments conducted on three large-scale datasets and five state-of-the-art conversation models show that our proposed learning framework is able to boost the performance of existing dialogue systems. <<</Conclusion>>> <<</Title>>>
{ "references": [ "The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs." ], "type": "extractive" }
2003.00639
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What human judgement metrics are used? Context: <<<Title>>> Learning from Easy to Complex: Adaptive Multi-curricula Learning for Neural Dialogue Generation <<<Abstract>>> Current state-of-the-art neural dialogue systems are mainly data-driven and are trained on human-generated responses. However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly. The noise and uneven complexity of query-response pairs impede the learning efficiency and effects of the neural dialogue generation models. What is more, so far, there are no unified dialogue complexity measurements, and the dialogue complexity embodies multiple aspects of attributes---specificity, repetitiveness, relevance, etc. Inspired by human behaviors of learning to converse, where children learn from easy dialogues to complex ones and dynamically adjust their learning progress, in this paper, we first analyze five dialogue attributes to measure the dialogue complexity in multiple perspectives on three publicly available corpora. Then, we propose an adaptive multi-curricula learning framework to schedule a committee of the organized curricula. The framework is established upon the reinforcement learning paradigm, which automatically chooses different curricula at the evolving learning process according to the learning status of the neural dialogue generation model. Extensive experiments conducted on five state-of-the-art models demonstrate its learning efficiency and effectiveness with respect to 13 automatic evaluation metrics and human judgments. <<</Abstract>>> <<<Introduction>>> Teaching machines to converse with humans naturally and engagingly is a fundamentally interesting and challenging problem in AI research. Many contemporary state-of-the-art approaches BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 for dialogue generation follow the data-driven paradigm: trained on a plethora of query-response pairs, the model attempts to mimic human conversations. As a data-driven approach, the quality of generated responses in neural dialogue generation heavily depends on the training data. As such, in order to train a robust and well-behaved model, most works obtain large-scale query-response pairs by crawling human-generated conversations from publicly available sources such as OpenSubtitles BIBREF7. However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly BIBREF8. Table TABREF1 shows samples drawn from OpenSubtitles BIBREF7, which contains millions of human-human conversations converted from movie transcripts. The response of the third sample “Yurakutei kikuhiko.” looks quite strange in terms of the given query, while the first sample is clearly easier to learn. The noise and uneven complexity of query-response pairs impede the learning efficiency and effects of the neural dialogue generation models. Babies learn to speak by first imitating easy and exact utterances repeatedly taught by their patient parents. As children grow up, they learn grade by grade, from simple conversations to more complex ones. 
Inspired by such human behaviors of learning to converse, in this paper, we introduce curriculum learning to bring the neural dialogue model with easy-to-complex learning curriculum, where the model first learns from easy conversations and then gradually manages more complicated dialogues. Nevertheless, organizing a curriculum with increasing difficulty faces insurmountable obstacles: 1) automatic evaluation of dialogue complexity is a non-trivial task. BIBREF9 defined the difficulty for the training examples with respect to the sentence length and word rarity in neural machine translation. BIBREF10 expressed the difficulty regarding the value of the objective function. So far, there is no unified approach in measuring dialogue complexity. 2) Unlike the single metric of complexity in other tasks, dialogue complexity embodies multiple aspects of attributes BIBREF11—the specificity and repetitiveness of the response, the relevance between the query and the response, etc. As such, in this paper, we study the dialogue distributions along five aspects of attributes to gather multiple perspectives on dialogue complexity, resulting with five curricula accordingly. Conventional curriculum learning organizes the training samples into one curriculum, whereas we employ multiple curricula for dialogue learning. Enlightened by the phenomenon that children usually adjust the learning focus of multiple curricula dynamically in order to acquire a good mark, we further propose an adaptive multi-curricula learning framework, established upon the reinforcement learning paradigm, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model. Detailed analysis and experiments demonstrate that the proposed framework effectively increases the learning efficiency and gains better performances on five state-of-the-art dialogue generation models regarding three publicly available conversational corpora. Code for this work is available on https://github.com/hengyicai/Adaptive_Multi-curricula_Learning_for_Dialog. <<</Introduction>>> <<<Curriculum Plausibility>>> Intuitively, a well-organized curriculum should provide the model learning with easy dialogues first, and then gradually increase the curriculum difficulty. However, currently, there is no unified approach for dialogue complexity evaluation, where the complexity involves multiple aspects of attributes. In this paper, we prepare the syllabus for dialogue learning with respect to five dialogue attributes. To ensure the universality and general applicability of the curriculum, we perform an in-depth investigation on three publicly available conversation corpora, PersonaChat BIBREF12, DailyDialog BIBREF13 and OpenSubtitles BIBREF7, consisting of 140 248, 66 594 and 358 668 real-life conversation samples, respectively. <<<Conversational Attributes>>> <<<Specificity>>> A notorious problem for neural dialogue generation model is that the model is prone to generate generic responses. The most unspecific responses are easy to learn, but are short and meaningless, while the most specific responses, consisting of too many rare words, are too difficult to learn, especially at the initial learning stage. Following BIBREF11, we measure the specificity of the response in terms of each word $w$ using Normalized Inverse Document Frequency (NIDF, ranging from 0 to 1): where $\text{IDF}(w)=\log {\frac{N_r}{N_w}}$. 
$N_r$ is the number of responses in the training set and $N_w$ is the number of those responses that contain $w$. $\text{idf}_{min}$ and $\text{idf}_{max}$ are the minimum and maximum IDFs, taken over all words in the vocabulary. The specificity of a response $r$ is measured as the mean NIDF of the words in $r$. <<</Specificity>>> <<<Repetitiveness>>> Repetitive responses are easy to generate in current auto-regressive response decoding, where response generation loops frequently, whereas diverse and informative responses are much more complicated for neural dialogue generation. We measure the repetitiveness of a response $r$ as: where $I(\cdot )$ is an indicator function that takes the value 1 when $w_i \in \lbrace w_0, \cdots , w_{i-1}\rbrace $ is true and 0 otherwise. <<</Repetitiveness>>> <<<Query-relatedness>>> A conversation is considered to be coherent if the response correlates well with the given query. For example, given a query “I like to paint”, the response “What kind of things do you paint?” is more relevant and easier to learn than another loosely-coupled response “Do you have any pets?”. Following previous work BIBREF14, we measure the query-relatedness using the cosine similarities between the query and its corresponding response in the embedding space: $\textit {cos\_sim}(\textit {sent\_emb}(c), \textit {sent\_emb}(r))$, where $c$ is the query and $r$ is the response. The sentence embedding is computed by taking the average word embedding weighted by the smooth inverse frequency $\textit {sent\_emb}(e)=\frac{1}{|e|}\sum _{w\in {}e}\frac{0.001}{0.001 + p(w)}emb(w)$ of words BIBREF15, where $emb(w)$ and $p(w)$ are the embedding and the probability of word $w$ respectively. <<</Query-relatedness>>> <<<Continuity>>> A coherent response not only responds to the given query, but also triggers the next utterance. An interactive conversation is carried out for multiple rounds and a response in the current turn also acts as the query in the next turn. As such, we introduce the continuity metric, which is similar to the query-relatedness metric, to assess the continuity of a response $r$ with respect to the subsequent utterance $u$, by measuring the cosine similarities between them. <<</Continuity>>> <<<Model Confidence>>> Despite the heuristic dialogue attributes, we further introduce the model confidence as an attribute, which distinguishes the easy-learnt samples from the under-learnt samples in terms of the model learning ability. A pretrained neural dialogue generation model assigns a relatively higher confidence probability for the easy-learnt samples than the under-learnt samples. Inspired by BIBREF16, BIBREF17, we employ the negative loss value of a dialogue sample under the pretrained model as the model confidence measure, indicating whether a sampled response is easy to be generated. Here we choose the attention-based sequence-to-sequence architecture with a cross-entropy objective as the underlying dialogue model. <<</Model Confidence>>> <<</Conversational Attributes>>> <<<Dialogue Analysis>>> <<<Distributions among Attributes>>> The distributions of the data samples regarding the aforementioned five attributes are shown in Figure FIGREF11. Although the attribute score distributions on three corpora are similar, they also have disparities: 1) Outliers frequently appear among all the distributions, which exhibits the uneven dialogue complexity. 
2) In terms of query-relatedness and continuity, to our surprise, the medians of the two distributions on PersonaChat are obviously smaller than the corresponding distributions on DailyDialog and OpenSubtitles. PersonaChat is manually created by crowd-sourcing, while DailyDialog and OpenSubtitles are collected from almost real-life conversations. 3) With respect to the model confidence (the negative loss value), the median of PersonaChat is relatively smaller, which illustrates that it is more difficult for the neural dialogue generation model to learn from PersonaChat. <<</Distributions among Attributes>>> <<<Attributes Independence>>> So far, we have analyzed five dialogue attributes. A question might be raised that how well the proposed attributes correlate with each other. To validate the correlations of these conversation attributes, we summarize the statistics of the Kendall $\tau $ correlations for each dataset in Table TABREF12. We find that these attributes, in general, show little correlations with each other. This partially validates that dialogue complexity involves multiple perspectives. <<</Attributes Independence>>> <<</Dialogue Analysis>>> <<</Curriculum Plausibility>>> <<<Curriculum Dialogue Learning>>> We propose an adaptive multi-curricula learning framework to accelerate dialogue learning and improve the performance of the neural dialogue generation model. <<<Single Curriculum Dialogue Learning>>> We first illustrate how a dialogue generation model exploits the curriculum by taking single curriculum dialogue learning as an example, where the curriculum is arranged by sorting each sample in the dialogue training set $\mathcal {D}_{train}$ according to one attribute. Then, at training time step $t$, a batch of training examples is sampled from the top $f(t)$ portions of the total sorted training samples, where the progressing function $f(t)$ determines the learning rate of the curriculum. Following BIBREF9, we define the progressing function $f(t)$ as $f(t)\triangleq min(1, \sqrt{t\frac{1-c_0^2}{T} + c_0^2})$, where $c_0 > 0$ is set to 0.01 and $T$ is the duration of curriculum learning. At the early stage of the training process, the neural dialogue generation model learns from the samples drawing from the front part of the curriculum. As the advance of the curriculum, the difficulty gradually increases, as more complex training examples appear. After training $T$ batches, each batch of training instances is drawn from the whole training set, which is same as the conventional training procedure without a curriculum. <<</Single Curriculum Dialogue Learning>>> <<<Adaptive Multi-curricula Learning>>> Dialogue complexity consists of multi-perspectives of attributes. We extend the naive single curriculum learning into the multi-curricula setting, where we provide the neural dialogue generation model with five different learning curricula, and each curriculum is prepared by ordering the training set in terms of the corresponding attribute metric accordingly. Scheduling multiple curricula in the same learning pace is obviously inappropriate. Enlightened by the phenomenon that children usually adjust the learning progress of multiple curricula dynamically in order to acquire a good mark, we further introduce an adaptive multi-curricula learning framework, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model. 
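Before turning to the adaptive scheduler, a minimal sketch of the single-curriculum sampling described above may help. The progressing function follows the formula given in the text; the function names, the default batch size, and the clamping against small datasets are illustrative assumptions.

```python
import math
import random

def progress(t, T, c0=0.01):
    """Progressing function f(t) = min(1, sqrt(t * (1 - c0^2) / T + c0^2)):
    the fraction of the sorted curriculum available for sampling at step t."""
    return min(1.0, math.sqrt(t * (1.0 - c0 ** 2) / T + c0 ** 2))

def sample_batch(sorted_examples, t, T, batch_size=32):
    """Draw a mini-batch from the easiest f(t) portion of one curriculum,
    where `sorted_examples` is the training set ordered easy-to-complex
    by a single attribute. Batch size and clamping are illustrative."""
    cutoff = int(progress(t, T) * len(sorted_examples))
    cutoff = min(len(sorted_examples), max(cutoff, batch_size))
    return random.sample(sorted_examples[:cutoff], min(batch_size, cutoff))
```

At $t = 0$ only about 1% of the sorted samples are reachable; after $T$ steps the whole training set is available, matching the behavior described above.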
The adaptive multi-curricula learning framework is established upon the reinforcement learning (RL) paradigm. Figure FIGREF18 illustrates the overall learning process. The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs. A positive reward is expected if a multi-curricula scheduling action $a_t$ brings improvements on the model's performance, and the current mini-batch of training samples is drawn consulting with the scheduling action $a_t$. The neural dialogue generation model learns from those mini-batches, resulting with a new learning status $s_{t+1}$. The adaptive multi-curricula learning framework is optimized to maximize the reward. Such learning process loops continuously until the performance of the neural dialogue generation model converges. More specifically, the learning status of the dialogue model is represented as the state. Similar to other curriculum learning framework BIBREF18, BIBREF19, the learning status consists of several features, including the passed mini-batch number, the average historical training loss, the loss value on the training data, the margin value of predicted probabilities and the last validation metric values. To enable the proposed framework to be aware of the learning progress $\varrho _i$ regarding each attribute $i$, we also exploit $\varrho =\lbrace \varrho _0, \varrho _1, \cdots , \varrho _{k-1}\rbrace $ for state representations, where $k$ stands for the number of curricula, here $k=5$, and $\varrho _i$ can be simply measured as the learning steps on the attribute $i$. The multi-curricula learning framework samples a scheduling action $a_t$ per step by its policy $\Phi _\theta (a|s)$ with parameters $\theta $ to be learnt, and the scheduling action $a_t \in \lbrace 0, 1, \cdots , k-1\rbrace $ chooses one of the curricula. Then, a mini-batch of dialogue instances is sampled from the top $f(\varrho _i)$ portions of the chosen curriculum. The dialogue model is validated every $\Gamma $ training steps and the curriculum policy is updated at $\Gamma $-round intervals according to a reward $m_\Gamma $. To accelerate the neural dialogue learning, $m_\Gamma $ is defined as the ratio of two consecutive performance deviations on a held-out validation set: $m_\Gamma =\frac{\delta _{\Gamma }}{\delta _{\Gamma _{\text{prev}}}} - 1$. The performance deviation $\delta _{\Gamma }$ is calculated in terms of 13 automatic evaluation metrics $\lbrace \xi _1, \xi _2, \cdots , \xi _{13}\rbrace $ used in the experiments: where $\xi _i^{\Gamma }$ is the evaluation score of metric $i$ computed at the current validation turn and $\xi _i^{\Gamma _{\text{prev}}}$ is computed at the previous validation turn. Each score is normalized into $[0,1]$. The curriculum policy is trained by maximizing the expected reward: $J(\theta )=\mathbb {E}_{\Phi _\theta (a|s)}[M(s,a)]$, where $M(s,a)$ is the state-action value function. Since $M(s,a)$ is non-differentiable w.r.t. $\theta $, in this work, we use REINFORCE BIBREF20, a likelihood ratio policy gradient algorithm to optimize $J(\theta )$ based on the gradient: where $v_t$ is the sampled estimation of reward $M(s_t, a_t)$ from one episode execution of the policy $\Phi _\theta (a|s)$. 
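To make the reward signal concrete, here is a rough sketch of $m_\Gamma$ under stated assumptions: the text does not spell out how the 13 normalized metric deviations are aggregated into $\delta_\Gamma$, so a simple mean of per-metric differences is assumed, and the small epsilon is only a numerical guard that is not part of the original formula.

```python
def performance_deviation(curr_scores, prev_scores):
    """delta_Gamma: change of the normalized (in [0, 1]) validation metrics
    between two consecutive validation turns. The aggregation over the 13
    metrics is assumed to be a mean of per-metric differences."""
    return sum(c - p for c, p in zip(curr_scores, prev_scores)) / len(curr_scores)

def curriculum_reward(delta_curr, delta_prev, eps=1e-8):
    """m_Gamma = delta_Gamma / delta_Gamma_prev - 1: the ratio of two
    consecutive performance deviations on the held-out validation set."""
    return delta_curr / (delta_prev + eps) - 1.0
```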
In our implementation, $v_t$ is computed as the terminal reward $m_\Gamma $. <<</Adaptive Multi-curricula Learning>>> <<</Curriculum Dialogue Learning>>> <<<Experiments>>> <<<Experiment Settings>>> We perform experiments using the following state-of-the-art models: (1) SEQ2SEQ: a sequence-to-sequence model with attention mechanisms BIBREF21, (2) CVAE: a conditional variational auto-encoder model with KL-annealing and a BOW loss BIBREF2, (3) Transformer: an encoder-decoder architecture relying solely on attention mechanisms BIBREF22, (4) HRED: a generalized sequence-to-sequence model with the hierarchical RNN encoder BIBREF23, (5) DialogWAE: a conditional Wasserstein auto-encoder, which models the distribution of data by training a GAN within the latent variable space BIBREF6. We adopt several standard metrics widely used in existing works to measure the performance of dialogue generation models, including BLEU BIBREF24, embedding-based metrics (Average, Extrema, Greedy and Coherence) BIBREF25, BIBREF26, entropy-based metrics (Ent-{1,2}) BIBREF0 and distinct metrics (Dist-{1,2,3} and Intra-{1,2,3}) BIBREF1, BIBREF6. <<</Experiment Settings>>> <<<Implementation and Reproducibility>>> Our experiments are performed using ParlAI BIBREF27. Regarding model implementations, we employ a 2-layer bidirectional LSTM as the encoder and a unidirectional one as the decoder for the SEQ2SEQ and CVAE. The hidden size is set to 512, and the latent size is set to 64 for CVAE. For the Transformer, the hidden size, attention heads and number of hidden layers are set to 512, 8 and 6, respectively. In terms of HRED and DialogWAE, the utterance encoder is a bidirectional GRU with 512 hidden units in each direction. The context encoder and decoder are both GRUs with 512 hidden units. Regarding the curriculum length $T$, we set its value in the following manner: we train the baseline model using the vanilla training procedure and compute the number of training steps it takes to reach approximately 110% of its final loss value. We then set $T$ to this value. Each model is trained using two protocols: the vanilla training procedure without using any curriculum and our proposed adaptive multi-curricula learning procedure, keeping other configurations the same. <<</Implementation and Reproducibility>>> <<<Overall Performance and Human Evaluation>>> The automatic evaluation results of our proposed multi-curricula learning framework and the comparison models are listed in Table TABREF21. Compared with the vanilla training procedure, our curriculum learning framework 1) brings solid improvements for all the five dialogue models regarding almost all the evaluation metrics, 2) achieves competitive performance across three datasets, affirming the superiority and general applicability of our proposed framework. We also notice that the relative improvements of Distinct on OpenSubtitles are much larger (up to 122.46%) than the other two experiment datasets. We conjecture that the OpenSubtitles, with extremely uneven-complexity dialogue samples, benefits more from the multi-curricula learning paradigm. We conduct a human evaluation to validate the effectiveness of the proposed multi-curricula learning framework. We employ the DailyDialog as the evaluation corpus since it is closer to our daily conversations and easier for humans to make the judgment. 
We randomly sampled 100 cases from the test set and compared the generated responses of the models trained with the vanilla learning procedure and the multi-curricula learning framework. Three annotators, who have no knowledge about which system the response is from, are then required to evaluate among win (response$_1$ is better), loss (response$_2$ is better) and tie (they are equally good or bad) independently, considering four aspects: coherence, logical consistency, fluency and diversity. Cases with different rating results are counted as “tie”. Table TABREF25 reveals the results of the subjective evaluation. We observe that our multi-curricula learning framework outperforms the vanilla training method on all the five dialogue models and the kappa scores indicate that the annotators came to a fair agreement in the judgment. We checked the cases on which the vanilla training method loses to our multi-curricula learning method and found that the vanilla training method usually leads to irrelevant, generic and repetitive responses, while our method effectively alleviates such defects. <<</Overall Performance and Human Evaluation>>> <<<Model Analysis>>> <<<Single vs Multi-curricula>>> To further glean the insights regarding the effects of the five conversational attributes on the proposed learning framework, we conduct the ablation test using the SEQ2SEQ model by only exploiting a single attribute during the curriculum learning. Table TABREF26 reports the ablation test results on the DailyDialog. We observe that the curriculum learning leads to consistent performance improvements, even with one single conversational attribute. When applying the multi-curricula learning method to the model, we observe the nearly best performance. <<</Single vs Multi-curricula>>> <<<Effects of Adaptive Multi-curricula Learning>>> Adaptive multi-curricula learning enables the model to choose different curricula at different learning stages according to the learning status of the underlying model. As shown in Table TABREF27, we notice the performance drops when replacing the RL-based curriculum policy with the random policy, indicating that choosing different curricula according to the learning status of the model benefits the model training. When training the model with anti-curriculum learning, i.e., feeding examples to the model in the complex-to-easy manner, we also observe consistent performance decreases, affirming the effectiveness of the easy-to-complex learning manner. <<</Effects of Adaptive Multi-curricula Learning>>> <<<Learning Efficiency>>> Figure FIGREF28 shows comparative results when training the SEQ2SEQ model on DailyDialog with different training protocols. As shown in Figure FIGREF28, our training method accelerates the learning effectively and consistently outperforms the baseline by a large margin in most cases. <<</Learning Efficiency>>> <<<Multi-curricula Learning Route>>> To glean insights on how the proposed adaptive multi-curricula learning framework performs, we present the choosing curriculum distributions $\pi (a_t|s_t)$ during the model learning in Figure FIGREF29. We notice that the model focuses more on the curriculum of “query-relatedness” at the initial learning stage. As the learning proceeds, the model gradually turns its attention to other curricula. At the final stage, the model pays more attention to the “model confidence” curriculum. Such dynamic learning route is quite similar to the human learning behavior. 
<<</Multi-curricula Learning Route>>> <<<Examples with Different Learning Frequencies>>> As shown in Table TABREF30, the most frequently learnt examples are comprehensively far better than those seldom learnt examples, which exhibits the effectiveness of the adaptive multi-curricula learning framework. <<</Examples with Different Learning Frequencies>>> <<</Model Analysis>>> <<</Experiments>>> <<<Related Work>>> Neural dialogue generation. Neural generation models for dialogue, despite their ubiquity in current research, are still far from the real-world applications. Previous approaches enhancing neural dialogue generation models mainly focus on the learning systems by incorporating extra information to the dialogue models such as relevant dialogue history BIBREF5, topics BIBREF28, emotions BIBREF3, out-sourcing knowledge BIBREF4 or exemplars BIBREF29. Latent variables BIBREF0, BIBREF2 also benefit the model with more diverse response generations. In contrast with the previous researches, which pay most attention to the underlying dialogue models, in this work, we concentrate on the dialogue learning process and investigate how the performance of existing dialogue models can be improved on the conversation corpora with varying levels of complexity, by simply adapting the training protocols. BIBREF30 attributed the generic/uninteresting responses to the high-entropy utterances in the training set and proposed to improve dataset quality through data filtering. Though straightforward, the filtering threshold need be carefully chosen to prevent the data size decreasing too much. BIBREF8, BIBREF31 proposed to investigate instance weighting into dialogue systems. However, it is difficult to accurately define the “weight” of an example in conversation systems, since the dialogue data is of high diversity and complexity. Our proposed adaptive multi-curricula learning framework, concentrating on different curricula at evolving learning process according to the learning status of the underlying model, enables dialogue systems gradually proceed from easy to more complex samples in training and thus efficiently improves the response quality. Curriculum learning in NLP. BIBREF18 examined curriculum learning and demonstrated empirically that such curriculum approaches indeed help decrease training times and sometimes even improve generalization. BIBREF32 managed curriculum learning as an optimization problem. Curriculum learning has also been applied to many NLP tasks. To name a few, BIBREF10 applied self-paced learning for neural question answering. BIBREF33 proposed a curriculum learning based natural answer generation framework, dealing with low-quality QA-pairs first and then gradually learn more complete answers. BIBREF34 proposed curriculum pointer-generator networks for reading comprehension over long narratives. BIBREF9 applied curriculum learning for neural machine translation (NMT), aiming to reduce the need for specialized training heuristics and boost the performance of existing NMT systems. In our work, instead of organizing the curriculum only from a single aspect, we provide an adaptive multi-curricula dialogue learning framework, grounding our analysis on five conversation attributes regarding the dialogue complexity. <<</Related Work>>> <<<Conclusion>>> In this paper, we propose an adaptive multi-curricula dialogue learning framework, to enable the dialogue models to gradually proceed from easy samples to more complex ones in training. 
We first define and analyze five conversational attributes regarding the complexity and easiness of dialogue samples, and then present an adaptive multi-curricula learning framework, which chooses different curricula at different training stages according to the learning status of the model. Extensive experiments conducted on three large-scale datasets and five state-of-the-art conversation models show that our proposed learning framework is able to boost the performance of existing dialogue systems. <<</Conclusion>>> <<</Title>>>
{ "references": [ "coherence, logical consistency, fluency and diversity" ], "type": "extractive" }
2003.00639
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What automatic evaluation metrics are used? Context: <<<Title>>> Learning from Easy to Complex: Adaptive Multi-curricula Learning for Neural Dialogue Generation <<<Abstract>>> Current state-of-the-art neural dialogue systems are mainly data-driven and are trained on human-generated responses. However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly. The noise and uneven complexity of query-response pairs impede the learning efficiency and effects of the neural dialogue generation models. What is more, so far, there are no unified dialogue complexity measurements, and the dialogue complexity embodies multiple aspects of attributes---specificity, repetitiveness, relevance, etc. Inspired by human behaviors of learning to converse, where children learn from easy dialogues to complex ones and dynamically adjust their learning progress, in this paper, we first analyze five dialogue attributes to measure the dialogue complexity in multiple perspectives on three publicly available corpora. Then, we propose an adaptive multi-curricula learning framework to schedule a committee of the organized curricula. The framework is established upon the reinforcement learning paradigm, which automatically chooses different curricula at the evolving learning process according to the learning status of the neural dialogue generation model. Extensive experiments conducted on five state-of-the-art models demonstrate its learning efficiency and effectiveness with respect to 13 automatic evaluation metrics and human judgments. <<</Abstract>>> <<<Introduction>>> Teaching machines to converse with humans naturally and engagingly is a fundamentally interesting and challenging problem in AI research. Many contemporary state-of-the-art approaches BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 for dialogue generation follow the data-driven paradigm: trained on a plethora of query-response pairs, the model attempts to mimic human conversations. As a data-driven approach, the quality of generated responses in neural dialogue generation heavily depends on the training data. As such, in order to train a robust and well-behaved model, most works obtain large-scale query-response pairs by crawling human-generated conversations from publicly available sources such as OpenSubtitles BIBREF7. However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly BIBREF8. Table TABREF1 shows samples drawn from OpenSubtitles BIBREF7, which contains millions of human-human conversations converted from movie transcripts. The response of the third sample “Yurakutei kikuhiko.” looks quite strange in terms of the given query, while the first sample is clearly easier to learn. The noise and uneven complexity of query-response pairs impede the learning efficiency and effects of the neural dialogue generation models. Babies learn to speak by first imitating easy and exact utterances repeatedly taught by their patient parents. As children grow up, they learn grade by grade, from simple conversations to more complex ones. 
Inspired by such human behaviors of learning to converse, in this paper, we introduce curriculum learning to bring the neural dialogue model with easy-to-complex learning curriculum, where the model first learns from easy conversations and then gradually manages more complicated dialogues. Nevertheless, organizing a curriculum with increasing difficulty faces insurmountable obstacles: 1) automatic evaluation of dialogue complexity is a non-trivial task. BIBREF9 defined the difficulty for the training examples with respect to the sentence length and word rarity in neural machine translation. BIBREF10 expressed the difficulty regarding the value of the objective function. So far, there is no unified approach in measuring dialogue complexity. 2) Unlike the single metric of complexity in other tasks, dialogue complexity embodies multiple aspects of attributes BIBREF11—the specificity and repetitiveness of the response, the relevance between the query and the response, etc. As such, in this paper, we study the dialogue distributions along five aspects of attributes to gather multiple perspectives on dialogue complexity, resulting with five curricula accordingly. Conventional curriculum learning organizes the training samples into one curriculum, whereas we employ multiple curricula for dialogue learning. Enlightened by the phenomenon that children usually adjust the learning focus of multiple curricula dynamically in order to acquire a good mark, we further propose an adaptive multi-curricula learning framework, established upon the reinforcement learning paradigm, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model. Detailed analysis and experiments demonstrate that the proposed framework effectively increases the learning efficiency and gains better performances on five state-of-the-art dialogue generation models regarding three publicly available conversational corpora. Code for this work is available on https://github.com/hengyicai/Adaptive_Multi-curricula_Learning_for_Dialog. <<</Introduction>>> <<<Curriculum Plausibility>>> Intuitively, a well-organized curriculum should provide the model learning with easy dialogues first, and then gradually increase the curriculum difficulty. However, currently, there is no unified approach for dialogue complexity evaluation, where the complexity involves multiple aspects of attributes. In this paper, we prepare the syllabus for dialogue learning with respect to five dialogue attributes. To ensure the universality and general applicability of the curriculum, we perform an in-depth investigation on three publicly available conversation corpora, PersonaChat BIBREF12, DailyDialog BIBREF13 and OpenSubtitles BIBREF7, consisting of 140 248, 66 594 and 358 668 real-life conversation samples, respectively. <<<Conversational Attributes>>> <<<Specificity>>> A notorious problem for neural dialogue generation model is that the model is prone to generate generic responses. The most unspecific responses are easy to learn, but are short and meaningless, while the most specific responses, consisting of too many rare words, are too difficult to learn, especially at the initial learning stage. Following BIBREF11, we measure the specificity of the response in terms of each word $w$ using Normalized Inverse Document Frequency (NIDF, ranging from 0 to 1): where $\text{IDF}(w)=\log {\frac{N_r}{N_w}}$. 
$N_r$ is the number of responses in the training set and $N_w$ is the number of those responses that contain $w$. $\text{idf}_{min}$ and $\text{idf}_{max}$ are the minimum and maximum IDFs, taken over all words in the vocabulary. The specificity of a response $r$ is measured as the mean NIDF of the words in $r$. <<</Specificity>>> <<<Repetitiveness>>> Repetitive responses are easy to generate in current auto-regressive response decoding, where response generation loops frequently, whereas diverse and informative responses are much more complicated for neural dialogue generation. We measure the repetitiveness of a response $r$ as: where $I(\cdot )$ is an indicator function that takes the value 1 when $w_i \in \lbrace w_0, \cdots , w_{i-1}\rbrace $ is true and 0 otherwise. <<</Repetitiveness>>> <<<Query-relatedness>>> A conversation is considered to be coherent if the response correlates well with the given query. For example, given a query “I like to paint”, the response “What kind of things do you paint?” is more relevant and easier to learn than another loosely-coupled response “Do you have any pets?”. Following previous work BIBREF14, we measure the query-relatedness using the cosine similarities between the query and its corresponding response in the embedding space: $\textit {cos\_sim}(\textit {sent\_emb}(c), \textit {sent\_emb}(r))$, where $c$ is the query and $r$ is the response. The sentence embedding is computed by taking the average word embedding weighted by the smooth inverse frequency $\textit {sent\_emb}(e)=\frac{1}{|e|}\sum _{w\in {}e}\frac{0.001}{0.001 + p(w)}emb(w)$ of words BIBREF15, where $emb(w)$ and $p(w)$ are the embedding and the probability of word $w$ respectively. <<</Query-relatedness>>> <<<Continuity>>> A coherent response not only responds to the given query, but also triggers the next utterance. An interactive conversation is carried out for multiple rounds and a response in the current turn also acts as the query in the next turn. As such, we introduce the continuity metric, which is similar to the query-relatedness metric, to assess the continuity of a response $r$ with respect to the subsequent utterance $u$, by measuring the cosine similarities between them. <<</Continuity>>> <<<Model Confidence>>> Despite the heuristic dialogue attributes, we further introduce the model confidence as an attribute, which distinguishes the easy-learnt samples from the under-learnt samples in terms of the model learning ability. A pretrained neural dialogue generation model assigns a relatively higher confidence probability for the easy-learnt samples than the under-learnt samples. Inspired by BIBREF16, BIBREF17, we employ the negative loss value of a dialogue sample under the pretrained model as the model confidence measure, indicating whether a sampled response is easy to be generated. Here we choose the attention-based sequence-to-sequence architecture with a cross-entropy objective as the underlying dialogue model. <<</Model Confidence>>> <<</Conversational Attributes>>> <<<Dialogue Analysis>>> <<<Distributions among Attributes>>> The distributions of the data samples regarding the aforementioned five attributes are shown in Figure FIGREF11. Although the attribute score distributions on three corpora are similar, they also have disparities: 1) Outliers frequently appear among all the distributions, which exhibits the uneven dialogue complexity. 
2) In terms of query-relatedness and continuity, to our surprise, the medians of the two distributions on PersonaChat are obviously smaller than the corresponding distributions on DailyDialog and OpenSubtitles. PersonaChat is manually created by crowd-sourcing, while DailyDialog and OpenSubtitles are collected from almost real-life conversations. 3) With respect to the model confidence (the negative loss value), the median of PersonaChat is relatively smaller, which illustrates that it is more difficult for the neural dialogue generation model to learn from PersonaChat. <<</Distributions among Attributes>>> <<<Attributes Independence>>> So far, we have analyzed five dialogue attributes. A question might be raised that how well the proposed attributes correlate with each other. To validate the correlations of these conversation attributes, we summarize the statistics of the Kendall $\tau $ correlations for each dataset in Table TABREF12. We find that these attributes, in general, show little correlations with each other. This partially validates that dialogue complexity involves multiple perspectives. <<</Attributes Independence>>> <<</Dialogue Analysis>>> <<</Curriculum Plausibility>>> <<<Curriculum Dialogue Learning>>> We propose an adaptive multi-curricula learning framework to accelerate dialogue learning and improve the performance of the neural dialogue generation model. <<<Single Curriculum Dialogue Learning>>> We first illustrate how a dialogue generation model exploits the curriculum by taking single curriculum dialogue learning as an example, where the curriculum is arranged by sorting each sample in the dialogue training set $\mathcal {D}_{train}$ according to one attribute. Then, at training time step $t$, a batch of training examples is sampled from the top $f(t)$ portions of the total sorted training samples, where the progressing function $f(t)$ determines the learning rate of the curriculum. Following BIBREF9, we define the progressing function $f(t)$ as $f(t)\triangleq min(1, \sqrt{t\frac{1-c_0^2}{T} + c_0^2})$, where $c_0 > 0$ is set to 0.01 and $T$ is the duration of curriculum learning. At the early stage of the training process, the neural dialogue generation model learns from the samples drawing from the front part of the curriculum. As the advance of the curriculum, the difficulty gradually increases, as more complex training examples appear. After training $T$ batches, each batch of training instances is drawn from the whole training set, which is same as the conventional training procedure without a curriculum. <<</Single Curriculum Dialogue Learning>>> <<<Adaptive Multi-curricula Learning>>> Dialogue complexity consists of multi-perspectives of attributes. We extend the naive single curriculum learning into the multi-curricula setting, where we provide the neural dialogue generation model with five different learning curricula, and each curriculum is prepared by ordering the training set in terms of the corresponding attribute metric accordingly. Scheduling multiple curricula in the same learning pace is obviously inappropriate. Enlightened by the phenomenon that children usually adjust the learning progress of multiple curricula dynamically in order to acquire a good mark, we further introduce an adaptive multi-curricula learning framework, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model. 
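Since each curriculum above is prepared by ordering the training set with one attribute score, a brief sketch of two of those scores (NIDF-based specificity and repetitiveness) and of the Kendall $\tau$ independence check may be useful. Where the equations are omitted in the text, the normalizations below are assumptions; scipy is an assumed dependency, and responses are treated as token lists.

```python
import math
from collections import Counter
from scipy.stats import kendalltau  # assumed dependency for the tau check

def build_idf(responses):
    """IDF(w) = log(N_r / N_w): N_r is the number of training responses,
    N_w the number of responses containing w."""
    n_r = len(responses)
    doc_freq = Counter(w for r in responses for w in set(r))
    return {w: math.log(n_r / n_w) for w, n_w in doc_freq.items()}

def nidf_specificity(response, idf, idf_min, idf_max):
    """Mean normalized IDF of the words in a response; min-max normalization
    is assumed, since the NIDF equation itself is omitted above."""
    vals = [(idf[w] - idf_min) / (idf_max - idf_min) for w in response if w in idf]
    return sum(vals) / len(vals) if vals else 0.0

def repetitiveness(response):
    """Fraction of tokens already seen earlier in the same response
    (normalization by length is assumed, as the equation is omitted above)."""
    seen, repeats = set(), 0
    for w in response:
        repeats += int(w in seen)
        seen.add(w)
    return repeats / max(len(response), 1)

def attribute_independence(scores_a, scores_b):
    """Kendall tau between two attribute score lists over the same samples,
    mirroring the independence check reported above."""
    tau, _ = kendalltau(scores_a, scores_b)
    return tau

# usage sketch:
#   idf = build_idf(train_responses)
#   idf_min, idf_max = min(idf.values()), max(idf.values())
#   spec = [nidf_specificity(r, idf, idf_min, idf_max) for r in train_responses]
```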
The adaptive multi-curricula learning framework is established upon the reinforcement learning (RL) paradigm. Figure FIGREF18 illustrates the overall learning process. The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs. A positive reward is expected if a multi-curricula scheduling action $a_t$ brings improvements on the model's performance, and the current mini-batch of training samples is drawn consulting with the scheduling action $a_t$. The neural dialogue generation model learns from those mini-batches, resulting with a new learning status $s_{t+1}$. The adaptive multi-curricula learning framework is optimized to maximize the reward. Such learning process loops continuously until the performance of the neural dialogue generation model converges. More specifically, the learning status of the dialogue model is represented as the state. Similar to other curriculum learning framework BIBREF18, BIBREF19, the learning status consists of several features, including the passed mini-batch number, the average historical training loss, the loss value on the training data, the margin value of predicted probabilities and the last validation metric values. To enable the proposed framework to be aware of the learning progress $\varrho _i$ regarding each attribute $i$, we also exploit $\varrho =\lbrace \varrho _0, \varrho _1, \cdots , \varrho _{k-1}\rbrace $ for state representations, where $k$ stands for the number of curricula, here $k=5$, and $\varrho _i$ can be simply measured as the learning steps on the attribute $i$. The multi-curricula learning framework samples a scheduling action $a_t$ per step by its policy $\Phi _\theta (a|s)$ with parameters $\theta $ to be learnt, and the scheduling action $a_t \in \lbrace 0, 1, \cdots , k-1\rbrace $ chooses one of the curricula. Then, a mini-batch of dialogue instances is sampled from the top $f(\varrho _i)$ portions of the chosen curriculum. The dialogue model is validated every $\Gamma $ training steps and the curriculum policy is updated at $\Gamma $-round intervals according to a reward $m_\Gamma $. To accelerate the neural dialogue learning, $m_\Gamma $ is defined as the ratio of two consecutive performance deviations on a held-out validation set: $m_\Gamma =\frac{\delta _{\Gamma }}{\delta _{\Gamma _{\text{prev}}}} - 1$. The performance deviation $\delta _{\Gamma }$ is calculated in terms of 13 automatic evaluation metrics $\lbrace \xi _1, \xi _2, \cdots , \xi _{13}\rbrace $ used in the experiments: where $\xi _i^{\Gamma }$ is the evaluation score of metric $i$ computed at the current validation turn and $\xi _i^{\Gamma _{\text{prev}}}$ is computed at the previous validation turn. Each score is normalized into $[0,1]$. The curriculum policy is trained by maximizing the expected reward: $J(\theta )=\mathbb {E}_{\Phi _\theta (a|s)}[M(s,a)]$, where $M(s,a)$ is the state-action value function. Since $M(s,a)$ is non-differentiable w.r.t. $\theta $, in this work, we use REINFORCE BIBREF20, a likelihood ratio policy gradient algorithm to optimize $J(\theta )$ based on the gradient: where $v_t$ is the sampled estimation of reward $M(s_t, a_t)$ from one episode execution of the policy $\Phi _\theta (a|s)$. 
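A minimal PyTorch-style sketch of the curriculum policy $\Phi_\theta(a|s)$ and its REINFORCE update is given below; the state featurization, network sizes, and optimizer choice are illustrative assumptions, and $v_t$ is set to the terminal reward $m_\Gamma$, as the implementation note that follows explains.

```python
import torch
import torch.nn as nn

class CurriculumPolicy(nn.Module):
    """Softmax policy Phi_theta(a|s) over k curricula; the state features
    (mini-batch count, historical loss, validation scores, progress rho_i)
    are assumed to be packed into a flat vector of size state_dim."""
    def __init__(self, state_dim, k=5, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, k))

    def forward(self, state):
        return torch.distributions.Categorical(logits=self.net(state))

def reinforce_update(policy, optimizer, log_probs, reward):
    """One REINFORCE step: ascend v_t * grad log Phi_theta(a_t|s_t) by
    minimizing the negative, with v_t set to the terminal reward m_Gamma
    for every action of the episode (see the implementation note below)."""
    loss = -(reward * torch.stack(log_probs)).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# usage sketch (illustrative):
#   policy = CurriculumPolicy(state_dim=20)
#   optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
#   dist = policy(state); a = dist.sample(); log_probs.append(dist.log_prob(a))
#   after Gamma validation rounds: reinforce_update(policy, optimizer, log_probs, m_gamma)
```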
In our implementation, $v_t$ is computed as the terminal reward $m_\Gamma $. <<</Adaptive Multi-curricula Learning>>> <<</Curriculum Dialogue Learning>>> <<<Experiments>>> <<<Experiment Settings>>> We perform experiments using the following state-of-the-art models: (1) SEQ2SEQ: a sequence-to-sequence model with attention mechanisms BIBREF21, (2) CVAE: a conditional variational auto-encoder model with KL-annealing and a BOW loss BIBREF2, (3) Transformer: an encoder-decoder architecture relying solely on attention mechanisms BIBREF22, (4) HRED: a generalized sequence-to-sequence model with the hierarchical RNN encoder BIBREF23, (5) DialogWAE: a conditional Wasserstein auto-encoder, which models the distribution of data by training a GAN within the latent variable space BIBREF6. We adopt several standard metrics widely used in existing works to measure the performance of dialogue generation models, including BLEU BIBREF24, embedding-based metrics (Average, Extrema, Greedy and Coherence) BIBREF25, BIBREF26, entropy-based metrics (Ent-{1,2}) BIBREF0 and distinct metrics (Dist-{1,2,3} and Intra-{1,2,3}) BIBREF1, BIBREF6. <<</Experiment Settings>>> <<<Implementation and Reproducibility>>> Our experiments are performed using ParlAI BIBREF27. Regarding model implementations, we employ a 2-layer bidirectional LSTM as the encoder and a unidirectional one as the decoder for the SEQ2SEQ and CVAE. The hidden size is set to 512, and the latent size is set to 64 for CVAE. For the Transformer, the hidden size, attention heads and number of hidden layers are set to 512, 8 and 6, respectively. In terms of HRED and DialogWAE, the utterance encoder is a bidirectional GRU with 512 hidden units in each direction. The context encoder and decoder are both GRUs with 512 hidden units. Regarding the curriculum length $T$, we set its value in the following manner: we train the baseline model using the vanilla training procedure and compute the number of training steps it takes to reach approximately 110% of its final loss value. We then set $T$ to this value. Each model is trained using two protocols: the vanilla training procedure without using any curriculum and our proposed adaptive multi-curricula learning procedure, keeping other configurations the same. <<</Implementation and Reproducibility>>> <<<Overall Performance and Human Evaluation>>> The automatic evaluation results of our proposed multi-curricula learning framework and the comparison models are listed in Table TABREF21. Compared with the vanilla training procedure, our curriculum learning framework 1) brings solid improvements for all the five dialogue models regarding almost all the evaluation metrics, 2) achieves competitive performance across three datasets, affirming the superiority and general applicability of our proposed framework. We also notice that the relative improvements of Distinct on OpenSubtitles are much larger (up to 122.46%) than the other two experiment datasets. We conjecture that the OpenSubtitles, with extremely uneven-complexity dialogue samples, benefits more from the multi-curricula learning paradigm. We conduct a human evaluation to validate the effectiveness of the proposed multi-curricula learning framework. We employ the DailyDialog as the evaluation corpus since it is closer to our daily conversations and easier for humans to make the judgment. 
We randomly sampled 100 cases from the test set and compared the generated responses of the models trained with the vanilla learning procedure and the multi-curricula learning framework. Three annotators, who have no knowledge about which system the response is from, are then required to evaluate among win (response$_1$ is better), loss (response$_2$ is better) and tie (they are equally good or bad) independently, considering four aspects: coherence, logical consistency, fluency and diversity. Cases with different rating results are counted as “tie”. Table TABREF25 reveals the results of the subjective evaluation. We observe that our multi-curricula learning framework outperforms the vanilla training method on all the five dialogue models and the kappa scores indicate that the annotators came to a fair agreement in the judgment. We checked the cases on which the vanilla training method loses to our multi-curricula learning method and found that the vanilla training method usually leads to irrelevant, generic and repetitive responses, while our method effectively alleviates such defects. <<</Overall Performance and Human Evaluation>>> <<<Model Analysis>>> <<<Single vs Multi-curricula>>> To further glean the insights regarding the effects of the five conversational attributes on the proposed learning framework, we conduct the ablation test using the SEQ2SEQ model by only exploiting a single attribute during the curriculum learning. Table TABREF26 reports the ablation test results on the DailyDialog. We observe that the curriculum learning leads to consistent performance improvements, even with one single conversational attribute. When applying the multi-curricula learning method to the model, we observe the nearly best performance. <<</Single vs Multi-curricula>>> <<<Effects of Adaptive Multi-curricula Learning>>> Adaptive multi-curricula learning enables the model to choose different curricula at different learning stages according to the learning status of the underlying model. As shown in Table TABREF27, we notice the performance drops when replacing the RL-based curriculum policy with the random policy, indicating that choosing different curricula according to the learning status of the model benefits the model training. When training the model with anti-curriculum learning, i.e., feeding examples to the model in the complex-to-easy manner, we also observe consistent performance decreases, affirming the effectiveness of the easy-to-complex learning manner. <<</Effects of Adaptive Multi-curricula Learning>>> <<<Learning Efficiency>>> Figure FIGREF28 shows comparative results when training the SEQ2SEQ model on DailyDialog with different training protocols. As shown in Figure FIGREF28, our training method accelerates the learning effectively and consistently outperforms the baseline by a large margin in most cases. <<</Learning Efficiency>>> <<<Multi-curricula Learning Route>>> To glean insights on how the proposed adaptive multi-curricula learning framework performs, we present the choosing curriculum distributions $\pi (a_t|s_t)$ during the model learning in Figure FIGREF29. We notice that the model focuses more on the curriculum of “query-relatedness” at the initial learning stage. As the learning proceeds, the model gradually turns its attention to other curricula. At the final stage, the model pays more attention to the “model confidence” curriculum. Such dynamic learning route is quite similar to the human learning behavior. 
<<</Multi-curricula Learning Route>>> <<<Examples with Different Learning Frequencies>>> As shown in Table TABREF30, the most frequently learnt examples are comprehensively far better than those seldom learnt examples, which exhibits the effectiveness of the adaptive multi-curricula learning framework. <<</Examples with Different Learning Frequencies>>> <<</Model Analysis>>> <<</Experiments>>> <<<Related Work>>> Neural dialogue generation. Neural generation models for dialogue, despite their ubiquity in current research, are still far from the real-world applications. Previous approaches enhancing neural dialogue generation models mainly focus on the learning systems by incorporating extra information to the dialogue models such as relevant dialogue history BIBREF5, topics BIBREF28, emotions BIBREF3, out-sourcing knowledge BIBREF4 or exemplars BIBREF29. Latent variables BIBREF0, BIBREF2 also benefit the model with more diverse response generations. In contrast with the previous researches, which pay most attention to the underlying dialogue models, in this work, we concentrate on the dialogue learning process and investigate how the performance of existing dialogue models can be improved on the conversation corpora with varying levels of complexity, by simply adapting the training protocols. BIBREF30 attributed the generic/uninteresting responses to the high-entropy utterances in the training set and proposed to improve dataset quality through data filtering. Though straightforward, the filtering threshold need be carefully chosen to prevent the data size decreasing too much. BIBREF8, BIBREF31 proposed to investigate instance weighting into dialogue systems. However, it is difficult to accurately define the “weight” of an example in conversation systems, since the dialogue data is of high diversity and complexity. Our proposed adaptive multi-curricula learning framework, concentrating on different curricula at evolving learning process according to the learning status of the underlying model, enables dialogue systems gradually proceed from easy to more complex samples in training and thus efficiently improves the response quality. Curriculum learning in NLP. BIBREF18 examined curriculum learning and demonstrated empirically that such curriculum approaches indeed help decrease training times and sometimes even improve generalization. BIBREF32 managed curriculum learning as an optimization problem. Curriculum learning has also been applied to many NLP tasks. To name a few, BIBREF10 applied self-paced learning for neural question answering. BIBREF33 proposed a curriculum learning based natural answer generation framework, dealing with low-quality QA-pairs first and then gradually learn more complete answers. BIBREF34 proposed curriculum pointer-generator networks for reading comprehension over long narratives. BIBREF9 applied curriculum learning for neural machine translation (NMT), aiming to reduce the need for specialized training heuristics and boost the performance of existing NMT systems. In our work, instead of organizing the curriculum only from a single aspect, we provide an adaptive multi-curricula dialogue learning framework, grounding our analysis on five conversation attributes regarding the dialogue complexity. <<</Related Work>>> <<<Conclusion>>> In this paper, we propose an adaptive multi-curricula dialogue learning framework, to enable the dialogue models to gradually proceed from easy samples to more complex ones in training. 
We first define and analyze five conversational attributes regarding the complexity and easiness of dialogue samples, and then present an adaptive multi-curricula learning framework, which chooses different curricula at different training stages according to the learning status of the model. Extensive experiments conducted on three large-scale datasets and five state-of-the-art conversation models show that our proposed learning framework is able to boost the performance of existing dialogue systems. <<</Conclusion>>> <<</Title>>>
{ "references": [ "BLEU,embedding-based metrics (Average, Extrema, Greedy and Coherence),, entropy-based metrics (Ent-{1,2}),distinct metrics (Dist-{1,2,3} and Intra-{1,2,3})" ], "type": "extractive" }
2003.00639
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What state of the art models were used in experiments? Context: <<<Title>>> Learning from Easy to Complex: Adaptive Multi-curricula Learning for Neural Dialogue Generation <<<Abstract>>> Current state-of-the-art neural dialogue systems are mainly data-driven and are trained on human-generated responses. However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly. The noise and uneven complexity of query-response pairs impede the learning efficiency and effects of the neural dialogue generation models. What is more, so far, there are no unified dialogue complexity measurements, and the dialogue complexity embodies multiple aspects of attributes---specificity, repetitiveness, relevance, etc. Inspired by human behaviors of learning to converse, where children learn from easy dialogues to complex ones and dynamically adjust their learning progress, in this paper, we first analyze five dialogue attributes to measure the dialogue complexity in multiple perspectives on three publicly available corpora. Then, we propose an adaptive multi-curricula learning framework to schedule a committee of the organized curricula. The framework is established upon the reinforcement learning paradigm, which automatically chooses different curricula at the evolving learning process according to the learning status of the neural dialogue generation model. Extensive experiments conducted on five state-of-the-art models demonstrate its learning efficiency and effectiveness with respect to 13 automatic evaluation metrics and human judgments. <<</Abstract>>> <<<Introduction>>> Teaching machines to converse with humans naturally and engagingly is a fundamentally interesting and challenging problem in AI research. Many contemporary state-of-the-art approaches BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 for dialogue generation follow the data-driven paradigm: trained on a plethora of query-response pairs, the model attempts to mimic human conversations. As a data-driven approach, the quality of generated responses in neural dialogue generation heavily depends on the training data. As such, in order to train a robust and well-behaved model, most works obtain large-scale query-response pairs by crawling human-generated conversations from publicly available sources such as OpenSubtitles BIBREF7. However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly BIBREF8. Table TABREF1 shows samples drawn from OpenSubtitles BIBREF7, which contains millions of human-human conversations converted from movie transcripts. The response of the third sample “Yurakutei kikuhiko.” looks quite strange in terms of the given query, while the first sample is clearly easier to learn. The noise and uneven complexity of query-response pairs impede the learning efficiency and effects of the neural dialogue generation models. Babies learn to speak by first imitating easy and exact utterances repeatedly taught by their patient parents. As children grow up, they learn grade by grade, from simple conversations to more complex ones. 
Inspired by such human behaviors of learning to converse, in this paper, we introduce curriculum learning to bring the neural dialogue model with easy-to-complex learning curriculum, where the model first learns from easy conversations and then gradually manages more complicated dialogues. Nevertheless, organizing a curriculum with increasing difficulty faces insurmountable obstacles: 1) automatic evaluation of dialogue complexity is a non-trivial task. BIBREF9 defined the difficulty for the training examples with respect to the sentence length and word rarity in neural machine translation. BIBREF10 expressed the difficulty regarding the value of the objective function. So far, there is no unified approach in measuring dialogue complexity. 2) Unlike the single metric of complexity in other tasks, dialogue complexity embodies multiple aspects of attributes BIBREF11—the specificity and repetitiveness of the response, the relevance between the query and the response, etc. As such, in this paper, we study the dialogue distributions along five aspects of attributes to gather multiple perspectives on dialogue complexity, resulting with five curricula accordingly. Conventional curriculum learning organizes the training samples into one curriculum, whereas we employ multiple curricula for dialogue learning. Enlightened by the phenomenon that children usually adjust the learning focus of multiple curricula dynamically in order to acquire a good mark, we further propose an adaptive multi-curricula learning framework, established upon the reinforcement learning paradigm, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model. Detailed analysis and experiments demonstrate that the proposed framework effectively increases the learning efficiency and gains better performances on five state-of-the-art dialogue generation models regarding three publicly available conversational corpora. Code for this work is available on https://github.com/hengyicai/Adaptive_Multi-curricula_Learning_for_Dialog. <<</Introduction>>> <<<Curriculum Plausibility>>> Intuitively, a well-organized curriculum should provide the model learning with easy dialogues first, and then gradually increase the curriculum difficulty. However, currently, there is no unified approach for dialogue complexity evaluation, where the complexity involves multiple aspects of attributes. In this paper, we prepare the syllabus for dialogue learning with respect to five dialogue attributes. To ensure the universality and general applicability of the curriculum, we perform an in-depth investigation on three publicly available conversation corpora, PersonaChat BIBREF12, DailyDialog BIBREF13 and OpenSubtitles BIBREF7, consisting of 140 248, 66 594 and 358 668 real-life conversation samples, respectively. <<<Conversational Attributes>>> <<<Specificity>>> A notorious problem for neural dialogue generation model is that the model is prone to generate generic responses. The most unspecific responses are easy to learn, but are short and meaningless, while the most specific responses, consisting of too many rare words, are too difficult to learn, especially at the initial learning stage. Following BIBREF11, we measure the specificity of the response in terms of each word $w$ using Normalized Inverse Document Frequency (NIDF, ranging from 0 to 1): where $\text{IDF}(w)=\log {\frac{N_r}{N_w}}$. 
$N_r$ is the number of responses in the training set and $N_w$ is the number of those responses that contain $w$. $\text{idf}_{min}$ and $\text{idf}_{max}$ are the minimum and maximum IDFs, taken over all words in the vocabulary. The specificity of a response $r$ is measured as the mean NIDF of the words in $r$. <<</Specificity>>> <<<Repetitiveness>>> Repetitive responses are easy to generate in current auto-regressive response decoding, where response generation loops frequently, whereas diverse and informative responses are much more complicated for neural dialogue generation. We measure the repetitiveness of a response $r$ as: where $I(\cdot )$ is an indicator function that takes the value 1 when $w_i \in \lbrace w_0, \cdots , w_{i-1}\rbrace $ is true and 0 otherwise. <<</Repetitiveness>>> <<<Query-relatedness>>> A conversation is considered to be coherent if the response correlates well with the given query. For example, given a query “I like to paint”, the response “What kind of things do you paint?” is more relevant and easier to learn than another loosely-coupled response “Do you have any pets?”. Following previous work BIBREF14, we measure the query-relatedness using the cosine similarities between the query and its corresponding response in the embedding space: $\textit {cos\_sim}(\textit {sent\_emb}(c), \textit {sent\_emb}(r))$, where $c$ is the query and $r$ is the response. The sentence embedding is computed by taking the average word embedding weighted by the smooth inverse frequency $\textit {sent\_emb}(e)=\frac{1}{|e|}\sum _{w\in {}e}\frac{0.001}{0.001 + p(w)}emb(w)$ of words BIBREF15, where $emb(w)$ and $p(w)$ are the embedding and the probability of word $w$ respectively. <<</Query-relatedness>>> <<<Continuity>>> A coherent response not only responds to the given query, but also triggers the next utterance. An interactive conversation is carried out for multiple rounds and a response in the current turn also acts as the query in the next turn. As such, we introduce the continuity metric, which is similar to the query-relatedness metric, to assess the continuity of a response $r$ with respect to the subsequent utterance $u$, by measuring the cosine similarities between them. <<</Continuity>>> <<<Model Confidence>>> Despite the heuristic dialogue attributes, we further introduce the model confidence as an attribute, which distinguishes the easy-learnt samples from the under-learnt samples in terms of the model learning ability. A pretrained neural dialogue generation model assigns a relatively higher confidence probability for the easy-learnt samples than the under-learnt samples. Inspired by BIBREF16, BIBREF17, we employ the negative loss value of a dialogue sample under the pretrained model as the model confidence measure, indicating whether a sampled response is easy to be generated. Here we choose the attention-based sequence-to-sequence architecture with a cross-entropy objective as the underlying dialogue model. <<</Model Confidence>>> <<</Conversational Attributes>>> <<<Dialogue Analysis>>> <<<Distributions among Attributes>>> The distributions of the data samples regarding the aforementioned five attributes are shown in Figure FIGREF11. Although the attribute score distributions on three corpora are similar, they also have disparities: 1) Outliers frequently appear among all the distributions, which exhibits the uneven dialogue complexity. 
2) In terms of query-relatedness and continuity, to our surprise, the medians of the two distributions on PersonaChat are obviously smaller than the corresponding distributions on DailyDialog and OpenSubtitles. PersonaChat is manually created by crowd-sourcing, while DailyDialog and OpenSubtitles are collected from almost real-life conversations. 3) With respect to the model confidence (the negative loss value), the median of PersonaChat is relatively smaller, which illustrates that it is more difficult for the neural dialogue generation model to learn from PersonaChat. <<</Distributions among Attributes>>> <<<Attributes Independence>>> So far, we have analyzed five dialogue attributes. A question might be raised that how well the proposed attributes correlate with each other. To validate the correlations of these conversation attributes, we summarize the statistics of the Kendall $\tau $ correlations for each dataset in Table TABREF12. We find that these attributes, in general, show little correlations with each other. This partially validates that dialogue complexity involves multiple perspectives. <<</Attributes Independence>>> <<</Dialogue Analysis>>> <<</Curriculum Plausibility>>> <<<Curriculum Dialogue Learning>>> We propose an adaptive multi-curricula learning framework to accelerate dialogue learning and improve the performance of the neural dialogue generation model. <<<Single Curriculum Dialogue Learning>>> We first illustrate how a dialogue generation model exploits the curriculum by taking single curriculum dialogue learning as an example, where the curriculum is arranged by sorting each sample in the dialogue training set $\mathcal {D}_{train}$ according to one attribute. Then, at training time step $t$, a batch of training examples is sampled from the top $f(t)$ portions of the total sorted training samples, where the progressing function $f(t)$ determines the learning rate of the curriculum. Following BIBREF9, we define the progressing function $f(t)$ as $f(t)\triangleq min(1, \sqrt{t\frac{1-c_0^2}{T} + c_0^2})$, where $c_0 > 0$ is set to 0.01 and $T$ is the duration of curriculum learning. At the early stage of the training process, the neural dialogue generation model learns from the samples drawing from the front part of the curriculum. As the advance of the curriculum, the difficulty gradually increases, as more complex training examples appear. After training $T$ batches, each batch of training instances is drawn from the whole training set, which is same as the conventional training procedure without a curriculum. <<</Single Curriculum Dialogue Learning>>> <<<Adaptive Multi-curricula Learning>>> Dialogue complexity consists of multi-perspectives of attributes. We extend the naive single curriculum learning into the multi-curricula setting, where we provide the neural dialogue generation model with five different learning curricula, and each curriculum is prepared by ordering the training set in terms of the corresponding attribute metric accordingly. Scheduling multiple curricula in the same learning pace is obviously inappropriate. Enlightened by the phenomenon that children usually adjust the learning progress of multiple curricula dynamically in order to acquire a good mark, we further introduce an adaptive multi-curricula learning framework, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model. 
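To make the single-curriculum scheduling above concrete before turning to the adaptive extension, the sketch below shows the progressing function $f(t)\triangleq min(1, \sqrt{t\frac{1-c_0^2}{T} + c_0^2})$ and top-$f(t)$ batch sampling. It is a minimal illustration rather than the released implementation: it assumes the training set is already sorted from easy to hard by one attribute, and the names `sorted_samples`, `curriculum_batch` and the value of `T` are illustrative.

```python
import math
import random

C0 = 0.01  # initial competence c_0, as set in the text above


def progress(t: int, T: int, c0: float = C0) -> float:
    """Progressing function f(t): fraction of the sorted curriculum exposed at step t."""
    return min(1.0, math.sqrt(t * (1.0 - c0 ** 2) / T + c0 ** 2))


def curriculum_batch(sorted_samples, t, T, batch_size=32):
    """Draw a mini-batch from the top f(t) portion of an easy-to-hard sorted training set."""
    pool = sorted_samples[: max(1, int(progress(t, T) * len(sorted_samples)))]
    return random.sample(pool, k=min(batch_size, len(pool)))


if __name__ == "__main__":
    # Placeholder data: 10k samples already sorted by one attribute (easy first).
    sorted_samples = [f"dialogue_{i}" for i in range(10_000)]
    T = 5_000  # hypothetical curriculum duration in training steps
    for t in (1, 1_000, 5_000, 8_000):
        print(t, round(progress(t, T), 3), len(curriculum_batch(sorted_samples, t, T)))
```

Once $t > T$ the sampling pool covers the whole training set, matching the conventional training procedure without a curriculum described above.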
The adaptive multi-curricula learning framework is established upon the reinforcement learning (RL) paradigm. Figure FIGREF18 illustrates the overall learning process. The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs. A positive reward is expected if a multi-curricula scheduling action $a_t$ brings improvements on the model's performance, and the current mini-batch of training samples is drawn consulting with the scheduling action $a_t$. The neural dialogue generation model learns from those mini-batches, resulting with a new learning status $s_{t+1}$. The adaptive multi-curricula learning framework is optimized to maximize the reward. Such learning process loops continuously until the performance of the neural dialogue generation model converges. More specifically, the learning status of the dialogue model is represented as the state. Similar to other curriculum learning framework BIBREF18, BIBREF19, the learning status consists of several features, including the passed mini-batch number, the average historical training loss, the loss value on the training data, the margin value of predicted probabilities and the last validation metric values. To enable the proposed framework to be aware of the learning progress $\varrho _i$ regarding each attribute $i$, we also exploit $\varrho =\lbrace \varrho _0, \varrho _1, \cdots , \varrho _{k-1}\rbrace $ for state representations, where $k$ stands for the number of curricula, here $k=5$, and $\varrho _i$ can be simply measured as the learning steps on the attribute $i$. The multi-curricula learning framework samples a scheduling action $a_t$ per step by its policy $\Phi _\theta (a|s)$ with parameters $\theta $ to be learnt, and the scheduling action $a_t \in \lbrace 0, 1, \cdots , k-1\rbrace $ chooses one of the curricula. Then, a mini-batch of dialogue instances is sampled from the top $f(\varrho _i)$ portions of the chosen curriculum. The dialogue model is validated every $\Gamma $ training steps and the curriculum policy is updated at $\Gamma $-round intervals according to a reward $m_\Gamma $. To accelerate the neural dialogue learning, $m_\Gamma $ is defined as the ratio of two consecutive performance deviations on a held-out validation set: $m_\Gamma =\frac{\delta _{\Gamma }}{\delta _{\Gamma _{\text{prev}}}} - 1$. The performance deviation $\delta _{\Gamma }$ is calculated in terms of 13 automatic evaluation metrics $\lbrace \xi _1, \xi _2, \cdots , \xi _{13}\rbrace $ used in the experiments: where $\xi _i^{\Gamma }$ is the evaluation score of metric $i$ computed at the current validation turn and $\xi _i^{\Gamma _{\text{prev}}}$ is computed at the previous validation turn. Each score is normalized into $[0,1]$. The curriculum policy is trained by maximizing the expected reward: $J(\theta )=\mathbb {E}_{\Phi _\theta (a|s)}[M(s,a)]$, where $M(s,a)$ is the state-action value function. Since $M(s,a)$ is non-differentiable w.r.t. $\theta $, in this work, we use REINFORCE BIBREF20, a likelihood ratio policy gradient algorithm to optimize $J(\theta )$ based on the gradient: where $v_t$ is the sampled estimation of reward $M(s_t, a_t)$ from one episode execution of the policy $\Phi _\theta (a|s)$. 
In our implementation, $v_t$ is computed as the terminal reward $m_\Gamma $. <<</Adaptive Multi-curricula Learning>>> <<</Curriculum Dialogue Learning>>> <<<Experiments>>> <<<Experiment Settings>>> We perform experiments using the following state-of-the-art models: (1) SEQ2SEQ: a sequence-to-sequence model with attention mechanisms BIBREF21, (2) CVAE: a conditional variational auto-encoder model with KL-annealing and a BOW loss BIBREF2, (3) Transformer: an encoder-decoder architecture relying solely on attention mechanisms BIBREF22, (4) HRED: a generalized sequence-to-sequence model with the hierarchical RNN encoder BIBREF23, (5) DialogWAE: a conditional Wasserstein auto-encoder, which models the distribution of data by training a GAN within the latent variable space BIBREF6. We adopt several standard metrics widely used in existing works to measure the performance of dialogue generation models, including BLEU BIBREF24, embedding-based metrics (Average, Extrema, Greedy and Coherence) BIBREF25, BIBREF26, entropy-based metrics (Ent-{1,2}) BIBREF0 and distinct metrics (Dist-{1,2,3} and Intra-{1,2,3}) BIBREF1, BIBREF6. <<</Experiment Settings>>> <<<Implementation and Reproducibility>>> Our experiments are performed using ParlAI BIBREF27. Regarding model implementations, we employ a 2-layer bidirectional LSTM as the encoder and a unidirectional one as the decoder for the SEQ2SEQ and CVAE. The hidden size is set to 512, and the latent size is set to 64 for CVAE. For the Transformer, the hidden size, attention heads and number of hidden layers are set to 512, 8 and 6, respectively. In terms of HRED and DialogWAE, the utterance encoder is a bidirectional GRU with 512 hidden units in each direction. The context encoder and decoder are both GRUs with 512 hidden units. Regarding the curriculum length $T$, we set its value in the following manner: we train the baseline model using the vanilla training procedure and compute the number of training steps it takes to reach approximately 110% of its final loss value. We then set $T$ to this value. Each model is trained using two protocols: the vanilla training procedure without using any curriculum and our proposed adaptive multi-curricula learning procedure, keeping other configurations the same. <<</Implementation and Reproducibility>>> <<<Overall Performance and Human Evaluation>>> The automatic evaluation results of our proposed multi-curricula learning framework and the comparison models are listed in Table TABREF21. Compared with the vanilla training procedure, our curriculum learning framework 1) brings solid improvements for all the five dialogue models regarding almost all the evaluation metrics, 2) achieves competitive performance across three datasets, affirming the superiority and general applicability of our proposed framework. We also notice that the relative improvements of Distinct on OpenSubtitles are much larger (up to 122.46%) than the other two experiment datasets. We conjecture that the OpenSubtitles, with extremely uneven-complexity dialogue samples, benefits more from the multi-curricula learning paradigm. We conduct a human evaluation to validate the effectiveness of the proposed multi-curricula learning framework. We employ the DailyDialog as the evaluation corpus since it is closer to our daily conversations and easier for humans to make the judgment. 
We randomly sampled 100 cases from the test set and compared the generated responses of the models trained with the vanilla learning procedure and the multi-curricula learning framework. Three annotators, who have no knowledge about which system the response is from, are then required to evaluate among win (response$_1$ is better), loss (response$_2$ is better) and tie (they are equally good or bad) independently, considering four aspects: coherence, logical consistency, fluency and diversity. Cases with different rating results are counted as “tie”. Table TABREF25 reveals the results of the subjective evaluation. We observe that our multi-curricula learning framework outperforms the vanilla training method on all the five dialogue models and the kappa scores indicate that the annotators came to a fair agreement in the judgment. We checked the cases on which the vanilla training method loses to our multi-curricula learning method and found that the vanilla training method usually leads to irrelevant, generic and repetitive responses, while our method effectively alleviates such defects. <<</Overall Performance and Human Evaluation>>> <<<Model Analysis>>> <<<Single vs Multi-curricula>>> To further glean the insights regarding the effects of the five conversational attributes on the proposed learning framework, we conduct the ablation test using the SEQ2SEQ model by only exploiting a single attribute during the curriculum learning. Table TABREF26 reports the ablation test results on the DailyDialog. We observe that the curriculum learning leads to consistent performance improvements, even with one single conversational attribute. When applying the multi-curricula learning method to the model, we observe the nearly best performance. <<</Single vs Multi-curricula>>> <<<Effects of Adaptive Multi-curricula Learning>>> Adaptive multi-curricula learning enables the model to choose different curricula at different learning stages according to the learning status of the underlying model. As shown in Table TABREF27, we notice the performance drops when replacing the RL-based curriculum policy with the random policy, indicating that choosing different curricula according to the learning status of the model benefits the model training. When training the model with anti-curriculum learning, i.e., feeding examples to the model in the complex-to-easy manner, we also observe consistent performance decreases, affirming the effectiveness of the easy-to-complex learning manner. <<</Effects of Adaptive Multi-curricula Learning>>> <<<Learning Efficiency>>> Figure FIGREF28 shows comparative results when training the SEQ2SEQ model on DailyDialog with different training protocols. As shown in Figure FIGREF28, our training method accelerates the learning effectively and consistently outperforms the baseline by a large margin in most cases. <<</Learning Efficiency>>> <<<Multi-curricula Learning Route>>> To glean insights on how the proposed adaptive multi-curricula learning framework performs, we present the choosing curriculum distributions $\pi (a_t|s_t)$ during the model learning in Figure FIGREF29. We notice that the model focuses more on the curriculum of “query-relatedness” at the initial learning stage. As the learning proceeds, the model gradually turns its attention to other curricula. At the final stage, the model pays more attention to the “model confidence” curriculum. Such dynamic learning route is quite similar to the human learning behavior. 
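The curriculum-choosing distribution $\pi (a_t|s_t)$ discussed above is produced by the scheduling policy described earlier, which is trained with REINFORCE on the terminal reward $m_\Gamma =\frac{\delta _{\Gamma }}{\delta _{\Gamma _{\text{prev}}}} - 1$. The sketch below is a simplified stand-in, not the authors' code: a softmax-linear policy over the $k=5$ curricula replaces whatever parameterization the released implementation uses, and the state is assumed to be a fixed-length feature vector built from the training statistics listed in this context.

```python
import numpy as np

K = 5  # number of curricula (one per conversational attribute)


class CurriculumPolicy:
    """Softmax-linear scheduling policy over K curricula, updated with REINFORCE (illustrative)."""

    def __init__(self, state_dim: int, seed: int = 0):
        self.W = np.zeros((K, state_dim))
        self.rng = np.random.default_rng(seed)

    def probs(self, s: np.ndarray) -> np.ndarray:
        logits = self.W @ s
        z = np.exp(logits - logits.max())
        return z / z.sum()

    def act(self, s: np.ndarray) -> int:
        """Sample a scheduling action a_t ~ pi(a|s): the index of the curriculum to draw from."""
        return int(self.rng.choice(K, p=self.probs(s)))

    def update(self, s: np.ndarray, a: int, v: float, lr: float = 0.01) -> None:
        """REINFORCE step: W += lr * v * grad log pi(a|s), with v the terminal reward m_Gamma."""
        p = self.probs(s)
        for k in range(K):
            self.W[k] += lr * v * ((1.0 if k == a else 0.0) - p[k]) * s


def terminal_reward(delta_now: float, delta_prev: float) -> float:
    """m_Gamma = delta_Gamma / delta_Gamma_prev - 1 (small epsilon guards against division by zero)."""
    return delta_now / max(delta_prev, 1e-8) - 1.0
```

In use, the policy samples a curriculum per training step, the dialogue model is validated every $\Gamma $ steps, and the policy weights are updated with the resulting reward, as described earlier in this context.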
<<</Multi-curricula Learning Route>>> <<<Examples with Different Learning Frequencies>>> As shown in Table TABREF30, the most frequently learnt examples are comprehensively better than the seldom learnt ones, which exhibits the effectiveness of the adaptive multi-curricula learning framework. <<</Examples with Different Learning Frequencies>>> <<</Model Analysis>>> <<</Experiments>>> <<<Related Work>>> Neural dialogue generation. Neural generation models for dialogue, despite their ubiquity in current research, are still far from real-world applications. Previous approaches to enhancing neural dialogue generation models mainly focus on the learning systems, incorporating extra information into the dialogue models such as relevant dialogue history BIBREF5, topics BIBREF28, emotions BIBREF3, out-sourcing knowledge BIBREF4 or exemplars BIBREF29. Latent variables BIBREF0, BIBREF2 also help the model generate more diverse responses. In contrast with previous research, which pays most attention to the underlying dialogue models, in this work we concentrate on the dialogue learning process and investigate how the performance of existing dialogue models can be improved on conversation corpora with varying levels of complexity, by simply adapting the training protocols. BIBREF30 attributed generic/uninteresting responses to the high-entropy utterances in the training set and proposed to improve dataset quality through data filtering. Though straightforward, the filtering threshold needs to be chosen carefully to prevent the dataset from shrinking too much. BIBREF8, BIBREF31 proposed to introduce instance weighting into dialogue systems. However, it is difficult to accurately define the “weight” of an example in conversation systems, since dialogue data is of high diversity and complexity. Our proposed adaptive multi-curricula learning framework, which concentrates on different curricula at different stages of the learning process according to the learning status of the underlying model, enables dialogue systems to gradually proceed from easy to more complex samples in training and thus efficiently improves response quality. Curriculum learning in NLP. BIBREF18 examined curriculum learning and demonstrated empirically that such curriculum approaches indeed help decrease training times and sometimes even improve generalization. BIBREF32 cast curriculum learning as an optimization problem. Curriculum learning has also been applied to many NLP tasks. To name a few, BIBREF10 applied self-paced learning to neural question answering. BIBREF33 proposed a curriculum learning based natural answer generation framework, dealing with low-quality QA pairs first and then gradually learning more complete answers. BIBREF34 proposed curriculum pointer-generator networks for reading comprehension over long narratives. BIBREF9 applied curriculum learning to neural machine translation (NMT), aiming to reduce the need for specialized training heuristics and boost the performance of existing NMT systems. In our work, instead of organizing the curriculum from a single aspect only, we provide an adaptive multi-curricula dialogue learning framework, grounding our analysis in five conversation attributes regarding dialogue complexity. <<</Related Work>>> <<<Conclusion>>> In this paper, we propose an adaptive multi-curricula dialogue learning framework to enable dialogue models to gradually proceed from easy samples to more complex ones in training.
We first define and analyze five conversational attributes regarding the complexity and easiness of dialogue samples, and then present an adaptive multi-curricula learning framework, which chooses different curricula at different training stages according to the learning status of the model. Extensive experiments conducted on three large-scale datasets and five state-of-the-art conversation models show that our proposed learning framework is able to boost the performance of existing dialogue systems. <<</Conclusion>>> <<</Title>>>
{ "references": [ "SEQ2SEQ,CVAE,Transformer,HRED,DialogWAE" ], "type": "extractive" }
2003.00639
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What five dialogue attributes were analyzed? Context: <<<Title>>> Learning from Easy to Complex: Adaptive Multi-curricula Learning for Neural Dialogue Generation <<<Abstract>>> Current state-of-the-art neural dialogue systems are mainly data-driven and are trained on human-generated responses. However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly. The noise and uneven complexity of query-response pairs impede the learning efficiency and effects of the neural dialogue generation models. What is more, so far, there are no unified dialogue complexity measurements, and the dialogue complexity embodies multiple aspects of attributes---specificity, repetitiveness, relevance, etc. Inspired by human behaviors of learning to converse, where children learn from easy dialogues to complex ones and dynamically adjust their learning progress, in this paper, we first analyze five dialogue attributes to measure the dialogue complexity in multiple perspectives on three publicly available corpora. Then, we propose an adaptive multi-curricula learning framework to schedule a committee of the organized curricula. The framework is established upon the reinforcement learning paradigm, which automatically chooses different curricula at the evolving learning process according to the learning status of the neural dialogue generation model. Extensive experiments conducted on five state-of-the-art models demonstrate its learning efficiency and effectiveness with respect to 13 automatic evaluation metrics and human judgments. <<</Abstract>>> <<<Introduction>>> Teaching machines to converse with humans naturally and engagingly is a fundamentally interesting and challenging problem in AI research. Many contemporary state-of-the-art approaches BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 for dialogue generation follow the data-driven paradigm: trained on a plethora of query-response pairs, the model attempts to mimic human conversations. As a data-driven approach, the quality of generated responses in neural dialogue generation heavily depends on the training data. As such, in order to train a robust and well-behaved model, most works obtain large-scale query-response pairs by crawling human-generated conversations from publicly available sources such as OpenSubtitles BIBREF7. However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly BIBREF8. Table TABREF1 shows samples drawn from OpenSubtitles BIBREF7, which contains millions of human-human conversations converted from movie transcripts. The response of the third sample “Yurakutei kikuhiko.” looks quite strange in terms of the given query, while the first sample is clearly easier to learn. The noise and uneven complexity of query-response pairs impede the learning efficiency and effects of the neural dialogue generation models. Babies learn to speak by first imitating easy and exact utterances repeatedly taught by their patient parents. As children grow up, they learn grade by grade, from simple conversations to more complex ones. 
Inspired by such human behaviors of learning to converse, in this paper, we introduce curriculum learning to bring the neural dialogue model with easy-to-complex learning curriculum, where the model first learns from easy conversations and then gradually manages more complicated dialogues. Nevertheless, organizing a curriculum with increasing difficulty faces insurmountable obstacles: 1) automatic evaluation of dialogue complexity is a non-trivial task. BIBREF9 defined the difficulty for the training examples with respect to the sentence length and word rarity in neural machine translation. BIBREF10 expressed the difficulty regarding the value of the objective function. So far, there is no unified approach in measuring dialogue complexity. 2) Unlike the single metric of complexity in other tasks, dialogue complexity embodies multiple aspects of attributes BIBREF11—the specificity and repetitiveness of the response, the relevance between the query and the response, etc. As such, in this paper, we study the dialogue distributions along five aspects of attributes to gather multiple perspectives on dialogue complexity, resulting with five curricula accordingly. Conventional curriculum learning organizes the training samples into one curriculum, whereas we employ multiple curricula for dialogue learning. Enlightened by the phenomenon that children usually adjust the learning focus of multiple curricula dynamically in order to acquire a good mark, we further propose an adaptive multi-curricula learning framework, established upon the reinforcement learning paradigm, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model. Detailed analysis and experiments demonstrate that the proposed framework effectively increases the learning efficiency and gains better performances on five state-of-the-art dialogue generation models regarding three publicly available conversational corpora. Code for this work is available on https://github.com/hengyicai/Adaptive_Multi-curricula_Learning_for_Dialog. <<</Introduction>>> <<<Curriculum Plausibility>>> Intuitively, a well-organized curriculum should provide the model learning with easy dialogues first, and then gradually increase the curriculum difficulty. However, currently, there is no unified approach for dialogue complexity evaluation, where the complexity involves multiple aspects of attributes. In this paper, we prepare the syllabus for dialogue learning with respect to five dialogue attributes. To ensure the universality and general applicability of the curriculum, we perform an in-depth investigation on three publicly available conversation corpora, PersonaChat BIBREF12, DailyDialog BIBREF13 and OpenSubtitles BIBREF7, consisting of 140 248, 66 594 and 358 668 real-life conversation samples, respectively. <<<Conversational Attributes>>> <<<Specificity>>> A notorious problem for neural dialogue generation model is that the model is prone to generate generic responses. The most unspecific responses are easy to learn, but are short and meaningless, while the most specific responses, consisting of too many rare words, are too difficult to learn, especially at the initial learning stage. Following BIBREF11, we measure the specificity of the response in terms of each word $w$ using Normalized Inverse Document Frequency (NIDF, ranging from 0 to 1): where $\text{IDF}(w)=\log {\frac{N_r}{N_w}}$. 
$N_r$ is the number of responses in the training set and $N_w$ is the number of those responses that contain $w$. $\text{idf}_{min}$ and $\text{idf}_{max}$ are the minimum and maximum IDFs, taken over all words in the vocabulary. The specificity of a response $r$ is measured as the mean NIDF of the words in $r$. <<</Specificity>>> <<<Repetitiveness>>> Repetitive responses are easy to generate in current auto-regressive response decoding, where response generation loops frequently, whereas diverse and informative responses are much more complicated for neural dialogue generation. We measure the repetitiveness of a response $r$ as: where $I(\cdot )$ is an indicator function that takes the value 1 when $w_i \in \lbrace w_0, \cdots , w_{i-1}\rbrace $ is true and 0 otherwise. <<</Repetitiveness>>> <<<Query-relatedness>>> A conversation is considered to be coherent if the response correlates well with the given query. For example, given a query “I like to paint”, the response “What kind of things do you paint?” is more relevant and easier to learn than another loosely-coupled response “Do you have any pets?”. Following previous work BIBREF14, we measure the query-relatedness using the cosine similarities between the query and its corresponding response in the embedding space: $\textit {cos\_sim}(\textit {sent\_emb}(c), \textit {sent\_emb}(r))$, where $c$ is the query and $r$ is the response. The sentence embedding is computed by taking the average word embedding weighted by the smooth inverse frequency $\textit {sent\_emb}(e)=\frac{1}{|e|}\sum _{w\in {}e}\frac{0.001}{0.001 + p(w)}emb(w)$ of words BIBREF15, where $emb(w)$ and $p(w)$ are the embedding and the probability of word $w$ respectively. <<</Query-relatedness>>> <<<Continuity>>> A coherent response not only responds to the given query, but also triggers the next utterance. An interactive conversation is carried out for multiple rounds and a response in the current turn also acts as the query in the next turn. As such, we introduce the continuity metric, which is similar to the query-relatedness metric, to assess the continuity of a response $r$ with respect to the subsequent utterance $u$, by measuring the cosine similarities between them. <<</Continuity>>> <<<Model Confidence>>> Despite the heuristic dialogue attributes, we further introduce the model confidence as an attribute, which distinguishes the easy-learnt samples from the under-learnt samples in terms of the model learning ability. A pretrained neural dialogue generation model assigns a relatively higher confidence probability for the easy-learnt samples than the under-learnt samples. Inspired by BIBREF16, BIBREF17, we employ the negative loss value of a dialogue sample under the pretrained model as the model confidence measure, indicating whether a sampled response is easy to be generated. Here we choose the attention-based sequence-to-sequence architecture with a cross-entropy objective as the underlying dialogue model. <<</Model Confidence>>> <<</Conversational Attributes>>> <<<Dialogue Analysis>>> <<<Distributions among Attributes>>> The distributions of the data samples regarding the aforementioned five attributes are shown in Figure FIGREF11. Although the attribute score distributions on three corpora are similar, they also have disparities: 1) Outliers frequently appear among all the distributions, which exhibits the uneven dialogue complexity. 
2) In terms of query-relatedness and continuity, to our surprise, the medians of the two distributions on PersonaChat are obviously smaller than the corresponding distributions on DailyDialog and OpenSubtitles. PersonaChat is manually created by crowd-sourcing, while DailyDialog and OpenSubtitles are collected from almost real-life conversations. 3) With respect to the model confidence (the negative loss value), the median of PersonaChat is relatively smaller, which illustrates that it is more difficult for the neural dialogue generation model to learn from PersonaChat. <<</Distributions among Attributes>>> <<<Attributes Independence>>> So far, we have analyzed five dialogue attributes. A question might be raised that how well the proposed attributes correlate with each other. To validate the correlations of these conversation attributes, we summarize the statistics of the Kendall $\tau $ correlations for each dataset in Table TABREF12. We find that these attributes, in general, show little correlations with each other. This partially validates that dialogue complexity involves multiple perspectives. <<</Attributes Independence>>> <<</Dialogue Analysis>>> <<</Curriculum Plausibility>>> <<<Curriculum Dialogue Learning>>> We propose an adaptive multi-curricula learning framework to accelerate dialogue learning and improve the performance of the neural dialogue generation model. <<<Single Curriculum Dialogue Learning>>> We first illustrate how a dialogue generation model exploits the curriculum by taking single curriculum dialogue learning as an example, where the curriculum is arranged by sorting each sample in the dialogue training set $\mathcal {D}_{train}$ according to one attribute. Then, at training time step $t$, a batch of training examples is sampled from the top $f(t)$ portions of the total sorted training samples, where the progressing function $f(t)$ determines the learning rate of the curriculum. Following BIBREF9, we define the progressing function $f(t)$ as $f(t)\triangleq min(1, \sqrt{t\frac{1-c_0^2}{T} + c_0^2})$, where $c_0 > 0$ is set to 0.01 and $T$ is the duration of curriculum learning. At the early stage of the training process, the neural dialogue generation model learns from the samples drawing from the front part of the curriculum. As the advance of the curriculum, the difficulty gradually increases, as more complex training examples appear. After training $T$ batches, each batch of training instances is drawn from the whole training set, which is same as the conventional training procedure without a curriculum. <<</Single Curriculum Dialogue Learning>>> <<<Adaptive Multi-curricula Learning>>> Dialogue complexity consists of multi-perspectives of attributes. We extend the naive single curriculum learning into the multi-curricula setting, where we provide the neural dialogue generation model with five different learning curricula, and each curriculum is prepared by ordering the training set in terms of the corresponding attribute metric accordingly. Scheduling multiple curricula in the same learning pace is obviously inappropriate. Enlightened by the phenomenon that children usually adjust the learning progress of multiple curricula dynamically in order to acquire a good mark, we further introduce an adaptive multi-curricula learning framework, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model. 
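For reference, the attribute scores defined earlier in this context are inexpensive to compute from the raw corpus. The sketch below (placed here so the attribute definitions above read uninterrupted) is an illustration only, not the authors' code: the NIDF is assumed to be a min–max rescaling of IDF over the vocabulary (consistent with “ranging from 0 to 1”), repetitiveness is taken as the fraction of tokens already seen earlier in the response (an assumed normalization), and `emb` / `word_prob` are assumed word-embedding and unigram-probability lookups.

```python
import math
from collections import Counter

import numpy as np


def nidf_table(responses):
    """Per-word NIDF: IDF over training responses, min-max rescaled to [0, 1] (assumed form)."""
    n_r = len(responses)
    doc_freq = Counter(w for r in responses for w in set(r.split()))
    idf = {w: math.log(n_r / c) for w, c in doc_freq.items()}
    lo, hi = min(idf.values()), max(idf.values())
    return {w: (v - lo) / (hi - lo + 1e-12) for w, v in idf.items()}


def specificity(response, nidf):
    """Mean NIDF of the words in the response."""
    words = response.split()
    return sum(nidf.get(w, 0.0) for w in words) / max(len(words), 1)


def repetitiveness(response):
    """Fraction of tokens that already occurred earlier in the response."""
    words, seen, repeats = response.split(), set(), 0
    for w in words:
        repeats += w in seen
        seen.add(w)
    return repeats / max(len(words), 1)


def sent_emb(sentence, emb, word_prob, a=0.001):
    """Smooth-inverse-frequency weighted average of word embeddings."""
    vecs = [a / (a + word_prob(w)) * emb(w) for w in sentence.split()]
    return np.mean(vecs, axis=0)


def query_relatedness(query, response, emb, word_prob):
    """Cosine similarity between SIF sentence embeddings of the query and the response."""
    q, r = sent_emb(query, emb, word_prob), sent_emb(response, emb, word_prob)
    return float(q @ r / (np.linalg.norm(q) * np.linalg.norm(r) + 1e-12))
```

Continuity reuses the same cosine measure, applied to the response and the subsequent utterance instead of the query and the response.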
The adaptive multi-curricula learning framework is established upon the reinforcement learning (RL) paradigm. Figure FIGREF18 illustrates the overall learning process. The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs. A positive reward is expected if a multi-curricula scheduling action $a_t$ brings improvements on the model's performance, and the current mini-batch of training samples is drawn consulting with the scheduling action $a_t$. The neural dialogue generation model learns from those mini-batches, resulting with a new learning status $s_{t+1}$. The adaptive multi-curricula learning framework is optimized to maximize the reward. Such learning process loops continuously until the performance of the neural dialogue generation model converges. More specifically, the learning status of the dialogue model is represented as the state. Similar to other curriculum learning framework BIBREF18, BIBREF19, the learning status consists of several features, including the passed mini-batch number, the average historical training loss, the loss value on the training data, the margin value of predicted probabilities and the last validation metric values. To enable the proposed framework to be aware of the learning progress $\varrho _i$ regarding each attribute $i$, we also exploit $\varrho =\lbrace \varrho _0, \varrho _1, \cdots , \varrho _{k-1}\rbrace $ for state representations, where $k$ stands for the number of curricula, here $k=5$, and $\varrho _i$ can be simply measured as the learning steps on the attribute $i$. The multi-curricula learning framework samples a scheduling action $a_t$ per step by its policy $\Phi _\theta (a|s)$ with parameters $\theta $ to be learnt, and the scheduling action $a_t \in \lbrace 0, 1, \cdots , k-1\rbrace $ chooses one of the curricula. Then, a mini-batch of dialogue instances is sampled from the top $f(\varrho _i)$ portions of the chosen curriculum. The dialogue model is validated every $\Gamma $ training steps and the curriculum policy is updated at $\Gamma $-round intervals according to a reward $m_\Gamma $. To accelerate the neural dialogue learning, $m_\Gamma $ is defined as the ratio of two consecutive performance deviations on a held-out validation set: $m_\Gamma =\frac{\delta _{\Gamma }}{\delta _{\Gamma _{\text{prev}}}} - 1$. The performance deviation $\delta _{\Gamma }$ is calculated in terms of 13 automatic evaluation metrics $\lbrace \xi _1, \xi _2, \cdots , \xi _{13}\rbrace $ used in the experiments: where $\xi _i^{\Gamma }$ is the evaluation score of metric $i$ computed at the current validation turn and $\xi _i^{\Gamma _{\text{prev}}}$ is computed at the previous validation turn. Each score is normalized into $[0,1]$. The curriculum policy is trained by maximizing the expected reward: $J(\theta )=\mathbb {E}_{\Phi _\theta (a|s)}[M(s,a)]$, where $M(s,a)$ is the state-action value function. Since $M(s,a)$ is non-differentiable w.r.t. $\theta $, in this work, we use REINFORCE BIBREF20, a likelihood ratio policy gradient algorithm to optimize $J(\theta )$ based on the gradient: where $v_t$ is the sampled estimation of reward $M(s_t, a_t)$ from one episode execution of the policy $\Phi _\theta (a|s)$. 
In our implementation, $v_t$ is computed as the terminal reward $m_\Gamma $. <<</Adaptive Multi-curricula Learning>>> <<</Curriculum Dialogue Learning>>> <<<Experiments>>> <<<Experiment Settings>>> We perform experiments using the following state-of-the-art models: (1) SEQ2SEQ: a sequence-to-sequence model with attention mechanisms BIBREF21, (2) CVAE: a conditional variational auto-encoder model with KL-annealing and a BOW loss BIBREF2, (3) Transformer: an encoder-decoder architecture relying solely on attention mechanisms BIBREF22, (4) HRED: a generalized sequence-to-sequence model with the hierarchical RNN encoder BIBREF23, (5) DialogWAE: a conditional Wasserstein auto-encoder, which models the distribution of data by training a GAN within the latent variable space BIBREF6. We adopt several standard metrics widely used in existing works to measure the performance of dialogue generation models, including BLEU BIBREF24, embedding-based metrics (Average, Extrema, Greedy and Coherence) BIBREF25, BIBREF26, entropy-based metrics (Ent-{1,2}) BIBREF0 and distinct metrics (Dist-{1,2,3} and Intra-{1,2,3}) BIBREF1, BIBREF6. <<</Experiment Settings>>> <<<Implementation and Reproducibility>>> Our experiments are performed using ParlAI BIBREF27. Regarding model implementations, we employ a 2-layer bidirectional LSTM as the encoder and a unidirectional one as the decoder for the SEQ2SEQ and CVAE. The hidden size is set to 512, and the latent size is set to 64 for CVAE. For the Transformer, the hidden size, attention heads and number of hidden layers are set to 512, 8 and 6, respectively. In terms of HRED and DialogWAE, the utterance encoder is a bidirectional GRU with 512 hidden units in each direction. The context encoder and decoder are both GRUs with 512 hidden units. Regarding the curriculum length $T$, we set its value in the following manner: we train the baseline model using the vanilla training procedure and compute the number of training steps it takes to reach approximately 110% of its final loss value. We then set $T$ to this value. Each model is trained using two protocols: the vanilla training procedure without using any curriculum and our proposed adaptive multi-curricula learning procedure, keeping other configurations the same. <<</Implementation and Reproducibility>>> <<<Overall Performance and Human Evaluation>>> The automatic evaluation results of our proposed multi-curricula learning framework and the comparison models are listed in Table TABREF21. Compared with the vanilla training procedure, our curriculum learning framework 1) brings solid improvements for all the five dialogue models regarding almost all the evaluation metrics, 2) achieves competitive performance across three datasets, affirming the superiority and general applicability of our proposed framework. We also notice that the relative improvements of Distinct on OpenSubtitles are much larger (up to 122.46%) than the other two experiment datasets. We conjecture that the OpenSubtitles, with extremely uneven-complexity dialogue samples, benefits more from the multi-curricula learning paradigm. We conduct a human evaluation to validate the effectiveness of the proposed multi-curricula learning framework. We employ the DailyDialog as the evaluation corpus since it is closer to our daily conversations and easier for humans to make the judgment. 
We randomly sampled 100 cases from the test set and compared the generated responses of the models trained with the vanilla learning procedure and the multi-curricula learning framework. Three annotators, who have no knowledge about which system the response is from, are then required to evaluate among win (response$_1$ is better), loss (response$_2$ is better) and tie (they are equally good or bad) independently, considering four aspects: coherence, logical consistency, fluency and diversity. Cases with different rating results are counted as “tie”. Table TABREF25 reveals the results of the subjective evaluation. We observe that our multi-curricula learning framework outperforms the vanilla training method on all the five dialogue models and the kappa scores indicate that the annotators came to a fair agreement in the judgment. We checked the cases on which the vanilla training method loses to our multi-curricula learning method and found that the vanilla training method usually leads to irrelevant, generic and repetitive responses, while our method effectively alleviates such defects. <<</Overall Performance and Human Evaluation>>> <<<Model Analysis>>> <<<Single vs Multi-curricula>>> To further glean the insights regarding the effects of the five conversational attributes on the proposed learning framework, we conduct the ablation test using the SEQ2SEQ model by only exploiting a single attribute during the curriculum learning. Table TABREF26 reports the ablation test results on the DailyDialog. We observe that the curriculum learning leads to consistent performance improvements, even with one single conversational attribute. When applying the multi-curricula learning method to the model, we observe the nearly best performance. <<</Single vs Multi-curricula>>> <<<Effects of Adaptive Multi-curricula Learning>>> Adaptive multi-curricula learning enables the model to choose different curricula at different learning stages according to the learning status of the underlying model. As shown in Table TABREF27, we notice the performance drops when replacing the RL-based curriculum policy with the random policy, indicating that choosing different curricula according to the learning status of the model benefits the model training. When training the model with anti-curriculum learning, i.e., feeding examples to the model in the complex-to-easy manner, we also observe consistent performance decreases, affirming the effectiveness of the easy-to-complex learning manner. <<</Effects of Adaptive Multi-curricula Learning>>> <<<Learning Efficiency>>> Figure FIGREF28 shows comparative results when training the SEQ2SEQ model on DailyDialog with different training protocols. As shown in Figure FIGREF28, our training method accelerates the learning effectively and consistently outperforms the baseline by a large margin in most cases. <<</Learning Efficiency>>> <<<Multi-curricula Learning Route>>> To glean insights on how the proposed adaptive multi-curricula learning framework performs, we present the choosing curriculum distributions $\pi (a_t|s_t)$ during the model learning in Figure FIGREF29. We notice that the model focuses more on the curriculum of “query-relatedness” at the initial learning stage. As the learning proceeds, the model gradually turns its attention to other curricula. At the final stage, the model pays more attention to the “model confidence” curriculum. Such dynamic learning route is quite similar to the human learning behavior. 
<<</Multi-curricula Learning Route>>> <<<Examples with Different Learning Frequencies>>> As shown in Table TABREF30, the most frequently learnt examples are comprehensively better than the seldom learnt ones, which exhibits the effectiveness of the adaptive multi-curricula learning framework. <<</Examples with Different Learning Frequencies>>> <<</Model Analysis>>> <<</Experiments>>> <<<Related Work>>> Neural dialogue generation. Neural generation models for dialogue, despite their ubiquity in current research, are still far from real-world applications. Previous approaches to enhancing neural dialogue generation models mainly focus on the learning systems, incorporating extra information into the dialogue models such as relevant dialogue history BIBREF5, topics BIBREF28, emotions BIBREF3, out-sourcing knowledge BIBREF4 or exemplars BIBREF29. Latent variables BIBREF0, BIBREF2 also help the model generate more diverse responses. In contrast with previous research, which pays most attention to the underlying dialogue models, in this work we concentrate on the dialogue learning process and investigate how the performance of existing dialogue models can be improved on conversation corpora with varying levels of complexity, by simply adapting the training protocols. BIBREF30 attributed generic/uninteresting responses to the high-entropy utterances in the training set and proposed to improve dataset quality through data filtering. Though straightforward, the filtering threshold needs to be chosen carefully to prevent the dataset from shrinking too much. BIBREF8, BIBREF31 proposed to introduce instance weighting into dialogue systems. However, it is difficult to accurately define the “weight” of an example in conversation systems, since dialogue data is of high diversity and complexity. Our proposed adaptive multi-curricula learning framework, which concentrates on different curricula at different stages of the learning process according to the learning status of the underlying model, enables dialogue systems to gradually proceed from easy to more complex samples in training and thus efficiently improves response quality. Curriculum learning in NLP. BIBREF18 examined curriculum learning and demonstrated empirically that such curriculum approaches indeed help decrease training times and sometimes even improve generalization. BIBREF32 cast curriculum learning as an optimization problem. Curriculum learning has also been applied to many NLP tasks. To name a few, BIBREF10 applied self-paced learning to neural question answering. BIBREF33 proposed a curriculum learning based natural answer generation framework, dealing with low-quality QA pairs first and then gradually learning more complete answers. BIBREF34 proposed curriculum pointer-generator networks for reading comprehension over long narratives. BIBREF9 applied curriculum learning to neural machine translation (NMT), aiming to reduce the need for specialized training heuristics and boost the performance of existing NMT systems. In our work, instead of organizing the curriculum from a single aspect only, we provide an adaptive multi-curricula dialogue learning framework, grounding our analysis in five conversation attributes regarding dialogue complexity. <<</Related Work>>> <<<Conclusion>>> In this paper, we propose an adaptive multi-curricula dialogue learning framework to enable dialogue models to gradually proceed from easy samples to more complex ones in training.
We first define and analyze five conversational attributes regarding the complexity and easiness of dialogue samples, and then present an adaptive multi-curricula learning framework, which chooses different curricula at different training stages according to the learning status of the model. Extensive experiments conducted on three large-scale datasets and five state-of-the-art conversation models show that our proposed learning framework is able to boost the performance of existing dialogue systems. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Model Confidence,Continuity,Query-relatedness,Repetitiveness,Specificity" ], "type": "extractive" }
2003.00639
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What three publicly available coropora are used? Context: <<<Title>>> Learning from Easy to Complex: Adaptive Multi-curricula Learning for Neural Dialogue Generation <<<Abstract>>> Current state-of-the-art neural dialogue systems are mainly data-driven and are trained on human-generated responses. However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly. The noise and uneven complexity of query-response pairs impede the learning efficiency and effects of the neural dialogue generation models. What is more, so far, there are no unified dialogue complexity measurements, and the dialogue complexity embodies multiple aspects of attributes---specificity, repetitiveness, relevance, etc. Inspired by human behaviors of learning to converse, where children learn from easy dialogues to complex ones and dynamically adjust their learning progress, in this paper, we first analyze five dialogue attributes to measure the dialogue complexity in multiple perspectives on three publicly available corpora. Then, we propose an adaptive multi-curricula learning framework to schedule a committee of the organized curricula. The framework is established upon the reinforcement learning paradigm, which automatically chooses different curricula at the evolving learning process according to the learning status of the neural dialogue generation model. Extensive experiments conducted on five state-of-the-art models demonstrate its learning efficiency and effectiveness with respect to 13 automatic evaluation metrics and human judgments. <<</Abstract>>> <<<Introduction>>> Teaching machines to converse with humans naturally and engagingly is a fundamentally interesting and challenging problem in AI research. Many contemporary state-of-the-art approaches BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 for dialogue generation follow the data-driven paradigm: trained on a plethora of query-response pairs, the model attempts to mimic human conversations. As a data-driven approach, the quality of generated responses in neural dialogue generation heavily depends on the training data. As such, in order to train a robust and well-behaved model, most works obtain large-scale query-response pairs by crawling human-generated conversations from publicly available sources such as OpenSubtitles BIBREF7. However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly BIBREF8. Table TABREF1 shows samples drawn from OpenSubtitles BIBREF7, which contains millions of human-human conversations converted from movie transcripts. The response of the third sample “Yurakutei kikuhiko.” looks quite strange in terms of the given query, while the first sample is clearly easier to learn. The noise and uneven complexity of query-response pairs impede the learning efficiency and effects of the neural dialogue generation models. Babies learn to speak by first imitating easy and exact utterances repeatedly taught by their patient parents. As children grow up, they learn grade by grade, from simple conversations to more complex ones. 
Inspired by such human behaviors of learning to converse, in this paper, we introduce curriculum learning to bring the neural dialogue model with easy-to-complex learning curriculum, where the model first learns from easy conversations and then gradually manages more complicated dialogues. Nevertheless, organizing a curriculum with increasing difficulty faces insurmountable obstacles: 1) automatic evaluation of dialogue complexity is a non-trivial task. BIBREF9 defined the difficulty for the training examples with respect to the sentence length and word rarity in neural machine translation. BIBREF10 expressed the difficulty regarding the value of the objective function. So far, there is no unified approach in measuring dialogue complexity. 2) Unlike the single metric of complexity in other tasks, dialogue complexity embodies multiple aspects of attributes BIBREF11—the specificity and repetitiveness of the response, the relevance between the query and the response, etc. As such, in this paper, we study the dialogue distributions along five aspects of attributes to gather multiple perspectives on dialogue complexity, resulting with five curricula accordingly. Conventional curriculum learning organizes the training samples into one curriculum, whereas we employ multiple curricula for dialogue learning. Enlightened by the phenomenon that children usually adjust the learning focus of multiple curricula dynamically in order to acquire a good mark, we further propose an adaptive multi-curricula learning framework, established upon the reinforcement learning paradigm, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model. Detailed analysis and experiments demonstrate that the proposed framework effectively increases the learning efficiency and gains better performances on five state-of-the-art dialogue generation models regarding three publicly available conversational corpora. Code for this work is available on https://github.com/hengyicai/Adaptive_Multi-curricula_Learning_for_Dialog. <<</Introduction>>> <<<Curriculum Plausibility>>> Intuitively, a well-organized curriculum should provide the model learning with easy dialogues first, and then gradually increase the curriculum difficulty. However, currently, there is no unified approach for dialogue complexity evaluation, where the complexity involves multiple aspects of attributes. In this paper, we prepare the syllabus for dialogue learning with respect to five dialogue attributes. To ensure the universality and general applicability of the curriculum, we perform an in-depth investigation on three publicly available conversation corpora, PersonaChat BIBREF12, DailyDialog BIBREF13 and OpenSubtitles BIBREF7, consisting of 140 248, 66 594 and 358 668 real-life conversation samples, respectively. <<<Conversational Attributes>>> <<<Specificity>>> A notorious problem for neural dialogue generation model is that the model is prone to generate generic responses. The most unspecific responses are easy to learn, but are short and meaningless, while the most specific responses, consisting of too many rare words, are too difficult to learn, especially at the initial learning stage. Following BIBREF11, we measure the specificity of the response in terms of each word $w$ using Normalized Inverse Document Frequency (NIDF, ranging from 0 to 1): where $\text{IDF}(w)=\log {\frac{N_r}{N_w}}$. 
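Assuming the min-max scaling of IDF implied by the $\text{idf}_{min}$ and $\text{idf}_{max}$ terms defined next, the NIDF of a word $w$ would take the form

$$\text{NIDF}(w) = \frac{\text{IDF}(w) - \text{idf}_{min}}{\text{idf}_{max} - \text{idf}_{min}},$$

which maps the least specific word in the vocabulary to 0 and the most specific one to 1.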
$N_r$ is the number of responses in the training set and $N_w$ is the number of those responses that contain $w$. $\text{idf}_{min}$ and $\text{idf}_{max}$ are the minimum and maximum IDFs, taken over all words in the vocabulary. The specificity of a response $r$ is measured as the mean NIDF of the words in $r$. <<</Specificity>>> <<<Repetitiveness>>> Repetitive responses are easy to generate in current auto-regressive response decoding, where response generation loops frequently, whereas diverse and informative responses are much more complicated for neural dialogue generation. We measure the repetitiveness of a response $r$ as: where $I(\cdot )$ is an indicator function that takes the value 1 when $w_i \in \lbrace w_0, \cdots , w_{i-1}\rbrace $ is true and 0 otherwise. <<</Repetitiveness>>> <<<Query-relatedness>>> A conversation is considered to be coherent if the response correlates well with the given query. For example, given a query “I like to paint”, the response “What kind of things do you paint?” is more relevant and easier to learn than another loosely-coupled response “Do you have any pets?”. Following previous work BIBREF14, we measure the query-relatedness using the cosine similarities between the query and its corresponding response in the embedding space: $\textit {cos\_sim}(\textit {sent\_emb}(c), \textit {sent\_emb}(r))$, where $c$ is the query and $r$ is the response. The sentence embedding is computed by taking the average word embedding weighted by the smooth inverse frequency $\textit {sent\_emb}(e)=\frac{1}{|e|}\sum _{w\in {}e}\frac{0.001}{0.001 + p(w)}emb(w)$ of words BIBREF15, where $emb(w)$ and $p(w)$ are the embedding and the probability of word $w$ respectively. <<</Query-relatedness>>> <<<Continuity>>> A coherent response not only responds to the given query, but also triggers the next utterance. An interactive conversation is carried out for multiple rounds and a response in the current turn also acts as the query in the next turn. As such, we introduce the continuity metric, which is similar to the query-relatedness metric, to assess the continuity of a response $r$ with respect to the subsequent utterance $u$, by measuring the cosine similarities between them. <<</Continuity>>> <<<Model Confidence>>> Despite the heuristic dialogue attributes, we further introduce the model confidence as an attribute, which distinguishes the easy-learnt samples from the under-learnt samples in terms of the model learning ability. A pretrained neural dialogue generation model assigns a relatively higher confidence probability for the easy-learnt samples than the under-learnt samples. Inspired by BIBREF16, BIBREF17, we employ the negative loss value of a dialogue sample under the pretrained model as the model confidence measure, indicating whether a sampled response is easy to be generated. Here we choose the attention-based sequence-to-sequence architecture with a cross-entropy objective as the underlying dialogue model. <<</Model Confidence>>> <<</Conversational Attributes>>> <<<Dialogue Analysis>>> <<<Distributions among Attributes>>> The distributions of the data samples regarding the aforementioned five attributes are shown in Figure FIGREF11. Although the attribute score distributions on three corpora are similar, they also have disparities: 1) Outliers frequently appear among all the distributions, which exhibits the uneven dialogue complexity. 
2) In terms of query-relatedness and continuity, to our surprise, the medians of the two distributions on PersonaChat are obviously smaller than the corresponding distributions on DailyDialog and OpenSubtitles. PersonaChat is manually created by crowd-sourcing, while DailyDialog and OpenSubtitles are collected from almost real-life conversations. 3) With respect to the model confidence (the negative loss value), the median of PersonaChat is relatively smaller, which illustrates that it is more difficult for the neural dialogue generation model to learn from PersonaChat. <<</Distributions among Attributes>>> <<<Attributes Independence>>> So far, we have analyzed five dialogue attributes. A question might be raised that how well the proposed attributes correlate with each other. To validate the correlations of these conversation attributes, we summarize the statistics of the Kendall $\tau $ correlations for each dataset in Table TABREF12. We find that these attributes, in general, show little correlations with each other. This partially validates that dialogue complexity involves multiple perspectives. <<</Attributes Independence>>> <<</Dialogue Analysis>>> <<</Curriculum Plausibility>>> <<<Curriculum Dialogue Learning>>> We propose an adaptive multi-curricula learning framework to accelerate dialogue learning and improve the performance of the neural dialogue generation model. <<<Single Curriculum Dialogue Learning>>> We first illustrate how a dialogue generation model exploits the curriculum by taking single curriculum dialogue learning as an example, where the curriculum is arranged by sorting each sample in the dialogue training set $\mathcal {D}_{train}$ according to one attribute. Then, at training time step $t$, a batch of training examples is sampled from the top $f(t)$ portions of the total sorted training samples, where the progressing function $f(t)$ determines the learning rate of the curriculum. Following BIBREF9, we define the progressing function $f(t)$ as $f(t)\triangleq min(1, \sqrt{t\frac{1-c_0^2}{T} + c_0^2})$, where $c_0 > 0$ is set to 0.01 and $T$ is the duration of curriculum learning. At the early stage of the training process, the neural dialogue generation model learns from the samples drawing from the front part of the curriculum. As the advance of the curriculum, the difficulty gradually increases, as more complex training examples appear. After training $T$ batches, each batch of training instances is drawn from the whole training set, which is same as the conventional training procedure without a curriculum. <<</Single Curriculum Dialogue Learning>>> <<<Adaptive Multi-curricula Learning>>> Dialogue complexity consists of multi-perspectives of attributes. We extend the naive single curriculum learning into the multi-curricula setting, where we provide the neural dialogue generation model with five different learning curricula, and each curriculum is prepared by ordering the training set in terms of the corresponding attribute metric accordingly. Scheduling multiple curricula in the same learning pace is obviously inappropriate. Enlightened by the phenomenon that children usually adjust the learning progress of multiple curricula dynamically in order to acquire a good mark, we further introduce an adaptive multi-curricula learning framework, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model. 
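As a concrete illustration of the single-curriculum sampling step just described, the sketch below implements the progressing function $f(t)$ and draws a batch from the easiest $f(t)$ portion of a sorted training set; the function and variable names are illustrative and not taken from the released code.

    import math
    import random

    def curriculum_progress(t, T, c0=0.01):
        # f(t) = min(1, sqrt(t * (1 - c0^2) / T + c0^2))
        return min(1.0, math.sqrt(t * (1.0 - c0 ** 2) / T + c0 ** 2))

    def sample_curriculum_batch(sorted_examples, t, T, batch_size, c0=0.01):
        # sorted_examples is assumed to be ordered from easy to complex
        # according to a single attribute (e.g., specificity).
        cutoff = max(1, int(curriculum_progress(t, T, c0) * len(sorted_examples)))
        pool = sorted_examples[:cutoff]
        return random.sample(pool, k=min(batch_size, len(pool)))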
The adaptive multi-curricula learning framework is established upon the reinforcement learning (RL) paradigm. Figure FIGREF18 illustrates the overall learning process. The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs. A positive reward is expected if a multi-curricula scheduling action $a_t$ brings improvements on the model's performance, and the current mini-batch of training samples is drawn consulting with the scheduling action $a_t$. The neural dialogue generation model learns from those mini-batches, resulting with a new learning status $s_{t+1}$. The adaptive multi-curricula learning framework is optimized to maximize the reward. Such learning process loops continuously until the performance of the neural dialogue generation model converges. More specifically, the learning status of the dialogue model is represented as the state. Similar to other curriculum learning framework BIBREF18, BIBREF19, the learning status consists of several features, including the passed mini-batch number, the average historical training loss, the loss value on the training data, the margin value of predicted probabilities and the last validation metric values. To enable the proposed framework to be aware of the learning progress $\varrho _i$ regarding each attribute $i$, we also exploit $\varrho =\lbrace \varrho _0, \varrho _1, \cdots , \varrho _{k-1}\rbrace $ for state representations, where $k$ stands for the number of curricula, here $k=5$, and $\varrho _i$ can be simply measured as the learning steps on the attribute $i$. The multi-curricula learning framework samples a scheduling action $a_t$ per step by its policy $\Phi _\theta (a|s)$ with parameters $\theta $ to be learnt, and the scheduling action $a_t \in \lbrace 0, 1, \cdots , k-1\rbrace $ chooses one of the curricula. Then, a mini-batch of dialogue instances is sampled from the top $f(\varrho _i)$ portions of the chosen curriculum. The dialogue model is validated every $\Gamma $ training steps and the curriculum policy is updated at $\Gamma $-round intervals according to a reward $m_\Gamma $. To accelerate the neural dialogue learning, $m_\Gamma $ is defined as the ratio of two consecutive performance deviations on a held-out validation set: $m_\Gamma =\frac{\delta _{\Gamma }}{\delta _{\Gamma _{\text{prev}}}} - 1$. The performance deviation $\delta _{\Gamma }$ is calculated in terms of 13 automatic evaluation metrics $\lbrace \xi _1, \xi _2, \cdots , \xi _{13}\rbrace $ used in the experiments: where $\xi _i^{\Gamma }$ is the evaluation score of metric $i$ computed at the current validation turn and $\xi _i^{\Gamma _{\text{prev}}}$ is computed at the previous validation turn. Each score is normalized into $[0,1]$. The curriculum policy is trained by maximizing the expected reward: $J(\theta )=\mathbb {E}_{\Phi _\theta (a|s)}[M(s,a)]$, where $M(s,a)$ is the state-action value function. Since $M(s,a)$ is non-differentiable w.r.t. $\theta $, in this work, we use REINFORCE BIBREF20, a likelihood ratio policy gradient algorithm to optimize $J(\theta )$ based on the gradient: where $v_t$ is the sampled estimation of reward $M(s_t, a_t)$ from one episode execution of the policy $\Phi _\theta (a|s)$. 
In our implementation, $v_t$ is computed as the terminal reward $m_\Gamma $. <<</Adaptive Multi-curricula Learning>>> <<</Curriculum Dialogue Learning>>> <<<Experiments>>> <<<Experiment Settings>>> We perform experiments using the following state-of-the-art models: (1) SEQ2SEQ: a sequence-to-sequence model with attention mechanisms BIBREF21, (2) CVAE: a conditional variational auto-encoder model with KL-annealing and a BOW loss BIBREF2, (3) Transformer: an encoder-decoder architecture relying solely on attention mechanisms BIBREF22, (4) HRED: a generalized sequence-to-sequence model with the hierarchical RNN encoder BIBREF23, (5) DialogWAE: a conditional Wasserstein auto-encoder, which models the distribution of data by training a GAN within the latent variable space BIBREF6. We adopt several standard metrics widely used in existing works to measure the performance of dialogue generation models, including BLEU BIBREF24, embedding-based metrics (Average, Extrema, Greedy and Coherence) BIBREF25, BIBREF26, entropy-based metrics (Ent-{1,2}) BIBREF0 and distinct metrics (Dist-{1,2,3} and Intra-{1,2,3}) BIBREF1, BIBREF6. <<</Experiment Settings>>> <<<Implementation and Reproducibility>>> Our experiments are performed using ParlAI BIBREF27. Regarding model implementations, we employ a 2-layer bidirectional LSTM as the encoder and a unidirectional one as the decoder for the SEQ2SEQ and CVAE. The hidden size is set to 512, and the latent size is set to 64 for CVAE. For the Transformer, the hidden size, attention heads and number of hidden layers are set to 512, 8 and 6, respectively. In terms of HRED and DialogWAE, the utterance encoder is a bidirectional GRU with 512 hidden units in each direction. The context encoder and decoder are both GRUs with 512 hidden units. Regarding the curriculum length $T$, we set its value in the following manner: we train the baseline model using the vanilla training procedure and compute the number of training steps it takes to reach approximately 110% of its final loss value. We then set $T$ to this value. Each model is trained using two protocols: the vanilla training procedure without using any curriculum and our proposed adaptive multi-curricula learning procedure, keeping other configurations the same. <<</Implementation and Reproducibility>>> <<<Overall Performance and Human Evaluation>>> The automatic evaluation results of our proposed multi-curricula learning framework and the comparison models are listed in Table TABREF21. Compared with the vanilla training procedure, our curriculum learning framework 1) brings solid improvements for all the five dialogue models regarding almost all the evaluation metrics, 2) achieves competitive performance across three datasets, affirming the superiority and general applicability of our proposed framework. We also notice that the relative improvements of Distinct on OpenSubtitles are much larger (up to 122.46%) than the other two experiment datasets. We conjecture that the OpenSubtitles, with extremely uneven-complexity dialogue samples, benefits more from the multi-curricula learning paradigm. We conduct a human evaluation to validate the effectiveness of the proposed multi-curricula learning framework. We employ the DailyDialog as the evaluation corpus since it is closer to our daily conversations and easier for humans to make the judgment. 
We randomly sampled 100 cases from the test set and compared the generated responses of the models trained with the vanilla learning procedure and the multi-curricula learning framework. Three annotators, who have no knowledge about which system the response is from, are then required to evaluate among win (response$_1$ is better), loss (response$_2$ is better) and tie (they are equally good or bad) independently, considering four aspects: coherence, logical consistency, fluency and diversity. Cases with different rating results are counted as “tie”. Table TABREF25 reveals the results of the subjective evaluation. We observe that our multi-curricula learning framework outperforms the vanilla training method on all the five dialogue models and the kappa scores indicate that the annotators came to a fair agreement in the judgment. We checked the cases on which the vanilla training method loses to our multi-curricula learning method and found that the vanilla training method usually leads to irrelevant, generic and repetitive responses, while our method effectively alleviates such defects. <<</Overall Performance and Human Evaluation>>> <<<Model Analysis>>> <<<Single vs Multi-curricula>>> To further glean the insights regarding the effects of the five conversational attributes on the proposed learning framework, we conduct the ablation test using the SEQ2SEQ model by only exploiting a single attribute during the curriculum learning. Table TABREF26 reports the ablation test results on the DailyDialog. We observe that the curriculum learning leads to consistent performance improvements, even with one single conversational attribute. When applying the multi-curricula learning method to the model, we observe the nearly best performance. <<</Single vs Multi-curricula>>> <<<Effects of Adaptive Multi-curricula Learning>>> Adaptive multi-curricula learning enables the model to choose different curricula at different learning stages according to the learning status of the underlying model. As shown in Table TABREF27, we notice the performance drops when replacing the RL-based curriculum policy with the random policy, indicating that choosing different curricula according to the learning status of the model benefits the model training. When training the model with anti-curriculum learning, i.e., feeding examples to the model in the complex-to-easy manner, we also observe consistent performance decreases, affirming the effectiveness of the easy-to-complex learning manner. <<</Effects of Adaptive Multi-curricula Learning>>> <<<Learning Efficiency>>> Figure FIGREF28 shows comparative results when training the SEQ2SEQ model on DailyDialog with different training protocols. As shown in Figure FIGREF28, our training method accelerates the learning effectively and consistently outperforms the baseline by a large margin in most cases. <<</Learning Efficiency>>> <<<Multi-curricula Learning Route>>> To glean insights on how the proposed adaptive multi-curricula learning framework performs, we present the choosing curriculum distributions $\pi (a_t|s_t)$ during the model learning in Figure FIGREF29. We notice that the model focuses more on the curriculum of “query-relatedness” at the initial learning stage. As the learning proceeds, the model gradually turns its attention to other curricula. At the final stage, the model pays more attention to the “model confidence” curriculum. Such dynamic learning route is quite similar to the human learning behavior. 
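To make the scheduling mechanism concrete, the following PyTorch sketch shows one way the curriculum policy $\Phi_\theta(a|s)$ and its REINFORCE update could be written; the state featurisation, layer sizes and names here are assumptions for illustration rather than the authors' implementation.

    import torch
    import torch.nn as nn

    class CurriculumPolicy(nn.Module):
        # Maps learning-status features to a distribution over k curricula.
        def __init__(self, state_dim, num_curricula=5, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.Tanh(),
                nn.Linear(hidden, num_curricula),
            )

        def forward(self, state):
            return torch.distributions.Categorical(logits=self.net(state))

    def terminal_reward(delta_now, delta_prev, eps=1e-8):
        # m_Gamma = delta_Gamma / delta_Gamma_prev - 1
        return delta_now / (delta_prev + eps) - 1.0

    def reinforce_update(policy, optimizer, states, actions, reward):
        # Plain REINFORCE: every sampled action in the interval shares the
        # terminal reward m_Gamma as its return estimate v_t.
        loss = torch.zeros(())
        for s, a in zip(states, actions):
            loss = loss - policy(s).log_prob(a) * reward
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()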
<<</Multi-curricula Learning Route>>> <<<Examples with Different Learning Frequencies>>> As shown in Table TABREF30, the most frequently learnt examples are comprehensively far better than those seldom learnt examples, which exhibits the effectiveness of the adaptive multi-curricula learning framework. <<</Examples with Different Learning Frequencies>>> <<</Model Analysis>>> <<</Experiments>>> <<<Related Work>>> Neural dialogue generation. Neural generation models for dialogue, despite their ubiquity in current research, are still far from the real-world applications. Previous approaches enhancing neural dialogue generation models mainly focus on the learning systems by incorporating extra information to the dialogue models such as relevant dialogue history BIBREF5, topics BIBREF28, emotions BIBREF3, out-sourcing knowledge BIBREF4 or exemplars BIBREF29. Latent variables BIBREF0, BIBREF2 also benefit the model with more diverse response generations. In contrast with the previous researches, which pay most attention to the underlying dialogue models, in this work, we concentrate on the dialogue learning process and investigate how the performance of existing dialogue models can be improved on the conversation corpora with varying levels of complexity, by simply adapting the training protocols. BIBREF30 attributed the generic/uninteresting responses to the high-entropy utterances in the training set and proposed to improve dataset quality through data filtering. Though straightforward, the filtering threshold need be carefully chosen to prevent the data size decreasing too much. BIBREF8, BIBREF31 proposed to investigate instance weighting into dialogue systems. However, it is difficult to accurately define the “weight” of an example in conversation systems, since the dialogue data is of high diversity and complexity. Our proposed adaptive multi-curricula learning framework, concentrating on different curricula at evolving learning process according to the learning status of the underlying model, enables dialogue systems gradually proceed from easy to more complex samples in training and thus efficiently improves the response quality. Curriculum learning in NLP. BIBREF18 examined curriculum learning and demonstrated empirically that such curriculum approaches indeed help decrease training times and sometimes even improve generalization. BIBREF32 managed curriculum learning as an optimization problem. Curriculum learning has also been applied to many NLP tasks. To name a few, BIBREF10 applied self-paced learning for neural question answering. BIBREF33 proposed a curriculum learning based natural answer generation framework, dealing with low-quality QA-pairs first and then gradually learn more complete answers. BIBREF34 proposed curriculum pointer-generator networks for reading comprehension over long narratives. BIBREF9 applied curriculum learning for neural machine translation (NMT), aiming to reduce the need for specialized training heuristics and boost the performance of existing NMT systems. In our work, instead of organizing the curriculum only from a single aspect, we provide an adaptive multi-curricula dialogue learning framework, grounding our analysis on five conversation attributes regarding the dialogue complexity. <<</Related Work>>> <<<Conclusion>>> In this paper, we propose an adaptive multi-curricula dialogue learning framework, to enable the dialogue models to gradually proceed from easy samples to more complex ones in training. 
We first define and analyze five conversational attributes that characterize the complexity of dialogue samples, and then present an adaptive multi-curricula learning framework, which chooses different curricula at different training stages according to the learning status of the model. Extensive experiments conducted on three large-scale datasets and five state-of-the-art conversation models show that our proposed learning framework is able to boost the performance of existing dialogue systems. <<</Conclusion>>> <<</Title>>>
{ "references": [ "PersonaChat BIBREF12,DailyDialog BIBREF13,OpenSubtitles BIBREF7" ], "type": "extractive" }
1909.13668
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What different properties of the posterior distribution are explored in the paper? Context: <<<Title>>> On the Importance of the Kullback-Leibler Divergence Term in Variational Autoencoders for Text Generation <<<Abstract>>> Variational Autoencoders (VAEs) are known to suffer from learning uninformative latent representation of the input due to issues such as approximated posterior collapse, or entanglement of the latent space. We impose an explicit constraint on the Kullback-Leibler (KL) divergence term inside the VAE objective function. While the explicit constraint naturally avoids posterior collapse, we use it to further understand the significance of the KL term in controlling the information transmitted through the VAE channel. Within this framework, we explore different properties of the estimated posterior distribution, and highlight the trade-off between the amount of information encoded in a latent code during training, and the generative capacity of the model. <<</Abstract>>> <<<Introduction>>> Despite the recent success of deep generative models such as Variational Autoencoders (VAEs) BIBREF0 and Generative Adversarial Networks (GANs) BIBREF1 in different areas of Machine Learning, they have failed to produce similar generative quality in NLP. In this paper we focus on VAEs and their mathematical underpinning to explain their behaviors in the context of text generation. The vanilla VAE applied to text BIBREF2 consists of an encoder (inference) and decoder (generative) networks: Given an input $x$, the encoder network parameterizes $q_\phi (z|x)$ and infers about latent continuous representations of $x$, while the decoder network parameterizes $p_\theta (x|z)$ and generates $x$ from the continuous code $z$. The two models are jointly trained by maximizing the Evidence Lower Bound (ELBO), $\mathcal {L}(\theta , \phi ; x,z)$: where the first term is the reconstruction term, and the second term is the Kullback-Leibler (KL) divergence between the posterior distribution of latent variable $z$ and its prior $p({z})$ (i.e., $\mathcal {N}(0,I)$). The KL term can be interpreted as a regularizer which prevents the inference network from copying ${x}$ into ${z}$, and for the case of a Gaussian prior and posterior has a closed-form solution. With powerful autoregressive decoders, such as LSTMs, the internal decoder's cells are likely to suffice for representing the sentence, leading to a sub-optimal solution where the decoder ignores the inferred latent code ${z}$. This allows the encoder to become independent of $x$, an issue known as posterior collapse ($q_\phi ({z}|{x})\approx p({z})$) where the inference network produces uninformative latent variables. Several solutions have been proposed to address the posterior collapse issue: (i) Modifying the architecture of the model by weakening decoders BIBREF2, BIBREF3, BIBREF4, BIBREF5, or introducing additional connections between the encoder and decoder to enforce the dependence between $x$ and $z$ BIBREF6, BIBREF7, BIBREF8; (ii) Using more flexible or multimodal priors BIBREF9, BIBREF10; (iii) Alternating the training by focusing on the inference network in the earlier stages BIBREF11, or augmenting amortized optimization of VAEs with instance-based optimization of stochastic variational inference BIBREF12, BIBREF13. 
All of the aforementioned approaches impose one or more of the following limitations: restraining the choice of decoder, modifying the training algorithm, or requiring a substantial alternation of the objective function. As exceptions to these, $\delta $-VAE BIBREF14 and $\beta $-VAE BIBREF15 aim to avoid the posterior collapse by explicitly controlling the regularizer term in eqn. DISPLAY_FORM2. While $\delta $-VAE aims to impose a lower bound on the divergence term, $\beta $-VAE (betavae) controls the impact of regularization via an additional hyperparameter (i.e., $\beta D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )$). A special case of $\beta $-VAE is annealing BIBREF2, where $\beta $ increases from 0 to 1 during training. In this study, we propose to use an extension of $\beta $-VAE BIBREF16 which permits us to explicitly control the magnitude of the KL term while avoiding the posterior collapse issue even in the existence of a powerful decoder. We use this framework to examine different properties of the estimated posterior and the generative behaviour of VAEs and discuss them in the context of text generation via various qualitative and quantitative experiments. <<</Introduction>>> <<<Kullback-Leibler Divergence in VAE>>> We take the encoder-decoder of VAEs as the sender-receiver in a communication network. Given an input message $x$, a sender generates a compressed encoding of $x$ denoted by $z$, while the receiver aims to fully decode $z$ back into $x$. The quality of this communication can be explained in terms of rate (R) which measures the compression level of $z$ as compared to the original message $x$, and distortion (D) which quantities the overall performance of the communication in encoding a message at sender and successfully decoding it at the receiver. Additionally, the capacity of the encoder channel can be measured in terms of the amount of mutual information between $x$ and $z$, denoted by $\text{I}({x};{z})$ BIBREF17. <<<Reconstruction vs. KL>>> The reconstruction loss can naturally measure distortion ($D := - \big \langle \log p_\theta ({x}|{z}) \big \rangle $), while the KL term quantifies the amount of compression (rate; $R := D_{KL}[q_\phi ({z}|{x})|| p({z})]$) by measuring the divergence between a channel that transmits zero bit of information about $x$, denoted by $p(z)$, and the encoder channel of VAEs, $q_\phi (z|x)$. BIBREF18 introduced the $H-D \le \text{I}({x};{z}) \le R$ bounds, where $H$ is the empirical data entropy (a constant). These bounds on mutual information allow us to analyze the trade-off between the reconstruction and KL terms in eqn. (DISPLAY_FORM2). For instance, since $\text{I}({x};{z})$ is non-negative (using Jensen's inequality), the posterior collapse can be explained as the situation where $\text{I}({x};{z})=0$, where encoder transmits no information about $x$, causing $R=0, D=H$. Increasing $\text{I}({x};{z})$ can be encouraged by increasing both bounds: increasing the upper-bound (KL term) can be seen as the mean to control the maximum capacity of the encoder channel, while reducing the distortion (reconstruction loss) will tighten the bound by pushing the lower bound to its limits ($H-D\rightarrow H$). A similar effect on the lower-bound can be encouraged by using stronger decoders which could potentially decrease the reconstruction loss. Hence, having a framework that permits the use of strong decoders while avoiding the posterior collapse is desirable. Similarly, channel capacity can be decreased. <<</Reconstruction vs. 
KL>>> <<<Explicit KL Control via @!START@$\beta $@!END@-VAE>>> Given the above interpretation, we now turn to a slightly different formulation of ELBO based on $\beta $-VAE BIBREF15. This allows control of the trade-off between the reconstruction and KL terms, as well as to set explicit KL value. While $\beta $-VAE offers regularizing the ELBO via an additional coefficient $\beta \in {\rm I\!R}^+$, a simple extension BIBREF16 of its objective function incorporates an additional hyperparameter $C$ to explicitly control the magnitude of the KL term, where $C\!\! \in \!\! {\rm I\!R}^+$ and $| . |$ denotes the absolute value. While we could apply constraint optimization to impose the explicit constraint of $\text{KL}\!\!=\!\!C$, we found that the above objective function satisfies the constraint (experiment). Alternatively, it has been shown BIBREF21 the similar effect could be reached by replacing the second term in eqn. DISPLAY_FORM6 with $\max \big (C,D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )\big )$ at the risk of breaking the ELBO when $\text{KL}\!\!<\!\!C$ BIBREF22. <<</Explicit KL Control via @!START@$\beta $@!END@-VAE>>> <<</Kullback-Leibler Divergence in VAE>>> <<<Experiments>>> We conduct various experiments to illustrate the properties that are encouraged via different KL magnitudes. In particular, we revisit the interdependence between rate and distortion, and shed light on the impact of KL on the sharpness of the approximated posteriors. Then, through a set of qualitative and quantitative experiments for text generation, we demonstrate how certain generative behaviours could be imposed on VAEs via a range of maximum channel capacities. Finally, we run some experiments to find if any form of syntactic information is encoded in the latent space. For all experiments, we use the objective function of eqn. DISPLAY_FORM6 with $\beta =1$. We do not use larger $\beta $s because the constraint $\text{KL}=C$ is always satisfied. <<<Corpora>>> We use 5 different corpora covering different domains and size through this section: Yelp and Yahoo BIBREF4 both have ($100k$,$10k$,$10k$) sentences in (train, dev, test) sets and $20k$ words in vocabulary, Children's Book Test (CBT; BIBREF23) has ($192k$,$10k$,$12k$) sentences and $12k$ vocab, Wikipedia (WIKI; BIBREF24) has ($2m$,$270k$,$270k$) sentences and $20k$ vocab, and WebText BIBREF25 has ($1m$,$23k$,$24k$) sentences and $22k$ vocab. <<</Corpora>>> <<<Models>>> We examine three VAE architectures, covering a range of decoding strengths to examine if the objective function in eqn. DISPLAY_FORM6 is immune to posterior collapse regardless of the choice of encoder-decoder architectures: $\beta _C$-VAELSTM with (LSTM encoder, LSTM decoder), $\beta _C$-VAEGRU with (GRU encoder, GRU decoder) BIBREF26, and $\beta _C$-VAECNN with (LSTM encoder, CNN decoder) BIBREF27. The dimension of word embeddings is 256 and the dimension of the latent variable is 64. The encoder and the decoder, for both VAELSTM and VAEGRU, have hidden size of 512 dimensions. VAECNN has exactly the same encoder as VAELSTM, while the decoder follows similar architecture to GLU with a bottleneck structure (with two blocks) BIBREF27 and has 512 channels externally and 128 internally for the convolutions with the filter size of 20. All models were trained for 10 epochs and optimised the objective function (eqn. DISPLAY_FORM6) with Adam BIBREF28 with following learning rates: $10^{-5}\times 85$ for VAEGRU and VAELSTM, and $10^{-4}$ for VAECNN. 
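A minimal sketch of how the constrained objective in eqn. DISPLAY_FORM6 could be computed for one mini-batch, assuming a diagonal-Gaussian posterior and a standard-normal prior (function and argument names are illustrative):

    import torch

    def gaussian_kl(mu, logvar):
        # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
        return 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1.0, dim=-1)

    def beta_c_vae_loss(recon_nll, mu, logvar, C, beta=1.0):
        # recon_nll: per-sentence negative log-likelihood from the decoder.
        # C: target rate in nats; beta = 1 in the experiments reported here.
        kl = gaussian_kl(mu, logvar).mean()
        return recon_nll.mean() + beta * torch.abs(kl - C), kl

The $\max\big(C, D_{KL}\big)$ variant mentioned above would simply replace the absolute-value term in this sketch.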
To couple the encoder with the decoder we concatenate the latent variable to word embeddings at each time step without initialisation of hidden state. <<</Models>>> <<<Rate and Distortion>>> To analyse the dependence between the values of explicit rate ($C$) and distortion, we trained our models with different values of $C$, ranging from 10 to 100. Figure FIGREF8 reports the results for $\beta _C$-VAEGRU, $\beta _C$-VAELSTM, and $\beta _C$-VAECNN models on Yahoo and Yelp corpora. In all our experiments we found that $C\!-\!1\!\le KL\!\le \! C\!+\!1$, demonstrating that the objective function effectively imposed the desired constraint on KL term. Hence, setting any $C>0$ can in practice avoid the collapse issue. The general trend is that by increasing the value of $C$ one can get a better reconstruction (lower distortion) while the amount of gain varies depending on the VAE's architecture and corpus. Additionally, we measured rate and distortion on CBT, WIKI, and WebText corpora using $\beta _C$-VAELSTM and observed the same trend with the increase of $C$, see Table TABREF12. This observation is consistent with the bound on $\text{I}({x};{z})$ we discussed earlier (expl) such that with an increase of KL we increase an upper bound on $\text{I}({x};{z})$ which in turn allows to have smaller values of reconstruction loss. Additionally, as reported in Table TABREF12, encouraging higher rates (via larger $C$) encourages more active units (AU; BIBREF29) in the latent code $z$. As an additional verification, we also group the test sentences into buckets based on their length and report BLEU-2/4 and ROUGE-2/4 metrics to measure the quality of reconstruction step in Table TABREF12. As expected, we observe that increasing rate has a consistently positive impact on improving BLEU and ROUGE scores. <<</Rate and Distortion>>> <<<Aggregated Posterior>>> To understand how the approximated posteriors are being affected by the magnitude of the KL, we adopted an approach from BIBREF6 and looked at the divergence between the aggregated posterior, $q_\phi (z)=\sum _{x\sim q(x)} q_\phi (z|x)$, and prior $p(z$). Since during generation we generate samples from the prior, ideally we would like the aggregated posterior to be as close as possible to the prior. We obtained unbiased samples of ${z}$ first by sampling an ${x}$ from data and then ${z} \sim q_\phi ({z}|{x})$, and measured the log determinant of covariance of the samples ($\log \det (\mathrm {Cov}[q_\phi ({z})])$). As reported in Figure FIGREF8, we observed that $\log \det (\mathrm {Cov}[q_\phi ({z})])$ degrades as $C$ grows, indicating sharper approximate posteriors. We then consider the difference of $p(z)$ and $q(z)$ in their means and variances, by computing the KL divergence from the moment-matching Gaussian fit of $q(z)$ to $p(z)$: This returns smaller values for $\beta _{C=5}$-VAEGRU (Yelp: 0, Yahoo: 0), and larger values for $\beta _{C=100}$-VAEGRU (Yelp: 8, Yahoo: 5), which illustrates that the overlap between $q_\phi ({z})$ and $p(z)$ shrinks further as $C$ grows. The above observation is better pronounced in Table TABREF12, where we also report the mean ($||\mu ||^2_2$) of unbiased samples of $z$, highlighting the divergence from the mean of the prior distribution as rate increases. Therefore, for the case of lower $C$, the latent variables observed during training are closer to the generated sample from the prior which makes the decoder more suitable for generation purpose. We will examine this hypothesis in the following section. 
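These diagnostics can be computed directly from a matrix of unbiased samples $z \sim q_\phi(z)$; the NumPy sketch below is one way to do so and is not the authors' exact estimator.

    import numpy as np

    def aggregated_posterior_stats(z_samples):
        # z_samples: array of shape (num_samples, latent_dim) with z ~ q(z).
        mu = z_samples.mean(axis=0)
        cov = np.cov(z_samples, rowvar=False)
        _, logdet_cov = np.linalg.slogdet(cov)

        # KL of the moment-matched Gaussian fit N(mu, cov) from the prior N(0, I).
        d = z_samples.shape[1]
        kl_fit_to_prior = 0.5 * (np.trace(cov) + mu @ mu - d - logdet_cov)
        return logdet_cov, kl_fit_to_prior, float(mu @ mu)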
<<</Aggregated Posterior>>> <<<Text Generation>>> To empirically examine how channel capacity translates into generative capacity of the model, we experimented with the $\beta _C$-VAELSTM models from Table TABREF12. To generate a novel sentence, after a model was trained, a latent variable $z$ is sampled from the prior distribution and then transformed into a sequence of words by the decoder $p(x|z)$. During decoding for generation we try three decoding schemes: (i) Greedy: which selects the most probable word at each step, (ii) Top-k BIBREF30: which at each step samples from the K most probable words, and (iii) Nucleus Sampling (NS) BIBREF31: which at each step samples from a flexible subset of most probable words chosen based on their cumulative mass (set by a threshold $p$, where $p = 1$ means sampling from the full distribution). While similar to Top-k, the benefit of NS scheme is that the vocabulary size at each time step of decoding varies, a property that encourages diversity and avoids degenerate text patterns of greedy or beam search decoding BIBREF31. We experiment with NS $(p=\lbrace 0.5, 0.9\rbrace )$ and Top-k $(k=\lbrace 5, 15\rbrace )$. <<<Qualitative Analysis>>> We follow the settings of homotopy experiment BIBREF2 where first a set of latent variables was obtained by performing a linear interpolation between $z_1 \sim p(z)$ and $z_2 \sim p(z)$. Then each $z$ in the set was converted into a sequence of words by the decoder $p(x|z)$. Besides the initial motivation of BIBREF2 to examine how neighbouring latent codes look like, our additional incentive is to analyse how sensitive the decoder is to small variations in the latent variable when trained with different channel capacities, $C=\lbrace 3,15,100\rbrace $. Table TABREF17 shows the generated sentences via different decoding schemes for each channel capacity. For space reason, we only report the generated sentences for greedy, Top-$k=15$, and NS $p=0.9$. To make the generated sequences comparable across different decoding schemes or C values, we use the same samples of $z$ for decoding. <<<Sensitivity of Decoder>>> To examine the sensitivity of the decoder to variations of the latent variable, we consider the sentences generate with the greedy decoding scheme (the first column in Table TABREF17). The other two schemes are not suitable for this analysis as they include sampling procedure. This means that if we decode the same latent variable twice we will get two different sentences. We observed that with lower channel capacity ($C=3$) the decoder tends to generate identical sentences for the interpolated latent variables (we highlight these sentences in gray), exhibiting decoder's lower sensitivity to $z$'s variations. However, with the increase of channel capacity ($C=15,100$) the decoder becomes more sensitive. This observation is further supported by the increasing pattern of active units in Table TABREF12: Given that AU increases with increase of $C$ one would expect that activation pattern of a latent variable becomes more complex as it comprises more information. Therefore small change in the pattern would have a greater effect on the decoder. <<</Sensitivity of Decoder>>> <<<Coherence of Sequences>>> We observe that the model trained with large values of $C$ compromises sequences' coherence during the sampling. This is especially evident when we compare $C=3$ with $C=100$. 
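For reference, a minimal sketch of the NS (top-$p$) decoding step described above, for a single decoding position; the masking details are an assumption and may differ from the implementation used here.

    import torch

    def nucleus_sample(logits, p=0.9):
        # Sample one token id from the smallest set of words whose cumulative
        # probability mass reaches p.
        probs = torch.softmax(logits, dim=-1)
        sorted_probs, sorted_ids = torch.sort(probs, descending=True)
        cumulative = torch.cumsum(sorted_probs, dim=-1)
        keep = (cumulative - sorted_probs) < p   # mass before this token is still below p
        kept_probs = sorted_probs[keep] / sorted_probs[keep].sum()
        choice = torch.multinomial(kept_probs, num_samples=1).item()
        return sorted_ids[keep][choice].item()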
Analysis of Top-15 and NS (p=0.9) generated samples reveals that the lack of coherence is not due to the greedy decoding scheme per se, and can be attributed to the model in general. To understand this behavior further, we need two additional results from Table TABREF12: LogDetCov and $||\mu ||^2_2$. One can notice that as $C$ increases LogDetCov decreases and $||\mu ||^2_2$ increases. This indicates that the aggregated posterior becomes further apart from the prior, hence the latent codes seen during the training diverge more from the codes sampled from the prior during generation. We speculate this contributes to the coherence of the generated samples, as the decoder is not equipped to decode prior samples properly at higher $C$s. <<</Coherence of Sequences>>> <<</Qualitative Analysis>>> <<<Quantitative Analysis>>> Quantitative analysis of generated text without gold reference sequences (e.g. in Machine Translation or Summarization) has been a long-standing challenge. Recently, there have been efforts towards this direction, with proposal such as self-BLEU BIBREF32, forward cross entropy BIBREF33 and Fréchet InferSent Distance BIBREF33. We opted for FCE as a complementary metric to our qualitative analysis. To calculate FCE, first a collection of synthetic sentences are generated by sampling $z\sim p(z)$ and decoding the samples into sentences. The synthetic sequences are then used to train a language model (an LSTM with the parametrisation of our decoder). The FCE score is estimated by reporting the negative log likelihood (NLL) of the trained LM on the set of human generated sentences. We generated synthetic corpora using trained models from Table TABREF12 with different C and decoding schemes and using the same exact $z$ samples for all corpora. Since the generated corpora using different C values would have different coverage of words in the test set (i.e., Out-of-Vocabulary ratios), we used a fixed vocabulary to minimize the effect of different vocabularies in our analysis. Our dictionary contains words that are common in all of the three corpora, while the rest of the words that don't exist in this dictionary are replaced with 〈unk〉 symbol. Similarly, we used this fixed dictionary to preprocess the test sets. Also, to reduce bias to a particular set of sampled $z$'s we measure the FCE score three times, each time we sampled a new training corpus from a $\beta _C$-VAELSTM decoder and trained an LM from scratch. In Table TABREF20 we report the average FCE (NLL) for the generated corpora. In the qualitative analysis we observed that the text generated by the $\beta _C$-VAELSTM trained with large values of $C=100$ exhibits lower quality (i.e., in terms of coherence). This observation is supported by the FCE score of NS(p=0.9) decoding scheme (TABREF20), since the performance drops when the LM is trained on the corpus generated with $C=100$. The generated corpora with $C=3$ and $C=15$ achieve similar FCE score. However, these patterns are reversed for Greedy decoding scheme, where the general tendency of FCE scores suggests that for larger values of $C$ the $\beta _C$-VAELSTM seems to generate text which better approximates the natural sentences in the test set. To understand this further, we report additional statistics in Table TABREF20: percentage of 〈unk〉 symbols, self-BLEU and average sentence length in the corpus. 
The average sentence length, in the generated corpora is very similar for both decoding schemes, removing the possibility that the pathological pattern on FCE scores was caused by difference in sentence length. However, we observe that for Greedy decoding more than $30\%$ of the test set consists of 〈unk〉. Intuitively, seeing more evidence of this symbol during training would improve our estimate for the 〈unk〉. As reported in the table, the $\%$unk increases on almost all corpora as $C$ grows, which is then translated into getting a better FCE score at test. Therefore, we believe that FCE at high $\%$unk is not a reliable quantitative metric to assess the quality of the generated syntactic corpora. Furthermore, for Greedy decoding, self-BLEU decreases when $C$ increases. This suggests that generated sentences for higher value of $C$ are more diverse. Hence, the LM trained on more diverse corpora can generalise better, which in turn affects the FCE. In contrast, the effect the 〈unk〉 symbol has on the corpora generated with the NS(p=0.9) decoding scheme is minimal for two reasons: First, the vocabulary size for the generated corpora, for all values of $C$ is close to the original corpus (the corpus we used to train the $\beta _C$-VAELSTM). Second, the vocabularies of the corpora generated with three values of $C$ is very close to each other. As a result, minimum replacement of the words with the 〈unk〉 symbol is required, making the experiment to be more reflective of the quality of the generated text. Similarly, self-BLEU for the NS(p=0.9) is the same for all values of $C$. This suggests that the diversity of sentences has minimal, if any, effect on the FCE. <<</Quantitative Analysis>>> <<</Text Generation>>> <<<Syntactic Test>>> In this section, we explore if any form of syntactic information is captured by the encoder and represented in the latent codes despite the lack of any explicit syntactic signal during the training of the $\beta _C$-VAELSTM. To train the models we used the same WIKI data set as in BIBREF24, but we filtered out all the sentences that are longer than 50 space-separated tokens. We use the data set of BIBREF24 which consists of pairs of grammatical and ungrammatical sentences to test various syntactic phenomenon. For example, a pair in subject-verb agreement category would be: (The author laughs, The author laugh). We encode both the grammatical and ungrammatical sentences into the latent codes $z^+$ and $z^-$, respectively. Then we condition the decoder on the $z^+$ and try to determine whether the decoder assigns higher probability to the grammatical sentence (denoted by $x^+$): $p(x^-|z^+) < p(x^+|z^+)$ (denoted by p1 in Table TABREF28). We repeat the same experiment but this time try to determine whether the decoder, when conditioned on the ungrammatical code ($z^-$), still prefers to assign higher probability to the grammatical sentence: $p(x^-|z^-) < p(x^+|z^-)$ (denoted by p2 in Table TABREF28). Table TABREF28 shows the p1 and p2 for the $\beta _C$-VAELSTM model trained with $C=\lbrace 3,100\rbrace $. Both the p1 and p2 are similar to the accuracy and correspond to how many times a grammatical sentence was assigned a higher probability. As reported for C=3, p1 and p2 match in almost all cases. This is to some degree expected since lower channel capacity encourages a more dominating decoder which in our case was trained on grammatical sentences from the WIKI. 
On the other hand, this illustrates that despite avoiding the KL-collapse issue, the dependence of the decoder on the latent code is so negligible that the decoder hardly distinguishes the grammatical and ungrammatical inputs. This changes for $C=100$, as in almost all the cases the decoder becomes strongly dependent on the latent code and can differentiate between what it has seen as input and the closely similar sentence it hasn't received as the input: The decoder assigns larger probability to the ungrammatical sentence when conditioned on the $z^-$ and, similarly, larger probability to the grammatical sentence when conditioned on the $z^+$. However, the above observations neither confirm nor reject existence of grammar signal in the latent codes. We run a second set of experiments where we aim to discard sentence specific information from the latent codes by averaging the codes inside each syntactic category. The averaged codes are denoted by $\bar{z}^+$ and $\bar{z}^-$, and the corresponding accuracies are reported by p̄1 and p̄2 in Table TABREF28. Our hypothesis is that the only invariant factor during averaging the codes inside a category is the grammatical property of its corresponding sentences. As expected, due to the weak dependence of decoder on latent code, the performance of the model under $C=3$ is almost identical (not included for space limits) when comparing p1 vs. p̄1, and p2 vs. p̄2. However, for $C=100$ the performance of the model deteriorates. While we leave further exploration of this behavior to our future work, we speculate this could be an indication of two things: the increase of complexity in the latent code which encourages a higher variance around the mean, or the absence of syntactic signal in the latent codes. <<</Syntactic Test>>> <<</Experiments>>> <<<Discussion and Conclusion>>> In this paper we analysed the interdependence of the KL term in Evidence Lower Bound (ELBO) and the properties of the approximated posterior for text generation. To perform the analysis we used an information theoretic framework based on a variant of $\beta $-VAE objective, which permits explicit control of the KL term, and treats KL as a mechanism to control the amount of information transmitted between the encoder and decoder. The immediate impact of the explicit constraint is avoiding the collapse issue ($D_{KL}=0$) by setting a non-zero positive constraint ($C\ge 0$) on the KL term ($|D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )-C|$). We experimented with a range of constraints ($C$) on the KL term and various powerful and weak decoder architectures (LSTM, GRU, and CNN), and empirically confirmed that in all cases the constraint was satisfied. We showed that the higher value of KL encourages not only divergence from the prior distribution, but also a sharper and more concentrated approximated posteriors. It encourages the decoder to be more sensitive to the variations on the latent code, and makes the model with higher KL less suitable for generation as the latent variables observed during training are farther away from the prior samples used during generation. To analyse its impact on generation we conducted a set of qualitative and quantitative experiments. In the qualitative analysis we showed that small and large values of KL term impose different properties on the generated text: the decoder trained under smaller KL term tends to generate repetitive but mainly plausible sentences, while for larger KL the generated sentences were diverse but incoherent. 
This behaviour was observed across three different decoding schemes and was complemented by a quantitative analysis in which we measured the performance of an LSTM LM trained on different VAE-generated synthetic corpora, produced under different KL magnitudes, and tested on human-generated sentences. Finally, in an attempt to understand the ability of the latent code in VAEs to represent some form of syntactic information, we tested the ability of the model to distinguish between grammatical and ungrammatical sentences. We verified that at lower (and still non-zero) KL the decoder tends to pay less attention to the latent code, but our findings regarding the presence of a syntactic signal in the latent code were inconclusive. We leave it as a possible avenue to explore in our future work. Also, we plan to develop practical algorithms for the automatic selection of the value of $C$, and to verify our findings under multi-modal priors and complex posteriors. <<</Discussion and Conclusion>>> <<</Title>>>
{ "references": [ "interdependence between rate and distortion,impact of KL on the sharpness of the approximated posteriors,demonstrate how certain generative behaviours could be imposed on VAEs via a range of maximum channel capacities,some experiments to find if any form of syntactic information is encoded in the latent space" ], "type": "extractive" }
1909.13668
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: Why does proposed term help to avoid posterior collapse? Context: <<<Title>>> On the Importance of the Kullback-Leibler Divergence Term in Variational Autoencoders for Text Generation <<<Abstract>>> Variational Autoencoders (VAEs) are known to suffer from learning uninformative latent representation of the input due to issues such as approximated posterior collapse, or entanglement of the latent space. We impose an explicit constraint on the Kullback-Leibler (KL) divergence term inside the VAE objective function. While the explicit constraint naturally avoids posterior collapse, we use it to further understand the significance of the KL term in controlling the information transmitted through the VAE channel. Within this framework, we explore different properties of the estimated posterior distribution, and highlight the trade-off between the amount of information encoded in a latent code during training, and the generative capacity of the model. <<</Abstract>>> <<<Introduction>>> Despite the recent success of deep generative models such as Variational Autoencoders (VAEs) BIBREF0 and Generative Adversarial Networks (GANs) BIBREF1 in different areas of Machine Learning, they have failed to produce similar generative quality in NLP. In this paper we focus on VAEs and their mathematical underpinning to explain their behaviors in the context of text generation. The vanilla VAE applied to text BIBREF2 consists of an encoder (inference) and decoder (generative) networks: Given an input $x$, the encoder network parameterizes $q_\phi (z|x)$ and infers about latent continuous representations of $x$, while the decoder network parameterizes $p_\theta (x|z)$ and generates $x$ from the continuous code $z$. The two models are jointly trained by maximizing the Evidence Lower Bound (ELBO), $\mathcal {L}(\theta , \phi ; x,z)$: where the first term is the reconstruction term, and the second term is the Kullback-Leibler (KL) divergence between the posterior distribution of latent variable $z$ and its prior $p({z})$ (i.e., $\mathcal {N}(0,I)$). The KL term can be interpreted as a regularizer which prevents the inference network from copying ${x}$ into ${z}$, and for the case of a Gaussian prior and posterior has a closed-form solution. With powerful autoregressive decoders, such as LSTMs, the internal decoder's cells are likely to suffice for representing the sentence, leading to a sub-optimal solution where the decoder ignores the inferred latent code ${z}$. This allows the encoder to become independent of $x$, an issue known as posterior collapse ($q_\phi ({z}|{x})\approx p({z})$) where the inference network produces uninformative latent variables. Several solutions have been proposed to address the posterior collapse issue: (i) Modifying the architecture of the model by weakening decoders BIBREF2, BIBREF3, BIBREF4, BIBREF5, or introducing additional connections between the encoder and decoder to enforce the dependence between $x$ and $z$ BIBREF6, BIBREF7, BIBREF8; (ii) Using more flexible or multimodal priors BIBREF9, BIBREF10; (iii) Alternating the training by focusing on the inference network in the earlier stages BIBREF11, or augmenting amortized optimization of VAEs with instance-based optimization of stochastic variational inference BIBREF12, BIBREF13. 
All of the aforementioned approaches impose one or more of the following limitations: restraining the choice of decoder, modifying the training algorithm, or requiring a substantial alteration of the objective function. As exceptions to these, $\delta $-VAE BIBREF14 and $\beta $-VAE BIBREF15 aim to avoid the posterior collapse by explicitly controlling the regularizer term in eqn. DISPLAY_FORM2. While $\delta $-VAE aims to impose a lower bound on the divergence term, $\beta $-VAE controls the impact of regularization via an additional hyperparameter (i.e., $\beta D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )$). A special case of $\beta $-VAE is annealing BIBREF2, where $\beta $ increases from 0 to 1 during training. In this study, we propose to use an extension of $\beta $-VAE BIBREF16 which permits us to explicitly control the magnitude of the KL term while avoiding the posterior collapse issue even in the presence of a powerful decoder. We use this framework to examine different properties of the estimated posterior and the generative behaviour of VAEs and discuss them in the context of text generation via various qualitative and quantitative experiments. <<</Introduction>>> <<<Kullback-Leibler Divergence in VAE>>> We take the encoder-decoder of VAEs as the sender-receiver in a communication network. Given an input message $x$, a sender generates a compressed encoding of $x$ denoted by $z$, while the receiver aims to fully decode $z$ back into $x$. The quality of this communication can be explained in terms of rate (R), which measures the compression level of $z$ as compared to the original message $x$, and distortion (D), which quantifies the overall performance of the communication in encoding a message at the sender and successfully decoding it at the receiver. Additionally, the capacity of the encoder channel can be measured in terms of the amount of mutual information between $x$ and $z$, denoted by $\text{I}({x};{z})$ BIBREF17. <<<Reconstruction vs. KL>>> The reconstruction loss can naturally measure distortion ($D := - \big \langle \log p_\theta ({x}|{z}) \big \rangle $), while the KL term quantifies the amount of compression (rate; $R := D_{KL}[q_\phi ({z}|{x})|| p({z})]$) by measuring the divergence between a channel that transmits zero bits of information about $x$, denoted by $p(z)$, and the encoder channel of VAEs, $q_\phi (z|x)$. BIBREF18 introduced the $H-D \le \text{I}({x};{z}) \le R$ bounds, where $H$ is the empirical data entropy (a constant). These bounds on mutual information allow us to analyze the trade-off between the reconstruction and KL terms in eqn. (DISPLAY_FORM2). For instance, since $\text{I}({x};{z})$ is non-negative (using Jensen's inequality), the posterior collapse can be explained as the situation where $\text{I}({x};{z})=0$: the encoder transmits no information about $x$, causing $R=0, D=H$. Increasing $\text{I}({x};{z})$ can be encouraged by increasing both bounds: increasing the upper bound (KL term) can be seen as the means to control the maximum capacity of the encoder channel, while reducing the distortion (reconstruction loss) will tighten the bound by pushing the lower bound to its limits ($H-D\rightarrow H$). A similar effect on the lower bound can be encouraged by using stronger decoders which could potentially decrease the reconstruction loss. Hence, having a framework that permits the use of strong decoders while avoiding the posterior collapse is desirable. Similarly, the channel capacity can be decreased. <<</Reconstruction vs. KL>>>
<<<Explicit KL Control via $\beta $-VAE>>> Given the above interpretation, we now turn to a slightly different formulation of the ELBO based on $\beta $-VAE BIBREF15. This allows control of the trade-off between the reconstruction and KL terms, as well as setting an explicit KL value. While $\beta $-VAE offers to regularise the ELBO via an additional coefficient $\beta \in {\rm I\!R}^+$, a simple extension BIBREF16 of its objective function incorporates an additional hyperparameter $C$ to explicitly control the magnitude of the KL term, i.e., maximising $\big \langle \log p_\theta ({x}|{z}) \big \rangle - \beta \big |D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big ) - C\big |$, where $C \in {\rm I\!R}^+$ and $| . |$ denotes the absolute value. While we could apply constraint optimization to impose the explicit constraint of $\text{KL}\!\!=\!\!C$, we found that the above objective function satisfies the constraint (see Experiments). Alternatively, it has been shown BIBREF21 that a similar effect could be reached by replacing the second term in eqn. DISPLAY_FORM6 with $\max \big (C,D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )\big )$, at the risk of breaking the ELBO when $\text{KL}\!\!<\!\!C$ BIBREF22. <<</Explicit KL Control via $\beta $-VAE>>> <<</Kullback-Leibler Divergence in VAE>>> <<<Experiments>>> We conduct various experiments to illustrate the properties that are encouraged via different KL magnitudes. In particular, we revisit the interdependence between rate and distortion, and shed light on the impact of KL on the sharpness of the approximated posteriors. Then, through a set of qualitative and quantitative experiments for text generation, we demonstrate how certain generative behaviours could be imposed on VAEs via a range of maximum channel capacities. Finally, we run some experiments to find if any form of syntactic information is encoded in the latent space. For all experiments, we use the objective function of eqn. DISPLAY_FORM6 with $\beta =1$. We do not use larger $\beta $s because the constraint $\text{KL}=C$ is always satisfied. <<<Corpora>>> We use 5 different corpora covering different domains and sizes throughout this section: Yelp and Yahoo BIBREF4 both have ($100k$,$10k$,$10k$) sentences in (train, dev, test) sets and $20k$ words in vocabulary, Children's Book Test (CBT; BIBREF23) has ($192k$,$10k$,$12k$) sentences and $12k$ vocab, Wikipedia (WIKI; BIBREF24) has ($2m$,$270k$,$270k$) sentences and $20k$ vocab, and WebText BIBREF25 has ($1m$,$23k$,$24k$) sentences and $22k$ vocab. <<</Corpora>>> <<<Models>>> We examine three VAE architectures, covering a range of decoding strengths, to examine if the objective function in eqn. DISPLAY_FORM6 is immune to posterior collapse regardless of the choice of encoder-decoder architectures: $\beta _C$-VAELSTM with (LSTM encoder, LSTM decoder), $\beta _C$-VAEGRU with (GRU encoder, GRU decoder) BIBREF26, and $\beta _C$-VAECNN with (LSTM encoder, CNN decoder) BIBREF27. The dimension of word embeddings is 256 and the dimension of the latent variable is 64. The encoder and the decoder, for both VAELSTM and VAEGRU, have a hidden size of 512 dimensions. VAECNN has exactly the same encoder as VAELSTM, while the decoder follows a similar architecture to GLU with a bottleneck structure (with two blocks) BIBREF27 and has 512 channels externally and 128 internally for the convolutions, with a filter size of 20. All models were trained for 10 epochs and optimised the objective function (eqn. DISPLAY_FORM6) with Adam BIBREF28 with the following learning rates: $10^{-5}\times 85$ for VAEGRU and VAELSTM, and $10^{-4}$ for VAECNN.
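As a rough illustration of the constrained objective above, here is a minimal sketch of the per-batch loss. It is written in PyTorch, which is an assumption (the framework is not stated in the text), and the function and argument names are illustrative: the reconstruction NLL plus $\beta \,|D_{KL} - C|$, with the KL computed in closed form for a diagonal Gaussian posterior.

import torch

def beta_c_vae_loss(recon_nll, mu, logvar, C, beta=1.0):
    # KL( q(z|x) || N(0, I) ) in closed form for a diagonal Gaussian posterior,
    # summed over latent dimensions and averaged over the batch
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
    # reconstruction NLL plus beta * |KL - C|; beta = 1 is used in the experiments above
    loss = recon_nll + beta * torch.abs(kl - C)
    return loss, kl

With $\beta = 1$, monitoring the returned KL value during training is what allows checking that it stays close to $C$, as reported in the next section.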
To couple the encoder with the decoder we concatenate the latent variable to word embeddings at each time step without initialisation of hidden state. <<</Models>>> <<<Rate and Distortion>>> To analyse the dependence between the values of explicit rate ($C$) and distortion, we trained our models with different values of $C$, ranging from 10 to 100. Figure FIGREF8 reports the results for $\beta _C$-VAEGRU, $\beta _C$-VAELSTM, and $\beta _C$-VAECNN models on Yahoo and Yelp corpora. In all our experiments we found that $C\!-\!1\!\le KL\!\le \! C\!+\!1$, demonstrating that the objective function effectively imposed the desired constraint on KL term. Hence, setting any $C>0$ can in practice avoid the collapse issue. The general trend is that by increasing the value of $C$ one can get a better reconstruction (lower distortion) while the amount of gain varies depending on the VAE's architecture and corpus. Additionally, we measured rate and distortion on CBT, WIKI, and WebText corpora using $\beta _C$-VAELSTM and observed the same trend with the increase of $C$, see Table TABREF12. This observation is consistent with the bound on $\text{I}({x};{z})$ we discussed earlier (expl) such that with an increase of KL we increase an upper bound on $\text{I}({x};{z})$ which in turn allows to have smaller values of reconstruction loss. Additionally, as reported in Table TABREF12, encouraging higher rates (via larger $C$) encourages more active units (AU; BIBREF29) in the latent code $z$. As an additional verification, we also group the test sentences into buckets based on their length and report BLEU-2/4 and ROUGE-2/4 metrics to measure the quality of reconstruction step in Table TABREF12. As expected, we observe that increasing rate has a consistently positive impact on improving BLEU and ROUGE scores. <<</Rate and Distortion>>> <<<Aggregated Posterior>>> To understand how the approximated posteriors are being affected by the magnitude of the KL, we adopted an approach from BIBREF6 and looked at the divergence between the aggregated posterior, $q_\phi (z)=\sum _{x\sim q(x)} q_\phi (z|x)$, and prior $p(z$). Since during generation we generate samples from the prior, ideally we would like the aggregated posterior to be as close as possible to the prior. We obtained unbiased samples of ${z}$ first by sampling an ${x}$ from data and then ${z} \sim q_\phi ({z}|{x})$, and measured the log determinant of covariance of the samples ($\log \det (\mathrm {Cov}[q_\phi ({z})])$). As reported in Figure FIGREF8, we observed that $\log \det (\mathrm {Cov}[q_\phi ({z})])$ degrades as $C$ grows, indicating sharper approximate posteriors. We then consider the difference of $p(z)$ and $q(z)$ in their means and variances, by computing the KL divergence from the moment-matching Gaussian fit of $q(z)$ to $p(z)$: This returns smaller values for $\beta _{C=5}$-VAEGRU (Yelp: 0, Yahoo: 0), and larger values for $\beta _{C=100}$-VAEGRU (Yelp: 8, Yahoo: 5), which illustrates that the overlap between $q_\phi ({z})$ and $p(z)$ shrinks further as $C$ grows. The above observation is better pronounced in Table TABREF12, where we also report the mean ($||\mu ||^2_2$) of unbiased samples of $z$, highlighting the divergence from the mean of the prior distribution as rate increases. Therefore, for the case of lower $C$, the latent variables observed during training are closer to the generated sample from the prior which makes the decoder more suitable for generation purpose. We will examine this hypothesis in the following section. 
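The aggregated-posterior diagnostics used above — the log-determinant of the covariance of samples $z \sim q_\phi ({z})$, the KL divergence of the moment-matched Gaussian fit of $q(z)$ from the prior, and $||\mu ||^2_2$ — can be sketched as follows. This is a NumPy illustration under the stated sampling procedure, not the authors' code.

import numpy as np

def aggregated_posterior_stats(z_samples):
    # z_samples: array of shape (n, d); unbiased samples obtained by sampling x
    # from the data and then z ~ q(z|x)
    mu = z_samples.mean(axis=0)
    cov = np.cov(z_samples, rowvar=False)
    _, logdet = np.linalg.slogdet(cov)                 # log det Cov[q(z)]
    d = z_samples.shape[1]
    # KL( N(mu, cov) || N(0, I) ): divergence of the moment-matched fit from the prior
    kl_fit = 0.5 * (np.trace(cov) + mu @ mu - d - logdet)
    return logdet, kl_fit, float(mu @ mu)              # and ||mu||^2_2 as in Table TABREF12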
<<</Aggregated Posterior>>> <<<Text Generation>>> To empirically examine how channel capacity translates into the generative capacity of the model, we experimented with the $\beta _C$-VAELSTM models from Table TABREF12. To generate a novel sentence, after a model was trained, a latent variable $z$ is sampled from the prior distribution and then transformed into a sequence of words by the decoder $p(x|z)$. During decoding for generation we try three decoding schemes: (i) Greedy, which selects the most probable word at each step; (ii) Top-k BIBREF30, which at each step samples from the $k$ most probable words; and (iii) Nucleus Sampling (NS) BIBREF31, which at each step samples from a flexible subset of the most probable words chosen based on their cumulative mass (set by a threshold $p$, where $p = 1$ means sampling from the full distribution). While similar to Top-k, the benefit of the NS scheme is that the vocabulary size at each time step of decoding varies, a property that encourages diversity and avoids the degenerate text patterns of greedy or beam search decoding BIBREF31. We experiment with NS $(p=\lbrace 0.5, 0.9\rbrace )$ and Top-k $(k=\lbrace 5, 15\rbrace )$. <<<Qualitative Analysis>>> We follow the settings of the homotopy experiment BIBREF2, where first a set of latent variables was obtained by performing a linear interpolation between $z_1 \sim p(z)$ and $z_2 \sim p(z)$. Then each $z$ in the set was converted into a sequence of words by the decoder $p(x|z)$. Besides the initial motivation of BIBREF2 to examine what neighbouring latent codes look like, our additional incentive is to analyse how sensitive the decoder is to small variations in the latent variable when trained with different channel capacities, $C=\lbrace 3,15,100\rbrace $. Table TABREF17 shows the generated sentences via different decoding schemes for each channel capacity. For space reasons, we only report the generated sentences for greedy, Top-$k=15$, and NS $p=0.9$. To make the generated sequences comparable across different decoding schemes or $C$ values, we use the same samples of $z$ for decoding. <<<Sensitivity of Decoder>>> To examine the sensitivity of the decoder to variations of the latent variable, we consider the sentences generated with the greedy decoding scheme (the first column in Table TABREF17). The other two schemes are not suitable for this analysis as they include a sampling procedure, which means that if we decode the same latent variable twice we will get two different sentences. We observed that with lower channel capacity ($C=3$) the decoder tends to generate identical sentences for the interpolated latent variables (we highlight these sentences in gray), exhibiting the decoder's lower sensitivity to variations of $z$. However, with the increase of channel capacity ($C=15,100$) the decoder becomes more sensitive. This observation is further supported by the increasing pattern of active units in Table TABREF12: given that AU increases with the increase of $C$, one would expect the activation pattern of a latent variable to become more complex as it comprises more information, so a small change in the pattern would have a greater effect on the decoder. <<</Sensitivity of Decoder>>> <<<Coherence of Sequences>>> We observe that the model trained with large values of $C$ compromises the sequences' coherence during sampling. This is especially evident when we compare $C=3$ with $C=100$.
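For concreteness, a minimal sketch of a single decoding step under the three schemes described above (greedy, Top-k, and Nucleus Sampling) is given below; it assumes a PyTorch vector of vocabulary logits, and the names and defaults are illustrative rather than the authors' implementation.

import torch

def sample_next_token(logits, scheme="nucleus", k=15, p=0.9):
    probs = torch.softmax(logits, dim=-1)
    if scheme == "greedy":
        return int(probs.argmax())                       # most probable word
    if scheme == "top-k":
        top_p, top_i = probs.topk(k)                     # k most probable words
        top_p = top_p / top_p.sum()                      # renormalise over the k words
        return int(top_i[torch.multinomial(top_p, 1)])
    # nucleus sampling: smallest set of words whose cumulative mass exceeds p
    sorted_p, sorted_i = probs.sort(descending=True)
    cutoff = int((sorted_p.cumsum(0) < p).sum()) + 1     # flexible vocabulary size
    nucleus_p = sorted_p[:cutoff] / sorted_p[:cutoff].sum()
    return int(sorted_i[torch.multinomial(nucleus_p, 1)])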
Analysis of the Top-15 and NS (p=0.9) generated samples reveals that the lack of coherence is not due to the greedy decoding scheme per se, and can be attributed to the model in general. To understand this behavior further, we need two additional results from Table TABREF12: LogDetCov and $||\mu ||^2_2$. One can notice that as $C$ increases, LogDetCov decreases and $||\mu ||^2_2$ increases. This indicates that the aggregated posterior moves further apart from the prior, hence the latent codes seen during training diverge more from the codes sampled from the prior during generation. We speculate this contributes to the lack of coherence of the generated samples, as the decoder is not equipped to decode prior samples properly at higher $C$s. <<</Coherence of Sequences>>> <<</Qualitative Analysis>>> <<<Quantitative Analysis>>> Quantitative analysis of generated text without gold reference sequences (e.g., in Machine Translation or Summarization) has been a long-standing challenge. Recently, there have been efforts towards this direction, with proposals such as self-BLEU BIBREF32, forward cross entropy (FCE) BIBREF33 and Fréchet InferSent Distance BIBREF33. We opted for FCE as a complementary metric to our qualitative analysis. To calculate FCE, first a collection of synthetic sentences is generated by sampling $z\sim p(z)$ and decoding the samples into sentences. The synthetic sequences are then used to train a language model (an LSTM with the parametrisation of our decoder). The FCE score is estimated by reporting the negative log likelihood (NLL) of the trained LM on the set of human-generated sentences. We generated synthetic corpora using the trained models from Table TABREF12 with different $C$ values and decoding schemes, using the exact same $z$ samples for all corpora. Since the corpora generated using different $C$ values would have different coverage of the words in the test set (i.e., Out-of-Vocabulary ratios), we used a fixed vocabulary to minimize the effect of different vocabularies in our analysis. Our dictionary contains the words that are common to all of the three corpora, while the rest of the words that don't exist in this dictionary are replaced with the 〈unk〉 symbol. Similarly, we used this fixed dictionary to preprocess the test sets. Also, to reduce bias towards a particular set of sampled $z$'s, we measured the FCE score three times, each time sampling a new training corpus from a $\beta _C$-VAELSTM decoder and training an LM from scratch. In Table TABREF20 we report the average FCE (NLL) for the generated corpora. In the qualitative analysis we observed that the text generated by the $\beta _C$-VAELSTM trained with a large value of $C$ ($C=100$) exhibits lower quality (i.e., in terms of coherence). This observation is supported by the FCE score of the NS(p=0.9) decoding scheme (Table TABREF20), since the performance drops when the LM is trained on the corpus generated with $C=100$. The corpora generated with $C=3$ and $C=15$ achieve similar FCE scores. However, these patterns are reversed for the Greedy decoding scheme, where the general tendency of the FCE scores suggests that for larger values of $C$ the $\beta _C$-VAELSTM seems to generate text which better approximates the natural sentences in the test set. To understand this further, we report additional statistics in Table TABREF20: the percentage of 〈unk〉 symbols, self-BLEU, and the average sentence length in the corpus.
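As a rough illustration of the self-BLEU diversity statistic referred to above (an illustration only, not the exact implementation used for Table TABREF20), each generated sentence can be scored against all of the other generated sentences as references; lower self-BLEU indicates a more diverse corpus. The sketch assumes NLTK and pre-tokenised sentences.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(corpus, max_n=4):
    # corpus: list of tokenised sentences (lists of tokens)
    weights = tuple(1.0 / max_n for _ in range(max_n))
    smooth = SmoothingFunction().method1
    scores = []
    for i, hypothesis in enumerate(corpus):
        references = corpus[:i] + corpus[i + 1:]          # every other generated sentence
        scores.append(sentence_bleu(references, hypothesis,
                                    weights=weights, smoothing_function=smooth))
    return sum(scores) / len(scores)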
The average sentence length in the generated corpora is very similar for both decoding schemes, removing the possibility that the pathological pattern in the FCE scores was caused by differences in sentence length. However, we observe that for Greedy decoding more than $30\%$ of the test set consists of 〈unk〉. Intuitively, seeing more evidence of this symbol during training would improve our estimate for the 〈unk〉. As reported in the table, the $\%$unk increases on almost all corpora as $C$ grows, which then translates into a better FCE score at test time. Therefore, we believe that FCE at high $\%$unk is not a reliable quantitative metric to assess the quality of the generated synthetic corpora. Furthermore, for Greedy decoding, self-BLEU decreases when $C$ increases. This suggests that the generated sentences for higher values of $C$ are more diverse. Hence, the LM trained on more diverse corpora can generalise better, which in turn affects the FCE. In contrast, the effect the 〈unk〉 symbol has on the corpora generated with the NS(p=0.9) decoding scheme is minimal, for two reasons: First, the vocabulary size of the generated corpora, for all values of $C$, is close to that of the original corpus (the corpus we used to train the $\beta _C$-VAELSTM). Second, the vocabularies of the corpora generated with the three values of $C$ are very close to each other. As a result, minimal replacement of words with the 〈unk〉 symbol is required, making the experiment more reflective of the quality of the generated text. Similarly, self-BLEU for NS(p=0.9) is the same for all values of $C$. This suggests that the diversity of sentences has minimal, if any, effect on the FCE. <<</Quantitative Analysis>>> <<</Text Generation>>> <<<Syntactic Test>>> In this section, we explore if any form of syntactic information is captured by the encoder and represented in the latent codes despite the lack of any explicit syntactic signal during the training of the $\beta _C$-VAELSTM. To train the models we used the same WIKI data set as in BIBREF24, but we filtered out all the sentences that are longer than 50 space-separated tokens. We use the data set of BIBREF24, which consists of pairs of grammatical and ungrammatical sentences, to test various syntactic phenomena. For example, a pair in the subject-verb agreement category would be: (The author laughs, The author laugh). We encode both the grammatical and ungrammatical sentences into the latent codes $z^+$ and $z^-$, respectively. Then we condition the decoder on $z^+$ and try to determine whether the decoder assigns a higher probability to the grammatical sentence (denoted by $x^+$): $p(x^-|z^+) < p(x^+|z^+)$ (denoted by p1 in Table TABREF28). We repeat the same experiment but this time try to determine whether the decoder, when conditioned on the ungrammatical code ($z^-$), still prefers to assign a higher probability to the grammatical sentence: $p(x^-|z^-) < p(x^+|z^-)$ (denoted by p2 in Table TABREF28). Table TABREF28 shows p1 and p2 for the $\beta _C$-VAELSTM model trained with $C=\lbrace 3,100\rbrace $. Both p1 and p2 are similar to an accuracy and correspond to how many times a grammatical sentence was assigned a higher probability. As reported, for C=3, p1 and p2 match in almost all cases. This is to some degree expected, since lower channel capacity encourages a more dominating decoder which, in our case, was trained on grammatical sentences from the WIKI.
On the other hand, this illustrates that despite avoiding the KL-collapse issue, the dependence of the decoder on the latent code is so negligible that the decoder hardly distinguishes the grammatical and ungrammatical inputs. This changes for $C=100$, as in almost all the cases the decoder becomes strongly dependent on the latent code and can differentiate between what it has seen as input and the closely similar sentence it hasn't received as the input: The decoder assigns larger probability to the ungrammatical sentence when conditioned on the $z^-$ and, similarly, larger probability to the grammatical sentence when conditioned on the $z^+$. However, the above observations neither confirm nor reject existence of grammar signal in the latent codes. We run a second set of experiments where we aim to discard sentence specific information from the latent codes by averaging the codes inside each syntactic category. The averaged codes are denoted by $\bar{z}^+$ and $\bar{z}^-$, and the corresponding accuracies are reported by p̄1 and p̄2 in Table TABREF28. Our hypothesis is that the only invariant factor during averaging the codes inside a category is the grammatical property of its corresponding sentences. As expected, due to the weak dependence of decoder on latent code, the performance of the model under $C=3$ is almost identical (not included for space limits) when comparing p1 vs. p̄1, and p2 vs. p̄2. However, for $C=100$ the performance of the model deteriorates. While we leave further exploration of this behavior to our future work, we speculate this could be an indication of two things: the increase of complexity in the latent code which encourages a higher variance around the mean, or the absence of syntactic signal in the latent codes. <<</Syntactic Test>>> <<</Experiments>>> <<<Discussion and Conclusion>>> In this paper we analysed the interdependence of the KL term in Evidence Lower Bound (ELBO) and the properties of the approximated posterior for text generation. To perform the analysis we used an information theoretic framework based on a variant of $\beta $-VAE objective, which permits explicit control of the KL term, and treats KL as a mechanism to control the amount of information transmitted between the encoder and decoder. The immediate impact of the explicit constraint is avoiding the collapse issue ($D_{KL}=0$) by setting a non-zero positive constraint ($C\ge 0$) on the KL term ($|D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )-C|$). We experimented with a range of constraints ($C$) on the KL term and various powerful and weak decoder architectures (LSTM, GRU, and CNN), and empirically confirmed that in all cases the constraint was satisfied. We showed that the higher value of KL encourages not only divergence from the prior distribution, but also a sharper and more concentrated approximated posteriors. It encourages the decoder to be more sensitive to the variations on the latent code, and makes the model with higher KL less suitable for generation as the latent variables observed during training are farther away from the prior samples used during generation. To analyse its impact on generation we conducted a set of qualitative and quantitative experiments. In the qualitative analysis we showed that small and large values of KL term impose different properties on the generated text: the decoder trained under smaller KL term tends to generate repetitive but mainly plausible sentences, while for larger KL the generated sentences were diverse but incoherent. 
This behaviour was observed across three different decoding schemes and complemented by a quantitative analysis where we measured the performance of an LSTM LM trained on different VAE-generated synthetic corpora via different KL magnitudes, and tested on human generated sentences. Finally, in an attempt to understand the ability of the latent code in VAEs to represent some form of syntactic information, we tested the ability of the model to distinguish between grammatical and ungrammatical sentences. We verified that at lower (and still non-zero) KL the decoder tends to pay less attention to the latent code, but our findings regarding the presence of a syntactic signal in the latent code were inconclusive. We leave it as a possible avenue to explore in our future work. Also, we plan to develop practical algorithms for the automatic selection of the $C$'s value, and verify our findings under multi-modal priors and complex posteriors. <<</Discussion and Conclusion>>> <<</Title>>>
{ "references": [ "by setting a non-zero positive constraint ($C\\ge 0$) on the KL term ($|D_{KL}\\big (q_\\phi ({z}|{x}) || p({z})\\big )-C|$)" ], "type": "extractive" }
2003.01472
Please answer the following question with yes or no based on the given text. You only need to output 'Yes' or 'No' without any additional explanation. Question: Did they experiment with the tool? Context: <<<Title>>> Seshat: A tool for managing and verifying annotation campaigns of audio data <<<Abstract>>> We introduce Seshat, a new, simple and open-source software tool to efficiently manage annotations of speech corpora. The Seshat software allows users to easily customise and manage annotations of large audio corpora while ensuring compliance with the formatting and naming conventions of the annotated output files. In addition, it includes procedures for checking that the content of annotations follows specific rules implemented in personalised parsers. Finally, we propose a double-annotation mode, for which Seshat automatically computes an associated inter-annotator agreement with the $\gamma$ measure, taking into account categorisation and segmentation discrepancies. <<</Abstract>>> <<<Introduction>>> Large corpora of speech, obtained in the laboratory and in naturalistic conditions, are becoming easier to collect. This new trend broadens the scope of scientific questions on speech and language that can be answered. However, this poses an important challenge for the construction of reliable and usable annotations. Managing annotators and ensuring the quality of their annotations are highly demanding tasks for research endeavours and industrial projects BIBREF0. When organised manually, the manager of annotation campaigns usually faces three major problems: the mishandling of files (e.g., character-encoding problems, incorrect naming of files), the non-conformity of the annotations BIBREF1, and the inconsistency of the annotations BIBREF2. In this paper, we introduce Seshat, a system for the automated management of annotation campaigns for audio/speech data which addresses these challenges. It is built on two components that communicate via a RESTful API: a back-end (server) written in Flask and a front-end (client) in Angular Typescript. Seshat is easy to install for non-developers and easy to use for researchers and annotators, while having some extension capabilities for developers. In Section SECREF2, we describe the related work on annotation tools, which do not provide solutions to all the aforementioned challenges during corpus creation. In Section SECREF3, we give an overview of the different functionalities of the software. Then, we explain, in Section SECREF4, the architecture of the software, as well as the UX/UI design and engineering choices that have been made to facilitate the usage of the platform. We describe how to use Seshat in Section SECREF5, and Section SECREF6 presents two specific use cases. Finally, we conclude and describe future plans for Seshat in Section SECREF7. <<</Introduction>>> <<<Related Work>>> Self-hosted annotation systems. There are many standalone solutions for the transcription of speech data that are already used by researchers: Transcriber BIBREF3, Wavesurfer BIBREF4, Praat BIBREF5, ELAN BIBREF6, XTrans BIBREF7. These systems allow the playback of sound data and the construction of different layers of annotations with various specifications, with some advanced capabilities (such as annotations with hierarchical or no relationship between layers, number of audio channels, video support).
Yet, these solutions lack a management system: each researcher must track the files assigned to annotators and build a pipeline to parse (and eventually check) the output annotation files. Moreover, checking can only be done once the annotations have been submitted to the researchers. This task quickly becomes untraceable as the number of files and annotators grows. In addition, most of these transcription systems do not provide a way to evaluate consistency (intra- and inter-annotator agreement) that would be appropriate for speech data BIBREF8. Web-based annotation systems. There are several web-based annotation systems for the annotation of audio data. Among them we find light-weight systems, like the VIA software BIBREF9 or Praat on the web BIBREF10, that allow building simple layers of annotations. However, they do not provide a proper management system for a pool of annotators, nor do they integrate annotation checking. On the other side of the spectrum, there are more sophisticated systems with various capabilities. Camomille BIBREF11 and the EMU-SDMS system (which can also be used offline) BIBREF12 allow working with speech data and distributing the tasks to several annotators. But these systems require expertise in web hosting and technologies to deploy and modify them. Finally, WebAnno BIBREF13 and GATE Teamware BIBREF14 are the tools that most closely match our main contributions regarding quality control (conformity and consistency checking), annotator management and flexibility. WebAnno includes consistency checking with the integration of different metrics BIBREF15. However, these tools have only been built for text data. The format and all the custom layers have been designed for Natural Language Processing tasks. Porting WebAnno to support speech data seemed a major engineering challenge. That is why it appeared necessary to develop a new and user-friendly tool addressed to the speech community. <<</Related Work>>> <<<Overview of Seshat>>> Seshat is a user-friendly web-based interface whose objective is to smoothly manage large campaigns of audio data annotation, see Figure FIGREF8. Below, we describe the terms used in Seshat's workflow: Audio Corpus: a set of audio/speech files that a Campaign Manager wants to annotate. It is indicated either by a folder containing sound files, or by a CSV summarizing a set of files. We support the same formats as Praat so far: WAV, FLAC and MP3. Annotation Campaign: an object that enables the Campaign Manager to assign Annotation Tasks to the Annotators. It references a Corpus, and allows the Manager to track the annotation tasks' progress and completion in real time. At its creation, a Textgrid Checking Scheme can also be defined for that campaign. Annotation Task: a task contained in an Annotation Campaign; it references an audio file from the campaign's designated Audio Corpus and is assigned to Annotators. It can either be a Single Annotator Task (assigned to one Annotator) or a Double Annotator Task (assigned to two annotators, who will annotate the assigned task in parallel). Textgrid Checking Scheme: a set of rules defining the TextGrid files' structure and the content of the annotations. It is set at the beginning of the Annotation Campaign's creation, and is used to enforce that all TextGrids from the campaign contain the same number of Tiers, with the same names. It can also enforce, for certain chosen tiers, a set of valid annotations. Campaign Manager: users with the rights to create Annotation Campaigns and Annotator user accounts, and to assign Annotation Tasks to Annotators.
Annotator: users who are assigned a set of Annotation Tasks. Their job is to complete the annotation of the audio files with the Praat software. If the TextGrid file they submit does not comply with their Annotation Task's TextGrid Checking Scheme, Seshat pinpoints their annotation errors with detailed messages. The annotator can then re-submit the file to the platform based on this feedback. Once they are connected to their instance of Seshat, campaign managers can access ongoing annotation campaigns or create new ones. Campaign managers are able to add annotators, assign annotation tasks and track progress. Annotators see a list of assigned tasks. The first step for them is to download the sound file with its corresponding auto-generated template TextGrid. In the current implementation, the annotation work has to be done locally with Praat. An upcoming version will make use of web tools like Praat on the web BIBREF10. Once the task is completed, the TextGrid file is to be uploaded to Seshat via the web interface. We used the TextGrid format because of the wide acceptance of the Praat software in the speech science community (e.g., language acquisition research, clinical linguistics, phonetics and phonology). The Textgrid Checking Scheme, which encompasses rules on tier naming, file structure, and the content of the annotations, is associated with a specific campaign and defined at the creation of the campaign. The Seshat back-end will automatically check that the submitted TextGrid file conforms to the Annotation Campaign's Textgrid Checking Scheme. Seshat allows the campaign manager to create two types of tasks: single annotator, and double annotator. Regarding the first type, one audio file is attributed to one annotator. Once the annotation is completed, Seshat automatically checks the conformity of the annotation, and only declares a task completed if the conformity check is passed. Regarding the second type, one audio file is attributed to two annotators. The two annotators annotate the same file independently, then the two versions are merged and the annotators are guided through a compare and review process to agree on one final version. We summarise in Figure FIGREF7 the different steps of the double-annotator task. At each step during merging, the two annotators are provided feedback to focus on where the disagreements are. This process also results in the computation of an inter-annotator agreement for each file. The double annotator task can be used to train new annotators alongside experts. Annotating speech data is a joint task of segmentation and categorisation of audio events. That is why we adopted the $\gamma $ measure BIBREF8 to evaluate the inter- or intra-annotator agreement in each individual tier. Campaign managers can customise the distance used by $\gamma $ by inserting a custom distance along with their own parser (see the short code snippet for a parser of French phonetics with the SAMPA alphabet in Algorithm ). <<</Overview of Seshat>>> <<<Development>>> <<<Engineering choices>>> Our utmost priority when building Seshat was to make it as easy as possible for others to deploy, use, administer and eventually contribute to. To do so, we chose the most common frameworks that are free and open-source, all of which are detailed in the following sections.
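As a rough, hypothetical illustration of the kind of conformity check implied by the Textgrid Checking Scheme described above (required tier names and, for some tiers, a closed set of valid labels), the sketch below is illustrative only; it is not Seshat's actual checking code, and the data structures are invented for the example.

# Hypothetical sketch of a Textgrid Checking Scheme style conformity check.
REQUIRED_TIERS = {"Patient", "Non-Patient", "Noise"}     # example tier names
VALID_LABELS = {"Noise": {"Noise"}}                      # tiers with pre-defined categories

def check_textgrid(tiers):
    # `tiers` maps a tier name to the list of annotation labels found in that tier
    errors = []
    missing = REQUIRED_TIERS - set(tiers)
    if missing:
        errors.append(f"missing tiers: {sorted(missing)}")
    for name, allowed in VALID_LABELS.items():
        for label in tiers.get(name, []):
            if label not in allowed:
                errors.append(f"tier '{name}': invalid label '{label}'")
    return errors                                        # an empty list means the file conforms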
Additionally, to match the current trend in web development, we decided to use the so-called "web-app" architecture for Seshat, i.e., we separated the application into two distinct entities: a front-end, running in the browser, and a back-end, serving data to the front-end and interacting with the database. <<<Back-end Choices>>> The back-end system runs on a server. It holds and updates the campaign databases and runs the annotation checking and inter-rater agreement evaluation services. We chose Python, given its widespread use in the scientific community, with a wide array of speech and linguistic packages. Moreover, its usage on the back-end side will allow the future integration of powerful speech processing tools like Pyannote BIBREF16 to semi-automatize annotations. We thus went for Python 3.6 for Seshat's server back-end. We used the Flask-Smorest extension (which is based on Flask) to clearly and thoroughly document our API, which can be exported to the popular OpenAPI 3.0.2 RESTful API description format. The files and server data are stored in a MongoDB database, chosen for its flexible document model and general ease of use. We used the Object-Relational Mapping (ORM) MongoEngine to define our database schemas and interact with that database. MongoDB's GridFS system also allowed us to store annotation files (which are usually very light-weight) directly in the database, instead of going through the file system. <<</Back-end Choices>>> <<<Front-end Choices>>> The front-end handles all of the interactions between the users (campaign manager or annotator) and the database. It is implemented as an App within their browser. We decided to base Seshat's front-end on the Angular Typescript framework. Despite its steep learning curve, it enforces strict design patterns that guarantee that others can make additions to our code without jeopardising the stability of the App. Angular Typescript has wide community support in the web development industry and is backed by Google and Microsoft. Moreover, the fact that it is based on TypeScript alleviates the numerous shortcomings of JavaScript, ensuring our implementation's readability and stability. <<</Front-end Choices>>> <<</Engineering choices>>> <<<UX/UI Choices>>> The interface and the features we selected for our implementation are the product of a year-long iterative process involving a team of annotators, two campaign managers and software engineers. We followed some guiding principles from the recent Material design language. Our goal while designing our interface (with the help of a professional designer) was to make it fully usable by non-technical people. We also put some extra care into the annotators' interface to give them a clear sense of what is to be done, how they should follow the annotation protocol, and how to correct potential errors in their annotations (see Figure FIGREF21). The goal was to reduce the number of actions annotators have to perform and enable them to focus only on the annotation content. <<</UX/UI Choices>>> <<</Development>>> <<<Using Seshat>>> <<<Installation and Setup>>> Setting up a modern fully-fledged web service is an arduous task, usually requiring a seasoned system administrator as well as sometimes having very precise system requirements. Luckily, the Docker virtualisation platform ensures that anyone with a recent-enough install of that software can set up Seshat in about one command (while still allowing some flexibility via a configuration file).
For those willing to have a more tightly-controlled installation of Seshat on their system, we also fully specify the manual installation steps in our online documentation. Importing an audio corpus that you are willing to annotate is as easy as dropping files into a default `corpora/` folder. It is possible to either drop a folder containing audio files (with no constraints on the folder's structure), or a CSV file listing audio filenames along with their durations (in case the files are sensitive and you're not willing to risk them being hosted on the server). It is then possible to review the automatically imported files via the web interface. <<</Installation and Setup>>> <<<Launching and monitoring an annotation campaign>>> The campaign manager can easily define and monitor annotation campaigns. As shown in Figure FIGREF33, the online form enables choosing corpora and pre-defining and pre-configuring the annotation scheme (tiers and parsers). There are 2 types of tiers already implemented by default: one with no check at all, and one with pre-defined categories. For the latter, these categories are pre-defined when the campaign is created. Only campaign managers can access and build new campaigns. If campaign managers have several campaigns, they can easily switch between them via the menu bar or get a full overview with the dashboard (see Figure FIGREF26). The campaign managers can visualise the progress of the assigned tasks at the campaign level or, more precisely, at the task level. They can retrieve all the intermediate files that have been created for each task. For instance, the campaign manager can examine qualitatively and quantitatively what the annotation differences are before the merge phase of the double-annotator task. <<</Launching and monitoring an annotation campaign>>> <<<Scripting API>>> For those willing to interact with Seshat using code, it is possible to do so using either its RESTful API or its command-line interface (CLI). The API endpoints that can be called are all listed in a simple interface, and calls can be made from any programming language able to make HTTP requests. The CLI can be used via the terminal, and can therefore be interacted with using Bash scripts. A typical usage of these features would be to assign annotation tasks from a large speech corpus (spoken by several speakers) to a large pool of annotators, all the while making sure each annotator has a similar number of tasks, with each speaker being evenly distributed among annotators as well. This would be tedious to do manually via the user interface, but easy to program in any scripting language. <<</Scripting API>>> <<<Annotation Parser Customisation>>> We aimed at a reasonable trade-off between simplicity and flexibility for the TextGrid annotation checking component. However, we understand (from our own experience in particular) that annotations can sometimes follow a very specific and complex standard (for instance, parsing SAMPA phoneme strings). To allow users to define their own annotation standards, we added the possibility for users to define an annotation parser, via a simple package-based extension system (taking inspiration from pyannote's extension system). Anyone willing to create a new annotation parser has to be able to program in Python and have a minimal understanding of its packaging system.
As presented in our example French SAMPA Parser (Algorithm ), implementing a custom annotation parser only requires overloading two methods of Seshat's BaseCustomParser class: check-annotation, which takes an annotation string as input and raises an error if and only if the annotation is deemed invalid (it does not return anything); and distance, which takes two annotations as input and returns a float corresponding to the distance between these two annotations. <<</Annotation Parser Customisation>>> <<<Inter-rater agreement: the $\gamma $ measure>>> It is necessary to have a measure of confidence to obtain high-quality datasets and therefore to draw valid conclusions from annotations. Annotation tasks on audio and speech data usually have some specificities. The items to annotate have to be both segmented in time and categorised. The segments can be hierarchically defined or overlapping. In addition, the audio stream may require only sparse annotations (especially in-the-wild recordings, which contain a lot of non-speech segments). To evaluate speech annotations, the measure needs to take these characteristics into account. That is why we decided to re-implement and compute the $\gamma $ measure (see mathet2015unified for its design and the advantages of this measure over previous agreement measures). First, the $\gamma $ software aligns (tier-wise) the annotations of the different annotators. To align the two sets of annotations, the $\gamma $ measure computes the distance between all the individual units. The difference in position of two annotated units $u$ and $v$ is measured with a positional distance; if the tiers are categorical, the distance for the content of the annotated units $u$ and $v$ is defined by a categorical distance, which can be over-written by the custom parser as mentioned above. These two distances are summed with equal weights to obtain the distance between any two annotated units from the 2 annotators. Then, it is possible to obtain the disorder $\delta (a)$ of a specific alignment $a$ by summing the distances of all the aligned units in $a$. All possible alignments $a$ are considered and the one that minimises the disorder $\delta (a)$ is kept. To get the value of $\gamma $, the observed disorder is chance-corrected using an expected disorder, which is obtained by randomly re-sampling the annotations of the annotators. This means that real annotations are drawn from the annotators, and one position in the audio is randomly chosen; the annotation is split at this random position and the two parts are permuted. It is then possible to obtain an approximation of the expected disorder $\delta _e$. The final agreement measure is then defined as $\gamma = 1 - \delta (a) / \delta _e$, i.e., one minus the ratio of the observed (minimal) disorder to the expected disorder. This $\gamma $ measure is automatically computed by the back-end server for the double-annotator tasks. The campaign manager can retrieve these measures in Seshat by downloading a simple CSV file. <<</Inter-rater agreement: the $\gamma $ measure>>> <<</Using Seshat>>> <<<Use cases>>> We present two use cases on which Seshat was developed: clinical interviews, and daylong child-centered recordings. <<<Clinical interviews>>> Seshat was initially developed to study the impact of Huntington's Disease BIBREF17 on speech and language production. One hundred and fifty-two interviews between a neuropsychologist and a patient with Huntington's Disease (HD) were recorded between June 2018 and November 2019.
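A sketch of what such a custom parser could look like is given below, in the spirit of the French SAMPA example mentioned above. The import path, the snake_case rendering of check-annotation as check_annotation, the exact signatures, and the (deliberately incomplete) phoneme inventory are assumptions made for illustration; this is not Seshat's actual parser.

# Hypothetical sketch of a custom annotation parser for Seshat.
from seshat.parsers import BaseCustomParser  # hypothetical import path

FRENCH_SAMPA = {"a", "e", "i", "o", "u", "y", "E", "O", "2", "9", "@",
                "p", "b", "t", "d", "k", "g", "f", "v", "s", "z", "S", "Z",
                "m", "n", "N", "R", "l", "j", "w", "H"}   # incomplete, illustrative inventory

class FrenchSAMPAParser(BaseCustomParser):
    def check_annotation(self, annotation: str) -> None:
        # raise if any space-separated symbol is not in the SAMPA inventory above
        for symbol in annotation.split():
            if symbol not in FRENCH_SAMPA:
                raise ValueError(f"'{symbol}' is not a valid French SAMPA phoneme")

    def distance(self, annot_a: str, annot_b: str) -> float:
        # simple categorical distance used by the gamma measure: 0.0 if the two
        # annotations are identical, 1.0 otherwise; a phoneme-level edit distance
        # could be substituted here instead
        return 0.0 if annot_a == annot_b else 1.0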
The campaign manager created a campaign with multiple tiers to annotate the turn-takings and the speech/non-speech boundaries of the utterances of the patient. For both tasks, the annotations did not need to cover the audio completely (the sparsity property mentioned above). For the Turn-taking annotations, there are 3 pre-defined tiers, each one with a single class ('Patient', 'Non-Patient', and 'Noise'), which results in possible overlap between these classes. For the Utterance annotations, there is only one pre-defined class ('Utterance'). To date, a total of 67 files have been fully annotated with the help of Seshat by a cohort of 18 speech pathologist students (see Figure FIGREF33). Among these, 16 have been annotated by 2 different annotators independently with the Double-annotator task. The results are summarised in Table TABREF34. Even though there are more categories for Turn-Takings than for Utterance (gut2004measuring reported that the more categories there are, the more difficult the annotation task is in speech annotation), the mean $\gamma $ for the Turn-Takings ($\gamma = 0.64$) is slightly higher than the one for Utterance ($\gamma = 0.61$), and the range of values for the Turn-Takings is smaller than for the Utterance. Indeed, the speech pathologists reported difficulty in annotating the boundaries of utterances in spontaneous speech, with several ambiguous cases due to pauses. These results will help us to redefine the protocol and be more precise in the instructions given. <<</Clinical interviews>>> <<<In-the-wild child-centered recordings>>> The Seshat software is also currently used to annotate audio files in a study of day-long audio recordings captured by two devices (LENA BIBREF18, and a BabyCloud baby-logger device) worn by young children growing up in remote Papua New Guinea. The project aims at establishing language input and outcomes in this seldom-studied population. To establish reliability levels, 20 1-min files were double-annotated by 2 speech pathology students. Among the tasks given to the annotators there were: (1) locating the portions of Speech (Speech activity), (2) locating the speech produced by an adult that is directed to a child or not (Adult-Directed Speech versus Child-Directed Speech). As in the previous example, the annotations do not need to cover the full audio file. The Speech Activity task has only 1 class ('Speech') and the Addressee task has 2 classes ('ADS', 'CDS'). These recordings have been made in naturalistic and noisy conditions; moreover, the annotators do not understand the language. Probably as a result of these challenges, agreement between annotators is lower than in the Clinical interviews use case. This information is nonetheless valuable to the researchers, as it can help them appropriately lower their confidence in the ensuing speech quantity estimates. <<</In-the-wild child-centered recordings>>> <<</Use cases>>> <<<Conclusion and Future work>>> Seshat is a new tool for the management of audio annotation efforts. Seshat enables users to define their own campaigns of annotations. Based on this configuration, Seshat automatically enforces the format of the annotations returned by the annotators. Besides, we also added the capability to finely tailor the parsing of the annotations. Finally, Seshat provides automatic routines to compute inter-rater agreements that are specifically designed for audio annotations. Seshat lays some foundations for more advanced features, either for the interface or the annotation capabilities.
In future work, we plan to implement automatic task assignment and the integration of a diarization processing step to reduce human effort. Another planned feature is to add the possibility for the campaign manager to design more complex annotation workflows such as, for instance, dependencies between tiers or more intermediate annotation steps. <<</Conclusion and Future work>>> <<</Title>>>
{ "references": [ "Yes" ], "type": "boolean" }
2003.01472
Please answer the following question with yes or no based on the given text. You only need to output 'Yes' or 'No' without any additional explanation. Question: Is this software available to the public? Context: <<<Title>>> Seshat: A tool for managing and verifying annotation campaigns of audio data <<<Abstract>>> We introduce Seshat, a new, simple and open-source software tool to efficiently manage annotations of speech corpora. The Seshat software allows users to easily customise and manage annotations of large audio corpora while ensuring compliance with the formatting and naming conventions of the annotated output files. In addition, it includes procedures for checking that the content of annotations follows specific rules implemented in personalised parsers. Finally, we propose a double-annotation mode, for which Seshat automatically computes an associated inter-annotator agreement with the $\gamma$ measure, taking into account categorisation and segmentation discrepancies. <<</Abstract>>> <<<Introduction>>> Large corpora of speech, obtained in the laboratory and in naturalistic conditions, are becoming easier to collect. This new trend broadens the scope of scientific questions on speech and language that can be answered. However, this poses an important challenge for the construction of reliable and usable annotations. Managing annotators and ensuring the quality of their annotations are highly demanding tasks for research endeavours and industrial projects BIBREF0. When organised manually, the manager of annotation campaigns usually faces three major problems: the mishandling of files (e.g., character-encoding problems, incorrect naming of files), the non-conformity of the annotations BIBREF1, and the inconsistency of the annotations BIBREF2. In this paper, we introduce Seshat, a system for the automated management of annotation campaigns for audio/speech data which addresses these challenges. It is built on two components that communicate via a RESTful API: a back-end (server) written in Flask and a front-end (client) in Angular Typescript. Seshat is easy to install for non-developers and easy to use for researchers and annotators, while having some extension capabilities for developers. In Section SECREF2, we describe the related work on annotation tools, which do not provide solutions to all the aforementioned challenges during corpus creation. In Section SECREF3, we give an overview of the different functionalities of the software. Then, we explain, in Section SECREF4, the architecture of the software, as well as the UX/UI design and engineering choices that have been made to facilitate the usage of the platform. We describe how to use Seshat in Section SECREF5, and Section SECREF6 presents two specific use cases. Finally, we conclude and describe future plans for Seshat in Section SECREF7. <<</Introduction>>> <<<Related Work>>> Self-hosted annotation systems. There are many standalone solutions for the transcription of speech data that are already used by researchers: Transcriber BIBREF3, Wavesurfer BIBREF4, Praat BIBREF5, ELAN BIBREF6, XTrans BIBREF7. These systems allow the playback of sound data and the construction of different layers of annotations with various specifications, with some advanced capabilities (such as annotations with hierarchical or no relationship between layers, number of audio channels, video support).
Yet, these solutions lack a management system: each researcher must track the files assigned to annotators and build a pipeline to parse (and eventually check) the output annotation files. Moreover, checking can only be done once the annotations have been submitted to the researchers. This task quickly becomes untraceable as the number of files and annotators grows. In addition, most of these transcription systems do not provide a way to evaluate consistency (intra- and inter-annotator agreement) that would be appropriate for speech data BIBREF8. Web-based annotation systems. There are several web-based annotation systems for the annotation of audio data. Among them we find light-weight systems, like the VIA software BIBREF9 or Praat on the web BIBREF10, that allow building simple layers of annotations. However, they do not provide a proper management system for a pool of annotators, nor do they integrate annotation checking. On the other side of the spectrum, there are more sophisticated systems with various capabilities. Camomille BIBREF11 and the EMU-SDMS system (which can also be used offline) BIBREF12 allow working with speech data and distributing the tasks to several annotators. But these systems require expertise in web hosting and technologies to deploy and modify them. Finally, WebAnno BIBREF13 and GATE Teamware BIBREF14 are the tools that most closely match our main contributions regarding quality control (conformity and consistency checking), annotator management and flexibility. WebAnno includes consistency checking with the integration of different metrics BIBREF15. However, these tools have only been built for text data. The format and all the custom layers have been designed for Natural Language Processing tasks. Porting WebAnno to support speech data seemed a major engineering challenge. That is why it appeared necessary to develop a new and user-friendly tool addressed to the speech community. <<</Related Work>>> <<<Overview of Seshat>>> Seshat is a user-friendly web-based interface whose objective is to smoothly manage large campaigns of audio data annotation, see Figure FIGREF8. Below, we describe the terms used in Seshat's workflow: Audio Corpus: a set of audio/speech files that a Campaign Manager wants to annotate. It is indicated either by a folder containing sound files, or by a CSV summarizing a set of files. We support the same formats as Praat so far: WAV, FLAC and MP3. Annotation Campaign: an object that enables the Campaign Manager to assign Annotation Tasks to the Annotators. It references a Corpus, and allows the Manager to track the annotation tasks' progress and completion in real time. At its creation, a Textgrid Checking Scheme can also be defined for that campaign. Annotation Task: a task contained in an Annotation Campaign; it references an audio file from the campaign's designated Audio Corpus and is assigned to Annotators. It can either be a Single Annotator Task (assigned to one Annotator) or a Double Annotator Task (assigned to two annotators, who will annotate the assigned task in parallel). Textgrid Checking Scheme: a set of rules defining the TextGrid files' structure and the content of the annotations. It is set at the beginning of the Annotation Campaign's creation, and is used to enforce that all TextGrids from the campaign contain the same number of Tiers, with the same names. It can also enforce, for certain chosen tiers, a set of valid annotations. Campaign Manager: users with the rights to create Annotation Campaigns and Annotator user accounts, and to assign Annotation Tasks to Annotators.
Users who are assigned a set of Annotation Tasks. Their job is to complete the annotation of the audio files with the Praat software. If the TextGrid file they submit does not comply with their Annotation Task's TextGrid Checking Scheme, Seshat pinpoint their annotation errors with detailed messages. The annotator can re-submit the concerned file to the platform based on these different feedbacks. Once they they connected to their instance of Seshat, campaign managers can access ongoing annotation campaigns or create new ones. Campaign managers are able to add annotators, assign annotation tasks and track progress. Annotator see a list of assigned tasks. The first step for them is to download the sound file with its corresponding auto-generated template TextGrid. In the current implementation, the annotation work has to be done locally with Praat. An upcoming version will use of web tools like Praat on the web BIBREF10. Once the task is completed, the TextGrid file is to be uploaded to Seshat via the web interface. We used the TextGrid format because of the wide acceptance of the Praat software in the speech science community (e.g., language acquisition research, clinical linguistics, phonetics and phonology). The Textgrid Checking Scheme that encompasses rules on the tier's naming, file structure, and the content of the annotations, is associated with a specific campaign and defined at the creation of the campaign. Seshat back-end will automatically check that the submitted TextGrid file conforms to the Annotation Campaign's Textgrid Checking Scheme. Seshat allows the campaign manager to create two type of tasks: single annotator, and double annotator. Regarding the first task, one audio file is attributed to one annotator. Once the annotation is completed, Sesha automatically checks the conformity of the annotation, and only declares a tasks completed if the conformity checks is passed. Regarding the second task, one audio file is attributed to two annotators. The two annotators annotate the same file independently, then the two versions are merged and the annotators are guided through a compare and review process to agree one final version. We summarise in the Figure FIGREF7 the different steps for the double-annotator task. At each step during merging, the two annotators are provided feedbacks to focus on where are the disagreements. This process also results in the computation of an Inter-annotator agreement for each file. The double annotator task can be used to train new annotators alongside experts. Annotating speech data is a joint task of segmentation and categorisation of audio events. That is why we adopted the $\gamma $ measure BIBREF8 to evaluate the inter- or intra- annotator agreement in each individual tier. Campaign manager can customise the distance used by $\gamma $ by inserting a custom distance along their own parser (See short snippet of code for a parser of French Phonetics with the SAMPA alphabet in Algorithm ). <<</Overview of Seshat>>> <<<Development>>> <<<Engineering choices>>> Our utmost priority when building Seshat was to make it as easy as possible for others to deploy, use, administer and eventually contribute to. To do so, we chose the most common frameworks that are free and open-source, all of which are detailed in the following sections. 
Additionally, to match the current trend in web development, we decided to use the so-called "web-app" architecture for Seshat, i.e., we separated the application into two distinct entities: a front-end, running on the browser, and a back-end, serving data to the front-end and interacting with the database. <<<Back-end Choices>>> The back-end system runs on a server. It holds and updates the campaign databases and runs the annotation checking and inter-rater agreement evaluation services. We chose Python, given its widespread use in the scientific community, with a wide array of speech and linguistic packages. Moreover, its usage on the back-end side will allow the future integration of powerful speech processing tools like Pyannote BIBREF16 to semi-automatize annotations. We thus went for Python3.6 for Seshat's server back-end. We used the Flask-Smorest extension (which is based on Flask) to clearly and thoroughly document our API, which can be exported to the popular OpenAPI 3.0.2 RESTful API description format. The files and server data are stored on a MongoDB database, chosen for its flexible document model and general ease of use. We used the Object-Relational Mapping (ORM) MongoEngine to define our database schemas and interact with that database. MongoDB's GridFS system also allowed us to directly store annotation files (which are usually very light-weight) directly in the database, instead of going through the file system. <<</Back-end Choices>>> <<<Front-end Choices>>> The front-end handles all of the interactions between the users (campaing manager or annotator) with the databses. It is implemented as an App within their browser. We decided to base Seshat's front-end on the Angular Typescript framework. Despite its' steep learning curve, it enforces strict design patterns that guarantee that others can make additions to our code without jeopardising the stability of the App. Angular Typescript has a wide community support in the web development industry and is backed by Google and Microsoft. Moreover, the fact that it is based on TypeScript alleviates the numerous shortcomings of JavaScript, ensuring our implementation's readability and stability. <<</Front-end Choices>>> <<</Engineering choices>>> <<<UX/UI Choices>>> The interface and the features we selected for our implementation are the process of a year-long iterative process involving a team of annotators, two campaign managers and software engineers. We followed some guiding principles from the recent Material design language. Our goal while designing our interface (with the help of a professional designer) was to make it fully usable by non-technical people. We also put some extra care into the annotators' interface to give them a clear sense of what is to be done, how they should follow the annotation protocol, and how to correct potential errors in their annotations (See Figure FIGREF21) The goal was to reduce the number of actions to perform for annotators and enable to focus only on the annotations content. <<</UX/UI Choices>>> <<</Development>>> <<<Using Seshat>>> <<<Installation and Setup>>> Setting up a modern fully-fledged web service is a arduous task, usually requiring a seasoned system administrator as well as sometimes having very precise system requirements. Luckily, the Docker virtualisation platform ensures that anyone with a recent-enough install of that software can set up Seshat in about one command (while still allowing some flexibility via a configuration file). 
For those willing to have a more tightly-controlled installation of Seshat on their system, we also fully specify the manual installation steps in our online documentation). Importing an audio corpus that you are willing to annotate is easy as dropping files into a default `corpora/` folder. It is possible to either drop a folder containing audio files (with no constraints on the folder's structure), or a CSV file listing audio filenames along with their durations (in case the files are sensitive and you're not willing to risk them being hosted on the server). It is then possible to review the automatically imported files via the web interface. <<</Installation and Setup>>> <<<Launching and monitoring an annotation campaign>>> The Campaign manager can easily define and monitor annotation campaign. As shown in Figure FIGREF33, the online form enable to choose corpora, pre-define and pre-configure the annotations scheme (tiers and parsers). There are 2 types of tiers already implemented by default: one with no check at all, and one with pre-defined categories. For the latter, these categories are pre-defined when the campaign is created. Only Campaign managers can access and build new campaigns. If Campaign manager have several campaigns they can easily switch between them via the menu bar or get a full overview with the dashboard (See Figure FIGREF26). The campaign managers can visualise the progress of the assigned tasks at the campaign level or more precisely at the task level. They can retrieve all the intermediate files that have been created for each task. For instance, the campaign manager can examine qualitatively and quantitatively what are the annotation differences before the merge phases of the double annotator task. <<</Launching and monitoring an annotation campaign>>> <<<Scripting API>>> For those willing to interact with Seshat using code, it is possible to interact with Seshat using either its RESTful API or its command-line interface (CLI). The API endpoints that can be called are all listed in a simple interface, and can be made from any programming language able to make HTTP requests. The CLI interface can be used via your terminal, and therefore can be interacted with using Bash scripts. A typical usage of these features would be to assign annotation tasks from a large speech corpus (spoken by several speakers) to a large pool of annotators, all the while making sure each annotator has a similar number of tasks, with each speaker being evenly distributed among annotators as well. This would be tedious to do manually via the user interface, but easy to program in any scripting language. <<</Scripting API>>> <<<Annotation Parser Customisation>>> We aimed at a reasonable trade-off between simplicity and flexibility for the TextGrid annotations checking component. However, we understand (from our own experience in particular) that sometimes annotations can follow a very specific and complex standard (for instance, parsing SAMPA phonemes strings). To allow users to define their own annotation standards, we added the possibility for users to define an annotation parser, via a simple package-based extension system (taking inspiration from pyannote's extension system). Anyone willing to create a new annotation parser has to be able to program in Python and have a minimal understanding of its packaging system. 
As presented in our example French SAMPA Parser (Algorithm ), implementing a custom annotation parsers only requires the overload of two methods from Seshat's BaseCustomParser class: check-annotation: takes an annotation string as input and raises an error if and only if the annotation is deemed to be invalid. It doesn't return anything. distance: takes two annotations as input and should return a float corresponding to the distance between these two annotations. <<</Annotation Parser Customisation>>> <<<Inter-rater agreement: the @!START@$\gamma $@!END@ measure>>> It is necessary have a measure of confidence to obtain high-quality datasets and therefore to draw valid conclusions from annotations. Annotations tasks of audio and speech data usually have some specificities. The items to annotate have to be both segmented in time and categorised. The segments can be hierarchically defined or overlapping. In addition, the audio stream may require only sparse annotations (especially in-the-wild recordings which contain a lot of non-speech segments). To evaluate speech annotations, the measure needs to take these characteristics into account. That is why we decided to re-implement and compute the $\gamma $ measure (see mathet2015unified for its design and the advantages of this measure over previous agreement measures). First, the $\gamma $ software aligns (tier-wise) the annotations of the different annotators. To align the two sets of annotations the $\gamma $ measure the distance between all the individual units. The difference of position of two annotated units $u$ and $v$ is measured with the positional distance: If the tiers are categorical, the distance for the content of the annotated units $u$ and $v$ is defined as: This distance can be over-written by the custom parser as mentioned above. These two distance are summed with equal weights to obtain the distance between every annotated units from 2 annotators. Then, it is possible to obtain the disorder $\delta (a)$ of a specific alignment $a$ by summing the distance of all the aligned units in $a$. All possible alignments $a$ are considered and the one that minimises the disorder $\delta (a)$ is kept. To get the value of $\gamma $, the disorder is chance-corrected to obtain an expected disorder. It is obtained by re-sampling randomly the annotations of the annotators. This means that real annotations are drawn from the annotators, and one position in the audio is randomly chosen. The annotation is split at this random position and the two parts are permuted. It is then possible to obtain an approximation of the expected disorder $\delta _e$. The final agreement measure is defined as: This $\gamma $ measure is automatically computed by the back-end server for the double-annotator tasks. The Campaign manager can retrieve these measures in Seshat by downloading a simple CSV file. <<</Inter-rater agreement: the @!START@$\gamma $@!END@ measure>>> <<</Using Seshat>>> <<<Use cases>>> We present two use cases on which Seshat was developped: clinical interviews, and daylong child-centered recordings. <<<Clinical interviews>>> Seshat was intially developped to study the impact of Huntington's Disease BIBREF17 on speech and language production. One hundred and fifty two interviews between a neuropsychologist and a patient with the Huntington's Disease (HD) were recorded between June 2018 and November 2019. 
The campaign manager created a campaign with multiple tiers to annotate the turn takings and the speech/non speech boundaries of the utterances of the patient. For both tasks, the annotations did not need to cover completely the audio (sparsity property mentioned above). For the Turn-taking annotations, there are 3 pre-defined tiers, each one with a single class ('Patient', 'Non-Patient', and 'Noise'), which results in possible overlap between these classes. For the Utterance annotations, there is only one pre-defined class ('Utterance'). To this date, a total of 67 files have been fully annotated with the help of Seshat by a cohort of 18 speech pathologist students (see Figure FIGREF33). Among these, 16 have been done by 2 different annotators independently with the Double-annotator task. The results are summarised in Table TABREF34. Even though there are more categories for Turn-Takings than Utterance (gut2004measuring reported that the more categories the more the task is difficult in speech annotations), the mean $\gamma $ for the Turn-Takings $\gamma = 0.64$ is slightly higher than the one for Utterance $\gamma = 0.61$. And the range of values for the Turn-Takings is smaller than the Utterance. Indeed, the speech pathologists reported the difficulty to annotate the boundary of utterances in spontaneous speech, with several ambiguous cases due to pauses. These results will help us to redefine the protocol and be more precise on the given instructions. <<</Clinical interviews>>> <<<In-the-wild child-centered recordings>>> The Seshat software is also currently used to annotate audio files in a study of day-long audio-recordings captured by two devices (LENA BIBREF18, and a BabyCloud baby-logger device) worn by young children growing up in remote Papua New Guinea. The project aims at establishing language input and outcomes in this seldom-studied population. To establish reliability levels, 20 1-min files were double-annotated by 2 speech pathology students. Among the tasks given to the annotators there was: (1) locating the portions of Speech (Speech activity), (2) locating the speech produced by an adult that is directed to a child or not (Adult-Directed Speech versus Child-Directed Speech). As in the previous example, the annotations do not need to cover the full audio file. The Speech Activity task has only 1 class ('Speech') and the Addressee task has 2 classes ('ADS', 'CDS'). These recordings have been done in naturalistic and noisy conditions; moreover, the annotators do not understand the language. Probably as a result of these challenges, agreement between annotators is lower than in the Clinical interviews use case. This information is nonetheless valuable to the researchers, as it can help them appropriately lower their confidence in the ensuing speech quantity estimates. <<</In-the-wild child-centered recordings>>> <<</Use cases>>> <<<Conclusion and Future work>>> Seshat is a new tool for the management of audio annotation efforts. Seshat enables users to define their own campaign of annotations. Based on this configuration, Seshat automatically enforces the format of the annotations returned by the annotators. Besides, we also add the capability to finely tailor the parsing of the annotations. Finally, Seshat provides automatic routines to compute the inter-rate agreements that are specifically designed for audio annotations. Seshat lays some foundations for more advanced features, either for the interface or the annotation capabilities. 
In future work, we plan to implement an automatic task assignments and an integration of a diarization processing step to reduce human effort. Another planned feature is to add possibility for the campaign manager to design more complex annotation workflows such as, for instance, dependencies between tiers or more intermediate steps of annotations. <<</Conclusion and Future work>>> <<</Title>>>
{ "references": [ "Yes" ], "type": "boolean" }
2004.01980
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: Which state-of-the-art model is surpassed by 9.68% attraction score? Context: <<<Title>>> Hooks in the Headline: Learning to Generate Headlines with Controlled Styles <<<Abstract>>> Current summarization systems only produce plain, factual headlines, but do not meet the practical needs of creating memorable titles to increase exposure. We propose a new task, Stylistic Headline Generation (SHG), to enrich the headlines with three style options (humor, romance and clickbait), in order to attract more readers. With no style-specific article-headline pair (only a standard headline summarization dataset and mono-style corpora), our method TitleStylist generates style-specific headlines by combining the summarization and reconstruction tasks into a multitasking framework. We also introduced a novel parameter sharing scheme to further disentangle the style from the text. Through both automatic and human evaluation, we demonstrate that TitleStylist can generate relevant, fluent headlines with three target styles: humor, romance, and clickbait. The attraction score of our model generated headlines surpasses that of the state-of-the-art summarization model by 9.68%, and even outperforms human-written references. <<</Abstract>>> <<<Introduction>>> Every good article needs a good title, which should not only be able to condense the core meaning of the text, but also sound appealing to the readers for more exposure and memorableness. However, currently even the best Headline Generation (HG) system can only fulfill the above requirement yet performs poorly on the latter. For example, in Figure FIGREF2, the plain headline by an HG model “Summ: Leopard Frog Found in New York City” is less eye-catching than the style-carrying ones such as “What's That Chuckle You Hear? It May Be the New Frog From NYC.” To bridge the gap between the practical needs for attractive headlines and the plain HG by the current summarization systems, we propose a new task of Stylistic Headline Generation (SHG). Given an article, it aims to generate a headline with a target style such as humorous, romantic, and click-baity. It has broad applications in reader-adapted title generation, slogan suggestion, auto-fill for online post headlines, and many others. SHG is a highly skilled creative process, and usually only possessed by expert writers. One of the most famous headlines in American publications, “Sticks Nix Hick Pix,” could be such an example. In contrast, the current best summarization systems are at most comparable to novice writers who provide a plain descriptive representation of the text body as the title BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. These systems usually use a language generation model that mixes styles with other linguistic patterns and inherently lacks a mechanism to control the style explicitly. More fundamentally, the training data comprise of a mixture of styles (e.g., the Gigaword dataset BIBREF5), obstructing the models from learning a distinct style. In this paper, we propose the new task SHG, to emphasize the explicit control of style in headline generation. We present a novel headline generation model, TitleStylist, to produce enticing titles with target styles including humorous, romantic, and click-baity. Our model leverages a multitasking framework to train both a summarization model on headline-article pairs, and a Denoising Autoencoder (DAE) on a style corpus. 
In particular, based on the transformer architecture BIBREF6, we use the style-dependent layer normalization and the style-guided encoder-attention to disentangle the language style factors from the text. This design enables us to use the shared content to generate headlines that are more relevant to the articles, as well as to control the style by plugging in a set of style-specific parameters. We validate the model on three tasks: humorous, romantic, and click-baity headline generation. Both automatic and human evaluations show that TitleStylist can generate headlines with the desired styles that appeal more to human readers, as in Figure FIGREF2. The main contributions of our paper are listed below: To the best of our knowledge, it is the first research on the generation of attractive news headlines with styles without any supervised style-specific article-headline paired data. Through both automatic and human evaluation, we demonstrated that our proposed TitleStylist can generate relevant, fluent headlines with three styles (humor, romance, and clickbait), and they are even more attractive than human-written ones. Our model can flexibly incorporate multiple styles, thus efficiently and automatically providing humans with various creative headline options for references and inspiring them to think out of the box. <<</Introduction>>> <<<Related Work>>> Our work is related to summarization and text style transfer. <<<Headline Generation as Summarization>>> Headline generation is a very popular area of research. Traditional headline generation methods mostly focus on the extractive strategies using linguistic features and handcrafted rules BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. To enrich the diversity of the extractive summarization, abstractive models were then proposed. With the help of neural networks, BIBREF14 proposed attention-based summarization (ABS) to make BIBREF15's framework of summarization more powerful. Many recent works extended ABS by utilizing additional features BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22. Other variants of the standard headline generation setting include headlines for community question answering BIBREF23, multiple headline generation BIBREF24, user-specific generation using user embeddings in recommendation systems BIBREF25, bilingual headline generation BIBREF26 and question-style headline generation BIBREF27. Only a few works have recently started to focus on increasing the attractiveness of generated headlines BIBREF28, BIBREF29. BIBREF28 focuses on controlling several features of the summary text such as text length, and the style of two different news outlets, CNN and DailyMail. These controls serve as a way to boost the model performance, and the CNN- and DailyMail-style control shows a negligible improvement. BIBREF29 utilized reinforcement learning to encourage the headline generation system to generate more sensational headlines via using the readers' comment rate as the reward, which however cannot explicitly control or manipulate the styles of headlines. BIBREF30 proposed a style transfer approach to transfer a non-clickbait headline into a clickbait one. This method requires paired news articles-headlines data for the target style; however, for many styles such as humor and romance, there are no available headlines. Our model does not have this limitation, thus enabling transferring to many more styles. 
<<</Headline Generation as Summarization>>> <<<Text Style Transfer>>> Our work is also related to text style transfer, which aims to change the style attribute of the text while preserving its content. First proposed by BIBREF31, it has achieved great progress in recent years BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, BIBREF38. However, all these methods demand a text corpus for the target style; however, in our case, it is expensive and technically challenging to collect news headlines with humor and romance styles, which makes this category of methods not applicable to our problem. <<</Text Style Transfer>>> <<</Related Work>>> <<<Methods>>> <<<Problem Formulation>>> The model is trained on a source dataset $S$ and target dataset $T$. The source dataset $S=\lbrace (\mathbf {a^{(i)}},\mathbf {h^{(i)}})\rbrace _{i=1}^N$ consists of pairs of a news article $\mathbf {a}$ and its plain headline $\mathbf {h}$. We assume that the source corpus has a distribution $P(A, H)$, where $A=\lbrace \mathbf {a^{(i)}}\rbrace _{i=1}^N$, and $H=\lbrace \mathbf {h^{(i)}}\rbrace _{i=1}^N$. The target corpus $T=\lbrace \mathbf {t^{(i)}}\rbrace _{i=1}^{M}$ comprises of sentences $\mathbf {t}$ written in a specific style (e.g., humor). We assume that it conforms to the distribution $P(T)$. Note that the target corpus $T$ only contains style-carrying sentences, not necessarily headlines — it can be just book text. Also no sentence $\mathbf {t}$ is paired with a news article. Overall, our task is to learn the conditional distribution $P(T|A)$ using only $S$ and $T$. This task is fully unsupervised because there is no sample from the joint distribution $P(A, T)$. <<</Problem Formulation>>> <<<Seq2Seq Model Architecture>>> For summarization, we adopt a sequence-to-sequence (Seq2Seq) model based on the Transformer architecture BIBREF6. As in Figure FIGREF8, it consists of a 6-layer encoder $E(\mathbf {\cdot }; \mathbf {\theta _E})$ and a 6-layer decoder $G(\mathbf {\cdot }; \mathbf {\theta _G})$ with a hidden size of 1024 and a feed-forward filter size of 4096. For better generation quality, we initialize with the MASS model BIBREF3. MASS is pretrained by masking a sentence fragment in the encoder, and then predicting it in the decoder on large-scale English monolingual data. This pretraining is adopted in the current state-of-the-art systems across various summarization benchmark tasks including HG. <<</Seq2Seq Model Architecture>>> <<<Multitask Training Scheme>>> To disentangle the latent style from the text, we adopt a multitask learning framework BIBREF39, training on summarization and DAE simultaneously (as shown in Figure FIGREF10). <<<Supervised Seq2Seq Training for @!START@$E_S$@!END@ and @!START@$G_S$@!END@>>> With the source domain dataset $S$, based on the encoder-decoder architecture, we can learn the conditional distribution $P(H|A)$ by training $\mathbf {z}_S=E_S(A)$ and $H_S=G_S(\mathbf {z_S})$ to solve the supervised Seq2Seq learning task, where $\mathbf {z_S}$ is the learned latent representation in the source domain. The loss function of this task is where $\mathbf {\theta _{E_S}}$ and $\mathbf {\theta _{G_S}}$ are the set of model parameters of the encoder and decoder in the source domain and $p(\mathbf {h}|\mathbf {a})$ denotes the overall probability of generating an output sequence $\mathbf {h}$ given the input article $\mathbf {a}$, which can be further expanded as follows: where $L$ is the sequence length. 
<<</Supervised Seq2Seq Training for @!START@$E_S$@!END@ and @!START@$G_S$@!END@>>> <<<DAE Training for @!START@$\mathbf {\theta _{E_T}}$@!END@ and @!START@$\mathbf {\theta _{G_T}}$@!END@>>> For the target style corpus $T$, since we only have the sentence $\mathbf {t}$ without paired news articles, we train $\mathbf {z_T}=E_T(\mathbf {\tilde{t}})$ and $\mathbf {t}=G_T(\mathbf {z_T})$ by solving an unsupervised reconstruction learning task, where $\mathbf {z_T}$ is the learned latent representation in the target domain, and $\mathbf {\tilde{t}}$ is the corrupted version of $\mathbf {t}$ by randomly deleting or blanking some words and shuffling the word orders. To train the model, we minimize the reconstruction error $\mathcal {L}_T$: where $\mathbf {\theta _{E_T}}$ and $\mathbf {\theta _{G_T}}$ are the set of model parameters for the encoder and generator in the target domain. We train the whole model by jointly minimizing the supervised Seq2Seq training loss $\mathcal {L}_S$ and the unsupervised denoised auto-encoding loss $\mathcal {L}_T$ via multitask learning, so the total loss becomes where $\lambda $ is a hyper-parameter. <<</DAE Training for @!START@$\mathbf {\theta _{E_T}}$@!END@ and @!START@$\mathbf {\theta _{G_T}}$@!END@>>> <<</Multitask Training Scheme>>> <<<Parameter-Sharing Scheme>>> More constraints are necessary in the multitask training process. We aim to infer the conditional distribution as $ P(T|A)=G_T(E_S(A))$. However, without samples from $P(A, T)$, this is a challenging or even impossible task if $E_S$ and $E_T$, or $G_S$ and $G_T$ are completely independent of each other. Hence, we need to add some constraints to the network by relating $E_S$ and $E_T$, and $G_S$ and $G_T$. The simplest design is to share all parameters between $E_S$ and $E_T$, and apply the same strategy to $G_S$ and $G_T$. The intuition behind this design is that by exposing the model to both summarization task and style-carrying text reconstruction task, the model would acquire some sense of the target style while summarizing the article. However, to encourage the model to better disentangle the content and style of text and more explicitly learn the style contained in the target corpus $T$, we share all parameters of the encoder between two domains, i.e., between $E_S$ and $E_T$, whereas we divide the parameters of the decoder into two types: style-independent parameters $\mathbf {\theta _{\mathrm {ind}}}$ and style-dependent parameters $\mathbf {\theta _{\mathrm {dep}}}$. This means that only the style-independent parameters are shared between $G_S$ and $G_T$ while the style-dependent parameters are not. More specifically, the parameters of the layer normalization and encoder attention modules are made style-dependent as detailed below. <<<Type 1. Style Layer Normalization>>> Inspired by previous work on image style transfer BIBREF40, we make the scaling and shifting parameters for layer normalization in the transformer architecture un-shared for each style. This style layer normalization approach aims to transform a layer’s activation $\mathbf {x}$ into a normalized activation $\mathbf {z}$ specific to the style $s$: where $\mu $ and $\sigma $ are the mean and standard deviation of the batch of $\mathbf {x}$, and $\gamma _s$ and $\beta _s$ are style-specific parameters learned from data. Specifically, for the transformer decoder architecture, we use a style-specific self-attention layer normalization and final layer normalization for the source and target domains on all six decoder layers. 
<<</Type 1. Style Layer Normalization>>> <<<Type 2. Style-Guided Encoder Attention>>> Our model architecture contains the attention mechanism, where the decoder infers the probability of the next word not only conditioned on the previous words but also on the encoded input hidden states. The attention patterns should be different for the summarization and the reconstruction tasks due to their different inherent nature. We insert this thinking into the model by introducing the style-guided encoder attention into the multi-head attention module, which is defined as follows: where $\mathbf {\mathrm {query}}$, $\mathbf {\mathrm {key}}$, and $\mathbf {\mathrm {value}}$ denote the triple of inputs into the multi-head attention module; $\mathbf {W_q^s}$, $\mathbf {W_k}$, and $\mathbf {W_v}$ denote the scaled dot-product matrix for affine transformation; $d_{\mathrm {model}}$ is the dimension of the hidden states. We specialize the dot-product matrix $\mathbf {W_q^s}$ of the query for different styles, so that $\mathbf {Q}$ can be different to induce diverse attention patterns. <<</Type 2. Style-Guided Encoder Attention>>> <<</Parameter-Sharing Scheme>>> <<</Methods>>> <<<Experiments>>> <<<Datasets>>> We compile a rich source dataset by combining the New York Times (NYT) and CNN, as well as three target style corpora on humorous, romantic, and click-baity text. The average sentence length in the NYT, CNN, Humor, Romance, and Clickbait datasets are 8.8, 9.2, 12.6, 11.6 and 8.7 words, respectively. <<<Source Dataset>>> The source dataset contains news articles paired with corresponding headlines. To enrich the training corpus, we combine two datasets: the New York Times (56K) and CNN (90K). After combining these two datasets, we randomly selected 3,000 pairs as the validation set and another 3,000 pairs as the test set. We first extracted the archival abstracts and headlines from the New York Times (NYT) corpus BIBREF41 and treat the abstracts as the news articles. Following the standard pre-processing procedures BIBREF42, we filtered out advertisement-related articles (as they are very different from news reports), resulting in 56,899 news abstracts-headlines pairs. We then add into our source set the CNN summarization dataset, which is widely used for training abstractive summarization models BIBREF43. We use the short summaries in the original dataset as the news abstracts and automatically parsed the headlines for each news from the dumped news web pages, and in total collected 90,236 news abstract-headline pairs. <<</Source Dataset>>> <<<Three Target Style Corpora>>> <<<Humor and Romance>>> For the target style datasets, we follow BIBREF44 to use humor and romance novel collections in BookCorpus BIBREF45 as the Humor and Romance datasets. We split the documents into sentences, tokenized the text, and collected 500K sentences as our datasets. <<</Humor and Romance>>> <<<Clickbait>>> We also tried to learn the writing style from the click-baity headlines since they have shown superior attraction to readers. Thus we used The Examiner - SpamClickBait News dataset, denoted as the Clickbait dataset. We collected 500K headlines for our use. Some examples from each style corpus are listed in Table TABREF32. <<</Clickbait>>> <<</Three Target Style Corpora>>> <<</Datasets>>> <<<Baselines>>> We compared the proposed TitleStylist against the following five strong baseline approaches. 
<<<Neural Headline Generation (NHG)>>> We train the state-of-the-art summarization model, MASS BIBREF3, on our collected news abstracts-headlines paired data. <<</Neural Headline Generation (NHG)>>> <<<Gigaword-MASS>>> We test an off-the-shelf headline generation model, MASS from BIBREF3, which is already trained on Gigaword, a large-scale headline generation dataset with around 4 million articles. <<</Gigaword-MASS>>> <<<Neural Story Teller (NST)>>> It breaks down the task into two steps, which first generates headlines from the aforementioned NHG model, then applies style shift techniques to generate style-specific headlines BIBREF46. In brief, this method uses the Skip-Thought model to encode a sentence into a representation vector and then manipulates its style by a linear transformation. Afterward, this transformed representation vector is used to initialize a language model pretrained on a style-specific corpus so that a stylistic headline can be generated. More details of this method can refer to the official website. <<</Neural Story Teller (NST)>>> <<<Fine-Tuned>>> We first train the NHG model as mentioned above, then further fine-tuned it on the target style corpus via DAE training. <<</Fine-Tuned>>> <<<Multitask>>> We share all parameters between $E_S$ and $E_T$, and between $G_S$ and $G_T$, and trained the model on both the summarization and DAE tasks. The model architecture is the same as NHG. <<</Multitask>>> <<</Baselines>>> <<<Evaluation Metrics>>> To evaluate the performance of the proposed TitleStylist in generating attractive headlines with styles, we propose a comprehensive twofold strategy of both automatic evaluation and human evaluation. <<<Setup of Human Evaluation>>> We randomly sampled 50 news abstracts from the test set and asked three native-speaker annotators for evaluation to score the generated headlines. Specifically, we conduct two tasks to evaluate on four criteria: (1) relevance, (2) attractiveness, (3) language fluency, and (4) style strength. For the first task, the human raters are asked to evaluate these outputs on the first three aspects, relevance, attractiveness, and language fluency on a Likert scale from 1 to 10 (integer values). For relevance, human annotators are asked to evaluate how semantically relevant the headline is to the news body. For attractiveness, annotators are asked how attractive the headlines are. For fluency, we ask the annotators to evaluate how fluent and readable the text is. After the collection of human evaluation results, we averaged the scores as the final score. In addition, we have another independent human evaluation task about the style strength – we present the generated headlines from TitleStylist and baselines to the human judges and let them choose the one that most conforms to the target style such as humor. Then we define the style strength score as the proportion of choices. <<</Setup of Human Evaluation>>> <<<Setup of Automatic Evaluation>>> Apart from the comprehensive human evaluation, we use automatic evaluation to measure the generation quality through two conventional aspects: summarization quality and language fluency. Note that the purpose of this two-way automatic evaluation is to confirm that the performance of our model is in an acceptable range. Good automatic evaluation performances are necessary proofs to compliment human evaluations on the model effectiveness. 
<<<Summarization Quality>>> We use the standard automatic evaluation metrics for summarization with the original headlines as the reference: BLEU BIBREF47, METEOR BIBREF48, ROUGE BIBREF49 and CIDEr BIBREF50. For ROUGE, we used the Files2ROUGE toolkit, and for other metrics, we used the pycocoeval toolkit. <<</Summarization Quality>>> <<<Language Fluency>>> We fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs. <<</Language Fluency>>> <<</Setup of Automatic Evaluation>>> <<</Evaluation Metrics>>> <<<Experimental Details>>> We used the fairseq code base BIBREF52. During training, we use Adam optimizer with an initial learning rate of $5\times 10^{-4}$, and the batch size is set as 3072 tokens for each GPU with the parameters update frequency set as 4. For the random corruption for DAE training, we follow the standard practice to randomly delete or blank the word with a uniform probability of $0.2$, and randomly shuffled the word order within 5 tokens. All datasets are lower-cased. $\lambda $ is set as 0.5 in experiments. For each iteration of training, we randomly draw a batch of data either from the source dataset or from the target style corpus, and the sampling strategy follows the uniform distribution with the probability being equal to $\lambda $. <<</Experimental Details>>> <<</Experiments>>> <<<Results and Discussion>>> <<<Human Evaluation Results>>> The human evaluation is to have a comprehensive measurement of the performances. We conduct experiments on four criteria, relevance, attraction, fluency, and style strength. We summarize the human evaluation results on the first three criteria in Table TABREF51, and the last criteria in Table TABREF57. Note that through automatic evaluation, the baselines NST, Fine-tuned, and Gigaword-MASS perform poorer than other methods (in Section SECREF58), thereby we removed them in human evaluation to save unnecessary work for human raters. <<<Relevance>>> We first look at the relevance scores in Table TABREF51. It is interesting but not surprising that the pure summarization model NHG achieves the highest relevance score. The outputs from NHG are usually like an organic reorganization of several keywords in the source context (as shown in Table TABREF52), thus appearing most relevant. It is noteworthy that the generated headlines of our TitleStylist for all three styles are close to the original human-written headlines in terms of relevance, validating that our generation results are qualified in this aspect. Another finding is that more attractive or more stylistic headlines would lose some relevance since they need to use more words outside the news body for improved creativity. <<</Relevance>>> <<<Attraction>>> In terms of attraction scores in Table TABREF51, we have three findings: (1) The human-written headlines are more attractive than those from NHG, which agrees with our observation in Section SECREF1. (2) Our TitleStylist can generate more attractive headlines over the NHG and Multitask baselines for all three styles, demonstrating that adapting the model to these styles could improve the attraction and specialization of some parameters in the model for different styles can further enhance the attraction. (3) Adapting the model to the “Clickbait” style could create the most attractive headlines, even out-weighting the original ones, which agrees with the fact that click-baity headlines are better at drawing readers' attention. 
To be noted, although we learned the “Clickbait” style into our summarization system, we still made sure that we are generating relevant headlines instead of too exaggerated ones, which can be verified by our relevance scores. <<</Attraction>>> <<<Fluency>>> The human-annotated fluency scores in Table TABREF51 verified that our TitleStylist generated headlines are comparable or superior to the human-written headlines in terms of readability. <<</Fluency>>> <<<Style Strength>>> We also validated that our TitleStylist can carry more styles compared with the Multitask and NHG baselines by summarizing the percentage of choices by humans for the most humorous or romantic headlines in Table TABREF57. <<</Style Strength>>> <<</Human Evaluation Results>>> <<<Automatic Evaluation Results>>> Apart from the human evaluation of the overall generation quality on four criteria, we also conducted a conventional automatic assessment to gauge only the summarization quality. This evaluation does not take other measures such as the style strength into consideration, but it serves as important complimentary proof to ensure that the model has an acceptable level of summarization ability. Table TABREF59 summarizes the automatic evaluation results of our proposed TitleStylist model and all baselines. We use the summarization-related evaluation metrics, i.e., BLEU, ROUGE, CIDEr, and METEOR, to measure how relevant the generated headlines are to the news articles, to some extent, by comparing them to the original human-written headlines. In Table TABREF59, the first row “NHG” shows the performance of the current state-of-the-art summarization model on our data, and Table TABREF52 provides two examples of its generation output. Our ultimate goal is to generate more attractive headlines than these while maintaining relevance to the news body. From Table TABREF59, the baseline Gigaword-MASS scored worse than NHG, revealing that directly applying an off-the-shelf headline generation model to new in-domain data is not feasible, although this model has been trained on more than 20 times larger dataset. Both NST and Fine-tuned baselines present very poor summarization performance, and the reason could be that both of them cast the problem into two steps: summarization and style transfer, and the latter step is absent of the summarization task, which prevents the model from maintaining its summarization capability. In contrast, the Multitask baseline involves the summarization and style transfer (via reconstruction training) processes at the same time and shows superior summarization performance even compared with NHG. This reveals that the unsupervised reconstruction task can indeed help improve the supervised summarization task. More importantly, we use two different types of corpora for the reconstruction task: one consists of headlines that are similar to the news data for the summarization task, and the other consists of text from novels that are entirely different from the news data. However, unsupervised reconstruction training on both types of data can contribute to the summarization task, which throws light on the potential future work in summarization by incorporating unsupervised learning as augmentation. We find that in Table TABREF59 TitleStylist-F achieves the best summarization performance. This implicates that, compared with the Multitask baseline where the two tasks share all parameters, specialization of layer normalization and encoder-attention parameters can make $G_S$ focus more on summarization. 
It is noteworthy that the summarization scores for TitleStylist are lower than TitleStylist-F but still comparable to NHG. This agrees with the fact that the $G_T$ branch more focuses on bringing in stylistic linguistic patterns into the generated summaries, thus the outputs would deviate from the pure summarization to some degree. However, the relevance degree of them remains close to the baseline NHG, which is the starting point we want to improve on. Later in the next section, we will further validate that these headlines are faithful to the new article through human evaluation. We also reported the perplexity (PPL) of the generated headlines to evaluate the language fluency, as shown in Table TABREF59. All outputs from baselines NHG and Multitask and our proposed TitleStylist show similar PPL compared with the test set (used in the fine-tuning stage) PPL 42.5, indicating that they are all fluent expressions for news headlines. <<</Automatic Evaluation Results>>> <<<Extension to Multi-Style>>> We progressively expand TitleStylist to include all three target styles (humor, romance, and clickbait) to demonstrate the flexibility of our model. That is, we simultaneously trained the summarization task on the headlines data and the DAE task on the three target style corpora. And we made the layer normalization and encoder-attention parameters specialized for these four styles (fact, humor, romance, and clickbait) and shared the other parameters. We compared this multi-style version, TitleStylist-Versatile, with the previously presented single-style counterpart, as shown in Table TABREF61. From this table, we see that the BLEU and ROUGE-L scores of TitleStylist-Versatile are comparable to TitleStylist for all three styles. Besides, we conducted another human study to determine the better headline between the two models in terms of attraction, and we allow human annotators to choose both options if they deem them as equivalent. The result is presented in the last column of Table TABREF61, which shows that the attraction of TitleStylist-Versatile outputs is competitive to TitleStylist. TitleStylist-Versatile thus generates multiple headlines in different styles altogether, which is a novel and efficient feature. <<</Extension to Multi-Style>>> <<</Results and Discussion>>> <<<Conclusion>>> We have proposed a new task of Stylistic Headline Generation (SHG) to emphasize explicit control of styles in headline generation for improved attraction. To this end, we presented a multitask framework to induce styles into summarization, and proposed the parameters sharing scheme to enhance both summarization and stylization capabilities. Through experiments, we validated our proposed TitleStylist can generate more attractive headlines than state-of-the-art HG models. <<</Conclusion>>> <<</Title>>>
{ "references": [ "pure summarization model NHG" ], "type": "extractive" }
2004.01980
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: How is attraction score measured? Context: <<<Title>>> Hooks in the Headline: Learning to Generate Headlines with Controlled Styles <<<Abstract>>> Current summarization systems only produce plain, factual headlines, but do not meet the practical needs of creating memorable titles to increase exposure. We propose a new task, Stylistic Headline Generation (SHG), to enrich the headlines with three style options (humor, romance and clickbait), in order to attract more readers. With no style-specific article-headline pair (only a standard headline summarization dataset and mono-style corpora), our method TitleStylist generates style-specific headlines by combining the summarization and reconstruction tasks into a multitasking framework. We also introduced a novel parameter sharing scheme to further disentangle the style from the text. Through both automatic and human evaluation, we demonstrate that TitleStylist can generate relevant, fluent headlines with three target styles: humor, romance, and clickbait. The attraction score of our model generated headlines surpasses that of the state-of-the-art summarization model by 9.68%, and even outperforms human-written references. <<</Abstract>>> <<<Introduction>>> Every good article needs a good title, which should not only be able to condense the core meaning of the text, but also sound appealing to the readers for more exposure and memorableness. However, currently even the best Headline Generation (HG) system can only fulfill the above requirement yet performs poorly on the latter. For example, in Figure FIGREF2, the plain headline by an HG model “Summ: Leopard Frog Found in New York City” is less eye-catching than the style-carrying ones such as “What's That Chuckle You Hear? It May Be the New Frog From NYC.” To bridge the gap between the practical needs for attractive headlines and the plain HG by the current summarization systems, we propose a new task of Stylistic Headline Generation (SHG). Given an article, it aims to generate a headline with a target style such as humorous, romantic, and click-baity. It has broad applications in reader-adapted title generation, slogan suggestion, auto-fill for online post headlines, and many others. SHG is a highly skilled creative process, and usually only possessed by expert writers. One of the most famous headlines in American publications, “Sticks Nix Hick Pix,” could be such an example. In contrast, the current best summarization systems are at most comparable to novice writers who provide a plain descriptive representation of the text body as the title BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. These systems usually use a language generation model that mixes styles with other linguistic patterns and inherently lacks a mechanism to control the style explicitly. More fundamentally, the training data comprise of a mixture of styles (e.g., the Gigaword dataset BIBREF5), obstructing the models from learning a distinct style. In this paper, we propose the new task SHG, to emphasize the explicit control of style in headline generation. We present a novel headline generation model, TitleStylist, to produce enticing titles with target styles including humorous, romantic, and click-baity. Our model leverages a multitasking framework to train both a summarization model on headline-article pairs, and a Denoising Autoencoder (DAE) on a style corpus. 
In particular, based on the transformer architecture BIBREF6, we use the style-dependent layer normalization and the style-guided encoder-attention to disentangle the language style factors from the text. This design enables us to use the shared content to generate headlines that are more relevant to the articles, as well as to control the style by plugging in a set of style-specific parameters. We validate the model on three tasks: humorous, romantic, and click-baity headline generation. Both automatic and human evaluations show that TitleStylist can generate headlines with the desired styles that appeal more to human readers, as in Figure FIGREF2. The main contributions of our paper are listed below: To the best of our knowledge, it is the first research on the generation of attractive news headlines with styles without any supervised style-specific article-headline paired data. Through both automatic and human evaluation, we demonstrated that our proposed TitleStylist can generate relevant, fluent headlines with three styles (humor, romance, and clickbait), and they are even more attractive than human-written ones. Our model can flexibly incorporate multiple styles, thus efficiently and automatically providing humans with various creative headline options for references and inspiring them to think out of the box. <<</Introduction>>> <<<Related Work>>> Our work is related to summarization and text style transfer. <<<Headline Generation as Summarization>>> Headline generation is a very popular area of research. Traditional headline generation methods mostly focus on the extractive strategies using linguistic features and handcrafted rules BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. To enrich the diversity of the extractive summarization, abstractive models were then proposed. With the help of neural networks, BIBREF14 proposed attention-based summarization (ABS) to make BIBREF15's framework of summarization more powerful. Many recent works extended ABS by utilizing additional features BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22. Other variants of the standard headline generation setting include headlines for community question answering BIBREF23, multiple headline generation BIBREF24, user-specific generation using user embeddings in recommendation systems BIBREF25, bilingual headline generation BIBREF26 and question-style headline generation BIBREF27. Only a few works have recently started to focus on increasing the attractiveness of generated headlines BIBREF28, BIBREF29. BIBREF28 focuses on controlling several features of the summary text such as text length, and the style of two different news outlets, CNN and DailyMail. These controls serve as a way to boost the model performance, and the CNN- and DailyMail-style control shows a negligible improvement. BIBREF29 utilized reinforcement learning to encourage the headline generation system to generate more sensational headlines via using the readers' comment rate as the reward, which however cannot explicitly control or manipulate the styles of headlines. BIBREF30 proposed a style transfer approach to transfer a non-clickbait headline into a clickbait one. This method requires paired news articles-headlines data for the target style; however, for many styles such as humor and romance, there are no available headlines. Our model does not have this limitation, thus enabling transferring to many more styles. 
<<</Headline Generation as Summarization>>> <<<Text Style Transfer>>> Our work is also related to text style transfer, which aims to change the style attribute of the text while preserving its content. First proposed by BIBREF31, it has achieved great progress in recent years BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, BIBREF38. However, all these methods demand a text corpus for the target style; however, in our case, it is expensive and technically challenging to collect news headlines with humor and romance styles, which makes this category of methods not applicable to our problem. <<</Text Style Transfer>>> <<</Related Work>>> <<<Methods>>> <<<Problem Formulation>>> The model is trained on a source dataset $S$ and target dataset $T$. The source dataset $S=\lbrace (\mathbf {a^{(i)}},\mathbf {h^{(i)}})\rbrace _{i=1}^N$ consists of pairs of a news article $\mathbf {a}$ and its plain headline $\mathbf {h}$. We assume that the source corpus has a distribution $P(A, H)$, where $A=\lbrace \mathbf {a^{(i)}}\rbrace _{i=1}^N$, and $H=\lbrace \mathbf {h^{(i)}}\rbrace _{i=1}^N$. The target corpus $T=\lbrace \mathbf {t^{(i)}}\rbrace _{i=1}^{M}$ comprises of sentences $\mathbf {t}$ written in a specific style (e.g., humor). We assume that it conforms to the distribution $P(T)$. Note that the target corpus $T$ only contains style-carrying sentences, not necessarily headlines — it can be just book text. Also no sentence $\mathbf {t}$ is paired with a news article. Overall, our task is to learn the conditional distribution $P(T|A)$ using only $S$ and $T$. This task is fully unsupervised because there is no sample from the joint distribution $P(A, T)$. <<</Problem Formulation>>> <<<Seq2Seq Model Architecture>>> For summarization, we adopt a sequence-to-sequence (Seq2Seq) model based on the Transformer architecture BIBREF6. As in Figure FIGREF8, it consists of a 6-layer encoder $E(\mathbf {\cdot }; \mathbf {\theta _E})$ and a 6-layer decoder $G(\mathbf {\cdot }; \mathbf {\theta _G})$ with a hidden size of 1024 and a feed-forward filter size of 4096. For better generation quality, we initialize with the MASS model BIBREF3. MASS is pretrained by masking a sentence fragment in the encoder, and then predicting it in the decoder on large-scale English monolingual data. This pretraining is adopted in the current state-of-the-art systems across various summarization benchmark tasks including HG. <<</Seq2Seq Model Architecture>>> <<<Multitask Training Scheme>>> To disentangle the latent style from the text, we adopt a multitask learning framework BIBREF39, training on summarization and DAE simultaneously (as shown in Figure FIGREF10). <<<Supervised Seq2Seq Training for @!START@$E_S$@!END@ and @!START@$G_S$@!END@>>> With the source domain dataset $S$, based on the encoder-decoder architecture, we can learn the conditional distribution $P(H|A)$ by training $\mathbf {z}_S=E_S(A)$ and $H_S=G_S(\mathbf {z_S})$ to solve the supervised Seq2Seq learning task, where $\mathbf {z_S}$ is the learned latent representation in the source domain. The loss function of this task is where $\mathbf {\theta _{E_S}}$ and $\mathbf {\theta _{G_S}}$ are the set of model parameters of the encoder and decoder in the source domain and $p(\mathbf {h}|\mathbf {a})$ denotes the overall probability of generating an output sequence $\mathbf {h}$ given the input article $\mathbf {a}$, which can be further expanded as follows: where $L$ is the sequence length. 
<<</Supervised Seq2Seq Training for @!START@$E_S$@!END@ and @!START@$G_S$@!END@>>> <<<DAE Training for @!START@$\mathbf {\theta _{E_T}}$@!END@ and @!START@$\mathbf {\theta _{G_T}}$@!END@>>> For the target style corpus $T$, since we only have the sentence $\mathbf {t}$ without paired news articles, we train $\mathbf {z_T}=E_T(\mathbf {\tilde{t}})$ and $\mathbf {t}=G_T(\mathbf {z_T})$ by solving an unsupervised reconstruction learning task, where $\mathbf {z_T}$ is the learned latent representation in the target domain, and $\mathbf {\tilde{t}}$ is the corrupted version of $\mathbf {t}$ by randomly deleting or blanking some words and shuffling the word orders. To train the model, we minimize the reconstruction error $\mathcal {L}_T$: where $\mathbf {\theta _{E_T}}$ and $\mathbf {\theta _{G_T}}$ are the set of model parameters for the encoder and generator in the target domain. We train the whole model by jointly minimizing the supervised Seq2Seq training loss $\mathcal {L}_S$ and the unsupervised denoised auto-encoding loss $\mathcal {L}_T$ via multitask learning, so the total loss becomes where $\lambda $ is a hyper-parameter. <<</DAE Training for @!START@$\mathbf {\theta _{E_T}}$@!END@ and @!START@$\mathbf {\theta _{G_T}}$@!END@>>> <<</Multitask Training Scheme>>> <<<Parameter-Sharing Scheme>>> More constraints are necessary in the multitask training process. We aim to infer the conditional distribution as $ P(T|A)=G_T(E_S(A))$. However, without samples from $P(A, T)$, this is a challenging or even impossible task if $E_S$ and $E_T$, or $G_S$ and $G_T$ are completely independent of each other. Hence, we need to add some constraints to the network by relating $E_S$ and $E_T$, and $G_S$ and $G_T$. The simplest design is to share all parameters between $E_S$ and $E_T$, and apply the same strategy to $G_S$ and $G_T$. The intuition behind this design is that by exposing the model to both summarization task and style-carrying text reconstruction task, the model would acquire some sense of the target style while summarizing the article. However, to encourage the model to better disentangle the content and style of text and more explicitly learn the style contained in the target corpus $T$, we share all parameters of the encoder between two domains, i.e., between $E_S$ and $E_T$, whereas we divide the parameters of the decoder into two types: style-independent parameters $\mathbf {\theta _{\mathrm {ind}}}$ and style-dependent parameters $\mathbf {\theta _{\mathrm {dep}}}$. This means that only the style-independent parameters are shared between $G_S$ and $G_T$ while the style-dependent parameters are not. More specifically, the parameters of the layer normalization and encoder attention modules are made style-dependent as detailed below. <<<Type 1. Style Layer Normalization>>> Inspired by previous work on image style transfer BIBREF40, we make the scaling and shifting parameters for layer normalization in the transformer architecture un-shared for each style. This style layer normalization approach aims to transform a layer’s activation $\mathbf {x}$ into a normalized activation $\mathbf {z}$ specific to the style $s$: where $\mu $ and $\sigma $ are the mean and standard deviation of the batch of $\mathbf {x}$, and $\gamma _s$ and $\beta _s$ are style-specific parameters learned from data. Specifically, for the transformer decoder architecture, we use a style-specific self-attention layer normalization and final layer normalization for the source and target domains on all six decoder layers. 
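As a concrete illustration of the style layer normalization described above, here is a minimal PyTorch-style sketch. It is not the authors' implementation; the module name, the per-style parameter layout, and normalizing over the hidden dimension (the standard choice for transformer layer normalization) are assumptions made for illustration.

import torch
import torch.nn as nn

class StyleLayerNorm(nn.Module):
    # Layer normalization with style-specific scale (gamma_s) and shift (beta_s):
    # the normalization itself is shared, but each style keeps its own affine parameters.
    def __init__(self, hidden_size, num_styles, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(num_styles, hidden_size))
        self.beta = nn.Parameter(torch.zeros(num_styles, hidden_size))

    def forward(self, x, style_id):
        # x: (batch, seq_len, hidden_size); style_id: integer index of the active style.
        mu = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, keepdim=True, unbiased=False)
        z = (x - mu) / torch.sqrt(var + self.eps)
        return self.gamma[style_id] * z + self.beta[style_id]

Plugging in a different style_id at generation time is what lets the same decoder stack switch between the plain source style and a target style while keeping all shared parameters fixed.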
<<</Type 1. Style Layer Normalization>>> <<<Type 2. Style-Guided Encoder Attention>>> Our model architecture contains the attention mechanism, where the decoder infers the probability of the next word not only conditioned on the previous words but also on the encoded input hidden states. The attention patterns should be different for the summarization and the reconstruction tasks due to their different inherent nature. We insert this thinking into the model by introducing the style-guided encoder attention into the multi-head attention module, which is defined as follows: where $\mathbf {\mathrm {query}}$, $\mathbf {\mathrm {key}}$, and $\mathbf {\mathrm {value}}$ denote the triple of inputs into the multi-head attention module; $\mathbf {W_q^s}$, $\mathbf {W_k}$, and $\mathbf {W_v}$ denote the scaled dot-product matrix for affine transformation; $d_{\mathrm {model}}$ is the dimension of the hidden states. We specialize the dot-product matrix $\mathbf {W_q^s}$ of the query for different styles, so that $\mathbf {Q}$ can be different to induce diverse attention patterns. <<</Type 2. Style-Guided Encoder Attention>>> <<</Parameter-Sharing Scheme>>> <<</Methods>>> <<<Experiments>>> <<<Datasets>>> We compile a rich source dataset by combining the New York Times (NYT) and CNN, as well as three target style corpora on humorous, romantic, and click-baity text. The average sentence length in the NYT, CNN, Humor, Romance, and Clickbait datasets are 8.8, 9.2, 12.6, 11.6 and 8.7 words, respectively. <<<Source Dataset>>> The source dataset contains news articles paired with corresponding headlines. To enrich the training corpus, we combine two datasets: the New York Times (56K) and CNN (90K). After combining these two datasets, we randomly selected 3,000 pairs as the validation set and another 3,000 pairs as the test set. We first extracted the archival abstracts and headlines from the New York Times (NYT) corpus BIBREF41 and treat the abstracts as the news articles. Following the standard pre-processing procedures BIBREF42, we filtered out advertisement-related articles (as they are very different from news reports), resulting in 56,899 news abstracts-headlines pairs. We then add into our source set the CNN summarization dataset, which is widely used for training abstractive summarization models BIBREF43. We use the short summaries in the original dataset as the news abstracts and automatically parsed the headlines for each news from the dumped news web pages, and in total collected 90,236 news abstract-headline pairs. <<</Source Dataset>>> <<<Three Target Style Corpora>>> <<<Humor and Romance>>> For the target style datasets, we follow BIBREF44 to use humor and romance novel collections in BookCorpus BIBREF45 as the Humor and Romance datasets. We split the documents into sentences, tokenized the text, and collected 500K sentences as our datasets. <<</Humor and Romance>>> <<<Clickbait>>> We also tried to learn the writing style from the click-baity headlines since they have shown superior attraction to readers. Thus we used The Examiner - SpamClickBait News dataset, denoted as the Clickbait dataset. We collected 500K headlines for our use. Some examples from each style corpus are listed in Table TABREF32. <<</Clickbait>>> <<</Three Target Style Corpora>>> <<</Datasets>>> <<<Baselines>>> We compared the proposed TitleStylist against the following five strong baseline approaches. 
<<<Neural Headline Generation (NHG)>>> We train the state-of-the-art summarization model, MASS BIBREF3, on our collected news abstracts-headlines paired data. <<</Neural Headline Generation (NHG)>>> <<<Gigaword-MASS>>> We test an off-the-shelf headline generation model, MASS from BIBREF3, which is already trained on Gigaword, a large-scale headline generation dataset with around 4 million articles. <<</Gigaword-MASS>>> <<<Neural Story Teller (NST)>>> It breaks down the task into two steps: first generating headlines from the aforementioned NHG model, then applying style shift techniques to generate style-specific headlines BIBREF46. In brief, this method uses the Skip-Thought model to encode a sentence into a representation vector and then manipulates its style by a linear transformation. Afterward, this transformed representation vector is used to initialize a language model pretrained on a style-specific corpus so that a stylistic headline can be generated. More details of this method can be found on the official website. <<</Neural Story Teller (NST)>>> <<<Fine-Tuned>>> We first train the NHG model as mentioned above, then further fine-tune it on the target style corpus via DAE training. <<</Fine-Tuned>>> <<<Multitask>>> We share all parameters between $E_S$ and $E_T$, and between $G_S$ and $G_T$, and train the model on both the summarization and DAE tasks. The model architecture is the same as NHG. <<</Multitask>>> <<</Baselines>>> <<<Evaluation Metrics>>> To evaluate the performance of the proposed TitleStylist in generating attractive headlines with styles, we propose a comprehensive twofold strategy of both automatic evaluation and human evaluation. <<<Setup of Human Evaluation>>> We randomly sampled 50 news abstracts from the test set and asked three native-speaker annotators to score the generated headlines. Specifically, we conduct two tasks to evaluate on four criteria: (1) relevance, (2) attractiveness, (3) language fluency, and (4) style strength. For the first task, the human raters are asked to evaluate these outputs on the first three aspects, relevance, attractiveness, and language fluency, on a Likert scale from 1 to 10 (integer values). For relevance, human annotators are asked to evaluate how semantically relevant the headline is to the news body. For attractiveness, annotators are asked how attractive the headlines are. For fluency, we ask the annotators to evaluate how fluent and readable the text is. After the collection of human evaluation results, we averaged the scores to obtain the final score. In addition, we have another independent human evaluation task about the style strength – we present the generated headlines from TitleStylist and baselines to the human judges and let them choose the one that most conforms to the target style such as humor. Then we define the style strength score as the proportion of choices. <<</Setup of Human Evaluation>>> <<<Setup of Automatic Evaluation>>> Apart from the comprehensive human evaluation, we use automatic evaluation to measure the generation quality through two conventional aspects: summarization quality and language fluency. Note that the purpose of this two-way automatic evaluation is to confirm that the performance of our model is in an acceptable range. Good automatic evaluation performance provides necessary evidence to complement the human evaluation of model effectiveness. 
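Before turning to the automatic metrics, a small worked example of how the human scores described above can be aggregated: the sketch below averages the Likert ratings and computes the style strength score as the proportion of judges' choices. The data layout is an assumption for illustration, not the authors' evaluation scripts.

from collections import Counter

def average_likert(ratings):
    # ratings: Likert scores (integers 1-10) from all annotators for one system on one criterion.
    return sum(ratings) / len(ratings)

def style_strength(choices):
    # choices: the system name each judge picked as most conforming to the target style.
    # Returns the fraction of picks per system, e.g. 2/3 and 1/3 for a three-judge vote split 2-1.
    counts = Counter(choices)
    return {system: count / len(choices) for system, count in counts.items()}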
<<<Summarization Quality>>> We use the standard automatic evaluation metrics for summarization with the original headlines as the reference: BLEU BIBREF47, METEOR BIBREF48, ROUGE BIBREF49 and CIDEr BIBREF50. For ROUGE, we used the Files2ROUGE toolkit, and for other metrics, we used the pycocoeval toolkit. <<</Summarization Quality>>> <<<Language Fluency>>> We fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs. <<</Language Fluency>>> <<</Setup of Automatic Evaluation>>> <<</Evaluation Metrics>>> <<<Experimental Details>>> We used the fairseq code base BIBREF52. During training, we use the Adam optimizer with an initial learning rate of $5\times 10^{-4}$, and the batch size is set to 3072 tokens per GPU with the parameter update frequency set to 4. For the random corruption in DAE training, we follow the standard practice of randomly deleting or blanking words with a uniform probability of $0.2$ and randomly shuffling the word order within 5 tokens. All datasets are lower-cased. $\lambda $ is set to 0.5 in the experiments. For each iteration of training, we randomly draw a batch of data either from the source dataset or from the target style corpus, with the sampling probability equal to $\lambda $. <<</Experimental Details>>> <<</Experiments>>> <<<Results and Discussion>>> <<<Human Evaluation Results>>> The human evaluation provides a comprehensive measurement of model performance. We conduct experiments on four criteria: relevance, attraction, fluency, and style strength. We summarize the human evaluation results on the first three criteria in Table TABREF51, and the last criterion in Table TABREF57. Note that in the automatic evaluation, the baselines NST, Fine-tuned, and Gigaword-MASS perform worse than the other methods (Section SECREF58), so we excluded them from the human evaluation to spare the raters unnecessary work. <<<Relevance>>> We first look at the relevance scores in Table TABREF51. It is interesting but not surprising that the pure summarization model NHG achieves the highest relevance score. The outputs from NHG are usually like an organic reorganization of several keywords in the source context (as shown in Table TABREF52), thus appearing most relevant. It is noteworthy that the generated headlines of our TitleStylist for all three styles are close to the original human-written headlines in terms of relevance, validating that our generation results are acceptable in this aspect. Another finding is that more attractive or more stylistic headlines would lose some relevance since they need to use more words outside the news body for improved creativity. <<</Relevance>>> <<<Attraction>>> In terms of attraction scores in Table TABREF51, we have three findings: (1) The human-written headlines are more attractive than those from NHG, which agrees with our observation in Section SECREF1. (2) Our TitleStylist can generate more attractive headlines than the NHG and Multitask baselines for all three styles, demonstrating that adapting the model to these styles improves attraction, and that specializing some model parameters for different styles further enhances it. (3) Adapting the model to the “Clickbait” style could create the most attractive headlines, even outweighing the original ones, which agrees with the fact that click-baity headlines are better at drawing readers' attention. 
It should be noted that although we incorporated the “Clickbait” style into our summarization system, we still made sure to generate relevant headlines rather than overly exaggerated ones, which can be verified by our relevance scores. <<</Attraction>>> <<<Fluency>>> The human-annotated fluency scores in Table TABREF51 verified that our TitleStylist-generated headlines are comparable or superior to the human-written headlines in terms of readability. <<</Fluency>>> <<<Style Strength>>> We also validated that our TitleStylist carries the target styles more strongly than the Multitask and NHG baselines by summarizing the percentage of choices by humans for the most humorous or romantic headlines in Table TABREF57. <<</Style Strength>>> <<</Human Evaluation Results>>> <<<Automatic Evaluation Results>>> Apart from the human evaluation of the overall generation quality on four criteria, we also conducted a conventional automatic assessment to gauge only the summarization quality. This evaluation does not take other measures such as the style strength into consideration, but it serves as important complementary evidence that the model has an acceptable level of summarization ability. Table TABREF59 summarizes the automatic evaluation results of our proposed TitleStylist model and all baselines. We use the summarization-related evaluation metrics, i.e., BLEU, ROUGE, CIDEr, and METEOR, to measure how relevant the generated headlines are to the news articles, to some extent, by comparing them to the original human-written headlines. In Table TABREF59, the first row “NHG” shows the performance of the current state-of-the-art summarization model on our data, and Table TABREF52 provides two examples of its generation output. Our ultimate goal is to generate more attractive headlines than these while maintaining relevance to the news body. From Table TABREF59, the baseline Gigaword-MASS scored worse than NHG, revealing that directly applying an off-the-shelf headline generation model to new in-domain data is not feasible, although this model has been trained on a dataset more than 20 times larger. Both the NST and Fine-tuned baselines present very poor summarization performance, and the reason could be that both of them cast the problem into two separate steps, summarization and style transfer, where the latter step involves no summarization objective, which prevents the model from maintaining its summarization capability. In contrast, the Multitask baseline involves the summarization and style transfer (via reconstruction training) processes at the same time and shows superior summarization performance even compared with NHG. This reveals that the unsupervised reconstruction task can indeed help improve the supervised summarization task. More importantly, we use two different types of corpora for the reconstruction task: one consists of headlines that are similar to the news data for the summarization task, and the other consists of text from novels that are entirely different from the news data. However, unsupervised reconstruction training on both types of data can contribute to the summarization task, which sheds light on potential future work in summarization that incorporates unsupervised learning as augmentation. We find in Table TABREF59 that TitleStylist-F achieves the best summarization performance. This indicates that, compared with the Multitask baseline where the two tasks share all parameters, specialization of layer normalization and encoder-attention parameters can make $G_S$ focus more on summarization. 
It is noteworthy that the summarization scores for TitleStylist are lower than those of TitleStylist-F but still comparable to NHG. This agrees with the fact that the $G_T$ branch focuses more on bringing stylistic linguistic patterns into the generated summaries, so the outputs deviate from pure summarization to some degree. However, their relevance remains close to that of the baseline NHG, which is the starting point we want to improve on. In the next section, we will further validate through human evaluation that these headlines are faithful to the news article. We also reported the perplexity (PPL) of the generated headlines to evaluate the language fluency, as shown in Table TABREF59. All outputs from the baselines NHG and Multitask and from our proposed TitleStylist show PPL similar to that of the test set used in the fine-tuning stage (42.5), indicating that they are all fluent expressions for news headlines. <<</Automatic Evaluation Results>>> <<<Extension to Multi-Style>>> We progressively expand TitleStylist to include all three target styles (humor, romance, and clickbait) to demonstrate the flexibility of our model. That is, we simultaneously trained the summarization task on the headlines data and the DAE task on the three target style corpora. We made the layer normalization and encoder-attention parameters specialized for these four styles (fact, humor, romance, and clickbait) and shared the other parameters. We compared this multi-style version, TitleStylist-Versatile, with the previously presented single-style counterpart, as shown in Table TABREF61. From this table, we see that the BLEU and ROUGE-L scores of TitleStylist-Versatile are comparable to those of TitleStylist for all three styles. In addition, we conducted another human study to determine which of the two models produces the more attractive headline, allowing human annotators to choose both options if they deem them equivalent. The result is presented in the last column of Table TABREF61, which shows that the attraction of TitleStylist-Versatile outputs is competitive with that of TitleStylist. TitleStylist-Versatile thus generates multiple headlines in different styles altogether, which is a novel and efficient feature. <<</Extension to Multi-Style>>> <<</Results and Discussion>>> <<<Conclusion>>> We have proposed a new task of Stylistic Headline Generation (SHG) to emphasize explicit control of styles in headline generation for improved attraction. To this end, we presented a multitask framework to induce styles into summarization, and proposed a parameter-sharing scheme to enhance both summarization and stylization capabilities. Through experiments, we validated that our proposed TitleStylist can generate more attractive headlines than state-of-the-art HG models. <<</Conclusion>>> <<</Title>>>
{ "references": [ "annotators are asked how attractive the headlines are,Likert scale from 1 to 10 (integer values)" ], "type": "extractive" }
2004.01980
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: How is presence of three target styles detected? Context: <<<Title>>> Hooks in the Headline: Learning to Generate Headlines with Controlled Styles <<<Abstract>>> Current summarization systems only produce plain, factual headlines, but do not meet the practical needs of creating memorable titles to increase exposure. We propose a new task, Stylistic Headline Generation (SHG), to enrich the headlines with three style options (humor, romance and clickbait), in order to attract more readers. With no style-specific article-headline pair (only a standard headline summarization dataset and mono-style corpora), our method TitleStylist generates style-specific headlines by combining the summarization and reconstruction tasks into a multitasking framework. We also introduced a novel parameter sharing scheme to further disentangle the style from the text. Through both automatic and human evaluation, we demonstrate that TitleStylist can generate relevant, fluent headlines with three target styles: humor, romance, and clickbait. The attraction score of our model generated headlines surpasses that of the state-of-the-art summarization model by 9.68%, and even outperforms human-written references. <<</Abstract>>> <<<Introduction>>> Every good article needs a good title, which should not only be able to condense the core meaning of the text, but also sound appealing to the readers for more exposure and memorableness. However, currently even the best Headline Generation (HG) system can only fulfill the above requirement yet performs poorly on the latter. For example, in Figure FIGREF2, the plain headline by an HG model “Summ: Leopard Frog Found in New York City” is less eye-catching than the style-carrying ones such as “What's That Chuckle You Hear? It May Be the New Frog From NYC.” To bridge the gap between the practical needs for attractive headlines and the plain HG by the current summarization systems, we propose a new task of Stylistic Headline Generation (SHG). Given an article, it aims to generate a headline with a target style such as humorous, romantic, and click-baity. It has broad applications in reader-adapted title generation, slogan suggestion, auto-fill for online post headlines, and many others. SHG is a highly skilled creative process, and usually only possessed by expert writers. One of the most famous headlines in American publications, “Sticks Nix Hick Pix,” could be such an example. In contrast, the current best summarization systems are at most comparable to novice writers who provide a plain descriptive representation of the text body as the title BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. These systems usually use a language generation model that mixes styles with other linguistic patterns and inherently lacks a mechanism to control the style explicitly. More fundamentally, the training data comprise of a mixture of styles (e.g., the Gigaword dataset BIBREF5), obstructing the models from learning a distinct style. In this paper, we propose the new task SHG, to emphasize the explicit control of style in headline generation. We present a novel headline generation model, TitleStylist, to produce enticing titles with target styles including humorous, romantic, and click-baity. Our model leverages a multitasking framework to train both a summarization model on headline-article pairs, and a Denoising Autoencoder (DAE) on a style corpus. 
In particular, based on the transformer architecture BIBREF6, we use the style-dependent layer normalization and the style-guided encoder-attention to disentangle the language style factors from the text. This design enables us to use the shared content to generate headlines that are more relevant to the articles, as well as to control the style by plugging in a set of style-specific parameters. We validate the model on three tasks: humorous, romantic, and click-baity headline generation. Both automatic and human evaluations show that TitleStylist can generate headlines with the desired styles that appeal more to human readers, as in Figure FIGREF2. The main contributions of our paper are listed below: To the best of our knowledge, it is the first research on the generation of attractive news headlines with styles without any supervised style-specific article-headline paired data. Through both automatic and human evaluation, we demonstrated that our proposed TitleStylist can generate relevant, fluent headlines with three styles (humor, romance, and clickbait), and they are even more attractive than human-written ones. Our model can flexibly incorporate multiple styles, thus efficiently and automatically providing humans with various creative headline options for references and inspiring them to think out of the box. <<</Introduction>>> <<<Related Work>>> Our work is related to summarization and text style transfer. <<<Headline Generation as Summarization>>> Headline generation is a very popular area of research. Traditional headline generation methods mostly focus on the extractive strategies using linguistic features and handcrafted rules BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. To enrich the diversity of the extractive summarization, abstractive models were then proposed. With the help of neural networks, BIBREF14 proposed attention-based summarization (ABS) to make BIBREF15's framework of summarization more powerful. Many recent works extended ABS by utilizing additional features BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22. Other variants of the standard headline generation setting include headlines for community question answering BIBREF23, multiple headline generation BIBREF24, user-specific generation using user embeddings in recommendation systems BIBREF25, bilingual headline generation BIBREF26 and question-style headline generation BIBREF27. Only a few works have recently started to focus on increasing the attractiveness of generated headlines BIBREF28, BIBREF29. BIBREF28 focuses on controlling several features of the summary text such as text length, and the style of two different news outlets, CNN and DailyMail. These controls serve as a way to boost the model performance, and the CNN- and DailyMail-style control shows a negligible improvement. BIBREF29 utilized reinforcement learning to encourage the headline generation system to generate more sensational headlines via using the readers' comment rate as the reward, which however cannot explicitly control or manipulate the styles of headlines. BIBREF30 proposed a style transfer approach to transfer a non-clickbait headline into a clickbait one. This method requires paired news articles-headlines data for the target style; however, for many styles such as humor and romance, there are no available headlines. Our model does not have this limitation, thus enabling transferring to many more styles. 
<<</Headline Generation as Summarization>>> <<<Text Style Transfer>>> Our work is also related to text style transfer, which aims to change the style attribute of the text while preserving its content. First proposed by BIBREF31, it has achieved great progress in recent years BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, BIBREF38. However, all these methods demand a text corpus for the target style; however, in our case, it is expensive and technically challenging to collect news headlines with humor and romance styles, which makes this category of methods not applicable to our problem. <<</Text Style Transfer>>> <<</Related Work>>> <<<Methods>>> <<<Problem Formulation>>> The model is trained on a source dataset $S$ and target dataset $T$. The source dataset $S=\lbrace (\mathbf {a^{(i)}},\mathbf {h^{(i)}})\rbrace _{i=1}^N$ consists of pairs of a news article $\mathbf {a}$ and its plain headline $\mathbf {h}$. We assume that the source corpus has a distribution $P(A, H)$, where $A=\lbrace \mathbf {a^{(i)}}\rbrace _{i=1}^N$, and $H=\lbrace \mathbf {h^{(i)}}\rbrace _{i=1}^N$. The target corpus $T=\lbrace \mathbf {t^{(i)}}\rbrace _{i=1}^{M}$ comprises of sentences $\mathbf {t}$ written in a specific style (e.g., humor). We assume that it conforms to the distribution $P(T)$. Note that the target corpus $T$ only contains style-carrying sentences, not necessarily headlines — it can be just book text. Also no sentence $\mathbf {t}$ is paired with a news article. Overall, our task is to learn the conditional distribution $P(T|A)$ using only $S$ and $T$. This task is fully unsupervised because there is no sample from the joint distribution $P(A, T)$. <<</Problem Formulation>>> <<<Seq2Seq Model Architecture>>> For summarization, we adopt a sequence-to-sequence (Seq2Seq) model based on the Transformer architecture BIBREF6. As in Figure FIGREF8, it consists of a 6-layer encoder $E(\mathbf {\cdot }; \mathbf {\theta _E})$ and a 6-layer decoder $G(\mathbf {\cdot }; \mathbf {\theta _G})$ with a hidden size of 1024 and a feed-forward filter size of 4096. For better generation quality, we initialize with the MASS model BIBREF3. MASS is pretrained by masking a sentence fragment in the encoder, and then predicting it in the decoder on large-scale English monolingual data. This pretraining is adopted in the current state-of-the-art systems across various summarization benchmark tasks including HG. <<</Seq2Seq Model Architecture>>> <<<Multitask Training Scheme>>> To disentangle the latent style from the text, we adopt a multitask learning framework BIBREF39, training on summarization and DAE simultaneously (as shown in Figure FIGREF10). <<<Supervised Seq2Seq Training for @!START@$E_S$@!END@ and @!START@$G_S$@!END@>>> With the source domain dataset $S$, based on the encoder-decoder architecture, we can learn the conditional distribution $P(H|A)$ by training $\mathbf {z}_S=E_S(A)$ and $H_S=G_S(\mathbf {z_S})$ to solve the supervised Seq2Seq learning task, where $\mathbf {z_S}$ is the learned latent representation in the source domain. The loss function of this task is where $\mathbf {\theta _{E_S}}$ and $\mathbf {\theta _{G_S}}$ are the set of model parameters of the encoder and decoder in the source domain and $p(\mathbf {h}|\mathbf {a})$ denotes the overall probability of generating an output sequence $\mathbf {h}$ given the input article $\mathbf {a}$, which can be further expanded as follows: where $L$ is the sequence length. 
<<</Supervised Seq2Seq Training for @!START@$E_S$@!END@ and @!START@$G_S$@!END@>>> <<<DAE Training for @!START@$\mathbf {\theta _{E_T}}$@!END@ and @!START@$\mathbf {\theta _{G_T}}$@!END@>>> For the target style corpus $T$, since we only have the sentence $\mathbf {t}$ without paired news articles, we train $\mathbf {z_T}=E_T(\mathbf {\tilde{t}})$ and $\mathbf {t}=G_T(\mathbf {z_T})$ by solving an unsupervised reconstruction learning task, where $\mathbf {z_T}$ is the learned latent representation in the target domain, and $\mathbf {\tilde{t}}$ is the corrupted version of $\mathbf {t}$ by randomly deleting or blanking some words and shuffling the word orders. To train the model, we minimize the reconstruction error $\mathcal {L}_T$: where $\mathbf {\theta _{E_T}}$ and $\mathbf {\theta _{G_T}}$ are the set of model parameters for the encoder and generator in the target domain. We train the whole model by jointly minimizing the supervised Seq2Seq training loss $\mathcal {L}_S$ and the unsupervised denoised auto-encoding loss $\mathcal {L}_T$ via multitask learning, so the total loss becomes where $\lambda $ is a hyper-parameter. <<</DAE Training for @!START@$\mathbf {\theta _{E_T}}$@!END@ and @!START@$\mathbf {\theta _{G_T}}$@!END@>>> <<</Multitask Training Scheme>>> <<<Parameter-Sharing Scheme>>> More constraints are necessary in the multitask training process. We aim to infer the conditional distribution as $ P(T|A)=G_T(E_S(A))$. However, without samples from $P(A, T)$, this is a challenging or even impossible task if $E_S$ and $E_T$, or $G_S$ and $G_T$ are completely independent of each other. Hence, we need to add some constraints to the network by relating $E_S$ and $E_T$, and $G_S$ and $G_T$. The simplest design is to share all parameters between $E_S$ and $E_T$, and apply the same strategy to $G_S$ and $G_T$. The intuition behind this design is that by exposing the model to both summarization task and style-carrying text reconstruction task, the model would acquire some sense of the target style while summarizing the article. However, to encourage the model to better disentangle the content and style of text and more explicitly learn the style contained in the target corpus $T$, we share all parameters of the encoder between two domains, i.e., between $E_S$ and $E_T$, whereas we divide the parameters of the decoder into two types: style-independent parameters $\mathbf {\theta _{\mathrm {ind}}}$ and style-dependent parameters $\mathbf {\theta _{\mathrm {dep}}}$. This means that only the style-independent parameters are shared between $G_S$ and $G_T$ while the style-dependent parameters are not. More specifically, the parameters of the layer normalization and encoder attention modules are made style-dependent as detailed below. <<<Type 1. Style Layer Normalization>>> Inspired by previous work on image style transfer BIBREF40, we make the scaling and shifting parameters for layer normalization in the transformer architecture un-shared for each style. This style layer normalization approach aims to transform a layer’s activation $\mathbf {x}$ into a normalized activation $\mathbf {z}$ specific to the style $s$: where $\mu $ and $\sigma $ are the mean and standard deviation of the batch of $\mathbf {x}$, and $\gamma _s$ and $\beta _s$ are style-specific parameters learned from data. Specifically, for the transformer decoder architecture, we use a style-specific self-attention layer normalization and final layer normalization for the source and target domains on all six decoder layers. 
<<</Type 1. Style Layer Normalization>>> <<<Type 2. Style-Guided Encoder Attention>>> Our model architecture contains the attention mechanism, where the decoder infers the probability of the next word not only conditioned on the previous words but also on the encoded input hidden states. The attention patterns should be different for the summarization and the reconstruction tasks due to their different inherent nature. We insert this thinking into the model by introducing the style-guided encoder attention into the multi-head attention module, which is defined as follows: where $\mathbf {\mathrm {query}}$, $\mathbf {\mathrm {key}}$, and $\mathbf {\mathrm {value}}$ denote the triple of inputs into the multi-head attention module; $\mathbf {W_q^s}$, $\mathbf {W_k}$, and $\mathbf {W_v}$ denote the scaled dot-product matrix for affine transformation; $d_{\mathrm {model}}$ is the dimension of the hidden states. We specialize the dot-product matrix $\mathbf {W_q^s}$ of the query for different styles, so that $\mathbf {Q}$ can be different to induce diverse attention patterns. <<</Type 2. Style-Guided Encoder Attention>>> <<</Parameter-Sharing Scheme>>> <<</Methods>>> <<<Experiments>>> <<<Datasets>>> We compile a rich source dataset by combining the New York Times (NYT) and CNN, as well as three target style corpora on humorous, romantic, and click-baity text. The average sentence length in the NYT, CNN, Humor, Romance, and Clickbait datasets are 8.8, 9.2, 12.6, 11.6 and 8.7 words, respectively. <<<Source Dataset>>> The source dataset contains news articles paired with corresponding headlines. To enrich the training corpus, we combine two datasets: the New York Times (56K) and CNN (90K). After combining these two datasets, we randomly selected 3,000 pairs as the validation set and another 3,000 pairs as the test set. We first extracted the archival abstracts and headlines from the New York Times (NYT) corpus BIBREF41 and treat the abstracts as the news articles. Following the standard pre-processing procedures BIBREF42, we filtered out advertisement-related articles (as they are very different from news reports), resulting in 56,899 news abstracts-headlines pairs. We then add into our source set the CNN summarization dataset, which is widely used for training abstractive summarization models BIBREF43. We use the short summaries in the original dataset as the news abstracts and automatically parsed the headlines for each news from the dumped news web pages, and in total collected 90,236 news abstract-headline pairs. <<</Source Dataset>>> <<<Three Target Style Corpora>>> <<<Humor and Romance>>> For the target style datasets, we follow BIBREF44 to use humor and romance novel collections in BookCorpus BIBREF45 as the Humor and Romance datasets. We split the documents into sentences, tokenized the text, and collected 500K sentences as our datasets. <<</Humor and Romance>>> <<<Clickbait>>> We also tried to learn the writing style from the click-baity headlines since they have shown superior attraction to readers. Thus we used The Examiner - SpamClickBait News dataset, denoted as the Clickbait dataset. We collected 500K headlines for our use. Some examples from each style corpus are listed in Table TABREF32. <<</Clickbait>>> <<</Three Target Style Corpora>>> <<</Datasets>>> <<<Baselines>>> We compared the proposed TitleStylist against the following five strong baseline approaches. 
<<<Neural Headline Generation (NHG)>>> We train the state-of-the-art summarization model, MASS BIBREF3, on our collected news abstracts-headlines paired data. <<</Neural Headline Generation (NHG)>>> <<<Gigaword-MASS>>> We test an off-the-shelf headline generation model, MASS from BIBREF3, which is already trained on Gigaword, a large-scale headline generation dataset with around 4 million articles. <<</Gigaword-MASS>>> <<<Neural Story Teller (NST)>>> It breaks down the task into two steps, which first generates headlines from the aforementioned NHG model, then applies style shift techniques to generate style-specific headlines BIBREF46. In brief, this method uses the Skip-Thought model to encode a sentence into a representation vector and then manipulates its style by a linear transformation. Afterward, this transformed representation vector is used to initialize a language model pretrained on a style-specific corpus so that a stylistic headline can be generated. More details of this method can refer to the official website. <<</Neural Story Teller (NST)>>> <<<Fine-Tuned>>> We first train the NHG model as mentioned above, then further fine-tuned it on the target style corpus via DAE training. <<</Fine-Tuned>>> <<<Multitask>>> We share all parameters between $E_S$ and $E_T$, and between $G_S$ and $G_T$, and trained the model on both the summarization and DAE tasks. The model architecture is the same as NHG. <<</Multitask>>> <<</Baselines>>> <<<Evaluation Metrics>>> To evaluate the performance of the proposed TitleStylist in generating attractive headlines with styles, we propose a comprehensive twofold strategy of both automatic evaluation and human evaluation. <<<Setup of Human Evaluation>>> We randomly sampled 50 news abstracts from the test set and asked three native-speaker annotators for evaluation to score the generated headlines. Specifically, we conduct two tasks to evaluate on four criteria: (1) relevance, (2) attractiveness, (3) language fluency, and (4) style strength. For the first task, the human raters are asked to evaluate these outputs on the first three aspects, relevance, attractiveness, and language fluency on a Likert scale from 1 to 10 (integer values). For relevance, human annotators are asked to evaluate how semantically relevant the headline is to the news body. For attractiveness, annotators are asked how attractive the headlines are. For fluency, we ask the annotators to evaluate how fluent and readable the text is. After the collection of human evaluation results, we averaged the scores as the final score. In addition, we have another independent human evaluation task about the style strength – we present the generated headlines from TitleStylist and baselines to the human judges and let them choose the one that most conforms to the target style such as humor. Then we define the style strength score as the proportion of choices. <<</Setup of Human Evaluation>>> <<<Setup of Automatic Evaluation>>> Apart from the comprehensive human evaluation, we use automatic evaluation to measure the generation quality through two conventional aspects: summarization quality and language fluency. Note that the purpose of this two-way automatic evaluation is to confirm that the performance of our model is in an acceptable range. Good automatic evaluation performances are necessary proofs to compliment human evaluations on the model effectiveness. 
<<<Summarization Quality>>> We use the standard automatic evaluation metrics for summarization with the original headlines as the reference: BLEU BIBREF47, METEOR BIBREF48, ROUGE BIBREF49 and CIDEr BIBREF50. For ROUGE, we used the Files2ROUGE toolkit, and for other metrics, we used the pycocoeval toolkit. <<</Summarization Quality>>> <<<Language Fluency>>> We fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs. <<</Language Fluency>>> <<</Setup of Automatic Evaluation>>> <<</Evaluation Metrics>>> <<<Experimental Details>>> We used the fairseq code base BIBREF52. During training, we use Adam optimizer with an initial learning rate of $5\times 10^{-4}$, and the batch size is set as 3072 tokens for each GPU with the parameters update frequency set as 4. For the random corruption for DAE training, we follow the standard practice to randomly delete or blank the word with a uniform probability of $0.2$, and randomly shuffled the word order within 5 tokens. All datasets are lower-cased. $\lambda $ is set as 0.5 in experiments. For each iteration of training, we randomly draw a batch of data either from the source dataset or from the target style corpus, and the sampling strategy follows the uniform distribution with the probability being equal to $\lambda $. <<</Experimental Details>>> <<</Experiments>>> <<<Results and Discussion>>> <<<Human Evaluation Results>>> The human evaluation is to have a comprehensive measurement of the performances. We conduct experiments on four criteria, relevance, attraction, fluency, and style strength. We summarize the human evaluation results on the first three criteria in Table TABREF51, and the last criteria in Table TABREF57. Note that through automatic evaluation, the baselines NST, Fine-tuned, and Gigaword-MASS perform poorer than other methods (in Section SECREF58), thereby we removed them in human evaluation to save unnecessary work for human raters. <<<Relevance>>> We first look at the relevance scores in Table TABREF51. It is interesting but not surprising that the pure summarization model NHG achieves the highest relevance score. The outputs from NHG are usually like an organic reorganization of several keywords in the source context (as shown in Table TABREF52), thus appearing most relevant. It is noteworthy that the generated headlines of our TitleStylist for all three styles are close to the original human-written headlines in terms of relevance, validating that our generation results are qualified in this aspect. Another finding is that more attractive or more stylistic headlines would lose some relevance since they need to use more words outside the news body for improved creativity. <<</Relevance>>> <<<Attraction>>> In terms of attraction scores in Table TABREF51, we have three findings: (1) The human-written headlines are more attractive than those from NHG, which agrees with our observation in Section SECREF1. (2) Our TitleStylist can generate more attractive headlines over the NHG and Multitask baselines for all three styles, demonstrating that adapting the model to these styles could improve the attraction and specialization of some parameters in the model for different styles can further enhance the attraction. (3) Adapting the model to the “Clickbait” style could create the most attractive headlines, even out-weighting the original ones, which agrees with the fact that click-baity headlines are better at drawing readers' attention. 
To be noted, although we learned the “Clickbait” style into our summarization system, we still made sure that we are generating relevant headlines instead of too exaggerated ones, which can be verified by our relevance scores. <<</Attraction>>> <<<Fluency>>> The human-annotated fluency scores in Table TABREF51 verified that our TitleStylist generated headlines are comparable or superior to the human-written headlines in terms of readability. <<</Fluency>>> <<<Style Strength>>> We also validated that our TitleStylist can carry more styles compared with the Multitask and NHG baselines by summarizing the percentage of choices by humans for the most humorous or romantic headlines in Table TABREF57. <<</Style Strength>>> <<</Human Evaluation Results>>> <<<Automatic Evaluation Results>>> Apart from the human evaluation of the overall generation quality on four criteria, we also conducted a conventional automatic assessment to gauge only the summarization quality. This evaluation does not take other measures such as the style strength into consideration, but it serves as important complimentary proof to ensure that the model has an acceptable level of summarization ability. Table TABREF59 summarizes the automatic evaluation results of our proposed TitleStylist model and all baselines. We use the summarization-related evaluation metrics, i.e., BLEU, ROUGE, CIDEr, and METEOR, to measure how relevant the generated headlines are to the news articles, to some extent, by comparing them to the original human-written headlines. In Table TABREF59, the first row “NHG” shows the performance of the current state-of-the-art summarization model on our data, and Table TABREF52 provides two examples of its generation output. Our ultimate goal is to generate more attractive headlines than these while maintaining relevance to the news body. From Table TABREF59, the baseline Gigaword-MASS scored worse than NHG, revealing that directly applying an off-the-shelf headline generation model to new in-domain data is not feasible, although this model has been trained on more than 20 times larger dataset. Both NST and Fine-tuned baselines present very poor summarization performance, and the reason could be that both of them cast the problem into two steps: summarization and style transfer, and the latter step is absent of the summarization task, which prevents the model from maintaining its summarization capability. In contrast, the Multitask baseline involves the summarization and style transfer (via reconstruction training) processes at the same time and shows superior summarization performance even compared with NHG. This reveals that the unsupervised reconstruction task can indeed help improve the supervised summarization task. More importantly, we use two different types of corpora for the reconstruction task: one consists of headlines that are similar to the news data for the summarization task, and the other consists of text from novels that are entirely different from the news data. However, unsupervised reconstruction training on both types of data can contribute to the summarization task, which throws light on the potential future work in summarization by incorporating unsupervised learning as augmentation. We find that in Table TABREF59 TitleStylist-F achieves the best summarization performance. This implicates that, compared with the Multitask baseline where the two tasks share all parameters, specialization of layer normalization and encoder-attention parameters can make $G_S$ focus more on summarization. 
It is noteworthy that the summarization scores for TitleStylist are lower than TitleStylist-F but still comparable to NHG. This agrees with the fact that the $G_T$ branch more focuses on bringing in stylistic linguistic patterns into the generated summaries, thus the outputs would deviate from the pure summarization to some degree. However, the relevance degree of them remains close to the baseline NHG, which is the starting point we want to improve on. Later in the next section, we will further validate that these headlines are faithful to the new article through human evaluation. We also reported the perplexity (PPL) of the generated headlines to evaluate the language fluency, as shown in Table TABREF59. All outputs from baselines NHG and Multitask and our proposed TitleStylist show similar PPL compared with the test set (used in the fine-tuning stage) PPL 42.5, indicating that they are all fluent expressions for news headlines. <<</Automatic Evaluation Results>>> <<<Extension to Multi-Style>>> We progressively expand TitleStylist to include all three target styles (humor, romance, and clickbait) to demonstrate the flexibility of our model. That is, we simultaneously trained the summarization task on the headlines data and the DAE task on the three target style corpora. And we made the layer normalization and encoder-attention parameters specialized for these four styles (fact, humor, romance, and clickbait) and shared the other parameters. We compared this multi-style version, TitleStylist-Versatile, with the previously presented single-style counterpart, as shown in Table TABREF61. From this table, we see that the BLEU and ROUGE-L scores of TitleStylist-Versatile are comparable to TitleStylist for all three styles. Besides, we conducted another human study to determine the better headline between the two models in terms of attraction, and we allow human annotators to choose both options if they deem them as equivalent. The result is presented in the last column of Table TABREF61, which shows that the attraction of TitleStylist-Versatile outputs is competitive to TitleStylist. TitleStylist-Versatile thus generates multiple headlines in different styles altogether, which is a novel and efficient feature. <<</Extension to Multi-Style>>> <<</Results and Discussion>>> <<<Conclusion>>> We have proposed a new task of Stylistic Headline Generation (SHG) to emphasize explicit control of styles in headline generation for improved attraction. To this end, we presented a multitask framework to induce styles into summarization, and proposed the parameters sharing scheme to enhance both summarization and stylization capabilities. Through experiments, we validated our proposed TitleStylist can generate more attractive headlines than state-of-the-art HG models. <<</Conclusion>>> <<</Title>>>
{ "references": [ "human evaluation task about the style strength" ], "type": "extractive" }
2004.01980
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: How is fluency automatically evaluated? Context: <<<Title>>> Hooks in the Headline: Learning to Generate Headlines with Controlled Styles <<<Abstract>>> Current summarization systems only produce plain, factual headlines, but do not meet the practical needs of creating memorable titles to increase exposure. We propose a new task, Stylistic Headline Generation (SHG), to enrich the headlines with three style options (humor, romance and clickbait), in order to attract more readers. With no style-specific article-headline pair (only a standard headline summarization dataset and mono-style corpora), our method TitleStylist generates style-specific headlines by combining the summarization and reconstruction tasks into a multitasking framework. We also introduced a novel parameter sharing scheme to further disentangle the style from the text. Through both automatic and human evaluation, we demonstrate that TitleStylist can generate relevant, fluent headlines with three target styles: humor, romance, and clickbait. The attraction score of our model generated headlines surpasses that of the state-of-the-art summarization model by 9.68%, and even outperforms human-written references. <<</Abstract>>> <<<Introduction>>> Every good article needs a good title, which should not only be able to condense the core meaning of the text, but also sound appealing to the readers for more exposure and memorableness. However, currently even the best Headline Generation (HG) system can only fulfill the above requirement yet performs poorly on the latter. For example, in Figure FIGREF2, the plain headline by an HG model “Summ: Leopard Frog Found in New York City” is less eye-catching than the style-carrying ones such as “What's That Chuckle You Hear? It May Be the New Frog From NYC.” To bridge the gap between the practical needs for attractive headlines and the plain HG by the current summarization systems, we propose a new task of Stylistic Headline Generation (SHG). Given an article, it aims to generate a headline with a target style such as humorous, romantic, and click-baity. It has broad applications in reader-adapted title generation, slogan suggestion, auto-fill for online post headlines, and many others. SHG is a highly skilled creative process, and usually only possessed by expert writers. One of the most famous headlines in American publications, “Sticks Nix Hick Pix,” could be such an example. In contrast, the current best summarization systems are at most comparable to novice writers who provide a plain descriptive representation of the text body as the title BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. These systems usually use a language generation model that mixes styles with other linguistic patterns and inherently lacks a mechanism to control the style explicitly. More fundamentally, the training data comprise of a mixture of styles (e.g., the Gigaword dataset BIBREF5), obstructing the models from learning a distinct style. In this paper, we propose the new task SHG, to emphasize the explicit control of style in headline generation. We present a novel headline generation model, TitleStylist, to produce enticing titles with target styles including humorous, romantic, and click-baity. Our model leverages a multitasking framework to train both a summarization model on headline-article pairs, and a Denoising Autoencoder (DAE) on a style corpus. 
In particular, based on the transformer architecture BIBREF6, we use the style-dependent layer normalization and the style-guided encoder-attention to disentangle the language style factors from the text. This design enables us to use the shared content to generate headlines that are more relevant to the articles, as well as to control the style by plugging in a set of style-specific parameters. We validate the model on three tasks: humorous, romantic, and click-baity headline generation. Both automatic and human evaluations show that TitleStylist can generate headlines with the desired styles that appeal more to human readers, as in Figure FIGREF2. The main contributions of our paper are listed below: To the best of our knowledge, it is the first research on the generation of attractive news headlines with styles without any supervised style-specific article-headline paired data. Through both automatic and human evaluation, we demonstrated that our proposed TitleStylist can generate relevant, fluent headlines with three styles (humor, romance, and clickbait), and they are even more attractive than human-written ones. Our model can flexibly incorporate multiple styles, thus efficiently and automatically providing humans with various creative headline options for references and inspiring them to think out of the box. <<</Introduction>>> <<<Related Work>>> Our work is related to summarization and text style transfer. <<<Headline Generation as Summarization>>> Headline generation is a very popular area of research. Traditional headline generation methods mostly focus on the extractive strategies using linguistic features and handcrafted rules BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. To enrich the diversity of the extractive summarization, abstractive models were then proposed. With the help of neural networks, BIBREF14 proposed attention-based summarization (ABS) to make BIBREF15's framework of summarization more powerful. Many recent works extended ABS by utilizing additional features BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22. Other variants of the standard headline generation setting include headlines for community question answering BIBREF23, multiple headline generation BIBREF24, user-specific generation using user embeddings in recommendation systems BIBREF25, bilingual headline generation BIBREF26 and question-style headline generation BIBREF27. Only a few works have recently started to focus on increasing the attractiveness of generated headlines BIBREF28, BIBREF29. BIBREF28 focuses on controlling several features of the summary text such as text length, and the style of two different news outlets, CNN and DailyMail. These controls serve as a way to boost the model performance, and the CNN- and DailyMail-style control shows a negligible improvement. BIBREF29 utilized reinforcement learning to encourage the headline generation system to generate more sensational headlines via using the readers' comment rate as the reward, which however cannot explicitly control or manipulate the styles of headlines. BIBREF30 proposed a style transfer approach to transfer a non-clickbait headline into a clickbait one. This method requires paired news articles-headlines data for the target style; however, for many styles such as humor and romance, there are no available headlines. Our model does not have this limitation, thus enabling transferring to many more styles. 
<<</Headline Generation as Summarization>>> <<<Text Style Transfer>>> Our work is also related to text style transfer, which aims to change the style attribute of the text while preserving its content. First proposed by BIBREF31, it has achieved great progress in recent years BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, BIBREF38. However, all these methods demand a text corpus for the target style; however, in our case, it is expensive and technically challenging to collect news headlines with humor and romance styles, which makes this category of methods not applicable to our problem. <<</Text Style Transfer>>> <<</Related Work>>> <<<Methods>>> <<<Problem Formulation>>> The model is trained on a source dataset $S$ and target dataset $T$. The source dataset $S=\lbrace (\mathbf {a^{(i)}},\mathbf {h^{(i)}})\rbrace _{i=1}^N$ consists of pairs of a news article $\mathbf {a}$ and its plain headline $\mathbf {h}$. We assume that the source corpus has a distribution $P(A, H)$, where $A=\lbrace \mathbf {a^{(i)}}\rbrace _{i=1}^N$, and $H=\lbrace \mathbf {h^{(i)}}\rbrace _{i=1}^N$. The target corpus $T=\lbrace \mathbf {t^{(i)}}\rbrace _{i=1}^{M}$ comprises of sentences $\mathbf {t}$ written in a specific style (e.g., humor). We assume that it conforms to the distribution $P(T)$. Note that the target corpus $T$ only contains style-carrying sentences, not necessarily headlines — it can be just book text. Also no sentence $\mathbf {t}$ is paired with a news article. Overall, our task is to learn the conditional distribution $P(T|A)$ using only $S$ and $T$. This task is fully unsupervised because there is no sample from the joint distribution $P(A, T)$. <<</Problem Formulation>>> <<<Seq2Seq Model Architecture>>> For summarization, we adopt a sequence-to-sequence (Seq2Seq) model based on the Transformer architecture BIBREF6. As in Figure FIGREF8, it consists of a 6-layer encoder $E(\mathbf {\cdot }; \mathbf {\theta _E})$ and a 6-layer decoder $G(\mathbf {\cdot }; \mathbf {\theta _G})$ with a hidden size of 1024 and a feed-forward filter size of 4096. For better generation quality, we initialize with the MASS model BIBREF3. MASS is pretrained by masking a sentence fragment in the encoder, and then predicting it in the decoder on large-scale English monolingual data. This pretraining is adopted in the current state-of-the-art systems across various summarization benchmark tasks including HG. <<</Seq2Seq Model Architecture>>> <<<Multitask Training Scheme>>> To disentangle the latent style from the text, we adopt a multitask learning framework BIBREF39, training on summarization and DAE simultaneously (as shown in Figure FIGREF10). <<<Supervised Seq2Seq Training for @!START@$E_S$@!END@ and @!START@$G_S$@!END@>>> With the source domain dataset $S$, based on the encoder-decoder architecture, we can learn the conditional distribution $P(H|A)$ by training $\mathbf {z}_S=E_S(A)$ and $H_S=G_S(\mathbf {z_S})$ to solve the supervised Seq2Seq learning task, where $\mathbf {z_S}$ is the learned latent representation in the source domain. The loss function of this task is where $\mathbf {\theta _{E_S}}$ and $\mathbf {\theta _{G_S}}$ are the set of model parameters of the encoder and decoder in the source domain and $p(\mathbf {h}|\mathbf {a})$ denotes the overall probability of generating an output sequence $\mathbf {h}$ given the input article $\mathbf {a}$, which can be further expanded as follows: where $L$ is the sequence length. 
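The two displayed equations referred to in this subsection — the loss $\mathcal {L}_S$ and its token-level expansion — are not reproduced in this excerpt. A plausible reconstruction from the surrounding definitions, offered as an assumption about the standard maximum-likelihood form rather than a quotation of the paper, is:

$\mathcal {L}_S(\mathbf {\theta _{E_S}}, \mathbf {\theta _{G_S}}) = - \mathbb {E}_{(\mathbf {a}, \mathbf {h}) \sim S} \big [ \log p(\mathbf {h} \mid \mathbf {a}; \mathbf {\theta _{E_S}}, \mathbf {\theta _{G_S}}) \big ]$

$\log p(\mathbf {h} \mid \mathbf {a}) = \sum _{l=1}^{L} \log p(h_l \mid h_{<l}, \mathbf {z_S}), \qquad \mathbf {z_S} = E_S(\mathbf {a})$

where $L$ is the sequence length, matching the description above.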
<<</Supervised Seq2Seq Training for @!START@$E_S$@!END@ and @!START@$G_S$@!END@>>> <<<DAE Training for @!START@$\mathbf {\theta _{E_T}}$@!END@ and @!START@$\mathbf {\theta _{G_T}}$@!END@>>> For the target style corpus $T$, since we only have the sentence $\mathbf {t}$ without paired news articles, we train $\mathbf {z_T}=E_T(\mathbf {\tilde{t}})$ and $\mathbf {t}=G_T(\mathbf {z_T})$ by solving an unsupervised reconstruction learning task, where $\mathbf {z_T}$ is the learned latent representation in the target domain, and $\mathbf {\tilde{t}}$ is the corrupted version of $\mathbf {t}$ by randomly deleting or blanking some words and shuffling the word orders. To train the model, we minimize the reconstruction error $\mathcal {L}_T$: where $\mathbf {\theta _{E_T}}$ and $\mathbf {\theta _{G_T}}$ are the set of model parameters for the encoder and generator in the target domain. We train the whole model by jointly minimizing the supervised Seq2Seq training loss $\mathcal {L}_S$ and the unsupervised denoised auto-encoding loss $\mathcal {L}_T$ via multitask learning, so the total loss becomes where $\lambda $ is a hyper-parameter. <<</DAE Training for @!START@$\mathbf {\theta _{E_T}}$@!END@ and @!START@$\mathbf {\theta _{G_T}}$@!END@>>> <<</Multitask Training Scheme>>> <<<Parameter-Sharing Scheme>>> More constraints are necessary in the multitask training process. We aim to infer the conditional distribution as $ P(T|A)=G_T(E_S(A))$. However, without samples from $P(A, T)$, this is a challenging or even impossible task if $E_S$ and $E_T$, or $G_S$ and $G_T$ are completely independent of each other. Hence, we need to add some constraints to the network by relating $E_S$ and $E_T$, and $G_S$ and $G_T$. The simplest design is to share all parameters between $E_S$ and $E_T$, and apply the same strategy to $G_S$ and $G_T$. The intuition behind this design is that by exposing the model to both summarization task and style-carrying text reconstruction task, the model would acquire some sense of the target style while summarizing the article. However, to encourage the model to better disentangle the content and style of text and more explicitly learn the style contained in the target corpus $T$, we share all parameters of the encoder between two domains, i.e., between $E_S$ and $E_T$, whereas we divide the parameters of the decoder into two types: style-independent parameters $\mathbf {\theta _{\mathrm {ind}}}$ and style-dependent parameters $\mathbf {\theta _{\mathrm {dep}}}$. This means that only the style-independent parameters are shared between $G_S$ and $G_T$ while the style-dependent parameters are not. More specifically, the parameters of the layer normalization and encoder attention modules are made style-dependent as detailed below. <<<Type 1. Style Layer Normalization>>> Inspired by previous work on image style transfer BIBREF40, we make the scaling and shifting parameters for layer normalization in the transformer architecture un-shared for each style. This style layer normalization approach aims to transform a layer’s activation $\mathbf {x}$ into a normalized activation $\mathbf {z}$ specific to the style $s$: where $\mu $ and $\sigma $ are the mean and standard deviation of the batch of $\mathbf {x}$, and $\gamma _s$ and $\beta _s$ are style-specific parameters learned from data. Specifically, for the transformer decoder architecture, we use a style-specific self-attention layer normalization and final layer normalization for the source and target domains on all six decoder layers. 
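The normalization equation referenced above is presumably the conditional affine form $\mathbf {z} = \gamma _s \odot \frac{\mathbf {x} - \mu }{\sigma } + \beta _s$. A minimal PyTorch-style sketch of such a style layer normalization module follows; it is an illustration under the usual layer-norm convention (statistics over the hidden dimension), not the authors' TensorFlow implementation, and the style names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class StyleLayerNorm(nn.Module):
    """Layer normalization whose scale (gamma) and shift (beta) are style-specific.

    The normalization statistics are computed as usual; only gamma_s and beta_s
    are un-shared between the source (plain) and target (stylistic) decoders.
    """
    def __init__(self, hidden_size, styles=("fact", "humor"), eps=1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.ParameterDict(
            {s: nn.Parameter(torch.ones(hidden_size)) for s in styles})
        self.beta = nn.ParameterDict(
            {s: nn.Parameter(torch.zeros(hidden_size)) for s in styles})

    def forward(self, x, style):
        # x: (batch, seq_len, hidden_size); normalize over the hidden dimension.
        mu = x.mean(dim=-1, keepdim=True)
        sigma = x.std(dim=-1, keepdim=True)
        z = (x - mu) / (sigma + self.eps)
        return self.gamma[style] * z + self.beta[style]
```

Only these per-style gamma/beta tables (together with the style-guided encoder-attention query projection described next) differ between $G_S$ and $G_T$; the remaining decoder parameters stay shared.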
<<</Type 1. Style Layer Normalization>>> <<<Type 2. Style-Guided Encoder Attention>>> Our model architecture contains the attention mechanism, where the decoder infers the probability of the next word not only conditioned on the previous words but also on the encoded input hidden states. The attention patterns should be different for the summarization and the reconstruction tasks due to their different inherent nature. We insert this thinking into the model by introducing the style-guided encoder attention into the multi-head attention module, which is defined as follows: where $\mathbf {\mathrm {query}}$, $\mathbf {\mathrm {key}}$, and $\mathbf {\mathrm {value}}$ denote the triple of inputs into the multi-head attention module; $\mathbf {W_q^s}$, $\mathbf {W_k}$, and $\mathbf {W_v}$ denote the scaled dot-product matrix for affine transformation; $d_{\mathrm {model}}$ is the dimension of the hidden states. We specialize the dot-product matrix $\mathbf {W_q^s}$ of the query for different styles, so that $\mathbf {Q}$ can be different to induce diverse attention patterns. <<</Type 2. Style-Guided Encoder Attention>>> <<</Parameter-Sharing Scheme>>> <<</Methods>>> <<<Experiments>>> <<<Datasets>>> We compile a rich source dataset by combining the New York Times (NYT) and CNN, as well as three target style corpora on humorous, romantic, and click-baity text. The average sentence length in the NYT, CNN, Humor, Romance, and Clickbait datasets are 8.8, 9.2, 12.6, 11.6 and 8.7 words, respectively. <<<Source Dataset>>> The source dataset contains news articles paired with corresponding headlines. To enrich the training corpus, we combine two datasets: the New York Times (56K) and CNN (90K). After combining these two datasets, we randomly selected 3,000 pairs as the validation set and another 3,000 pairs as the test set. We first extracted the archival abstracts and headlines from the New York Times (NYT) corpus BIBREF41 and treat the abstracts as the news articles. Following the standard pre-processing procedures BIBREF42, we filtered out advertisement-related articles (as they are very different from news reports), resulting in 56,899 news abstracts-headlines pairs. We then add into our source set the CNN summarization dataset, which is widely used for training abstractive summarization models BIBREF43. We use the short summaries in the original dataset as the news abstracts and automatically parsed the headlines for each news from the dumped news web pages, and in total collected 90,236 news abstract-headline pairs. <<</Source Dataset>>> <<<Three Target Style Corpora>>> <<<Humor and Romance>>> For the target style datasets, we follow BIBREF44 to use humor and romance novel collections in BookCorpus BIBREF45 as the Humor and Romance datasets. We split the documents into sentences, tokenized the text, and collected 500K sentences as our datasets. <<</Humor and Romance>>> <<<Clickbait>>> We also tried to learn the writing style from the click-baity headlines since they have shown superior attraction to readers. Thus we used The Examiner - SpamClickBait News dataset, denoted as the Clickbait dataset. We collected 500K headlines for our use. Some examples from each style corpus are listed in Table TABREF32. <<</Clickbait>>> <<</Three Target Style Corpora>>> <<</Datasets>>> <<<Baselines>>> We compared the proposed TitleStylist against the following five strong baseline approaches. 
<<<Neural Headline Generation (NHG)>>> We train the state-of-the-art summarization model, MASS BIBREF3, on our collected news abstracts-headlines paired data. <<</Neural Headline Generation (NHG)>>> <<<Gigaword-MASS>>> We test an off-the-shelf headline generation model, MASS from BIBREF3, which is already trained on Gigaword, a large-scale headline generation dataset with around 4 million articles. <<</Gigaword-MASS>>> <<<Neural Story Teller (NST)>>> It breaks down the task into two steps, which first generates headlines from the aforementioned NHG model, then applies style shift techniques to generate style-specific headlines BIBREF46. In brief, this method uses the Skip-Thought model to encode a sentence into a representation vector and then manipulates its style by a linear transformation. Afterward, this transformed representation vector is used to initialize a language model pretrained on a style-specific corpus so that a stylistic headline can be generated. More details of this method can refer to the official website. <<</Neural Story Teller (NST)>>> <<<Fine-Tuned>>> We first train the NHG model as mentioned above, then further fine-tuned it on the target style corpus via DAE training. <<</Fine-Tuned>>> <<<Multitask>>> We share all parameters between $E_S$ and $E_T$, and between $G_S$ and $G_T$, and trained the model on both the summarization and DAE tasks. The model architecture is the same as NHG. <<</Multitask>>> <<</Baselines>>> <<<Evaluation Metrics>>> To evaluate the performance of the proposed TitleStylist in generating attractive headlines with styles, we propose a comprehensive twofold strategy of both automatic evaluation and human evaluation. <<<Setup of Human Evaluation>>> We randomly sampled 50 news abstracts from the test set and asked three native-speaker annotators for evaluation to score the generated headlines. Specifically, we conduct two tasks to evaluate on four criteria: (1) relevance, (2) attractiveness, (3) language fluency, and (4) style strength. For the first task, the human raters are asked to evaluate these outputs on the first three aspects, relevance, attractiveness, and language fluency on a Likert scale from 1 to 10 (integer values). For relevance, human annotators are asked to evaluate how semantically relevant the headline is to the news body. For attractiveness, annotators are asked how attractive the headlines are. For fluency, we ask the annotators to evaluate how fluent and readable the text is. After the collection of human evaluation results, we averaged the scores as the final score. In addition, we have another independent human evaluation task about the style strength – we present the generated headlines from TitleStylist and baselines to the human judges and let them choose the one that most conforms to the target style such as humor. Then we define the style strength score as the proportion of choices. <<</Setup of Human Evaluation>>> <<<Setup of Automatic Evaluation>>> Apart from the comprehensive human evaluation, we use automatic evaluation to measure the generation quality through two conventional aspects: summarization quality and language fluency. Note that the purpose of this two-way automatic evaluation is to confirm that the performance of our model is in an acceptable range. Good automatic evaluation performances are necessary proofs to compliment human evaluations on the model effectiveness. 
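Before turning to the individual automatic metrics below, a small sketch of how the human scores collected above can be aggregated may help make the protocol concrete; the data layout and names are illustrative assumptions, not the authors' scripts.

```python
from collections import Counter

def average_likert(scores_by_annotator):
    """Average 1-10 Likert ratings (relevance, attraction, or fluency)
    over all annotators and all sampled headlines."""
    flat = [s for annotator in scores_by_annotator for s in annotator]
    return sum(flat) / len(flat)

def style_strength(choices, system):
    """Style strength of `system`: the proportion of comparisons in which
    human judges picked its headline as most conforming to the target style."""
    counts = Counter(choices)
    return counts[system] / len(choices)

# Example: three annotators rating fluency for a few headlines, and judges'
# picks for the most humorous headline among competing systems.
fluency = average_likert([[8, 7, 9], [7, 7, 8], [9, 8, 8]])
humor_strength = style_strength(
    ["TitleStylist", "NHG", "TitleStylist", "Multitask", "TitleStylist"],
    "TitleStylist")
```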
<<<Summarization Quality>>> We use the standard automatic evaluation metrics for summarization with the original headlines as the reference: BLEU BIBREF47, METEOR BIBREF48, ROUGE BIBREF49 and CIDEr BIBREF50. For ROUGE, we used the Files2ROUGE toolkit, and for other metrics, we used the pycocoeval toolkit. <<</Summarization Quality>>> <<<Language Fluency>>> We fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs. <<</Language Fluency>>> <<</Setup of Automatic Evaluation>>> <<</Evaluation Metrics>>> <<<Experimental Details>>> We used the fairseq code base BIBREF52. During training, we use Adam optimizer with an initial learning rate of $5\times 10^{-4}$, and the batch size is set as 3072 tokens for each GPU with the parameters update frequency set as 4. For the random corruption for DAE training, we follow the standard practice to randomly delete or blank the word with a uniform probability of $0.2$, and randomly shuffled the word order within 5 tokens. All datasets are lower-cased. $\lambda $ is set as 0.5 in experiments. For each iteration of training, we randomly draw a batch of data either from the source dataset or from the target style corpus, and the sampling strategy follows the uniform distribution with the probability being equal to $\lambda $. <<</Experimental Details>>> <<</Experiments>>> <<<Results and Discussion>>> <<<Human Evaluation Results>>> The human evaluation is to have a comprehensive measurement of the performances. We conduct experiments on four criteria, relevance, attraction, fluency, and style strength. We summarize the human evaluation results on the first three criteria in Table TABREF51, and the last criteria in Table TABREF57. Note that through automatic evaluation, the baselines NST, Fine-tuned, and Gigaword-MASS perform poorer than other methods (in Section SECREF58), thereby we removed them in human evaluation to save unnecessary work for human raters. <<<Relevance>>> We first look at the relevance scores in Table TABREF51. It is interesting but not surprising that the pure summarization model NHG achieves the highest relevance score. The outputs from NHG are usually like an organic reorganization of several keywords in the source context (as shown in Table TABREF52), thus appearing most relevant. It is noteworthy that the generated headlines of our TitleStylist for all three styles are close to the original human-written headlines in terms of relevance, validating that our generation results are qualified in this aspect. Another finding is that more attractive or more stylistic headlines would lose some relevance since they need to use more words outside the news body for improved creativity. <<</Relevance>>> <<<Attraction>>> In terms of attraction scores in Table TABREF51, we have three findings: (1) The human-written headlines are more attractive than those from NHG, which agrees with our observation in Section SECREF1. (2) Our TitleStylist can generate more attractive headlines over the NHG and Multitask baselines for all three styles, demonstrating that adapting the model to these styles could improve the attraction and specialization of some parameters in the model for different styles can further enhance the attraction. (3) Adapting the model to the “Clickbait” style could create the most attractive headlines, even out-weighting the original ones, which agrees with the fact that click-baity headlines are better at drawing readers' attention. 
To be noted, although we learned the “Clickbait” style into our summarization system, we still made sure that we are generating relevant headlines instead of too exaggerated ones, which can be verified by our relevance scores. <<</Attraction>>> <<<Fluency>>> The human-annotated fluency scores in Table TABREF51 verified that our TitleStylist generated headlines are comparable or superior to the human-written headlines in terms of readability. <<</Fluency>>> <<<Style Strength>>> We also validated that our TitleStylist can carry more styles compared with the Multitask and NHG baselines by summarizing the percentage of choices by humans for the most humorous or romantic headlines in Table TABREF57. <<</Style Strength>>> <<</Human Evaluation Results>>> <<<Automatic Evaluation Results>>> Apart from the human evaluation of the overall generation quality on four criteria, we also conducted a conventional automatic assessment to gauge only the summarization quality. This evaluation does not take other measures such as the style strength into consideration, but it serves as important complimentary proof to ensure that the model has an acceptable level of summarization ability. Table TABREF59 summarizes the automatic evaluation results of our proposed TitleStylist model and all baselines. We use the summarization-related evaluation metrics, i.e., BLEU, ROUGE, CIDEr, and METEOR, to measure how relevant the generated headlines are to the news articles, to some extent, by comparing them to the original human-written headlines. In Table TABREF59, the first row “NHG” shows the performance of the current state-of-the-art summarization model on our data, and Table TABREF52 provides two examples of its generation output. Our ultimate goal is to generate more attractive headlines than these while maintaining relevance to the news body. From Table TABREF59, the baseline Gigaword-MASS scored worse than NHG, revealing that directly applying an off-the-shelf headline generation model to new in-domain data is not feasible, although this model has been trained on more than 20 times larger dataset. Both NST and Fine-tuned baselines present very poor summarization performance, and the reason could be that both of them cast the problem into two steps: summarization and style transfer, and the latter step is absent of the summarization task, which prevents the model from maintaining its summarization capability. In contrast, the Multitask baseline involves the summarization and style transfer (via reconstruction training) processes at the same time and shows superior summarization performance even compared with NHG. This reveals that the unsupervised reconstruction task can indeed help improve the supervised summarization task. More importantly, we use two different types of corpora for the reconstruction task: one consists of headlines that are similar to the news data for the summarization task, and the other consists of text from novels that are entirely different from the news data. However, unsupervised reconstruction training on both types of data can contribute to the summarization task, which throws light on the potential future work in summarization by incorporating unsupervised learning as augmentation. We find that in Table TABREF59 TitleStylist-F achieves the best summarization performance. This implicates that, compared with the Multitask baseline where the two tasks share all parameters, specialization of layer normalization and encoder-attention parameters can make $G_S$ focus more on summarization. 
It is noteworthy that the summarization scores for TitleStylist are lower than TitleStylist-F but still comparable to NHG. This agrees with the fact that the $G_T$ branch more focuses on bringing in stylistic linguistic patterns into the generated summaries, thus the outputs would deviate from the pure summarization to some degree. However, the relevance degree of them remains close to the baseline NHG, which is the starting point we want to improve on. Later in the next section, we will further validate that these headlines are faithful to the new article through human evaluation. We also reported the perplexity (PPL) of the generated headlines to evaluate the language fluency, as shown in Table TABREF59. All outputs from baselines NHG and Multitask and our proposed TitleStylist show similar PPL compared with the test set (used in the fine-tuning stage) PPL 42.5, indicating that they are all fluent expressions for news headlines. <<</Automatic Evaluation Results>>> <<<Extension to Multi-Style>>> We progressively expand TitleStylist to include all three target styles (humor, romance, and clickbait) to demonstrate the flexibility of our model. That is, we simultaneously trained the summarization task on the headlines data and the DAE task on the three target style corpora. And we made the layer normalization and encoder-attention parameters specialized for these four styles (fact, humor, romance, and clickbait) and shared the other parameters. We compared this multi-style version, TitleStylist-Versatile, with the previously presented single-style counterpart, as shown in Table TABREF61. From this table, we see that the BLEU and ROUGE-L scores of TitleStylist-Versatile are comparable to TitleStylist for all three styles. Besides, we conducted another human study to determine the better headline between the two models in terms of attraction, and we allow human annotators to choose both options if they deem them as equivalent. The result is presented in the last column of Table TABREF61, which shows that the attraction of TitleStylist-Versatile outputs is competitive to TitleStylist. TitleStylist-Versatile thus generates multiple headlines in different styles altogether, which is a novel and efficient feature. <<</Extension to Multi-Style>>> <<</Results and Discussion>>> <<<Conclusion>>> We have proposed a new task of Stylistic Headline Generation (SHG) to emphasize explicit control of styles in headline generation for improved attraction. To this end, we presented a multitask framework to induce styles into summarization, and proposed the parameters sharing scheme to enhance both summarization and stylization capabilities. Through experiments, we validated our proposed TitleStylist can generate more attractive headlines than state-of-the-art HG models. <<</Conclusion>>> <<</Title>>>
{ "references": [ "fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs" ], "type": "extractive" }
1911.03597
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What multilingual parallel data is used for training proposed model? Context: <<<Title>>> Zero-Shot Paraphrase Generation with Multilingual Language Models <<<Abstract>>> Leveraging multilingual parallel texts to automatically generate paraphrases has drawn much attention as size of high-quality paraphrase corpus is limited. Round-trip translation, also known as the pivoting method, is a typical approach to this end. However, we notice that the pivoting process involves multiple machine translation models and is likely to incur semantic drift during the two-step translations. In this paper, inspired by the Transformer-based language models, we propose a simple and unified paraphrasing model, which is purely trained on multilingual parallel data and can conduct zero-shot paraphrase generation in one step. Compared with the pivoting approach, paraphrases generated by our model is more semantically similar to the input sentence. Moreover, since our model shares the same architecture as GPT (Radford et al., 2018), we are able to pre-train the model on large-scale unparallel corpus, which further improves the fluency of the output sentences. In addition, we introduce the mechanism of denoising auto-encoder (DAE) to improve diversity and robustness of the model. Experimental results show that our model surpasses the pivoting method in terms of relevance, diversity, fluency and efficiency. <<</Abstract>>> <<<Introduction>>> Paraphrasing is to express the same meaning using different expressions. Paraphrase generation plays an important role in various natural language processing (NLP) tasks such as response diversification in dialogue system, query reformulation in information retrieval, and data augmentation in machine translation. Recently, models based on Seq2Seq learning BIBREF1 have achieved the state-of-the-art results on paraphrase generation. Most of these models BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 focus on training the paraphrasing models based on a paraphrase corpus, which contains a number of pairs of paraphrases. However, high-quality paraphrases are usually difficult to acquire in practice, which becomes the major limitation of these methods. Therefore, we focus on zero-shot paraphrase generation approach in this paper, which aims to generate paraphrases without requiring a paraphrase corpus. A natural choice is to leverage the bilingual or multilingual parallel data used in machine translation, which are of great quantity and quality. The basic assumption is that if two sentences in one language (e.g., English) have the same translation in another language (e.g., French), they are assumed to have the same meaning, i.e., they are paraphrases of each other. Therefore, one typical solution for paraphrasing in one language is to pivot over a translation in another language. Specifically, it is implemented as the round-trip translation, where the input sentence is translated into a foreign sentence, then back-translated into a sentence in the same language as input BIBREF7. The process is shown in Figure FIGREF1. Apparently, two machine translation systems (English$\rightarrow $French and French$\leftarrow $English) are needed to conduct the generation of a paraphrase. Although the pivoting approach works in general, there are several intrinsic defects. 
First, the round-trip system can hardly explore all the paths of paraphrasing, since it is pivoted through the finite intermedia outputs of a translation system. More formally, let $Z$ denote the meaning representation of a sentence $X$, and finding paraphrases of $X$ can be treated as sampling another sentence $Y$ conditioning on the representation $Z$. Ideally, paraphrases should be generated by following $P(Y|X) = \int _{Z} P(Y|Z)P(Z|X)dZ$, which is marginalized over all possible values of $Z$. However, in the round-trip translation, only one or several $Z$s are sampled from the machine translation system $P(Z|X)$, which can lead to an inaccurate approximation of the whole distribution and is prone to the problem of semantic drift due to the sampling variances. Second, the results are determined by the pre-existing translation systems, and it is difficult to optimize the pipeline end-to-end. Last, the system is not efficient especially at the inference stage, because it needs two rounds of translation decoding. To address these issues, we propose a single-step zero-shot paraphrase generation model, which can be trained on machine translation corpora in an end-to-end fashion. Unlike the pivoting approach, our proposed model does not involve explicit translation between multiple languages. Instead, it directly learns the paraphrasing distribution $P(Y|X)$ from the parallel data sampled from $P(Z|X)$ and $P(Y|Z)$. Specifically, we build a Transformer-based BIBREF8 language model, which is trained on the concatenated bilingual parallel sentences with language indicators. At inference stage, given a input sentence in a particular language, the model is guided to generate sentences in the same language, which are deemed as paraphrases of the input. Our model is simple and compact, and can empirically reduce the risk of semantic drift to a large extent. Moreover, we can initialize our model with generative pre-training (GPT) BIBREF0 on monolingual data, which can benefit the generation in low-resource languages. Finally, we borrow the idea of denoising auto-encoder (DAE) to further enhance robustness in paraphrase generation. We conduct experiments on zero-shot paraphrase generation task, and find that the proposed model significantly outperforms the pivoting approach in terms of both automatic and human evaluations. Meanwhile, the training and inference cost are largely reduced compared to the pivot-based methods which involves multiple systems. <<</Introduction>>> <<<Methodology>>> <<<Transformer-based Language Model>>> Transformer-based language model (TLM) is a neural language model constructed with a stack of Transformer decoder layers BIBREF8. Given a sequence of tokens, TLM is trained with maximizing the likelihood: where $X=[x_1,x_2,\ldots ,x_n]$ is a sentence in a language (e.g., English), and $\theta $ denotes the parameters of the model. Each Transformer layer is composed of multi-head self-attention, layer normalization and a feed-forward network. We refer reader to the original paper for details of each component. Formally, the decoding probability is given by where $x_i$ denotes the token embedding, $p_i$ denote the positional embedding and $h_i$ denotes the output states of the $i$-th token, and $W_e$ and $W_o$ are the input and output embedding matrices. Although TLM is normally employed to model monolingual sequences, there is no barrier to utilize TLM to model sequences in multiple languages. 
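The displayed equations for the TLM training objective and the decoding probability do not survive extraction in this excerpt. A plausible reconstruction from the definitions given around them — an assumption about the standard GPT-style formulation, not a quotation of the paper — is:

$\mathcal {L}(\theta ) = \sum _{i=1}^{n} \log P(x_i \mid x_1, \ldots , x_{i-1}; \theta )$

$P(x_i \mid x_{<i}) = \operatorname{softmax}(W_o h_{i-1}), \qquad [h_1, \ldots , h_n] = \operatorname{Transformer}(x_1 + p_1, \ldots , x_n + p_n)$

where each $x_j$ is the $W_e$-embedded token and $h_j$, $p_j$, $W_e$, $W_o$ are as defined in the preceding sentence.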
In this paper, inspired by BIBREF9, we concatenate pairs of sentences from bilingual parallel corpora (e.g., English$\rightarrow $French) as training instances to the model. Let $X$ and $Y$ denote the parallel sentences in two different languages, the training objective becomes This bilingual language model can be regarded as the decoder-only model compared to the traditional encoder-decoder model. It has been proved to work effectively on monolingual text-to-text generation tasks such as summarization BIBREF10. The advantages of such architecture include less model parameters, easier optimization and potential better performance for longer sequences. Furthermore, it naturally integrates with language models pre-training on monolingual corpus. For each input sequence of concatenated sentences, we add special tokens $\langle $bos$\rangle $ and $\langle $eos$\rangle $ at the beginning and the end, and $\langle $delim$\rangle $ in between the sentences. Moreover, at the beginning of each sentence, we add a special token as its language identifier, for instance, $\langle $en$\rangle $ for English, $\langle $fr$\rangle $ for French. One example of English$\rightarrow $French training sequence is “$\langle $bos$\rangle $ $\langle $en$\rangle $ cat sat on the mat $\langle $delim$\rangle $ $\langle $fr$\rangle $ chat assis sur le tapis $\langle $eos$\rangle $". At inference stage, the model predicts the next word as the conventional auto-regressive model: <<</Transformer-based Language Model>>> <<<Zero-shot Paraphrase Generation>>> We train the bilingual language model on multiple bilingual corpora, for example, English$\leftrightarrow $French and German$\leftrightarrow $Chinese. Once the language model has been trained, we can conduct zero-shot paraphrase generation based on the model. Specifically, given an input sentence that is fed into the language model, we set the output language identifier the same as input, and then simply conduct decoding to generate paraphrases of the input sentence. Figure FIGREF2 illustrates the training and decoding process of our model. In the training stage, the model is trained to sequentially generate the input sentence and its translation in a specific language. Training is conducted in the way of teacher-forcing. In the decoding stage, after an English sentence “$\langle $bos$\rangle $ $\langle $en$\rangle $ cat sat on the mat $\langle $delim$\rangle $" is fed to the model, we intentionally set the output language identifier as “$\langle $en$\rangle $", in order to guide the model to continue to generate English words. At the same time, since the model has been trained on translation corpus, it implicitly learns to keep the semantic meaning of the output sentence the same as the input. Accordingly, the model will probably generate the paraphrases of the input sentence, such as “the cat sitting on the carpet $\langle $eos$\rangle $". It should be noted our model can obviously be trained on parallel paraphrase data without any modification. But in this paper, we will mainly focus on the research and evaluation in the zero-shot learning setting. In the preliminary experiments of zero-shot paraphrasing, we find the model does not perform consistently well and sometimes fails to generate the words in the correct language as indicated by the language identifier. Similar phenomenon has been observed in the research of zero-shot neural machine translation BIBREF11, BIBREF12, BIBREF13, which is referred as the degeneracy problem by BIBREF13. 
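A minimal sketch of how such training sequences can be assembled and how the same-language identifier steers zero-shot decoding is given below; the `predict_next` sampling interface, the tokenizer, and greedy truncation at `<eos>` are placeholders rather than the paper's TensorFlow implementation.

```python
def build_training_sequence(src_tokens, tgt_tokens, src_lang, tgt_lang):
    """Concatenate one parallel pair into a single LM training sequence:
    <bos> <src_lang> src ... <delim> <tgt_lang> tgt ... <eos>"""
    return (["<bos>", f"<{src_lang}>"] + src_tokens
            + ["<delim>", f"<{tgt_lang}>"] + tgt_tokens + ["<eos>"])

def zero_shot_paraphrase(model, tokenize, detokenize, sentence, lang, max_len=50):
    """Feed the input sentence, then force the *same* language identifier
    after <delim>, so the model keeps generating in the input language."""
    prefix = ["<bos>", f"<{lang}>"] + tokenize(sentence) + ["<delim>", f"<{lang}>"]
    output = []
    for _ in range(max_len):
        next_token = model.predict_next(prefix + output)  # assumed sampling call
        if next_token == "<eos>":
            break
        output.append(next_token)
    return detokenize(output)

# e.g. build_training_sequence(["cat", "sat", "on", "the", "mat"],
#                              ["chat", "assis", "sur", "le", "tapis"], "en", "fr")
```

Forcing the identifier in this way is also exactly where the language-consistency failures mentioned above can arise, since nothing else constrains the output vocabulary.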
To address these problems in zero-shot paraphrase generation, we propose several techniques to improve the quality and diversity of the model as follows. <<<Language Embeddings>>> The language identifier prior to the sentence does not always guarantee the language of the sequences generated by the model. In order to keep the language consistency, we introduce language embeddings, where each language is assigned a specific vector representation. Supposing that the language embedding for the $i$-th token in a sentence is $a_i$, we concatenate the language embedding with the Transformer output states and feed it to the softmax layer for predicting each token: We empirically demonstrate that the language embedding added to each tokens can effectively guide the model to generate sentences in the required language. Note that we still let the model to learn the output distribution for each language rather than simply restricting the vocabularies of output space. This offers flexibility to handle coding switching cases commonly seen in real-world data, e.g., English words could also appear in French sentences. <<</Language Embeddings>>> <<<Pre-Training on Monolingual Corpora>>> Language model pre-training has shown its effectiveness in language generation tasks such as machine translation, text summarization and generative question answering BIBREF14, BIBREF15, BIBREF16. It is particularly helpful to the low/zero-resource tasks since the knowledge learned from large-scale monolingual corpus can be transferred to downstream tasks via the pre-training-then-fine-tuning approach. Since our model for paraphrase generation shares the same architecture as the language model, we are able to pre-train the model on massive monolingual data. Pre-training on monolingual data is conducted in the same way as training on parallel data, except that each training example contains only one sentence with the beginning/end of sequence tokens and the language identifier. The language embeddings are also employed. The pre-training objective is the same as Equation (DISPLAY_FORM4). In our experiments, we first pre-train the model on monolingual corpora of multiple languages respectively, and then fine-tune the model on parallel corpora. <<</Pre-Training on Monolingual Corpora>>> <<<Denoising Auto-Encoder>>> We adopt the idea of denoising auto-encoder (DAE) to further improve the robustness of our paraphrasing model. DAE is originally proposed to learn intermediate representations that are robust to partial corruption of the inputs in training auto-encoders BIBREF17. Specifically, the initial input $X$ is first partially corrupted as $\tilde{X}$, which can be treated as sampling from a noise distribution $\tilde{X}\sim {q(\tilde{X}|X)}$. Then, an auto-encoder is trained to recover the original $X$ from the noisy input $\tilde{X}$ by minimizing the reconstruction error. In the applications of text generation BIBREF18 and machine translation BIBREF19, DAE has shown to be able to learn representations that are more robust to input noises and also generalize to unseen examples. Inspired by BIBREF19, we directly inject three different types of noises into input sentence that are commonly encountered in real applications. 1) Deletion: We randomly delete 1% tokens from source sentences, for example, “cat sat on the mat $\mapsto $ cat on the mat." 2) Insertion: We insert a random token into source sentences in 1% random positions, for example, “cat sat on the mat $\mapsto $ cat sat on red the mat." 
3) Reordering: We randomly swap 1% tokens in source sentences, and keep the distance between tokens being swapped within 5. “cat sat on the mat $\mapsto $ mat sat on the cat." By introducing such noises into the input sentences while keeping the target sentences clean in training, our model can be more stable in generating paraphrases and generalisable to unseen sentences in the training corpus. The training objective with DAE becomes Once the model is trained, we generate paraphrases of a given sentence based on $P(Y|X;\theta )$. <<</Denoising Auto-Encoder>>> <<</Zero-shot Paraphrase Generation>>> <<</Methodology>>> <<<Experiments>>> <<<Datasets>>> We adopt the mixture of two multilingual translation corpus as our training data: MultiUN BIBREF20 and OpenSubtitles BIBREF21. MultiUN consists of 463,406 official documents in six languages, containing around 300M words for each language. OpenSubtitles is a corpus consisting of movie and TV subtitles, which contains 2.6B sentences over 60 languages. We select four shared languages of the two corpora: English, Spanish, Russian and Chinese. Statistics of the training corpus are shown in Table TABREF14. Sentences are tokenized by Wordpiece as in BERT. A multilingual vocabulary of 50K tokens is used. For validation and testing, we randomly sample 10000 sentences respectively from each language pair. The rest data are used for training. For monolingual pre-training, we use English Wikipedia corpus, which contains 2,500M words. <<</Datasets>>> <<<Experimental Settings>>> We implement our model in Tensorflow BIBREF22. The size of our Transformer model is identical to BERT-base BIBREF23. The model is constituted by 12 layers of Transformer blocks. Number of dimension of token embedding, position embedding and transformer hidden state are 768, while that of states in position-wise feed-forward networks are 3072. The number of attention heads is 12. Models are train using Adam optimization BIBREF24 with a learning rate up to $1e-4$, $\beta _1=0.9$, $\beta _2=0.999$ and $L2$ weight decay of 0.01. We use top-k truncated random sampling strategy for inference that only sample from k candidate words with highest probabilities. Throughout our experiments, we train and evaluate two models for paraphrase generation: the bilingual model and the multilingual model. The bilingual models are trained only with English$\leftrightarrow $Chinese, while the multilingual models are trained with all the data between the four languages. The round-trip translation baseline is based on the Transformer-based neural translation model. <<</Experimental Settings>>> <<<Automatic Evaluation>>> We evaluate the relevance between input and generated paraphrase as well as the diversity among multiple generated paraphrases from the same input. For relevance, we use the cosine similarity between the sentential representations BIBREF25. Specifically, we use the Glove-840B embeddings BIBREF26 for word representation and Vector Extrema BIBREF25 for sentential representation. For generation diversity, we employ two evaluation metrics: Distinct-2 and inverse Self-BLEU (defined as: $1-$Self-BLEU) BIBREF27. Larger values of Distinct-2 and inverse Self-BLEU indicate higher diversity of the generation. For each model, we draw curves in Figure FIGREF15 with the aforementioned metrics as coordinates, and each data-point is obtained at a specific sampling temperature. 
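A sketch of the two diversity metrics as they are commonly computed is given below; the exact tokenization and BLEU configuration used in the paper are not specified in this text, so the NLTK-based version is an assumption (it expects at least two sampled paraphrases per input).

```python
from itertools import chain
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def distinct_2(candidates):
    """Distinct-2: unique bigrams divided by total bigrams over all candidates
    (each candidate is a list of tokens)."""
    bigrams = list(chain.from_iterable(
        zip(tokens, tokens[1:]) for tokens in candidates))
    return len(set(bigrams)) / max(len(bigrams), 1)

def inverse_self_bleu(candidates):
    """1 - Self-BLEU: each candidate is scored against the remaining candidates
    as references; higher values mean more diverse generations."""
    smooth = SmoothingFunction().method1
    scores = [
        sentence_bleu([ref for j, ref in enumerate(candidates) if j != i],
                      hyp, smoothing_function=smooth)
        for i, hyp in enumerate(candidates)
    ]
    return 1.0 - sum(scores) / len(scores)

# `candidates` holds several paraphrases sampled for the same input sentence;
# computing these values at several sampling temperatures traces the curves.
```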
Since a good paraphrasing model should generate both relevant and diverse paraphrases, the model with curve lying towards the up-right corner is regarded as with good performance. <<<Comparison with Baseline>>> First we compare our models with the conventional pivoting method, i.e., round-trip translation. As shown in Figure FIGREF15 (a)(b), either the bilingual or the multilingual model is better than the baseline in terms of relevance and diversity in most cases. In other words, with the same generation diversity (measured by both Distinct-2 and Self-BLEU), our models can generate paraphrase with more semantically similarity to the input sentence. Note that in Figure FIGREF15 (a), there is a cross point between the curve of the bilingual model and the baseline curve when relevance is around 0.71. We particularly investigate generated paraphrases around this point and find that the baseline actually achieves better relevance when Distinct-2 is at a high level ($>$0.3). It means our bilingual model is semantically drifting faster than the baseline model as the Distinct-2 diversity increases. The round-trip translation performs two-round of supervised translations, while the zero-shot paraphrasing performs single-round unsupervised `translation' (paraphrasing). We suspect that the unsupervised paraphrasing can be more sensitive to the decoding strategy. It also implies the latent, language-agnostic representation may be not well learned in our bilingual model. While on the other hand, our multilingual model alleviate this insufficiency. We further verify and analyze it as follows. <<</Comparison with Baseline>>> <<<Multilingual Models>>> As mentioned above, our bilingual model can be unstable in some cases due to the lack of a well-learned language-agnostic semantic representation. A natural method is to introduce multilingual corpus, which consists of various translation directions. Training over multilingual corpus forces the model to decouple the language type and semantic representation. Empirical results shows that our multilingual model performs significantly better than the bilingual model. The red and blue curves in Figure FIGREF15 (a)(b) demonstrates a great improvement of our multilingual model over the bilingual model. In addition, the multilingual model also significantly outperforms the baseline in the setting with the reasonable relevance scores. <<</Multilingual Models>>> <<<Monolingual Pre-Training>>> As shown in Figure FIGREF15 (a)(b), the model with language model pre-training almost performs equally to its contemporary without pre-training. However, evaluations on fluency uncover the value of pre-training. We evaluate a group of models over our test set in terms of fluency, using a n-grams language model trained on 14k public domain books. As depicted in Table TABREF25, models with language model pre-training stably achieves greater log-probabilities than the model without pre-training. Namely, language model pre-training brings better fluency. <<</Monolingual Pre-Training>>> <<</Automatic Evaluation>>> <<<Human Evaluation>>> 200 sentences are sampled from our test set for human evaluation. The human evaluation guidance generally follows that of BIBREF5 but with a compressed scoring range from [1, 5] to [1, 4]. We recruit five human annotators to evaluate models in semantic relevance and fluency. A test example consists of one input sentence, one generated sentence from baseline model and one generated sentence from our model. 
We randomly permute a pair of generated sentences to reduce annotators' bias on a certain model. Each example is evaluated by two annotators. As shown in Table TABREF28, our method outperforms the baseline in both relevance and fluency significantly. We further calculate agreement (Cohen's kappa) between two annotators. Both round-trip translation and our method performs well as to fluency. But the huge gap of relevance between the two systems draw much attention of us. We investigate the test set in details and find that round-trip approach indeed generate more noise as shown in case studies. <<</Human Evaluation>>> <<<Case Studies>>> We further study some generated cases from different models. All results in Table TABREF30 are generated over our test set using randomly sampling. For both baseline and multilingual model, we tune their sampling temperatures to control the Distinct-2 and the inverse Self-BLEU at 0.31 and 0.47 respectively. In the case studies, we find that our method usually generates sentences with better relevance to source inputs, while the round-trip translation method can sometimes run into serious semantic drift. In the second case, our model demonstrates a good feature that it maintains the meaning and even a proper noun $guide$ unchanged while modifies the source sentence by both changing and reordering words. This feature may be introduced by DAE perturbation strategies which improves model's robustness and diversity simultaneously. These results evidence that our methods outperforms the baseline in both relevance and diversity. <<</Case Studies>>> <<</Experiments>>> <<<Related Work>>> Generating paraphrases based on deep neural networks, especially Seq2Seq models, has become the mainstream approach. A majority of neural paraphrasing models tried to improve generation quality and diversity with high-quality paraphrase corpora. BIBREF2 starts a deep learning line of paraphrase generation through introducing stacked residual LSTM network. A word constraint model proposed by BIBREF3 improves both generation quality and diversity. BIBREF4 adopts variational auto-encoder to further improve generation diversity. BIBREF5 utilize neural reinforcement learning and adversarial training to promote generation quality. BIBREF6 decompose paraphrase generation into phrase-level and sentence-level. Several works tried to generate paraphrases from monolingual non-parallel or translation corpora. BIBREF28 exploits Markov Network model to extract paraphrase tables from monolingual corpus. BIBREF29, BIBREF30 and BIBREF31 create paraphrase corpus through clustering and aligning paraphrases from crawled articles or headlines. With parallel translation corpora, pivoting approaches such round-trip translation BIBREF7 and back-translation BIBREF32 are explored. However, to the best knowledge of us, none of these paraphrase generation models has been trained directly from parallel translation corpora as a single-round end-to-end model. <<</Related Work>>> <<<Conclusions>>> In this work, we have proposed a Transformer-based model for zero-shot paraphrase generation, which can leverage huge amount of off-the-shelf translation corpora. Moreover, we improve generation fluency of our model with language model pre-training. Empirical results from both automatic and human evaluation demonstrate that our model surpasses the conventional pivoting approaches in terms of relevance, diversity, fluency and efficiency. Nevertheless, there are some interesting directions to be explored. 
For instance, how to obtain a better latent semantic representation with multi-modal data and how to further improve the generation diversity without sacrificing relevance. We plan to strike these challenging yet valuable problems in the future. <<</Conclusions>>> <<</Title>>>
{ "references": [ "MultiUN BIBREF20,OpenSubtitles BIBREF21" ], "type": "extractive" }
1911.03597
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: How much better are results of proposed model compared to pivoting method? Context: <<<Title>>> Zero-Shot Paraphrase Generation with Multilingual Language Models <<<Abstract>>> Leveraging multilingual parallel texts to automatically generate paraphrases has drawn much attention as size of high-quality paraphrase corpus is limited. Round-trip translation, also known as the pivoting method, is a typical approach to this end. However, we notice that the pivoting process involves multiple machine translation models and is likely to incur semantic drift during the two-step translations. In this paper, inspired by the Transformer-based language models, we propose a simple and unified paraphrasing model, which is purely trained on multilingual parallel data and can conduct zero-shot paraphrase generation in one step. Compared with the pivoting approach, paraphrases generated by our model is more semantically similar to the input sentence. Moreover, since our model shares the same architecture as GPT (Radford et al., 2018), we are able to pre-train the model on large-scale unparallel corpus, which further improves the fluency of the output sentences. In addition, we introduce the mechanism of denoising auto-encoder (DAE) to improve diversity and robustness of the model. Experimental results show that our model surpasses the pivoting method in terms of relevance, diversity, fluency and efficiency. <<</Abstract>>> <<<Introduction>>> Paraphrasing is to express the same meaning using different expressions. Paraphrase generation plays an important role in various natural language processing (NLP) tasks such as response diversification in dialogue system, query reformulation in information retrieval, and data augmentation in machine translation. Recently, models based on Seq2Seq learning BIBREF1 have achieved the state-of-the-art results on paraphrase generation. Most of these models BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 focus on training the paraphrasing models based on a paraphrase corpus, which contains a number of pairs of paraphrases. However, high-quality paraphrases are usually difficult to acquire in practice, which becomes the major limitation of these methods. Therefore, we focus on zero-shot paraphrase generation approach in this paper, which aims to generate paraphrases without requiring a paraphrase corpus. A natural choice is to leverage the bilingual or multilingual parallel data used in machine translation, which are of great quantity and quality. The basic assumption is that if two sentences in one language (e.g., English) have the same translation in another language (e.g., French), they are assumed to have the same meaning, i.e., they are paraphrases of each other. Therefore, one typical solution for paraphrasing in one language is to pivot over a translation in another language. Specifically, it is implemented as the round-trip translation, where the input sentence is translated into a foreign sentence, then back-translated into a sentence in the same language as input BIBREF7. The process is shown in Figure FIGREF1. Apparently, two machine translation systems (English$\rightarrow $French and French$\leftarrow $English) are needed to conduct the generation of a paraphrase. Although the pivoting approach works in general, there are several intrinsic defects. 
First, the round-trip system can hardly explore all the paths of paraphrasing, since it is pivoted through the finite intermedia outputs of a translation system. More formally, let $Z$ denote the meaning representation of a sentence $X$, and finding paraphrases of $X$ can be treated as sampling another sentence $Y$ conditioning on the representation $Z$. Ideally, paraphrases should be generated by following $P(Y|X) = \int _{Z} P(Y|Z)P(Z|X)dZ$, which is marginalized over all possible values of $Z$. However, in the round-trip translation, only one or several $Z$s are sampled from the machine translation system $P(Z|X)$, which can lead to an inaccurate approximation of the whole distribution and is prone to the problem of semantic drift due to the sampling variances. Second, the results are determined by the pre-existing translation systems, and it is difficult to optimize the pipeline end-to-end. Last, the system is not efficient especially at the inference stage, because it needs two rounds of translation decoding. To address these issues, we propose a single-step zero-shot paraphrase generation model, which can be trained on machine translation corpora in an end-to-end fashion. Unlike the pivoting approach, our proposed model does not involve explicit translation between multiple languages. Instead, it directly learns the paraphrasing distribution $P(Y|X)$ from the parallel data sampled from $P(Z|X)$ and $P(Y|Z)$. Specifically, we build a Transformer-based BIBREF8 language model, which is trained on the concatenated bilingual parallel sentences with language indicators. At inference stage, given a input sentence in a particular language, the model is guided to generate sentences in the same language, which are deemed as paraphrases of the input. Our model is simple and compact, and can empirically reduce the risk of semantic drift to a large extent. Moreover, we can initialize our model with generative pre-training (GPT) BIBREF0 on monolingual data, which can benefit the generation in low-resource languages. Finally, we borrow the idea of denoising auto-encoder (DAE) to further enhance robustness in paraphrase generation. We conduct experiments on zero-shot paraphrase generation task, and find that the proposed model significantly outperforms the pivoting approach in terms of both automatic and human evaluations. Meanwhile, the training and inference cost are largely reduced compared to the pivot-based methods which involves multiple systems. <<</Introduction>>> <<<Methodology>>> <<<Transformer-based Language Model>>> Transformer-based language model (TLM) is a neural language model constructed with a stack of Transformer decoder layers BIBREF8. Given a sequence of tokens, TLM is trained with maximizing the likelihood: where $X=[x_1,x_2,\ldots ,x_n]$ is a sentence in a language (e.g., English), and $\theta $ denotes the parameters of the model. Each Transformer layer is composed of multi-head self-attention, layer normalization and a feed-forward network. We refer reader to the original paper for details of each component. Formally, the decoding probability is given by where $x_i$ denotes the token embedding, $p_i$ denote the positional embedding and $h_i$ denotes the output states of the $i$-th token, and $W_e$ and $W_o$ are the input and output embedding matrices. Although TLM is normally employed to model monolingual sequences, there is no barrier to utilize TLM to model sequences in multiple languages. 
In this paper, inspired by BIBREF9, we concatenate pairs of sentences from bilingual parallel corpora (e.g., English$\rightarrow $French) as training instances to the model. Let $X$ and $Y$ denote the parallel sentences in two different languages, the training objective becomes This bilingual language model can be regarded as the decoder-only model compared to the traditional encoder-decoder model. It has been proved to work effectively on monolingual text-to-text generation tasks such as summarization BIBREF10. The advantages of such architecture include less model parameters, easier optimization and potential better performance for longer sequences. Furthermore, it naturally integrates with language models pre-training on monolingual corpus. For each input sequence of concatenated sentences, we add special tokens $\langle $bos$\rangle $ and $\langle $eos$\rangle $ at the beginning and the end, and $\langle $delim$\rangle $ in between the sentences. Moreover, at the beginning of each sentence, we add a special token as its language identifier, for instance, $\langle $en$\rangle $ for English, $\langle $fr$\rangle $ for French. One example of English$\rightarrow $French training sequence is “$\langle $bos$\rangle $ $\langle $en$\rangle $ cat sat on the mat $\langle $delim$\rangle $ $\langle $fr$\rangle $ chat assis sur le tapis $\langle $eos$\rangle $". At inference stage, the model predicts the next word as the conventional auto-regressive model: <<</Transformer-based Language Model>>> <<<Zero-shot Paraphrase Generation>>> We train the bilingual language model on multiple bilingual corpora, for example, English$\leftrightarrow $French and German$\leftrightarrow $Chinese. Once the language model has been trained, we can conduct zero-shot paraphrase generation based on the model. Specifically, given an input sentence that is fed into the language model, we set the output language identifier the same as input, and then simply conduct decoding to generate paraphrases of the input sentence. Figure FIGREF2 illustrates the training and decoding process of our model. In the training stage, the model is trained to sequentially generate the input sentence and its translation in a specific language. Training is conducted in the way of teacher-forcing. In the decoding stage, after an English sentence “$\langle $bos$\rangle $ $\langle $en$\rangle $ cat sat on the mat $\langle $delim$\rangle $" is fed to the model, we intentionally set the output language identifier as “$\langle $en$\rangle $", in order to guide the model to continue to generate English words. At the same time, since the model has been trained on translation corpus, it implicitly learns to keep the semantic meaning of the output sentence the same as the input. Accordingly, the model will probably generate the paraphrases of the input sentence, such as “the cat sitting on the carpet $\langle $eos$\rangle $". It should be noted our model can obviously be trained on parallel paraphrase data without any modification. But in this paper, we will mainly focus on the research and evaluation in the zero-shot learning setting. In the preliminary experiments of zero-shot paraphrasing, we find the model does not perform consistently well and sometimes fails to generate the words in the correct language as indicated by the language identifier. Similar phenomenon has been observed in the research of zero-shot neural machine translation BIBREF11, BIBREF12, BIBREF13, which is referred as the degeneracy problem by BIBREF13. 
To address these problems in zero-shot paraphrase generation, we propose several techniques to improve the quality and diversity of the model, as follows. <<<Language Embeddings>>> The language identifier prior to the sentence does not always guarantee the language of the sequences generated by the model. In order to maintain language consistency, we introduce language embeddings, where each language is assigned a specific vector representation. Supposing that the language embedding for the $i$-th token in a sentence is $a_i$, we concatenate the language embedding with the Transformer output states and feed the result to the softmax layer for predicting each token. We empirically demonstrate that the language embedding added to each token can effectively guide the model to generate sentences in the required language. Note that we still let the model learn the output distribution for each language rather than simply restricting the output vocabulary. This offers flexibility to handle code-switching cases commonly seen in real-world data, e.g., English words can also appear in French sentences. <<</Language Embeddings>>> <<<Pre-Training on Monolingual Corpora>>> Language model pre-training has shown its effectiveness in language generation tasks such as machine translation, text summarization and generative question answering BIBREF14, BIBREF15, BIBREF16. It is particularly helpful for low/zero-resource tasks, since the knowledge learned from large-scale monolingual corpora can be transferred to downstream tasks via the pre-training-then-fine-tuning approach. Since our model for paraphrase generation shares the same architecture as the language model, we are able to pre-train the model on massive monolingual data. Pre-training on monolingual data is conducted in the same way as training on parallel data, except that each training example contains only one sentence with the beginning/end-of-sequence tokens and the language identifier. The language embeddings are also employed. The pre-training objective is the same as Equation (DISPLAY_FORM4). In our experiments, we first pre-train the model on monolingual corpora of multiple languages respectively, and then fine-tune the model on parallel corpora. <<</Pre-Training on Monolingual Corpora>>> <<<Denoising Auto-Encoder>>> We adopt the idea of the denoising auto-encoder (DAE) to further improve the robustness of our paraphrasing model. DAE was originally proposed to learn intermediate representations that are robust to partial corruption of the inputs when training auto-encoders BIBREF17. Specifically, the initial input $X$ is first partially corrupted into $\tilde{X}$, which can be treated as sampling from a noise distribution $\tilde{X}\sim q(\tilde{X}|X)$. Then, an auto-encoder is trained to recover the original $X$ from the noisy input $\tilde{X}$ by minimizing the reconstruction error. In applications to text generation BIBREF18 and machine translation BIBREF19, DAE has been shown to learn representations that are more robust to input noise and also generalize to unseen examples. Inspired by BIBREF19, we directly inject three different types of noise, commonly encountered in real applications, into the input sentences. 1) Deletion: We randomly delete 1% of the tokens from source sentences, for example, “cat sat on the mat $\mapsto$ cat on the mat.” 2) Insertion: We insert a random token into source sentences at 1% of positions, chosen at random, for example, “cat sat on the mat $\mapsto$ cat sat on red the mat.”
3) Reordering: We randomly swap 1% of the tokens in source sentences, keeping the distance between swapped tokens within 5, for example, “cat sat on the mat $\mapsto$ mat sat on the cat.” By introducing such noise into the input sentences while keeping the target sentences clean during training, our model can be more stable in generating paraphrases and generalize better to sentences unseen in the training corpus. The training objective with DAE is the same as above, except that the source side is replaced by its corrupted version $\tilde{X}$ while the target side remains clean. Once the model is trained, we generate paraphrases of a given sentence based on $P(Y|X;\theta)$. <<</Denoising Auto-Encoder>>> <<</Zero-shot Paraphrase Generation>>> <<</Methodology>>> <<<Experiments>>> <<<Datasets>>> We adopt a mixture of two multilingual translation corpora as our training data: MultiUN BIBREF20 and OpenSubtitles BIBREF21. MultiUN consists of 463,406 official documents in six languages, containing around 300M words for each language. OpenSubtitles is a corpus consisting of movie and TV subtitles, which contains 2.6B sentences over 60 languages. We select the four languages shared by the two corpora: English, Spanish, Russian and Chinese. Statistics of the training corpus are shown in Table TABREF14. Sentences are tokenized by WordPiece as in BERT. A multilingual vocabulary of 50K tokens is used. For validation and testing, we randomly sample 10,000 sentences each from every language pair; the remaining data are used for training. For monolingual pre-training, we use the English Wikipedia corpus, which contains 2,500M words. <<</Datasets>>> <<<Experimental Settings>>> We implement our model in TensorFlow BIBREF22. The size of our Transformer model is identical to BERT-base BIBREF23. The model consists of 12 Transformer blocks. The dimension of the token embeddings, position embeddings and Transformer hidden states is 768, while that of the position-wise feed-forward networks is 3072. The number of attention heads is 12. Models are trained using Adam optimization BIBREF24 with a learning rate of up to $1e-4$, $\beta_1=0.9$, $\beta_2=0.999$ and $L2$ weight decay of 0.01. For inference, we use a top-k truncated random sampling strategy that only samples from the k candidate words with the highest probabilities. Throughout our experiments, we train and evaluate two models for paraphrase generation: the bilingual model and the multilingual model. The bilingual models are trained only with English$\leftrightarrow$Chinese data, while the multilingual models are trained with all the data across the four languages. The round-trip translation baseline is based on a Transformer-based neural translation model. <<</Experimental Settings>>> <<<Automatic Evaluation>>> We evaluate the relevance between the input and the generated paraphrases, as well as the diversity among multiple paraphrases generated from the same input. For relevance, we use the cosine similarity between sentential representations BIBREF25. Specifically, we use the GloVe-840B embeddings BIBREF26 for word representation and Vector Extrema BIBREF25 for sentential representation. For generation diversity, we employ two evaluation metrics: Distinct-2 and inverse Self-BLEU (defined as $1-$Self-BLEU) BIBREF27. Larger values of Distinct-2 and inverse Self-BLEU indicate higher diversity of the generation. For each model, we draw curves in Figure FIGREF15 with the aforementioned metrics as coordinates, and each data point is obtained at a specific sampling temperature.
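The two diversity metrics just mentioned can be computed roughly as sketched below, using their usual definitions (unique-bigram ratio for Distinct-2, and the average BLEU of each generation against the remaining ones for Self-BLEU) and the NLTK BLEU implementation; the exact tokenization and smoothing settings used in the paper are not specified here, so this is only an illustrative sketch.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def distinct_2(sentences):
    # Ratio of unique bigrams to total bigrams over a set of tokenized generations.
    bigrams = [tuple(s[i:i + 2]) for s in sentences for i in range(len(s) - 1)]
    return len(set(bigrams)) / max(len(bigrams), 1)

def inverse_self_bleu(sentences):
    # 1 - Self-BLEU: each hypothesis is scored against the other generations as
    # references (assumes at least two generations per input).
    smooth = SmoothingFunction().method1
    scores = [sentence_bleu([r for j, r in enumerate(sentences) if j != i], hyp,
                            smoothing_function=smooth)
              for i, hyp in enumerate(sentences)]
    return 1.0 - sum(scores) / len(scores)

Higher values of both quantities indicate more diverse generations, matching the reading of the curves described next.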
Since a good paraphrasing model should generate both relevant and diverse paraphrases, a model whose curve lies towards the upper-right corner is regarded as performing well. <<<Comparison with Baseline>>> First, we compare our models with the conventional pivoting method, i.e., round-trip translation. As shown in Figure FIGREF15 (a)(b), both the bilingual and the multilingual model are better than the baseline in terms of relevance and diversity in most cases. In other words, at the same generation diversity (measured by both Distinct-2 and Self-BLEU), our models can generate paraphrases with more semantic similarity to the input sentence. Note that in Figure FIGREF15 (a), there is a crossing point between the curve of the bilingual model and the baseline curve when relevance is around 0.71. We specifically investigate generated paraphrases around this point and find that the baseline actually achieves better relevance when Distinct-2 is at a high level ($>$0.3). This means that our bilingual model drifts semantically faster than the baseline model as the Distinct-2 diversity increases. Round-trip translation performs two rounds of supervised translation, while zero-shot paraphrasing performs a single round of unsupervised `translation' (paraphrasing). We suspect that unsupervised paraphrasing is more sensitive to the decoding strategy. It also implies that the latent, language-agnostic representation may not be well learned in our bilingual model. Our multilingual model, on the other hand, alleviates this insufficiency. We further verify and analyze this as follows. <<</Comparison with Baseline>>> <<<Multilingual Models>>> As mentioned above, our bilingual model can be unstable in some cases due to the lack of a well-learned language-agnostic semantic representation. A natural remedy is to introduce multilingual corpora, which cover various translation directions. Training over multilingual corpora forces the model to decouple the language type from the semantic representation. Empirical results show that our multilingual model performs significantly better than the bilingual model. The red and blue curves in Figure FIGREF15 (a)(b) demonstrate a large improvement of our multilingual model over the bilingual model. In addition, the multilingual model also significantly outperforms the baseline in settings with reasonable relevance scores. <<</Multilingual Models>>> <<<Monolingual Pre-Training>>> As shown in Figure FIGREF15 (a)(b), the model with language model pre-training performs almost equally to its counterpart without pre-training. However, evaluations of fluency uncover the value of pre-training. We evaluate a group of models over our test set in terms of fluency, using an n-gram language model trained on 14k public-domain books. As depicted in Table TABREF25, models with language model pre-training stably achieve greater log-probabilities than the model without pre-training. In other words, language model pre-training brings better fluency. <<</Monolingual Pre-Training>>> <<</Automatic Evaluation>>> <<<Human Evaluation>>> 200 sentences are sampled from our test set for human evaluation. The human evaluation guidance generally follows that of BIBREF5 but with a compressed scoring range from [1, 5] to [1, 4]. We recruit five human annotators to evaluate the models in terms of semantic relevance and fluency. A test example consists of one input sentence, one generated sentence from the baseline model and one generated sentence from our model.
We randomly permute each pair of generated sentences to reduce annotators' bias towards a particular model. Each example is evaluated by two annotators. As shown in Table TABREF28, our method outperforms the baseline in both relevance and fluency significantly. We further calculate agreement (Cohen's kappa) between the two annotators. Both round-trip translation and our method perform well in terms of fluency, but the large gap in relevance between the two systems drew our attention. We investigated the test set in detail and found that the round-trip approach indeed generates more noise, as shown in the case studies. <<</Human Evaluation>>> <<<Case Studies>>> We further study some generated cases from different models. All results in Table TABREF30 are generated over our test set using random sampling. For both the baseline and the multilingual model, we tune the sampling temperatures to fix Distinct-2 and inverse Self-BLEU at 0.31 and 0.47, respectively. In the case studies, we find that our method usually generates sentences with better relevance to the source inputs, while the round-trip translation method can sometimes run into serious semantic drift. In the second case, our model demonstrates a desirable property: it keeps the meaning, and even the proper noun $guide$, unchanged while modifying the source sentence by both changing and reordering words. This property may be introduced by the DAE perturbation strategies, which improve the model's robustness and diversity simultaneously. These results provide evidence that our method outperforms the baseline in both relevance and diversity.
For instance, how to obtain a better latent semantic representation with multi-modal data, and how to further improve generation diversity without sacrificing relevance. We plan to tackle these challenging yet valuable problems in the future. <<</Conclusions>>> <<</Title>>>
{ "references": [ "our method outperforms the baseline in both relevance and fluency significantly." ], "type": "extractive" }
2003.08132
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What representations are presented by this paper? Context: <<<Title>>> Gender Representation in Open Source Speech Resources <<<Abstract>>> With the rise of artificial intelligence (AI) and the growing use of deep-learning architectures, the question of ethics, transparency and fairness of AI systems has become a central concern within the research community. We address transparency and fairness in spoken language systems by proposing a study about gender representation in speech resources available through the Open Speech and Language Resource platform. We show that finding gender information in open source corpora is not straightforward and that gender balance depends on other corpus characteristics (elicited/non elicited speech, low/high resource language, speech task targeted). The paper ends with recommendations about metadata and gender information for researchers in order to assure better transparency of the speech systems built using such corpora. <<</Abstract>>> <<<>>> 1.1em <<</>>> <<<Introduction>>> The ever growing use of machine learning has put data at the center of the industrial and research spheres. Indeed, for a system to learn how to associate an input X to an output Y, many paired examples are needed to learn this mapping process. This need for data coupled with the improvement in computing power and algorithm efficiency has led to the era of big data. But data is not only needed in mass, but also with a certain level of quality. In this paper we argue that one of the main quality of data is its transparency. In recent years, concerns have been raised about the biases existing in the systems. A well-known case in Natural Language Processing (NLP) is the example of word embeddings, with the studies of bolukbasi2016man and caliskan2017semantics which showed that data are socially constructed and hence encapsulate a handful of social representations and power structures, such as gender stereotypes. Gender-bias has also been found in machine translation tasks BIBREF0, as well as facial recognition BIBREF1 and is now at the center of research debates. In previous work, we investigated the impact of gender imbalance in training data on the performance of an automatic speech recognition (ASR) system, showing that the under-representation of women led to a performance bias of the system for female speakers BIBREF2. In this paper, we survey the gender representation within an open platform gathering speech and language resources to develop speech processing tools. The aim of this survey is twofold: firstly, we investigate the gender balance within speech corpora in terms of speaker representation but also in terms of speech time available for each gender category. Secondly we propose a reflection about general practices when releasing resources, basing ourselves on some recommendations from previous work. Contributions. 
The contributions of our work are the following: an exploration of 66 different speech corpora in terms of gender, showing that gender balance is achieved in terms of speakers in elicited corpora, but that it is not the case for non-elicited speech, nor for the speech time allocated to each gender category an assessment of the global lack of meta-data within free open source corpora, alongside recommendations and guidelines for resources descriptions, based on previous work <<</Introduction>>> <<<OpenSLR>>> Open Speech Language Resources (OpenSLR) is a platform created by Daniel Povey. It provides a central hub to gather open speech and language resources, allowing them to be accessed and downloaded freely. OpenSLR currently hosts 83 resources. These resources consist of speech recordings with transcriptions but also of softwares as well as lexicons and textual data for language modeling. As resources are costly to produce, they are most of the time a paying service. Therefore it is hard to study gender representation at scale. We thus focus on the corpora available on OpenSLR due to their free access and to the fact that OpenSLR is explicitly made to help develop speech systems (mostly ASR but also text-to-speech (TTS) systems). In our work, we focus on speech data only. Out of the 83 resources gathered on the platform, we recorded 53 speech resources. We did not take into account multiple releases of the same corpora but only kept the last version (e.g. TED LIUM BIBREF3) and we also removed subsets of bigger corpora (e.g. LibriTTS corpus BIBREF4). We make the distinction between a resource and a corpus, as each resource can contain several languages (e.g. Vystadial korvas2014) or several accent/dialect of a same language (e.g. the crowdsourced high-quality UK and Ireland English Dialect speech data set googleuken2019). In our terminology, we define a corpus as monolingual and monodialectal, so resources containing different dialects or languages will be considered as containing different corpora. We ended up with 66 corpora, in 33 different languages with 51 dialect/accent variations. The variety is also great in terms of speech types (elicited and read speech, broadcast news, TEDTalks, meetings, phonecalls, audiobooks, etc.), which is not suprising, given the many different actors who contributed to this platform. We consider this sample to be of reasonable size to tackle the question of gender representation in speech corpora. OpenSLR also constitutes a good indicator of general practice as it does not expect a defined format nor does have explicit requirements about data structures, hence attesting of what metadata resources creators consider important to share when releasing resources for free on the Web. <<</OpenSLR>>> <<<Methodology>>> In order to study gender representation within speech resources, let us start by defining what gender is. In this work, we consider gender as a binary category (male and female speakers). Nevertheless, we are aware that gender as an identity also exists outside of these two categories, but we did not find any mention of non-binary speakers within the corpora surveyed in our study. Following work by doukhan2018open, we wanted to explore the corpora looking at the number of speakers of each gender category as well as their speech duration, considering both variables as good features to account for gender representation. After the download, we manually extracted information about gender representation in each corpus. 
<<<Speaker Information and Lack of Meta-Data>>> The first difficulty we came across was the general absence of information. As gender in technology is a relatively recent research interest, most of the time gender demographics are not made available by the resources creators. So, on top of the further-mentioned general corpus characteristics (see Section SECREF11), we also report in our final table where the gender information was found and whether it was provided in the first place or not. The provided attribute corresponds to whether gender info was given somewhere, and the found_in attribute corresponds to where we extracted the gender demographics from. The different modalities are paper, if a paper was explicitly cited along the resource, metadata if a metadata file was included, indexed if the gender was explicitly indexed within data or if data was structured in terms of gender and manually if the gender information are the results of a manual research made by ourselves, trying to either find a paper describing the resources, or by relying on regularities that seems like speaker ID and listening to the recordings. We acknowledge that this last method has some methodological shortcomings: we relied on our perceptual stereotypes to distinguish male from female speakers, most of the time for languages we have no knowledge of, but considering the global lack of data, we used it when corpora were small enough in order to increase our sample size. <<</Speaker Information and Lack of Meta-Data>>> <<<Speech Time Information and Data Consistency>>> The second difficulty regards the fact that speech time information are not standardised, making impossible to obtain speech time for individual speakers or gender categories. When speech time information is provided, the statistics given do not all refer to the same measurements. Some authors report speech duration in hours e.g. panayotov2015librispeech,hernandez2018ted, some the number of utterances (e.g BIBREF5) or sentences (e.g. googleuken2019), the definition of these two terms never being clearly defined. We gathered all information available, meaning that our final table contains some empty cells, and we found that there was no consistency between speech duration and number of utterances, excluding the possibility to approximate one by the other. As a result, we decided to rely on the size of the corpora as a (rough) approximation of the amount of speech data available, the text files representing a small proportion of the resources size. This method however has drawbacks as not all corpora used the same file format, nor the same sampling rate. Sampling rate has been provided as well in the final table, but we decided to rely on qualitative categories, a corpus being considered small if its size is under 5GB, medium if it is between 5 and 50GB and large if above. <<</Speech Time Information and Data Consistency>>> <<<Corpora Characteristics>>> The final result consists of a table reporting all the characteristics of the corpora. 
The chosen features are the following: the resource identifier (id) as defined on OpenSLR the language (lang) the dialect or accent if specified (dial) the total number of speakers as well as the number of male and female speakers (#spk, #spk_m, #spk_f) the total number of utterances as well as the total number of utterances for male and female speakers (#utt, #utt_m, #utt_f) the total duration, or speech time, as well as the duration for male and female speakers (dur, dur_m, dur_f) the size of the resource in gigabytes (sizeGB) as well as a qualitative label (size, taking its value between “big", “medium", “small") the sampling rate (sampling) the speech task targeted for the resource (task) is it elicited speech or not: we define as non-elicited speech data which would have existed without the creation of the resources (e.g TedTalks, audiobooks, etc.), other speech data are considered as elicited the language status (lang_status): a language is considered either as high- or low-resourced. The language status is defined from a technological point of view (i.e. are there resources or NLP systems available for this language?). It is fixed at the language granularity (hence the name), regardless of the dialect or accent (if provided). the year of the release (year) the authors of the resource (producer) <<</Corpora Characteristics>>> <<</Methodology>>> <<<Analysis>>> <<<Gender Information Availability>>> Before diving into the gender analysis, we report the number of corpora for which gender information was provided. Indeed, 36.4% of the corpora do not give any gender information regarding the speakers. Moreover, almost 20% of the corpora do not provide any speaker information whatsoever. Table sums up the number of corpora for which speaker's gender information was provided and if it was, where it was found. We first looked at the metadata file if available. If no metadata was provided, we searched whether gender was indexed within the data structure. At last, if we still could not find anything, we looked for a paper describing the data set. This search pipeline results in ordered levels for our found_in category, meaning papers might also be available for corpora with the “metadata" or “indexed" modalities. When gender information was given it was most of the time in terms of number of speakers in each gender categories, as only five corpora provide speech time for each category. Table reports what type of information was provided in terms of gender, in the subset of the 42 corpora containing gender information. We observe that gender information is easier to find when it regards the number of speakers, than when it accounts for the quantity of data available for each gender group. Due to this lack of data, we did not study the speech time per gender category as intended, but we relied on utterance count when available. It is worth noticing however, that we did not find any consistency between speech time and number of utterances, so such results must be taken with caution. Out of the 42 corpora providing gender information, 41 reported speaker counts for each gender category. We manually gathered speaker gender information for 7 more corpora, as explained in the previous section, reaching a final sample size of 47 corpora. 
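Taken together, the features listed in the Corpora Characteristics section above amount to a per-corpus record; a hypothetical entry might look like the sketch below (all values are invented for illustration and do not describe any actual OpenSLR resource):

example_corpus = {
    "id": "SLR00",                 # OpenSLR resource identifier (placeholder)
    "lang": "English", "dial": "Irish English",
    "#spk": 100, "#spk_m": 48, "#spk_f": 52,
    "#utt": 20000, "#utt_m": 9500, "#utt_f": 10500,
    "dur": None, "dur_m": None, "dur_f": None,   # speech time is often missing
    "sizeGB": 3.2, "size": "small",              # small (<5GB), medium (5-50GB), big (>50GB)
    "sampling": 16000,
    "task": "ASR", "elicited": True, "lang_status": "high-resource",
    "year": 2019, "producer": "example producer",
}

Such a record makes the gaps discussed above explicit: the per-gender speaker counts are present, while the per-gender durations are not.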
<<</Gender Information Availability>>> <<<Gender Distribution Among Speakers>>> <<<Elicited vs Non-Elicited Data>>> Generally, when gender demographics are provided, we observe the following distribution: out of the 6,072 speakers, 3,050 are women and 3,022 are men, so parity is almost achieved. We then look at whether data was elicited or not, non-elicited speech being speech that would have existed without the corpus creation such as TEDTalks, interviews, radio broadcast and so on. We assume that if data was not elicited, gender imbalance might emerge. Indeed, non-elicited data often comes from the media, and it has been shown, that women are under-represented in this type of data BIBREF6. This disparity of gender representation in French media BIBREF7, BIBREF8 precisely led us to the present survey. Our expectations are reinforced by examples such as the resource of Spanish TEDTalks, which states in its description regarding the speakers that “most of them are men" mena2019. We report results in Table . In both cases (respectively elicited and non-elicited speech), gender difference is relatively small (respectively 5.6 percentage points and 5.8 points), far from the 30 percentage points difference observed in BIBREF2. A possible explanation is that either elicited or not, corpora are the result of a controlled process, so gender disparity will be reduced as much as possible by the corpus authors. However, we notice that, apart from Librispeech BIBREF9, all the non-elicited corpora are small corpora. When removing Librispeech from the analysis, we observe a 1/3-2/3 female to male ratio, coherent with our previous findings. This can be explained by the care put by the creators of the Librispeech data set to "[ensure] a gender balance at the speaker level and in terms of the amount of data available for each gender" BIBREF9, while general gender disparity is observed in smaller corpora. What emerges from these results is that when data sets are not elicited or carefully balanced, gender disparity creeps in. This gender imbalance is not observed at the scale of the entire OpenSLR platform, due to the fact that most of the corpora are elicited (89.1%). Hence, the existence of such gender gap is prevented by a careful control during the data set creation process. <<</Elicited vs Non-Elicited Data>>> <<<High-resource vs Low-resource Languages>>> In the elicited corpora made available on OpenSLR, some are of low-resource languages other high-resource languages (mostly regional variation of high-resources languages). When looking at gender in these elicited corpora, we do not observe a difference depending on the language status. However, we can notice that high-resource corpora contain twice as many speakers, all low-resource language corpora being small corpora. <<</High-resource vs Low-resource Languages>>> <<<“How Can I Help?": Spoken Language Tasks>>> Speech corpora are built in order to train systems, most of the time ASR or TTS ones. We carry out our gender analysis taking into account the task addressed and obtain the results reported in Table . We observe that if gender representation is almost balanced within ASR corpora, women are better represented in TTS-oriented data sets. This can be related to the UN report of recommendation for gender-equal digital education stating that nowadays, most of the vocal assistants are given female voices which raises educational and societal problems BIBREF10. 
This gendered design of vocal assistants is sometimes justified by relying on gender stereotypes such as “female voices are perceived as more helpful, sympathetic or pleasant.” As TTS systems are often used to create such assistants, we can assume that using female voices has become general practice to ensure the adoption of the system by the users. This claim can, however, be nuanced by nass2005wired, who showed that other factors might be worth taking into account to design gendered voices, such as social identification and cultural gender stereotypes. <<</“How Can I Help?": Spoken Language Tasks>>> <<</Gender Distribution Among Speakers>>> <<<Speech Time and Gender>>> Due to a global lack of speech time information, we did not analyse the amount of data available per speaker category. However, utterance counts were often reported, or easily found within the corpora. We gathered utterance counts for a total of 32 corpora. We observe that while gender balance is almost achieved in terms of the number of speakers, at the utterance level male speech is more represented. However, this disparity is only the effect of three corpora, containing 51,463 and 26,567 korvas2014 and 8,376 mena2019 utterances for male speakers, while the mean number of utterances per corpus is 1,942 for male speakers and 1,983 for female speakers. Removing these three outliers, we observe that utterance counts are balanced between gender categories. It is worth noting that the high number of utterances in these outliers is surprising, considering that these three corpora are small (2.1GB, 2.8GB) and medium (5.2GB). This highlights the problem with the notion of utterance, which is never explicitly defined. Such differences in granularity thus prevent comparison between corpora. <<</Speech Time and Gender>>> <<<Evolution over Time>>> When collecting data, we noticed that the more recent the resources, the easier it was to find gender information, attesting to the emergence of gender in technology as a relevant topic. As pointed out by Kate crawford2017nips in her NeurIPS keynote talk, fairness in AI has recently become a huge part of the research effort in AI and machine learning. As a result, methodology papers have been published, such as the work of bender2018data for NLP data and systems, encouraging the community towards rich and explicit data statements. Figure FIGREF34 shows the evolution of gender information availability over the last 10 years. We can see that this peak of interest is also present in our data, with more resources provided with gender information after 2017. <<</Evolution over Time>>> <<</Analysis>>> <<<Recommendations>>> The social impact of big data and the ethical problems raised by NLP systems have already been discussed in previous work. wilkinson2016fair developed principles for scientific data management and stewardship, the FAIR Data Principles, based on four foundational data characteristics: Findability, Accessibility, Interoperability and Reusability BIBREF11. In our case, findability and accessibility are taken into account by design, resources on OpenSLR being freely accessible. Interoperability and Reusability of data are, however, not yet achieved. Another attempt to integrate this discussion about data description within the NLP community was made by COUILLAULT14.424, who proposed an Ethics and Big Data Charter to help resource creators describe data from a legal and ethical point of view.
hovy2016social highlighted the different social implications of NLP systems, such as exclusion, overgeneralisation and exposure problems. More recently, work by bender2018data proposed the notion of data statement to ensure data transparency. The common point of all these studies is that information is key. The FAIR Principles are a baseline to guarantee the reproducibility of scientific findings. We need data to be described exhaustively in order to acknowledge demographic bias that may exist within our corpora. As pointed out by hovy2016social, language is always situated and so are language resources. This demographic bias in itself will always exist, but by not mentioning it in the data description we might create tools and systems that will have negative impacts on society. The authors presented the notion of exclusion as a demographic misrepresentation leading to exclusion of certain groups in the use of a technology, due to the fact that this technology fail to take them into account during its developing process. This directly relates to our work on ASR performance on women speech, and we can assume that this can be extended to other speaker characteristics, such as accent or age. To prevent such collateral consequences of NLP systems, bender2018data advocated the use of data statement, as a professional and research practice. We hope the present study will encourage researchers and resources creators to describe exhaustively their data sets, following the guidelines proposed by these authors. <<<On the Importance of Meta-Data>>> The first take-away of our survey is that obtaining an exhaustive description of the speakers within speech resources is not straightforward. This lack of meta-data is a problem in itself as it prevents guaranteeing the generalisability of systems or linguistics findings based on these corpora, as pointed out by bender2018data. As they rightly highlighted in their paper, the problem is also an ethical one as we have no way of controlling the existence of representation disparity in data. And this disparity may lead to bias in our systems. We observed that most of the speech resources available contain elicited speech and that on average, researchers are careful as to balance the speakers in terms of gender when crafting data. But this cannot be said about corpora containing non-elicited speech. And apart from Librispeech, we observed a general gender imbalance, which can lead to a performance decrease on female speech BIBREF2. Speech time measurements are not consistent throughout our panel of resources and utterance counts are not reliable. We gathered the size of the corpora as well as the sampling rate in order to estimate the amount of speech time available, but variation in terms of precision, bit-rate, encoding and containers prevent us from reaching reliable results. Yet, speech time information enables us to know the quantity of data available for each category and this directly impacts the systems. This information is now given in papers such as the one describing the latest version of TEDLIUM, as this information is paramount for speaker adaptation. bender2018data proposed to provide the following information alongside corpus releases: curation rationale, language variety, speaker demographic, annotator demographic, speech situation, text characteristics, recording quality and others. Information we can add to their recommendations relates to the duration of the data sets in hours or minutes, globally and per speaker and/or gender category. 
This could allow to quickly check the gender balance in terms of quantity of data available for each category, without relying on an unreliable notion of utterance. This descriptive work is of importance for the future corpora, but should also be made for the data sets already released as they are likely to be used again by the community. <<</On the Importance of Meta-Data>>> <<<Transparency in Evaluation>>> Word Error Rate (WER) is usually computed as the sum of the errors made on the test data set divided by the total number of words. But if such an evaluation allows for an easy comparison of the systems, it fails to acknowledge for their performance variations. In our survey, 13 of the 66 corpora had a paper describing the resources. When the paper reported ASR results, none of them reported gendered evaluation even if gender information about the data was provided. Reporting results for different categories is the most straightforward way to check for performance bias or overfitting behaviours. Providing data statements is a first step towards, but for an open and fair science, the next step should be to also take into account such information in the evaluation process. A recent work in this direction has been made by mitchell2019model who proposed to describe model performance in model cards, thus encouraging a transparent report of model results. <<</Transparency in Evaluation>>> <<</Recommendations>>> <<<Conclusion>>> In our gender survey of the corpora available on the OpenSLR platform, we observe the following trends: parity is globally achieved on the whole, but interactions with other corpus characteristics reveal that gender misrepresentation needs more than just a number of speakers to be identified. In non-elicited data (meaning type of speech that would have existed without the creation of the corpus, such as TEDTalks or radio broadcast), we found that, except in Librispeech where gender balance is controlled, men are more represented than women. It also seems that most of the corpora aimed at developing TTS systems contain mostly female voices, maybe due to the stereotype associating female voice with caring activities. We also observe that gender description of data has been taken into account by the community, with an increased number of corpora provided with gender meta-data in the last two years. Our sample containing only 66 corpora, we acknowledge that our results cannot necessarily be extended to all language resources, however it allows us to open discussion about general corpus description practices, pointing out a lack of meta-data and to actualise the discourse around the social implications of NLP systems. We advocate for a more open science and technology by following guidelines such as the FAIR Data Principle or providing data statements, in order to ensure scientific generalisation and interoperability while preventing social harm. <<</Conclusion>>> <<</Title>>>
{ "references": [ "the number of speakers of each gender category,their speech duration" ], "type": "extractive" }
2001.02380
Please answer the following question with yes or no based on the given text. You only need to output 'Yes' or 'No' without any additional explanation. Question: Are some models evaluated using this metric, what are the findings? Context: <<<Title>>> A Neural Approach to Discourse Relation Signal Detection <<<Abstract>>> Previous data-driven work investigating the types and distributions of discourse relation signals, including discourse markers such as 'however' or phrases such as 'as a result' has focused on the relative frequencies of signal words within and outside text from each discourse relation. Such approaches do not allow us to quantify the signaling strength of individual instances of a signal on a scale (e.g. more or less discourse-relevant instances of 'and'), to assess the distribution of ambiguity for signals, or to identify words that hinder discourse relation identification in context ('anti-signals' or 'distractors'). In this paper we present a data-driven approach to signal detection using a distantly supervised neural network and develop a metric, {\Delta}s (or 'delta-softmax'), to quantify signaling strength. Ranging between -1 and 1 and relying on recent advances in contextualized words embeddings, the metric represents each word's positive or negative contribution to the identifiability of a relation in specific instances in context. Based on an English corpus annotated for discourse relations using Rhetorical Structure Theory and signal type annotations anchored to specific tokens, our analysis examines the reliability of the metric, the places where it overlaps with and differs from human judgments, and the implications for identifying features that neural models may need in order to perform better on automatic discourse relation classification. <<</Abstract>>> <<<Introduction>>> The development of formal frameworks for the analysis of discourse relations has long gone hand in hand with work on signaling devices. The analysis of discourse relations is also closely tied to what a discourse structure should look like and what discourse goals should be fulfilled in relation to the interpretation of discourse relations BIBREF0. Earlier work on the establishment of inventories of discourse relations and their formalization (BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6) relied on the existence of `discourse markers' (DMs) or `connectives', including conjunctions such as because or if, adverbials such as however or as a result, and coordinations such as but, to identify and distinguish relations such as condition in SECREF1, concession in SECREF1, cause in SECREF1, or contrast, result etc., depending on the postulated inventory of relations (signals for these relations as identified by human analysts are given in bold; examples come from the GUM corpus BIBREF7, presented in Section SECREF3). . [If you work for a company,]$_{\textsc {condition}}$ [they pay you that money.] . [Albeit limited,]$_{\textsc {concession}}$ [these results provide valuable insight into SI interpretation by Chitonga-speaking children.] . 
[not all would have been interviewed at Wave 3] [due to differential patterns of temporary attrition]$_{\textsc {cause}}$ The same reasoning of identifying relations based on overt signals has been applied to the comparison of discourse relations across languages, by comparing inventories of similar function words cross-linguistically (BIBREF8, BIBREF9); and the annotation guidelines of prominent contemporary corpora rely on such markers as well: for instance, the Penn Discourse Treebank (see BIBREF10) explicitly refers to either the presence of DMs or the possibility of their insertion in cases of implicit discourse relations, and DM analysis in Rhetorical Structure Theory BIBREF11 has also shown the important role of DMs as signals of discourse relations at all hierarchical levels of discourse analysis BIBREF12. At the same time, research over the past two decades analyzing the full range of possible cues that humans use to identify the presence of discourse relations has suggested that classic DMs such as conjunctions and adverbials are only a part of the network of signals that writers or speakers can harness for discourse structuring, which also includes entity-based cohesion devices (e.g. certain uses of anaphora, see BIBREF13), alternative lexicalizations using content words, as well as syntactic constructions (see BIBREF14 and the addition of alternative lexicalization constructions, AltLexC, in the latest version of PDTB, BIBREF15). In previous work, two main approaches to extracting the inventory of discourse signal types in an open-ended framework can be identified: data-driven approaches, which attempt to extract relevant words from distributional properties of the data, using frequencies or association measures capturing their co-occurrences with certain relation types (e.g. BIBREF16, BIBREF17); and manual annotation efforts (e.g. BIBREF10, BIBREF18), which develop categorization schemes and guidelines for human evaluation of signaling devices. The former family of methods benefits from an unbiased openness to any and every type of word which may reliably co-occur with some relation types, whether or not a human might notice it while annotating, as well as the naturally graded and comparable nature of the resulting quantitative scores, but, as we will show, falls short in identifying specific cases of a word being a signal (or not) in context. By contrast, the latter approach allows for the identification of individual instances of signaling devices, but relies on less open-ended guidelines and is categorical in nature: a word either is or isn't a signal in context, providing less access to concepts such as signaling strength. The goal of this paper is to develop and evaluate a model of discourse signal identification that is built bottom up from the data, but retains sensitivity to context in the evaluation of each individual example. In addition, even though this work is conducted within Rhetorical Structural Theory, we hope that it can shed light on signal identification of discourse relations across genres and provide empirical evidence to motivate research on theory-neutral and genre-diverse discourse processing, which would be beneficial for pushing forward theories of discourse across frameworks or formalisms. Furthermore, employing a computational approach to studying discourse relations has a promising impact on various NLP downstream tasks such as question answering and document summarization etc. 
For example, BIBREF20 incorporated discourse information into the task of automated text comprehension and benefited from such information without relying on explicit annotations of discourse structure during training, which outperformed state-of-the-art text comprehension systems at the time. Towards this goal, we begin by reviewing some previous work in the traditions sketched out above in the next section, and point out some open questions which we would like to address. In Section SECREF3 we present the discourse annotated data that we will be using, which covers a number of English text types from the Web annotated for 20 discourse relations in the framework of Rhetorical Structure Theory, and is enriched with human annotations of discourse relation signaling devices for a subset of the data. Moreover, we also propose a taxonomy of anchored signals based on the discourse annotated data used in this paper, illustrating the properties and the distribution of the anchorable signals. In Section SECREF4 we then train a distantly supervised neural network model which is made aware of the relations present in the data, but attempts to learn which words signal those relations without any exposure to explicit signal annotations. We evaluate the accuracy of our model using state-of-the-art pretrained and contextualized character and word embeddings, and develop a metric for signaling strength based on a masking concept similar to permutation importance, which naturally lends itself to the definition of both positive and negative or `anti-signals', which we will refer to as `distractors'. In Section SECREF5, we combine the anchoring annotation data from Section SECREF3 with the model's predictions to evaluate how `human-like' its performance is, using an information retrieval approach measuring recall@k and assessing the stability of different signal types based on how the model scores them. We develop a visualization for tokenwise signaling strength and perform error analysis for some signals found by the model which were not flagged by humans and vice versa, and point out the strengths and weaknesses of the architecture. Section SECREF6 offers further discussion of what we can learn from the model, what kinds of additional features it might benefit from given the error analysis, and what the distributions of scores for individual signals can teach us about the ambiguity and reliability of different signal types, opening up avenues for further research. <<</Introduction>>> <<<Previous Work>>> <<<Data-driven Approaches>>> A straightforward approach to identifying discourse relation signals in corpora with discourse parses is to extract frequency counts for all lexical types or lemmas and cross-tabulate them with discourse relations, such as sentences annotated as cause, elaboration, etc. (e.g. BIBREF21, BIBREF22, BIBREF17). Table TABREF4, reproduced from BIBREF17, illustrates this approach for the Russian RST Treebank. This approach quickly reveals the core inventory of cue words in the language, and in particular the class of low-ambiguity discourse markers (DMs), such as odnako `however' signaling contrast (see Fraser 1999 on delimiting the class of explicit DMs) or relative pronouns signaling elaboration. As such, it can be very helpful for corpus-based lexicography of discourse markers (cf. BIBREF23). The approach can potentially include multiword expressions, if applied equally to multi-token spans (e.g. 
as a result), and because it is language-independent, it also allows for a straightforward comparison of connectives or other DMs across languages. Results may also converge across frameworks, as the frequency analysis may reveal the same items in different corpora annotated using different frameworks. For instance, the inventory of connectives found in work on the Penn Discourse Treebank (PDTB, see BIBREF10) largely converges with findings on connectives using RST (see BIBREF24, BIBREF18): conjunctions such as but can mark different kinds of contrastive relations at a high level, and adverbs such as meanwhile can convey contemporaneousness, among other things, even when more fine-grained analyses are applied. However, a purely frequentist approach runs into problems on multiple levels, as we will show in Section SECREF4: high frequency and specificity to a small number of relations characterize only the most common and unambiguous discourse markers, but not less common ones. Additionally, differentiating actual and potentially ambiguous usages of candidate words in context requires substantial qualitative analysis (see BIBREF25), which is not reflected in aggregated counts, and signals that belong to a class of relations (e.g. a variety of distinct contrastive relation types) may appear to be non-specific, when in fact they reliably mark a superset of relations. Other studies have used more sophisticated metrics, such as point-wise mutual information (PMI), to identify words associated with particular relations BIBREF16. Using the PDTB corpus, BIBREF16 extracted such scores and measured the contribution of different signal types based on the information gain which they deliver for the classification of discourse relations at various degrees of granularity, as expressed by the hierarchical labels of PDTB relation types. This approach is most similar to the goal given to our own model in Section SECREF4, but is less detailed in that the aggregation process assigns a single number to each candidate lexical item, rather than assigning contextual scores to each instance. Finally, we note that for hierarchical discourse annotation schemes, the data-driven approaches described here become less feasible at higher levels of abstraction, as relations connecting entire paragraphs encompass large amounts of text, and it is therefore difficult to find words with high specificity to those relations. As a result, approaches using human annotation of discourse relation signals may ultimately be irreplaceable. <<</Data-driven Approaches>>> <<<Discourse Relation Signal Annotations>>> Discourse relation signals are broadly classified into two categories: anchored signals and unanchored signals. By `anchoring' we refer to associating signals with concrete token spans in texts. Intuitively, most signals are anchorable, since they correspond to certain token spans. However, it is also possible for a discourse relation to be signaled but remain unanchored. Results from BIBREF27 indicated that there are several signaled but unanchored relations, such as preparation and background, since these are high-level discourse relations that capture and correspond to genre features, such as the interview layout in interviews, where the conversation is constructed as a question-answer scheme, and are thus rarely anchored to tokens.
The Penn Discourse Treebank (PDTB V3, BIBREF15) is the largest discourse annotated corpus of English, and the largest resource annotated explicitly for discourse relation signals such as connectives, with similar corpora having been developed for a variety of languages (e.g. BIBREF28 for Turkish, BIBREF29 for Chinese). However the annotation scheme used by PDTB is ahierarchical, annotating only pairs of textual argument spans connected by a discourse relation, and disregarding relations at higher levels, such as relations between paragraphs or other groups of discourse units. Additionally, the annotation scheme used for explicit signals is limited to specific sets of expressions and constructions, and does not include some types of potential signals, such as the graphical layout of a document, lexical chains of (non-coreferring) content words that are not seen as connectives, or genre conventions which may signal the discourse function for parts of a text. It is nevertheless a very useful resource for obtaining frequency lists of the most prevalent DMs in English, as well as data on a range of phenomena such as anaphoric relations signaled by entities, and some explicitly annotated syntactic constructions. Working in the hierarchical framework of Rhetorical Structure Theory BIBREF11, BIBREF18 re-annotated the existing RST Discourse Treebank BIBREF30, by taking the existing discourse relation annotations in the corpus as a ground truth and analyzing any possible information in the data, including content words, patterns of repetition or genre conventions, as a possibly present discourse relation signaling device. The resulting RST Signalling Corpus (RST-SC, BIBREF31) consists of 385 Wall Street Journal articles from the Penn Treebank BIBREF32, a smaller subset of the same corpus used in PDTB. It contains 20,123 instances of 78 relation types (e.g. attribution, circumstance, result etc.), which are enriched with 29,297 signal annotations. BIBREF12 showed that when all types of signals are considered, over 86% of discourse relations annotated in the corpus were signaled in some way, but among these, just under 20% of cases were marked by a DM. However, unlike PDTB, the RST Signalling Corpus does not provide a concrete span of tokens for the locus of each signal, indicating instead only the type of signaling device used. Although the signal annotations in RST-SC have a broader scope than those in PDTB and are made more complex by extending to hierarchical relations, BIBREF33 have shown that RST-SC's annotation scheme can be `anchored' by associating discourse signal categories from RST-SC with concrete token spans. BIBREF27 applied the same scheme to a data set described in Section SECREF3, which we will use to evaluate our model in Section SECREF5. Since that data set is based on the same annotation scheme of signal types as RST-SC, we will describe the data for the present study and RST-SC signal type annotation scheme next. <<</Discourse Relation Signal Annotations>>> <<</Previous Work>>> <<<Data>>> <<<Anchored Signals in the GUM Corpus>>> In order to study open-ended signals anchored to concrete tokens, we use the signal-annotated subset of the freely available Georgetown University Multilayer (GUM) Corpus BIBREF7 from BIBREF27. 
Our choice to use a multi-genre RST-annotated corpus rather than PDTB, which also contains discourse relation signal annotation to a large extent, is motivated by three reasons: The first reason is that we wish to explore the full range of potential signals, as laid out in the work on the Signalling Corpus BIBREF12, BIBREF34, whereas PDTB annotates only a subset of the possible cues identified by human annotators. Secondly, the use of RST as a framework allows us to examine discourse relations at all hierarchical levels, including long distance, high-level relations between structures as large as paragraphs or sections, which often have different types of signals allowing their identification. Finally, although the entire GUM corpus is only about half the size of RST-DT (109K tokens), using GUM offers the advantage of a more varied range of genres than PDTB and RST-SC, both of which annotate Wall Street Journal data. The signal-annotated subset of GUM includes academic papers, how-to guides, interviews and news text, encompassing over 11,000 tokens. Although this data set may be too small to train a successful neural model for signal detection, we will not be using it for this purpose; instead, we will reserve it for use solely as a test set, and use the remainder of the data (about 98K tokens) to build our model (see Section SECREF28 for more details about the subsets and splits), including data from four further genres, for which the corpus also contains RST annotations but no signaling annotations: travel guides, biographies, fiction, and Reddit forum discussions. The GUM corpus is manually annotated with a large number of layers, including document layout (headings, paragraphs, figures, etc.); multiple POS tags (Penn tags, CLAWS5, Universal POS); lemmas; sentence types (e.g. imperative, wh-question etc., BIBREF35); Universal Dependencies BIBREF36; (non-)named entity types; coreference and bridging resolution; and discourse parses using Rhetorical Structure Theory BIBREF11. In particular, the RST annotations in the corpus use a set of 20 commonly used RST relation labels, which are given in Table TABREF10, along with their frequencies in the corpus. The relations cover asymmetrical prominence relations (satellite-nucleus) and symmetrical ones (multinuclear relations), with the restatement relation being realized in two versions, one for each type. The signaling annotation in the corpus follows the scheme developed by RST-SC, with some additions. Although RST-SC does not indicate token positions for signals, it provides a detailed taxonomy of signal types which is hierarchically structured into three levels: signal class, denoting the signal's degree of complexity; signal type, indicating the linguistic system to which it belongs; and specific signal, which gives the most fine-grained subtypes of signals within each type. It is assumed that any number of word tokens can be associated with any number of signals (including the same tokens participating in multiple signals), that signals can arise without corresponding to specific tokens (e.g. due to graphical layout of paragraphs), and that each relation can have an unbounded number of signals ($0-n$), each of which is characterized by all three levels. The signal class level is divided into single, combined (for complex signals), and unsure for unclear signals which cannot be identified conclusively, but are noted for further study. For each signal (regardless of its class), signal type and specific signal are identified.
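To make the shape of these annotations concrete, the following is a minimal sketch of how an anchored signal record could be represented programmatically; the class and field names are illustrative and are not taken from the GUM or RST-SC tooling, and the example values are drawn from the joint relation discussed below.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative data model (hypothetical names): each signal carries the three
# taxonomy levels (class, type, specific signal) plus zero or more anchoring
# token indices; an empty index list corresponds to an unanchorable signal.
@dataclass
class Signal:
    signal_class: str                 # e.g. "single", "combined", "unsure"
    signal_type: str                  # e.g. "dm", "semantic", "graphical"
    specific_signal: str              # e.g. "lexical chain", "semicolon"
    token_indices: List[int] = field(default_factory=list)

@dataclass
class Relation:
    relation_type: str                # e.g. "joint", "concession"
    satellite_tokens: List[str]
    nucleus_tokens: List[str]
    signals: List[Signal] = field(default_factory=list)   # 0..n signals per relation

# The same token index may appear in several Signal.token_indices lists,
# reflecting multiple membership of one token in different signal annotations.
example = Relation(
    relation_type="joint",
    satellite_tokens=["Sociologists", "have", "explored", "..."],
    nucleus_tokens=["psychologists", "have", "examined", "..."],
    signals=[
        Signal("combined", "semantic+syntactic",
               "parallel syntactic construction + lexical chain", [0, 1, 2]),
        Signal("single", "graphical", "semicolon", [3]),
    ],
)
```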
According to RST-SC's taxonomy, signal type includes 9 types such as DMs, genre, graphical, lexical, morphological, numerical, reference, semantic, and syntactic. Each type then has specific subcategories. For instance, the signal type semantic has 7 specific signal subtypes: synonymy, antonymy, meronymy, repetition, indicative word pair, lexical chain, and general word. We will describe some of these in more depth below. In addition to the 9 signal types, RST-SC has 6 combined signal types such as reference+syntactic, semantic+syntactic, and graphical+syntactic etc., and 15 specific signals are identified for the combined signals. Although the rich signaling annotations in RST-SC offer an excellent overview of the relative prevalence of different signal types in the Wall Street Journal corpus, it is difficult to apply the original scheme to the study of individual signal words, since actual signal positions are not identified. While recovering these positions may be possible for some categories using the original guidelines, most signaling annotations (e.g. lexical chains, repetition) cannot be automatically paired with actual tokens, meaning that, in order to use the original RST-SC for our study, we would need to re-annotate it for signal token positions. As this effort is beyond the scope of our study, we will use the smaller data set with anchored signaling annotations from BIBREF27: This data is annotated with the same signal categories as RST-SC, but also includes exact token positions for each signal, including possibly no tokens for unanchorable signals such as some types of genre conventions or graphical layout which are not expressible in terms of specific words. In order to get a better sense of how the annotations work, we consider example SECREF7. . [5] Sociologists have explored the adverse consequences of discrimination; [6] psychologists have examined the mental processes that underpin conscious and unconscious biases; [7] neuroscientists have examined the neurobiological underpinnings of discrimination; [8] and evolutionary theorists have explored the various ways that in-group/out-group biases emerged across the history of our species. – joint [GUM_academic_discrimination] In this example, there is a joint relation between four spans in a fragment from an RST discourse tree. The first tokens in each span form a parallel construction and include semantically related items such as explored and examined (signal class `combined', type `semantic+syntactic', specific subtype `parallel syntactic construction + lexical chain'). The words corresponding to this signal in each span are highlighted in Figure FIGREF15, and are considered to signal each instance of the joint relation. Additionally, the joint relation is also signaled by a number of further signals which are highlighted in the figure as well, such as the semicolons between spans, which correspond to a type `graphical', subtype `semicolon' in RST-SC. The data model of the corpus records which tokens are associated with which categorized signals, and allows for multiple membership of the same token in several signal annotations. In terms of annotation reliability, BIBREF12 reported a weighted kappa of 0.71 for signal subtypes in RST-SC without regard to the span of words corresponding to a signal, while a study by BIBREF37 suggests that signal anchoring, i.e. 
associating RST-SC signal categories with specific tokens achieves a 90.9% perfect agreement score on which tokens constitute signals, or a Cohen's Kappa value of 0.77. As anchored signal positions will be of the greatest interest to our study, we will consider how signal token positions are distributed in the corpus next, and develop an anchoring taxonomy which we will refer back to for the remainder of this paper. <<</Anchored Signals in the GUM Corpus>>> <<<A Taxonomy of Anchored Signals>>> From a structural point of view, one of the most fundamental distinctions with regard to signal realization recognized in previous work is the classification of signaling tokens into satellite or nucleus-oriented positions, i.e. whether a signal for the relation appears within the modifier span or the span being modified BIBREF38. While some relation types exhibit a strong preference for signal position (e.g. using a discourse marker such as because in the satellite for cause, BIBREF39), others, such as concession, are more balanced (almost evenly split signals between satellite and nucleus in BIBREF38). In this study we would like to further refine the taxonomy of signal positions, breaking it down into several features. At the highest level, we have the distinction between anchorable and non-anchorable signals, the latter being signals which correspond to no token in the text (e.g. genre conventions, graphical layout). Below this level, we follow BIBREF38 in classifying signals as satellite or nucleus-oriented, based on whether they appear in the more prominent Elementary Discourse Unit (EDU) of a relation or its dependent. However, several further distinctions may be drawn: (1) whether the signal appears before or after the relation in text order (since we consider the relation to be instantiated as soon as its second argument in the text appears, `before' is interpreted as any token before the second head unit in the discourse tree begins, and `after' is any subsequent token); (2) whether the signal appears in the head unit of the satellite/nucleus, or in a dependent of that unit (this distinction only matters for satellite or nucleus subtrees that consist of more than one unit); and (3) whether the signal is anywhere within the structure dominated by the units participating in the relation, or completely outside of this structure. Table TABREF20 gives an overview of the taxonomy proposed here, which includes the possible combinations of these properties and the distribution of the corresponding anchorable signals found in the signal-annotated subset of the GUM Corpus from BIBREF27. Individual feature combinations can be referred to either as acronyms, e.g. ABIHS for `Anchorable, Before the second EDU of the relation, Inside the relation's subtree, Head unit of the Satellite', or using the group IDs near the bottom of the table (in this case the category numbered Roman I). We will refer back to these categories in our comparison of manually annotated and automatically predicted signals. To illustrate how the taxonomy works in practice, we can consider the example in Figure FIGREF23, which shows a signal whose associated tokens instantiate categories I and IV in a discourse tree – the words demographic variables appear both within a preparation satellite (unit [50], category I), which precedes and points to its nucleus [51–54], and within a satellite inside that block (unit [52], a dependent inside the nucleus block, category IV).
Based on the RST-SC annotation scheme, the signal class is Simple, with the type Semantic and specific sub-type Lexical chain. The numbers at the bottom of Table TABREF20 show the number of tokens signaling each relation at each position, as well as the number of relations which have signal tokens at the relevant positions. The hypothetical categories V and X, with signal tokens which are not within the subtree of satellite or nucleus descendants, are not attested in our data, as far as annotators were able to identify. <<</A Taxonomy of Anchored Signals>>> <<</Data>>> <<<Automatic Signal Extraction>>> <<<A Contextless Frequentist Approach>>> To motivate the need for a fine-grained and contextualized approach to describing discourse relation signals in our data, we begin by extracting some basic data-driven descriptions of our data along the lines presented in Section SECREF3. In order to constrain candidate words to just the most relevant ones for marking a specific signal, we first need a way to address a caveat of the frequentist approach: higher order relations which often connect entire paragraphs (notably background and elaboration) must be prevented from allowing most or even all words in the document to be considered as signaling them. A simple approach to achieving this is to assume `Strong Nuclearity', relying on Marcu's (BIBREF42) Compositionality Criterion for Discourse Trees (CCDT), which suggests that if a relation holds between two blocks of EDUs, then it also holds between their head EDUs. While this simplification may not be entirely accurate in all cases, Table TABREF20 suggests that it captures most signals, and allows us to reduce the space of candidate signal tokens to just the two head EDUs implicated in a relation. We will refer to signals within the head units of a relation as `endocentric' and signals outside this region as `exocentric'. Figure FIGREF25 illustrates this, where units [64] and [65] are the respective heads of two blocks of EDUs, and unit [65] in fact contains a plausible endocentric signal for the result relation, the discourse marker thus. More problematic caveats for the frequentist approach are the potential for over/underfitting and ambiguity. The issue of overfitting is especially thorny in small datasets, in which certain content words appear coincidentally in discourse segments with a certain function. Table TABREF27 shows the most distinctive lexical types for several discourse relations in GUM based on pure ratio of occurrence in head EDUs marked for those relations. On the left, types are chosen which have a maximal frequency in the relevant relationship compared with their overall frequency in the corpus. This quickly overfits the contents of the corpus, selecting irrelevant words such as holiest and Slate for the circumstance relation, or hypnotizing and currency for concession. The same lack of filtering can, however, yield some potentially relevant lexical items, such as causing for result or even highly specific content words such as ammonium, which are certainly not discourse markers, but whose appearance in a sequence is not accidental: the word is in this case typical for sequences in how-to guides, where use of ingredients in a recipe is described in a sequence. Even if these kinds of items may be undesirable candidates for signal words in general, it seems likely that some rare content words may function as signals in context, such as evaluative adjectives (e.g. exquisite) enabling readers to recognize an evaluation. 
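As a concrete reference point for the ratio-based ranking just described, the sketch below shows one way such distinctiveness scores could be computed over head EDU pairs (following the Strong Nuclearity simplification); the function and parameter names are our own, and the min_count parameter anticipates the frequency threshold discussed next.

```python
from collections import Counter, defaultdict

def distinctive_types(head_edu_pairs, min_count=1):
    """Rank word types by their ratio of occurrence under a given relation to
    their overall frequency (a purely frequentist, contextless score).

    head_edu_pairs: iterable of (relation_label, tokens), where tokens are the
    words of the two head EDUs implicated in the relation. Overall frequency is
    approximated here by frequency across all head EDU pairs."""
    total = Counter()                    # overall frequency of each type
    by_relation = defaultdict(Counter)   # frequency of each type per relation
    for relation, tokens in head_edu_pairs:
        for tok in tokens:
            tok = tok.lower()
            total[tok] += 1
            by_relation[relation][tok] += 1

    scores = defaultdict(dict)
    for relation, counts in by_relation.items():
        for tok, freq in counts.items():
            if total[tok] >= min_count:  # suppress rare, easily overfitted items
                scores[relation][tok] = freq / total[tok]
    return scores

# e.g. sorted(scores["concession"].items(), key=lambda kv: -kv[1])[:10]
```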
If we are willing to give up on the latter kind of rare items, the overfitting problem can be alleviated somewhat by setting a frequency threshold for each potential signal lexeme, thereby suppressing rare items. The items on the right of the table are limited to types occurring more than 10 times. Since the most distinctive items on the left are all comparatively rare (and therefore exclusive to their relations), they do not overlap with the items on the right. Looking at the items on the right, several signals make intuitive sense, especially for relations such as solutionhood (used for question-answer pairs) or concession, which show the expected WH words and auxiliary did, or discourse markers such as though, respectively. At the same time, some high frequency items may be spurious, such as NATO for justify, which could perhaps be filtered out based on low dispersion across documents, but also stuff for cause, which probably could not be. Another problem with the lists on the right is that some expected strong signals, such as the word and for sequence are absent from the table. This is not because and is not frequent in sequences, but rather because it is a ubiquitous word, and as a result, it is not very specific to the relation. However if we look at actual examples of and inside and outside of sequences, it is easy to notice that the kind of and that does signal a relation in context is often clause initial as in SECREF24 and very different from the adnominal coordinating ands in SECREF24, which do not signal the relation: . [she was made a Dame by Elizabeth II for services to architecture,] [and in 2015 she became the first and only woman to be awarded the Royal Gold Medal]$_{\textsc {sequence}}$ . [Gordon visited England and Scotland in 1686.] [In 1687 and 1689 he took part in expeditions against the Tatars in the Crimea]$_{\textsc {sequence}}$ These examples suggest that a data-driven approach to signal detection needs some way of taking context into account. In particular, we would like to be able to compare instances of signals and quantify how strong the signal is in each case. In the next section, we will attempt to apply a neural model with contextualized word embeddings BIBREF44 to this problem, which will be capable of learning contextualized representations of words within the discourse graph. <<</A Contextless Frequentist Approach>>> <<<A Contextualized Neural Model>>> <<<Task and Model Architecture>>> Since we are interested in identifying unrestricted signaling devices, we deliberately avoid a supervised learning approach as used in automatic signal detection trained on resources such as PDTB. While recent work on PDTB connective detection (BIBREF26, BIBREF45) achieves good results (F-Scores of around 88-89 for English PDTB explicit connectives), the use of such supervised approaches would not tell us about new signaling devices, and especially about unrestricted lexical signals and other coherence devices not annotated in PDTB. Additionally, we would be restricted to the newspaper text types represented in the Wall Street Journal corpus, since no other large English corpus has been annotated for anchored signals. Instead, we will adopt a distantly supervised approach: we will task a model with supervised discourse relation classification on data that has not been annotated for signals, and infer the positions of signals in the text by analyzing the model's behavior. 
A key assumption, which we will motivate below, is that signals can have different levels of signaling strength, corresponding to their relative importance in identifying a relation. We would like to assume that different signal strength is in fact relevant to human analysts' decision making in relation identification, though in practice we will be focusing on model estimates of strength, the usefulness of which will become apparent below. As a framework, we use the sentence classifier configuration of FLAIR BIBREF46 with a biLSTM encoder/classifier architecture fed by character and word level representations composed of a concatenation of fixed 300 dimensional GloVe embeddings BIBREF47, pre-trained contextualized FLAIR word embeddings, and pre-trained contextualized character embeddings from AllenNLP BIBREF48 with FLAIR's default hyperparameters. The model's architecture is shown in Figure FIGREF30. Contextualized embeddings BIBREF44 have the advantage of giving distinct representations to different instances of the same word based on the surrounding words, meaning that an adnominal and connecting two NPs can be distinguished from one connecting two verbs based on its vector space representation in the model. Using character embeddings, which give vector space representations to substrings within each word, means that the model can learn the importance of morphological forms, such as the English gerund's -ing suffix, even for out-of-vocabulary items not seen during training. Formally, the input to our system is formed of EDU pairs which are the head units within the respective blocks of discourse units that they belong to, which are in turn connected by an instance of a discourse relation. This means that every discourse relation in the corpus is expressed as exactly one EDU pair. Each EDU is encoded as a (possibly padded) sequence of $n$-dimensional vector representations of each word ${x_1,..,x_T}$, with some added separators which are encoded in the same way and described below. The bidirectional LSTM composes representations and context for the input, and a fully connected softmax layer gives the probability of each relation: where the probability of each relation $rel_i$ is derived from the composed output of the function $h$ across time steps $0 \ldots t$, $\delta \in \lbrace b,f\rbrace $ is the direction of the respective LSTMs, $c_t^\delta $ is the recurrent context in each direction and $\theta = {W,b}$ gives the model weights and bias parameters (see BIBREF46 for details). Note that although the output of the system is ostensibly a probability distribution over relation types, we will not be directly interested in the most probable relation as outputted by the classifier, but rather in analyzing the model's behavior with respect to the input word representations as potential signals of each relation. In order to capitalize on the system's natural language modeling knowledge, EDU satellite-nucleus pairs are presented to the model in text order (i.e. either the nucleus or the satellite may come first). However, the model is given special separator symbols indicating the positions of the satellite and nucleus, which are essential for deciding the relation type (e.g. cause vs. result, which may have similar cue words but lead to opposite labels), and a separator symbol indicating the transition between satellite and nucleus. This setup is illustrated in SECREF29. . $<$s$>$ Sometimes this information is available , $<$sep$>$ but usually not . 
$<$n$>$ Label: concession In this example, the satellite precedes the nucleus and is therefore presented first. The model is made aware of the fact that the segment on the left is the satellite thanks to the tag <s>. Since the LSTM is bi-directional, it is aware of positions being within the nucleus or satellite, as well as their proximity to the separator, at every time step. We reserve the signal-annotated subset of 12 documents from GUM for testing, which contains 1,185 head EDU pairs (each representing one discourse relation), and a random selection of 12 further documents from the remaining RST-annotated GUM data (1,078 pairs) is taken as development data, leaving 102 documents (5,828 pairs) for training. The same EDUs appear in multiple pairs if a unit has multiple children with distinct relations, but no instances of EDUs are shared across partitions, since the splits are based on document boundaries. We note again that for the training and development data, we have no signaling annotations of any kind; this is possible since the network does not actually use the human signaling annotations we will be evaluating against: its distant supervision consists solely of the RST relation labels. <<</Task and Model Architecture>>> <<<Relation Classification Performance>>> Although only used as an auxiliary training task, we can look at the model's performance on predicting discourse relations, which is given in Table TABREF34. Unsurprisingly, the model performs best on the most frequent relations in the corpus, such as elaboration or joint, but also on rarer ones which tend to be signaled explicitly, such as condition (often signaled by if), solutionhood (used for question-answer pairs signaled by question marks and WH words), or concession (DMs such as although). However, the model also performs reasonably well for some trickier (i.e. less often introduced by unambiguous DMs) but frequent relations, such as preparation, circumstance, and sequence. Rare relations with complex contextual environments, such as result, justify or antithesis, unsurprisingly do not perform well, with the latter two showing an F-score of 0. The relation restatement, which also shows no correct classifications, reveals a weakness of the model: while it is capable of recognizing signals in context, it cannot learn that repetition in and of itself, regardless of specific areas in vector space, is important (see Section SECREF6 for more discussion of these and other classification weaknesses). Although this is not the actual task targeted by the current paper, we may note that the overall performance of the model, with an F-Score of 44.37, is not bad, though below the performance of state-of-the-art full discourse parsers (see BIBREF49) – this is to be expected, since the model is not aware of the entire RST tree, rather looking only at EDU pairs out of context, and given that standard scores on RST-DT come from a larger and more homogeneous corpus, with fewer relations and some easy cases that are absent from GUM. Given the model's performance on relation classification, which is far from perfect, one might wonder whether signal predictions made by our analysis should be trusted. This question can be answered in two ways: first, quantitatively, we will see in Section SECREF5 that model signal predictions overlap considerably with human judgments, even when the predicted relation is incorrect.
Intuitively, for similar relations, such as concession or contrast, both of which are adversative, the model may notice a relevant cue (e.g. `but', or contrasting lexical items) despite choosing the wrong one. Second, as we will see below, we will be analyzing the model's behavior with respect to the probability of the correct relation, regardless of the label it ultimately chooses, meaning that the importance of predicting the correct label exactly will be diminished further. <<</Relation Classification Performance>>> <<<Signaling Metric>>> The actual performance we are interested in evaluating is the model's ability to extract signals for given discourse relations, rather than its accuracy in predicting the relations. To do so, we must extract anchored signal predictions from the model, which is non-trivial. While earlier work on interpreting neural models has focused on token-wise softmax probability BIBREF50 or attention weights BIBREF51, using contextualized embeddings complicates the evaluation: since word representations are adjusted to reflect neighboring words, the model may assign higher importance to the word standing next to what a human annotator may interpret as a signal. Example SECREF36 illustrates the problem: . [RGB]230, 230, 230To [RGB]53, 53, 53provide [RGB]165, 165, 165information [RGB]179, 179, 179on [RGB]175, 175, 175the [RGB]160, 160, 160analytical [RGB]157, 157, 157sample [RGB]187, 187, 187as [RGB]170, 170, 170a [RGB]168, 168, 168whole [RGB]207, 207, 207, $\xrightarrow[\text{pred:preparation}]{\text{gold:purpose}}$ [RGB]168, 168, 168two [RGB]170, 170, 170additional [RGB]164, 164, 164demographic [RGB]175, 175, 175variables [RGB]182, 182, 182are [RGB]165, 165, 165included [RGB]230, 230, 230. Each word in SECREF36 is shaded based on the softmax probability assigned to the correct relation of the satellite, i.e. how `convincing' the model found the word in terms of local probability. In addition, the top-scoring word in each sentence is rendered in boldface for emphasis. The gold label for the relation is placed above the arrow, which indicates the direction of the relation (satellite to nucleus), and the model's predicted label is shown under the arrow. Intuitively, the strongest signal of the purpose relation in SECREF36 is the initial infinitive marker To – however, the model ranks the adjacent provide higher and almost ignores To. We suspect that the reason for this, and many similar examples in the model evaluated based on relation probabilities, is that contextual embeddings allow for a special representation of the word provide next to To, making it difficult to tease apart the locus of the most meaningful signal. To overcome this complication, we use the logic of permutation importance, treating the neural model as a black box and manipulating the input to discover relevant features in the data (cf. BIBREF52). We reason that this type of evaluation is more robust than, for example, examining model internal attention weights because such weights are not designed or trained with a reward ensuring they are informative – they are simply trained on the same classification error loss as the rest of the model. Instead, we can withhold potentially relevant information from the model directly: After training is complete, we feed the test data to the model in two forms – as-is, and with each word masked, as shown in SECREF36. . Original: <$s$>$ To\quad \ p̄rovide īnformation .. <$sep$>$ .. <$n$>$ Original: \: $<$s$>$ \: \ To \: provide \: information \: ... 
\: $<$sep$>$ \: ... \: $<$n$>$ \\ Masked1: \: $<$s$>$ \: $<$X$>$ \: provide \: information \: ... \: $<$sep$>$ \: ... \: $<$n$>$ \\ Masked2: \: $<$s$>$ \: \ To \: \ $<$X$>$ \: information \: ... \: $<$sep$>$ \: ... \: $<$n$>$ \\ Masked3: \: $<$s$>$ \: \ To \: provide \: \ $<$X$>$ \: ... \: $<$sep$>$ \: ... \: $<$n$>$ $ Label: purpose We reason that, if a token is important for predicting the correct label, masking it will degrade the model's classification accuracy, or at least reduce its reported classification certainty. In SECREF36, it seems reasonable to assume that masking the word `To' has a greater impact on predicting the label purpose than masking the word `provide', and even less so, the following noun `information'. We therefore use reduction in softmax probability of the correct relation as our signaling strength metric for the model. We call this metric ${\Delta }_s$ (for delta-softmax), which can be written as: where $rel$ is the true relation of the EDU pair, $t_i$ represents the token at index $i$ of $N$ tokens, and $X_{mask=i}$ represents the input sequence with the masked position $i$ (for $i \in 1 \ldots N$ ignoring separators, or $\phi $, the empty set). To visualize the model's predictions, we compare ${\Delta }_s$ for a particular token to two numbers: the maximum ${\Delta }_s$ achieved by any token in the current pair (a measure of relative importance for the current classification) and the maximum ${\Delta }_s$ achieved by any token in the current document (a measure of how strongly the current relation is signaled compared to other relations in the text). We then shade each token 50% based on the first number and 50% based on the second. As a result, the most valid cues in an EDU pair are darker than their neighbors, but EDU pairs with no good cues are overall very light, whereas pairs with many good signals are darker. Some examples of this visualization are given in SECREF36-SECREF36 (human annotated endocentric signal tokens are marked by double underlines). . [RGB]61, 61, 61To [RGB]112, 112, 112provide [RGB]205, 205, 205information [RGB]230, 230, 230on [RGB]230, 230, 230the [RGB]230, 230, 230analytical [RGB]230, 230, 230sample [RGB]230, 230, 230as [RGB]230, 230, 230a [RGB]230, 230, 230whole [RGB]230, 230, 230, $\xrightarrow[\text{pred:preparation}]{\text{gold:purpose}}$ [RGB]230, 230, 230two [RGB]183, 183, 183additional [RGB]230, 230, 230demographic [RGB]230, 230, 230variables [RGB]94, 94, 94are [RGB]194, 194, 194included [RGB]163, 163, 163. . [RGB]230, 230, 230Telling [RGB]230, 230, 230good [RGB]230, 230, 230jokes [RGB]230, 230, 230is [RGB]230, 230, 230an [RGB]230, 230, 230art [RGB]230, 230, 230that [RGB]230, 230, 230comes [RGB]230, 230, 230naturally [RGB]230, 230, 230to [RGB]230, 230, 230some [RGB]211, 211, 211people [RGB]135, 135, 135, $\xleftarrow[\text{pred:contrast}]{\text{gold:contrast}}$ [RGB]21, 21, 21but [RGB]209, 209, 209for [RGB]207, 207, 207others [RGB]230, 230, 230it [RGB]217, 217, 217takes [RGB]230, 230, 230practice [RGB]230, 230, 230and [RGB]189, 189, 189hard [RGB]230, 230, 230work [RGB]230, 230, 230. . 
[RGB]230, 230, 230It [RGB]230, 230, 230is [RGB]230, 230, 230possible [RGB]230, 230, 230that [RGB]230, 230, 230these [RGB]230, 230, 230two [RGB]230, 230, 230children [RGB]230, 230, 230understood [RGB]230, 230, 230the [RGB]230, 230, 230task [RGB]230, 230, 230and [RGB]230, 230, 230really [RGB]230, 230, 230did [RGB]230, 230, 230believe [RGB]230, 230, 230that [RGB]230, 230, 230the [RGB]230, 230, 230puppet [RGB]230, 230, 230did [RGB]230, 230, 230not [RGB]230, 230, 230produce [RGB]230, 230, 230any [RGB]230, 230, 230poor [RGB]230, 230, 230descriptions [RGB]230, 230, 230, [RGB]230, 230, 230and [RGB]230, 230, 230in [RGB]230, 230, 230this [RGB]230, 230, 230regard [RGB]230, 230, 230, [RGB]230, 230, 230are [RGB]230, 230, 230not [RGB]230, 230, 230yet [RGB]230, 230, 230adult-like [RGB]230, 230, 230in [RGB]230, 230, 230their [RGB]230, 230, 230SI [RGB]230, 230, 230interpretations [RGB]230, 230, 230. $\xleftarrow[\text{pred:evaluation}]{\text{gold:evaluation}}$ [RGB]230, 230, 230This [RGB]230, 230, 230is [RGB]41, 41, 41unlikely The highlighting in SECREF36 illustrates the benefits of the masking based evaluation compared to SECREF36: the token To is now clearly the strongest signal, and the verb is taken to be less important, followed by the even less important object of the verb. This is because removing the initial To hinders classification much more than the removal of the verb or noun. We note also that although the model in fact misclassified this example as preparation, we can still use masking importance to identify To, since the score queried from the model corresponds to a relative decrease in the probability of the correct relation, purpose, even if this was not the highest scoring relation overall. In SECREF36 we see the model's ability to correctly predict contrast based on the DM but. Note that despite a rather long sentence, the model does not need any other word nearly as much for the classification. Although the model is not trained explicitly to detect discourse markers, the DM can be recognized due to the fact that masking it leads to a drop of 66% softmax probability (${\Delta }_s$=0.66) of this pair representing the contrast relation. We can also note that a somewhat lower scoring content word is also marked: hard (${\Delta }_s$=0.18). In our gold signaling annotations, this word was marked together with comes naturally as a signal, due to the contrast between the two concepts (additionally, some people is flagged as a signal along with others). The fact that the model finds hard helpful, but does not need the contextual near antonym naturally, suggests that it is merely learning that words in the semantic space near hard may indicate contrast, and not learning about the antonymous relationship – otherwise we would expect to see `naturally' have a stronger score (see also the discussion in Section SECREF6). Finally SECREF36 shows that, much like in the case of hard, the model is not biased towards traditional DMs, confirming that it is capable of learning about content words, or neighborhoods of content words in vector space. In a long EDU pair of 41 words, the model relies almost exclusively on the word unlikely (${\Delta }_s$=0.36) to correctly label the relation as evaluation. By contrast, the anaphoric demonstrative `This' flagged by the human annotator, which is a more common function word, is disregarded, perhaps because it can appear with several other relations, and is not particularly exclusive to evaluation. 
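To make the masking procedure concrete, the sketch below shows how per-token ${\Delta }_s$ scores could be computed, assuming the trained classifier is wrapped in a function prob_of_relation(tokens, relation) that returns the softmax probability of a given relation for an input token sequence; this wrapper and its signature are placeholders rather than the actual FLAIR interface, while the <s>, <sep>, <n> and <X> symbols follow the input format described above.

```python
def delta_softmax(tokens, true_relation, prob_of_relation, mask_symbol="<X>"):
    """Masking-based signaling strength: for each token, the drop in the model's
    softmax probability for the *correct* relation when that token is masked.

    prob_of_relation(tokens, relation) -> float is assumed to wrap the trained
    classifier (placeholder interface). Separator symbols are never masked."""
    separators = {"<s>", "<sep>", "<n>"}
    baseline = prob_of_relation(tokens, true_relation)    # unmasked input
    scores = {}
    for i, tok in enumerate(tokens):
        if tok in separators:
            continue
        masked = tokens[:i] + [mask_symbol] + tokens[i + 1:]
        scores[i] = baseline - prob_of_relation(masked, true_relation)
    return scores   # positive values are signal-like, negative ones distractor-like
```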
These results suggest that the model may be capable of recognizing signals through distant supervision, allowing it to validate human annotations, to potentially point out signals that may be missed by annotators, and most importantly, to quantify signaling strength on a sliding scale. At the same time, we need a way to evaluate the model's quality and assess the kinds of errors it makes, as well as what we can learn from them. We therefore move on to evaluating the model and its errors next. <<</Signaling Metric>>> <<</A Contextualized Neural Model>>> <<</Automatic Signal Extraction>>> <<<Evaluation and Error Analysis>>> <<<Evaluation Metric>>> To evaluate the neural model, we would like to know how well ${\Delta }_s$ corresponds to annotators' gold standard labels. This leads to two kinds of problems: the first is that the model is distantly supervised, and therefore does not know about signal types, subtypes, or any aspect of signaling annotation and its relational structure. The second problem is that signaling annotations are categorical, and do not correspond to the ratio-scaled predictions provided by ${\Delta }_s$ (this is in fact one of the motivations for desiring a model-based estimate of signaling strength). The first issue means that we can only examine the model's ability to locate signals – not to classify them. Although there may be some conceivable ways of analyzing model output to identify classes such as DMs (which are highly lexicalized, rather than representing broad regions of vector space, as words such as unlikely might), or more contextual relational signals, such as pronouns, this line of investigation is beyond the scope of the present paper. A naive solution to the second problem might be to identify a cutoff point, e.g. deciding that all and only words scoring ${\Delta }_s>$0.15 are predicted to be signals. The problem with the latter approach is that sentences can be very different in many ways, and specifically in both length and in levels of ambiguity. Sentences with multiple, mutually redundant cues, may produce lower ${\Delta }_s$ scores compared to shorter sentences with a subset of the same cues. Conversely, in very short sentences with low signal strength, the model may reasonably be expected to degrade very badly with the deletion of almost any word, as the context becomes increasingly incomprehensible. For these reasons, we choose to adopt an evaluation metric from the paradigm of information retrieval, and focus on recall@k (recall at rank k, for $k=1,2,3$...). The idea is to poll the model for each sentence in which some signals have been identified, and see whether the model is able to find them if we let it guess using the word with the maximal ${\Delta }_s$ score (recall@1), regardless of how high that score is, or alternatively relax the evaluation criteria and see whether the human annotator's signal tokens appear at rank 2 or 3. Figure FIGREF40 shows numbers for recall@k for the top 3 ranks outputted by the model, next to random guess baselines. The left, middle and right panels in Figure FIGREF40 correspond to measurements when all signals are included, only cases contained entirely in the head EDUs shown to the model, and only DMs, respectively. The scenario on the left is rather unreasonable and is included only for completeness: here the model is also penalized for not detecting signals such as lexical chains, part of which is outside the units that the model is being shown. An example of such a case can be seen in Figure FIGREF41. 
The phrase Respondents in unit [23] signals the relation elaboration, since it is coreferential with a previous mention of the respondents in [21]. However, because the model is only given heads of EDU blocks to classify, it does not have access to the first occurrence of respondents while predicting the elaboration relation – the first half of the signal token set is situated in a child of the nucleus EDU before the relation, i.e. it belongs to group IV in the taxonomy in Table TABREF20. Realistically, our model can only be expected to learn about signals from `directly participating' EDUs, i.e. groups I, II, VI and VII, the `endocentric' signal groups from Section SECREF16. Although most signals belong to endocentric categories (71.62% of signaled relations belong to these groups, cf. Table TABREF20), exocentric cases form a substantial portion of signals which we have little hope of capturing with the architecture used here. As a result, recall metrics in the `all signals' scenario are closest to the random baselines, though the signals detected in other instances still place the model well above the baseline. A more reasonable evaluation is the one in the middle panel of Figure FIGREF40, which includes only endocentric signals as defined in the taxonomy. EDUs with no endocentric signals are completely disregarded in this scenario, which substantially reduces the number of tokens considered to be signals, since, while many tokens are part of some meaningful lexical chain in the document, requiring signals to be contained only in the pair of head units eliminates a wide range of candidates. Although the random baseline is actually very slightly higher (perhaps because eliminated EDUs were often longer ones, sharing small amounts of material with larger parts of the text, and therefore prone to penalizing the baseline; many words mean more chances for a random guess to be wrong), model accuracy is substantially better in this scenario, reaching a 40% chance of hitting a signal with only one guess, exceeding 53% with two guesses, and capping at 64% for recall@3, over 20 points above baseline. Finally, the right panel in the figure shows recall when only DMs are considered. In this scenario, a random guess fares very poorly, since most words are not DMs. The model, by contrast, achieves the highest results in all metrics, since DMs have the highest cue validity for relation classification, and the model attends to them most strongly. With just one guess, recall is over 56%, and goes as high as 67% for recall@3. The baseline only goes as high as 16% for three guesses. <<</Evaluation Metric>>> <<<Qualitative Analysis>>> Looking at the model's performance qualitatively, it is clear that it can detect not only DMs, but also morphological cues (e.g. gerunds as markers of elaboration, as in SECREF43), semantic classes and sentiment, such as positive and negative evaluatory terms in SECREF43, as well as multiple signals within the same EDU, as in SECREF43. In fact, only about 8.3% of the tokens correctly identified by the model in Table TABREF45 below are of the DM type, whereas about 7.2% of all tokens flagged by human annotators were DMs, meaning that the model frequently matches non-DM items to discourse relation signals (see Performance on Signal Types below). 
It should also be noted that signals can be recognized even when the model misclassifies relations, since ${\Delta }_s$ does not rely on correct classification: it merely quantifies the contribution of a word in context toward the correct label's score. If we examine the influence of each word on the score of the correct relation, that impact should and does still correlate with human judgments based on what the system may tag as the second or third best class to choose. . [RGB]230, 230, 230For [RGB]230, 230, 230the [RGB]230, 230, 230present [RGB]230, 230, 230analysis [RGB]230, 230, 230, [RGB]230, 230, 230these [RGB]230, 230, 230responses [RGB]230, 230, 230were [RGB]230, 230, 230recoded [RGB]230, 230, 230into [RGB]230, 230, 230nine [RGB]230, 230, 230mutually [RGB]230, 230, 230exclusive [RGB]230, 230, 230categories $\xleftarrow[\text{pred:elaboration}]{\text{gold:result}}$ [RGB]63, 63, 63capturing [RGB]219, 219, 219the [RGB]230, 230, 230following [RGB]230, 230, 230options [RGB]135, 135, 135: . [RGB]185, 185, 185Professor [RGB]219, 219, 219Eastman [RGB]223, 223, 223said [RGB]207, 207, 207he [RGB]194, 194, 194is [RGB]64, 64, 64alarmed [RGB]230, 230, 230by [RGB]230, 230, 230what [RGB]230, 230, 230they [RGB]230, 230, 230found [RGB]230, 230, 230. $\xrightarrow[\text{pred:preparation}]{\text{gold:evaluation}}$ [RGB]230, 230, 230" [RGB]230, 230, 230Pregnant [RGB]229, 229, 229women [RGB]187, 187, 187in [RGB]230, 230, 230Australia [RGB]98, 98, 98are [RGB]213, 213, 213getting [RGB]230, 230, 230about [RGB]230, 230, 230half [RGB]171, 171, 171as [RGB]159, 159, 159much [RGB]230, 230, 230as [RGB]230, 230, 230what [RGB]155, 155, 155they [RGB]155, 155, 155require [RGB]223, 223, 223on [RGB]214, 214, 214a [RGB]109, 109, 109daily [RGB]176, 176, 176basis [RGB]111, 111, 111. . [RGB]195, 195, 195Even [RGB]230, 230, 230so [RGB]230, 230, 230, [RGB]230, 230, 230estimates [RGB]230, 230, 230of [RGB]230, 230, 230the [RGB]230, 230, 230prevalence [RGB]230, 230, 230of [RGB]230, 230, 230perceived [RGB]230, 230, 230discrimination [RGB]219, 219, 219remains [RGB]230, 230, 230rare $\xleftarrow[\text{pred:evidence}]{\text{gold:concession}}$ [RGB]111, 111, 111At [RGB]63, 63, 63least [RGB]230, 230, 230one [RGB]230, 230, 230prior [RGB]230, 230, 230study [RGB]230, 230, 230by [RGB]230, 230, 230Kessler [RGB]225, 225, 225and [RGB]230, 230, 230colleagues [RGB]230, 230, 230[ [RGB]230, 230, 23015 [RGB]161, 161, 161] [RGB]200, 200, 200, [RGB]136, 136, 136however [RGB]222, 222, 222, [RGB]228, 228, 228using [RGB]230, 230, 230measures [RGB]230, 230, 230of [RGB]230, 230, 230perceived [RGB]224, 224, 224discrimination [RGB]217, 217, 217in [RGB]230, 230, 230a [RGB]230, 230, 230large [RGB]218, 218, 218American [RGB]230, 230, 230sample [RGB]230, 230, 230, [RGB]230, 230, 230reported [RGB]230, 230, 230that [RGB]230, 230, 230approximately [RGB]230, 230, 23033 [RGB]212, 212, 212% [RGB]230, 230, 230of [RGB]230, 230, 230respondents [RGB]156, 156, 156reported [RGB]169, 169, 169some [RGB]122, 122, 122form [RGB]168, 168, 168of [RGB]230, 230, 230discrimination Unsurprisingly, the model sometimes make sporadic errors in signal detection for which good explanations are hard to find, especially when its predicted relation is incorrect, as in SECREF43. Here the evaluative adjective remarkable is missed in favor of neighboring words such as agreed and a subject pronoun, which are not indicative of the evaluation relation in this context but are part of several cohorts of high scoring words. 
However, the most interesting and interpretable errors arise when ${\Delta }_s$ scores are high compared to an entire document, and not just among words in one EDU pair, in which most or even all words may be relatively weak signals. As an example of such a false positive with high confidence, we can consider SECREF43. In this example, the model correctly assigns the highest score to the DM so marking a purpose relation. However, it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the benefactee of some action, which contributes to the purpose reading and helps to disambiguate so, despite not being considered a signal by annotators. . [RGB]216, 216, 216The [RGB]99, 99, 99agreement [RGB]89, 89, 89was [RGB]230, 230, 230that [RGB]131, 131, 131Gorbachev [RGB]102, 102, 102agreed [RGB]230, 230, 230to [RGB]230, 230, 230a [RGB]230, 230, 230quite [RGB]230, 230, 230remarkable [RGB]125, 125, 125concession [RGB]230, 230, 230: $\xrightarrow[\text{pred:preparation}]{\text{gold:evaluation}}$ [RGB]64, 64, 64he [RGB]81, 81, 81agreed [RGB]230, 230, 230to [RGB]230, 230, 230let [RGB]220, 220, 220a [RGB]143, 143, 143united [RGB]149, 149, 149Germany [RGB]230, 230, 230join [RGB]83, 83, 83the [RGB]230, 230, 230NATO [RGB]230, 230, 230military [RGB]230, 230, 230alliance [RGB]230, 230, 230. . [RGB]230, 230, 230The [RGB]220, 220, 220opening [RGB]230, 230, 230of [RGB]230, 230, 230the [RGB]230, 230, 230joke [RGB]230, 230, 230— [RGB]230, 230, 230or [RGB]230, 230, 230setup [RGB]230, 230, 230— [RGB]230, 230, 230should [RGB]230, 230, 230have [RGB]230, 230, 230a [RGB]230, 230, 230basis [RGB]230, 230, 230in [RGB]230, 230, 230the [RGB]230, 230, 230real [RGB]200, 200, 200world $\xleftarrow[\text{pred:purpose}]{\text{gold:purpose}}$ [RGB]7, 7, 7so [RGB]73, 73, 73your [RGB]230, 230, 230audience [RGB]230, 230, 230can [RGB]230, 230, 230relate [RGB]230, 230, 230to [RGB]230, 230, 230it [RGB]230, 230, 230, In other cases, the model points out plausible signals which were passed over by an annotator, and may be considered errors in the gold standard. For example, the model easily notices that question marks indicate the solutionhood relation, even where these were skipped by annotators in favor of marking WH words instead: . [RGB]230, 230, 230Which [RGB]230, 230, 230previous [RGB]230, 230, 230Virginia [RGB]230, 230, 230Governor(s) [RGB]230, 230, 230do [RGB]230, 230, 230you [RGB]230, 230, 230most [RGB]230, 230, 230admire [RGB]230, 230, 230and [RGB]230, 230, 230why [RGB]12, 12, 12? $\xrightarrow[\text{pred:solutionhood}]{\text{gold:solutionhood}}$ [RGB]230, 230, 230Thomas [RGB]230, 230, 230Jefferson [RGB]183, 183, 183. From the model's perspective, the question mark, which scores ${\Delta }_s$=0.79, is the single most important signal, and virtually sufficient for classifying the relation correctly, though it was left out of the gold annotations. The WH word Which and the sentence final why, by contrast, were noticed by annotators but were are not as unambiguous (the former could be a determiner, and the latter in sentence final position could be part of an embedded clause). In the presence of the question mark, their individual removal has much less impact on the classification decision. Although the model's behavior is sensible and can reveal annotation errors, it also suggests that ${\Delta }_s$ will be blind to auxiliary signals in the presence of very strong, independently sufficient cues. 
Using the difference in likelihood of correct relation prediction as a metric also raises the possibility of an opposite concept to signals, which we will refer to as distractors. Since ${\Delta }_s$ is a signed measure of difference, it is in fact possible to obtain negative values whenever the removal or masking of a word results in an improvement in the model's ability to predict the relation. In such cases, and especially when the negative value is of a large magnitude, it seems like a reasonable interpretation to say that a word functions as a sort of anti-signal, preventing or complicating the recognition of what might otherwise be a more clear-cut case. Examples SECREF43–SECREF43 show some instances of distractors identified by the masking procedure (distractors with ${\Delta }_s<$-0.2 are underlined). . [RGB]230, 230, 230How [RGB]230, 230, 230do [RGB]230, 230, 230they [RGB]201, 201, 201treat [RGB]167, 167, 167those [RGB]210, 210, 210not [RGB]190, 190, 190like [RGB]230, 230, 230themselves [RGB]100, 100, 100? $\xrightarrow[\text{pred:solutionhood}]{\text{gold:preparation}}$ [RGB]52, 52, 52then [RGB]230, 230, 230they [RGB]230, 230, 230're [RGB]230, 230, 230either [RGB]230, 230, 230over-zealous [RGB]230, 230, 230, [RGB]230, 230, 230ignorant [RGB]230, 230, 230of [RGB]230, 230, 230other [RGB]230, 230, 230people [RGB]230, 230, 230or [RGB]230, 230, 230what [RGB]230, 230, 230to [RGB]230, 230, 230avoid [RGB]230, 230, 230those [RGB]230, 230, 230that [RGB]230, 230, 230contradict [RGB]230, 230, 230their [RGB]230, 230, 230fantasy [RGB]230, 230, 230land [RGB]230, 230, 230that [RGB]220, 220, 220caters [RGB]230, 230, 230to [RGB]230, 230, 230them [RGB]230, 230, 230and [RGB]230, 230, 230them [RGB]230, 230, 230only [RGB]230, 230, 230. . [RGB]230, 230, 230God [RGB]230, 230, 230, [RGB]230, 230, 230I [RGB]230, 230, 230do [RGB]230, 230, 230n't [RGB]230, 230, 230know [RGB]51, 51, 51! $\xrightarrow[\text{pred:preparation}]{\text{gold:preparation}}$ [RGB]230, 230, 230but [RGB]230, 230, 230nobody [RGB]230, 230, 230will [RGB]230, 230, 230go [RGB]230, 230, 230to [RGB]230, 230, 230fight [RGB]230, 230, 230for [RGB]230, 230, 230noses [RGB]230, 230, 230any [RGB]219, 219, 219more [RGB]169, 169, 169. In SECREF43, a rhetorical question trips up the classifier, which predicts the question-answer relation solutionhood instead of preparation. Here the initial WH word How and the subsequent auxiliary do-support both distract (with ${\Delta }_s$=-0.23 and -0.25) from the preparation relation, which is however being signaled positively by the DM then in the nucleus unit. Later on, the adverb only is also disruptive (${\Delta }_s$=-0.31), perhaps due to a better association with adversative relations, such as contrast. In SECREF43, a preparatory “God, I don't know!” is followed up with a nucleus starting with but, which typically marks a concession or other adversative relation. In fact, the DM but is related to a concessive relation with another EDU (not shown), which the model is not aware of while making the classification for the preparation. Although this example reveals a weakness in the model's inability to consider broader context, it also reveals the difficulty of expecting DMs to fall in line with a strong nuclearity assumption: since units serve multiple functions as satellites and nuclei, signals which aid the recognition of one relation may hinder the recognition of another. 
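Operationally, flagging such distractors amounts to reading the same per-token scores off with a negative threshold; the following is a minimal sketch, reusing the illustrative delta_softmax output from above and the -0.2 cutoff used in the examples in this section.

```python
def find_distractors(tokens, scores, threshold=-0.2):
    """Return tokens whose masking *improves* prediction of the correct relation
    by more than |threshold|, i.e. candidate distractors (anti-signals).
    `scores` maps token index -> delta-softmax, as in the earlier sketch."""
    return [(i, tokens[i]) for i, s in scores.items() if s < threshold]
```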
<<</Qualitative Analysis>>> <<<Performance on Signal Types>>> To better understand the kinds of signals which the model captures better or worse, Table TABREF45 gives a breakdown of performance by signal type and specific signal categories, for categories attested over 20 times (note that the categories are human labels assigned to the corresponding positions – the system does not predict signal types). To evaluate performance for all types we cannot use recall@1–3, since some sentences contain more than 3 signal tokens, which would lead to recall errors even if the top 3 ranks are correctly identified signals. The scores in the table therefore express how many of the signal tokens belonging to each subtype in the gold annotations are recognized if we allow the system to make as many guesses as there are signal tokens in each EDU pair, plus a tolerance of a maximum of 2 additional tokens (similarly to recall@3). We also note that a single token may be associated with multiple signal types, in which case its identification or omission is counted separately for each type. Three of the top four categories which the model performs best for are, perhaps unsurprisingly, the most lexical ones: alternate expression captures non-DM phrases such as I mean (for elaboration), or the problem is (for concession), and indicative word includes lexical items such as imperative see (consistently marking evidence in references within academic articles) or evaluative adjectives such as interesting for evaluation. The good performance of the category colon captures the model's recognition of colons as important punctuation, primarily predicting preparation. The only case of a `relational' category, requiring attention to two separate positions in the input, which also fares well is synonymy, though this is often based on flagging only one of two items annotated as synonymous, and is based on rather few examples. We can find only one example, SECREF44, where both sides of a pair of similar words is actually noticed, which both belong to the same stem (decline/declining): . [RGB]230, 230, 230The [RGB]230, 230, 230report [RGB]209, 209, 209says [RGB]213, 213, 213the [RGB]172, 172, 172decline [RGB]220, 220, 220in [RGB]228, 228, 228iodine [RGB]230, 230, 230intake [RGB]215, 215, 215appears [RGB]230, 230, 230to [RGB]230, 230, 230be [RGB]230, 230, 230due [RGB]230, 230, 230to [RGB]230, 230, 230changes [RGB]230, 230, 230in [RGB]230, 230, 230the [RGB]230, 230, 230dairy [RGB]230, 230, 230industry [RGB]230, 230, 230, [RGB]230, 230, 230where [RGB]230, 230, 230chlorine-containing [RGB]230, 230, 230sanitisers [RGB]226, 226, 226have [RGB]230, 230, 230replaced [RGB]230, 230, 230iodine-containing [RGB]230, 230, 230sanitisers [RGB]230, 230, 230. $\xleftarrow[\text{pred:background}]{\text{gold:justify}}$ [RGB]193, 193, 193Iodine [RGB]230, 230, 230released [RGB]230, 230, 230from [RGB]230, 230, 230these [RGB]230, 230, 230chemicals [RGB]230, 230, 230into [RGB]216, 216, 216milk [RGB]230, 230, 230has [RGB]230, 230, 230been [RGB]230, 230, 230the [RGB]230, 230, 230major [RGB]230, 230, 230source [RGB]230, 230, 230of [RGB]226, 226, 226dietary [RGB]206, 206, 206iodine [RGB]230, 230, 230in [RGB]230, 230, 230Australia [RGB]230, 230, 230for [RGB]230, 230, 230at [RGB]230, 230, 230least [RGB]230, 230, 230four [RGB]230, 230, 230decades [RGB]202, 202, 202, [RGB]153, 153, 153but [RGB]230, 230, 230is [RGB]230, 230, 230now [RGB]63, 63, 63declining [RGB]79, 79, 79. 
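As a rough sketch of the per-type scoring procedure described above, the following illustrates how recall per signal subtype could be computed when the model is allowed as many guesses as there are gold signal tokens in each EDU pair, plus a tolerance of two; the data layout and names are illustrative, not taken from the evaluation scripts.

```python
from collections import Counter

def per_type_recall(instances):
    """instances: iterable of (scores, gold), one per EDU pair, where `scores`
    maps token index -> delta-softmax and `gold` maps token index -> set of
    signal subtypes annotated for that token (a token may carry several).
    Returns recall per subtype with len(gold)+2 guesses allowed."""
    hits, totals = Counter(), Counter()
    for scores, gold in instances:
        k = len(gold) + 2                            # gold token count plus tolerance
        ranked = sorted(scores, key=scores.get, reverse=True)
        guessed = set(ranked[:k])
        for idx, subtypes in gold.items():
            for subtype in subtypes:
                totals[subtype] += 1
                hits[subtype] += (idx in guessed)
    return {t: hits[t] / totals[t] for t in totals}
```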
We note that our evaluation is actually rather harsh towards the model, since in multiword expressions, often only one central word is flagged by ${\Delta }_s$ (e.g. problem in “the problem is”), while the model is penalized in Table TABREF45 for each token that is not recognized (i.e. the and is, which were all flagged by a human annotator as signals in the data). Interestingly, the model fares rather well in identifying morphological tense cues, even though these are marked by both inflected lexical verbs and semantically poor auxiliaries (e.g. past perfect auxiliary had marking background); but modality cues (especially can or could for evaluation) are less successfully identified, suggesting they are either more ambiguous, or mainly relevant in the presence of evaluative content words which out-score them. Other relational categories from the middle of the table which ostensibly require matching pairs of words, such as repetition, meronymy, or personal reference (coreference) are mainly captured by the model when a single item is a sufficiently powerful cue, often ignoring the other half of the signal, as shown in SECREF44. . [RGB]230, 230, 230On [RGB]230, 230, 230a [RGB]230, 230, 230new [RGB]230, 230, 230website [RGB]230, 230, 230, [RGB]230, 230, 230" [RGB]230, 230, 230The [RGB]230, 230, 230Internet [RGB]230, 230, 230Explorer [RGB]230, 230, 2306 [RGB]230, 230, 230Countdown [RGB]230, 230, 230" [RGB]230, 230, 230, [RGB]230, 230, 230Microsoft [RGB]230, 230, 230has [RGB]230, 230, 230launched [RGB]230, 230, 230an [RGB]230, 230, 230aggressive [RGB]230, 230, 230campaign [RGB]230, 230, 230to [RGB]230, 230, 230persuade [RGB]230, 230, 230users [RGB]230, 230, 230to [RGB]230, 230, 230stop [RGB]171, 171, 171using [RGB]133, 133, 133IE6 $\xleftarrow[\text{pred:elaboration}]{\text{gold:elaboration}}$ [RGB]56, 56, 56Its [RGB]197, 197, 197goal [RGB]167, 167, 167is [RGB]230, 230, 230to [RGB]230, 230, 230decrease [RGB]230, 230, 230IE6 [RGB]230, 230, 230users [RGB]230, 230, 230to [RGB]230, 230, 230less [RGB]230, 230, 230than [RGB]230, 230, 230one [RGB]124, 124, 124percent [RGB]229, 229, 229. Here the model has learned that an initial possessive pronoun, perhaps in the context of a subject NP in a copula sentence (note the shading of the following is) is an indicator of an elaboration relation, even though there is no indication that the model has noticed which word is the antecedent. Similarly for the count category, the model only learns to notice the possible importance of some numbers, but is not actually aware of whether they are identical (e.g. for restatement) or different (e.g. in contrast). Finally, some categories are actually recognized fairly reliably, but are penalized by the same partial substring issue identified above: Date expressions are consistently flagged as indicators of circumstance, but often a single word, such as a weekday in SECREF44, is dominant, while the model is penalized for not scoring other words as highly (including commas within dates, which are marked as part of the signal token span in the gold standard, but whose removal does not degrade prediction accuracy). In this case it seems fair to say that the model has successfully recognized the date signal of `Wednesday April 13', yet it loses points for missing two instances of `,', and the `2011', which is no longer necessary for recognizing that this is a date. . 
[RGB]230, 230, 230NASA [RGB]230, 230, 230celebrates [RGB]230, 230, 23030th [RGB]230, 230, 230anniversary [RGB]230, 230, 230of [RGB]230, 230, 230first [RGB]230, 230, 230shuttle [RGB]230, 230, 230launch [RGB]230, 230, 230; $\xleftarrow[\text{pred:circumstance}]{\text{gold:circumstance}}$ [RGB]11, 11, 11Wednesday [RGB]186, 186, 186, [RGB]115, 115, 115April [RGB]153, 153, 15313 [RGB]219, 219, 219, [RGB]230, 230, 2302011 <<</Performance on Signal Types>>> <<</Evaluation and Error Analysis>>> <<<Discussion>>> This paper has used a corpus annotated for discourse relation signals within the framework of the RST Signalling Corpus (BIBREF12) and extended with anchored signal annotations (BIBREF27) to develop a taxonomy of unrestricted and hierarchically aware discourse signal positions, as well as a data-driven neural network model to explore distantly supervised signal word extraction. The results shed light on the distribution of signal categories from the RST-SC taxonomy in terms of associated word forms, and show the promise of neural models with contextual embeddings for the extraction of context dependent and gradient discourse signal detection in individual texts. The metric developed for the evaluation, $\Delta _s$, allows us to assess the relative importance of signal words for automatic relation classification, and reveal observations for further study, as well as shortcomings which point to the need to develop richer feature representations and system architectures in future work. The model presented in the previous sections is clearly incomplete in both its classification accuracy and its ability to recognize the same signals that humans do. However, given the fact that it is trained entirely without access to discourse signal annotations and is unaware of any of the guidelines used to create the gold standard that it is evaluated on, its performance may be considered surprisingly good. As an approach to extracting discourse signals in a data-driven way, similar to frequentist methods or association measures used in previous work, we suggest that this model forms a more fine grained tool, capable of taking context into consideration and delivering scores for each instance of a signal candidate, rather than resulting in a table of undifferentiated signal word types. Additionally, although we consider human signal annotations to be the gold standard in identifying the presence of relevant cues, the ${\Delta }_s$ metric gives new insights into signaling which cannot be approached using manual signaling annotations. Firstly, the quantitative nature of the metric allows us to rank signaling strength in a way that humans have not to date been able to apply: using ${\Delta }_s$, we can say which instances of which signals are evaluated as stronger, by how much, and which words within a multi-word signal instance are the most important (e.g. weekdays in dates are important, the commas are not). Secondly, the potential for negative values of the metric opens the door to the study of negative signals, or `distractors', which we have only touched upon briefly in this paper. And finally, we consider the availability of multiple measurements for a single DM or other discourse signal to be a potentially very interesting window into the relative ambiguity of different signaling devices (cf. BIBREF16) and for research on the contexts in which such ambiguity results. To see how ambiguity is reflected in multiple measurements of ${\Delta }_s$, we can consider Figure FIGREF47. 
The figure shows boxplots for multiple instances of the same signal tokens. We can see that words like and are usually not strong signals, with the entire interquartile range scoring less than 0.02, i.e. aiding relation classification by less than 2%, with some values dipping into the negative region (i.e. cases functioning as distractors). However, some outliers are also present, reaching almost as high as 0.25 – these are likely to be coordinating predicates, which may signal relations such as sequence or joint. A word such as but is more important overall, with the box far above and, but still covering a wide range of values: these can correspond to more or less ambiguous cases of but, but also to cases in which the word is more or less irreplaceable as a signal. In the presence of multiple signals for the same relation, the presence of but should be less important. We can also see that but can be a distractor with negative values, as we saw in example SECREF43 above. As far as we are aware, this is the first empirical corpus-based evidence giving a quantitative confirmation to the intuition that `but' in context is significantly less ambiguous as a discourse marker than `and'; the overlap in their bar plots indicate that they can be similarly ambiguous or even distracting in some cases, but the difference in interquartile ranges makes it clear that these are exceptions. For less ambiguous DMs, such as if, we can also see a contrast between lower and upper case instances: upper case If is almost always a marker of condition, but the lower case if is sometimes part of an embedded object clause, which is not segmented in the corpus and does not mark a conditional relation (e.g. “they wanted to see if...”). For the word to, the figure suggests a strongly bimodal distribution, with a core population of (primarily prepositional) discourse-irrelevant to, and a substantial number of outliers above a large gap, representing to in infinitival purpose clauses (though not all to infinitives mark such clauses, as in adnominal “a chance to go”, which the model is usually able to distinguish in context). In other words, our model can not only disambiguate ambiguous strings into grammatical categories, but also rank members of the same category by importance in context, as evidenced by its ability to correctly classify high frequency items like `to' or `and' as true positives. A frequentist approach would not only lack this ability – it would miss such items altogether, due to its overall high string frequency and low specificity. Beyond what the results can tell us about discourse signals in this particular corpus, the fact that the neural model is sensitive to mutual redundancy of signals raises interesting theoretical questions about what human annotators are doing when they characterize multiple features of a discourse unit as signals. If it is already evident from the presence of a conventional DM that some relation applies, are other, less explicit signals which might be relied on in the absence of the DM, equally `there'? Do we need a concept of primary and auxiliary signals, or graded signaling strength, in the way that a metric such as ${\Delta }_s$ suggests? Another open question relates to the postulation of distractors as an opposite concept to discourse relation signals. 
While we have not tested this so far, it is interesting to ask to what extent human analysts are aware of distractors, whether we could form annotation guidelines to recognize them, and how humans weigh the value of signals and potential distractors in extrapolating intended discourse relations. It seems likely that distractors affecting humans may be found in cases of misunderstanding or ambiguity of discourse relations (see also BIBREF25). Finally, the error analysis for signal detection complements the otherwise opaque relation classification results in Table TABREF34 in showing some of the missing sources of information that our model would need in order to work better. We have seen that relational information, such as identifying not just the presence of a pronoun but also its antecedent, or both sides of lexical semantic relations such as synonymy, meronymy or antonymy, as well as comparing count information, are still unavailable to the classifier – if they were being used, then ${\Delta }_s$ would reflect the effects of their removal, but this is largely not the case. This suggests that, in the absence of vastly larger discourse annotated corpora, discourse relation recognition may require the construction of either features, architectures, or both, which can harness abstract relational information of this nature beyond the memorization of specific pairs of words (or regions of vector space with similar words) that are already attested in the limited training data. In this vein, BIBREF54 conducted a series of experiments on automatic sense prediction for four top-level implicit discourse relations within the PDTB framework, which also suggested benefits for using linguistically-informed features such as verb information, polarity tags, context, lexical items (e.g. first and last words of the arguments; first three words in the sentence) etc. The model architecture and input data are also in need of improvements, as the current architecture can only be expected to identify endocentric signals. The substantial amount of exocentric signaling cases is in itself an interesting finding, as it suggests that relation classification from head EDU pairs may ultimately have a natural ceiling that is considerably below what could be inferred from looking at larger contexts. We predict that as we add more features to the model and improve its architecture in ways that allow it to recognize the kinds of signals that humans do, classification accuracy will increase; and conversely, as classification accuracy rises, measurements based on ${\Delta }_s$ will overlap increasingly with human annotations of anchored signals. In sum, we believe that there is room for much research on what relation classification models should look like, and how they can represent the kinds of information found in non-trivial signals. The results of this line of work can therefore benefit NLP systems targeting discourse relations by suggesting locations within the text which systems should attend to in one way or another. Moreover, we think that using distant-supervised techniques for learning discourse relations (e.g. BIBREF55) is promising in the development of discourse models using the proposed dataset. We hope to see further analyses benefit from this work and the application of metrics such as ${\Delta }_s$ to other datasets, within more complex models, and using additional features to capture such information. 
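As an illustration of how such analyses could be reproduced on other datasets, instance-level scores can be pooled by token string to obtain distributions like those in Figure FIGREF47. The following is a minimal sketch with made-up observations; matplotlib is only one possible plotting backend and this is not the code used to produce the figure:

```python
from collections import defaultdict
import matplotlib.pyplot as plt

def pool_scores(observations):
    """Group instance-level delta-softmax measurements by token string."""
    by_token = defaultdict(list)
    for token, score in observations:
        by_token[token].append(score)
    return by_token

# made-up (token, delta-softmax) observations collected from many EDU pairs
obs = [("and", 0.01), ("and", -0.02), ("and", 0.24),
       ("but", 0.31), ("but", 0.12), ("If", 0.45), ("if", 0.05)]
pooled = pool_scores(obs)
plt.boxplot(list(pooled.values()), labels=list(pooled.keys()))
plt.ylabel("delta-softmax")
plt.show()
```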
We also hope that applications which draw on discourse relations, such as machine comprehension BIBREF20 and sentiment analysis BIBREF55, will benefit from the proposed model architecture as well as the dataset. <<</Discussion>>> <<</Title>>>
{ "references": [ "Yes" ], "type": "boolean" }
2001.02380
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: Where does proposed metric differ from juman judgement? Context: <<<Title>>> A Neural Approach to Discourse Relation Signal Detection <<<Abstract>>> Previous data-driven work investigating the types and distributions of discourse relation signals, including discourse markers such as 'however' or phrases such as 'as a result' has focused on the relative frequencies of signal words within and outside text from each discourse relation. Such approaches do not allow us to quantify the signaling strength of individual instances of a signal on a scale (e.g. more or less discourse-relevant instances of 'and'), to assess the distribution of ambiguity for signals, or to identify words that hinder discourse relation identification in context ('anti-signals' or 'distractors'). In this paper we present a data-driven approach to signal detection using a distantly supervised neural network and develop a metric, {\Delta}s (or 'delta-softmax'), to quantify signaling strength. Ranging between -1 and 1 and relying on recent advances in contextualized words embeddings, the metric represents each word's positive or negative contribution to the identifiability of a relation in specific instances in context. Based on an English corpus annotated for discourse relations using Rhetorical Structure Theory and signal type annotations anchored to specific tokens, our analysis examines the reliability of the metric, the places where it overlaps with and differs from human judgments, and the implications for identifying features that neural models may need in order to perform better on automatic discourse relation classification. <<</Abstract>>> <<<Introduction>>> The development of formal frameworks for the analysis of discourse relations has long gone hand in hand with work on signaling devices. The analysis of discourse relations is also closely tied to what a discourse structure should look like and what discourse goals should be fulfilled in relation to the interpretation of discourse relations BIBREF0. Earlier work on the establishment of inventories of discourse relations and their formalization (BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6) relied on the existence of `discourse markers' (DMs) or `connectives', including conjunctions such as because or if, adverbials such as however or as a result, and coordinations such as but, to identify and distinguish relations such as condition in SECREF1, concession in SECREF1, cause in SECREF1, or contrast, result etc., depending on the postulated inventory of relations (signals for these relations as identified by human analysts are given in bold; examples come from the GUM corpus BIBREF7, presented in Section SECREF3). . [If you work for a company,]$_{\textsc {condition}}$ [they pay you that money.] . [Albeit limited,]$_{\textsc {concession}}$ [these results provide valuable insight into SI interpretation by Chitonga-speaking children.] . 
[not all would have been interviewed at Wave 3] [due to differential patterns of temporary attrition]$_{\textsc {cause}}$ The same reasoning of identifying relations based on overt signals has been applied to the comparison of discourse relations across languages, by comparing inventories of similar function words cross-linguistically (BIBREF8, BIBREF9); and the annotation guidelines of prominent contemporary corpora rely on such markers as well: for instance, the Penn Discourse Treebank (see BIBREF10) explicitly refers to either the presence of DMs or the possibility of their insertion in cases of implicit discourse relations, and DM analysis in Rhetorical Structure Theory BIBREF11 has also shown the important role of DMs as signals of discourse relations at all hierarchical levels of discourse analysis BIBREF12. At the same time, research over the past two decades analyzing the full range of possible cues that humans use to identify the presence of discourse relations has suggested that classic DMs such as conjunctions and adverbials are only a part of the network of signals that writers or speakers can harness for discourse structuring, which also includes entity-based cohesion devices (e.g. certain uses of anaphora, see BIBREF13), alternative lexicalizations using content words, as well as syntactic constructions (see BIBREF14 and the addition of alternative lexicalization constructions, AltLexC, in the latest version of PDTB, BIBREF15). In previous work, two main approaches to extracting the inventory of discourse signal types in an open-ended framework can be identified: data-driven approaches, which attempt to extract relevant words from distributional properties of the data, using frequencies or association measures capturing their co-occurrences with certain relation types (e.g. BIBREF16, BIBREF17); and manual annotation efforts (e.g. BIBREF10, BIBREF18), which develop categorization schemes and guidelines for human evaluation of signaling devices. The former family of methods benefits from an unbiased openness to any and every type of word which may reliably co-occur with some relation types, whether or not a human might notice it while annotating, as well as the naturally graded and comparable nature of the resulting quantitative scores, but, as we will show, falls short in identifying specific cases of a word being a signal (or not) in context. By contrast, the latter approach allows for the identification of individual instances of signaling devices, but relies on less open-ended guidelines and is categorical in nature: a word either is or isn't a signal in context, providing less access to concepts such as signaling strength. The goal of this paper is to develop and evaluate a model of discourse signal identification that is built bottom up from the data, but retains sensitivity to context in the evaluation of each individual example. In addition, even though this work is conducted within Rhetorical Structural Theory, we hope that it can shed light on signal identification of discourse relations across genres and provide empirical evidence to motivate research on theory-neutral and genre-diverse discourse processing, which would be beneficial for pushing forward theories of discourse across frameworks or formalisms. Furthermore, employing a computational approach to studying discourse relations has a promising impact on various NLP downstream tasks such as question answering and document summarization etc. 
For example, BIBREF20 incorporated discourse information into the task of automated text comprehension and benefited from such information without relying on explicit annotations of discourse structure during training, which outperformed state-of-the-art text comprehension systems at the time. Towards this goal, we begin by reviewing some previous work in the traditions sketched out above in the next section, and point out some open questions which we would like to address. In Section SECREF3 we present the discourse annotated data that we will be using, which covers a number of English text types from the Web annotated for 20 discourse relations in the framework of Rhetorical Structure Theory, and is enriched with human annotations of discourse relation signaling devices for a subset of the data. Moreover, we also propose a taxonomy of anchored signals based on the discourse annotated data used in this paper, illustrating the properties and the distribution of the anchorable signals. In Section SECREF4 we then train a distantly supervised neural network model which is made aware of the relations present in the data, but attempts to learn which words signal those relations without any exposure to explicit signal annotations. We evaluate the accuracy of our model using state-of-the-art pretrained and contextualized character and word embeddings, and develop a metric for signaling strength based on a masking concept similar to permutation importance, which naturally lends itself to the definition of both positive and negative or `anti-signals', which we will refer to as `distractors'. In Section SECREF5, we combine the anchoring annotation data from Section SECREF3 with the model's predictions to evaluate how `human-like' its performance is, using an information retrieval approach measuring recall@k and assessing the stability of different signal types based on how the model scores them. We develop a visualization for tokenwise signaling strength and perform error analysis for some signals found by the model which were not flagged by humans and vice versa, and point out the strengths and weaknesses of the architecture. Section SECREF6 offers further discussion of what we can learn from the model, what kinds of additional features it might benefit from given the error analysis, and what the distributions of scores for individual signals can teach us about the ambiguity and reliability of different signal types, opening up avenues for further research. <<</Introduction>>> <<<Previous Work>>> <<<Data-driven Approaches>>> A straightforward approach to identifying discourse relation signals in corpora with discourse parses is to extract frequency counts for all lexical types or lemmas and cross-tabulate them with discourse relations, such as sentences annotated as cause, elaboration, etc. (e.g. BIBREF21, BIBREF22, BIBREF17). Table TABREF4, reproduced from BIBREF17, illustrates this approach for the Russian RST Treebank. This approach quickly reveals the core inventory of cue words in the language, and in particular the class of low-ambiguity discourse markers (DMs), such as odnako `however' signaling contrast (see Fraser 1999 on delimiting the class of explicit DMs) or relative pronouns signaling elaboration. As such, it can be very helpful for corpus-based lexicography of discourse markers (cf. BIBREF23). The approach can potentially include multiword expressions, if applied equally to multi-token spans (e.g. 
as a result), and because it is language-independent, it also allows for a straightforward comparison of connectives or other DMs across languages. Results may also converge across frameworks, as the frequency analysis may reveal the same items in different corpora annotated using different frameworks. For instance, the inventory of connectives found in work on the Penn Discourse Treebank (PDTB, see BIBREF10) largely converges with findings on connectives using RST (see BIBREF24, BIBREF18): conjunctions such as but can mark different kinds of contrastive relations at a high level, and adverbs such as meanwhile can convey contemporaneousness, among other things, even when more fine-grained analyses are applied. However, a purely frequentist approach runs into problems on multiple levels, as we will show in Section SECREF4: high frequency and specificity to a small number of relations characterize only the most common and unambiguous discourse markers, but not less common ones. Additionally, differentiating actual and potentially ambiguous usages of candidate words in context requires substantial qualitative analysis (see BIBREF25), which is not reflected in aggregated counts, and signals that belong to a class of relations (e.g. a variety of distinct contrastive relation types) may appear to be non-specific, when in fact they reliably mark a superset of relations. Other studies have used more sophisticated metrics, such as point-wise mutual information (PMI), to identify words associated with particular relations BIBREF16. Using the PDTB corpus, BIBREF16 extracted such scores and measured the contribution of different signal types based on the information gain which they deliver for the classification of discourse relations at various degrees of granularity, as expressed by the hierarchical labels of PDTB relation types. This approach is most similar to the goal given to our own model in Section SECREF4, but is less detailed in that the aggregation process assigns a single number to each candidate lexical item, rather than assigning contextual scores to each instance. Finally we note that for hierarchical discourse annotation schemes, the data-driven approaches described here become less feasible at higher levels of abstraction, as relations connecting entire paragraphs encompass large amounts of text, and it is therefore difficult to find words with high specificity to those relations. As a result, approaches using human annotation of discourse relation signals may ultimately be irreplaceable. <<</Data-driven Approaches>>> <<<Discourse Relation Signal Annotations>>> Discourse relation signals are broadly classified into two categorizes: anchored signals and unanchored signals. By `anchoring' we refer to associating signals with concrete token spans in texts. Intuitively, most of the signals are anchorable since they correspond to certain token spans. However, it is also possible for a discourse relation to be signaled but remain unanchored. Results from BIBREF27 indicated that there are several signaled but unanchored relations such as preparation and background since they are high-level discourse relations that capture and correspond to genre features such as interview layout in interviews where the conversation is constructed as a question-answer scheme, and are thus rarely anchored to tokens. 
The Penn Discourse Treebank (PDTB V3, BIBREF15) is the largest discourse annotated corpus of English, and the largest resource annotated explicitly for discourse relation signals such as connectives, with similar corpora having been developed for a variety of languages (e.g. BIBREF28 for Turkish, BIBREF29 for Chinese). However the annotation scheme used by PDTB is ahierarchical, annotating only pairs of textual argument spans connected by a discourse relation, and disregarding relations at higher levels, such as relations between paragraphs or other groups of discourse units. Additionally, the annotation scheme used for explicit signals is limited to specific sets of expressions and constructions, and does not include some types of potential signals, such as the graphical layout of a document, lexical chains of (non-coreferring) content words that are not seen as connectives, or genre conventions which may signal the discourse function for parts of a text. It is nevertheless a very useful resource for obtaining frequency lists of the most prevalent DMs in English, as well as data on a range of phenomena such as anaphoric relations signaled by entities, and some explicitly annotated syntactic constructions. Working in the hierarchical framework of Rhetorical Structure Theory BIBREF11, BIBREF18 re-annotated the existing RST Discourse Treebank BIBREF30, by taking the existing discourse relation annotations in the corpus as a ground truth and analyzing any possible information in the data, including content words, patterns of repetition or genre conventions, as a possibly present discourse relation signaling device. The resulting RST Signalling Corpus (RST-SC, BIBREF31) consists of 385 Wall Street Journal articles from the Penn Treebank BIBREF32, a smaller subset of the same corpus used in PDTB. It contains 20,123 instances of 78 relation types (e.g. attribution, circumstance, result etc.), which are enriched with 29,297 signal annotations. BIBREF12 showed that when all types of signals are considered, over 86% of discourse relations annotated in the corpus were signaled in some way, but among these, just under 20% of cases were marked by a DM. However, unlike PDTB, the RST Signalling Corpus does not provide a concrete span of tokens for the locus of each signal, indicating instead only the type of signaling device used. Although the signal annotations in RST-SC have a broader scope than those in PDTB and are made more complex by extending to hierarchical relations, BIBREF33 have shown that RST-SC's annotation scheme can be `anchored' by associating discourse signal categories from RST-SC with concrete token spans. BIBREF27 applied the same scheme to a data set described in Section SECREF3, which we will use to evaluate our model in Section SECREF5. Since that data set is based on the same annotation scheme of signal types as RST-SC, we will describe the data for the present study and RST-SC signal type annotation scheme next. <<</Discourse Relation Signal Annotations>>> <<</Previous Work>>> <<<Data>>> <<<Anchored Signals in the GUM Corpus>>> In order to study open-ended signals anchored to concrete tokens, we use the signal-annotated subset of the freely available Georgetown University Multilayer (GUM) Corpus BIBREF7 from BIBREF27. 
Our choice to use a multi-genre RST-annotated corpus rather than using PDTB, which also contains discourse relation signal annotation to a large extent is motivated by three reasons: The first reason is that we wish to explore the full range of potential signals, as laid out in the work on the Signalling Corpus BIBREF12, BIBREF34, whereas PDTB annotates only a subset of the possible cues identified by human annotators. Secondly, the use of RST as a framework allows us to examine discourse relations at all hierarchical levels, including long distance, high-level relations between structures as large as paragraphs or sections, which often have different types of signals allowing their identification. Finally, although the entire GUM corpus is only about half the size of RST-DT (109K tokens), using GUM offers the advantage of a more varied range of genres than PDTB and RST-SC, both of which annotate Wall Street Journal data. The signal annotated subset of GUM includes academic papers, how-to guides, interviews and news text, encompassing over 11,000 tokens. Although this data set may be too small to train a successful neural model for signal detection, we will not be using it for this purpose; instead, we will reserve it for use solely as a test set, and use the remainder of the data (about 98K tokens) to build our model (see Section SECREF28 for more details about the subsets and splits), including data from four further genres, for which the corpus also contains RST annotations but no signaling annotations: travel guides, biographies, fiction, and Reddit forum discussions. The GUM corpus is manually annotated with a large number of layers, including document layout (headings, paragraphs, figures, etc.); multiple POS tags (Penn tags, CLAWS5, Universal POS); lemmas; sentence types (e.g. imperative, wh-question etc., BIBREF35); Universal Dependencies BIBREF36; (non-)named entity types; coreference and bridging resolution; and discourse parses using Rhetorical Structure Theory BIBREF11. In particular, the RST annotations in the corpus use a set of 20 commonly used RST relation labels, which are given in Table TABREF10, along with their frequencies in the corpus. The relations cover asymmetrical prominence relations (satellite-nucleus) and symmetrical ones (multinuclear relations), with the restatement relation being realized in two versions, one for each type. The signaling annotation in the corpus follows the scheme developed by RST-SC, with some additions. Although RST-SC does not indicate token positions for signals, it provides a detailed taxonomy of signal types which is hierarchically structured into three levels: signal class, denoting the signal's degree of complexity signal type, indicating the linguistic system to which it belongs specific signal, which gives the most fine-grained subtypes of signals within each type It is assumed that any number of word tokens can be associated with any number of signals (including the same tokens participating in multiple signals), that signals can arise without corresponding to specific tokens (e.g. due to graphical layout of paragraphs), and that each relation can have an unbounded number of signals ($0-n$), each of which is characterized by all three levels. The signal class level is divided into single, combined (for complex signals), and unsure for unclear signals which cannot be identified conclusively, but are noted for further study. For each signal (regardless of its class), signal type and specific signal are identified. 
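Schematically, these assumptions could be represented with a structure along the following lines; this is an illustrative sketch, not the corpus' actual data format, and the token indices in the example are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Signal:
    signal_class: str     # "single", "combined" or "unsure"
    signal_type: str      # e.g. "semantic", "graphical", "dm"
    specific_signal: str  # e.g. "lexical chain", "semicolon"
    token_ids: List[int] = field(default_factory=list)  # empty list = unanchorable signal

@dataclass
class RelationInstance:
    relation: str                                        # e.g. "joint", "purpose"
    signals: List[Signal] = field(default_factory=list)  # 0..n signals per relation

# hypothetical token indices; the same index may recur across Signal objects,
# since one token can participate in several signals
joint = RelationInstance("joint", [
    Signal("combined", "semantic+syntactic",
           "parallel syntactic construction + lexical chain", [1, 12]),
    Signal("single", "graphical", "semicolon", [10]),
])
```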
According to RST-SC's taxonomy, signal type includes 9 types such as DMs, genre, graphical, lexical, morphological, numerical, reference, semantic, and syntactic. Each type then has specific subcategories. For instance, the signal type semantic has 7 specific signal subtypes: synonymy, antonymy, meronymy, repetition, indicative word pair, lexical chain, and general word. We will describe some of these in more depth below. In addition to the 9 signal types, RST-SC has 6 combined signal types such as reference+syntactic, semantic+syntactic, and graphical+syntactic etc., and 15 specific signals are identified for the combined signals. Although the rich signaling annotations in RST-SC offer an excellent overview of the relative prevalence of different signal types in the Wall Street Journal corpus, it is difficult to apply the original scheme to the study of individual signal words, since actual signal positions are not identified. While recovering these positions may be possible for some categories using the original guidelines, most signaling annotations (e.g. lexical chains, repetition) cannot be automatically paired with actual tokens, meaning that, in order to use the original RST-SC for our study, we would need to re-annotate it for signal token positions. As this effort is beyond the scope of our study, we will use the smaller data set with anchored signaling annotations from BIBREF27: This data is annotated with the same signal categories as RST-SC, but also includes exact token positions for each signal, including possibly no tokens for unanchorable signals such as some types of genre conventions or graphical layout which are not expressible in terms of specific words. In order to get a better sense of how the annotations work, we consider example SECREF7. . [5] Sociologists have explored the adverse consequences of discrimination; [6] psychologists have examined the mental processes that underpin conscious and unconscious biases; [7] neuroscientists have examined the neurobiological underpinnings of discrimination; [8] and evolutionary theorists have explored the various ways that in-group/out-group biases emerged across the history of our species. – joint [GUM_academic_discrimination] In this example, there is a joint relation between four spans in a fragment from an RST discourse tree. The first tokens in each span form a parallel construction and include semantically related items such as explored and examined (signal class `combined', type `semantic+syntactic', specific subtype `parallel syntactic construction + lexical chain'). The words corresponding to this signal in each span are highlighted in Figure FIGREF15, and are considered to signal each instance of the joint relation. Additionally, the joint relation is also signaled by a number of further signals which are highlighted in the figure as well, such as the semicolons between spans, which correspond to a type `graphical', subtype `semicolon' in RST-SC. The data model of the corpus records which tokens are associated with which categorized signals, and allows for multiple membership of the same token in several signal annotations. In terms of annotation reliability, BIBREF12 reported a weighted kappa of 0.71 for signal subtypes in RST-SC without regard to the span of words corresponding to a signal, while a study by BIBREF37 suggests that signal anchoring, i.e. 
associating RST-SC signal categories with specific tokens achieves a 90.9% perfect agreement score on which tokens constitute signals, or a Cohen's Kappa value of 0.77. As anchored signal positions will be of the greatest interest to our study, we will consider how signal token positions are distributed in the corpus next, and develop an anchoring taxonomy which we will refer back to for the remainder of this paper. <<</Anchored Signals in the GUM Corpus>>> <<<A Taxonomy of Anchored Signals>>> From a structural point of view, one of the most fundamental distinctions with regard to signal realization recognized in previous work is the classification of signaling tokens into satellite or nucleus-oriented positions, i.e. whether a signal for the relation appears within the modifier span or the span being modified BIBREF38. While some relation types exhibit a strong preference for signal position (e.g. using a discourse marker such as because in the satellite for cause, BIBREF39), others, such as concession are more balanced (almost evenly split signals between satellite and nucleus in BIBREF38). In this study we would like to further refine the taxonomy of signal positions, breaking it down into several features. At the highest level, we have the distinction between anchorable and non-anchorable signals, i.e. signals which correspond to no token in the text (e.g. genre conventions, graphical layout). Below this level, we follow BIBREF38 in classifying signals as satellite or nucleus-oriented, based on whether they appear in the more prominent Elementary Discourse Unit (EDU) of a relation or its dependent. However, several further distinctions may be drawn: Whether the signal appears before or after the relation in text order; since we consider the relation to be instantiated as soon as its second argument in the text appears, `before' is interpreted as any token before the second head unit in the discourse tree begins, and `after' is any subsequent token Whether the signal appears in the head unit of the satellite/nucleus, or in a dependent of that unit; this distinction only matters for satellite or nucleus subtrees that consist of more than one unit Whether the signal is anywhere within the structure dominated by the units participating in the relation, or completely outside of this structure Table TABREF20 gives an overview of the taxonomy proposed here, which includes the possible combinations of these properties and the distribution of the corresponding anchorable signals found in the signal-annotated subset of the GUM Corpus from BIBREF27. Individual feature combinations can be referred to either as acronyms, e.g. ABIHS for `Anchorable, Before the second EDU of the relation, Inside the relation's subtree, Head unit of the Satellite', or using the group IDs near the bottom of the table (in this case the category numbered Roman I). We will refer back to these categories in our comparison of manually annotated and automatically predicted signals. To illustrate how the taxonomy works in practice, we can consider the example in Figure FIGREF23, which shows a signal whose associated tokens instantiate categories I and IV in a discourse tree – the words demographic variables appear both within a preparation satellite (unit [50], category I), which precedes and points to its nucleus [51–54], and within a satellite inside that block (unit [52], a dependent inside the nucleus block, category IV). 
Based on the RST-SC annotation scheme, the signal class is Simple, with the type Semantic and specific sub-type Lexical chain. The numbers at the bottom of Table TABREF20 show the number of tokens signaling each relation at each position, as well as the number of relations which have signal tokens at the relevant positions. The hypothetical categories V and X, with signal tokens which are not within the subtree of satellite or nucleus descendants, are not attested in our data, as far as annotators were able to identify. <<</A Taxonomy of Anchored Signals>>> <<</Data>>> <<<Automatic Signal Extraction>>> <<<A Contextless Frequentist Approach>>> To motivate the need for a fine-grained and contextualized approach to describing discourse relation signals in our data, we begin by extracting some basic data-driven descriptions of our data along the lines presented in Section SECREF3. In order to constrain candidate words to just the most relevant ones for marking a specific signal, we first need a way to address a caveat of the frequentist approach: higher order relations which often connect entire paragraphs (notably background and elaboration) must be prevented from allowing most or even all words in the document to be considered as signaling them. A simple approach to achieving this is to assume `Strong Nuclearity', relying on Marcu's (BIBREF42) Compositionality Criterion for Discourse Trees (CCDT), which suggests that if a relation holds between two blocks of EDUs, then it also holds between their head EDUs. While this simplification may not be entirely accurate in all cases, Table TABREF20 suggests that it captures most signals, and allows us to reduce the space of candidate signal tokens to just the two head EDUs implicated in a relation. We will refer to signals within the head units of a relation as `endocentric' and signals outside this region as `exocentric'. Figure FIGREF25 illustrates this, where units [64] and [65] are the respective heads of two blocks of EDUs, and unit [65] in fact contains a plausible endocentric signal for the result relation, the discourse marker thus. More problematic caveats for the frequentist approach are the potential for over/underfitting and ambiguity. The issue of overfitting is especially thorny in small datasets, in which certain content words appear coincidentally in discourse segments with a certain function. Table TABREF27 shows the most distinctive lexical types for several discourse relations in GUM based on pure ratio of occurrence in head EDUs marked for those relations. On the left, types are chosen which have a maximal frequency in the relevant relationship compared with their overall frequency in the corpus. This quickly overfits the contents of the corpus, selecting irrelevant words such as holiest and Slate for the circumstance relation, or hypnotizing and currency for concession. The same lack of filtering can, however, yield some potentially relevant lexical items, such as causing for result or even highly specific content words such as ammonium, which are certainly not discourse markers, but whose appearance in a sequence is not accidental: the word is in this case typical for sequences in how-to guides, where use of ingredients in a recipe is described in a sequence. Even if these kinds of items may be undesirable candidates for signal words in general, it seems likely that some rare content words may function as signals in context, such as evaluative adjectives (e.g. exquisite) enabling readers to recognize an evaluation. 
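A rough sketch of the ratio-based ranking underlying Table TABREF27 is given below, using toy counts rather than the actual corpus extraction code; the min_count argument anticipates the frequency threshold discussed next:

```python
from collections import Counter, defaultdict

def distinctive_lexemes(head_edus_by_relation, top_n=5, min_count=1):
    """Rank lexemes by their ratio of occurrence in head EDUs of a relation
    relative to their overall corpus frequency."""
    overall, per_rel = Counter(), defaultdict(Counter)
    for rel, edus in head_edus_by_relation.items():
        for edu in edus:
            for tok in edu:
                overall[tok] += 1
                per_rel[rel][tok] += 1
    ranked = {}
    for rel, counts in per_rel.items():
        candidates = [(tok, c / overall[tok]) for tok, c in counts.items()
                      if overall[tok] >= min_count]
        ranked[rel] = sorted(candidates, key=lambda x: x[1], reverse=True)[:top_n]
    return ranked

# toy data: tokenized head EDUs grouped by relation label
toy = {"concession": [["although", "it", "rained"], ["though", "tired"]],
       "sequence":   [["and", "then", "it", "rained"]]}
print(distinctive_lexemes(toy))
```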
If we are willing to give up on the latter kind of rare items, the overfitting problem can be alleviated somewhat by setting a frequency threshold for each potential signal lexeme, thereby suppressing rare items. The items on the right of the table are limited to types occurring more than 10 times. Since the most distinctive items on the left are all comparatively rare (and therefore exclusive to their relations), they do not overlap with the items on the right. Looking at the items on the right, several signals make intuitive sense, especially for relations such as solutionhood (used for question-answer pairs) or concession, which show the expected WH words and auxiliary did, or discourse markers such as though, respectively. At the same time, some high frequency items may be spurious, such as NATO for justify, which could perhaps be filtered out based on low dispersion across documents, but also stuff for cause, which probably could not be. Another problem with the lists on the right is that some expected strong signals, such as the word and for sequence are absent from the table. This is not because and is not frequent in sequences, but rather because it is a ubiquitous word, and as a result, it is not very specific to the relation. However if we look at actual examples of and inside and outside of sequences, it is easy to notice that the kind of and that does signal a relation in context is often clause initial as in SECREF24 and very different from the adnominal coordinating ands in SECREF24, which do not signal the relation: . [she was made a Dame by Elizabeth II for services to architecture,] [and in 2015 she became the first and only woman to be awarded the Royal Gold Medal]$_{\textsc {sequence}}$ . [Gordon visited England and Scotland in 1686.] [In 1687 and 1689 he took part in expeditions against the Tatars in the Crimea]$_{\textsc {sequence}}$ These examples suggest that a data-driven approach to signal detection needs some way of taking context into account. In particular, we would like to be able to compare instances of signals and quantify how strong the signal is in each case. In the next section, we will attempt to apply a neural model with contextualized word embeddings BIBREF44 to this problem, which will be capable of learning contextualized representations of words within the discourse graph. <<</A Contextless Frequentist Approach>>> <<<A Contextualized Neural Model>>> <<<Task and Model Architecture>>> Since we are interested in identifying unrestricted signaling devices, we deliberately avoid a supervised learning approach as used in automatic signal detection trained on resources such as PDTB. While recent work on PDTB connective detection (BIBREF26, BIBREF45) achieves good results (F-Scores of around 88-89 for English PDTB explicit connectives), the use of such supervised approaches would not tell us about new signaling devices, and especially about unrestricted lexical signals and other coherence devices not annotated in PDTB. Additionally, we would be restricted to the newspaper text types represented in the Wall Street Journal corpus, since no other large English corpus has been annotated for anchored signals. Instead, we will adopt a distantly supervised approach: we will task a model with supervised discourse relation classification on data that has not been annotated for signals, and infer the positions of signals in the text by analyzing the model's behavior. 
A key assumption, which we will motivate below, is that signals can have different levels of signaling strength, corresponding to their relative importance in identifying a relation. We would like to assume that different signal strength is in fact relevant to human analysts' decision making in relation identification, though in practice we will be focusing on model estimates of strength, the usefulness of which will become apparent below. As a framework, we use the sentence classifier configuration of FLAIR BIBREF46 with a biLSTM encoder/classifier architecture fed by character and word level representations composed of a concatenation of fixed 300 dimensional GloVe embeddings BIBREF47, pre-trained contextualized FLAIR word embeddings, and pre-trained contextualized character embeddings from AllenNLP BIBREF48 with FLAIR's default hyperparameters. The model's architecture is shown in Figure FIGREF30. Contextualized embeddings BIBREF44 have the advantage of giving distinct representations to different instances of the same word based on the surrounding words, meaning that an adnominal and connecting two NPs can be distinguished from one connecting two verbs based on its vector space representation in the model. Using character embeddings, which give vector space representations to substrings within each word, means that the model can learn the importance of morphological forms, such as the English gerund's -ing suffix, even for out-of-vocabulary items not seen during training. Formally, the input to our system is formed of EDU pairs which are the head units within the respective blocks of discourse units that they belong to, which are in turn connected by an instance of a discourse relation. This means that every discourse relation in the corpus is expressed as exactly one EDU pair. Each EDU is encoded as a (possibly padded) sequence of $n$-dimensional vector representations of each word ${x_1,..,x_T}$, with some added separators which are encoded in the same way and described below. The bidirectional LSTM composes representations and context for the input, and a fully connected softmax layer gives the probability of each relation: where the probability of each relation $rel_i$ is derived from the composed output of the function $h$ across time steps $0 \ldots t$, $\delta \in \lbrace b,f\rbrace $ is the direction of the respective LSTMs, $c_t^\delta $ is the recurrent context in each direction and $\theta = {W,b}$ gives the model weights and bias parameters (see BIBREF46 for details). Note that although the output of the system is ostensibly a probability distribution over relation types, we will not be directly interested in the most probable relation as outputted by the classifier, but rather in analyzing the model's behavior with respect to the input word representations as potential signals of each relation. In order to capitalize on the system's natural language modeling knowledge, EDU satellite-nucleus pairs are presented to the model in text order (i.e. either the nucleus or the satellite may come first). However, the model is given special separator symbols indicating the positions of the satellite and nucleus, which are essential for deciding the relation type (e.g. cause vs. result, which may have similar cue words but lead to opposite labels), and a separator symbol indicating the transition between satellite and nucleus. This setup is illustrated in SECREF29. . $<$s$>$ Sometimes this information is available , $<$sep$>$ but usually not . 
$<$n$>$ Label: concession In this example, the satellite precedes the nucleus and is therefore presented first. The model is made aware of the fact that the segment on the left is the satellite thanks to the tag <s>. Since the LSTM is bi-directional, it is aware of positions being within the nucleus or satellite, as well as their proximity to the separator, at every time step. We reserve the signal-annotated subset of 12 documents from GUM for testing, which contains 1,185 head EDU pairs (each representing one discourse relation), and a random selection of 12 further documents from the remaining RST-annotated GUM data (1,078 pairs) is taken as development data, leaving 102 documents (5,828 pairs) for training. The same EDUs appear in multiple pairs if a unit has multiple children with distinct relations, but no instances of EDUs are shared across partitions, since the splits are based on document boundaries. We note again that for the training and development data, we have no signaling annotations of any kind; this is possible since the network does not actually use the human signaling annotations we will be evaluating against: its distant supervision consists solely of the RST relation labels. <<</Task and Model Architecture>>> <<<Relation Classification Performance>>> Although only used as an auxiliary training task, we can look at the model's performance on predicting discourse relations, which is given in Table TABREF34. Unsurprisingly, the model performs best on the most frequent relations in the corpus, such as elaboration or joint, but also on rarer ones which tend to be signaled explicitly, such as condition (often signaled explicitly by if), solutionhood (used for question-answer pairs signaled by question marks and WH words), or concession (DMs such as although). However, the model also performs reasonably well for some trickier (i.e. less often introduced by unambiguous DMs) but frequent relations, such as preparation, circumstance, and sequence. Rare relations with complex contextual environments, such as result, justify or antithesis, unsurprisingly do not perform well, with the latter two showing an F-score of 0. The relation restatement, which also shows no correct classifications, reveals a weakness of the model: while it is capable of recognizing signals in context, it cannot learn that repetition in and of itself, regardless of specific areas in vector space, is important (see Section SECREF6 for more discussion of these and other classification weaknesses). Although this is not the actual task targeted by the current paper, we may note that the overall performance of the model, with an F-Score of 44.37, is not bad, though below the performance of state-of-the-art full discourse parsers (see BIBREF49) – this is to be expected, since the model is not aware of the entire RST tree, rather looking only at EDU pairs out of context, and given that standard scores on RST-DT come from a larger and more homogeneous corpus, with fewer relations and some easy cases that are absent from GUM. Given the model's performance on relation classification, which is far from perfect, one might wonder whether signal predictions made by our analysis should be trusted. This question can be answered in two ways: first, quantitatively, we will see in Section SECREF5 that model signal predictions overlap considerably with human judgments, even when the predicted relation is incorrect.
Intuitively, for similar relations, such as concession or contrast, both of which are adversative, the model may notice a relevant cue (e.g. `but', or contrasting lexical items) despite choosing the wrong one. Second, as we will see below, we will be analyzing the model's behavior with respect to the probability of the correct relation, regardless of the label it ultimately chooses, meaning that the importance of predicting the correct label exactly will be diminished further. <<</Relation Classification Performance>>> <<<Signaling Metric>>> The actual performance we are interested in evaluating is the model's ability to extract signals for given discourse relations, rather than its accuracy in predicting the relations. To do so, we must extract anchored signal predictions from the model, which is non-trivial. While earlier work on interpreting neural models has focused on token-wise softmax probability BIBREF50 or attention weights BIBREF51, using contextualized embeddings complicates the evaluation: since word representations are adjusted to reflect neighboring words, the model may assign higher importance to the word standing next to what a human annotator may interpret as a signal. Example SECREF36 illustrates the problem: . [RGB]230, 230, 230To [RGB]53, 53, 53provide [RGB]165, 165, 165information [RGB]179, 179, 179on [RGB]175, 175, 175the [RGB]160, 160, 160analytical [RGB]157, 157, 157sample [RGB]187, 187, 187as [RGB]170, 170, 170a [RGB]168, 168, 168whole [RGB]207, 207, 207, $\xrightarrow[\text{pred:preparation}]{\text{gold:purpose}}$ [RGB]168, 168, 168two [RGB]170, 170, 170additional [RGB]164, 164, 164demographic [RGB]175, 175, 175variables [RGB]182, 182, 182are [RGB]165, 165, 165included [RGB]230, 230, 230. Each word in SECREF36 is shaded based on the softmax probability assigned to the correct relation of the satellite, i.e. how `convincing' the model found the word in terms of local probability. In addition, the top-scoring word in each sentence is rendered in boldface for emphasis. The gold label for the relation is placed above the arrow, which indicates the direction of the relation (satellite to nucleus), and the model's predicted label is shown under the arrow. Intuitively, the strongest signal of the purpose relation in SECREF36 is the initial infinitive marker To – however, the model ranks the adjacent provide higher and almost ignores To. We suspect that the reason for this, and many similar examples in the model evaluated based on relation probabilities, is that contextual embeddings allow for a special representation of the word provide next to To, making it difficult to tease apart the locus of the most meaningful signal. To overcome this complication, we use the logic of permutation importance, treating the neural model as a black box and manipulating the input to discover relevant features in the data (cf. BIBREF52). We reason that this type of evaluation is more robust than, for example, examining model internal attention weights because such weights are not designed or trained with a reward ensuring they are informative – they are simply trained on the same classification error loss as the rest of the model. Instead, we can withhold potentially relevant information from the model directly: After training is complete, we feed the test data to the model in two forms – as-is, and with each word masked, as shown in SECREF36. . Original: <$s$>$ To\quad \ p̄rovide īnformation .. <$sep$>$ .. <$n$>$ Original: \: $<$s$>$ \: \ To \: provide \: information \: ... 
\: $<$sep$>$ \: ... \: $<$n$>$ \\ Masked1: \: $<$s$>$ \: $<$X$>$ \: provide \: information \: ... \: $<$sep$>$ \: ... \: $<$n$>$ \\ Masked2: \: $<$s$>$ \: \ To \: \ $<$X$>$ \: information \: ... \: $<$sep$>$ \: ... \: $<$n$>$ \\ Masked3: \: $<$s$>$ \: \ To \: provide \: \ $<$X$>$ \: ... \: $<$sep$>$ \: ... \: $<$n$>$ $ Label: purpose We reason that, if a token is important for predicting the correct label, masking it will degrade the model's classification accuracy, or at least reduce its reported classification certainty. In SECREF36, it seems reasonable to assume that masking the word `To' has a greater impact on predicting the label purpose than masking the word `provide', and even less so, the following noun `information'. We therefore use reduction in softmax probability of the correct relation as our signaling strength metric for the model. We call this metric ${\Delta }_s$ (for delta-softmax), which can be written as: where $rel$ is the true relation of the EDU pair, $t_i$ represents the token at index $i$ of $N$ tokens, and $X_{mask=i}$ represents the input sequence with the masked position $i$ (for $i \in 1 \ldots N$ ignoring separators, or $\phi $, the empty set). To visualize the model's predictions, we compare ${\Delta }_s$ for a particular token to two numbers: the maximum ${\Delta }_s$ achieved by any token in the current pair (a measure of relative importance for the current classification) and the maximum ${\Delta }_s$ achieved by any token in the current document (a measure of how strongly the current relation is signaled compared to other relations in the text). We then shade each token 50% based on the first number and 50% based on the second. As a result, the most valid cues in an EDU pair are darker than their neighbors, but EDU pairs with no good cues are overall very light, whereas pairs with many good signals are darker. Some examples of this visualization are given in SECREF36-SECREF36 (human annotated endocentric signal tokens are marked by double underlines). . [RGB]61, 61, 61To [RGB]112, 112, 112provide [RGB]205, 205, 205information [RGB]230, 230, 230on [RGB]230, 230, 230the [RGB]230, 230, 230analytical [RGB]230, 230, 230sample [RGB]230, 230, 230as [RGB]230, 230, 230a [RGB]230, 230, 230whole [RGB]230, 230, 230, $\xrightarrow[\text{pred:preparation}]{\text{gold:purpose}}$ [RGB]230, 230, 230two [RGB]183, 183, 183additional [RGB]230, 230, 230demographic [RGB]230, 230, 230variables [RGB]94, 94, 94are [RGB]194, 194, 194included [RGB]163, 163, 163. . [RGB]230, 230, 230Telling [RGB]230, 230, 230good [RGB]230, 230, 230jokes [RGB]230, 230, 230is [RGB]230, 230, 230an [RGB]230, 230, 230art [RGB]230, 230, 230that [RGB]230, 230, 230comes [RGB]230, 230, 230naturally [RGB]230, 230, 230to [RGB]230, 230, 230some [RGB]211, 211, 211people [RGB]135, 135, 135, $\xleftarrow[\text{pred:contrast}]{\text{gold:contrast}}$ [RGB]21, 21, 21but [RGB]209, 209, 209for [RGB]207, 207, 207others [RGB]230, 230, 230it [RGB]217, 217, 217takes [RGB]230, 230, 230practice [RGB]230, 230, 230and [RGB]189, 189, 189hard [RGB]230, 230, 230work [RGB]230, 230, 230. . 
[RGB]230, 230, 230It [RGB]230, 230, 230is [RGB]230, 230, 230possible [RGB]230, 230, 230that [RGB]230, 230, 230these [RGB]230, 230, 230two [RGB]230, 230, 230children [RGB]230, 230, 230understood [RGB]230, 230, 230the [RGB]230, 230, 230task [RGB]230, 230, 230and [RGB]230, 230, 230really [RGB]230, 230, 230did [RGB]230, 230, 230believe [RGB]230, 230, 230that [RGB]230, 230, 230the [RGB]230, 230, 230puppet [RGB]230, 230, 230did [RGB]230, 230, 230not [RGB]230, 230, 230produce [RGB]230, 230, 230any [RGB]230, 230, 230poor [RGB]230, 230, 230descriptions [RGB]230, 230, 230, [RGB]230, 230, 230and [RGB]230, 230, 230in [RGB]230, 230, 230this [RGB]230, 230, 230regard [RGB]230, 230, 230, [RGB]230, 230, 230are [RGB]230, 230, 230not [RGB]230, 230, 230yet [RGB]230, 230, 230adult-like [RGB]230, 230, 230in [RGB]230, 230, 230their [RGB]230, 230, 230SI [RGB]230, 230, 230interpretations [RGB]230, 230, 230. $\xleftarrow[\text{pred:evaluation}]{\text{gold:evaluation}}$ [RGB]230, 230, 230This [RGB]230, 230, 230is [RGB]41, 41, 41unlikely The highlighting in SECREF36 illustrates the benefits of the masking based evaluation compared to SECREF36: the token To is now clearly the strongest signal, and the verb is taken to be less important, followed by the even less important object of the verb. This is because removing the initial To hinders classification much more than the removal of the verb or noun. We note also that although the model in fact misclassified this example as preparation, we can still use masking importance to identify To, since the score queried from the model corresponds to a relative decrease in the probability of the correct relation, purpose, even if this was not the highest scoring relation overall. In SECREF36 we see the model's ability to correctly predict contrast based on the DM but. Note that despite a rather long sentence, the model does not need any other word nearly as much for the classification. Although the model is not trained explicitly to detect discourse markers, the DM can be recognized due to the fact that masking it leads to a drop of 66% softmax probability (${\Delta }_s$=0.66) of this pair representing the contrast relation. We can also note that a somewhat lower scoring content word is also marked: hard (${\Delta }_s$=0.18). In our gold signaling annotations, this word was marked together with comes naturally as a signal, due to the contrast between the two concepts (additionally, some people is flagged as a signal along with others). The fact that the model finds hard helpful, but does not need the contextual near antonym naturally, suggests that it is merely learning that words in the semantic space near hard may indicate contrast, and not learning about the antonymous relationship – otherwise we would expect to see `naturally' have a stronger score (see also the discussion in Section SECREF6). Finally SECREF36 shows that, much like in the case of hard, the model is not biased towards traditional DMs, confirming that it is capable of learning about content words, or neighborhoods of content words in vector space. In a long EDU pair of 41 words, the model relies almost exclusively on the word unlikely (${\Delta }_s$=0.36) to correctly label the relation as evaluation. By contrast, the anaphoric demonstrative `This' flagged by the human annotator, which is a more common function word, is disregarded, perhaps because it can appear with several other relations, and is not particularly exclusive to evaluation. 
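To make the masking procedure concrete, the following minimal sketch shows one way ${\Delta }_s$ and the two shading reference values could be computed. It is an illustration rather than the original implementation: the function predict_proba, assumed to return the model's softmax distribution over relation labels for a token sequence, and the MASK placeholder are stand-ins for the actual classifier described above.

    # Illustrative sketch of delta-softmax scoring; `predict_proba(tokens)` is an
    # assumed function returning a dict from relation labels to softmax probabilities.

    SEPARATORS = {"<s>", "<sep>", "<n>"}
    MASK = "<X>"  # placeholder substituted for the token being masked

    def delta_softmax(tokens, gold_rel, predict_proba):
        """Return a list of (token, delta_s) pairs for one EDU pair."""
        base = predict_proba(tokens)[gold_rel]          # probability with nothing masked
        scores = []
        for i, tok in enumerate(tokens):
            if tok in SEPARATORS:                       # separator tokens are ignored
                scores.append((tok, 0.0))
                continue
            masked = tokens[:i] + [MASK] + tokens[i + 1:]
            scores.append((tok, base - predict_proba(masked)[gold_rel]))
        return scores

    def shading(delta, max_in_pair, max_in_doc):
        """Blend relative importance within the pair and within the document (50%/50%)."""
        in_pair = delta / max_in_pair if max_in_pair > 0 else 0.0
        in_doc = delta / max_in_doc if max_in_doc > 0 else 0.0
        return 0.5 * in_pair + 0.5 * in_doc

Under this reading, masking `but' in the contrast example above would remove roughly 0.66 of the correct label's probability, while most of its neighbors contribute very little.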
These results suggest that the model may be capable of recognizing signals through distant supervision, allowing it to validate human annotations, to potentially point out signals that may be missed by annotators, and most importantly, to quantify signaling strength on a sliding scale. At the same time, we need a way to evaluate the model's quality and assess the kinds of errors it makes, as well as what we can learn from them. We therefore move on to evaluating the model and its errors next. <<</Signaling Metric>>> <<</A Contextualized Neural Model>>> <<</Automatic Signal Extraction>>> <<<Evaluation and Error Analysis>>> <<<Evaluation Metric>>> To evaluate the neural model, we would like to know how well ${\Delta }_s$ corresponds to annotators' gold standard labels. This leads to two kinds of problems: the first is that the model is distantly supervised, and therefore does not know about signal types, subtypes, or any aspect of signaling annotation and its relational structure. The second problem is that signaling annotations are categorical, and do not correspond to the ratio-scaled predictions provided by ${\Delta }_s$ (this is in fact one of the motivations for desiring a model-based estimate of signaling strength). The first issue means that we can only examine the model's ability to locate signals – not to classify them. Although there may be some conceivable ways of analyzing model output to identify classes such as DMs (which are highly lexicalized, rather than representing broad regions of vector space, as words such as unlikely might), or more contextual relational signals, such as pronouns, this line of investigation is beyond the scope of the present paper. A naive solution to the second problem might be to identify a cutoff point, e.g. deciding that all and only words scoring ${\Delta }_s>$0.15 are predicted to be signals. The problem with the latter approach is that sentences can be very different in many ways, and specifically in both length and in levels of ambiguity. Sentences with multiple, mutually redundant cues, may produce lower ${\Delta }_s$ scores compared to shorter sentences with a subset of the same cues. Conversely, in very short sentences with low signal strength, the model may reasonably be expected to degrade very badly with the deletion of almost any word, as the context becomes increasingly incomprehensible. For these reasons, we choose to adopt an evaluation metric from the paradigm of information retrieval, and focus on recall@k (recall at rank k, for $k=1,2,3$...). The idea is to poll the model for each sentence in which some signals have been identified, and see whether the model is able to find them if we let it guess using the word with the maximal ${\Delta }_s$ score (recall@1), regardless of how high that score is, or alternatively relax the evaluation criteria and see whether the human annotator's signal tokens appear at rank 2 or 3. Figure FIGREF40 shows numbers for recall@k for the top 3 ranks outputted by the model, next to random guess baselines. The left, middle and right panels in Figure FIGREF40 correspond to measurements when all signals are included, only cases contained entirely in the head EDUs shown to the model, and only DMs, respectively. The scenario on the left is rather unreasonable and is included only for completeness: here the model is also penalized for not detecting signals such as lexical chains, part of which is outside the units that the model is being shown. An example of such a case can be seen in Figure FIGREF41. 
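Before turning to that example, the recall@k evaluation just described can be sketched as follows; the data layout is assumed, and a hit is read here as any human-annotated signal token appearing among the top k tokens ranked by ${\Delta }_s$.

    def recall_at_k(scored_pairs, k):
        """scored_pairs: list of (delta_s_scores, gold_indices) tuples, one per EDU pair
        containing at least one gold signal token; delta_s_scores is a list of per-token
        scores and gold_indices a set of token positions annotated as signals."""
        hits = 0
        for delta_s_scores, gold_indices in scored_pairs:
            ranked = sorted(range(len(delta_s_scores)),
                            key=lambda i: delta_s_scores[i], reverse=True)
            if any(i in gold_indices for i in ranked[:k]):  # a guess at rank <= k hits a signal
                hits += 1
        return hits / len(scored_pairs) if scored_pairs else 0.0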
The phrase Respondents in unit [23] signals the relation elaboration, since it is coreferential with a previous mention of the respondents in [21]. However, because the model is only given heads of EDU blocks to classify, it does not have access to the first occurrence of respondents while predicting the elaboration relation – the first half of the signal token set is situated in a child of the nucleus EDU before the relation, i.e. it belongs to group IV in the taxonomy in Table TABREF20. Realistically, our model can only be expected to learn about signals from `directly participating' EDUs, i.e. groups I, II, VI and VII, the `endocentric' signal groups from Section SECREF16. Although most signals belong to endocentric categories (71.62% of signaled relations belong to these groups, cf. Table TABREF20), exocentric cases form a substantial portion of signals which we have little hope of capturing with the architecture used here. As a result, recall metrics in the `all signals' scenario are closest to the random baselines, though the signals detected in other instances still place the model well above the baseline. A more reasonable evaluation is the one in the middle panel of Figure FIGREF40, which includes only endocentric signals as defined in the taxonomy. EDUs with no endocentric signals are completely disregarded in this scenario, which substantially reduces the number of tokens considered to be signals, since, while many tokens are part of some meaningful lexical chain in the document, requiring signals to be contained only in the pair of head units eliminates a wide range of candidates. Although the random baseline is actually very slightly higher (perhaps because eliminated EDUs were often longer ones, sharing small amounts of material with larger parts of the text, and therefore prone to penalizing the baseline; many words mean more chances for a random guess to be wrong), model accuracy is substantially better in this scenario, reaching a 40% chance of hitting a signal with only one guess, exceeding 53% with two guesses, and capping at 64% for recall@3, over 20 points above baseline. Finally, the right panel in the figure shows recall when only DMs are considered. In this scenario, a random guess fares very poorly, since most words are not DMs. The model, by contrast, achieves the highest results in all metrics, since DMs have the highest cue validity for relation classification, and the model attends to them most strongly. With just one guess, recall is over 56%, and goes as high as 67% for recall@3. The baseline only goes as high as 16% for three guesses. <<</Evaluation Metric>>> <<<Qualitative Analysis>>> Looking at the model's performance qualitatively, it is clear that it can detect not only DMs, but also morphological cues (e.g. gerunds as markers of elaboration, as in SECREF43), semantic classes and sentiment, such as positive and negative evaluatory terms in SECREF43, as well as multiple signals within the same EDU, as in SECREF43. In fact, only about 8.3% of the tokens correctly identified by the model in Table TABREF45 below are of the DM type, whereas about 7.2% of all tokens flagged by human annotators were DMs, meaning that the model frequently matches non-DM items to discourse relation signals (see Performance on Signal Types below). 
It should also be noted that signals can be recognized even when the model misclassifies relations, since ${\Delta }_s$ does not rely on correct classification: it merely quantifies the contribution of a word in context toward the correct label's score. If we examine the influence of each word on the score of the correct relation, that impact should and does still correlate with human judgments based on what the system may tag as the second or third best class to choose. . [RGB]230, 230, 230For [RGB]230, 230, 230the [RGB]230, 230, 230present [RGB]230, 230, 230analysis [RGB]230, 230, 230, [RGB]230, 230, 230these [RGB]230, 230, 230responses [RGB]230, 230, 230were [RGB]230, 230, 230recoded [RGB]230, 230, 230into [RGB]230, 230, 230nine [RGB]230, 230, 230mutually [RGB]230, 230, 230exclusive [RGB]230, 230, 230categories $\xleftarrow[\text{pred:elaboration}]{\text{gold:result}}$ [RGB]63, 63, 63capturing [RGB]219, 219, 219the [RGB]230, 230, 230following [RGB]230, 230, 230options [RGB]135, 135, 135: . [RGB]185, 185, 185Professor [RGB]219, 219, 219Eastman [RGB]223, 223, 223said [RGB]207, 207, 207he [RGB]194, 194, 194is [RGB]64, 64, 64alarmed [RGB]230, 230, 230by [RGB]230, 230, 230what [RGB]230, 230, 230they [RGB]230, 230, 230found [RGB]230, 230, 230. $\xrightarrow[\text{pred:preparation}]{\text{gold:evaluation}}$ [RGB]230, 230, 230" [RGB]230, 230, 230Pregnant [RGB]229, 229, 229women [RGB]187, 187, 187in [RGB]230, 230, 230Australia [RGB]98, 98, 98are [RGB]213, 213, 213getting [RGB]230, 230, 230about [RGB]230, 230, 230half [RGB]171, 171, 171as [RGB]159, 159, 159much [RGB]230, 230, 230as [RGB]230, 230, 230what [RGB]155, 155, 155they [RGB]155, 155, 155require [RGB]223, 223, 223on [RGB]214, 214, 214a [RGB]109, 109, 109daily [RGB]176, 176, 176basis [RGB]111, 111, 111. . [RGB]195, 195, 195Even [RGB]230, 230, 230so [RGB]230, 230, 230, [RGB]230, 230, 230estimates [RGB]230, 230, 230of [RGB]230, 230, 230the [RGB]230, 230, 230prevalence [RGB]230, 230, 230of [RGB]230, 230, 230perceived [RGB]230, 230, 230discrimination [RGB]219, 219, 219remains [RGB]230, 230, 230rare $\xleftarrow[\text{pred:evidence}]{\text{gold:concession}}$ [RGB]111, 111, 111At [RGB]63, 63, 63least [RGB]230, 230, 230one [RGB]230, 230, 230prior [RGB]230, 230, 230study [RGB]230, 230, 230by [RGB]230, 230, 230Kessler [RGB]225, 225, 225and [RGB]230, 230, 230colleagues [RGB]230, 230, 230[ [RGB]230, 230, 23015 [RGB]161, 161, 161] [RGB]200, 200, 200, [RGB]136, 136, 136however [RGB]222, 222, 222, [RGB]228, 228, 228using [RGB]230, 230, 230measures [RGB]230, 230, 230of [RGB]230, 230, 230perceived [RGB]224, 224, 224discrimination [RGB]217, 217, 217in [RGB]230, 230, 230a [RGB]230, 230, 230large [RGB]218, 218, 218American [RGB]230, 230, 230sample [RGB]230, 230, 230, [RGB]230, 230, 230reported [RGB]230, 230, 230that [RGB]230, 230, 230approximately [RGB]230, 230, 23033 [RGB]212, 212, 212% [RGB]230, 230, 230of [RGB]230, 230, 230respondents [RGB]156, 156, 156reported [RGB]169, 169, 169some [RGB]122, 122, 122form [RGB]168, 168, 168of [RGB]230, 230, 230discrimination Unsurprisingly, the model sometimes make sporadic errors in signal detection for which good explanations are hard to find, especially when its predicted relation is incorrect, as in SECREF43. Here the evaluative adjective remarkable is missed in favor of neighboring words such as agreed and a subject pronoun, which are not indicative of the evaluation relation in this context but are part of several cohorts of high scoring words. 
However, the most interesting and interpretable errors arise when ${\Delta }_s$ scores are high compared to an entire document, and not just among words in one EDU pair, in which most or even all words may be relatively weak signals. As an example of such a false positive with high confidence, we can consider SECREF43. In this example, the model correctly assigns the highest score to the DM so marking a purpose relation. However, it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the benefactee of some action, which contributes to the purpose reading and helps to disambiguate so, despite not being considered a signal by annotators. . [RGB]216, 216, 216The [RGB]99, 99, 99agreement [RGB]89, 89, 89was [RGB]230, 230, 230that [RGB]131, 131, 131Gorbachev [RGB]102, 102, 102agreed [RGB]230, 230, 230to [RGB]230, 230, 230a [RGB]230, 230, 230quite [RGB]230, 230, 230remarkable [RGB]125, 125, 125concession [RGB]230, 230, 230: $\xrightarrow[\text{pred:preparation}]{\text{gold:evaluation}}$ [RGB]64, 64, 64he [RGB]81, 81, 81agreed [RGB]230, 230, 230to [RGB]230, 230, 230let [RGB]220, 220, 220a [RGB]143, 143, 143united [RGB]149, 149, 149Germany [RGB]230, 230, 230join [RGB]83, 83, 83the [RGB]230, 230, 230NATO [RGB]230, 230, 230military [RGB]230, 230, 230alliance [RGB]230, 230, 230. . [RGB]230, 230, 230The [RGB]220, 220, 220opening [RGB]230, 230, 230of [RGB]230, 230, 230the [RGB]230, 230, 230joke [RGB]230, 230, 230— [RGB]230, 230, 230or [RGB]230, 230, 230setup [RGB]230, 230, 230— [RGB]230, 230, 230should [RGB]230, 230, 230have [RGB]230, 230, 230a [RGB]230, 230, 230basis [RGB]230, 230, 230in [RGB]230, 230, 230the [RGB]230, 230, 230real [RGB]200, 200, 200world $\xleftarrow[\text{pred:purpose}]{\text{gold:purpose}}$ [RGB]7, 7, 7so [RGB]73, 73, 73your [RGB]230, 230, 230audience [RGB]230, 230, 230can [RGB]230, 230, 230relate [RGB]230, 230, 230to [RGB]230, 230, 230it [RGB]230, 230, 230, In other cases, the model points out plausible signals which were passed over by an annotator, and may be considered errors in the gold standard. For example, the model easily notices that question marks indicate the solutionhood relation, even where these were skipped by annotators in favor of marking WH words instead: . [RGB]230, 230, 230Which [RGB]230, 230, 230previous [RGB]230, 230, 230Virginia [RGB]230, 230, 230Governor(s) [RGB]230, 230, 230do [RGB]230, 230, 230you [RGB]230, 230, 230most [RGB]230, 230, 230admire [RGB]230, 230, 230and [RGB]230, 230, 230why [RGB]12, 12, 12? $\xrightarrow[\text{pred:solutionhood}]{\text{gold:solutionhood}}$ [RGB]230, 230, 230Thomas [RGB]230, 230, 230Jefferson [RGB]183, 183, 183. From the model's perspective, the question mark, which scores ${\Delta }_s$=0.79, is the single most important signal, and virtually sufficient for classifying the relation correctly, though it was left out of the gold annotations. The WH word Which and the sentence final why, by contrast, were noticed by annotators but were are not as unambiguous (the former could be a determiner, and the latter in sentence final position could be part of an embedded clause). In the presence of the question mark, their individual removal has much less impact on the classification decision. Although the model's behavior is sensible and can reveal annotation errors, it also suggests that ${\Delta }_s$ will be blind to auxiliary signals in the presence of very strong, independently sufficient cues. 
Using the difference in likelihood of correct relation prediction as a metric also raises the possibility of an opposite concept to signals, which we will refer to as distractors. Since ${\Delta }_s$ is a signed measure of difference, it is in fact possible to obtain negative values whenever the removal or masking of a word results in an improvement in the model's ability to predict the relation. In such cases, and especially when the negative value is of a large magnitude, it seems like a reasonable interpretation to say that a word functions as a sort of anti-signal, preventing or complicating the recognition of what might otherwise be a more clear-cut case. Examples SECREF43–SECREF43 show some instances of distractors identified by the masking procedure (distractors with ${\Delta }_s<$-0.2 are underlined). . [RGB]230, 230, 230How [RGB]230, 230, 230do [RGB]230, 230, 230they [RGB]201, 201, 201treat [RGB]167, 167, 167those [RGB]210, 210, 210not [RGB]190, 190, 190like [RGB]230, 230, 230themselves [RGB]100, 100, 100? $\xrightarrow[\text{pred:solutionhood}]{\text{gold:preparation}}$ [RGB]52, 52, 52then [RGB]230, 230, 230they [RGB]230, 230, 230're [RGB]230, 230, 230either [RGB]230, 230, 230over-zealous [RGB]230, 230, 230, [RGB]230, 230, 230ignorant [RGB]230, 230, 230of [RGB]230, 230, 230other [RGB]230, 230, 230people [RGB]230, 230, 230or [RGB]230, 230, 230what [RGB]230, 230, 230to [RGB]230, 230, 230avoid [RGB]230, 230, 230those [RGB]230, 230, 230that [RGB]230, 230, 230contradict [RGB]230, 230, 230their [RGB]230, 230, 230fantasy [RGB]230, 230, 230land [RGB]230, 230, 230that [RGB]220, 220, 220caters [RGB]230, 230, 230to [RGB]230, 230, 230them [RGB]230, 230, 230and [RGB]230, 230, 230them [RGB]230, 230, 230only [RGB]230, 230, 230. . [RGB]230, 230, 230God [RGB]230, 230, 230, [RGB]230, 230, 230I [RGB]230, 230, 230do [RGB]230, 230, 230n't [RGB]230, 230, 230know [RGB]51, 51, 51! $\xrightarrow[\text{pred:preparation}]{\text{gold:preparation}}$ [RGB]230, 230, 230but [RGB]230, 230, 230nobody [RGB]230, 230, 230will [RGB]230, 230, 230go [RGB]230, 230, 230to [RGB]230, 230, 230fight [RGB]230, 230, 230for [RGB]230, 230, 230noses [RGB]230, 230, 230any [RGB]219, 219, 219more [RGB]169, 169, 169. In SECREF43, a rhetorical question trips up the classifier, which predicts the question-answer relation solutionhood instead of preparation. Here the initial WH word How and the subsequent auxiliary do-support both distract (with ${\Delta }_s$=-0.23 and -0.25) from the preparation relation, which is however being signaled positively by the DM then in the nucleus unit. Later on, the adverb only is also disruptive (${\Delta }_s$=-0.31), perhaps due to a better association with adversative relations, such as contrast. In SECREF43, a preparatory “God, I don't know!” is followed up with a nucleus starting with but, which typically marks a concession or other adversative relation. In fact, the DM but is related to a concessive relation with another EDU (not shown), which the model is not aware of while making the classification for the preparation. Although this example reveals a weakness in the model's inability to consider broader context, it also reveals the difficulty of expecting DMs to fall in line with a strong nuclearity assumption: since units serve multiple functions as satellites and nuclei, signals which aid the recognition of one relation may hinder the recognition of another. 
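As a rough illustration of how the signed metric separates the two notions, a sketch such as the following could flag candidate signals and distractors for inspection; the -0.2 cutoff echoes the underlining convention used in the examples above, while the positive cutoff is arbitrary, since, as discussed in the Evaluation Metric section, signal detection is not actually evaluated with a fixed threshold.

    def flag_candidates(scored_tokens, signal_cutoff=0.15, distractor_cutoff=-0.2):
        """scored_tokens: list of (token, delta_s) pairs for one EDU pair.
        Returns (signals, distractors); both cutoffs are illustrative only."""
        signals = [tok for tok, d in scored_tokens if d >= signal_cutoff]
        distractors = [tok for tok, d in scored_tokens if d <= distractor_cutoff]
        return signals, distractors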
<<</Qualitative Analysis>>> <<<Performance on Signal Types>>> To better understand the kinds of signals which the model captures better or worse, Table TABREF45 gives a breakdown of performance by signal type and specific signal categories, for categories attested over 20 times (note that the categories are human labels assigned to the corresponding positions – the system does not predict signal types). To evaluate performance for all types we cannot use recall@1–3, since some sentences contain more than 3 signal tokens, which would lead to recall errors even if the top 3 ranks are correctly identified signals. The scores in the table therefore express how many of the signal tokens belonging to each subtype in the gold annotations are recognized if we allow the system to make as many guesses as there are signal tokens in each EDU pair, plus a tolerance of a maximum of 2 additional tokens (similarly to recall@3). We also note that a single token may be associated with multiple signal types, in which case its identification or omission is counted separately for each type. Three of the top four categories which the model performs best for are, perhaps unsurprisingly, the most lexical ones: alternate expression captures non-DM phrases such as I mean (for elaboration), or the problem is (for concession), and indicative word includes lexical items such as imperative see (consistently marking evidence in references within academic articles) or evaluative adjectives such as interesting for evaluation. The good performance of the category colon captures the model's recognition of colons as important punctuation, primarily predicting preparation. The only case of a `relational' category, requiring attention to two separate positions in the input, which also fares well is synonymy, though this is often based on flagging only one of two items annotated as synonymous, and is based on rather few examples. We can find only one example, SECREF44, where both sides of a pair of similar words is actually noticed, which both belong to the same stem (decline/declining): . [RGB]230, 230, 230The [RGB]230, 230, 230report [RGB]209, 209, 209says [RGB]213, 213, 213the [RGB]172, 172, 172decline [RGB]220, 220, 220in [RGB]228, 228, 228iodine [RGB]230, 230, 230intake [RGB]215, 215, 215appears [RGB]230, 230, 230to [RGB]230, 230, 230be [RGB]230, 230, 230due [RGB]230, 230, 230to [RGB]230, 230, 230changes [RGB]230, 230, 230in [RGB]230, 230, 230the [RGB]230, 230, 230dairy [RGB]230, 230, 230industry [RGB]230, 230, 230, [RGB]230, 230, 230where [RGB]230, 230, 230chlorine-containing [RGB]230, 230, 230sanitisers [RGB]226, 226, 226have [RGB]230, 230, 230replaced [RGB]230, 230, 230iodine-containing [RGB]230, 230, 230sanitisers [RGB]230, 230, 230. $\xleftarrow[\text{pred:background}]{\text{gold:justify}}$ [RGB]193, 193, 193Iodine [RGB]230, 230, 230released [RGB]230, 230, 230from [RGB]230, 230, 230these [RGB]230, 230, 230chemicals [RGB]230, 230, 230into [RGB]216, 216, 216milk [RGB]230, 230, 230has [RGB]230, 230, 230been [RGB]230, 230, 230the [RGB]230, 230, 230major [RGB]230, 230, 230source [RGB]230, 230, 230of [RGB]226, 226, 226dietary [RGB]206, 206, 206iodine [RGB]230, 230, 230in [RGB]230, 230, 230Australia [RGB]230, 230, 230for [RGB]230, 230, 230at [RGB]230, 230, 230least [RGB]230, 230, 230four [RGB]230, 230, 230decades [RGB]202, 202, 202, [RGB]153, 153, 153but [RGB]230, 230, 230is [RGB]230, 230, 230now [RGB]63, 63, 63declining [RGB]79, 79, 79. 
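The per-subtype scores reported in Table TABREF45 can be thought of along the lines of the following sketch, in which each gold signal token counts as recognized if it falls among the top n+2 tokens by ${\Delta }_s$ for its EDU pair, where n is the number of gold signal tokens in that pair; the names and data layout here are illustrative assumptions.

    from collections import defaultdict

    def per_type_recognition(pairs):
        """pairs: iterable of (delta_s_scores, gold_signals), where gold_signals is a list
        of (token_index, signal_subtype) tuples; a token carrying several subtypes is
        counted separately for each. Returns {subtype: (recognized, total)}."""
        counts = defaultdict(lambda: [0, 0])
        for delta_s_scores, gold_signals in pairs:
            gold_positions = {idx for idx, _ in gold_signals}
            n_guesses = len(gold_positions) + 2          # as many guesses as gold tokens, plus 2
            ranked = sorted(range(len(delta_s_scores)),
                            key=lambda i: delta_s_scores[i], reverse=True)
            guessed = set(ranked[:n_guesses])
            for idx, subtype in gold_signals:
                counts[subtype][1] += 1
                if idx in guessed:
                    counts[subtype][0] += 1
        return {subtype: tuple(c) for subtype, c in counts.items()}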
We note that our evaluation is actually rather harsh towards the model, since in multiword expressions, often only one central word is flagged by ${\Delta }_s$ (e.g. problem in “the problem is”), while the model is penalized in Table TABREF45 for each token that is not recognized (i.e. the and is, which were all flagged by a human annotator as signals in the data). Interestingly, the model fares rather well in identifying morphological tense cues, even though these are marked by both inflected lexical verbs and semantically poor auxiliaries (e.g. past perfect auxiliary had marking background); but modality cues (especially can or could for evaluation) are less successfully identified, suggesting they are either more ambiguous, or mainly relevant in the presence of evaluative content words which out-score them. Other relational categories from the middle of the table which ostensibly require matching pairs of words, such as repetition, meronymy, or personal reference (coreference) are mainly captured by the model when a single item is a sufficiently powerful cue, often ignoring the other half of the signal, as shown in SECREF44. . [RGB]230, 230, 230On [RGB]230, 230, 230a [RGB]230, 230, 230new [RGB]230, 230, 230website [RGB]230, 230, 230, [RGB]230, 230, 230" [RGB]230, 230, 230The [RGB]230, 230, 230Internet [RGB]230, 230, 230Explorer [RGB]230, 230, 2306 [RGB]230, 230, 230Countdown [RGB]230, 230, 230" [RGB]230, 230, 230, [RGB]230, 230, 230Microsoft [RGB]230, 230, 230has [RGB]230, 230, 230launched [RGB]230, 230, 230an [RGB]230, 230, 230aggressive [RGB]230, 230, 230campaign [RGB]230, 230, 230to [RGB]230, 230, 230persuade [RGB]230, 230, 230users [RGB]230, 230, 230to [RGB]230, 230, 230stop [RGB]171, 171, 171using [RGB]133, 133, 133IE6 $\xleftarrow[\text{pred:elaboration}]{\text{gold:elaboration}}$ [RGB]56, 56, 56Its [RGB]197, 197, 197goal [RGB]167, 167, 167is [RGB]230, 230, 230to [RGB]230, 230, 230decrease [RGB]230, 230, 230IE6 [RGB]230, 230, 230users [RGB]230, 230, 230to [RGB]230, 230, 230less [RGB]230, 230, 230than [RGB]230, 230, 230one [RGB]124, 124, 124percent [RGB]229, 229, 229. Here the model has learned that an initial possessive pronoun, perhaps in the context of a subject NP in a copula sentence (note the shading of the following is) is an indicator of an elaboration relation, even though there is no indication that the model has noticed which word is the antecedent. Similarly for the count category, the model only learns to notice the possible importance of some numbers, but is not actually aware of whether they are identical (e.g. for restatement) or different (e.g. in contrast). Finally, some categories are actually recognized fairly reliably, but are penalized by the same partial substring issue identified above: Date expressions are consistently flagged as indicators of circumstance, but often a single word, such as a weekday in SECREF44, is dominant, while the model is penalized for not scoring other words as highly (including commas within dates, which are marked as part of the signal token span in the gold standard, but whose removal does not degrade prediction accuracy). In this case it seems fair to say that the model has successfully recognized the date signal of `Wednesday April 13', yet it loses points for missing two instances of `,', and the `2011', which is no longer necessary for recognizing that this is a date. . 
[RGB]230, 230, 230NASA [RGB]230, 230, 230celebrates [RGB]230, 230, 23030th [RGB]230, 230, 230anniversary [RGB]230, 230, 230of [RGB]230, 230, 230first [RGB]230, 230, 230shuttle [RGB]230, 230, 230launch [RGB]230, 230, 230; $\xleftarrow[\text{pred:circumstance}]{\text{gold:circumstance}}$ [RGB]11, 11, 11Wednesday [RGB]186, 186, 186, [RGB]115, 115, 115April [RGB]153, 153, 15313 [RGB]219, 219, 219, [RGB]230, 230, 2302011 <<</Performance on Signal Types>>> <<</Evaluation and Error Analysis>>> <<<Discussion>>> This paper has used a corpus annotated for discourse relation signals within the framework of the RST Signalling Corpus (BIBREF12) and extended with anchored signal annotations (BIBREF27) to develop a taxonomy of unrestricted and hierarchically aware discourse signal positions, as well as a data-driven neural network model to explore distantly supervised signal word extraction. The results shed light on the distribution of signal categories from the RST-SC taxonomy in terms of associated word forms, and show the promise of neural models with contextual embeddings for the extraction of context dependent and gradient discourse signal detection in individual texts. The metric developed for the evaluation, $\Delta _s$, allows us to assess the relative importance of signal words for automatic relation classification, and reveal observations for further study, as well as shortcomings which point to the need to develop richer feature representations and system architectures in future work. The model presented in the previous sections is clearly incomplete in both its classification accuracy and its ability to recognize the same signals that humans do. However, given the fact that it is trained entirely without access to discourse signal annotations and is unaware of any of the guidelines used to create the gold standard that it is evaluated on, its performance may be considered surprisingly good. As an approach to extracting discourse signals in a data-driven way, similar to frequentist methods or association measures used in previous work, we suggest that this model forms a more fine grained tool, capable of taking context into consideration and delivering scores for each instance of a signal candidate, rather than resulting in a table of undifferentiated signal word types. Additionally, although we consider human signal annotations to be the gold standard in identifying the presence of relevant cues, the ${\Delta }_s$ metric gives new insights into signaling which cannot be approached using manual signaling annotations. Firstly, the quantitative nature of the metric allows us to rank signaling strength in a way that humans have not to date been able to apply: using ${\Delta }_s$, we can say which instances of which signals are evaluated as stronger, by how much, and which words within a multi-word signal instance are the most important (e.g. weekdays in dates are important, the commas are not). Secondly, the potential for negative values of the metric opens the door to the study of negative signals, or `distractors', which we have only touched upon briefly in this paper. And finally, we consider the availability of multiple measurements for a single DM or other discourse signal to be a potentially very interesting window into the relative ambiguity of different signaling devices (cf. BIBREF16) and for research on the contexts in which such ambiguity results. To see how ambiguity is reflected in multiple measurements of ${\Delta }_s$, we can consider Figure FIGREF47. 
The figure shows boxplots for multiple instances of the same signal tokens. We can see that words like and are usually not strong signals, with the entire interquartile range scoring less than 0.02, i.e. aiding relation classification by less than 2%, with some values dipping into the negative region (i.e. cases functioning as distractors). However, some outliers are also present, reaching almost as high as 0.25 – these are likely to be coordinating predicates, which may signal relations such as sequence or joint. A word such as but is more important overall, with the box far above and, but still covering a wide range of values: these can correspond to more or less ambiguous cases of but, but also to cases in which the word is more or less irreplaceable as a signal. In the presence of multiple signals for the same relation, the presence of but should be less important. We can also see that but can be a distractor with negative values, as we saw in example SECREF43 above. As far as we are aware, this is the first empirical corpus-based evidence giving a quantitative confirmation to the intuition that `but' in context is significantly less ambiguous as a discourse marker than `and'; the overlap in their bar plots indicate that they can be similarly ambiguous or even distracting in some cases, but the difference in interquartile ranges makes it clear that these are exceptions. For less ambiguous DMs, such as if, we can also see a contrast between lower and upper case instances: upper case If is almost always a marker of condition, but the lower case if is sometimes part of an embedded object clause, which is not segmented in the corpus and does not mark a conditional relation (e.g. “they wanted to see if...”). For the word to, the figure suggests a strongly bimodal distribution, with a core population of (primarily prepositional) discourse-irrelevant to, and a substantial number of outliers above a large gap, representing to in infinitival purpose clauses (though not all to infinitives mark such clauses, as in adnominal “a chance to go”, which the model is usually able to distinguish in context). In other words, our model can not only disambiguate ambiguous strings into grammatical categories, but also rank members of the same category by importance in context, as evidenced by its ability to correctly classify high frequency items like `to' or `and' as true positives. A frequentist approach would not only lack this ability – it would miss such items altogether, due to its overall high string frequency and low specificity. Beyond what the results can tell us about discourse signals in this particular corpus, the fact that the neural model is sensitive to mutual redundancy of signals raises interesting theoretical questions about what human annotators are doing when they characterize multiple features of a discourse unit as signals. If it is already evident from the presence of a conventional DM that some relation applies, are other, less explicit signals which might be relied on in the absence of the DM, equally `there'? Do we need a concept of primary and auxiliary signals, or graded signaling strength, in the way that a metric such as ${\Delta }_s$ suggests? Another open question relates to the postulation of distractors as an opposite concept to discourse relation signals. 
While we have not tested this so far, it is interesting to ask to what extent human analysts are aware of distractors, whether we could form annotation guidelines to recognize them, and how humans weigh the value of signals and potential distractors in extrapolating intended discourse relations. It seems likely that distractors affecting humans may be found in cases of misunderstanding or ambiguity of discourse relations (see also BIBREF25). Finally, the error analysis for signal detection complements the otherwise opaque relation classification results in Table TABREF34 in showing some of the missing sources of information that our model would need in order to work better. We have seen that relational information, such as identifying not just the presence of a pronoun but also its antecedent, or both sides of lexical semantic relations such as synonymy, meronymy or antonymy, as well as comparing count information, are still unavailable to the classifier – if they were being used, then ${\Delta }_s$ would reflect the effects of their removal, but this is largely not the case. This suggests that, in the absence of vastly larger discourse annotated corpora, discourse relation recognition may require the construction of either features, architectures, or both, which can harness abstract relational information of this nature beyond the memorization of specific pairs of words (or regions of vector space with similar words) that are already attested in the limited training data. In this vein, BIBREF54 conducted a series of experiments on automatic sense prediction for four top-level implicit discourse relations within the PDTB framework, which also suggested benefits for using linguistically-informed features such as verb information, polarity tags, context, lexical items (e.g. first and last words of the arguments; first three words in the sentence) etc. The model architecture and input data are also in need of improvements, as the current architecture can only be expected to identify endocentric signals. The substantial amount of exocentric signaling cases is in itself an interesting finding, as it suggests that relation classification from head EDU pairs may ultimately have a natural ceiling that is considerably below what could be inferred from looking at larger contexts. We predict that as we add more features to the model and improve its architecture in ways that allow it to recognize the kinds of signals that humans do, classification accuracy will increase; and conversely, as classification accuracy rises, measurements based on ${\Delta }_s$ will overlap increasingly with human annotations of anchored signals. In sum, we believe that there is room for much research on what relation classification models should look like, and how they can represent the kinds of information found in non-trivial signals. The results of this line of work can therefore benefit NLP systems targeting discourse relations by suggesting locations within the text which systems should attend to in one way or another. Moreover, we think that using distant-supervised techniques for learning discourse relations (e.g. BIBREF55) is promising in the development of discourse models using the proposed dataset. We hope to see further analyses benefit from this work and the application of metrics such as ${\Delta }_s$ to other datasets, within more complex models, and using additional features to capture such information. 
We also hope to see applications that rely on discourse relations, such as machine comprehension BIBREF20 and sentiment analysis BIBREF55, benefit from the proposed model architecture as well as the dataset. <<</Discussion>>> <<</Title>>>
{ "references": [ "model points out plausible signals which were passed over by an annotator,it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the benefactee of some action" ], "type": "extractive" }
2001.02380
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: Where does the proposed metric overlap with human judgement? Context: <<<Title>>> A Neural Approach to Discourse Relation Signal Detection <<<Abstract>>> Previous data-driven work investigating the types and distributions of discourse relation signals, including discourse markers such as 'however' or phrases such as 'as a result', has focused on the relative frequencies of signal words within and outside text from each discourse relation. Such approaches do not allow us to quantify the signaling strength of individual instances of a signal on a scale (e.g. more or less discourse-relevant instances of 'and'), to assess the distribution of ambiguity for signals, or to identify words that hinder discourse relation identification in context ('anti-signals' or 'distractors'). In this paper we present a data-driven approach to signal detection using a distantly supervised neural network and develop a metric, {\Delta}s (or 'delta-softmax'), to quantify signaling strength. Ranging between -1 and 1 and relying on recent advances in contextualized word embeddings, the metric represents each word's positive or negative contribution to the identifiability of a relation in specific instances in context. Based on an English corpus annotated for discourse relations using Rhetorical Structure Theory and signal type annotations anchored to specific tokens, our analysis examines the reliability of the metric, the places where it overlaps with and differs from human judgments, and the implications for identifying features that neural models may need in order to perform better on automatic discourse relation classification. <<</Abstract>>> <<<Introduction>>> The development of formal frameworks for the analysis of discourse relations has long gone hand in hand with work on signaling devices. The analysis of discourse relations is also closely tied to what a discourse structure should look like and what discourse goals should be fulfilled in relation to the interpretation of discourse relations BIBREF0. Earlier work on the establishment of inventories of discourse relations and their formalization (BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6) relied on the existence of `discourse markers' (DMs) or `connectives', including conjunctions such as because or if, adverbials such as however or as a result, and coordinations such as but, to identify and distinguish relations such as condition in SECREF1, concession in SECREF1, cause in SECREF1, or contrast, result etc., depending on the postulated inventory of relations (signals for these relations as identified by human analysts are given in bold; examples come from the GUM corpus BIBREF7, presented in Section SECREF3). . [If you work for a company,]$_{\textsc {condition}}$ [they pay you that money.] . [Albeit limited,]$_{\textsc {concession}}$ [these results provide valuable insight into SI interpretation by Chitonga-speaking children.] .
[not all would have been interviewed at Wave 3] [due to differential patterns of temporary attrition]$_{\textsc {cause}}$ The same reasoning of identifying relations based on overt signals has been applied to the comparison of discourse relations across languages, by comparing inventories of similar function words cross-linguistically (BIBREF8, BIBREF9); and the annotation guidelines of prominent contemporary corpora rely on such markers as well: for instance, the Penn Discourse Treebank (see BIBREF10) explicitly refers to either the presence of DMs or the possibility of their insertion in cases of implicit discourse relations, and DM analysis in Rhetorical Structure Theory BIBREF11 has also shown the important role of DMs as signals of discourse relations at all hierarchical levels of discourse analysis BIBREF12. At the same time, research over the past two decades analyzing the full range of possible cues that humans use to identify the presence of discourse relations has suggested that classic DMs such as conjunctions and adverbials are only a part of the network of signals that writers or speakers can harness for discourse structuring, which also includes entity-based cohesion devices (e.g. certain uses of anaphora, see BIBREF13), alternative lexicalizations using content words, as well as syntactic constructions (see BIBREF14 and the addition of alternative lexicalization constructions, AltLexC, in the latest version of PDTB, BIBREF15). In previous work, two main approaches to extracting the inventory of discourse signal types in an open-ended framework can be identified: data-driven approaches, which attempt to extract relevant words from distributional properties of the data, using frequencies or association measures capturing their co-occurrences with certain relation types (e.g. BIBREF16, BIBREF17); and manual annotation efforts (e.g. BIBREF10, BIBREF18), which develop categorization schemes and guidelines for human evaluation of signaling devices. The former family of methods benefits from an unbiased openness to any and every type of word which may reliably co-occur with some relation types, whether or not a human might notice it while annotating, as well as the naturally graded and comparable nature of the resulting quantitative scores, but, as we will show, falls short in identifying specific cases of a word being a signal (or not) in context. By contrast, the latter approach allows for the identification of individual instances of signaling devices, but relies on less open-ended guidelines and is categorical in nature: a word either is or isn't a signal in context, providing less access to concepts such as signaling strength. The goal of this paper is to develop and evaluate a model of discourse signal identification that is built bottom up from the data, but retains sensitivity to context in the evaluation of each individual example. In addition, even though this work is conducted within Rhetorical Structural Theory, we hope that it can shed light on signal identification of discourse relations across genres and provide empirical evidence to motivate research on theory-neutral and genre-diverse discourse processing, which would be beneficial for pushing forward theories of discourse across frameworks or formalisms. Furthermore, employing a computational approach to studying discourse relations has a promising impact on various NLP downstream tasks such as question answering and document summarization etc. 
For example, BIBREF20 incorporated discourse information into the task of automated text comprehension and benefited from such information without relying on explicit annotations of discourse structure during training, which outperformed state-of-the-art text comprehension systems at the time. Towards this goal, we begin by reviewing some previous work in the traditions sketched out above in the next section, and point out some open questions which we would like to address. In Section SECREF3 we present the discourse annotated data that we will be using, which covers a number of English text types from the Web annotated for 20 discourse relations in the framework of Rhetorical Structure Theory, and is enriched with human annotations of discourse relation signaling devices for a subset of the data. Moreover, we also propose a taxonomy of anchored signals based on the discourse annotated data used in this paper, illustrating the properties and the distribution of the anchorable signals. In Section SECREF4 we then train a distantly supervised neural network model which is made aware of the relations present in the data, but attempts to learn which words signal those relations without any exposure to explicit signal annotations. We evaluate the accuracy of our model using state-of-the-art pretrained and contextualized character and word embeddings, and develop a metric for signaling strength based on a masking concept similar to permutation importance, which naturally lends itself to the definition of both positive and negative or `anti-signals', which we will refer to as `distractors'. In Section SECREF5, we combine the anchoring annotation data from Section SECREF3 with the model's predictions to evaluate how `human-like' its performance is, using an information retrieval approach measuring recall@k and assessing the stability of different signal types based on how the model scores them. We develop a visualization for tokenwise signaling strength and perform error analysis for some signals found by the model which were not flagged by humans and vice versa, and point out the strengths and weaknesses of the architecture. Section SECREF6 offers further discussion of what we can learn from the model, what kinds of additional features it might benefit from given the error analysis, and what the distributions of scores for individual signals can teach us about the ambiguity and reliability of different signal types, opening up avenues for further research. <<</Introduction>>> <<<Previous Work>>> <<<Data-driven Approaches>>> A straightforward approach to identifying discourse relation signals in corpora with discourse parses is to extract frequency counts for all lexical types or lemmas and cross-tabulate them with discourse relations, such as sentences annotated as cause, elaboration, etc. (e.g. BIBREF21, BIBREF22, BIBREF17). Table TABREF4, reproduced from BIBREF17, illustrates this approach for the Russian RST Treebank. This approach quickly reveals the core inventory of cue words in the language, and in particular the class of low-ambiguity discourse markers (DMs), such as odnako `however' signaling contrast (see Fraser 1999 on delimiting the class of explicit DMs) or relative pronouns signaling elaboration. As such, it can be very helpful for corpus-based lexicography of discourse markers (cf. BIBREF23). The approach can potentially include multiword expressions, if applied equally to multi-token spans (e.g. 
as a result), and because it is language-independent, it also allows for a straightforward comparison of connectives or other DMs across languages. Results may also converge across frameworks, as the frequency analysis may reveal the same items in different corpora annotated using different frameworks. For instance, the inventory of connectives found in work on the Penn Discourse Treebank (PDTB, see BIBREF10) largely converges with findings on connectives using RST (see BIBREF24, BIBREF18): conjunctions such as but can mark different kinds of contrastive relations at a high level, and adverbs such as meanwhile can convey contemporaneousness, among other things, even when more fine-grained analyses are applied. However, a purely frequentist approach runs into problems on multiple levels, as we will show in Section SECREF4: high frequency and specificity to a small number of relations characterize only the most common and unambiguous discourse markers, but not less common ones. Additionally, differentiating actual and potentially ambiguous usages of candidate words in context requires substantial qualitative analysis (see BIBREF25), which is not reflected in aggregated counts, and signals that belong to a class of relations (e.g. a variety of distinct contrastive relation types) may appear to be non-specific, when in fact they reliably mark a superset of relations. Other studies have used more sophisticated metrics, such as point-wise mutual information (PMI), to identify words associated with particular relations BIBREF16. Using the PDTB corpus, BIBREF16 extracted such scores and measured the contribution of different signal types based on the information gain which they deliver for the classification of discourse relations at various degrees of granularity, as expressed by the hierarchical labels of PDTB relation types. This approach is most similar to the goal given to our own model in Section SECREF4, but is less detailed in that the aggregation process assigns a single number to each candidate lexical item, rather than assigning contextual scores to each instance. Finally we note that for hierarchical discourse annotation schemes, the data-driven approaches described here become less feasible at higher levels of abstraction, as relations connecting entire paragraphs encompass large amounts of text, and it is therefore difficult to find words with high specificity to those relations. As a result, approaches using human annotation of discourse relation signals may ultimately be irreplaceable. <<</Data-driven Approaches>>> <<<Discourse Relation Signal Annotations>>> Discourse relation signals are broadly classified into two categorizes: anchored signals and unanchored signals. By `anchoring' we refer to associating signals with concrete token spans in texts. Intuitively, most of the signals are anchorable since they correspond to certain token spans. However, it is also possible for a discourse relation to be signaled but remain unanchored. Results from BIBREF27 indicated that there are several signaled but unanchored relations such as preparation and background since they are high-level discourse relations that capture and correspond to genre features such as interview layout in interviews where the conversation is constructed as a question-answer scheme, and are thus rarely anchored to tokens. 
The Penn Discourse Treebank (PDTB V3, BIBREF15) is the largest discourse annotated corpus of English, and the largest resource annotated explicitly for discourse relation signals such as connectives, with similar corpora having been developed for a variety of languages (e.g. BIBREF28 for Turkish, BIBREF29 for Chinese). However the annotation scheme used by PDTB is ahierarchical, annotating only pairs of textual argument spans connected by a discourse relation, and disregarding relations at higher levels, such as relations between paragraphs or other groups of discourse units. Additionally, the annotation scheme used for explicit signals is limited to specific sets of expressions and constructions, and does not include some types of potential signals, such as the graphical layout of a document, lexical chains of (non-coreferring) content words that are not seen as connectives, or genre conventions which may signal the discourse function for parts of a text. It is nevertheless a very useful resource for obtaining frequency lists of the most prevalent DMs in English, as well as data on a range of phenomena such as anaphoric relations signaled by entities, and some explicitly annotated syntactic constructions. Working in the hierarchical framework of Rhetorical Structure Theory BIBREF11, BIBREF18 re-annotated the existing RST Discourse Treebank BIBREF30, by taking the existing discourse relation annotations in the corpus as a ground truth and analyzing any possible information in the data, including content words, patterns of repetition or genre conventions, as a possibly present discourse relation signaling device. The resulting RST Signalling Corpus (RST-SC, BIBREF31) consists of 385 Wall Street Journal articles from the Penn Treebank BIBREF32, a smaller subset of the same corpus used in PDTB. It contains 20,123 instances of 78 relation types (e.g. attribution, circumstance, result etc.), which are enriched with 29,297 signal annotations. BIBREF12 showed that when all types of signals are considered, over 86% of discourse relations annotated in the corpus were signaled in some way, but among these, just under 20% of cases were marked by a DM. However, unlike PDTB, the RST Signalling Corpus does not provide a concrete span of tokens for the locus of each signal, indicating instead only the type of signaling device used. Although the signal annotations in RST-SC have a broader scope than those in PDTB and are made more complex by extending to hierarchical relations, BIBREF33 have shown that RST-SC's annotation scheme can be `anchored' by associating discourse signal categories from RST-SC with concrete token spans. BIBREF27 applied the same scheme to a data set described in Section SECREF3, which we will use to evaluate our model in Section SECREF5. Since that data set is based on the same annotation scheme of signal types as RST-SC, we will describe the data for the present study and RST-SC signal type annotation scheme next. <<</Discourse Relation Signal Annotations>>> <<</Previous Work>>> <<<Data>>> <<<Anchored Signals in the GUM Corpus>>> In order to study open-ended signals anchored to concrete tokens, we use the signal-annotated subset of the freely available Georgetown University Multilayer (GUM) Corpus BIBREF7 from BIBREF27. 
Our choice to use a multi-genre RST-annotated corpus rather than using PDTB, which also contains discourse relation signal annotation to a large extent, is motivated by three reasons: The first reason is that we wish to explore the full range of potential signals, as laid out in the work on the Signalling Corpus (BIBREF12, BIBREF34), whereas PDTB annotates only a subset of the possible cues identified by human annotators. Secondly, the use of RST as a framework allows us to examine discourse relations at all hierarchical levels, including long distance, high-level relations between structures as large as paragraphs or sections, which often have different types of signals allowing their identification. Finally, although the entire GUM corpus is only about half the size of RST-DT (109K tokens), using GUM offers the advantage of a more varied range of genres than PDTB and RST-SC, both of which annotate Wall Street Journal data. The signal-annotated subset of GUM includes academic papers, how-to guides, interviews and news text, encompassing over 11,000 tokens. Although this data set may be too small to train a successful neural model for signal detection, we will not be using it for this purpose; instead, we will reserve it for use solely as a test set, and use the remainder of the data (about 98K tokens) to build our model (see Section SECREF28 for more details about the subsets and splits), including data from four further genres, for which the corpus also contains RST annotations but no signaling annotations: travel guides, biographies, fiction, and Reddit forum discussions. The GUM corpus is manually annotated with a large number of layers, including document layout (headings, paragraphs, figures, etc.); multiple POS tags (Penn tags, CLAWS5, Universal POS); lemmas; sentence types (e.g. imperative, wh-question etc., BIBREF35); Universal Dependencies BIBREF36; (non-)named entity types; coreference and bridging resolution; and discourse parses using Rhetorical Structure Theory BIBREF11. In particular, the RST annotations in the corpus use a set of 20 commonly used RST relation labels, which are given in Table TABREF10, along with their frequencies in the corpus. The relations cover asymmetrical prominence relations (satellite-nucleus) and symmetrical ones (multinuclear relations), with the restatement relation being realized in two versions, one for each type. The signaling annotation in the corpus follows the scheme developed by RST-SC, with some additions. Although RST-SC does not indicate token positions for signals, it provides a detailed taxonomy of signal types which is hierarchically structured into three levels: signal class, denoting the signal's degree of complexity; signal type, indicating the linguistic system to which it belongs; and specific signal, which gives the most fine-grained subtypes of signals within each type. It is assumed that any number of word tokens can be associated with any number of signals (including the same tokens participating in multiple signals), that signals can arise without corresponding to specific tokens (e.g. due to graphical layout of paragraphs), and that each relation can have an unbounded number of signals ($0-n$), each of which is characterized by all three levels. The signal class level is divided into single, combined (for complex signals), and unsure for unclear signals which cannot be identified conclusively, but are noted for further study. For each signal (regardless of its class), signal type and specific signal are identified.
According to RST-SC's taxonomy, signal type includes 9 types such as DMs, genre, graphical, lexical, morphological, numerical, reference, semantic, and syntactic. Each type then has specific subcategories. For instance, the signal type semantic has 7 specific signal subtypes: synonymy, antonymy, meronymy, repetition, indicative word pair, lexical chain, and general word. We will describe some of these in more depth below. In addition to the 9 signal types, RST-SC has 6 combined signal types such as reference+syntactic, semantic+syntactic, and graphical+syntactic etc., and 15 specific signals are identified for the combined signals. Although the rich signaling annotations in RST-SC offer an excellent overview of the relative prevalence of different signal types in the Wall Street Journal corpus, it is difficult to apply the original scheme to the study of individual signal words, since actual signal positions are not identified. While recovering these positions may be possible for some categories using the original guidelines, most signaling annotations (e.g. lexical chains, repetition) cannot be automatically paired with actual tokens, meaning that, in order to use the original RST-SC for our study, we would need to re-annotate it for signal token positions. As this effort is beyond the scope of our study, we will use the smaller data set with anchored signaling annotations from BIBREF27: This data is annotated with the same signal categories as RST-SC, but also includes exact token positions for each signal, including possibly no tokens for unanchorable signals such as some types of genre conventions or graphical layout which are not expressible in terms of specific words. In order to get a better sense of how the annotations work, we consider example SECREF7. . [5] Sociologists have explored the adverse consequences of discrimination; [6] psychologists have examined the mental processes that underpin conscious and unconscious biases; [7] neuroscientists have examined the neurobiological underpinnings of discrimination; [8] and evolutionary theorists have explored the various ways that in-group/out-group biases emerged across the history of our species. – joint [GUM_academic_discrimination] In this example, there is a joint relation between four spans in a fragment from an RST discourse tree. The first tokens in each span form a parallel construction and include semantically related items such as explored and examined (signal class `combined', type `semantic+syntactic', specific subtype `parallel syntactic construction + lexical chain'). The words corresponding to this signal in each span are highlighted in Figure FIGREF15, and are considered to signal each instance of the joint relation. Additionally, the joint relation is also signaled by a number of further signals which are highlighted in the figure as well, such as the semicolons between spans, which correspond to a type `graphical', subtype `semicolon' in RST-SC. The data model of the corpus records which tokens are associated with which categorized signals, and allows for multiple membership of the same token in several signal annotations. In terms of annotation reliability, BIBREF12 reported a weighted kappa of 0.71 for signal subtypes in RST-SC without regard to the span of words corresponding to a signal, while a study by BIBREF37 suggests that signal anchoring, i.e. 
associating RST-SC signal categories with specific tokens achieves a 90.9% perfect agreement score on which tokens constitute signals, or a Cohen's Kappa value of 0.77. As anchored signal positions will be of the greatest interest to our study, we will consider how signal token positions are distributed in the corpus next, and develop an anchoring taxonomy which we will refer back to for the remainder of this paper. <<</Anchored Signals in the GUM Corpus>>> <<<A Taxonomy of Anchored Signals>>> From a structural point of view, one of the most fundamental distinctions with regard to signal realization recognized in previous work is the classification of signaling tokens into satellite or nucleus-oriented positions, i.e. whether a signal for the relation appears within the modifier span or the span being modified BIBREF38. While some relation types exhibit a strong preference for signal position (e.g. using a discourse marker such as because in the satellite for cause, BIBREF39), others, such as concession, are more balanced (almost evenly split signals between satellite and nucleus in BIBREF38). In this study we would like to further refine the taxonomy of signal positions, breaking it down into several features. At the highest level, we have the distinction between anchorable and non-anchorable signals, the latter being signals which correspond to no token in the text (e.g. genre conventions, graphical layout). Below this level, we follow BIBREF38 in classifying signals as satellite or nucleus-oriented, based on whether they appear in the more prominent Elementary Discourse Unit (EDU) of a relation or its dependent. However, several further distinctions may be drawn: (1) whether the signal appears before or after the relation in text order (since we consider the relation to be instantiated as soon as its second argument in the text appears, `before' is interpreted as any token before the second head unit in the discourse tree begins, and `after' is any subsequent token); (2) whether the signal appears in the head unit of the satellite/nucleus, or in a dependent of that unit (this distinction only matters for satellite or nucleus subtrees that consist of more than one unit); and (3) whether the signal is anywhere within the structure dominated by the units participating in the relation, or completely outside of this structure. Table TABREF20 gives an overview of the taxonomy proposed here, which includes the possible combinations of these properties and the distribution of the corresponding anchorable signals found in the signal-annotated subset of the GUM Corpus from BIBREF27. Individual feature combinations can be referred to either as acronyms, e.g. ABIHS for `Anchorable, Before the second EDU of the relation, Inside the relation's subtree, Head unit of the Satellite', or using the group IDs near the bottom of the table (in this case the category numbered Roman I). We will refer back to these categories in our comparison of manually annotated and automatically predicted signals. To illustrate how the taxonomy works in practice, we can consider the example in Figure FIGREF23, which shows a signal whose associated tokens instantiate categories I and IV in a discourse tree – the words demographic variables appear both within a preparation satellite (unit [50], category I), which precedes and points to its nucleus [51–54], and within a satellite inside that block (unit [52], a dependent inside the nucleus block, category IV).
Based on the RST-SC annotation scheme, the signal class is Simple, with the type Semantic and specific sub-type Lexical chain. The numbers at the bottom of Table TABREF20 show the number of tokens signaling each relation at each position, as well as the number of relations which have signal tokens at the relevant positions. The hypothetical categories V and X, with signal tokens which are not within the subtree of satellite or nucleus descendants, are not attested in our data, as far as annotators were able to identify. <<</A Taxonomy of Anchored Signals>>> <<</Data>>> <<<Automatic Signal Extraction>>> <<<A Contextless Frequentist Approach>>> To motivate the need for a fine-grained and contextualized approach to describing discourse relation signals in our data, we begin by extracting some basic data-driven descriptions of our data along the lines presented in Section SECREF3. In order to constrain candidate words to just the most relevant ones for marking a specific signal, we first need a way to address a caveat of the frequentist approach: higher order relations which often connect entire paragraphs (notably background and elaboration) must be prevented from allowing most or even all words in the document to be considered as signaling them. A simple approach to achieving this is to assume `Strong Nuclearity', relying on Marcu's (BIBREF42) Compositionality Criterion for Discourse Trees (CCDT), which suggests that if a relation holds between two blocks of EDUs, then it also holds between their head EDUs. While this simplification may not be entirely accurate in all cases, Table TABREF20 suggests that it captures most signals, and allows us to reduce the space of candidate signal tokens to just the two head EDUs implicated in a relation. We will refer to signals within the head units of a relation as `endocentric' and signals outside this region as `exocentric'. Figure FIGREF25 illustrates this, where units [64] and [65] are the respective heads of two blocks of EDUs, and unit [65] in fact contains a plausible endocentric signal for the result relation, the discourse marker thus. More problematic caveats for the frequentist approach are the potential for over/underfitting and ambiguity. The issue of overfitting is especially thorny in small datasets, in which certain content words appear coincidentally in discourse segments with a certain function. Table TABREF27 shows the most distinctive lexical types for several discourse relations in GUM based on pure ratio of occurrence in head EDUs marked for those relations. On the left, types are chosen which have a maximal frequency in the relevant relationship compared with their overall frequency in the corpus. This quickly overfits the contents of the corpus, selecting irrelevant words such as holiest and Slate for the circumstance relation, or hypnotizing and currency for concession. The same lack of filtering can, however, yield some potentially relevant lexical items, such as causing for result or even highly specific content words such as ammonium, which are certainly not discourse markers, but whose appearance in a sequence is not accidental: the word is in this case typical for sequences in how-to guides, where use of ingredients in a recipe is described in a sequence. Even if these kinds of items may be undesirable candidates for signal words in general, it seems likely that some rare content words may function as signals in context, such as evaluative adjectives (e.g. exquisite) enabling readers to recognize an evaluation. 
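As a rough illustration of this kind of contextless, frequentist extraction, the sketch below (our own, not the exact procedure used to build Table TABREF27; function and argument names are hypothetical) ranks word types by the share of their corpus occurrences that fall inside head EDUs of a given relation, with a frequency threshold to suppress rare items:

from collections import Counter

def distinctive_types(head_edus, min_count=1, top_n=10):
    """head_edus: iterable of (relation_label, token_list) pairs for head EDUs.
    Returns, per relation, the types whose occurrences are most concentrated
    in head EDUs of that relation (a contextless association measure)."""
    total = Counter()
    by_rel = {}
    for rel, tokens in head_edus:
        total.update(tokens)
        by_rel.setdefault(rel, Counter()).update(tokens)
    ranking = {}
    for rel, counts in by_rel.items():
        scored = [(w, counts[w] / total[w]) for w in counts if total[w] >= min_count]
        ranking[rel] = sorted(scored, key=lambda x: x[1], reverse=True)[:top_n]
    return ranking

With min_count left at 1, the ranking overfits rare types in the way just described; raising the threshold yields the kind of filtered lists discussed next.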
If we are willing to give up on the latter kind of rare items, the overfitting problem can be alleviated somewhat by setting a frequency threshold for each potential signal lexeme, thereby suppressing rare items. The items on the right of the table are limited to types occurring more than 10 times. Since the most distinctive items on the left are all comparatively rare (and therefore exclusive to their relations), they do not overlap with the items on the right. Looking at the items on the right, several signals make intuitive sense, especially for relations such as solutionhood (used for question-answer pairs) or concession, which show the expected WH words and auxiliary did, or discourse markers such as though, respectively. At the same time, some high frequency items may be spurious, such as NATO for justify, which could perhaps be filtered out based on low dispersion across documents, but also stuff for cause, which probably could not be. Another problem with the lists on the right is that some expected strong signals, such as the word and for sequence, are absent from the table. This is not because and is not frequent in sequences, but rather because it is a ubiquitous word, and as a result, it is not very specific to the relation. However, if we look at actual examples of and inside and outside of sequences, it is easy to notice that the kind of and that does signal a relation in context is often clause-initial, as in SECREF24, and very different from the adnominal coordinating ands in SECREF24, which do not signal the relation: . [she was made a Dame by Elizabeth II for services to architecture,] [and in 2015 she became the first and only woman to be awarded the Royal Gold Medal]$_{\textsc {sequence}}$ . [Gordon visited England and Scotland in 1686.] [In 1687 and 1689 he took part in expeditions against the Tatars in the Crimea]$_{\textsc {sequence}}$ These examples suggest that a data-driven approach to signal detection needs some way of taking context into account. In particular, we would like to be able to compare instances of signals and quantify how strong the signal is in each case. In the next section, we will attempt to apply a neural model with contextualized word embeddings BIBREF44 to this problem, which will be capable of learning contextualized representations of words within the discourse graph. <<</A Contextless Frequentist Approach>>> <<<A Contextualized Neural Model>>> <<<Task and Model Architecture>>> Since we are interested in identifying unrestricted signaling devices, we deliberately avoid a supervised learning approach as used in automatic signal detection trained on resources such as PDTB. While recent work on PDTB connective detection (BIBREF26, BIBREF45) achieves good results (F-Scores of around 88-89 for English PDTB explicit connectives), the use of such supervised approaches would not tell us about new signaling devices, and especially about unrestricted lexical signals and other coherence devices not annotated in PDTB. Additionally, we would be restricted to the newspaper text types represented in the Wall Street Journal corpus, since no other large English corpus has been annotated for anchored signals. Instead, we will adopt a distantly supervised approach: we will task a model with supervised discourse relation classification on data that has not been annotated for signals, and infer the positions of signals in the text by analyzing the model's behavior.
A key assumption, which we will motivate below, is that signals can have different levels of signaling strength, corresponding to their relative importance in identifying a relation. We would like to assume that different signal strength is in fact relevant to human analysts' decision making in relation identification, though in practice we will be focusing on model estimates of strength, the usefulness of which will become apparent below. As a framework, we use the sentence classifier configuration of FLAIR BIBREF46 with a biLSTM encoder/classifier architecture fed by character- and word-level representations composed of a concatenation of fixed 300-dimensional GloVe embeddings BIBREF47, pre-trained contextualized FLAIR word embeddings, and pre-trained contextualized character embeddings from AllenNLP BIBREF48, with FLAIR's default hyperparameters. The model's architecture is shown in Figure FIGREF30. Contextualized embeddings BIBREF44 have the advantage of giving distinct representations to different instances of the same word based on the surrounding words, meaning that an adnominal and connecting two NPs can be distinguished from one connecting two verbs based on its vector space representation in the model. Using character embeddings, which give vector space representations to substrings within each word, means that the model can learn the importance of morphological forms, such as the English gerund's -ing suffix, even for out-of-vocabulary items not seen during training. Formally, the input to our system is formed of EDU pairs which are the head units within the respective blocks of discourse units that they belong to, which are in turn connected by an instance of a discourse relation. This means that every discourse relation in the corpus is expressed as exactly one EDU pair. Each EDU is encoded as a (possibly padded) sequence of $n$-dimensional vector representations of each word $x_1, \ldots , x_T$, with some added separators which are encoded in the same way and described below. The bidirectional LSTM composes representations and context for the input, and a fully connected softmax layer gives the probability of each relation:

$p(rel_i \mid x_1, \ldots , x_T) = \operatorname{softmax}\big(W\,[h_t^{f};\, h_t^{b}] + b\big)_i , \qquad h_t^{\delta } = h(x_t, c_t^{\delta })$

where the probability of each relation $rel_i$ is derived from the composed output of the function $h$ across time steps $0 \ldots t$, $\delta \in \lbrace b,f\rbrace $ is the direction of the respective LSTMs, $c_t^\delta $ is the recurrent context in each direction and $\theta = \lbrace W,b\rbrace $ gives the model weights and bias parameters (see BIBREF46 for details). Note that although the output of the system is ostensibly a probability distribution over relation types, we will not be directly interested in the most probable relation as outputted by the classifier, but rather in analyzing the model's behavior with respect to the input word representations as potential signals of each relation. In order to capitalize on the system's natural language modeling knowledge, EDU satellite-nucleus pairs are presented to the model in text order (i.e. either the nucleus or the satellite may come first). However, the model is given special separator symbols indicating the positions of the satellite and nucleus, which are essential for deciding the relation type (e.g. cause vs. result, which may have similar cue words but lead to opposite labels), and a separator symbol indicating the transition between satellite and nucleus. This setup is illustrated in SECREF29. . $<$s$>$ Sometimes this information is available , $<$sep$>$ but usually not .
$<$n$>$ Label: concession In this example, the satellite precedes the nucleus and is therefore presented first. The model is made aware of the fact that the segment on the left is the satellite thanks to the tag $<$s$>$. Since the LSTM is bi-directional, it is aware of positions being within the nucleus or satellite, as well as their proximity to the separator, at every time step. We reserve the signal-annotated subset of 12 documents from GUM for testing, which contains 1,185 head EDU pairs (each representing one discourse relation), and a random selection of 12 further documents from the remaining RST-annotated GUM data (1,078 pairs) is taken as development data, leaving 102 documents (5,828 pairs) for training. The same EDUs appear in multiple pairs if a unit has multiple children with distinct relations, but no instances of EDUs are shared across partitions, since the splits are based on document boundaries. We note again that for the training and development data, we have no signaling annotations of any kind; this is possible since the network does not actually use the human signaling annotations we will be evaluating against: its distant supervision consists solely of the RST relation labels. <<</Task and Model Architecture>>> <<<Relation Classification Performance>>> Although only used as an auxiliary training task, we can look at the model's performance on predicting discourse relations, which is given in Table TABREF34. Unsurprisingly, the model performs best on the most frequent relations in the corpus, such as elaboration or joint, but also on rarer ones which tend to be signaled explicitly, such as condition (often signaled explicitly by if), solutionhood (used for question-answer pairs signaled by question marks and WH words), or concession (DMs such as although). However, the model also performs reasonably well for some trickier (i.e. less often introduced by unambiguous DMs) but frequent relations, such as preparation, circumstance, and sequence. Rare relations with complex contextual environments, such as result, justify or antithesis, unsurprisingly do not perform well, with the latter two showing an F-score of 0. The relation restatement, which also shows no correct classifications, reveals a weakness of the model: while it is capable of recognizing signals in context, it cannot learn that repetition in and of itself, regardless of specific areas in vector space, is important (see Section SECREF6 for more discussion of these and other classification weaknesses). Although this is not the actual task targeted by the current paper, we may note that the overall performance of the model, with an F-Score of 44.37, is not bad, though below the performance of state-of-the-art full discourse parsers (see BIBREF49) – this is to be expected, since the model is not aware of the entire RST tree and looks only at EDU pairs out of context, and given that standard scores on RST-DT come from a larger and more homogeneous corpus, with fewer relations and some easy cases that are absent from GUM. Given the model's performance on relation classification, which is far from perfect, one might wonder whether signal predictions made by our analysis should be trusted. This question can be answered in two ways: first, quantitatively, we will see in Section SECREF5 that model signal predictions overlap considerably with human judgments, even when the predicted relation is incorrect.
Intuitively, for similar relations, such as concession or contrast, both of which are adversative, the model may notice a relevant cue (e.g. `but', or contrasting lexical items) despite choosing the wrong one. Second, as we will see below, we will be analyzing the model's behavior with respect to the probability of the correct relation, regardless of the label it ultimately chooses, meaning that the importance of predicting the correct label exactly will be diminished further. <<</Relation Classification Performance>>> <<<Signaling Metric>>> The actual performance we are interested in evaluating is the model's ability to extract signals for given discourse relations, rather than its accuracy in predicting the relations. To do so, we must extract anchored signal predictions from the model, which is non-trivial. While earlier work on interpreting neural models has focused on token-wise softmax probability BIBREF50 or attention weights BIBREF51, using contextualized embeddings complicates the evaluation: since word representations are adjusted to reflect neighboring words, the model may assign higher importance to the word standing next to what a human annotator may interpret as a signal. Example SECREF36 illustrates the problem:

. To provide information on the analytical sample as a whole, $\xrightarrow[\text{pred:preparation}]{\text{gold:purpose}}$ two additional demographic variables are included. (darkest shading: provide)

Each word in SECREF36 is shaded based on the softmax probability assigned to the correct relation of the satellite, i.e. how `convincing' the model found the word in terms of local probability. In addition, the top-scoring word in each sentence is rendered in boldface for emphasis. The gold label for the relation is placed above the arrow, which indicates the direction of the relation (satellite to nucleus), and the model's predicted label is shown under the arrow. Intuitively, the strongest signal of the purpose relation in SECREF36 is the initial infinitive marker To – however, the model ranks the adjacent provide higher and almost ignores To. We suspect that the reason for this, and many similar examples in the model evaluated based on relation probabilities, is that contextual embeddings allow for a special representation of the word provide next to To, making it difficult to tease apart the locus of the most meaningful signal. To overcome this complication, we use the logic of permutation importance, treating the neural model as a black box and manipulating the input to discover relevant features in the data (cf. BIBREF52). We reason that this type of evaluation is more robust than, for example, examining model internal attention weights because such weights are not designed or trained with a reward ensuring they are informative – they are simply trained on the same classification error loss as the rest of the model. Instead, we can withhold potentially relevant information from the model directly: After training is complete, we feed the test data to the model in two forms – as-is, and with each word masked, as shown in SECREF36.

Original: $<$s$>$ To provide information ... $<$sep$>$ ... $<$n$>$
Masked1: $<$s$>$ $<$X$>$ provide information ... $<$sep$>$ ... $<$n$>$
Masked2: $<$s$>$ To $<$X$>$ information ... $<$sep$>$ ... $<$n$>$
Masked3: $<$s$>$ To provide $<$X$>$ ... $<$sep$>$ ... $<$n$>$
Label: purpose

We reason that, if a token is important for predicting the correct label, masking it will degrade the model's classification accuracy, or at least reduce its reported classification certainty. In SECREF36, it seems reasonable to assume that masking the word `To' has a greater impact on predicting the label purpose than masking the word `provide', and even less so, the following noun `information'. We therefore use reduction in softmax probability of the correct relation as our signaling strength metric for the model. We call this metric ${\Delta }_s$ (for delta-softmax), which can be written as:

${\Delta }_s(t_i) = p(rel \mid X_{mask=\phi }) - p(rel \mid X_{mask=i})$

where $rel$ is the true relation of the EDU pair, $t_i$ represents the token at index $i$ of $N$ tokens, and $X_{mask=i}$ represents the input sequence with the masked position $i$ (for $i \in 1 \ldots N$ ignoring separators, or $\phi $, the empty set). To visualize the model's predictions, we compare ${\Delta }_s$ for a particular token to two numbers: the maximum ${\Delta }_s$ achieved by any token in the current pair (a measure of relative importance for the current classification) and the maximum ${\Delta }_s$ achieved by any token in the current document (a measure of how strongly the current relation is signaled compared to other relations in the text). We then shade each token 50% based on the first number and 50% based on the second. As a result, the most valid cues in an EDU pair are darker than their neighbors, but EDU pairs with no good cues are overall very light, whereas pairs with many good signals are darker. Some examples of this visualization are given in SECREF36-SECREF36 (human-annotated endocentric signal tokens are marked by double underlines).

. To provide information on the analytical sample as a whole, $\xrightarrow[\text{pred:preparation}]{\text{gold:purpose}}$ two additional demographic variables are included. (darkest shading: To)

. Telling good jokes is an art that comes naturally to some people, $\xleftarrow[\text{pred:contrast}]{\text{gold:contrast}}$ but for others it takes practice and hard work. (darkest shading: but)
. It is possible that these two children understood the task and really did believe that the puppet did not produce any poor descriptions, and in this regard, are not yet adult-like in their SI interpretations. $\xleftarrow[\text{pred:evaluation}]{\text{gold:evaluation}}$ This is unlikely (darkest shading: unlikely)

The highlighting in SECREF36 illustrates the benefits of the masking-based evaluation compared to SECREF36: the token To is now clearly the strongest signal, and the verb is taken to be less important, followed by the even less important object of the verb. This is because removing the initial To hinders classification much more than the removal of the verb or noun. We note also that although the model in fact misclassified this example as preparation, we can still use masking importance to identify To, since the score queried from the model corresponds to a relative decrease in the probability of the correct relation, purpose, even if this was not the highest scoring relation overall. In SECREF36 we see the model's ability to correctly predict contrast based on the DM but. Note that despite a rather long sentence, the model does not need any other word nearly as much for the classification. Although the model is not trained explicitly to detect discourse markers, the DM can be recognized due to the fact that masking it leads to a drop of 66% softmax probability (${\Delta }_s$=0.66) of this pair representing the contrast relation. We can also note that a somewhat lower scoring content word is also marked: hard (${\Delta }_s$=0.18). In our gold signaling annotations, this word was marked together with comes naturally as a signal, due to the contrast between the two concepts (additionally, some people is flagged as a signal along with others). The fact that the model finds hard helpful, but does not need the contextual near antonym naturally, suggests that it is merely learning that words in the semantic space near hard may indicate contrast, and not learning about the antonymous relationship – otherwise we would expect to see `naturally' have a stronger score (see also the discussion in Section SECREF6). Finally, SECREF36 shows that, much like in the case of hard, the model is not biased towards traditional DMs, confirming that it is capable of learning about content words, or neighborhoods of content words in vector space. In a long EDU pair of 41 words, the model relies almost exclusively on the word unlikely (${\Delta }_s$=0.36) to correctly label the relation as evaluation. By contrast, the anaphoric demonstrative `This' flagged by the human annotator, which is a more common function word, is disregarded, perhaps because it can appear with several other relations, and is not particularly exclusive to evaluation.
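The masking procedure behind ${\Delta }_s$ can be summarized in a few lines of code. The sketch below is our own illustration rather than the system's actual implementation: predict_proba stands for any trained classifier returning a softmax distribution over relation labels for a token sequence (an assumed interface), and the ordering of the separators when the nucleus comes first is likewise an assumption.

MASK = "<X>"
SEPARATORS = {"<s>", "<sep>", "<n>"}

def format_pair(sat_tokens, nuc_tokens, sat_first=True):
    # Present the pair in text order, marking satellite (<s>), boundary (<sep>) and nucleus (<n>).
    if sat_first:
        return ["<s>"] + sat_tokens + ["<sep>"] + nuc_tokens + ["<n>"]
    return ["<n>"] + nuc_tokens + ["<sep>"] + sat_tokens + ["<s>"]

def delta_softmax(tokens, gold_rel, predict_proba):
    """Drop in probability of the gold relation when each token is masked in turn.
    predict_proba(tokens) -> dict mapping relation labels to probabilities."""
    base = predict_proba(tokens)[gold_rel]      # X_mask = empty set (no masking)
    scores = []
    for i, tok in enumerate(tokens):
        if tok in SEPARATORS:
            scores.append(None)                 # separators are never masked
            continue
        masked = tokens[:i] + [MASK] + tokens[i + 1:]
        scores.append(base - predict_proba(masked)[gold_rel])
    return scores

Because the score is computed against the gold relation, a token can be credited as a signal even when the classifier's top prediction is wrong, and strongly negative scores correspond to the distractors discussed further below.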
These results suggest that the model may be capable of recognizing signals through distant supervision, allowing it to validate human annotations, to potentially point out signals that may be missed by annotators, and most importantly, to quantify signaling strength on a sliding scale. At the same time, we need a way to evaluate the model's quality and assess the kinds of errors it makes, as well as what we can learn from them. We therefore move on to evaluating the model and its errors next. <<</Signaling Metric>>> <<</A Contextualized Neural Model>>> <<</Automatic Signal Extraction>>> <<<Evaluation and Error Analysis>>> <<<Evaluation Metric>>> To evaluate the neural model, we would like to know how well ${\Delta }_s$ corresponds to annotators' gold standard labels. This leads to two kinds of problems: the first is that the model is distantly supervised, and therefore does not know about signal types, subtypes, or any aspect of signaling annotation and its relational structure. The second problem is that signaling annotations are categorical, and do not correspond to the ratio-scaled predictions provided by ${\Delta }_s$ (this is in fact one of the motivations for desiring a model-based estimate of signaling strength). The first issue means that we can only examine the model's ability to locate signals – not to classify them. Although there may be some conceivable ways of analyzing model output to identify classes such as DMs (which are highly lexicalized, rather than representing broad regions of vector space, as words such as unlikely might), or more contextual relational signals, such as pronouns, this line of investigation is beyond the scope of the present paper. A naive solution to the second problem might be to identify a cutoff point, e.g. deciding that all and only words scoring ${\Delta }_s>$0.15 are predicted to be signals. The problem with the latter approach is that sentences can be very different in many ways, and specifically in both length and in levels of ambiguity. Sentences with multiple, mutually redundant cues, may produce lower ${\Delta }_s$ scores compared to shorter sentences with a subset of the same cues. Conversely, in very short sentences with low signal strength, the model may reasonably be expected to degrade very badly with the deletion of almost any word, as the context becomes increasingly incomprehensible. For these reasons, we choose to adopt an evaluation metric from the paradigm of information retrieval, and focus on recall@k (recall at rank k, for $k=1,2,3$...). The idea is to poll the model for each sentence in which some signals have been identified, and see whether the model is able to find them if we let it guess using the word with the maximal ${\Delta }_s$ score (recall@1), regardless of how high that score is, or alternatively relax the evaluation criteria and see whether the human annotator's signal tokens appear at rank 2 or 3. Figure FIGREF40 shows numbers for recall@k for the top 3 ranks outputted by the model, next to random guess baselines. The left, middle and right panels in Figure FIGREF40 correspond to measurements when all signals are included, only cases contained entirely in the head EDUs shown to the model, and only DMs, respectively. The scenario on the left is rather unreasonable and is included only for completeness: here the model is also penalized for not detecting signals such as lexical chains, part of which is outside the units that the model is being shown. An example of such a case can be seen in Figure FIGREF41. 
The phrase Respondents in unit [23] signals the relation elaboration, since it is coreferential with a previous mention of the respondents in [21]. However, because the model is only given heads of EDU blocks to classify, it does not have access to the first occurrence of respondents while predicting the elaboration relation – the first half of the signal token set is situated in a child of the nucleus EDU before the relation, i.e. it belongs to group IV in the taxonomy in Table TABREF20. Realistically, our model can only be expected to learn about signals from `directly participating' EDUs, i.e. groups I, II, VI and VII, the `endocentric' signal groups from Section SECREF16. Although most signals belong to endocentric categories (71.62% of signaled relations belong to these groups, cf. Table TABREF20), exocentric cases form a substantial portion of signals which we have little hope of capturing with the architecture used here. As a result, recall metrics in the `all signals' scenario are closest to the random baselines, though the signals detected in other instances still place the model well above the baseline. A more reasonable evaluation is the one in the middle panel of Figure FIGREF40, which includes only endocentric signals as defined in the taxonomy. EDUs with no endocentric signals are completely disregarded in this scenario, which substantially reduces the number of tokens considered to be signals, since, while many tokens are part of some meaningful lexical chain in the document, requiring signals to be contained only in the pair of head units eliminates a wide range of candidates. Although the random baseline is actually very slightly higher (perhaps because eliminated EDUs were often longer ones, sharing small amounts of material with larger parts of the text, and therefore prone to penalizing the baseline; many words mean more chances for a random guess to be wrong), model accuracy is substantially better in this scenario, reaching a 40% chance of hitting a signal with only one guess, exceeding 53% with two guesses, and capping at 64% for recall@3, over 20 points above baseline. Finally, the right panel in the figure shows recall when only DMs are considered. In this scenario, a random guess fares very poorly, since most words are not DMs. The model, by contrast, achieves the highest results in all metrics, since DMs have the highest cue validity for relation classification, and the model attends to them most strongly. With just one guess, recall is over 56%, and goes as high as 67% for recall@3. The baseline only goes as high as 16% for three guesses. <<</Evaluation Metric>>> <<<Qualitative Analysis>>> Looking at the model's performance qualitatively, it is clear that it can detect not only DMs, but also morphological cues (e.g. gerunds as markers of elaboration, as in SECREF43), semantic classes and sentiment, such as positive and negative evaluatory terms in SECREF43, as well as multiple signals within the same EDU, as in SECREF43. In fact, only about 8.3% of the tokens correctly identified by the model in Table TABREF45 below are of the DM type, whereas about 7.2% of all tokens flagged by human annotators were DMs, meaning that the model frequently matches non-DM items to discourse relation signals (see Performance on Signal Types below). 
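Before turning to further qualitative examples, the recall@k evaluation described in the previous subsection can be summarized as follows; this is our own reconstruction with hypothetical variable names, taking for each EDU pair the per-token ${\Delta }_s$ scores and the set of token positions annotated as signals.

def recall_at_k(pairs, k=3):
    """pairs: list of (scores, gold_indices), where scores is the per-token
    delta-softmax list for one EDU pair (None for separators) and gold_indices
    is the set of token positions annotated as signals in that pair."""
    hits = 0
    evaluated = 0
    for scores, gold in pairs:
        if not gold:
            continue                      # pairs without gold signals are skipped
        ranked = sorted((i for i, s in enumerate(scores) if s is not None),
                        key=lambda i: scores[i], reverse=True)
        evaluated += 1
        if any(i in gold for i in ranked[:k]):
            hits += 1
    return hits / evaluated if evaluated else 0.0

Restricting gold_indices to endocentric signal tokens, or to DM tokens only, corresponds to the middle and right panels of Figure FIGREF40, respectively.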
It should also be noted that signals can be recognized even when the model misclassifies relations, since ${\Delta }_s$ does not rely on correct classification: it merely quantifies the contribution of a word in context toward the correct label's score. If we examine the influence of each word on the score of the correct relation, that impact should and does still correlate with human judgments, even when the correct relation is only the system's second or third best class.

. For the present analysis, these responses were recoded into nine mutually exclusive categories $\xleftarrow[\text{pred:elaboration}]{\text{gold:result}}$ capturing the following options: (darkest shading: capturing)

. Professor Eastman said he is alarmed by what they found. $\xrightarrow[\text{pred:preparation}]{\text{gold:evaluation}}$ "Pregnant women in Australia are getting about half as much as what they require on a daily basis. (darkest shading: alarmed)

. Even so, estimates of the prevalence of perceived discrimination remains rare $\xleftarrow[\text{pred:evidence}]{\text{gold:concession}}$ At least one prior study by Kessler and colleagues [15], however, using measures of perceived discrimination in a large American sample, reported that approximately 33% of respondents reported some form of discrimination (darkest shading: least)

Unsurprisingly, the model sometimes makes sporadic errors in signal detection for which good explanations are hard to find, especially when its predicted relation is incorrect, as in SECREF43. Here the evaluative adjective remarkable is missed in favor of neighboring words such as agreed and a subject pronoun, which are not indicative of the evaluation relation in this context but are part of several cohorts of high scoring words.
However, the most interesting and interpretable errors arise when ${\Delta }_s$ scores are high compared to an entire document, and not just among words in one EDU pair, in which most or even all words may be relatively weak signals. As an example of such a false positive with high confidence, we can consider SECREF43. In this example, the model correctly assigns the highest score to the DM so marking a purpose relation. However, it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the benefactee of some action, which contributes to the purpose reading and helps to disambiguate so, despite not being considered a signal by annotators.

. The agreement was that Gorbachev agreed to a quite remarkable concession: $\xrightarrow[\text{pred:preparation}]{\text{gold:evaluation}}$ he agreed to let a united Germany join the NATO military alliance. (darkest shading: he)

. The opening of the joke — or setup — should have a basis in the real world $\xleftarrow[\text{pred:purpose}]{\text{gold:purpose}}$ so your audience can relate to it, (darkest shading: so)

In other cases, the model points out plausible signals which were passed over by an annotator, and may be considered errors in the gold standard. For example, the model easily notices that question marks indicate the solutionhood relation, even where these were skipped by annotators in favor of marking WH words instead:

. Which previous Virginia Governor(s) do you most admire and why? $\xrightarrow[\text{pred:solutionhood}]{\text{gold:solutionhood}}$ Thomas Jefferson. (darkest shading: ?)

From the model's perspective, the question mark, which scores ${\Delta }_s$=0.79, is the single most important signal, and virtually sufficient for classifying the relation correctly, though it was left out of the gold annotations. The WH word Which and the sentence-final why, by contrast, were noticed by annotators but are not as unambiguous (the former could be a determiner, and the latter in sentence-final position could be part of an embedded clause). In the presence of the question mark, their individual removal has much less impact on the classification decision. Although the model's behavior is sensible and can reveal annotation errors, it also suggests that ${\Delta }_s$ will be blind to auxiliary signals in the presence of very strong, independently sufficient cues.
Using the difference in likelihood of correct relation prediction as a metric also raises the possibility of an opposite concept to signals, which we will refer to as distractors. Since ${\Delta }_s$ is a signed measure of difference, it is in fact possible to obtain negative values whenever the removal or masking of a word results in an improvement in the model's ability to predict the relation. In such cases, and especially when the negative value is of a large magnitude, it seems like a reasonable interpretation to say that a word functions as a sort of anti-signal, preventing or complicating the recognition of what might otherwise be a more clear-cut case. Examples SECREF43–SECREF43 show some instances of distractors identified by the masking procedure (distractors with ${\Delta }_s < -0.2$ are underlined).

. How do they treat those not like themselves? $\xrightarrow[\text{pred:solutionhood}]{\text{gold:preparation}}$ then they're either over-zealous, ignorant of other people or what to avoid those that contradict their fantasy land that caters to them and them only. (darkest shading: then)

. God, I do n't know! $\xrightarrow[\text{pred:preparation}]{\text{gold:preparation}}$ but nobody will go to fight for noses any more. (darkest shading: !)

In SECREF43, a rhetorical question trips up the classifier, which predicts the question-answer relation solutionhood instead of preparation. Here the initial WH word How and the subsequent auxiliary do-support both distract (with ${\Delta }_s$=-0.23 and -0.25) from the preparation relation, which is however being signaled positively by the DM then in the nucleus unit. Later on, the adverb only is also disruptive (${\Delta }_s$=-0.31), perhaps due to a better association with adversative relations, such as contrast. In SECREF43, a preparatory “God, I don't know!” is followed up with a nucleus starting with but, which typically marks a concession or other adversative relation. In fact, the DM but is related to a concessive relation with another EDU (not shown), which the model is not aware of while making the classification for the preparation. Although this example reveals a weakness of the model, namely its inability to consider broader context, it also reveals the difficulty of expecting DMs to fall in line with a strong nuclearity assumption: since units serve multiple functions as satellites and nuclei, signals which aid the recognition of one relation may hinder the recognition of another.
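Given per-token ${\Delta }_s$ scores, flagging candidate distractors amounts to thresholding on the negative side. The snippet below is purely illustrative (the cutoffs are not used for any evaluation in this paper, which relies on rank-based recall@k instead): 0.15 echoes the naive positive threshold mentioned earlier, and -0.2 the value used for underlining distractors in the examples above.

def signals_and_distractors(tokens, scores, pos_cutoff=0.15, neg_cutoff=-0.2):
    # Split tokens into candidate signals (large positive delta-softmax)
    # and distractors (strongly negative delta-softmax); cutoffs are illustrative only.
    signals, distractors = [], []
    for tok, s in zip(tokens, scores):
        if s is None:
            continue                      # separator positions
        if s >= pos_cutoff:
            signals.append((tok, s))
        elif s <= neg_cutoff:
            distractors.append((tok, s))
    return signals, distractors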
<<</Qualitative Analysis>>> <<<Performance on Signal Types>>> To better understand the kinds of signals which the model captures better or worse, Table TABREF45 gives a breakdown of performance by signal type and specific signal categories, for categories attested over 20 times (note that the categories are human labels assigned to the corresponding positions – the system does not predict signal types). To evaluate performance for all types we cannot use recall@1–3, since some sentences contain more than 3 signal tokens, which would lead to recall errors even if the top 3 ranks are correctly identified signals. The scores in the table therefore express how many of the signal tokens belonging to each subtype in the gold annotations are recognized if we allow the system to make as many guesses as there are signal tokens in each EDU pair, plus a tolerance of a maximum of 2 additional tokens (similarly to recall@3). We also note that a single token may be associated with multiple signal types, in which case its identification or omission is counted separately for each type. Three of the top four categories which the model performs best for are, perhaps unsurprisingly, the most lexical ones: alternate expression captures non-DM phrases such as I mean (for elaboration), or the problem is (for concession), and indicative word includes lexical items such as imperative see (consistently marking evidence in references within academic articles) or evaluative adjectives such as interesting for evaluation. The good performance of the category colon captures the model's recognition of colons as important punctuation, primarily predicting preparation. The only case of a `relational' category, requiring attention to two separate positions in the input, which also fares well is synonymy, though this is often based on flagging only one of two items annotated as synonymous, and is based on rather few examples. We can find only one example, SECREF44, where both sides of a pair of similar words are actually noticed, both of which belong to the same stem (decline/declining):

. The report says the decline in iodine intake appears to be due to changes in the dairy industry, where chlorine-containing sanitisers have replaced iodine-containing sanitisers. $\xleftarrow[\text{pred:background}]{\text{gold:justify}}$ Iodine released from these chemicals into milk has been the major source of dietary iodine in Australia for at least four decades, but is now declining. (darkest shading: declining)
We note that our evaluation is actually rather harsh towards the model, since in multiword expressions, often only one central word is flagged by ${\Delta }_s$ (e.g. problem in “the problem is”), while the model is penalized in Table TABREF45 for each token that is not recognized (i.e. the and is, which were all flagged by a human annotator as signals in the data). Interestingly, the model fares rather well in identifying morphological tense cues, even though these are marked by both inflected lexical verbs and semantically poor auxiliaries (e.g. past perfect auxiliary had marking background); but modality cues (especially can or could for evaluation) are less successfully identified, suggesting they are either more ambiguous, or mainly relevant in the presence of evaluative content words which out-score them. Other relational categories from the middle of the table which ostensibly require matching pairs of words, such as repetition, meronymy, or personal reference (coreference), are mainly captured by the model when a single item is a sufficiently powerful cue, often ignoring the other half of the signal, as shown in SECREF44.

. On a new website, "The Internet Explorer 6 Countdown", Microsoft has launched an aggressive campaign to persuade users to stop using IE6 $\xleftarrow[\text{pred:elaboration}]{\text{gold:elaboration}}$ Its goal is to decrease IE6 users to less than one percent. (darkest shading: Its)

Here the model has learned that an initial possessive pronoun, perhaps in the context of a subject NP in a copula sentence (note the shading of the following is), is an indicator of an elaboration relation, even though there is no indication that the model has noticed which word is the antecedent. Similarly for the count category, the model only learns to notice the possible importance of some numbers, but is not actually aware of whether they are identical (e.g. for restatement) or different (e.g. in contrast). Finally, some categories are actually recognized fairly reliably, but are penalized by the same partial substring issue identified above: Date expressions are consistently flagged as indicators of circumstance, but often a single word, such as a weekday in SECREF44, is dominant, while the model is penalized for not scoring other words as highly (including commas within dates, which are marked as part of the signal token span in the gold standard, but whose removal does not degrade prediction accuracy). In this case it seems fair to say that the model has successfully recognized the date signal of `Wednesday April 13', yet it loses points for missing two instances of `,', and the `2011', which is no longer necessary for recognizing that this is a date.
. NASA celebrates 30th anniversary of first shuttle launch; $\xleftarrow[\text{pred:circumstance}]{\text{gold:circumstance}}$ Wednesday, April 13, 2011 (darkest shading: Wednesday)

<<</Performance on Signal Types>>> <<</Evaluation and Error Analysis>>> <<<Discussion>>> This paper has used a corpus annotated for discourse relation signals within the framework of the RST Signalling Corpus (BIBREF12) and extended with anchored signal annotations (BIBREF27) to develop a taxonomy of unrestricted and hierarchically aware discourse signal positions, as well as a data-driven neural network model to explore distantly supervised signal word extraction. The results shed light on the distribution of signal categories from the RST-SC taxonomy in terms of associated word forms, and show the promise of neural models with contextual embeddings for context-dependent and gradient discourse signal detection in individual texts. The metric developed for the evaluation, $\Delta _s$, allows us to assess the relative importance of signal words for automatic relation classification, and reveal observations for further study, as well as shortcomings which point to the need to develop richer feature representations and system architectures in future work. The model presented in the previous sections is clearly incomplete in both its classification accuracy and its ability to recognize the same signals that humans do. However, given the fact that it is trained entirely without access to discourse signal annotations and is unaware of any of the guidelines used to create the gold standard that it is evaluated on, its performance may be considered surprisingly good. As an approach to extracting discourse signals in a data-driven way, similar to frequentist methods or association measures used in previous work, we suggest that this model forms a more fine-grained tool, capable of taking context into consideration and delivering scores for each instance of a signal candidate, rather than resulting in a table of undifferentiated signal word types. Additionally, although we consider human signal annotations to be the gold standard in identifying the presence of relevant cues, the ${\Delta }_s$ metric gives new insights into signaling which cannot be approached using manual signaling annotations. Firstly, the quantitative nature of the metric allows us to rank signaling strength in a way that humans have not to date been able to apply: using ${\Delta }_s$, we can say which instances of which signals are evaluated as stronger, by how much, and which words within a multi-word signal instance are the most important (e.g. weekdays in dates are important, the commas are not). Secondly, the potential for negative values of the metric opens the door to the study of negative signals, or `distractors', which we have only touched upon briefly in this paper. And finally, we consider the availability of multiple measurements for a single DM or other discourse signal to be a potentially very interesting window into the relative ambiguity of different signaling devices (cf. BIBREF16) and for research on the contexts in which such ambiguity results. To see how ambiguity is reflected in multiple measurements of ${\Delta }_s$, we can consider Figure FIGREF47.
The figure shows boxplots for multiple instances of the same signal tokens. We can see that words like and are usually not strong signals, with the entire interquartile range scoring less than 0.02, i.e. aiding relation classification by less than 2%, with some values dipping into the negative region (i.e. cases functioning as distractors). However, some outliers are also present, reaching almost as high as 0.25 – these are likely to be coordinating predicates, which may signal relations such as sequence or joint. A word such as but is more important overall, with the box far above and, but still covering a wide range of values: these can correspond to more or less ambiguous cases of but, but also to cases in which the word is more or less irreplaceable as a signal. In the presence of multiple signals for the same relation, the presence of but should be less important. We can also see that but can be a distractor with negative values, as we saw in example SECREF43 above. As far as we are aware, this is the first empirical corpus-based evidence giving a quantitative confirmation to the intuition that `but' in context is significantly less ambiguous as a discourse marker than `and'; the overlap in their bar plots indicate that they can be similarly ambiguous or even distracting in some cases, but the difference in interquartile ranges makes it clear that these are exceptions. For less ambiguous DMs, such as if, we can also see a contrast between lower and upper case instances: upper case If is almost always a marker of condition, but the lower case if is sometimes part of an embedded object clause, which is not segmented in the corpus and does not mark a conditional relation (e.g. “they wanted to see if...”). For the word to, the figure suggests a strongly bimodal distribution, with a core population of (primarily prepositional) discourse-irrelevant to, and a substantial number of outliers above a large gap, representing to in infinitival purpose clauses (though not all to infinitives mark such clauses, as in adnominal “a chance to go”, which the model is usually able to distinguish in context). In other words, our model can not only disambiguate ambiguous strings into grammatical categories, but also rank members of the same category by importance in context, as evidenced by its ability to correctly classify high frequency items like `to' or `and' as true positives. A frequentist approach would not only lack this ability – it would miss such items altogether, due to its overall high string frequency and low specificity. Beyond what the results can tell us about discourse signals in this particular corpus, the fact that the neural model is sensitive to mutual redundancy of signals raises interesting theoretical questions about what human annotators are doing when they characterize multiple features of a discourse unit as signals. If it is already evident from the presence of a conventional DM that some relation applies, are other, less explicit signals which might be relied on in the absence of the DM, equally `there'? Do we need a concept of primary and auxiliary signals, or graded signaling strength, in the way that a metric such as ${\Delta }_s$ suggests? Another open question relates to the postulation of distractors as an opposite concept to discourse relation signals. 
While we have not tested this so far, it is interesting to ask to what extent human analysts are aware of distractors, whether we could form annotation guidelines to recognize them, and how humans weigh the value of signals and potential distractors in extrapolating intended discourse relations. It seems likely that distractors affecting humans may be found in cases of misunderstanding or ambiguity of discourse relations (see also BIBREF25). Finally, the error analysis for signal detection complements the otherwise opaque relation classification results in Table TABREF34 in showing some of the missing sources of information that our model would need in order to work better. We have seen that relational information, such as identifying not just the presence of a pronoun but also its antecedent, or both sides of lexical semantic relations such as synonymy, meronymy or antonymy, as well as comparing count information, are still unavailable to the classifier – if they were being used, then ${\Delta }_s$ would reflect the effects of their removal, but this is largely not the case. This suggests that, in the absence of vastly larger discourse annotated corpora, discourse relation recognition may require the construction of either features, architectures, or both, which can harness abstract relational information of this nature beyond the memorization of specific pairs of words (or regions of vector space with similar words) that are already attested in the limited training data. In this vein, BIBREF54 conducted a series of experiments on automatic sense prediction for four top-level implicit discourse relations within the PDTB framework, which also suggested benefits for using linguistically-informed features such as verb information, polarity tags, context, lexical items (e.g. first and last words of the arguments; first three words in the sentence) etc. The model architecture and input data are also in need of improvements, as the current architecture can only be expected to identify endocentric signals. The substantial amount of exocentric signaling cases is in itself an interesting finding, as it suggests that relation classification from head EDU pairs may ultimately have a natural ceiling that is considerably below what could be inferred from looking at larger contexts. We predict that as we add more features to the model and improve its architecture in ways that allow it to recognize the kinds of signals that humans do, classification accuracy will increase; and conversely, as classification accuracy rises, measurements based on ${\Delta }_s$ will overlap increasingly with human annotations of anchored signals. In sum, we believe that there is room for much research on what relation classification models should look like, and how they can represent the kinds of information found in non-trivial signals. The results of this line of work can therefore benefit NLP systems targeting discourse relations by suggesting locations within the text which systems should attend to in one way or another. Moreover, we think that using distant-supervised techniques for learning discourse relations (e.g. BIBREF55) is promising in the development of discourse models using the proposed dataset. We hope to see further analyses benefit from this work and the application of metrics such as ${\Delta }_s$ to other datasets, within more complex models, and using additional features to capture such information. 
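To make it easier to apply a ${\Delta }_s$-style measurement to other datasets, the following is a minimal sketch of a removal-based importance score in that spirit. It assumes a trained relation classifier exposed as a hypothetical `score_relation(tokens, relation)` function that returns the softmax score of a relation for a unit pair, and a `mask_token` placeholder; the exact formulation used in the paper may differ in detail.

```python
from typing import Callable, List

def delta_s_scores(tokens: List[str],
                   gold_relation: str,
                   score_relation: Callable[[List[str], str], float],
                   mask_token: str = "<mask>") -> List[float]:
    """For each token, mask it and record how much the classifier's score
    for the gold relation drops. Large positive values behave like signals;
    negative values behave like 'distractors'."""
    base = score_relation(tokens, gold_relation)
    deltas = []
    for i, _ in enumerate(tokens):
        masked = tokens[:i] + [mask_token] + tokens[i + 1:]
        deltas.append(base - score_relation(masked, gold_relation))
    return deltas
```

Collecting such per-instance scores for each token type across a corpus is what allows distributions like the boxplots in Figure FIGREF47 to be drawn.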
We also hope to see applications of discourse relations such as machine comprehension BIBREF20 and sentiment analysis BIBREF55 etc. benefit from the proposed model architecture as well as the dataset. <<</Discussion>>> <<</Title>>>
{ "references": [ "influence of each word on the score of the correct relation, that impact should and does still correlate with human judgments" ], "type": "extractive" }
2002.00317
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: Which baseline performs best? Context: <<<Title>>> Citation Text Generation <<<Abstract>>> We introduce the task of citation text generation: given a pair of scientific documents, explain their relationship in natural language text in the manner of a citation from one text to the other. This task encourages systems to learn rich relationships between scientific texts and to express them concretely in natural language. Models for citation text generation will require robust document understanding including the capacity to quickly adapt to new vocabulary and to reason about document content. We believe this challenging direction of research will benefit high-impact applications such as automatic literature review or scientific writing assistance systems. In this paper we establish the task of citation text generation with a standard evaluation corpus and explore several baseline models. <<</Abstract>>> <<<Introduction>>> The output of the world's scientists doubles roughly every nine years BIBREF0, and their pace is quickening. As a result, scientists and other experts must devote significant time to the difficult task of literature review, or coming to understand the context in which they work. Might artificial intelligence help to reduce that time? Several lines of research seek to do so. Citation recommendations systems BIBREF1, BIBREF2, BIBREF3 suggest references to relevant published work for a given document such as a current draft. Summarization systems BIBREF4, BIBREF5 condense the information in one or more documents, allowing researchers to more quickly understand the basic ideas in a piece of research. We introduce a complementary—but so far unaddressed—problem, citation text generation, where the relationship between a document and one or several others is expressed in natural language text. This differs from traditional summarization in that the primary focus is explaining the relationship between the two documents rather than their content. Automatically describing inter-document relationships could dramatically decrease the time researchers devote to literature review. For instance, a new paper could be explained in terms of its relationships to relevant works that a particular reader is most familiar with, rather than just those which the authors elected to cite (personalization). Further, such technology could be incorporated into writing assistance systems to help less experienced or non-native writers better articulate the connection between their work and prior art. Additionally, users of citation recommendation systems can benefit from natural language explanations of recommendation system choices. Beyond the immediate utility of citation text generation systems, the task offers significant challenges for language understanding and generation research. A major challenge is how to represent the information in one or more scientific texts. These documents are longer than those in most other domains typically studied in NLP, and make use of a long-tailed, open-domain technical vocabulary. Often an important phrase in the citing sentence output occurs only in a specific cited document and not elsewhere in the corpus. This requires a model that can learn phrase meanings from very few exposures, an important but unsolved problem for text generation systems. 
Possibly more challenging is understanding and expressing the various and nuanced relationships between related scientific works. In this work, we introduce the task of citation text generation. Leveraging the full texts of English-language scientific articles, we construct a dataset of citation sentences in the computer science domain for training and evaluating citation text generation models. We investigate strong retrieval and neural baseline models against which future work can compare. For use cases where large models can be trained, we extend the successful GPT2 architecture BIBREF6 to the scientific domain with additional pre-training and subsequent fine-tuning on the citation generation task. We experiment with different kinds of document context in the fine-tuning and inference stages. We also explore retrieval-based techniques which may more easily generalize to lower-resource settings. These models retrieve citation sentences from training documents which are most similar to test inputs. Our evaluations show that these techniques often produce plausible citation sentences, but indicate clear directions for improvement. Code and artifacts are provided for future research. <<</Introduction>>> <<<Task>>> Given the important research challenges posed by the citation text generation task, along with the potential social benefits of its solutions, let us continue with a formalization of the problem. Citation text generation is the task of generating a natural language citing sentence which explains the relationship between two documents. Examples of such citing sentences can be found in scientific documents as in-text citations to a previous work. Thus, we will formally distinguish one document as the source document, from which we will draw citing sentences which reference the cited document. If we want to leverage powerful modern neural text generation systems, we are faced with the problem of how to represent the documents in a way that these models can consume. In particular, language models like GPT2 are trained to predict next token probabilities given long stretches of contiguous text from a single document. It is not clear how to mix information from more than one document when providing context to these models. An additional difficulty of the citation text generation task is the vocabulary. In this domain, low-frequency, highly meaningful terms regularly appear in output texts. These terms may be completely novel to a single or small collection of papers (consider the phrase “citation text generation”, for instance), yet they are necessary for explaining the paper. This framing suggests a supervised learning setup. Let $t$ denote a citing sentence drawn from $S$, and $S^{\prime }$ denote $S$ without $t$. Then let be the probability of $t$ given $S^{\prime }$, cited document $C$, and model parameters $\theta $. The goal of learning a citation text generation model would be to maximize this probability across a large number of $t,S,C$ triples, so long as the parameters also generalize to unseen instances. At inference time, the goal is to generate a sentence $t^\ast $ which accurately describes the relationship between $S$ and $C$. The most appropriate evaluation metric for most text generation tasks is human judgment by potential users of the system. Evaluating citation text requires human judges with scientific expertise. 
For exploratory purposes, we use the standard automatic metrics for text generation tasks described in Section SECREF4, and we present an expert error analysis in Section SECREF14. For source and cited documents, we use English-language computer science articles and annotation from the S2-GORC dataset BIBREF7. S2-GORC is a large citation graph dataset which includes full texts of 8.1 million scientific documents. We select a subset of 154K computer science articles as our corpus. From these, we extract 622K citing sentences that link back to other documents in our corpus. We hold 2500 examples for each of the validation and test sets. Detailed statistics can be found in Table TABREF4. <<</Task>>> <<<Models>>> We explore two basic styles of model for citation text generation. Following current work in neural text generation, we fine-tune the predictions of a large pre-trained language model to the citation text generation task. Additionally, we investigate approximate nearest neighbor methods to retrieve plausible human-authored citation sentences from the training data. <<<Neural Text Generation>>> Recent work has shown that adapting large pre-trained language models to text generation tasks yields strong results BIBREF8. Due to its widespread use in text generation, we investigate the GPT2 model of BIBREF6 for the citation text generation task. GPT2 is a transformer model trained on 40 gigabytes of internet text with a language modeling objective BIBREF9. The adaptation process, called fine-tuning, involves continued training of the model on the target objective, in our case citation text generation. To fine-tune GPT2 for text generation, it is typical to concatenate the conditioning context $X = x_1 \ldots x_n$ and citing sentence $Y = y_1 \ldots y_m$ with a special separator token $\mho $. The model learns to approximate next token probabilities for each index after $\mho $: $p(y_{i+1} \mid X, \mho , y_1, \ldots , y_i)$ for $0<i<m$ and model parameters $\theta $. Cross-entropy loss is calculated for each $y_i$ and backpropagation is used to find parameters $\theta $ which maximize $p(y_{i+1} \mid X,\mho ,y_1,\ldots ,y_i)$. To adapt Equation DISPLAY_FORM6 to the citation text generation task, we construct the conditioning context $X$ from the source and cited documents. We take $j$ tokens from the source document, $s_1,\ldots ,s_j$, along with $k$ tokens from the cited document, $c_1,\ldots ,c_k$. (Which tokens are drawn from the two documents is an independent variable that we explore experimentally.) We then condition the generation of citing sentence $Y$ on $X = s_1,\ldots ,s_j,\mho ,c_1,\ldots ,c_k$. This model is trained to predict each token of $Y$ as described above. <<<Context>>> The primary question we investigate with this model is what kind of input is best for generating accurate and informative citation sentences. Prior work in citation recommendation has made use of abstracts, which perhaps act as sufficient summaries of document content for this task. Additionally, we explore variants of extended context, such as the introduction or first section after the abstract. Since scientific texts are too long to fit into the context window of our generation model, we also investigate a “sampling” approach which samples sentences from throughout the document until the context window is full. In this work, we combine either the abstract or introduction of the source document with each of the abstract, introduction, or sampled sentences from the cited document. 
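As a concrete illustration of the fine-tuning setup described in this section, the following is a minimal sketch using the HuggingFace transformers library. The choice of separator token, the truncation budgets, and the convention of computing the loss only on the citing-sentence tokens are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical choice: reuse the end-of-text token as the separator from the paper.
SEP = tokenizer.eos_token

def make_example(source_ctx: str, cited_ctx: str, citing_sentence: str,
                 max_ctx: int = 450, max_tgt: int = 60):
    """Build input_ids and labels so cross-entropy is only computed on the
    citing-sentence tokens (label -100 positions are ignored by the model)."""
    src_ids = tokenizer.encode(source_ctx)[:max_ctx]
    cit_ids = tokenizer.encode(cited_ctx)[:max_ctx]
    sep_ids = tokenizer.encode(SEP)
    tgt_ids = tokenizer.encode(citing_sentence)[:max_tgt] + sep_ids

    context = src_ids + sep_ids + cit_ids + sep_ids
    input_ids = context + tgt_ids
    labels = [-100] * len(context) + tgt_ids          # loss only on Y
    return torch.tensor([input_ids]), torch.tensor([labels])

input_ids, labels = make_example("source abstract ...",
                                 "cited abstract ...",
                                 "We follow the approach of ...")
loss = model(input_ids=input_ids, labels=labels).loss  # cross-entropy over Y
loss.backward()
```

Batches of such (input_ids, labels) pairs can be fed to any standard optimizer loop; at inference time the same context prefix serves as the prompt from which the citing sentence is decoded.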
<<</Context>>> <<</Neural Text Generation>>> <<<Retrieval with Approximate Nearest Neighbors>>> While neural text generation techniques have advanced significantly in recent years, they are still inferior to human authored texts. For some tasks, it is better to retrieve a relevant human-authored text rather than generating novel text automatically BIBREF10. Is this also the case for citation text generation? To answer this question, we adapt an approximate nearest neighbor search algorithm to find similar pairs of documents. The basic search procedure is as follows: Given a test instance input $(S,C)$ for source $S$ and cited document $C$, we find the set $\bf {N}_C$, the nearest neighbors to $C$ in the training data. For each document $N_C$ from $\bf {N}_C$, let $\bf {N}_S$ be the set of documents that cite $N_C$. This means that each $N_S \in {\bf N}_S$ contains at least one citing sentence $t^{\prime }$ which cites $N_C$. We return the $t^{\prime }$ associated with the $(N_S,N_C)$ pair from the training which is closest to $(S,C)$. We measure the closeness of two pairs of documents by measuring cosine distances between vector representations of their content. The abstract of each document is embedded into a single dense vector by averaging the contextualized embeddings provided by the SciBERT model of BIBREF11 and normalizing. The distance between $(S,C)$ and candidate $(N_S,N_C)$ is computed as: where $\alpha $ and $\beta $ control the relative contribution of the two document similarities. We explore setting both $\alpha $ and $\beta $ to 1, or tuning them to optimize either BLEU or BERTScore on the validation set. <<</Retrieval with Approximate Nearest Neighbors>>> <<<Language Model Pretraining>>> GPT2-based models have demonstrated an ability to capture long distance dependencies over hundreds of tokens, which we hypothesize will allow them to synthesize information in both the source and cited documents. But citation text generation models must also handle the challenging technical vocabulary of the scientific domain. Prior work has shown that pretraining on in-domain data improves the performance of large language models on domain-specific tasks BIBREF11. Inspired by this, we experiment with additional pretraining of GPT2 in the science domain. This model, SciGPT2, is trained for an additional 3 epochs over the full text of the documents in our corpus using a language modeling objective. We note that both SciGPT2 and the SciBERT language models used here have been exposed to citing sentences from the test and validation sets as in-line citations during their pre-training phases, which may improve their performance versus models without this exposure. Such exposure is typical when using pretrained language models, as text from test data cannot be guaranteed to be absent from the large task-independent corpora upon which these models are trained. <<</Language Model Pretraining>>> <<</Models>>> <<<Evaluation>>> We compare the different baseline systems using BLEU BIBREF12, ROUGE (specifically ROUGE 1, 2, and L; BIBREF13), and the recently introduced BertScore BIBREF14, a similarity metric based on BERT embeddings which has been shown to correlate well with human judgements on other tasks. To adapt the BertScore metric to the scientific text domain, we use SciBERT embeddings. Table TABREF7 (above the double line) shows the performance of the SciGPT2 model on the test set when provided with the different input context combinations outlined in Section SECREF5. 
We find that context does make a difference for this category of model, and that models which have access to the intro of the documents outperform those which use abstracts or sampling. Automatic evaluation of the retrieval-based methods on the test data are shown below the double line in Table TABREF7. This table shows that the retrieval methods perform well on this task. However we will show the limitations of these automatic metrics in Section SECREF14. We also observe that tuning the $\alpha $ and $\beta $ parameters on the validation set results in overfitting for this method. Outputs are largely unchanged by this tuning; fewer than 400 test datapoints differ from the untuned outputs. A larger validation split may alleviate this problem. Statistical significance is assessed for select results using bootstrapping with 1000 samples in each of 100 iterations. This test shows that conditioning on the introduction of the source document improves performance compared to conditioning on the abstract when using the SciGPT2 model. However, we see that IR methods perform better than the best neural models. We do not find enough evidence to reject the null hypothesis regarding what context from the cited document should be used. <<</Evaluation>>> <<<Analysis>>> In this section we take a closer look at the details of the SciGPT2 and IR system outputs on a collection of validation datapoints. We provide a quantitative error analysis as well as qualitative analysis and examples. <<<Errors>>> In order to better understand the performance of the models, we undertake a quantitative analysis of its output. One author randomly selected 200 datapoints from the validation set and their associated model outputs. Source and cited papers in the topic of NLP were used so as to facilitate expert judgement. For tractability, we limited the context presented to the annotator to the document abstracts and analyze the outputs of the abs $\times $ abs and IR systems. In this analysis, we ask whether the models are producing believable citing sentences given their input. In particular, we are interested in the relative believability of the SciGPT2 and IR systems, as well as how believability of a citing sentence changes when a reader can see the abstract of one document or both. We use 100 datapoints with outputs from the SciGPT2 system and 100 with outputs from the IR system. For 50 datapoints from each system, the cited document's abstract is initially masked such that only the source context is visible (Source, One Visible). Based only on the source context, the annotator judged whether the model output (1) could have convincingly been a citation in the source document based solely on the abstract (believable), (2) could have been a citation in the source document, but unclear from the abstract alone and depends on the rest of the paper's content (content-dependent), or (3) is unlikely to appear in this document (not believable). After making this judgment, the annotator was then shown the abstract of the cited document and asked to make the 3-way believability judgment based on both source and cited abstracts (Source, Both Visible). This process is repeated with the remaining 50 datapoints, but with the cited context masked initially (Cited, One Visible and Cited, Both Visible). The results of our analysis presented in Table TABREF13. We find that believability in the Cited, One Visible condition correlates well with the Cited, Both Visible condition. 
In the Source conditions, we see a greater difference in believability between One Visible and Both Visible. These findings make sense: in-line citations often summarize a prior study rather than highlight the paper's own contributions. Together, these results indicate that the believability of citing sentences is more related to the cited document than to the source. Another interesting feature of this analysis is the difference between SciGPT2 and IR in terms of context-dependent citing sentences. We observe fewer such judgements in the IR outputs. This is probably because neural text generation systems such as SciGPT2 will sometimes produce generic, uninformative outputs, while the IR system outputs are usually specific enough that a stronger believability judgement can be made. We also observe a higher overall incidence of not believable judgements for the IR model outputs. This implies that automatic metrics such as BLEU, on which the IR system scored higher than SciGPT2, do not correlate with factual accuracy in citation text generation. Example citations and annotations are shown in Table TABREF15. We find that in the cases where the model-generated outputs are unconvincing, they are still on topic. All 10 cases in the Source, One Visible condition and 4 of the cases in the Cited, One Visible condition that were no longer believable in the Both Visible conditions exhibit this quality. A common example (4 cases) of this phenomenon occurs when the model output references a dataset. While the dataset would be potentially relevant to both papers, the cited papers focus on modeling contributions and do not introduce a novel corpus. <<</Errors>>> <<<Examples>>> Example system outputs for randomly selected validation instances are shown in Table TABREF18. We see that both the SciGPT2 and IR model outputs regularly hit on the correct broad topic of the cited text (such as “literary analysis” or “image captioning evaluation metrics”). It is notable that the SciGPT2 model outputs syntactically correct and coherent citation sentences, even given the difficulty of the vocabulary in this domain. This is a testament to the power of the domain-specific language model training. We also observe that the outputs of the SciGPT2 model are often shorter than the desired citing sentence. Brevity is a known issue for neural text generation and may be alleviated by penalizing brevity in the inference procedure. More problematic are the factual errors in the generated text. In the last example, for instance, we see that SciGPT2 fails to cite the specific image captioning dataset described in the cited paper (Pascal1K) and instead focuses on the more general evaluation metric for the image captioning task (CIDEr). This is typical of neural text generation systems, which often assign high probability to generic or frequent phrases and revert to these in the face of uncertainty. <<</Examples>>> <<<Future Work>>> The fluency and topical relevance of the baseline models show the plausibility of the citation text generation task as well as the utility of including pretrained scientific language models in future models. But based on the kinds of errors we have seen, future work should focus on two complementary goals: ensuring the factual accuracy of the generated text and improving the modeling of the cited document. Factual accuracy is difficult to enforce in statistical text generation systems, especially where inference includes sampling procedures. Grounding to knowledge bases could help. 
For this task, knowledge extracted from candidate generations could be compared with knowledge from the full source and cited documents to prune false or irrelevant statements. Further, modeling input documents as knowledge graphs of their contents may help these algorithms better understand the cited document, resulting in better outputs. However, such a model will have to address the open problem of combining pretrained language models with graph encoding techniques. <<</Future Work>>> <<</Analysis>>> <<<Related Work>>> The current work builds on recent research in scientific document understanding, including citation recommendation and categorization, as well as scientific document summarization. Citation recommendation, or the task of selecting works related to a source document which would be suitable for citing, is a longstanding goal of AI research BIBREF15, BIBREF2, BIBREF16. Recently, researchers have sought to categorize citations using various ontologies of citation intents. BIBREF1 sought to discern “highly influential” citations from others. BIBREF17 uses six categories including “motivation”, “uses”, and “future work” among others. BIBREF3 condense this ontology to just three: “background”, “method”, and “result comparison”. We view the citation text generation task as an extension of these classification approaches with distinct advantages. While classification requires an existing citation link, our generation task can describe possible relationships between works which do not cite each other, such as contemporaneous works. Additionally, because gold citation texts are readily available in scientific documents, the citation text generation task requires no task-specific annotated training data. In practice, citation classification is used to assist in suggesting relevant works to researchers; citation text generation complements this goal by providing rationales for the recommendation and furthering progress toward explainable AI. Generating a citation is also connected to summarizing scientific documents. There is a long history of research on summarizing scientific documents BIBREF18, BIBREF19. More recently, researchers have included citing sentences as part of the input for summarization, hoping to capture the contribution of a work along with its content BIBREF20, BIBREF21, BIBREF5. Ours is the first work to focus on the specific relationship between two documents when generating such sentences. Because of the emphasis on relational document understanding in our task, citation generation models can also be used to assist with drafting papers, reducing researcher workload and providing non-native writers with a helpful first draft. Our work builds on recent advances in transfer learning in NLP. In particular, large pretrained models such as BERT BIBREF22 and GPT2 BIBREF6 have made strong advances on a number of tasks BIBREF23. It has also been shown that pretraining these models on domain-specific data further improves results on domain-specific tasks BIBREF11, BIBREF24. In this work, we apply that methodology by adding an additional pretraining phase on in-domain data before fine-tuning a GPT2 model on the citation text generation task. <<</Related Work>>> <<<Conclusion>>> We have introduced the challenging but useful task of citation text generation. This task requires reasoning about the relationships between documents and expressing these relationships in natural language text. 
We have established a dataset for this task and studied the performance of contemporary neural text generation and information retrieval models. Our analysis shows that while these models produce fluent and topical outputs, more research is needed to ensure factual accuracy and specificity in the generated text. <<</Conclusion>>> <<</Title>>>
{ "references": [ "IR methods perform better than the best neural models" ], "type": "extractive" }
2002.00317
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: Which baselines are explored? Context: <<<Title>>> Citation Text Generation <<<Abstract>>> We introduce the task of citation text generation: given a pair of scientific documents, explain their relationship in natural language text in the manner of a citation from one text to the other. This task encourages systems to learn rich relationships between scientific texts and to express them concretely in natural language. Models for citation text generation will require robust document understanding including the capacity to quickly adapt to new vocabulary and to reason about document content. We believe this challenging direction of research will benefit high-impact applications such as automatic literature review or scientific writing assistance systems. In this paper we establish the task of citation text generation with a standard evaluation corpus and explore several baseline models. <<</Abstract>>> <<<Introduction>>> The output of the world's scientists doubles roughly every nine years BIBREF0, and their pace is quickening. As a result, scientists and other experts must devote significant time to the difficult task of literature review, or coming to understand the context in which they work. Might artificial intelligence help to reduce that time? Several lines of research seek to do so. Citation recommendations systems BIBREF1, BIBREF2, BIBREF3 suggest references to relevant published work for a given document such as a current draft. Summarization systems BIBREF4, BIBREF5 condense the information in one or more documents, allowing researchers to more quickly understand the basic ideas in a piece of research. We introduce a complementary—but so far unaddressed—problem, citation text generation, where the relationship between a document and one or several others is expressed in natural language text. This differs from traditional summarization in that the primary focus is explaining the relationship between the two documents rather than their content. Automatically describing inter-document relationships could dramatically decrease the time researchers devote to literature review. For instance, a new paper could be explained in terms of its relationships to relevant works that a particular reader is most familiar with, rather than just those which the authors elected to cite (personalization). Further, such technology could be incorporated into writing assistance systems to help less experienced or non-native writers better articulate the connection between their work and prior art. Additionally, users of citation recommendation systems can benefit from natural language explanations of recommendation system choices. Beyond the immediate utility of citation text generation systems, the task offers significant challenges for language understanding and generation research. A major challenge is how to represent the information in one or more scientific texts. These documents are longer than those in most other domains typically studied in NLP, and make use of a long-tailed, open-domain technical vocabulary. Often an important phrase in the citing sentence output occurs only in a specific cited document and not elsewhere in the corpus. This requires a model that can learn phrase meanings from very few exposures, an important but unsolved problem for text generation systems. 
Possibly more challenging is understanding and expressing the various and nuanced relationships between related scientific works. In this work, we introduce the task of citation text generation. Leveraging the full texts of English-language scientific articles, we construct a dataset of citation sentences in the computer science domain for training and evaluating citation text generation models. We investigate strong retrieval and neural baseline models against which future work can compare. For use cases where large models can be trained, we extend the successful GPT2 architecture BIBREF6 to the scientific domain with additional pre-training and subsequent fine-tuning on the citation generation task. We experiment with different kinds of document context in the fine-tuning and inference stages. We also explore retrieval-based techniques which may more easily generalize to lower-resource settings. These models retrieve citation sentences from training documents which are most similar to test inputs. Our evaluations show that these techniques often produce plausible citation sentences, but indicate clear directions for improvement. Code and artifacts are provided for future research. <<</Introduction>>> <<<Task>>> Given the important research challenges posed by the citation text generation task, along with the potential social benefits of its solutions, let us continue with a formalization of the problem. Citation text generation is the task of generating a natural language citing sentence which explains the relationship between two documents. Examples of such citing sentences can be found in scientific documents as in-text citations to a previous work. Thus, we will formally distinguish one document as the source document, from which we will draw citing sentences which reference the cited document. If we want to leverage powerful modern neural text generation systems, we are faced with the problem of how to represent the documents in a way that these models can consume. In particular, language models like GPT2 are trained to predict next token probabilities given long stretches of contiguous text from a single document. It is not clear how to mix information from more than one document when providing context to these models. An additional difficulty of the citation text generation task is the vocabulary. In this domain, low-frequency, highly meaningful terms regularly appear in output texts. These terms may be completely novel to a single or small collection of papers (consider the phrase “citation text generation”, for instance), yet they are necessary for explaining the paper. This framing suggests a supervised learning setup. Let $t$ denote a citing sentence drawn from $S$, and $S^{\prime }$ denote $S$ without $t$. Then let be the probability of $t$ given $S^{\prime }$, cited document $C$, and model parameters $\theta $. The goal of learning a citation text generation model would be to maximize this probability across a large number of $t,S,C$ triples, so long as the parameters also generalize to unseen instances. At inference time, the goal is to generate a sentence $t^\ast $ which accurately describes the relationship between $S$ and $C$. The most appropriate evaluation metric for most text generation tasks is human judgment by potential users of the system. Evaluating citation text requires human judges with scientific expertise. 
For exploratory purposes, we use the standard automatic metrics for text generation tasks described in Section SECREF4, and we present an expert error analysis in Section SECREF14. For source and cited documents, we use English-language computer science articles and annotation from the S2-GORC dataset BIBREF7. S2-GORC is a large citation graph dataset which includes full texts of 8.1 million scientific documents. We select a subset of 154K computer science articles as our corpus. From these, we extract 622K citing sentences that link back to other documents in our corpus. We hold 2500 examples for each of the validation and test sets. Detailed statistics can be found in Table TABREF4. <<</Task>>> <<<Models>>> We explore two basic styles of model for citation text generation. Following current work in neural text generation, we fine-tune the predictions of a large pre-trained language model to the citation text generation task. Additionally, we investigate approximate nearest neighbor methods to retrieve plausible human-authored citation sentences from the training data. <<<Neural Text Generation>>> Recent work has shown that adapting large pre-trained language models to text generation tasks yields strong results BIBREF8. Due to its widespread use in text generation, we investigate the GPT2 model of BIBREF6 for the citation text generation task. GPT2 is a transformer model trained on 40 gigabytes of internet text with a language modeling objective BIBREF9. The adaptation process, called fine-tuning, involves continued training of the model on the target objective, in our case citation text generation. To fine-tune GPT2 for text generation, it is typical to concatenate the conditioning context $X = x_1 \ldots x_n$ and citing sentence $Y = y_1 \ldots y_m$ with a special separator token $\mho $. The model learns to approximate next token probabilities for each index after $\mho $: $p(y_{i+1} \mid X, \mho , y_1, \ldots , y_i)$ for $0<i<m$ and model parameters $\theta $. Cross-entropy loss is calculated for each $y_i$ and backpropagation is used to find parameters $\theta $ which maximize $p(y_{i+1} \mid X,\mho ,y_1,\ldots ,y_i)$. To adapt Equation DISPLAY_FORM6 to the citation text generation task, we construct the conditioning context $X$ from the source and cited documents. We take $j$ tokens from the source document, $s_1,\ldots ,s_j$, along with $k$ tokens from the cited document, $c_1,\ldots ,c_k$. (Which tokens are drawn from the two documents is an independent variable that we explore experimentally.) We then condition the generation of citing sentence $Y$ on $X = s_1,\ldots ,s_j,\mho ,c_1,\ldots ,c_k$. This model is trained to predict each token of $Y$ as described above. <<<Context>>> The primary question we investigate with this model is what kind of input is best for generating accurate and informative citation sentences. Prior work in citation recommendation has made use of abstracts, which perhaps act as sufficient summaries of document content for this task. Additionally, we explore variants of extended context, such as the introduction or first section after the abstract. Since scientific texts are too long to fit into the context window of our generation model, we also investigate a “sampling” approach which samples sentences from throughout the document until the context window is full. In this work, we combine either the abstract or introduction of the source document with each of the abstract, introduction, or sampled sentences from the cited document. 
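One way to realize the “sampling” context strategy just described is sketched below; the per-document token budget, the random seed, and the decision to keep sampled sentences in their original order are assumptions made for illustration, not details taken from the paper.

```python
import random
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def sampled_context(sentences: list, budget_tokens: int = 450, seed: int = 0) -> str:
    """Randomly pick sentences from throughout a document, skipping any that
    would overflow the per-document token budget, and return them in their
    original order as a single context string."""
    rng = random.Random(seed)
    order = list(range(len(sentences)))
    rng.shuffle(order)

    chosen, used = [], 0
    for idx in order:
        n = len(tokenizer.encode(sentences[idx]))
        if used + n > budget_tokens:
            continue
        chosen.append(idx)
        used += n
    return " ".join(sentences[i] for i in sorted(chosen))
```

The resulting string can then stand in for the cited document's abstract or introduction when the conditioning context $X$ is assembled.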
<<</Context>>> <<</Neural Text Generation>>> <<<Retrieval with Approximate Nearest Neighbors>>> While neural text generation techniques have advanced significantly in recent years, they are still inferior to human authored texts. For some tasks, it is better to retrieve a relevant human-authored text rather than generating novel text automatically BIBREF10. Is this also the case for citation text generation? To answer this question, we adapt an approximate nearest neighbor search algorithm to find similar pairs of documents. The basic search procedure is as follows: Given a test instance input $(S,C)$ for source $S$ and cited document $C$, we find the set $\bf {N}_C$, the nearest neighbors to $C$ in the training data. For each document $N_C$ from $\bf {N}_C$, let $\bf {N}_S$ be the set of documents that cite $N_C$. This means that each $N_S \in {\bf N}_S$ contains at least one citing sentence $t^{\prime }$ which cites $N_C$. We return the $t^{\prime }$ associated with the $(N_S,N_C)$ pair from the training which is closest to $(S,C)$. We measure the closeness of two pairs of documents by measuring cosine distances between vector representations of their content. The abstract of each document is embedded into a single dense vector by averaging the contextualized embeddings provided by the SciBERT model of BIBREF11 and normalizing. The distance between $(S,C)$ and candidate $(N_S,N_C)$ is computed as: where $\alpha $ and $\beta $ control the relative contribution of the two document similarities. We explore setting both $\alpha $ and $\beta $ to 1, or tuning them to optimize either BLEU or BERTScore on the validation set. <<</Retrieval with Approximate Nearest Neighbors>>> <<<Language Model Pretraining>>> GPT2-based models have demonstrated an ability to capture long distance dependencies over hundreds of tokens, which we hypothesize will allow them to synthesize information in both the source and cited documents. But citation text generation models must also handle the challenging technical vocabulary of the scientific domain. Prior work has shown that pretraining on in-domain data improves the performance of large language models on domain-specific tasks BIBREF11. Inspired by this, we experiment with additional pretraining of GPT2 in the science domain. This model, SciGPT2, is trained for an additional 3 epochs over the full text of the documents in our corpus using a language modeling objective. We note that both SciGPT2 and the SciBERT language models used here have been exposed to citing sentences from the test and validation sets as in-line citations during their pre-training phases, which may improve their performance versus models without this exposure. Such exposure is typical when using pretrained language models, as text from test data cannot be guaranteed to be absent from the large task-independent corpora upon which these models are trained. <<</Language Model Pretraining>>> <<</Models>>> <<<Evaluation>>> We compare the different baseline systems using BLEU BIBREF12, ROUGE (specifically ROUGE 1, 2, and L; BIBREF13), and the recently introduced BertScore BIBREF14, a similarity metric based on BERT embeddings which has been shown to correlate well with human judgements on other tasks. To adapt the BertScore metric to the scientific text domain, we use SciBERT embeddings. Table TABREF7 (above the double line) shows the performance of the SciGPT2 model on the test set when provided with the different input context combinations outlined in Section SECREF5. 
We find that context does make a difference for this category of model, and that models which have access to the intro of the documents outperform those which use abstracts or sampling. Automatic evaluation of the retrieval-based methods on the test data are shown below the double line in Table TABREF7. This table shows that the retrieval methods perform well on this task. However we will show the limitations of these automatic metrics in Section SECREF14. We also observe that tuning the $\alpha $ and $\beta $ parameters on the validation set results in overfitting for this method. Outputs are largely unchanged by this tuning; fewer than 400 test datapoints differ from the untuned outputs. A larger validation split may alleviate this problem. Statistical significance is assessed for select results using bootstrapping with 1000 samples in each of 100 iterations. This test shows that conditioning on the introduction of the source document improves performance compared to conditioning on the abstract when using the SciGPT2 model. However, we see that IR methods perform better than the best neural models. We do not find enough evidence to reject the null hypothesis regarding what context from the cited document should be used. <<</Evaluation>>> <<<Analysis>>> In this section we take a closer look at the details of the SciGPT2 and IR system outputs on a collection of validation datapoints. We provide a quantitative error analysis as well as qualitative analysis and examples. <<<Errors>>> In order to better understand the performance of the models, we undertake a quantitative analysis of its output. One author randomly selected 200 datapoints from the validation set and their associated model outputs. Source and cited papers in the topic of NLP were used so as to facilitate expert judgement. For tractability, we limited the context presented to the annotator to the document abstracts and analyze the outputs of the abs $\times $ abs and IR systems. In this analysis, we ask whether the models are producing believable citing sentences given their input. In particular, we are interested in the relative believability of the SciGPT2 and IR systems, as well as how believability of a citing sentence changes when a reader can see the abstract of one document or both. We use 100 datapoints with outputs from the SciGPT2 system and 100 with outputs from the IR system. For 50 datapoints from each system, the cited document's abstract is initially masked such that only the source context is visible (Source, One Visible). Based only on the source context, the annotator judged whether the model output (1) could have convincingly been a citation in the source document based solely on the abstract (believable), (2) could have been a citation in the source document, but unclear from the abstract alone and depends on the rest of the paper's content (content-dependent), or (3) is unlikely to appear in this document (not believable). After making this judgment, the annotator was then shown the abstract of the cited document and asked to make the 3-way believability judgment based on both source and cited abstracts (Source, Both Visible). This process is repeated with the remaining 50 datapoints, but with the cited context masked initially (Cited, One Visible and Cited, Both Visible). The results of our analysis presented in Table TABREF13. We find that believability in the Cited, One Visible condition correlates well with the Cited, Both Visible condition. 
In the Source conditions, we see a greater difference in believability between One Visible and Both Visible. These findings make sense: in-line citations often summarize a prior study rather than highlight the paper's own contributions. Together, these results indicate that the believability of citing sentences is more related to the cited document than to the source. Another interesting feature of this analysis is the difference between SciGPT2 and IR in terms of context-dependent citing sentences. We observe fewer such judgements in the IR outputs. This is probably because neural text generation systems such as SciGPT2 will sometimes produce generic, uninformative outputs, while the IR system outputs are usually specific enough that a stronger believability judgement can be made. We also observe a higher overall incidence of not believable judgements for the IR model outputs. This implies that automatic metrics such as BLEU, on which the IR system scored higher than SciGPT2, do not correlate with factual accuracy in citation text generation. Example citations and annotations are shown in Table TABREF15. We find that in the cases where the model-generated outputs are unconvincing, they are still on topic. All 10 cases in the Source, One Visible condition and 4 of the cases in the Cited, One Visible condition that were no longer believable in the Both Visible conditions exhibit this quality. A common example (4 cases) of this phenomenon occurs when the model output references a dataset. While the dataset would be potentially relevant to both papers, the cited papers focus on modeling contributions and do not introduce a novel corpus. <<</Errors>>> <<<Examples>>> Example system outputs for randomly selected validation instances are shown in Table TABREF18. We see that both the SciGPT2 and IR model outputs regularly hit on the correct broad topic of the cited text (such as “literary analysis” or “image captioning evaluation metrics”). It is notable that the SciGPT2 model outputs syntactically correct and coherent citation sentences, even given the difficulty of the vocabulary in this domain. This is a testament to the power of the domain-specific language model training. We also observe that the outputs of the SciGPT2 model are often shorter than the desired citing sentence. Brevity is a known issue for neural text generation and may be alleviated by penalizing brevity in the inference procedure. More problematic are the factual errors in the generated text. In the last example, for instance, we see that SciGPT2 fails to cite the specific image captioning dataset described in the cited paper (Pascal1K) and instead focuses on the more general evaluation metric for the image captioning task (CIDEr). This is typical of neural text generation systems, which often assign high probability to generic or frequent phrases and revert to these in the face of uncertainty. <<</Examples>>> <<<Future Work>>> The fluency and topical relevance of the baseline models show the plausibility of the citation text generation task as well as the utility of including pretrained scientific language models in future models. But based on the kinds of errors we have seen, future work should focus on two complementary goals: ensuring the factual accuracy of the generated text and improving the modeling of the cited document. Factual accuracy is difficult to enforce in statistical text generation systems, especially where inference includes sampling procedures. Grounding to knowledge bases could help. 
For this task, knowledge extracted from candidate generations could be compared with knowledge from the full source and cited documents to prune false or irrelevant statements. Further, modeling input documents as knowledge graphs of their contents may help these algorithms better understand the cited document, resulting in better outputs. However, such a model will have to address the open problem of combining pretrained language models with graph encoding techniques. <<</Future Work>>> <<</Analysis>>> <<<Related Work>>> The current work builds on recent research in scientific document understanding, including citation recommendation and categorization, as well as scientific document summarization. Citation recommendation, or the task of selecting works related to a source document which would be suitable for citing, is a longstanding goal of AI research BIBREF15, BIBREF2, BIBREF16. Recently, researchers have sought to categorize citations using various ontologies of citation intents. BIBREF1 sought to discern “highly influential” citations from others. BIBREF17 uses six categories including “motivation”, “uses”, and “future work” among others. BIBREF3 condense this ontology to just three: “background”, “method”, and “result comparison”. We view the citation text generation task as an extension of these classification approaches with distinct advantages. While classification requires an existing citation link, our generation task can describe possible relationships between works which do not cite each other, such as contemporaneous works. Additionally, because gold citation texts are readily available in scientific documents, the citation text generation task requires no task-specific annotated training data. In practice, citation classification is used to assist in suggesting relevant works to researchers; citation text generation complements this goal by providing rationales for the recommendation and furthering progress toward explainable AI. Generating a citation is also connected to summarizing scientific documents. There is a long history of research on summarizing scientific documents BIBREF18, BIBREF19. More recently, researchers have included citing sentences as part of the input for summarization, hoping to capture the contribution of a work along with its content BIBREF20, BIBREF21, BIBREF5. Ours is the first work to focus on the specific relationship between two documents when generating such sentences. Because of the emphasis on relational document understanding in our task, citation generation models can also be used to assist with drafting papers, reducing researcher workload and providing non-native writers with a helpful first draft. Our work builds on recent advances in transfer learning in NLP. In particular, large pretrained models such as BERT BIBREF22 and GPT2 BIBREF6 have made strong advances on a number of tasks BIBREF23. It has also been shown that pretraining these models on domain-specific data further improves results on domain-specific tasks BIBREF11, BIBREF24. In this work, we apply that methodology by adding an additional pretraining phase on in-domain data before fine-tuning a GPT2 model on the citation text generation task. <<</Related Work>>> <<<Conclusion>>> We have introduced the challenging but useful task of citation text generation. This task requires reasoning about the relationships between documents and expressing these relationships in natural language text. 
We have established a dataset for this task and studied the performance of contemporary neural text generation and information retrieval models. Our analysis shows that while these models produce fluent and topical outputs, more research is needed to ensure factual accuracy and specificity in the generated text. <<</Conclusion>>> <<</Title>>>
{ "references": [ "GPT2,SciBERT model of BIBREF11" ], "type": "extractive" }
2002.00317
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What is the size of the corpus? Context: <<<Title>>> Citation Text Generation <<<Abstract>>> We introduce the task of citation text generation: given a pair of scientific documents, explain their relationship in natural language text in the manner of a citation from one text to the other. This task encourages systems to learn rich relationships between scientific texts and to express them concretely in natural language. Models for citation text generation will require robust document understanding including the capacity to quickly adapt to new vocabulary and to reason about document content. We believe this challenging direction of research will benefit high-impact applications such as automatic literature review or scientific writing assistance systems. In this paper we establish the task of citation text generation with a standard evaluation corpus and explore several baseline models. <<</Abstract>>> <<<Introduction>>> The output of the world's scientists doubles roughly every nine years BIBREF0, and their pace is quickening. As a result, scientists and other experts must devote significant time to the difficult task of literature review, or coming to understand the context in which they work. Might artificial intelligence help to reduce that time? Several lines of research seek to do so. Citation recommendations systems BIBREF1, BIBREF2, BIBREF3 suggest references to relevant published work for a given document such as a current draft. Summarization systems BIBREF4, BIBREF5 condense the information in one or more documents, allowing researchers to more quickly understand the basic ideas in a piece of research. We introduce a complementary—but so far unaddressed—problem, citation text generation, where the relationship between a document and one or several others is expressed in natural language text. This differs from traditional summarization in that the primary focus is explaining the relationship between the two documents rather than their content. Automatically describing inter-document relationships could dramatically decrease the time researchers devote to literature review. For instance, a new paper could be explained in terms of its relationships to relevant works that a particular reader is most familiar with, rather than just those which the authors elected to cite (personalization). Further, such technology could be incorporated into writing assistance systems to help less experienced or non-native writers better articulate the connection between their work and prior art. Additionally, users of citation recommendation systems can benefit from natural language explanations of recommendation system choices. Beyond the immediate utility of citation text generation systems, the task offers significant challenges for language understanding and generation research. A major challenge is how to represent the information in one or more scientific texts. These documents are longer than those in most other domains typically studied in NLP, and make use of a long-tailed, open-domain technical vocabulary. Often an important phrase in the citing sentence output occurs only in a specific cited document and not elsewhere in the corpus. This requires a model that can learn phrase meanings from very few exposures, an important but unsolved problem for text generation systems. 
Possibly more challenging is understanding and expressing the various and nuanced relationships between related scientific works. In this work, we introduce the task of citation text generation. Leveraging the full texts of English-language scientific articles, we construct a dataset of citation sentences in the computer science domain for training and evaluating citation text generation models. We investigate strong retrieval and neural baseline models against which future work can compare. For use cases where large models can be trained, we extend the successful GPT2 architecture BIBREF6 to the scientific domain with additional pre-training and subsequent fine-tuning on the citation generation task. We experiment with different kinds of document context in the fine-tuning and inference stages. We also explore retrieval-based techniques which may more easily generalize to lower-resource settings. These models retrieve citation sentences from training documents which are most similar to test inputs. Our evaluations show that these techniques often produce plausible citation sentences, but indicate clear directions for improvement. Code and artifacts are provided for future research. <<</Introduction>>> <<<Task>>> Given the important research challenges posed by the citation text generation task, along with the potential social benefits of its solutions, let us continue with a formalization of the problem. Citation text generation is the task of generating a natural language citing sentence which explains the relationship between two documents. Examples of such citing sentences can be found in scientific documents as in-text citations to a previous work. Thus, we will formally distinguish one document as the source document, from which we will draw citing sentences which reference the cited document. If we want to leverage powerful modern neural text generation systems, we are faced with the problem of how to represent the documents in a way that these models can consume. In particular, language models like GPT2 are trained to predict next token probabilities given long stretches of contiguous text from a single document. It is not clear how to mix information from more than one document when providing context to these models. An additional difficulty of the citation text generation task is the vocabulary. In this domain, low-frequency, highly meaningful terms regularly appear in output texts. These terms may be completely novel to a single or small collection of papers (consider the phrase “citation text generation”, for instance), yet they are necessary for explaining the paper. This framing suggests a supervised learning setup. Let $t$ denote a citing sentence drawn from $S$, and $S^{\prime }$ denote $S$ without $t$. Then let $p(t \mid S^{\prime }, C; \theta )$ be the probability of $t$ given $S^{\prime }$, cited document $C$, and model parameters $\theta $. The goal of learning a citation text generation model would be to maximize this probability across a large number of $t,S,C$ triples, so long as the parameters also generalize to unseen instances. At inference time, the goal is to generate a sentence $t^\ast $ which accurately describes the relationship between $S$ and $C$. The most appropriate evaluation metric for most text generation tasks is human judgment by potential users of the system. Evaluating citation text requires human judges with scientific expertise. 
For exploratory purposes, we use the standard automatic metrics for text generation tasks described in Section SECREF4, and we conduct an expert error analysis in Section SECREF14. For source and cited documents, we use English-language computer science articles and annotation from the S2-GORC dataset BIBREF7. S2-GORC is a large citation graph dataset which includes full texts of 8.1 million scientific documents. We select a subset of 154K computer science articles as our corpus. From these, we extract 622K citing sentences that link back to other documents in our corpus. We hold out 2500 examples for each of the validation and test sets. Detailed statistics can be found in Table TABREF4. <<</Task>>> <<<Models>>> We explore two basic styles of model for citation text generation. Following current work in neural text generation, we fine-tune the predictions of a large pre-trained language model to the citation text generation task. Additionally, we investigate approximate nearest neighbor methods to retrieve plausible human-authored citation sentences from the training data. <<<Neural Text Generation>>> Recent work has shown that adapting large pre-trained language models to text generation tasks yields strong results BIBREF8. Due to its widespread use in text generation, we investigate the GPT model of BIBREF6 for the citation text generation task. GPT2 is a transformer model trained on 40 gigabytes of internet text with a language modeling objective BIBREF9. The adaptation process, called fine-tuning, involves continued training of the model on the target objective, in our case citation text generation. To fine-tune GPT2 for text generation, it is typical to concatenate the conditioning context $X = x_1 \ldots x_n$ and citing sentence $Y = y_1 \ldots y_m$ with a special separator token $\mho $. The model learns to approximate next token probabilities for each index after $\mho $: $p_{\theta }(y_{i+1} \mid X,\mho ,y_1,\ldots ,y_i)$ for $0<i<m$ and model parameters $\theta $. Cross-entropy loss is calculated for each $y_i$ and backpropagation is used to find parameters $\theta $ which maximize $p(y_{i+1} \mid X,\mho ,y_1,\ldots ,y_i)$. To adapt Equation DISPLAY_FORM6 to the citation text generation task, we construct the conditioning context $X$ from the source and cited documents. We take $j$ tokens from source document $s_1,\ldots ,s_j$ along with $k$ tokens from the cited document $c_1,\ldots ,c_k$. (Which tokens are drawn from the two documents is an independent variable that we explore experimentally.) We then condition the generation of citing sentence $Y$ on $X = s_1,\ldots ,s_j,\mho ,c_1,\ldots ,c_k$. This model is trained to predict each token of $Y$ as described above. <<<Context>>> The primary question we investigate with this model is what kind of input is best for generating accurate and informative citation sentences. Prior works in citation recommendation have made use of abstracts, which perhaps act as sufficient summaries of document content for this task. Additionally, we explore variants of extended context, such as the introduction or first section after the abstract. Since scientific texts are too long to fit into the context window of our generation model, we also investigate a “sampling” approach which samples sentences from throughout the document until the context window is full. In this work, we combine either the abstract or introduction of the source document with each of the abstract, introduction, or sampled sentences from the cited document. 
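To make the conditioning-context construction above concrete, the following is a minimal sketch using the public HuggingFace implementation of GPT-2 as a stand-in for SciGPT2. The separator string ("<|cite|>"), the truncation lengths, and the helper function are illustrative assumptions rather than details taken from the paper or its released code.

```python
# Minimal sketch of the fine-tuning input construction described above.
# Assumptions (not from the paper's released code): HuggingFace GPT-2 as a
# stand-in for SciGPT2, "<|cite|>" as the separator token, and simple
# truncation of the source/cited contexts.
import torch
from transformers import GPT2TokenizerFast, GPT2LMHeadModel

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.add_special_tokens({"additional_special_tokens": ["<|cite|>"]})
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.resize_token_embeddings(len(tokenizer))

def build_example(source_ctx, cited_ctx, citing_sentence, j=384, k=384):
    """Concatenate j source tokens, the separator, k cited tokens, then the target."""
    src_ids = tokenizer.encode(source_ctx)[:j]
    cit_ids = tokenizer.encode(cited_ctx)[:k]
    sep_id = tokenizer.convert_tokens_to_ids("<|cite|>")
    tgt_ids = tokenizer.encode(citing_sentence) + [tokenizer.eos_token_id]
    input_ids = src_ids + [sep_id] + cit_ids + [sep_id] + tgt_ids
    # Restrict the LM loss to the citing-sentence tokens: positions belonging
    # to the conditioning context are masked out with -100.
    labels = [-100] * (len(src_ids) + 1 + len(cit_ids) + 1) + tgt_ids
    return torch.tensor([input_ids]), torch.tensor([labels])

input_ids, labels = build_example(
    "Abstract of the source paper ...", "Abstract of the cited paper ...",
    "Prior work explored citation recommendation (Smith et al., 2018).")
loss = model(input_ids, labels=labels).loss  # cross-entropy over target tokens only
loss.backward()
```

Masking the conditioning context in the labels is one way to realize the objective described above, where the cross-entropy is computed only over the tokens of the citing sentence.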
<<</Context>>> <<</Neural Text Generation>>> <<<Retrieval with Approximate Nearest Neighbors>>> While neural text generation techniques have advanced significantly in recent years, they are still inferior to human authored texts. For some tasks, it is better to retrieve a relevant human-authored text rather than generating novel text automatically BIBREF10. Is this also the case for citation text generation? To answer this question, we adapt an approximate nearest neighbor search algorithm to find similar pairs of documents. The basic search procedure is as follows: Given a test instance input $(S,C)$ for source $S$ and cited document $C$, we find the set $\bf {N}_C$, the nearest neighbors to $C$ in the training data. For each document $N_C$ from $\bf {N}_C$, let $\bf {N}_S$ be the set of documents that cite $N_C$. This means that each $N_S \in {\bf N}_S$ contains at least one citing sentence $t^{\prime }$ which cites $N_C$. We return the $t^{\prime }$ associated with the $(N_S,N_C)$ pair from the training data which is closest to $(S,C)$. We measure the closeness of two pairs of documents by measuring cosine distances between vector representations of their content. The abstract of each document is embedded into a single dense vector by averaging the contextualized embeddings provided by the SciBERT model of BIBREF11 and normalizing. The distance between $(S,C)$ and candidate $(N_S,N_C)$ is computed as a weighted sum of the two cosine distances, $\alpha \, d_{\cos }(S, N_S) + \beta \, d_{\cos }(C, N_C)$, where $\alpha $ and $\beta $ control the relative contribution of the two document similarities. We explore setting both $\alpha $ and $\beta $ to 1, or tuning them to optimize either BLEU or BERTScore on the validation set. <<</Retrieval with Approximate Nearest Neighbors>>> <<<Language Model Pretraining>>> GPT2-based models have demonstrated an ability to capture long distance dependencies over hundreds of tokens, which we hypothesize will allow them to synthesize information in both the source and cited documents. But citation text generation models must also handle the challenging technical vocabulary of the scientific domain. Prior work has shown that pretraining on in-domain data improves the performance of large language models on domain-specific tasks BIBREF11. Inspired by this, we experiment with additional pretraining of GPT2 in the science domain. This model, SciGPT2, is trained for an additional 3 epochs over the full text of the documents in our corpus using a language modeling objective. We note that both SciGPT2 and the SciBERT language models used here have been exposed to citing sentences from the test and validation sets as in-line citations during their pre-training phases, which may improve their performance versus models without this exposure. Such exposure is typical when using pretrained language models, as text from test data cannot be guaranteed to be absent from the large task-independent corpora upon which these models are trained. <<</Language Model Pretraining>>> <<</Models>>> <<<Evaluation>>> We compare the different baseline systems using BLEU BIBREF12, ROUGE (specifically ROUGE 1, 2, and L; BIBREF13), and the recently introduced BertScore BIBREF14, a similarity metric based on BERT embeddings which has been shown to correlate well with human judgements on other tasks. To adapt the BertScore metric to the scientific text domain, we use SciBERT embeddings. Table TABREF7 (above the double line) shows the performance of the SciGPT2 model on the test set when provided with the different input context combinations outlined in Section SECREF5. 
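As an illustration of the retrieval scoring described in the nearest-neighbor section above, here is a minimal sketch. It assumes the publicly available SciBERT checkpoint on the HuggingFace hub and mean-pooled final-layer token states; the function names and the alpha = beta = 1 default are assumptions, not the authors' implementation.

```python
# Sketch of the weighted-cosine retrieval scoring described above (assumptions:
# HuggingFace SciBERT checkpoint, mean-pooled final-layer states, alpha=beta=1).
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
enc = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

@torch.no_grad()
def embed_abstract(text: str) -> torch.Tensor:
    """Average the contextualized token embeddings of an abstract and L2-normalize."""
    batch = tok(text, truncation=True, max_length=512, return_tensors="pt")
    hidden = enc(**batch).last_hidden_state[0]  # (seq_len, dim)
    vec = hidden.mean(dim=0)
    return vec / vec.norm()

def pair_distance(src_vec, cited_vec, cand_src_vec, cand_cited_vec,
                  alpha: float = 1.0, beta: float = 1.0) -> float:
    """Weighted sum of cosine distances between the two abstract pairs."""
    d_source = 1.0 - torch.dot(src_vec, cand_src_vec).item()
    d_cited = 1.0 - torch.dot(cited_vec, cand_cited_vec).item()
    return alpha * d_source + beta * d_cited

# Usage sketch: score every (N_S, N_C) candidate pair from the training set
# and return the citing sentence attached to the lowest-distance pair.
```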
We find that context does make a difference for this category of model, and that models which have access to the introduction of the documents outperform those which use abstracts or sampling. Automatic evaluation of the retrieval-based methods on the test data is shown below the double line in Table TABREF7. This table shows that the retrieval methods perform well on this task. However, we will show the limitations of these automatic metrics in Section SECREF14. We also observe that tuning the $\alpha $ and $\beta $ parameters on the validation set results in overfitting for this method. Outputs are largely unchanged by this tuning; fewer than 400 test datapoints differ from the untuned outputs. A larger validation split may alleviate this problem. Statistical significance is assessed for select results using bootstrapping with 1000 samples in each of 100 iterations. This test shows that conditioning on the introduction of the source document improves performance compared to conditioning on the abstract when using the SciGPT2 model. However, we see that IR methods perform better than the best neural models. We do not find enough evidence to reject the null hypothesis regarding what context from the cited document should be used. <<</Evaluation>>> <<<Analysis>>> In this section we take a closer look at the details of the SciGPT2 and IR system outputs on a collection of validation datapoints. We provide a quantitative error analysis as well as qualitative analysis and examples. <<<Errors>>> In order to better understand the performance of the models, we undertake a quantitative analysis of their outputs. One author randomly selected 200 datapoints from the validation set and their associated model outputs. Source and cited papers in the topic of NLP were used so as to facilitate expert judgement. For tractability, we limited the context presented to the annotator to the document abstracts and analyzed the outputs of the abs $\times $ abs and IR systems. In this analysis, we ask whether the models are producing believable citing sentences given their input. In particular, we are interested in the relative believability of the SciGPT2 and IR systems, as well as how believability of a citing sentence changes when a reader can see the abstract of one document or both. We use 100 datapoints with outputs from the SciGPT2 system and 100 with outputs from the IR system. For 50 datapoints from each system, the cited document's abstract is initially masked such that only the source context is visible (Source, One Visible). Based only on the source context, the annotator judged whether the model output (1) could have convincingly been a citation in the source document based solely on the abstract (believable), (2) could have been a citation in the source document, but unclear from the abstract alone and depends on the rest of the paper's content (content-dependent), or (3) is unlikely to appear in this document (not believable). After making this judgment, the annotator was then shown the abstract of the cited document and asked to make the 3-way believability judgment based on both source and cited abstracts (Source, Both Visible). This process is repeated with the remaining 50 datapoints, but with the cited context masked initially (Cited, One Visible and Cited, Both Visible). The results of our analysis are presented in Table TABREF13. We find that believability in the Cited, One Visible condition correlates well with the Cited, Both Visible condition. 
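The bootstrap significance test mentioned in the evaluation above (1000 samples in each of 100 iterations) can be sketched generically as follows; the per-example score inputs and the win-rate summary are assumptions about the protocol rather than the authors' exact procedure.

```python
# Generic paired bootstrap comparison of two systems on per-example metric
# scores (e.g., sentence-level BLEU). The sample/iteration counts mirror the
# numbers quoted above; everything else is an assumption.
import random

def paired_bootstrap(scores_a, scores_b, n_iterations=100, n_samples=1000, seed=0):
    """Return the fraction of bootstrap iterations in which system A beats system B."""
    assert len(scores_a) == len(scores_b)
    rng = random.Random(seed)
    indices = range(len(scores_a))
    wins = 0
    for _ in range(n_iterations):
        sample = [rng.choice(indices) for _ in range(n_samples)]
        mean_a = sum(scores_a[i] for i in sample) / n_samples
        mean_b = sum(scores_b[i] for i in sample) / n_samples
        if mean_a > mean_b:
            wins += 1
    return wins / n_iterations

# A win rate close to 1.0 suggests the improvement of A over B is unlikely to
# be an artifact of the particular test sample.
```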
In the Source conditions, we see a greater difference in believability between One Visible and Both Visible. These findings make sense: in-line citations often summarize a prior study rather than highlight the paper's own contributions. Together, these results indicate that the believability of citing sentences is more related to the cited document than to the source. Another interesting feature of this analysis is the difference between SciGPT2 and IR in terms of context-dependent citing sentences. We observe fewer such judgements in the IR outputs. This is probably due to the fact that neural text generation systems such as SciGPT2 will sometimes produce generic, uninformative outputs while the IR system outputs are usually specific enough that a stronger believability judgement can be made. We also observe a higher overall incidence of not believable judgements of the IR model outputs. This implies that automatic metrics such as BLEU, where the IR system scored higher than SciGPT2, do not correlate with factual accuracy in citation text generation. Example citations and annotations are shown in Table TABREF15. We find that in the cases where the model-generated outputs are unconvincing, they are still on topic. All 10 cases in the Source, One Visible and 4 of the cases in Cited, One Visible that were no longer believable in the Both Visible conditions exhibit this quality. A common example (4 cases) of this phenomenon occurs when the model output references a dataset. While the dataset would be potentially relevant to both papers, the cited papers focus on modeling contributions and do not introduce a novel corpus. <<</Errors>>> <<<Examples>>> Example system outputs for randomly selected validation instances are shown in Table TABREF18. We see that both the SciGPT2 and IR model outputs regularly hit on the correct broad topic of the cited text (such as “literary analysis” or “image captioning evaluation metrics”). It is notable that the SciGPT2 model outputs syntactically correct and coherent citation sentences, even given the difficulty of the vocabulary in this domain. This is a testament to the power of the domain-specific language model training. We also observe that the outputs of the SciGPT2 model are often shorter than the desired citing sentence. Brevity is a known issue for neural text generation and may be alleviated by penalizing brevity in the inference procedure. More problematic are the factual errors in the generated text. In the last example, for instance, we see that SciGPT2 fails to cite the specific image captioning dataset described in the cited paper (Pascal1K) and instead focuses on the more general evaluation metric for the image captioning task (CIDEr). This is typical of neural text generation systems, which often assign high probability to generic or frequent phrases and revert to these in the face of uncertainty. <<</Examples>>> <<<Future Work>>> The fluency and topical relevance of the baseline models show the plausibility of the citation text generation task as well as the utility of including pretrained scientific language models in future models. But based on the kinds of errors we have seen, future work should focus on two complementary goals: ensuring the factual accuracy of the generated text and improving the modeling of the cited document. Factual accuracy is difficult to enforce in statistical text generation systems, especially where inference includes sampling procedures. Grounding to knowledge bases could help. 
For this task, knowledge extracted from candidate generations could be compared with knowledge from the full source and cited documents to prune false or irrelevant statements. Further, modeling input documents as knowledge graphs of their contents may help these algorithms better understand the cited document, resulting in better outputs. However, such a model will have to address the open problem of combining pretrained language models with graph encoding techniques. <<</Future Work>>> <<</Analysis>>> <<<Related Work>>> The current work builds on recent research in scientific document understanding, including citation recommendation and categorization, as well as scientific document summarization. Citation recommendation, or the task of selecting works related to a source document which would be suitable for citing, is a longstanding goal of AI research BIBREF15, BIBREF2, BIBREF16. Recently, researchers have sought to categorize citations using various ontologies of citation intents. BIBREF1 sought to discern “highly influential” citations from others. BIBREF17 uses six categories including “motivation”, “uses”, and “future work” among others. BIBREF3 condense this ontology to just three: “background”, “method”, and “result comparison”. We view the citation text generation task as an extension of these classification approaches with distinct advantages. While classification requires an extant citation link, our generation task can describe possible relationships between works which do not cite each other, such as contemporaneous works. Additionally, because gold citation texts are readily available in scientific documents, the citation text generation task requires no task-specific annotated training data. In practice, citation classification is used to assist in suggesting relevant works to researchers; citation text generation complements this goal by providing rationales for the recommendation and furthering progress toward explainable AI. Generating a citation is also connected to summarizing scientific documents. There is a long history of research on summarizing scientific documents BIBREF18, BIBREF19. More recently, researchers have included citing sentences as part of the input for summarization, hoping to capture the contribution of a work along with its content BIBREF20, BIBREF21, BIBREF5. Ours is the first work to focus on the specific relationship between two documents when generating such sentences. Because of the emphasis on relational document understanding in our task, citation generation models can be used to assist with drafting papers as well, reducing researcher workload and providing non-native writers with a helpful first draft. Our work builds on recent advances in transfer learning in NLP. In particular, large pretrained models such as BERT BIBREF22 and GPT2 BIBREF6 have made strong advances on a number of tasks BIBREF23. It has also been shown that pretraining these models on domain-specific data further improves results on domain-specific tasks BIBREF11, BIBREF24. In this work, we apply that methodology by adding an additional pretraining phase on in-domain data before fine-tuning a GPT2 model on the citation text generation task. <<</Related Work>>> <<<Conclusion>>> We have introduced the challenging but useful task of citation text generation. This task requires reasoning about the relationships between documents and expressing these relationships in natural language text. 
We have established a dataset for this task and studied the performance of contemporary neural text generation and information retrieval models. Our analysis shows that while these models produce fluent and topical outputs, more research is needed to ensure factual accuracy and specificity in the generated text. <<</Conclusion>>> <<</Title>>>
{ "references": [ "8.1 million scientific documents,154K computer science articles,622K citing sentences" ], "type": "extractive" }
2004.04228
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What models are evaluated with QAGS? Context: <<<Title>>> Asking and Answering Questions to Evaluate the Factual Consistency of Summaries <<<Abstract>>> Practical applications of abstractive summarization models are limited by frequent factual inconsistencies with respect to their input. Existing automatic evaluation metrics for summarization are largely insensitive to such errors. We propose an automatic evaluation protocol called QAGS (pronounced"kags") that is designed to identify factual inconsistencies in a generated summary. QAGS is based on the intuition that if we ask questions about a summary and its source, we will receive similar answers if the summary is factually consistent with the source. To evaluate QAGS, we collect human judgments of factual consistency on model-generated summaries for the CNN/DailyMail (Hermann et al., 2015) and XSUM (Narayan et al., 2018) summarization datasets. QAGS has substantially higher correlations with these judgments than other automatic evaluation metrics. Also, QAGS offers a natural form of interpretability: The answers and questions generated while computing QAGS indicate which tokens of a summary are inconsistent and why. We believe QAGS is a promising tool in automatically generating usable and factually consistent text. <<</Abstract>>> <<<Introduction>>> Automatic summarization aims to produce summaries that are succinct, coherent, relevant, and — crucially — factually correct. Recent progress in conditional text generation has led to models that can generate fluent, topical summaries BIBREF2. However, model-generated summaries frequently contain factual inconsistencies, limiting their applicability BIBREF3. The problem of factual inconsistency is due in part to the lack of automatic evaluation metrics that can detect such errors. Standard metrics for evaluating generated text are predominantly based on counting $n$-grams, which weigh all $n$-grams equally and are insensitive to semantic errors. This inadequacy leaves human evaluation as the primary method for evaluating the factual consistencies, which has been noted to be challenging even for humans BIBREF4, BIBREF5, in addition to being slow and costly. We argue that evaluation metrics that are able to capture subtle semantic errors are required to build better models. In this work, we introduce a general framework for evaluating conditional text generation that is designed to detect factual inconsistencies in generated text with respect to some input. Our framework consists of three steps: (1) Given a generated text, a question generation (QG) model generates a set of questions about the text. (2) We then use question answering (QA) models to answer these questions given both the input and the generated text. (3) A quality score is computed based on the similarity of corresponding answers. This approach leverages recent progress in QA and QG to ask and answer human readable, on-topic questions BIBREF6, BIBREF7. It only assumes access to a question answering dataset to train the QG and QA models, and is applicable to any modality where a QA model is available, e.g. text, images, or knowledge graphs. We use this framework to develop QAGS (Question Answering and Generation for Summarization), a metric for evaluating the factual consistency of abstractive document summaries. 
Compared to commonly used automatic metrics such as ROUGE BIBREF8, QAGS shows dramatically higher correlations with human judgements of factuality, for example achieving a Pearson correlation coefficient of 54.52 on the CNN/DailyMail summarization task, compared to 17.72 for ROUGE-2. QAGS also achieves new state-of-the-art results on evaluating the factuality of summaries, outperforming recently proposed NLI models for this task BIBREF5. Finally, we analyse the robustness of QAGS through an ablation study. QAGS shows robustness to the quality of the underlying QG and QA models, the domain of the models, and the number of questions asked. Even under the worst ablation settings, QAGS still has stronger correlation with human judgments than other automatic metrics. Overall, we contribute the following: (1) We introduce QAGS, an automatic model-based evaluation metric for measuring the factual consistency of model-generated text. (2) We collect a new set of human judgments of factual consistency of model-generated summaries for two summarization datasets. We demonstrate that QAGS correlates with these judgments significantly better than other automatic metrics. (3) We show via ablations that QAGS is robust to a number of factors including underlying model quality and domain mismatch. (4) We analyze the questions and answers produced in computing QAGS to illustrate which parts of summaries are inconsistent. (5) We will release models and code to compute QAGS. <<</Introduction>>> <<<Background: Automatically Evaluating Machine Generated Text>>> Standard approaches to evaluating generated text are primarily based on counting $n$-gram overlap. These methods assume access to one or more reference texts, and score a generated summary based on the precision and recall of all reference $n$-grams in the generated summary. We briefly describe the most common metrics in this family, and refer readers to BIBREF9 for further discussion. ROUGE BIBREF8 was developed specifically for evaluating automatic summarization, and its variants are the de facto standard for such. The most common variant is ROUGE-$n$ (typically $n \in \lbrace 1, 2\rbrace $), which computes the F1 score for all reference $n$-grams in the generated summary. ROUGE-$L$, another commonly used variant, is the length of the longest common subsequence (possibly non-consecutive) between a summary and references. BLEU BIBREF10 is closely related to ROUGE but was developed for machine translation. BLEU computes the precision of the reference $n$-grams in the generated summary. METEOR BIBREF11 extends BLEU by using an alignment between the generated text and a reference, as well as using stemming and synonym replacement for more flexible $n$-gram matching. We identify two key deficiencies when using these $n$-gram based evaluation metrics to detect factual inconsistencies in generated text. First, these metrics require one or more reference texts to compare against. Obtaining references can be expensive and challenging, and as such many text generation datasets contain only a single reference. This problem is exacerbated with high-entropy generation tasks, such as summarization or dialogue, where there is a very large number of acceptable outputs. In these settings, comparing against a single reference is woefully inadequate. Second, given a reference to compare against, $n$-gram based approach weigh all portions of the text equally, even when only a small fraction of the $n$-grams carry most of the semantic content. 
Factual inconsistencies caused by minor changes may be drowned out by otherwise high $n$-gram overlap, making these metrics insensitive to these errors. For example, the sentences “I am writing my paper in Vancouver.” and “I am not writing my paper in Vancouver.” share nearly all unigrams and bigrams despite having the opposite meaning. <<</Background: Automatically Evaluating Machine Generated Text>>> <<<A Framework for Automatically Evaluating Factual Consistency>>> We introduce a framework for automatically detecting factual inconsistencies in generated text while also addressing the deficiencies of current approaches. Let $X$ and $Y$ be sequences of tokens coming from a vocabulary $V$ where $X$ is a source text and $Y$ is a summary of $X$. We define $p(Q|Y)$ as a distribution over all possible questions $Q$ given summary $Y$, and $p(A|Q, X)$ and $p(A|Q, Y)$ as distributions over all possible answers $A$ to a particular question $Q$ given either the source $X$ or the summary $Y$. We constrain the questions $Q$ and answers $A$ to also be sequences of tokens from $V$. Then the factual consistency of the summary $Y$ is $\mathbb {E}_{Q \sim p(Q|Y)}\big [D\big (p(A|Q, X), p(A|Q, Y)\big )\big ]$, where $D$ is some function measuring the similarity of the two answer distributions. This expression is maximized when $Y$ contains a subset of the information in $X$ such that it produces the same answer for any question from $p(Q|Y)$. This happens trivially when $Y=X$, e.g. we take $X$ as its own summary, but we usually have other desiderata of $Y$ such that this solution is undesirable. This framework addresses the two issues with $n$-gram based approaches. Instead of requiring a reference to compare against, our framework asks questions based on the generation itself, and compares answers with the provided source text. Also, the use of questions focuses the metric on the semantically relevant parts of the generated text, rather than weighting all parts of the text equally. In practice, exactly computing the expectation in Equation DISPLAY_FORM4 is intractable due to the large space of possible questions. One potential workaround is to randomly sample questions from $p(Q|Y)$, but this suffers from high variance and requires many samples to obtain a good estimate. Instead, we focus on producing highly probable questions, e.g. as produced by beam search, which may be biased in the limit, but will require fewer questions to estimate because of the higher quality of the questions. <<</A Framework for Automatically Evaluating Factual Consistency>>> <<<QAGS>>> Using this framework requires specifying the question distribution $p(Q|Y)$, the answer distribution $p(A|Q, Y)$ (or $X$), and the answer similarity function $D$. We apply this framework to summarization to develop QAGS and describe our instantiations of these components. <<<Question Generation>>> To instantiate $p(Q|Y)$, we draw on recent work on automatic question generation (QG), which models this distribution using neural seq2seq models BIBREF12, BIBREF13. We over-sample questions, and then filter out low-quality questions as follows. First, we train and generate from answer-conditional QG models: The model receives both the answer and the source article, and is trained to maximize the likelihood of the paired question. At test time, we extract named entities and noun phrases as answer candidates using spaCy. Second, we filter out low-quality questions using a number of heuristics, such as duplicates and questions less than three tokens long. 
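For illustration, here is a minimal sketch of the answer-candidate extraction and question filtering just described. The spaCy pipeline name and the deduplication details are assumptions; the answer-conditional QG model itself is not shown.

```python
# Sketch of answer-candidate extraction and question filtering as described
# above. Assumption: the "en_core_web_sm" spaCy pipeline (the paper does not
# name a specific pipeline).
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_answer_candidates(summary: str) -> list:
    """Named entities and noun phrases from the summary serve as answer candidates."""
    doc = nlp(summary)
    candidates = [ent.text for ent in doc.ents]
    candidates += [chunk.text for chunk in doc.noun_chunks]
    # Deduplicate while preserving order.
    seen, unique = set(), []
    for c in candidates:
        if c.lower() not in seen:
            seen.add(c.lower())
            unique.append(c)
    return unique

def filter_questions(questions: list) -> list:
    """Drop duplicate questions and questions shorter than three tokens."""
    seen, kept = set(), []
    for q in questions:
        if len(q.split()) < 3:
            continue
        if q.lower() in seen:
            continue
        seen.add(q.lower())
        kept.append(q)
    return kept
```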
We also found it useful to run the QA model (see next section) on all of the candidate questions, and filter out questions for which the QA model predicted no answer. <<</Question Generation>>> <<<Question Answering>>> We instantiate the answer distributions $p(A|Q,*)$ as extractive QA models, for simplicity. We use extractive QA because we assume the facts are represented as text spans in the article and summary. Future work should explore using abstractive QA models, which could match paraphrases of the same answer. <<</Question Answering>>> <<<Answer Similarity>>> We use token-level F1 to compare answers, which is standard for extractive QA and equivalent to defining $D$ as the token-level F1 score between the two answers, i.e. the harmonic mean of token precision and recall. <<</Answer Similarity>>> <<<The QAGS Score>>> Given these components, we obtain the QAGS score of a generation by (1) generating $K$ questions conditioned on the summary, (2) answering the questions using both the source article and the summary to get two sets of answers, (3) comparing corresponding answers using the answer similarity metric, and (4) averaging the answer similarity metric over all questions. We depict this process in Figure FIGREF3. <<</The QAGS Score>>> <<</QAGS>>> <<<Experiments>>> <<<Human Evaluation>>> We test whether QAGS accurately measures the factual consistency of a summary with respect to a source article by computing correlations with human judgments of factual consistency. <<<Datasets>>> We evaluate on two abstractive summarization datasets, CNN/Daily Mail BIBREF0, BIBREF14 and XSUM BIBREF1. Abstractive summarization is particularly interesting because factual consistency with the original text is crucial to usability, and a lack of such consistency has plagued abstractive neural summarization models BIBREF15, BIBREF16, BIBREF5. CNN/DM is a standard dataset for summarization that consists of CNN and DailyMail articles. Each reference summary consists of the concatenation of three editor-written, bullet point highlights. For summaries, we use 235 test outputs from BIBREF17. XSUM was created by taking the first sentence of a news article as the summary, and using the rest of the article as the source. Consequently, XSUM summaries are significantly more abstractive than those of CNN/DM, and extractive summarization models perform poorly on this dataset. We found that while the XSUM summaries are more abstractive, frequently there are facts (e.g. first names) in the summary that are not available in the “article”. This quirk made it especially difficult for humans and QAGS to tell when factual errors were being made by the summarization model. To remedy this, for human evaluation and QAGS, we prepend the summary back to the “article”. We use a subset of 239 test outputs from BART fine-tuned on XSUM BIBREF2. <<</Datasets>>> <<<Annotation Protocol>>> We collect human judgments on Amazon Mechanical Turk via ParlAI BIBREF18. We present summaries one sentence at a time, along with the entire article. For each summary sentence, the annotator makes a binary decision as to whether the sentence is factually consistent with the article. Workers are instructed to mark non-grammatical sentences as not consistent, and copies of article sentences as consistent. Workers are paid $1 per full summary annotated. See Appendix SECREF10 for further details. We collect 3 annotations per summary. To obtain a single “correctness” score per summary, we first take the majority vote for each sentence, then average the binary scores across summary sentences. 
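To make steps (1)–(4) of the QAGS computation and the token-level F1 comparison concrete, here is a minimal sketch. The `gen_questions` and `answer` callables are hypothetical stand-ins for the QG and QA models, and whitespace tokenization with K=20 questions is an assumed simplification rather than the authors' exact preprocessing.

```python
# Sketch of the QAGS scoring loop (steps 1-4 above). `gen_questions` and
# `answer` are hypothetical callables standing in for the QG and QA models;
# tokenization here is plain whitespace splitting, an assumption.
from collections import Counter

def token_f1(pred: str, gold: str) -> float:
    """Token-level F1 between two answer strings (standard extractive-QA style)."""
    pred_toks, gold_toks = pred.lower().split(), gold.lower().split()
    if not pred_toks or not gold_toks:
        return float(pred_toks == gold_toks)
    overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

def qags_score(article: str, summary: str, gen_questions, answer, k: int = 20) -> float:
    """(1) generate K questions from the summary, (2) answer them against both
    texts, (3) compare answers with token F1, (4) average over questions."""
    questions = gen_questions(summary, k)
    if not questions:
        return 0.0
    similarities = []
    for q in questions:
        a_article = answer(q, article)
        a_summary = answer(q, summary)
        similarities.append(token_f1(a_article, a_summary))
    return sum(similarities) / len(similarities)
```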
Inter-annotator agreement as measured by Krippendorff's $\alpha $ is 0.51 and 0.34 for CNN/DM and XSUM, respectively indicating “moderate” and “fair” agreement BIBREF19. While not perfect, these agreement numbers are in-line with similar figures from previous work on summarization evaluation BIBREF4. <<</Annotation Protocol>>> <<</Human Evaluation>>> <<<Experimental Details>>> <<<Baselines>>> We compare against a number of automatic evaluation metrics: ROUGE BIBREF8, METEOR BIBREF11, BLEU BIBREF10, and BERTScore BIBREF24. The latter uses BERT representations to compute an alignment between generation and reference tokens, and which is then used to compute a soft version of unigram F1. We use the large-uncased BERT variant. <<</Baselines>>> <<</Experimental Details>>> <<<Results>>> We present results in Table . QAGS strongly outperforms other automatic evaluation metrics in terms of correlation with human judgments of factual consistency. BLEU and ROUGE perform comparably, and lower order $n$-gram metrics work better. BERTScore matches the best $n$-gram metrics on CNN/DM, but the worst overall on XSUM. On CNN/DM, QAGS obtains nearly twice the correlation of the next best automatic metric (BLEU-1). We speculate that this large increase is due to the sensitivity of the QA model to the sentence fusing behavior exhibited in many summarization models trained on CNN/DM BIBREF25. When two sentences are fused to produce an incorrect summary statement, the QA model produces different answers than when using the source article versus when using the summary. On XSUM, all metrics correlate worse with human judgments than on CNN/DM, which reflects the fact that XSUM is more abstractive. QAGS still outperforms the next best automatic metric. <<</Results>>> <<<Ablations>>> A potential issue with model-based evaluation is that the quality of the evaluation metric may depend heavily on specific hyperparameter settings. We explore whether this is true with QAGS by performing ablations on several factors. <<<Model Quality>>> We first consider the degree to which the quality of the underlying models impacts their evaluation capabilities. For QA quality, we answer this question by training QA models of varying quality by fine-tuning different versions of BERT on SQuAD. We present results in Table . The QA models perform similarly despite substantially different performances on the SQuAD development set. Surprisingly, using the best QA model (bert-large-wwm) does not lead to the best correlations with human judgments. On CNN/DM, bert-large-wwm slightly underperforms bert-base and bert-large. On XSUM, bert-base slightly outperforms the other two BERT variants. These results indicate that QAGS is fairly robust to the quality of the underlying QA model, though we note that BERT is a strong QA baseline, and using weaker QA models might lead to larger performance dropoffs. To ablate QG quality, we use models with increasing perplexity on the NewsQA development set. Results in Table show that QAGS is robust to the QG model quality, with some decrease in correlation with human judgments as perplexity increases on CNN/DM, and no clear trend on XSUM. Even the weakest QG model still significantly outperforms all other automatic metrics in Table . <<</Model Quality>>> <<<Domain Effects>>> Our approach relies on having a labeled dataset to train QG and QA models. However, for relatively niche domains, such a labeled QA/QG dataset may not exist. 
Instead, we may need to resort to using models trained on out-of-domain data, leading to domain shift effects that negatively impact the quality of the QAGS scores. We simulate this setting by fine-tuning the QG model on SQuAD, which is of similar size to NewsQA but drawn from Wikipedia articles rather than CNN articles, which exactly matches the genre of the summarization datasets. Evaluating with this QG model, we get correlations of 51.53 and 15.28 with human judgments on CNN/DM and XSUM respectively, versus 54.53 and 17.49 when using the NewsQA-tuned QG model. The drop in performance indicates a negative domain shift effect. However using the SQuAD-tuned QG model still substantially outperforms all other automatic metrics, again pointing to the robustness of QAGS. <<</Domain Effects>>> <<<Number of Questions>>> Next, we investigate the correlation with human judgments when varying the number of questions used. Results in Table show that increasing the number of questions used improves correlations with human judgments. We observe a large increase when moving from 10 to 20 questions, and a smaller increase from 20 to 50 questions, indicating decreasing marginal benefit moving beyond 50 questions. With just 5 questions, QAGS still substantially outperforms other automatic metrics, indicating its robustness. <<</Number of Questions>>> <<<Answer Similarity Metric>>> Finally, we consider using exact match as an alternative answer similarity metric. Exact match is another common evaluation metric for extractive QA, and is more restrictive than F1. When using EM, we obtain Pearson correlations with human judgments of 45.97 and 18.10 on CNN/DM and XSUM, as opposed to 54.53 and 17.49 when using F1. <<</Answer Similarity Metric>>> <<</Ablations>>> <<</Experiments>>> <<<Re-ranking with QAGS>>> Several works explore the use of natural language inference (NLI) models to detect factual consistency in generated text BIBREF26, BIBREF16. We compare against these methods by evaluating on the sentence ranking experiment from BIBREF16. The experiment uses 373 triplets of source sentences from CNN/DM and two summary sentences generated from the model from BIBREF27. One summary sentence is factually consistent with the source sentence, and the other is inconsistent. A metric (or model) is evaluated based on how often it ranks the consistent sentence higher than the inconsistent sentence. We present the results in Table . Results using two NLI models fine-tuned on MultiNLI BIBREF28, BERT NLI and ESIM BIBREF29, are from BIBREF16. FactCC BIBREF5 is an NLI-based fact-checking model that is trained on a dataset tailor made for detecting factual inconsistencies in generated text. QAGS outperforms these methods, while requiring no special supervision for this task. <<</Re-ranking with QAGS>>> <<<Qualitative Analysis>>> <<<Interpreting QAGS>>> The questions and answers produced in computing QAGS are directly interpretable, and highlight errors in summaries. We present examples of articles, summaries, and the QAGS questions and answers in Table . On the first example (Table , top), QAGS detects several factual inconsistencies in the generated summary: The summary mistakes the first name of the attacker, the location of the attack, and the weapons used. Because the QG model focuses on these details, QAGS is able to correctly penalize the summary for its hallucinations. Because the answer candidates used are mostly named entities and noun phrases, QAGS is particularly effective at detecting errors of this kind. 
Using more diverse answer candidates may broaden the set of inconsistencies that QAGS is able to detect. The second example (Table , bottom), illustrates failure modes of QAGS. For example, the QA model incorrectly marks question 2 as unanswerable. On question 4, both answers produced are correct, but because they have no common tokens, they are marked inconsistent by QAGS. <<</Interpreting QAGS>>> <<<Error Analysis>>> The interpretability of QAGS allows for error analysis on the metric. We manually annotate 400 triplets of generated questions, article answers, and summary answers that are produced in computing QAGS on the XSUM summaries, and label them by the quality of the generated questions, predicted answers, and answer similarity scores. Among the generated questions, 8.75% are nonsensical, while 3.00% are well-formed but unanswerable using the generated summary they were conditioned upon. These figures indicate that the vast majority of questions are understandable and on-topic. We frequently observe multiple questions with slightly different wordings, which is likely due to the low number of answer candidates in XSUM summaries (which are one sentence long) and due to beam search. 8.25% of questions are well-formed but unanswerable using the source, which is usually due to a hallucinated fact in the summary that the QG model turns into a question. Among predicted answers, 1.75% of questions are potentially answerable using the summary, but are incorrectly answered. This percentage increases to 32.50% for the article, which indicates that the transfer ability of the QA model is lacking. In a small number of cases, we found that while a question had a single answer in the summary, it could have multiple answers in the article. Finally, for 8.00% of the examples, the question is answered correctly using both the article and summary, but the answers have high lexical variation such that F1 score fails to detect their similarity. While this happens in a relatively small number of cases, exploring similarity metrics other than $n$-gram based approaches could be useful. <<</Error Analysis>>> <<<Limitations>>> We emphasize that QAGS and our overall framework are specifically designed to detect factual inconsistencies in generated summaries relative to the source article. QAGS does not measure other desirable properties of generated text, including fluency, readability, or factual recall. We therefore recommend using QAGS in conjunction with complementary evaluation metrics. The choices of QG and QA models in QAGS are particular to abstractive summarization and may require adaptation to be used for other conditional text generation tasks. For example, we expect that extractive summarization models may obtain nearly perfect QAGS scores because facts and statements are directly copied from the source article. <<</Limitations>>> <<</Qualitative Analysis>>> <<<Related Work>>> Automatic summarization and its evaluation are long-standing lines of work in NLP, dating at least as far back as the Document Understanding Conferences BIBREF30. The primary evaluation metric then and now is ROUGE BIBREF8, though much work has demonstrated the limited ability of ROUGE and its relatives to evaluate summaries BIBREF31, BIBREF32, BIBREF33. Other metrics have focused on specific aspects of summarization quality, including content selection BIBREF34, relevance prediction BIBREF4, and many more. There has been a recent resurgence of work leveraging NLU models for evaluating the factuality of generated text. 
BIBREF35 use information extraction models to measure factual overlap, but facts are restricted to pre-defined schemas. BIBREF16 investigate the use of NLI models to evaluate the factual correctness of CNN/DM summaries, and conclude that current NLI models are too brittle to be reliably used in this manner. BIBREF5 train a NLI-based fact-checking model by building a dataset of factual inconsistencies based on noise heuristic. Our QA approach allows a finer-grained analysis, because NLI operates on complete sentences, whereas QAGS can ask many questions about the same sentence. Most relatedly, BIBREF36 and BIBREF37 use QA models to evaluate summarization. We diverge from these works in two important ways. First, both works use Cloze-style questions, which are generated by masking entities in either the source document or the reference summary. We instead generate the questions with a model, allowing a much greater range of questions. Second, we produce questions conditioned on the generated summary, rather than the reference summary or source article. Producing questions from the generated summary is more appropriate for verifying the accuracy of the text, whereas using the reference or source measures content selection. <<</Related Work>>> <<<Conclusion>>> We introduce a framework for automatically detecting factual inconsistencies in conditionally generated texts and use this framework to develop QAGS, a metric for measuring inconsistencies in abstractive summarization. QAGS correlates with human judgments of factuality significantly better than standard automatic evaluation metrics for summarization, and outperforms related NLI-based approaches to factual consistency checking. QAGS is naturally interpretable: The questions and answers produced in computing QAGS indicate which tokens in a generated summary are inconsistent and why. Error analysis shows that future work should explore improved QA models. Our approach can also be applied to diverse modalities, such as translation and image captioning. Overall, we believe QAGS is useful in quantifying and incentivizing factually consistent text generation. <<</Conclusion>>> <<</Title>>>
{ "references": [ "bert-large-wwm,bert-base,bert-large" ], "type": "extractive" }
2004.04228
Please answer the following question with yes or no based on the given text. You only need to output 'Yes' or 'No' without any additional explanation. Question: Do they use crowdsourcing to collect human judgements? Context: <<<Title>>> Asking and Answering Questions to Evaluate the Factual Consistency of Summaries <<<Abstract>>> Practical applications of abstractive summarization models are limited by frequent factual inconsistencies with respect to their input. Existing automatic evaluation metrics for summarization are largely insensitive to such errors. We propose an automatic evaluation protocol called QAGS (pronounced"kags") that is designed to identify factual inconsistencies in a generated summary. QAGS is based on the intuition that if we ask questions about a summary and its source, we will receive similar answers if the summary is factually consistent with the source. To evaluate QAGS, we collect human judgments of factual consistency on model-generated summaries for the CNN/DailyMail (Hermann et al., 2015) and XSUM (Narayan et al., 2018) summarization datasets. QAGS has substantially higher correlations with these judgments than other automatic evaluation metrics. Also, QAGS offers a natural form of interpretability: The answers and questions generated while computing QAGS indicate which tokens of a summary are inconsistent and why. We believe QAGS is a promising tool in automatically generating usable and factually consistent text. <<</Abstract>>> <<<Introduction>>> Automatic summarization aims to produce summaries that are succinct, coherent, relevant, and — crucially — factually correct. Recent progress in conditional text generation has led to models that can generate fluent, topical summaries BIBREF2. However, model-generated summaries frequently contain factual inconsistencies, limiting their applicability BIBREF3. The problem of factual inconsistency is due in part to the lack of automatic evaluation metrics that can detect such errors. Standard metrics for evaluating generated text are predominantly based on counting $n$-grams, which weigh all $n$-grams equally and are insensitive to semantic errors. This inadequacy leaves human evaluation as the primary method for evaluating the factual consistencies, which has been noted to be challenging even for humans BIBREF4, BIBREF5, in addition to being slow and costly. We argue that evaluation metrics that are able to capture subtle semantic errors are required to build better models. In this work, we introduce a general framework for evaluating conditional text generation that is designed to detect factual inconsistencies in generated text with respect to some input. Our framework consists of three steps: (1) Given a generated text, a question generation (QG) model generates a set of questions about the text. (2) We then use question answering (QA) models to answer these questions given both the input and the generated text. (3) A quality score is computed based on the similarity of corresponding answers. This approach leverages recent progress in QA and QG to ask and answer human readable, on-topic questions BIBREF6, BIBREF7. It only assumes access to a question answering dataset to train the QG and QA models, and is applicable to any modality where a QA model is available, e.g. text, images, or knowledge graphs. We use this framework to develop QAGS (Question Answering and Generation for Summarization), a metric for evaluating the factual consistency of abstractive document summaries. 
Compared to commonly used automatic metrics such as ROUGE BIBREF8, QAGS shows dramatically higher correlations with human judgements of factuality, for example achieving a Pearson correlation coefficient of 54.52 on the CNN/DailyMail summarization task, compared to 17.72 for ROUGE-2. QAGS also achieves new state-of-the-art results on evaluating the factuality of summaries, outperforming recently proposed NLI models for this task BIBREF5. Finally, we analyse the robustness of QAGS through an ablation study. QAGS shows robustness to the quality of the underlying QG and QA models, the domain of the models, and the number of questions asked. Even under the worst ablation settings, QAGS still has stronger correlation with human judgments than other automatic metrics. Overall, we contribute the following: (1) We introduce QAGS, an automatic model-based evaluation metric for measuring the factual consistency of model-generated text. (2) We collect a new set of human judgments of factual consistency of model-generated summaries for two summarization datasets. We demonstrate that QAGS correlates with these judgments significantly better than other automatic metrics. (3) We show via ablations that QAGS is robust to a number of factors including underlying model quality and domain mismatch. (4) We analyze the questions and answers produced in computing QAGS to illustrate which parts of summaries are inconsistent. (5) We will release models and code to compute QAGS. <<</Introduction>>> <<<Background: Automatically Evaluating Machine Generated Text>>> Standard approaches to evaluating generated text are primarily based on counting $n$-gram overlap. These methods assume access to one or more reference texts, and score a generated summary based on the precision and recall of all reference $n$-grams in the generated summary. We briefly describe the most common metrics in this family, and refer readers to BIBREF9 for further discussion. ROUGE BIBREF8 was developed specifically for evaluating automatic summarization, and its variants are the de facto standard for such. The most common variant is ROUGE-$n$ (typically $n \in \lbrace 1, 2\rbrace $), which computes the F1 score for all reference $n$-grams in the generated summary. ROUGE-$L$, another commonly used variant, is the length of the longest common subsequence (possibly non-consecutive) between a summary and references. BLEU BIBREF10 is closely related to ROUGE but was developed for machine translation. BLEU computes the precision of the reference $n$-grams in the generated summary. METEOR BIBREF11 extends BLEU by using an alignment between the generated text and a reference, as well as using stemming and synonym replacement for more flexible $n$-gram matching. We identify two key deficiencies when using these $n$-gram based evaluation metrics to detect factual inconsistencies in generated text. First, these metrics require one or more reference texts to compare against. Obtaining references can be expensive and challenging, and as such many text generation datasets contain only a single reference. This problem is exacerbated with high-entropy generation tasks, such as summarization or dialogue, where there is a very large number of acceptable outputs. In these settings, comparing against a single reference is woefully inadequate. Second, given a reference to compare against, $n$-gram based approach weigh all portions of the text equally, even when only a small fraction of the $n$-grams carry most of the semantic content. 
Factual inconsistencies caused by minor changes may be drowned out by otherwise high $n$-gram overlap, making these metrics insensitive to these errors. For example, the sentences “I am writing my paper in Vancouver.” and “I am not writing my paper in Vancouver.” share nearly all unigrams and bigrams despite having the opposite meaning. <<</Background: Automatically Evaluating Machine Generated Text>>> <<<A Framework for Automatically Evaluating Factual Consistency>>> We introduce a framework for automatically detecting factual inconsistencies in generated text while also addressing the deficiencies of current approaches. Let $X$ and $Y$ be sequences of tokens coming from a vocabulary $V$ where $X$ is a source text and $Y$ is a summary of $X$. We define $p(Q|Y)$ as a distribution over all possible questions $Q$ given summary $Y$, and $p(A|Q, X)$ and $p(A|Q, Y)$ as distributions over all possible answers $A$ to a particular question $Q$ given either the source $X$ or the summary $Y$. We constrain the questions $Q$ and answers $A$ to also be sequences of tokens from $V$. Then the factual consistency of the summary $Y$ is $\mathbb {E}_{Q \sim p(Q|Y)}\big [D\big (p(A|Q, X), p(A|Q, Y)\big )\big ]$, where $D$ is some function measuring the similarity of the two answer distributions. This expression is maximized when $Y$ contains a subset of the information in $X$ such that it produces the same answer for any question from $p(Q|Y)$. This happens trivially when $Y=X$, e.g. we take $X$ as its own summary, but we usually have other desiderata of $Y$ such that this solution is undesirable. This framework addresses the two issues with $n$-gram based approaches. Instead of requiring a reference to compare against, our framework asks questions based on the generation itself, and compares answers with the provided source text. Also, the use of questions focuses the metric on the semantically relevant parts of the generated text, rather than weighting all parts of the text equally. In practice, exactly computing the expectation in Equation DISPLAY_FORM4 is intractable due to the large space of possible questions. One potential workaround is to randomly sample questions from $p(Q|Y)$, but this suffers from high variance and requires many samples to obtain a good estimate. Instead, we focus on producing highly probable questions, e.g. as produced by beam search, which may be biased in the limit, but will require fewer questions to estimate because of the higher quality of the questions. <<</A Framework for Automatically Evaluating Factual Consistency>>> <<<QAGS>>> Using this framework requires specifying the question distribution $p(Q|Y)$, the answer distribution $p(A|Q, Y)$ (or $X$), and the answer similarity function $D$. We apply this framework to summarization to develop QAGS and describe our instantiations of these components. <<<Question Generation>>> To instantiate $p(Q|Y)$, we draw on recent work on automatic question generation (QG), which models this distribution using neural seq2seq models BIBREF12, BIBREF13. We over-sample questions, and then filter out low-quality questions as follows. First, we train and generate from answer-conditional QG models: The model receives both the answer and the source article, and is trained to maximize the likelihood of the paired question. At test time, we extract named entities and noun phrases as answer candidates using spaCy. Second, we filter out low-quality questions using a number of heuristics, such as duplicates and questions less than three tokens long. 
We also found it useful to run the QA model (see next section) on all of the candidate questions, and filter out questions for which the QA model predicted no answer. <<</Question Generation>>> <<<Question Answering>>> We instantiate the answer distributions $p(A|Q,*)$ as extractive QA models, for simplicity. We use extractive QA because we assume the facts are represented as text spans in the article and summary. Future work should explore using abstractive QA models, which could match paraphrases of the same answer. <<</Question Answering>>> <<<Answer Similarity>>> We use token-level F1 to compare answers, which is standard for extractive QA and equivalent to defining $D$ as $$D\big (p(A|Q,X),\, p(A|Q,Y)\big ) = \mathrm {F1}\big (\operatorname {arg\,max} p(A|Q,X),\, \operatorname {arg\,max} p(A|Q,Y)\big ),$$ i.e., the token-level F1 between the most likely answer extracted from the source and the most likely answer extracted from the summary. <<</Answer Similarity>>> <<<The QAGS Score>>> Given these components, we obtain the QAGS score of a generation by (1) generating $K$ questions conditioned on the summary, (2) answering the questions using both the source article and the summary to get two sets of answers, (3) comparing corresponding answers using the answer similarity metric, and (4) averaging the answer similarity metric over all questions. We depict this process in Figure FIGREF3. <<</The QAGS Score>>> <<</QAGS>>> <<<Experiments>>> <<<Human Evaluation>>> We test whether QAGS accurately measures the factual consistency of a summary with respect to a source article by computing correlations with human judgments of factual consistency. <<<Datasets>>> We evaluate on two abstractive summarization datasets, CNN/Daily Mail BIBREF0, BIBREF14 and XSUM BIBREF1. Abstractive summarization is particularly interesting because factual consistency with the original text is crucial to usability, and a lack of such consistency has plagued abstractive neural summarization models BIBREF15, BIBREF16, BIBREF5. CNN/DM is a standard dataset for summarization that consists of CNN and DailyMail articles. Each reference summary consists of the concatenation of three editor-written, bullet-point highlights. For summaries, we use 235 test outputs from BIBREF17. XSUM was created by taking the first sentence of a news article as the summary, and using the rest of the article as the source. Consequently, XSUM summaries are significantly more abstractive than those of CNN/DM, and extractive summarization models perform poorly on this dataset. We found that while the XSUM summaries are more abstractive, frequently there are facts (e.g., first names) in the summary that are not available in the “article”. This quirk made it especially difficult for humans and QAGS to tell when factual errors were being made by the summarization model. To remedy this, for human evaluation and QAGS, we prepend the summary back to the “article”. We use a subset of 239 test outputs from BART fine-tuned on XSUM BIBREF2. <<</Datasets>>> <<<Annotation Protocol>>> We collect human judgments on Amazon Mechanical Turk via ParlAI BIBREF18. We present summaries one sentence at a time, along with the entire article. For each summary sentence, the annotator makes a binary decision as to whether the sentence is factually consistent with the article. Workers are instructed to mark non-grammatical sentences as not consistent, and copies of article sentences as consistent. Workers are paid $1 per full summary annotated. See Appendix SECREF10 for further details. We collect 3 annotations per summary. To obtain a single “correctness” score per summary, we first take the majority vote for each sentence, then average the binary scores across summary sentences.
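Returning to the recipe in “The QAGS Score” above, steps (1) through (4) map directly onto a short loop. Below is a minimal sketch assuming the QG and QA components are supplied as callables; `generate_questions` and `answer` are placeholders of our own, not functions from the paper's released code, and the default of 20 questions is likewise an assumption. The token-level F1 here instantiates the answer-similarity function $D$ defined earlier.

```python
from collections import Counter

def token_f1(ans_a: str, ans_b: str) -> float:
    """Token-level F1 between two answer strings (the standard extractive-QA metric)."""
    a, b = ans_a.lower().split(), ans_b.lower().split()
    common = sum((Counter(a) & Counter(b)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(a), common / len(b)
    return 2 * precision * recall / (precision + recall)

def qags_score(source: str, summary: str, generate_questions, answer, k: int = 20) -> float:
    """QAGS: (1) generate K questions from the summary, (2) answer them against both the
    source and the summary, (3) compare answers with token F1, (4) average."""
    questions = generate_questions(summary, k)          # placeholder QG model
    scores = []
    for q in questions:
        ans_src = answer(question=q, context=source)    # placeholder extractive QA model
        ans_sum = answer(question=q, context=summary)
        scores.append(token_f1(ans_src, ans_sum))
    return sum(scores) / len(scores) if scores else 0.0
```

In the paper's setup these placeholders correspond to a seq2seq QG model (e.g., tuned on NewsQA-style data) and a BERT-based extractive QA model fine-tuned on SQuAD, as discussed in the ablation sections below.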
Inter-annotator agreement as measured by Krippendorff's $\alpha $ is 0.51 and 0.34 for CNN/DM and XSUM, respectively indicating “moderate” and “fair” agreement BIBREF19. While not perfect, these agreement numbers are in-line with similar figures from previous work on summarization evaluation BIBREF4. <<</Annotation Protocol>>> <<</Human Evaluation>>> <<<Experimental Details>>> <<<Baselines>>> We compare against a number of automatic evaluation metrics: ROUGE BIBREF8, METEOR BIBREF11, BLEU BIBREF10, and BERTScore BIBREF24. The latter uses BERT representations to compute an alignment between generation and reference tokens, and which is then used to compute a soft version of unigram F1. We use the large-uncased BERT variant. <<</Baselines>>> <<</Experimental Details>>> <<<Results>>> We present results in Table . QAGS strongly outperforms other automatic evaluation metrics in terms of correlation with human judgments of factual consistency. BLEU and ROUGE perform comparably, and lower order $n$-gram metrics work better. BERTScore matches the best $n$-gram metrics on CNN/DM, but the worst overall on XSUM. On CNN/DM, QAGS obtains nearly twice the correlation of the next best automatic metric (BLEU-1). We speculate that this large increase is due to the sensitivity of the QA model to the sentence fusing behavior exhibited in many summarization models trained on CNN/DM BIBREF25. When two sentences are fused to produce an incorrect summary statement, the QA model produces different answers than when using the source article versus when using the summary. On XSUM, all metrics correlate worse with human judgments than on CNN/DM, which reflects the fact that XSUM is more abstractive. QAGS still outperforms the next best automatic metric. <<</Results>>> <<<Ablations>>> A potential issue with model-based evaluation is that the quality of the evaluation metric may depend heavily on specific hyperparameter settings. We explore whether this is true with QAGS by performing ablations on several factors. <<<Model Quality>>> We first consider the degree to which the quality of the underlying models impacts their evaluation capabilities. For QA quality, we answer this question by training QA models of varying quality by fine-tuning different versions of BERT on SQuAD. We present results in Table . The QA models perform similarly despite substantially different performances on the SQuAD development set. Surprisingly, using the best QA model (bert-large-wwm) does not lead to the best correlations with human judgments. On CNN/DM, bert-large-wwm slightly underperforms bert-base and bert-large. On XSUM, bert-base slightly outperforms the other two BERT variants. These results indicate that QAGS is fairly robust to the quality of the underlying QA model, though we note that BERT is a strong QA baseline, and using weaker QA models might lead to larger performance dropoffs. To ablate QG quality, we use models with increasing perplexity on the NewsQA development set. Results in Table show that QAGS is robust to the QG model quality, with some decrease in correlation with human judgments as perplexity increases on CNN/DM, and no clear trend on XSUM. Even the weakest QG model still significantly outperforms all other automatic metrics in Table . <<</Model Quality>>> <<<Domain Effects>>> Our approach relies on having a labeled dataset to train QG and QA models. However, for relatively niche domains, such a labeled QA/QG dataset may not exist. 
Instead, we may need to resort to using models trained on out-of-domain data, leading to domain shift effects that negatively impact the quality of the QAGS scores. We simulate this setting by fine-tuning the QG model on SQuAD, which is of similar size to NewsQA but drawn from Wikipedia articles rather than CNN articles, which exactly matches the genre of the summarization datasets. Evaluating with this QG model, we get correlations of 51.53 and 15.28 with human judgments on CNN/DM and XSUM respectively, versus 54.53 and 17.49 when using the NewsQA-tuned QG model. The drop in performance indicates a negative domain shift effect. However using the SQuAD-tuned QG model still substantially outperforms all other automatic metrics, again pointing to the robustness of QAGS. <<</Domain Effects>>> <<<Number of Questions>>> Next, we investigate the correlation with human judgments when varying the number of questions used. Results in Table show that increasing the number of questions used improves correlations with human judgments. We observe a large increase when moving from 10 to 20 questions, and a smaller increase from 20 to 50 questions, indicating decreasing marginal benefit moving beyond 50 questions. With just 5 questions, QAGS still substantially outperforms other automatic metrics, indicating its robustness. <<</Number of Questions>>> <<<Answer Similarity Metric>>> Finally, we consider using exact match as an alternative answer similarity metric. Exact match is another common evaluation metric for extractive QA, and is more restrictive than F1. When using EM, we obtain Pearson correlations with human judgments of 45.97 and 18.10 on CNN/DM and XSUM, as opposed to 54.53 and 17.49 when using F1. <<</Answer Similarity Metric>>> <<</Ablations>>> <<</Experiments>>> <<<Re-ranking with QAGS>>> Several works explore the use of natural language inference (NLI) models to detect factual consistency in generated text BIBREF26, BIBREF16. We compare against these methods by evaluating on the sentence ranking experiment from BIBREF16. The experiment uses 373 triplets of source sentences from CNN/DM and two summary sentences generated from the model from BIBREF27. One summary sentence is factually consistent with the source sentence, and the other is inconsistent. A metric (or model) is evaluated based on how often it ranks the consistent sentence higher than the inconsistent sentence. We present the results in Table . Results using two NLI models fine-tuned on MultiNLI BIBREF28, BERT NLI and ESIM BIBREF29, are from BIBREF16. FactCC BIBREF5 is an NLI-based fact-checking model that is trained on a dataset tailor made for detecting factual inconsistencies in generated text. QAGS outperforms these methods, while requiring no special supervision for this task. <<</Re-ranking with QAGS>>> <<<Qualitative Analysis>>> <<<Interpreting QAGS>>> The questions and answers produced in computing QAGS are directly interpretable, and highlight errors in summaries. We present examples of articles, summaries, and the QAGS questions and answers in Table . On the first example (Table , top), QAGS detects several factual inconsistencies in the generated summary: The summary mistakes the first name of the attacker, the location of the attack, and the weapons used. Because the QG model focuses on these details, QAGS is able to correctly penalize the summary for its hallucinations. Because the answer candidates used are mostly named entities and noun phrases, QAGS is particularly effective at detecting errors of this kind. 
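Since the answer candidates are the named entities and noun phrases extracted with spaCy (see the Question Generation section above), a minimal sketch of that extraction step is given below. The particular pipeline name `en_core_web_sm` and the example sentence are our assumptions; the paper does not specify which spaCy model was used.

```python
import spacy

# Any English spaCy pipeline with NER and a parser works; the model name is an assumption.
nlp = spacy.load("en_core_web_sm")

def answer_candidates(summary: str) -> list[str]:
    """Named entities and noun phrases used as answer candidates for question generation."""
    doc = nlp(summary)
    candidates = [ent.text for ent in doc.ents] + [np.text for np in doc.noun_chunks]
    return list(dict.fromkeys(candidates))  # de-duplicate while preserving order

print(answer_candidates("The attack in Paris was carried out with knives, police said."))
```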
Using more diverse answer candidates may broaden the set of inconsistencies that QAGS is able to detect. The second example (Table , bottom), illustrates failure modes of QAGS. For example, the QA model incorrectly marks question 2 as unanswerable. On question 4, both answers produced are correct, but because they have no common tokens, they are marked inconsistent by QAGS. <<</Interpreting QAGS>>> <<<Error Analysis>>> The interpretability of QAGS allows for error analysis on the metric. We manually annotate 400 triplets of generated questions, article answers, and summary answers that are produced in computing QAGS on the XSUM summaries, and label them by the quality of the generated questions, predicted answers, and answer similarity scores. Among the generated questions, 8.75% are nonsensical, while 3.00% are well-formed but unanswerable using the generated summary they were conditioned upon. These figures indicate that the vast majority of questions are understandable and on-topic. We frequently observe multiple questions with slightly different wordings, which is likely due to the low number of answer candidates in XSUM summaries (which are one sentence long) and due to beam search. 8.25% of questions are well-formed but unanswerable using the source, which is usually due to a hallucinated fact in the summary that the QG model turns into a question. Among predicted answers, 1.75% of questions are potentially answerable using the summary, but are incorrectly answered. This percentage increases to 32.50% for the article, which indicates that the transfer ability of the QA model is lacking. In a small number of cases, we found that while a question had a single answer in the summary, it could have multiple answers in the article. Finally, for 8.00% of the examples, the question is answered correctly using both the article and summary, but the answers have high lexical variation such that F1 score fails to detect their similarity. While this happens in a relatively small number of cases, exploring similarity metrics other than $n$-gram based approaches could be useful. <<</Error Analysis>>> <<<Limitations>>> We emphasize that QAGS and our overall framework are specifically designed to detect factual inconsistencies in generated summaries relative to the source article. QAGS does not measure other desirable properties of generated text, including fluency, readability, or factual recall. We therefore recommend using QAGS in conjunction with complementary evaluation metrics. The choices of QG and QA models in QAGS are particular to abstractive summarization and may require adaptation to be used for other conditional text generation tasks. For example, we expect that extractive summarization models may obtain nearly perfect QAGS scores because facts and statements are directly copied from the source article. <<</Limitations>>> <<</Qualitative Analysis>>> <<<Related Work>>> Automatic summarization and its evaluation are long-standing lines of work in NLP, dating at least as far back as the Document Understanding Conferences BIBREF30. The primary evaluation metric then and now is ROUGE BIBREF8, though much work has demonstrated the limited ability of ROUGE and its relatives to evaluate summaries BIBREF31, BIBREF32, BIBREF33. Other metrics have focused on specific aspects of summarization quality, including content selection BIBREF34, relevance prediction BIBREF4, and many more. There has been a recent resurgence of work leveraging NLU models for evaluating the factuality of generated text. 
BIBREF35 use information extraction models to measure factual overlap, but facts are restricted to pre-defined schemas. BIBREF16 investigate the use of NLI models to evaluate the factual correctness of CNN/DM summaries, and conclude that current NLI models are too brittle to be reliably used in this manner. BIBREF5 train a NLI-based fact-checking model by building a dataset of factual inconsistencies based on noise heuristic. Our QA approach allows a finer-grained analysis, because NLI operates on complete sentences, whereas QAGS can ask many questions about the same sentence. Most relatedly, BIBREF36 and BIBREF37 use QA models to evaluate summarization. We diverge from these works in two important ways. First, both works use Cloze-style questions, which are generated by masking entities in either the source document or the reference summary. We instead generate the questions with a model, allowing a much greater range of questions. Second, we produce questions conditioned on the generated summary, rather than the reference summary or source article. Producing questions from the generated summary is more appropriate for verifying the accuracy of the text, whereas using the reference or source measures content selection. <<</Related Work>>> <<<Conclusion>>> We introduce a framework for automatically detecting factual inconsistencies in conditionally generated texts and use this framework to develop QAGS, a metric for measuring inconsistencies in abstractive summarization. QAGS correlates with human judgments of factuality significantly better than standard automatic evaluation metrics for summarization, and outperforms related NLI-based approaches to factual consistency checking. QAGS is naturally interpretable: The questions and answers produced in computing QAGS indicate which tokens in a generated summary are inconsistent and why. Error analysis shows that future work should explore improved QA models. Our approach can also be applied to diverse modalities, such as translation and image captioning. Overall, we believe QAGS is useful in quantifying and incentivizing factually consistent text generation. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Yes" ], "type": "boolean" }
1909.00161
Please answer the following question with yes or no based on the given text. You only need to output 'Yes' or 'No' without any additional explanation. Question: Do they use pretrained models? Context: <<<Title>>> Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach <<<Abstract>>> Zero-shot text classification (0Shot-TC) is a challenging NLU problem to which little attention has been paid by the research community. 0Shot-TC aims to associate an appropriate label with a piece of text, irrespective of the text domain and the aspect (e.g., topic, emotion, event, etc.) described by the label. And there are only a few articles studying 0Shot-TC, all focusing only on topical categorization which, we argue, is just the tip of the iceberg in 0Shot-TC. In addition, the chaotic experiments in literature make no uniform comparison, which blurs the progress. ::: This work benchmarks the 0Shot-TC problem by providing unified datasets, standardized evaluations, and state-of-the-art baselines. Our contributions include: i) The datasets we provide facilitate studying 0Shot-TC relative to conceptually different and diverse aspects: the ``topic'' aspect includes ``sports'' and ``politics'' as labels; the ``emotion'' aspect includes ``joy'' and ``anger''; the ``situation'' aspect includes ``medical assistance'' and ``water shortage''. ii) We extend the existing evaluation setup (label-partially-unseen) -- given a dataset, train on some labels, test on all labels -- to include a more challenging yet realistic evaluation label-fully-unseen 0Shot-TC (Chang et al., 2008), aiming at classifying text snippets without seeing task specific training data at all. iii) We unify the 0Shot-TC of diverse aspects within a textual entailment formulation and study it this way. ::: Code & Data: this https URL <<</Abstract>>> <<<Introduction>>> Supervised text classification has achieved great success in the past decades due to the availability of rich training data and deep learning techniques. However, zero-shot text classification ($\textsc {0shot-tc}$) has attracted little attention despite its great potential in real world applications, e.g., the intent recognition of bank consumers. $\textsc {0shot-tc}$ is challenging because we often have to deal with classes that are compound, ultra-fine-grained, changing over time, and from different aspects such as topic, emotion, etc. Existing $\textsc {0shot-tc}$ studies have mainly the following three problems. <<<First problem.>>> The $\textsc {0shot-tc}$ problem was modeled in a too restrictive vision. Firstly, most work only explored a single task, which was mainly topic categorization, e.g., BIBREF1, BIBREF2, BIBREF3. We argue that this is only the tiny tip of the iceberg for $\textsc {0shot-tc}$. Secondly, there is often a precondition that a part of classes are seen and their labeled instances are available to train a model, as we define here as Definition-Restrictive: Definition-Restrictive ($\textsc {0shot-tc}$). Given labeled instances belonging to a set of seen classes $S$, $\textsc {0shot-tc}$ aims at learning a classifier $f(\cdot ): X \rightarrow Y$, where $Y=S\cup U$; $U$ is a set of unseen classes and belongs to the same aspect as $S$. In this work, we formulate the $\textsc {0shot-tc}$ in a broader vision. As Figure FIGREF2 demonstrates, a piece of text can be assigned labels which interpret the text in different aspects, such as the “topic” aspect, the “emotion” aspect, or the “situation” aspect described in the text. 
Different aspects, therefore, differ in interpreting the text. For instance, by “topic”, it means “this text is about {health, finance $\cdots $}”; by “emotion”, it means “this text expresses a sense of {joy, anger, $\cdots $}”; by “situation”, it means “the people there need {shelter, medical assistance, $\cdots $}”. Figure FIGREF2 also shows another essential property of $\textsc {0shot-tc}$ – the applicable label space for a piece of text has no boundary, e.g., “this text is news”, “the situation described in this text is serious”, etc. Therefore, we argue that we have to emphasize a more challenging scenario to satisfy the real-world problems: seeing no labels, no label-specific training data. Here is our new $\textsc {0shot-tc}$ definition: Definition-Wild ($\textsc {0shot-tc}$). $\textsc {0shot-tc}$ aims at learning a classifier $f(\cdot ): X \rightarrow Y$, where classifier $f(\cdot )$ never sees $Y$-specific labeled data in its model development. <<</First problem.>>> <<<Second problem.>>> Usually, conventional text classification denotes labels as indices {0,1,2, $\cdots $, $n$} without understanding neither the aspect's specific interpretation nor the meaning of the labels. This does not apply to $\textsc {0shot-tc}$ as we can not pre-define the size of the label space anymore, and we can not presume the availability of labeled data. Humans can easily decide the truth value of any upcoming labels because humans can interpret those aspects correctly and understand the meaning of those labels. The ultimate goal of $\textsc {0shot-tc}$ should be to develop machines to catch up with humans in this capability. To this end, making sure the system can understand the described aspect and the label meanings plays a key role. <<</Second problem.>>> <<<Third problem.>>> Prior work is mostly evaluated on different datasets and adopted different evaluation setups, which makes it hard to compare them fairly. For example, DBLPRiosK18 work on medical data while reporting R@K as metric; DBLPXiaZYCY18 work on SNIPS-NLU intent detection data while only unseen intents are in the label-searching space in evaluation. In this work, we benchmark the datasets and evaluation setups of $\textsc {0shot-tc}$. Furthermore, we propose a textual entailment approach to handle the $\textsc {0shot-tc}$ problem of diverse aspects in a unified paradigm. To be specific, we contribute in the following three aspects: <<</Third problem.>>> <<<Dataset.>>> We provide datasets for studying three aspects of $\textsc {0shot-tc}$: topic categorization, emotion detection, and situation frame detection – an event level recognition problem. For each dataset, we have standard split for train, dev, and test, and standard separation of seen and unseen classes. <<</Dataset.>>> <<<Evaluation.>>> Our standardized evaluations correspond to the Definition-Restrictive and Definition-Wild. i) Label-partially-unseen evaluation. This corresponds to the commonly studied $\textsc {0shot-tc}$ defined in Definition-Restrictive: for the set of labels of a specific aspect, given training data for a part of labels, predicting in the full label set. This is the most basic setup in $\textsc {0shot-tc}$. It checks whether the system can generalize to some labels in the same aspect. To satisfy Definition-Wild, we define a new evaluation: ii) Label-fully-unseen evaluation. In this setup, we assume the system is unaware of the upcoming aspects and can not access any labeled data for task-specific training. 
<<</Evaluation.>>> <<<Entailment approach.>>> Our Definition-Wild challenges the system design – how to develop a $\textsc {0shot-tc}$ system, without accessing any task-specific labeled data, to deal with labels from diverse aspects? In this work, we propose to treat $\textsc {0shot-tc}$ as a textual entailment problem. This is to imitate how humans decide the truth value of labels from any aspects. Usually, humans understand the problem described by the aspect and the meaning of the label candidates. Then humans mentally construct a hypothesis by filling a label candidate, e.g., “sports”, into the aspect-defined problem “the text is about $\underline{?}$”, and ask ourselves if this hypothesis is true, given the text. We treat $\textsc {0shot-tc}$ as a textual entailment problem so that our model can gain knowledge from entailment datasets, and we show that it applies to both Definition-Restrictive and Definition-Wild. Overall, this work aims at benchmarking the research of $\textsc {0shot-tc}$ by providing standardized datasets, evaluations, and a state-of-the-art entailment system. All datasets and codes are released. <<</Entailment approach.>>> <<</Introduction>>> <<<Related Work>>> $\textsc {Zero-stc}$ was first explored by the paradigm “Dataless Classification” BIBREF0. Dataless classification first maps the text and labels into a common space by Explicit Semantic Analysis (ESA) BIBREF4, then picks the label with the highest matching score. Dataless classification emphasizes that the representation of labels takes the equally crucial role as the representation learning of text. Then this idea was further developed in BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9. With the prevalence of word embeddings, more and more work adopts pretrained word embeddings to represent the meaning of words, so as to provide the models with the knowledge of labels BIBREF10, BIBREF2, BIBREF11, BIBREF12. DBLPYogatamaDLB17 build generative LSTM to generate text given the embedded labels. DBLPRiosK18 use label embedding to attend the text representation in the developing of a multi-label classifier. But they report R@K, so it is unclear whether the system can really predict unseen labels. DBLPXiaZYCY18 study the zero-shot intent detection problem. The learned representations of intents are still the sum of word embeddings. But during testing, the intent space includes only new intents; seen intents are not covered. All of these studies can only meet the definition in Definition-Restrictive, so they do not really generalize to open aspects of $\textsc {0shot-tc}$. JiangqngGuo enrich the embedding representations by incorporating class descriptions, class hierarchy, and the word-to-label paths in ConceptNet. DBLPMitchellSL18 assume that some natural language explanations about new labels are available. Then those explanations are parsed into formal constraints which are further combined with unlabeled data to yield new label oriented classifiers through posterior regularization. However, those explanatory statements about new labels are collected from crowd-sourcing. This limits its application in real world $\textsc {0shot-tc}$ scenarios. There are a few works that study a specific zero-shot problem by indirect supervision from other problems. DBLPLevySCZ17 and obamuyide2018zero study zero-shot relation extraction by converting it into a machine comprehension and textual entailment problem respectively. 
Then, a supervised system pretrained on an existing machine comprehension dataset or textual entailment dataset is used to do inference. Our work studies the $\textsc {0shot-tc}$ by formulating a broader vision: datasets of multiple apsects and evaluations. Other zero-shot problems studied in NLP involve entity typing BIBREF13, sequence labeling BIBREF14, etc. <<</Related Work>>> <<<Benchmark the dataset>>> In this work, we standardize the datasets for $\textsc {0shot-tc}$ for three aspects: topic detection, emotion detection, and situation detection. For each dataset, we insist on two principles: i) Label-partially-unseen: A part of labels are unseen. This corresponds to Definition-Restrictive, enabling us to check the performance of unseen labels as well as seen labels. ii) Label-fully-unseen: All labels are unseen. This corresponds to Definition-Wild, enabling us to check the system performance in test-agnostic setups. <<<Topic detection>>> <<<Yahoo.>>> We use the large-scale Yahoo dataset released by DBLPZhangZL15. Yahoo has 10 classes: {“Society & Culture”, “Science & Mathematics”, “Health”, “Education & Reference”, “Computers & Internet”, “Sports”, “Business & Finance”, “Entertainment & Music”, “Family & Relationships”, “Politics & Government”}, with original split: 1.4M/60k in train/test (all labels are balanced distributed). We reorganize the dataset by first fixing the dev and test sets as follows: for dev, all 10 labels are included, with 6k labeled instances for each; For test, all 10 labels are included, with 10k instances for each. Then training sets are created on remaining instances as follows. For label-partially-unseen, we create two versions of Yahoo train for $\textsc {0shot-tc}$: Train-v0: 5 classes: {“Society & Culture”, “Health”, “Computers & Internet”, “Business & Finance”, “Family & Relationships”} are included; each is equipped with 130k labeled instances. Train-v1: 5 classes: { “Science & Mathematics”, “Education & Reference”, “Sports”, “Entertainment & Music”, “Politics & Government”} are included; each is equipped with 130k labeled instances. We always create two versions of train with non-overlapping labels so as to get rid of the model's over-fitting on one of them. Label-fully-unseen share the same test and dev with the label-partially-unseen except that it has no training set. It is worth mentioning that our setup of label-partially-unseen and label-fully-unseen enables us to compare the performance mutually; it can show the system's capabilities while seeing different sizes of classes. <<</Yahoo.>>> <<</Topic detection>>> <<<Emotion detection>>> <<<UnifyEmotion.>>> This emotion dataset was released by DBLPBostanK18. It was constructed by unifying the emotion labels of multiple public emotion datasets. This dataset consists of text from multiple domains: tweet, emotional events, fairy tale and artificial sentences, and it contains 9 emotion types {“sadness”, “joy”, “anger”, “disgust”, “fear”, “surprise”, “shame”, “guilt”, “love”} and “none” (if no emotion applies). We remove the multi-label instances (appro. 4k) so that the remaining instances always have a single positive label. The official evaluation metric is label-weighted F1. Since the labels in this dataset has unbalanced distribution. We first directly list the fixed $\emph {test}$ and $\emph {dev}$ in Table TABREF9 and Table TABREF10, respectively. They are shared by following label-partial-unseen and label-fully-unseen setups of train. 
Label-partial-unseen has the following two versions of train: Train-v0: 5 classes: {“sadness”, “anger”, “fear”, “shame”, “love”} are included. Train-v1: 4 classes: { “joy”, “disgust”, “surprise”, “guilt”} are included. For label-fully-unseen, no training set is provided. <<</UnifyEmotion.>>> <<</Emotion detection>>> <<<Situation detection>>> The situation frame typing is one example of an event-type classification task. A situation frame studied here is a need situation such as the need for water or medical aid, or an issue situation such as crime violence BIBREF16, BIBREF17. It was originally designed for low-resource situation detection, where annotated data is unavailable. This is why it is particularly suitable for $\textsc {0shot-tc}$. We use the Situation Typing dataset released by mayhewuniversity. It has 5,956 labeled instances. Totally 11 situation types: “food supply”, “infrastructure”, “medical assistance”, “search/rescue”, “shelter”, “utilities, energy, or sanitation”, “water supply”, “evacuation”, “regime change”, “terrisms”, “crime violence” and an extra type “none” – if none of the 11 types applies. This dataset is a multi-label classification, and label-wise weighted F1 is the official evaluation. The train, test and dev are listed in Table TABREF22. <<<Summary of @!START@$\textsc {0shot-tc}$@!END@ datasets.>>> Our three datasets covers single-label classification (i.e., “topic” and “emotion”) and multi-label classification (i.e., “situation”). In addition, a “none” type is adopted in “emotion” and “situation” tasks if no predefined types apply – this makes the problem more realistic. <<</Summary of @!START@$\textsc {0shot-tc}$@!END@ datasets.>>> <<</Situation detection>>> <<</Benchmark the dataset>>> <<<Benchmark the evaluation>>> How to evaluate a $\textsc {0shot-tc}$ system? This needs to review the original motivation of doing $\textsc {0shot-tc}$ research. As we discussed in Introduction section, ideally, we aim to build a system that works like humans – figuring out if a piece of text can be assigned with an open-defined label, without any constrains on the domains and the aspects described by the labels. Therefore, we challenge the system in two setups: label-partially-unseen and label-fully-unseen. <<<Label-partially-unseen.>>> This is the most common setup in existing $\textsc {0shot-tc}$ literature: for a given dataset of a specific problem such as topic categorization, emotion detection, etc, train a system on a part of the labels, then test on the whole label space. Usually all labels describe the same aspect of the text. <<</Label-partially-unseen.>>> <<<Label-fully-unseen.>>> In this setup, we push “zero-shot” to the extreme – no annotated data for any labels. So, we imagine that learning a system through whatever approaches, then testing it on $\textsc {0shot-tc}$ datasets of open aspects. This label-fully-unseen setup is more like the dataless learning principle BIBREF0, in which no task-specific annotated data is provided for training a model (since usually this kind of model fails to generalize in other domains and other tasks), therefore, we are encouraged to learn models with open-data or test-agnostic data. In this way, the learned models behave more like humans. <<</Label-fully-unseen.>>> <<</Benchmark the evaluation>>> <<<An entailment model for @!START@$\textsc {0shot-tc}$@!END@>>> As one contribution of this work, we propose to deal with $\textsc {0shot-tc}$ as a textual entailment problem. 
It is inspired by: i) text classification is essentially a textual entailment problem. Let us think about how humans do classification: we mentally think “whether this text is about sport?”, or “whether this text expresses a specific feeling?”, or “whether the people there need water supply?” and so on. The reason that conventional text classification did not employ entailment approach is it always has pre-defined, fixed-size of classes equipped with annotated data. However, in $\textsc {0shot-tc}$, we can neither estimate how many and what classes will be handled nor have annotated data to train class-specific parameters. Textual entailment, instead, does not preordain the boundary of the hypothesis space. ii) To pursue the ideal generalization of classifiers, we definitely need to make sure that the classifiers understand the problem encoded in the aspects and understand the meaning of labels. Conventional supervised classifiers fail in this aspect since label names are converted into indices – this means the classifiers do not really understand the labels, let alone the problem. Therefore, exploring $\textsc {0shot-tc}$ as a textual entailment paradigm is a reasonable way to achieve generalization. <<<Convert labels into hypotheses.>>> The first step of dealing with $\textsc {0shot-tc}$ as an entailment problem is to convert labels into hypotheses. To this end, we first convert each aspect into an interpretation (we discussed before that generally one aspect defines one interpretation). E.g., “topic” aspect to interpretation “the text is about the topic”. Table TABREF24 lists some examples for the three aspects: “topic”, “emotion” and “situation”. In this work, we just explored two simple methods to generate the hypotheses. As Table TABREF24 shows, one is to use the label name to complete the interpretation, the other is to use the label's definition in WordNet to complete the interpretation. In testing, once one of them results in an “entailment” decision, then we decide the corresponding label is positive. We can definitely create more natural hypotheses through crowd-sourcing, such as “food” into “the people there are starving”. Here we just set the baseline examples by automatic approaches, more explorations are left as future work, and we welcome the community to contribute. <<</Convert labels into hypotheses.>>> <<<Convert classification data into entailment data.>>> For a data split (train, dev and test), each input text, acting as the premise, has a positive hypothesis corresponding to the positive label, and all negative labels in the data split provide negative hypotheses. Note that unseen labels do not provide negative hypotheses for instances in train. <<</Convert classification data into entailment data.>>> <<<Entailment model learning.>>> In this work, we make use of the widely-recognized state of the art entailment technique – BERT BIBREF18, and train it on three mainstream entailment datasets: MNLI BIBREF19, GLUE RTE BIBREF20, BIBREF21 and FEVER BIBREF22, respectively. We convert all datasets into binary case: “entailment” vs. “non-entailment”, by changing the label “neutral” (if exist in some datasets) into “non-entailment”. For our label-fully-unseen setup, we directly apply this pretrained entailment model on the test sets of all $\textsc {0shot-tc}$ aspects. For label-partially-unseen setup in which we intentionally provide annotated data, we first pretrain BERT on the MNLI/FEVER/RTE, then fine-tune on the provided training data. 
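To make the label-to-hypothesis conversion and the entailment-based prediction concrete, here is a minimal sketch. The templates follow the aspect interpretations described above; `entail_prob` is a placeholder for a pretrained entailment model (e.g., BERT fine-tuned on MNLI as in the paper) that returns the probability that the premise entails the hypothesis, and the 0.5 decision threshold for the binary entailment classifier is our assumption.

```python
# Aspect-specific templates that turn a label name into a hypothesis.
TEMPLATES = {
    "topic":     "This text is about {}.",
    "emotion":   "This text expresses a sense of {}.",
    "situation": "The people there need {}.",
}

def build_hypotheses(aspect: str, labels: list[str]) -> dict[str, str]:
    return {label: TEMPLATES[aspect].format(label) for label in labels}

def predict_single_label(text: str, aspect: str, labels: list[str], entail_prob) -> str:
    """Single-label 0shot-tc: pick the label whose hypothesis is most strongly entailed."""
    hypotheses = build_hypotheses(aspect, labels)
    scores = {label: entail_prob(premise=text, hypothesis=h) for label, h in hypotheses.items()}
    return max(scores, key=scores.get)

def predict_multi_label(text: str, aspect: str, labels: list[str], entail_prob,
                        threshold: float = 0.5) -> list[str]:
    """Multi-label 0shot-tc (e.g., situation): keep every label judged as entailed."""
    hypotheses = build_hypotheses(aspect, labels)
    positives = [label for label, h in hypotheses.items()
                 if entail_prob(premise=text, hypothesis=h) > threshold]
    return positives if positives else ["none"]
```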
<<</Entailment model learning.>>> <<<Harsh policy in testing.>>> Since seen labels have annotated data for training, we adopt different policies to pick up seen and unseen labels. To be specific, we pick a seen label with a harsher rule: i) In single-label classification, if both seen and unseen labels are predicted as positive, we pick the seen label only if its probability of being positive is higher than that of the unseen label by a hyperparameter $\alpha $. If only seen or unseen labels are predicted as positive, we pick the one with the highest probability; ii) In multi-label classification, if both seen and unseen labels are predicted as positive, we change the seen labels into “negative” if their probability of being positive is higher than that of the unseen label by less than $\alpha $. Finally, all labels labeled positive will be selected. If no positive labels, we choose “none” type. $\alpha $ = 0.05 in our systems, tuned on dev. <<</Harsh policy in testing.>>> <<</An entailment model for @!START@$\textsc {0shot-tc}$@!END@>>> <<<Experiments>>> <<<Label-partially-unseen evaluation>>> In this setup, there is annotated data for partial labels as train. So, we report performance for unseen classes as well as seen classes. We compare our entailment approaches, trained separately on MNLI, FEVER and RTE, with the following baselines. <<<Baselines.>>> Majority: the text picks the label of the largest size. ESA: A dataless classifier proposed in BIBREF0. It maps the words (in text and label names) into the title space of Wikipedia articles, then compares the text with label names. This method does not rely on train. We implemented ESA based on 08/01/2019 Wikipedia dump. There are about 6.1M words and 5.9M articles. Word2Vec BIBREF23: Both the representations of the text and the labels are the addition of word embeddings element-wisely. Then cosine similarity determines the labels. This method does not rely on train either. Binary-BERT: We fine-tune BERT on train, which will yield a binary classifier for entailment or not; then we test it on test – picking the label with the maximal probability in single-label scenarios while choosing all the labels with “entailment” decision in multi-label cases. <<</Baselines.>>> <<<Discussion.>>> The results of label-partially-unseen are listed in Table TABREF30. “ESA” performs slightly worse than “Word2Vec” in topic detection, mainly because the label names, i.e., topics such as “sports”, are closer than some keywords such as “basketball” in Word2Vec space. However, “ESA” is clearly better than “Word2Vec” in situation detection; this should be mainly due to the fact that the label names (e.g., “shelter”, “evaculation”, etc.) can hardly find close words in the text by Word2Vec embeddings. Quite the contrary, “ESA” is easier to make a class such as “shelter” closer to some keywords like “earthquake”. Unfortunately, both Word2Vec and ESA work poorly for emotion detection problem. We suspect that emotion detection requires more entailment capability. For example, the text snippet “when my brother was very late in arriving home from work”, its gold emotion “fear” requires some common-knowledge inference, rather than just word semantic matching through Word2Vec and ESA. The supervised method “Binary-BERT” is indeed strong in learning the seen-label-specific models – this is why it predicts very well for seen classes while performing much worse for unseen classes. 
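Returning to the “Harsh policy in testing” described above, the seen/unseen decision rules reduce to a short post-processing step over the per-label entailment probabilities. A minimal sketch of the single-label case follows, with $\alpha = 0.05$ as reported; the fallback when no label is predicted positive is our assumption, since the paper only specifies the “none” fallback for the multi-label case.

```python
ALPHA = 0.05  # tuned on dev, as reported above

def pick_label_single(probs: dict[str, float], seen: set[str]) -> str:
    """Single-label harsh policy: prefer a seen label only if it beats the best unseen
    label by more than ALPHA; otherwise take the highest-probability positive label."""
    positive = {label: p for label, p in probs.items() if p > 0.5}  # "entailment" decisions
    if not positive:
        return max(probs, key=probs.get)  # assumption: fall back to the top-scoring label
    seen_pos   = {l: p for l, p in positive.items() if l in seen}
    unseen_pos = {l: p for l, p in positive.items() if l not in seen}
    if seen_pos and unseen_pos:
        best_seen = max(seen_pos, key=seen_pos.get)
        best_unseen = max(unseen_pos, key=unseen_pos.get)
        return best_seen if seen_pos[best_seen] > unseen_pos[best_unseen] + ALPHA else best_unseen
    return max(positive, key=positive.get)
```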
Our entailment models, especially the one pretrained on MNLI, generally get competitive performance with the “Binary-BERT” for seen (slightly worse on “topic” and “emotion” while clearly better on “situation”) and improve the performance regarding unseen by large margins. At this stage, fine-tuning on an MNLI-based pretrained entailment model seems more powerful. <<</Discussion.>>> <<</Label-partially-unseen evaluation>>> <<<Label-fully-unseen evaluation>>> Regarding this label-fully-unseen evaluation, apart from our entailment models and three unsupervised baselines “Majority”, “Word2Vec” and “ESA”, we also report the following baseline: Wikipedia-based: We train a binary classifier based on BERT on a dataset collected from Wikipedia. Wikipedia is a corpus of general purpose, without targeting any specific $\textsc {0shot-tc}$ task. Collecting categorized articles from Wikipedia is popular way of creating training data for text categorization, such as BIBREF13. More specifically, we collected 100K articles along with their categories in the bottom of each article. For each article, apart from its attached positive categories, we randomly sample three negative categories. Then each article and its positive/negative categories act as training pairs for the binary classifier. We notice “Wikipedia-based” training indeed contributes a lot for the topic detection task; however, its performances on emotion and situation detection problems are far from satisfactory. We believe this is mainly because the Yahoo-based topic categorization task is much closer to the Wikipedia-based topic categorization task; emotion and situation categorizations, however, are relatively further. Our entailment models, pretrained on MNLI/FEVER/RTE respectively, perform more robust on the three $\textsc {0shot-tc}$ aspects (except for the RTE on emotion). Recall that they are not trained on any text classification data, and never know the domain and the aspects in the test. This clearly shows the great promise of developing textual entailment models for $\textsc {0shot-tc}$. Our ensemble approach further boosts the performances on all three tasks. An interesting phenomenon, comparing the label-partially-unseen results in Table TABREF30 and the label-fully-unseen results in Table TABREF32, is that the pretrained entailment models work in this order for label-fully-unseen case: RTE $>$ FEVER $>$MNLI; on the contrary, if we fine-tune them on the label-partially-unseen case, the MNLI-based model performs best. This could be due to a possibility that, on one hand, the constructed situation entailment dataset is closer to the RTE dataset than to the MNLI dataset, so an RTE-based model can generalize well to situation data, but, on the other hand, it could also be more likely to over-fit the training set of “situation” during fine-tuning. A deeper exploration of this is left as future work. <<</Label-fully-unseen evaluation>>> <<<How do the generated hypotheses influence>>> In Table TABREF24, we listed examples for converting class names into hypotheses. In this work, we only tried to make use of the class names and their definitions in WordNet. Table TABREF33 lists the fine-grained performance of three ways of generating hypotheses: “word”, “definition”, and “combination” (i.e., word&definition). 
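For the “definition” and “word&definition” variants just introduced, the hypothesis can be completed with a WordNet gloss instead of, or in addition to, the bare label name. A minimal sketch using NLTK's WordNet interface is shown below; taking the first synset is our simplification, as the paper does not state how the word sense is chosen.

```python
from nltk.corpus import wordnet as wn  # requires the corpus: nltk.download("wordnet")

def definition_hypothesis(aspect_template: str, label: str) -> str:
    """Complete the aspect interpretation with the label's WordNet definition."""
    synsets = wn.synsets(label.replace(" ", "_"))
    gloss = synsets[0].definition() if synsets else label  # first sense: a simplification
    return aspect_template.format(gloss)

print(definition_hypothesis("This text expresses a sense of {}.", "joy"))
# e.g. "This text expresses a sense of the emotion of great happiness."
```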
This table indicates that: i) Definition alone usually does not work well in any of the three tasks, no matter which pretrained entailment model is used; ii) Whether “word” alone or “word&definition” works better depends on the specific task and the pretrained entailment model. For example, the pretrained MNLI model prefers “word&definition” in both “emotion” and “situation” detection tasks. However, the other two entailment models (RTE and FEVER) mostly prefer “word”. iii) Since it is unrealistic to adopt only one entailment model, such as from {RTE, FEVER, MNLI}, for any open $\textsc {0shot-tc}$ problem, an ensemble system should be preferred. However, the concrete implementation of the ensemble system also influences the strengths of different hypothesis generation approaches. In this work, our ensemble method reaches the top performance when combining the “word” and “definition”. More ensemble systems and hypothesis generation paradigms need to be studied in the future. To better understand the impact of generated hypotheses, we dive into the performance of each labels, taking “situation detection” as an example. Figure FIGREF47 illustrates the separate F1 scores for each situation class, predicted by the ensemble model for label-fully-unseen setup. This enables us to check in detail how easily the constructed hypotheses can be understood by the entailment model. Unfortunately, some classes are still challenging, such as “evacuation”, “infrastructure”, and “regime change”. This should be attributed to their over-abstract meaning. Some classes were well recognized, such as “water”, “shelter”, and “food”. One reason is that these labels mostly are common words – systems can more easily match them to the text; the other reason is that they are situation classes with higher frequencies (refer to Table TABREF22) – this is reasonable based on our common knowledge about disasters. <<</How do the generated hypotheses influence>>> <<</Experiments>>> <<<Summary>>> In this work, we analyzed the problems of existing research on zero-shot text classification ($\textsc {0shot-tc}$): restrictive problem definition, the weakness in understanding the problem and the labels' meaning, and the chaos of datasets and evaluation setups. Therefore, we are benchmarking $\textsc {0shot-tc}$ by standardizing the datasets and evaluations. More importantly, to tackle the broader-defined $\textsc {0shot-tc}$, we proposed a textual entailment framework which can work with or without the annotated data of seen labels. <<</Summary>>> <<<Acknowledgments>>> The authors would like to thank Jennifer Sheffield and the anonymous reviewers for insightful comments and suggestions. This work was supported by Contracts HR0011-15-C-0113 and HR0011-18-2-0052 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. <<</Acknowledgments>>> <<</Title>>>
{ "references": [ "Yes" ], "type": "boolean" }
1909.00161
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What are their baseline models? Context: <<<Title>>> Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach <<<Abstract>>> Zero-shot text classification (0Shot-TC) is a challenging NLU problem to which little attention has been paid by the research community. 0Shot-TC aims to associate an appropriate label with a piece of text, irrespective of the text domain and the aspect (e.g., topic, emotion, event, etc.) described by the label. And there are only a few articles studying 0Shot-TC, all focusing only on topical categorization which, we argue, is just the tip of the iceberg in 0Shot-TC. In addition, the chaotic experiments in literature make no uniform comparison, which blurs the progress. ::: This work benchmarks the 0Shot-TC problem by providing unified datasets, standardized evaluations, and state-of-the-art baselines. Our contributions include: i) The datasets we provide facilitate studying 0Shot-TC relative to conceptually different and diverse aspects: the ``topic'' aspect includes ``sports'' and ``politics'' as labels; the ``emotion'' aspect includes ``joy'' and ``anger''; the ``situation'' aspect includes ``medical assistance'' and ``water shortage''. ii) We extend the existing evaluation setup (label-partially-unseen) -- given a dataset, train on some labels, test on all labels -- to include a more challenging yet realistic evaluation label-fully-unseen 0Shot-TC (Chang et al., 2008), aiming at classifying text snippets without seeing task specific training data at all. iii) We unify the 0Shot-TC of diverse aspects within a textual entailment formulation and study it this way. ::: Code & Data: this https URL <<</Abstract>>> <<<Introduction>>> Supervised text classification has achieved great success in the past decades due to the availability of rich training data and deep learning techniques. However, zero-shot text classification ($\textsc {0shot-tc}$) has attracted little attention despite its great potential in real world applications, e.g., the intent recognition of bank consumers. $\textsc {0shot-tc}$ is challenging because we often have to deal with classes that are compound, ultra-fine-grained, changing over time, and from different aspects such as topic, emotion, etc. Existing $\textsc {0shot-tc}$ studies have mainly the following three problems. <<<First problem.>>> The $\textsc {0shot-tc}$ problem was modeled in a too restrictive vision. Firstly, most work only explored a single task, which was mainly topic categorization, e.g., BIBREF1, BIBREF2, BIBREF3. We argue that this is only the tiny tip of the iceberg for $\textsc {0shot-tc}$. Secondly, there is often a precondition that a part of classes are seen and their labeled instances are available to train a model, as we define here as Definition-Restrictive: Definition-Restrictive ($\textsc {0shot-tc}$). Given labeled instances belonging to a set of seen classes $S$, $\textsc {0shot-tc}$ aims at learning a classifier $f(\cdot ): X \rightarrow Y$, where $Y=S\cup U$; $U$ is a set of unseen classes and belongs to the same aspect as $S$. In this work, we formulate the $\textsc {0shot-tc}$ in a broader vision. As Figure FIGREF2 demonstrates, a piece of text can be assigned labels which interpret the text in different aspects, such as the “topic” aspect, the “emotion” aspect, or the “situation” aspect described in the text. 
Different aspects, therefore, differ in interpreting the text. For instance, by “topic”, it means “this text is about {health, finance $\cdots $}”; by “emotion”, it means “this text expresses a sense of {joy, anger, $\cdots $}”; by “situation”, it means “the people there need {shelter, medical assistance, $\cdots $}”. Figure FIGREF2 also shows another essential property of $\textsc {0shot-tc}$ – the applicable label space for a piece of text has no boundary, e.g., “this text is news”, “the situation described in this text is serious”, etc. Therefore, we argue that we have to emphasize a more challenging scenario to satisfy the real-world problems: seeing no labels, no label-specific training data. Here is our new $\textsc {0shot-tc}$ definition: Definition-Wild ($\textsc {0shot-tc}$). $\textsc {0shot-tc}$ aims at learning a classifier $f(\cdot ): X \rightarrow Y$, where classifier $f(\cdot )$ never sees $Y$-specific labeled data in its model development. <<</First problem.>>> <<<Second problem.>>> Usually, conventional text classification denotes labels as indices {0,1,2, $\cdots $, $n$} without understanding neither the aspect's specific interpretation nor the meaning of the labels. This does not apply to $\textsc {0shot-tc}$ as we can not pre-define the size of the label space anymore, and we can not presume the availability of labeled data. Humans can easily decide the truth value of any upcoming labels because humans can interpret those aspects correctly and understand the meaning of those labels. The ultimate goal of $\textsc {0shot-tc}$ should be to develop machines to catch up with humans in this capability. To this end, making sure the system can understand the described aspect and the label meanings plays a key role. <<</Second problem.>>> <<<Third problem.>>> Prior work is mostly evaluated on different datasets and adopted different evaluation setups, which makes it hard to compare them fairly. For example, DBLPRiosK18 work on medical data while reporting R@K as metric; DBLPXiaZYCY18 work on SNIPS-NLU intent detection data while only unseen intents are in the label-searching space in evaluation. In this work, we benchmark the datasets and evaluation setups of $\textsc {0shot-tc}$. Furthermore, we propose a textual entailment approach to handle the $\textsc {0shot-tc}$ problem of diverse aspects in a unified paradigm. To be specific, we contribute in the following three aspects: <<</Third problem.>>> <<<Dataset.>>> We provide datasets for studying three aspects of $\textsc {0shot-tc}$: topic categorization, emotion detection, and situation frame detection – an event level recognition problem. For each dataset, we have standard split for train, dev, and test, and standard separation of seen and unseen classes. <<</Dataset.>>> <<<Evaluation.>>> Our standardized evaluations correspond to the Definition-Restrictive and Definition-Wild. i) Label-partially-unseen evaluation. This corresponds to the commonly studied $\textsc {0shot-tc}$ defined in Definition-Restrictive: for the set of labels of a specific aspect, given training data for a part of labels, predicting in the full label set. This is the most basic setup in $\textsc {0shot-tc}$. It checks whether the system can generalize to some labels in the same aspect. To satisfy Definition-Wild, we define a new evaluation: ii) Label-fully-unseen evaluation. In this setup, we assume the system is unaware of the upcoming aspects and can not access any labeled data for task-specific training. 
<<</Evaluation.>>> <<<Entailment approach.>>> Our Definition-Wild challenges the system design – how to develop a $\textsc {0shot-tc}$ system, without accessing any task-specific labeled data, to deal with labels from diverse aspects? In this work, we propose to treat $\textsc {0shot-tc}$ as a textual entailment problem. This is to imitate how humans decide the truth value of labels from any aspects. Usually, humans understand the problem described by the aspect and the meaning of the label candidates. Then humans mentally construct a hypothesis by filling a label candidate, e.g., “sports”, into the aspect-defined problem “the text is about $\underline{?}$”, and ask ourselves if this hypothesis is true, given the text. We treat $\textsc {0shot-tc}$ as a textual entailment problem so that our model can gain knowledge from entailment datasets, and we show that it applies to both Definition-Restrictive and Definition-Wild. Overall, this work aims at benchmarking the research of $\textsc {0shot-tc}$ by providing standardized datasets, evaluations, and a state-of-the-art entailment system. All datasets and codes are released. <<</Entailment approach.>>> <<</Introduction>>> <<<Related Work>>> $\textsc {Zero-stc}$ was first explored by the paradigm “Dataless Classification” BIBREF0. Dataless classification first maps the text and labels into a common space by Explicit Semantic Analysis (ESA) BIBREF4, then picks the label with the highest matching score. Dataless classification emphasizes that the representation of labels takes the equally crucial role as the representation learning of text. Then this idea was further developed in BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9. With the prevalence of word embeddings, more and more work adopts pretrained word embeddings to represent the meaning of words, so as to provide the models with the knowledge of labels BIBREF10, BIBREF2, BIBREF11, BIBREF12. DBLPYogatamaDLB17 build generative LSTM to generate text given the embedded labels. DBLPRiosK18 use label embedding to attend the text representation in the developing of a multi-label classifier. But they report R@K, so it is unclear whether the system can really predict unseen labels. DBLPXiaZYCY18 study the zero-shot intent detection problem. The learned representations of intents are still the sum of word embeddings. But during testing, the intent space includes only new intents; seen intents are not covered. All of these studies can only meet the definition in Definition-Restrictive, so they do not really generalize to open aspects of $\textsc {0shot-tc}$. JiangqngGuo enrich the embedding representations by incorporating class descriptions, class hierarchy, and the word-to-label paths in ConceptNet. DBLPMitchellSL18 assume that some natural language explanations about new labels are available. Then those explanations are parsed into formal constraints which are further combined with unlabeled data to yield new label oriented classifiers through posterior regularization. However, those explanatory statements about new labels are collected from crowd-sourcing. This limits its application in real world $\textsc {0shot-tc}$ scenarios. There are a few works that study a specific zero-shot problem by indirect supervision from other problems. DBLPLevySCZ17 and obamuyide2018zero study zero-shot relation extraction by converting it into a machine comprehension and textual entailment problem respectively. 
Then, a supervised system pretrained on an existing machine comprehension dataset or textual entailment dataset is used to do inference. Our work studies $\textsc {0shot-tc}$ by formulating a broader vision: datasets of multiple aspects and evaluations. Other zero-shot problems studied in NLP involve entity typing BIBREF13, sequence labeling BIBREF14, etc. <<</Related Work>>> <<<Benchmark the dataset>>> In this work, we standardize the datasets for $\textsc {0shot-tc}$ for three aspects: topic detection, emotion detection, and situation detection. For each dataset, we insist on two principles: i) Label-partially-unseen: A part of the labels are unseen. This corresponds to Definition-Restrictive, enabling us to check the performance on unseen labels as well as seen labels. ii) Label-fully-unseen: All labels are unseen. This corresponds to Definition-Wild, enabling us to check the system performance in test-agnostic setups. <<<Topic detection>>> <<<Yahoo.>>> We use the large-scale Yahoo dataset released by DBLPZhangZL15. Yahoo has 10 classes: {“Society & Culture”, “Science & Mathematics”, “Health”, “Education & Reference”, “Computers & Internet”, “Sports”, “Business & Finance”, “Entertainment & Music”, “Family & Relationships”, “Politics & Government”}, with the original split of 1.4M/60k in train/test (all labels are evenly distributed). We reorganize the dataset by first fixing the dev and test sets as follows: for dev, all 10 labels are included, with 6k labeled instances for each; for test, all 10 labels are included, with 10k instances for each. The training sets are then created from the remaining instances as follows. For label-partially-unseen, we create two versions of Yahoo train for $\textsc {0shot-tc}$: Train-v0: 5 classes: {“Society & Culture”, “Health”, “Computers & Internet”, “Business & Finance”, “Family & Relationships”} are included; each is equipped with 130k labeled instances. Train-v1: 5 classes: {“Science & Mathematics”, “Education & Reference”, “Sports”, “Entertainment & Music”, “Politics & Government”} are included; each is equipped with 130k labeled instances. We always create two versions of train with non-overlapping labels so as to avoid the model over-fitting to one of them. Label-fully-unseen shares the same test and dev sets as label-partially-unseen, except that it has no training set. It is worth mentioning that our setup of label-partially-unseen and label-fully-unseen enables us to compare the performance mutually; it can show the system's capabilities when different numbers of classes are seen. <<</Yahoo.>>> <<</Topic detection>>> <<<Emotion detection>>> <<<UnifyEmotion.>>> This emotion dataset was released by DBLPBostanK18. It was constructed by unifying the emotion labels of multiple public emotion datasets. The dataset consists of text from multiple domains: tweets, emotional events, fairy tales, and artificial sentences, and it contains 9 emotion types {“sadness”, “joy”, “anger”, “disgust”, “fear”, “surprise”, “shame”, “guilt”, “love”} and “none” (if no emotion applies). We remove the multi-label instances (approximately 4k) so that the remaining instances always have a single positive label. The official evaluation metric is label-weighted F1. Since the labels in this dataset have an unbalanced distribution, we first directly list the fixed $\emph {test}$ and $\emph {dev}$ sets in Table TABREF9 and Table TABREF10, respectively. They are shared by the following label-partially-unseen and label-fully-unseen setups of train. 
Label-partially-unseen has the following two versions of train: Train-v0: 5 classes: {“sadness”, “anger”, “fear”, “shame”, “love”} are included. Train-v1: 4 classes: {“joy”, “disgust”, “surprise”, “guilt”} are included. For label-fully-unseen, no training set is provided. <<</UnifyEmotion.>>> <<</Emotion detection>>> <<<Situation detection>>> Situation frame typing is one example of an event-type classification task. A situation frame studied here is a need situation, such as the need for water or medical aid, or an issue situation, such as crime violence BIBREF16, BIBREF17. It was originally designed for low-resource situation detection, where annotated data is unavailable. This is why it is particularly suitable for $\textsc {0shot-tc}$. We use the Situation Typing dataset released by mayhewuniversity. It has 5,956 labeled instances and 11 situation types in total: “food supply”, “infrastructure”, “medical assistance”, “search/rescue”, “shelter”, “utilities, energy, or sanitation”, “water supply”, “evacuation”, “regime change”, “terrorism”, “crime violence”, plus an extra type “none” – used if none of the 11 types applies. This is a multi-label classification dataset, and label-wise weighted F1 is the official evaluation metric. The train, test and dev sets are listed in Table TABREF22. <<<Summary of @!START@$\textsc {0shot-tc}$@!END@ datasets.>>> Our three datasets cover single-label classification (i.e., “topic” and “emotion”) and multi-label classification (i.e., “situation”). In addition, a “none” type is adopted in the “emotion” and “situation” tasks if no predefined type applies – this makes the problem more realistic. <<</Summary of @!START@$\textsc {0shot-tc}$@!END@ datasets.>>> <<</Situation detection>>> <<</Benchmark the dataset>>> <<<Benchmark the evaluation>>> How should a $\textsc {0shot-tc}$ system be evaluated? This requires revisiting the original motivation of $\textsc {0shot-tc}$ research. As we discussed in the Introduction section, ideally, we aim to build a system that works like humans – figuring out whether a piece of text can be assigned an open-defined label, without any constraints on the domains and the aspects described by the labels. Therefore, we challenge the system in two setups: label-partially-unseen and label-fully-unseen. <<<Label-partially-unseen.>>> This is the most common setup in the existing $\textsc {0shot-tc}$ literature: for a given dataset of a specific problem such as topic categorization, emotion detection, etc., train a system on a part of the labels, then test on the whole label space. Usually all labels describe the same aspect of the text. <<</Label-partially-unseen.>>> <<<Label-fully-unseen.>>> In this setup, we push “zero-shot” to the extreme – no annotated data for any label. That is, we imagine learning a system through whatever approach is available and then testing it on $\textsc {0shot-tc}$ datasets of open aspects. This label-fully-unseen setup is closer to the dataless learning principle BIBREF0, in which no task-specific annotated data is provided for training a model (since such a model usually fails to generalize to other domains and other tasks); we are therefore encouraged to learn models with open data or test-agnostic data. In this way, the learned models behave more like humans. <<</Label-fully-unseen.>>> <<</Benchmark the evaluation>>> <<<An entailment model for @!START@$\textsc {0shot-tc}$@!END@>>> As one contribution of this work, we propose to deal with $\textsc {0shot-tc}$ as a textual entailment problem. 
It is inspired by: i) text classification is essentially a textual entailment problem. Let us think about how humans do classification: we mentally think “whether this text is about sport?”, or “whether this text expresses a specific feeling?”, or “whether the people there need water supply?” and so on. The reason that conventional text classification did not employ entailment approach is it always has pre-defined, fixed-size of classes equipped with annotated data. However, in $\textsc {0shot-tc}$, we can neither estimate how many and what classes will be handled nor have annotated data to train class-specific parameters. Textual entailment, instead, does not preordain the boundary of the hypothesis space. ii) To pursue the ideal generalization of classifiers, we definitely need to make sure that the classifiers understand the problem encoded in the aspects and understand the meaning of labels. Conventional supervised classifiers fail in this aspect since label names are converted into indices – this means the classifiers do not really understand the labels, let alone the problem. Therefore, exploring $\textsc {0shot-tc}$ as a textual entailment paradigm is a reasonable way to achieve generalization. <<<Convert labels into hypotheses.>>> The first step of dealing with $\textsc {0shot-tc}$ as an entailment problem is to convert labels into hypotheses. To this end, we first convert each aspect into an interpretation (we discussed before that generally one aspect defines one interpretation). E.g., “topic” aspect to interpretation “the text is about the topic”. Table TABREF24 lists some examples for the three aspects: “topic”, “emotion” and “situation”. In this work, we just explored two simple methods to generate the hypotheses. As Table TABREF24 shows, one is to use the label name to complete the interpretation, the other is to use the label's definition in WordNet to complete the interpretation. In testing, once one of them results in an “entailment” decision, then we decide the corresponding label is positive. We can definitely create more natural hypotheses through crowd-sourcing, such as “food” into “the people there are starving”. Here we just set the baseline examples by automatic approaches, more explorations are left as future work, and we welcome the community to contribute. <<</Convert labels into hypotheses.>>> <<<Convert classification data into entailment data.>>> For a data split (train, dev and test), each input text, acting as the premise, has a positive hypothesis corresponding to the positive label, and all negative labels in the data split provide negative hypotheses. Note that unseen labels do not provide negative hypotheses for instances in train. <<</Convert classification data into entailment data.>>> <<<Entailment model learning.>>> In this work, we make use of the widely-recognized state of the art entailment technique – BERT BIBREF18, and train it on three mainstream entailment datasets: MNLI BIBREF19, GLUE RTE BIBREF20, BIBREF21 and FEVER BIBREF22, respectively. We convert all datasets into binary case: “entailment” vs. “non-entailment”, by changing the label “neutral” (if exist in some datasets) into “non-entailment”. For our label-fully-unseen setup, we directly apply this pretrained entailment model on the test sets of all $\textsc {0shot-tc}$ aspects. For label-partially-unseen setup in which we intentionally provide annotated data, we first pretrain BERT on the MNLI/FEVER/RTE, then fine-tune on the provided training data. 
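To make the conversion pipeline above concrete, here is a minimal sketch (ours, not the authors' released code) of how a labeled example is turned into binary entailment pairs. The templates follow the aspect interpretations quoted earlier; the WordNet lookup, the label sets, and all function names are illustrative assumptions.

```python
# A minimal sketch of the label-to-hypothesis and classification-to-entailment
# conversions described above.  Templates mirror the aspect interpretations in the
# paper; everything else (names, label sets, WordNet usage) is an assumption.
from nltk.corpus import wordnet as wn   # assumes the WordNet corpus is available

TEMPLATES = {
    "topic":     "this text is about {}",
    "emotion":   "this text expresses a sense of {}",
    "situation": "the people there need {}",
}

def label_to_hypotheses(aspect, label):
    """'word' hypothesis from the label name, plus a 'definition' hypothesis from WordNet."""
    word_hyp = TEMPLATES[aspect].format(label)
    synsets = wn.synsets(label.replace(" ", "_"))
    def_hyp = TEMPLATES[aspect].format(synsets[0].definition()) if synsets else None
    return [h for h in (word_hyp, def_hyp) if h]

def to_entailment_pairs(text, gold_label, label_space, aspect):
    """One labeled example -> (premise, hypothesis, entailment-or-not) training pairs."""
    pairs = []
    for label in label_space:   # for train, label_space contains seen labels only
        tag = "entailment" if label == gold_label else "non-entailment"
        for hyp in label_to_hypotheses(aspect, label):
            pairs.append((text, hyp, tag))
    return pairs

# Example: a 'situation' instance whose gold label is "shelter"
pairs = to_entailment_pairs(
    "Families displaced by the storm are sleeping outdoors.",
    "shelter", ["shelter", "water supply", "medical assistance"], "situation")
for premise, hypothesis, tag in pairs:
    print(tag, "|", hypothesis)
```

At test time, the BERT-based entailment model pretrained on MNLI/FEVER/RTE (and optionally fine-tuned on such pairs from seen labels) scores every (text, hypothesis) pair, and a label is treated as positive whenever one of its hypotheses receives an “entailment” decision.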
<<</Entailment model learning.>>> <<<Harsh policy in testing.>>> Since seen labels have annotated data for training, we adopt different policies to pick up seen and unseen labels. To be specific, we pick a seen label with a harsher rule: i) In single-label classification, if both seen and unseen labels are predicted as positive, we pick the seen label only if its probability of being positive is higher than that of the unseen label by a hyperparameter $\alpha $. If only seen or unseen labels are predicted as positive, we pick the one with the highest probability; ii) In multi-label classification, if both seen and unseen labels are predicted as positive, we change the seen labels into “negative” if their probability of being positive is higher than that of the unseen label by less than $\alpha $. Finally, all labels labeled positive will be selected. If no positive labels, we choose “none” type. $\alpha $ = 0.05 in our systems, tuned on dev. <<</Harsh policy in testing.>>> <<</An entailment model for @!START@$\textsc {0shot-tc}$@!END@>>> <<<Experiments>>> <<<Label-partially-unseen evaluation>>> In this setup, there is annotated data for partial labels as train. So, we report performance for unseen classes as well as seen classes. We compare our entailment approaches, trained separately on MNLI, FEVER and RTE, with the following baselines. <<<Baselines.>>> Majority: the text picks the label of the largest size. ESA: A dataless classifier proposed in BIBREF0. It maps the words (in text and label names) into the title space of Wikipedia articles, then compares the text with label names. This method does not rely on train. We implemented ESA based on 08/01/2019 Wikipedia dump. There are about 6.1M words and 5.9M articles. Word2Vec BIBREF23: Both the representations of the text and the labels are the addition of word embeddings element-wisely. Then cosine similarity determines the labels. This method does not rely on train either. Binary-BERT: We fine-tune BERT on train, which will yield a binary classifier for entailment or not; then we test it on test – picking the label with the maximal probability in single-label scenarios while choosing all the labels with “entailment” decision in multi-label cases. <<</Baselines.>>> <<<Discussion.>>> The results of label-partially-unseen are listed in Table TABREF30. “ESA” performs slightly worse than “Word2Vec” in topic detection, mainly because the label names, i.e., topics such as “sports”, are closer than some keywords such as “basketball” in Word2Vec space. However, “ESA” is clearly better than “Word2Vec” in situation detection; this should be mainly due to the fact that the label names (e.g., “shelter”, “evaculation”, etc.) can hardly find close words in the text by Word2Vec embeddings. Quite the contrary, “ESA” is easier to make a class such as “shelter” closer to some keywords like “earthquake”. Unfortunately, both Word2Vec and ESA work poorly for emotion detection problem. We suspect that emotion detection requires more entailment capability. For example, the text snippet “when my brother was very late in arriving home from work”, its gold emotion “fear” requires some common-knowledge inference, rather than just word semantic matching through Word2Vec and ESA. The supervised method “Binary-BERT” is indeed strong in learning the seen-label-specific models – this is why it predicts very well for seen classes while performing much worse for unseen classes. 
Our entailment models, especially the one pretrained on MNLI, generally get competitive performance with the “Binary-BERT” for seen (slightly worse on “topic” and “emotion” while clearly better on “situation”) and improve the performance regarding unseen by large margins. At this stage, fine-tuning on an MNLI-based pretrained entailment model seems more powerful. <<</Discussion.>>> <<</Label-partially-unseen evaluation>>> <<<Label-fully-unseen evaluation>>> Regarding this label-fully-unseen evaluation, apart from our entailment models and three unsupervised baselines “Majority”, “Word2Vec” and “ESA”, we also report the following baseline: Wikipedia-based: We train a binary classifier based on BERT on a dataset collected from Wikipedia. Wikipedia is a corpus of general purpose, without targeting any specific $\textsc {0shot-tc}$ task. Collecting categorized articles from Wikipedia is popular way of creating training data for text categorization, such as BIBREF13. More specifically, we collected 100K articles along with their categories in the bottom of each article. For each article, apart from its attached positive categories, we randomly sample three negative categories. Then each article and its positive/negative categories act as training pairs for the binary classifier. We notice “Wikipedia-based” training indeed contributes a lot for the topic detection task; however, its performances on emotion and situation detection problems are far from satisfactory. We believe this is mainly because the Yahoo-based topic categorization task is much closer to the Wikipedia-based topic categorization task; emotion and situation categorizations, however, are relatively further. Our entailment models, pretrained on MNLI/FEVER/RTE respectively, perform more robust on the three $\textsc {0shot-tc}$ aspects (except for the RTE on emotion). Recall that they are not trained on any text classification data, and never know the domain and the aspects in the test. This clearly shows the great promise of developing textual entailment models for $\textsc {0shot-tc}$. Our ensemble approach further boosts the performances on all three tasks. An interesting phenomenon, comparing the label-partially-unseen results in Table TABREF30 and the label-fully-unseen results in Table TABREF32, is that the pretrained entailment models work in this order for label-fully-unseen case: RTE $>$ FEVER $>$MNLI; on the contrary, if we fine-tune them on the label-partially-unseen case, the MNLI-based model performs best. This could be due to a possibility that, on one hand, the constructed situation entailment dataset is closer to the RTE dataset than to the MNLI dataset, so an RTE-based model can generalize well to situation data, but, on the other hand, it could also be more likely to over-fit the training set of “situation” during fine-tuning. A deeper exploration of this is left as future work. <<</Label-fully-unseen evaluation>>> <<<How do the generated hypotheses influence>>> In Table TABREF24, we listed examples for converting class names into hypotheses. In this work, we only tried to make use of the class names and their definitions in WordNet. Table TABREF33 lists the fine-grained performance of three ways of generating hypotheses: “word”, “definition”, and “combination” (i.e., word&definition). 
This table indicates that: i) Definition alone usually does not work well in any of the three tasks, no matter which pretrained entailment model is used; ii) Whether “word” alone or “word&definition” works better depends on the specific task and the pretrained entailment model. For example, the pretrained MNLI model prefers “word&definition” in both “emotion” and “situation” detection tasks. However, the other two entailment models (RTE and FEVER) mostly prefer “word”. iii) Since it is unrealistic to adopt only one entailment model, such as from {RTE, FEVER, MNLI}, for any open $\textsc {0shot-tc}$ problem, an ensemble system should be preferred. However, the concrete implementation of the ensemble system also influences the strengths of different hypothesis generation approaches. In this work, our ensemble method reaches the top performance when combining the “word” and “definition”. More ensemble systems and hypothesis generation paradigms need to be studied in the future. To better understand the impact of generated hypotheses, we dive into the performance of each labels, taking “situation detection” as an example. Figure FIGREF47 illustrates the separate F1 scores for each situation class, predicted by the ensemble model for label-fully-unseen setup. This enables us to check in detail how easily the constructed hypotheses can be understood by the entailment model. Unfortunately, some classes are still challenging, such as “evacuation”, “infrastructure”, and “regime change”. This should be attributed to their over-abstract meaning. Some classes were well recognized, such as “water”, “shelter”, and “food”. One reason is that these labels mostly are common words – systems can more easily match them to the text; the other reason is that they are situation classes with higher frequencies (refer to Table TABREF22) – this is reasonable based on our common knowledge about disasters. <<</How do the generated hypotheses influence>>> <<</Experiments>>> <<<Summary>>> In this work, we analyzed the problems of existing research on zero-shot text classification ($\textsc {0shot-tc}$): restrictive problem definition, the weakness in understanding the problem and the labels' meaning, and the chaos of datasets and evaluation setups. Therefore, we are benchmarking $\textsc {0shot-tc}$ by standardizing the datasets and evaluations. More importantly, to tackle the broader-defined $\textsc {0shot-tc}$, we proposed a textual entailment framework which can work with or without the annotated data of seen labels. <<</Summary>>> <<<Acknowledgments>>> The authors would like to thank Jennifer Sheffield and the anonymous reviewers for insightful comments and suggestions. This work was supported by Contracts HR0011-15-C-0113 and HR0011-18-2-0052 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. <<</Acknowledgments>>> <<</Title>>>
{ "references": [ "Majority,ESA,Word2Vec ,Binary-BERT" ], "type": "extractive" }
1909.08167
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: How are different domains weighted in WDIRL? Context: <<<Title>>> Weighed Domain-Invariant Representation Learning for Cross-domain Sentiment Analysis <<<Abstract>>> Cross-domain sentiment analysis is currently a hot topic in the research and engineering areas. One of the most popular frameworks in this field is the domain-invariant representation learning (DIRL) paradigm, which aims to learn a distribution-invariant feature representation across domains. However, in this work, we find out that applying DIRL may harm domain adaptation when the label distribution $\rm{P}(\rm{Y})$ changes across domains. To address this problem, we propose a modification to DIRL, obtaining a novel weighted domain-invariant representation learning (WDIRL) framework. We show that it is easy to transfer existing SOTA DIRL models to WDIRL. Empirical studies on extensive cross-domain sentiment analysis tasks verified our statements and showed the effectiveness of our proposed solution. <<</Abstract>>> <<<Introduction>>> Sentiment analysis aims to predict sentiment polarity of user-generated data with emotional orientation like movie reviews. The exponentially increase of online reviews makes it an interesting topic in research and industrial areas. However, reviews can span so many different domains and the collection and preprocessing of large amounts of data for new domains is often time-consuming and expensive. Therefore, cross-domain sentiment analysis is currently a hot topic, which aims to transfer knowledge from a label-rich source domain (S) to the label-few target domain (T). In recent years, one of the most popular frameworks for cross-domain sentiment analysis is the domain invariant representation learning (DIRL) framework BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. Methods of this framework follow the idea of extracting a domain-invariant feature representation, in which the data distributions of the source and target domains are similar. Based on the resultant representations, they learn the supervised classifier using source rich labeled data. The main difference among these methods is the applied technique to force the feature representations to be domain-invariant. However, in this work, we discover that applying DIRL may harm domain adaptation in the situation that the label distribution $\rm {P}(\rm {Y})$ shifts across domains. Specifically, let $\rm {X}$ and $\rm {Y}$ denote the input and label random variable, respectively, and $G(\rm {X})$ denote the feature representation of $\rm {X}$. We found out that when $\rm {P}(\rm {Y})$ changes across domains while $\rm {P}(\rm {X}|\rm {Y})$ stays the same, forcing $G(\rm {X})$ to be domain-invariant will make $G(\rm {X})$ uninformative to $\rm {Y}$. This will, in turn, harm the generation of the supervised classifier to the target domain. In addition, for the more general condition that both $\rm {P}(\rm {Y})$ and $\rm {P}(\rm {X}|\rm {Y})$ shift across domains, we deduced a conflict between the object of making the classification error small and that of making $G(\rm {X})$ domain-invariant. We argue that the problem is worthy of studying since the shift of $\rm {P}(\rm {Y})$ exists in many real-world cross-domain sentiment analysis tasks BIBREF0. 
For example, the marginal distribution of the sentiment of a product can be affected by the overall social environment and change in different time periods; and for different products, their marginal distributions of the sentiment are naturally considered different. Moreover, there are many factors, such as the original data distribution, data collection time, and data cleaning method, that can affect $\rm {P}(\rm {Y})$ of the collected target domain unlabeled dataset. Note that in real-world cross-domain tasks, we do not know the labels of the collected target domain data. Thus, we cannot align its label distribution $\rm {P}_T(\mathbf {Y})$ with that of the source domain labeled data $\rm {P}_S(\mathbf {Y})$ in advance, as done in many previous works BIBREF0, BIBREF2, BIBREF5, BIBREF4, BIBREF6, BIBREF7. To address the problem of DIRL resulting from the shift of $\rm {P}(\rm {Y})$, we propose a modification to DIRL, obtaining a weighted domain-invariant representation learning (WDIRL) framework. This framework additionally introduces a class weight $\mathbf {w}$ to weigh source domain examples by class, hoping to make $\rm {P}(\rm {Y})$ of the weighted source domain close to that of the target domain. Based on $\mathbf {w}$, it resolves domain shift in two steps. In the first step, it forces the marginal distribution $\rm {P}(\rm {X})$ to be domain-invariant between the target domain and the weighted source domain instead of the original source domain, obtaining a supervised classifier $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ and a class weight $\mathbf {w}$. In the second step, it resolves the shift of $\rm {P}(\rm {Y}|\rm {X})$ by adjusting $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ using $\mathbf {w}$ for label prediction in the target domain. We detail these two steps in §SECREF4. Moreover, we will illustrate how to transfer existing DIRL models to their WDIRL counterparts, taking the representative metric-based CMD model BIBREF3 and the adversarial-learning-based DANN model BIBREF2 as examples, respectively. In summary, the contributions of this paper include: ($\mathbf {i}$) We theoretically and empirically analyse the problem of DIRL for domain adaptation when the marginal distribution $\rm {P}(\rm {Y})$ shifts across domains. ($\mathbf {ii}$) We propose a novel method to address the problem and show how to incorporate it into existing DIRL models. ($\mathbf {iii}$) Experimental studies on extensive cross-domain sentiment analysis tasks show that models of our WDIRL framework can greatly outperform their DIRL counterparts. <<</Introduction>>> <<<Preliminary and Related Work>>> <<<Domain Adaptation>>> For expression consistency, in this work we consider domain adaptation in the unsupervised setting (however, we argue that our analysis and solution also apply to the supervised and semi-supervised domain adaptation settings). In the unsupervised domain adaptation setting, there are two different distributions over $\rm {X} \times \rm {Y}$: the source domain $\rm {P}_S(\rm {X},\rm {Y})$ and the target domain $\rm {P}_T(\rm {X},\rm {Y})$. There is a labeled data set $\mathcal {D}_S$ drawn $i.i.d.$ from $\rm {P}_S(\rm {X},\rm {Y})$ and an unlabeled data set $\mathcal {D}_T$ drawn $i.i.d.$ from the marginal distribution $\rm {P}_T(\rm {X})$. The goal of domain adaptation is to build a classifier $f:\rm {X} \rightarrow \rm {Y}$ that has good performance in the target domain using $\mathcal {D}_S$ and $\mathcal {D}_T$. 
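Purely as an illustration of this setting (sample sizes, the feature dimensionality, and all variable names are our assumptions; only the structure comes from the text), the data that the rest of the paper reasons about looks as follows:

```python
# Illustrative sketch of the unsupervised domain adaptation setup defined above:
# labeled source data, unlabeled target data, and a classifier judged on the target.
import numpy as np

rng = np.random.default_rng(0)
num_features = 5000                      # e.g., bag-of-words features, as in the experiments

# D_S: labeled examples drawn i.i.d. from the source joint distribution P_S(X, Y)
source_X = rng.random((2000, num_features))
source_y = rng.integers(0, 2, size=2000)          # labels exist for the source domain only

# D_T: unlabeled examples drawn i.i.d. from the target marginal P_T(X)
target_X = rng.random((2000, num_features))

# Goal: build f from (source_X, source_y) and target_X such that f performs well
# on held-out data drawn from the *target* distribution P_T(X, Y).
```

The DIRL methods surveyed next, and the WDIRL variants later on, differ mainly in how this unlabeled `target_X` is used to shape the learned representation and the final classifier.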
For this purpose, many approaches have been proposed from different views, such as instance reweighting BIBREF8, pivot-based information passing BIBREF9, spectral feature alignment BIBREF10, subsampling BIBREF11, and of course the domain-invariant representation learning BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22. <<</Domain Adaptation>>> <<<Domain Invariant Representation Learning>>> Domain invariant representation learning (DIRL) is a very popular framework for performing domain adaptation in the cross-domain sentiment analysis field BIBREF23, BIBREF4, BIBREF24, BIBREF7. It is heavily motivated by the following theorem BIBREF25. Theorem 1 For a hypothesis $h$, Here, $\mathcal {L}_S(h)$ denotes the expected loss with hypothesis $h$ in the source domain, $\mathcal {L}_T(h)$ denotes the counterpart in the target domain, $d_1$ is a measure of divergence between two distributions. Based on Theorem UNKREF3 and assuming that performing a feature transform on $\rm {X}$ will not increase the values of the first and third terms of the right side of Ineq. (DISPLAY_FORM4), methods of the DIRL framework apply a feature map $G$ onto $\rm {X}$, hoping to obtain a feature representation $G(\rm {X})$ that has a lower value of ${d}_{1}(\rm {P}_S(G(\rm {X})), \rm {P}_T(G(\rm {X})))$. To this end, different methods have been proposed. These methods can be roughly divided into two directions. The first direction is to design a differentiable metric to explicitly evaluate the discrepancy between two distributions. We call methods of this direction the metric-based DIRL methods. A representative work of this direction is the center-momentum-based model proposed by BIBREF3. In that work, they proposed a central moment discrepancy metric (CMD) to evaluate the discrepancy between two distributions. Specifically, let $\rm {X}_S$ and $\rm {X}_T$ denote $M$-dimensional random vectors on the compact interval $[a; b]^M$ over the distributions $\rm {P}_S$ and $\rm {P}_T$, respectively. The CMD loss between $\rm {P}_S$ and $\rm {P}_T$ is defined by: Here, $\mathbb {E}(\rm {X})$ denotes the expectation of $\rm {X}$ over the distribution $\rm {P}_S(\rm {X})$, and the remaining terms involve the $k$-th central moments, where $\rm {X}_i$ denotes the $i^{th}$ dimensional variable of $\rm {X}$. The second direction is to perform adversarial training between the feature generator $G$ and a domain discriminator $D$. We call methods of this direction the adversarial-learning-based methods. As a representative, BIBREF2 trained $D$ to distinguish the domain of a given example $x$ based on its representation $G(x)$. At the same time, they encouraged $G$ to deceive $D$, i.e., to make $D$ unable to distinguish the domain of $x$. More specifically, $D$ was trained to minimize the domain classification loss $\mathcal {L}_d$ over its trainable parameters, while in contrast $G$ was trained to maximize $\mathcal {L}_d$. According to the work of BIBREF26, this is equivalent to minimizing the Jensen-Shannon divergence BIBREF27, BIBREF28 $\text{JSD}(\rm {P}_S, \rm {P}_T)$ between $\rm {P}_S(G(\rm {X}))$ and $\rm {P}_T(G(\rm {X}))$ over $G$. Here, for a concise expression, we write $\rm {P}$ as the shorthand for $\rm {P}(G(\rm {X}))$. 
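For reference, here is a minimal NumPy sketch of the $K$-order CMD in the form we recall from BIBREF3: the distance between the sample means plus the distances between higher-order central moments, each scaled by the feature range $|b-a|$. The exact scaling and all variable names should be read as our assumptions rather than as the paper's notation.

```python
# A minimal NumPy sketch of the K-order central moment discrepancy (CMD) between two
# samples, in the form we recall from BIBREF3: the distance between means plus the
# distances between k-th central moments, each scaled by the feature range |b - a|.
# Variable names and the use of plain sample estimates are our own choices.
import numpy as np

def cmd(source, target, K=5, a=0.0, b=1.0):
    """source, target: (n_samples, n_features) arrays whose features lie in [a, b]."""
    span = abs(b - a)
    mean_s, mean_t = source.mean(axis=0), target.mean(axis=0)
    loss = np.linalg.norm(mean_s - mean_t) / span
    centered_s, centered_t = source - mean_s, target - mean_t
    for k in range(2, K + 1):
        ck_s = (centered_s ** k).mean(axis=0)     # per-feature k-th central moment
        ck_t = (centered_t ** k).mean(axis=0)
        loss += np.linalg.norm(ck_s - ck_t) / (span ** k)
    return loss
```

In the DIRL setup above, `source` and `target` would be mini-batches of hidden representations $G(\rm {X})$ from the two domains, and the returned value is the quantity the feature extractor $G$ is trained to reduce; the adversarial alternative replaces it with the discriminator-based $\text{JSD}(\rm {P}_S, \rm {P}_T)$ term just described.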
The task loss is the combination of the supervised learning loss $\mathcal {L}_{sup}$ and the domain-invariant learning loss $\mathcal {L}_{inv}$, which are defined on $\mathcal {D}_S$ only and on the combination of $\mathcal {D}_S$ and $\mathcal {D}_T$, respectively: $\mathcal {L} = \mathcal {L}_{sup} + \alpha \mathcal {L}_{inv}$. Here, $\alpha $ is a hyper-parameter for loss balance, and the aforementioned domain adversarial loss $\text{JSD}(\rm {P}_S, \rm {P}_T)$ and $\text{CMD}_K$ are two concrete forms of $\mathcal {L}_{inv}$. <<</Domain Invariant Representation Learning>>> <<</Preliminary and Related Work>>> <<<Problem of Domain-Invariant Representation Learning>>> In this work, we found out that applying DIRL may harm domain adaptation in the situation that $\rm {P}(\rm {Y})$ shifts across domains. Specifically, when $\rm {P}_S(\rm {Y})$ differs from $\rm {P}_T(\rm {Y})$, forcing the feature representations $G(\rm {X})$ to be domain-invariant may increase the value of $\mathcal {L}_S(h)$ in Ineq. (DISPLAY_FORM4) and consequently increase the value of $\mathcal {L}_T(h)$, which means a decrease in target domain performance. In the following, we start our analysis under the condition that $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$. Then, we consider the more general condition that $\rm {P}_S(\rm {X}|\rm {Y})$ also differs from $\rm {P}_T(\rm {X}|\rm {Y})$. When $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$, we have the following theorem. Theorem 2 Given $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$, if $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$ and a feature map $G$ makes $\rm {P}_S(G(\rm {X}))=\rm {P}_T(G(\rm {X}))$, then $\rm {P}_S(\rm {Y}=i|G(\rm {X}))=\rm {P}_S(\rm {Y}=i)$. Proofs appear in Appendix A. <<<Remark.>>> According to Theorem UNKREF8, we know that when $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$ and $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$, forcing $G(\rm {X})$ to be domain-invariant tends to make data of class $i$ mix with data of other classes in the space of $G(\rm {X})$. This will make it difficult for the supervised classifier to distinguish inputs of class $i$ from inputs of the other classes. Consider the extreme case in which every instance $x$ is mapped to a single point $g_0$ in $G(\rm {X})$. In this case, $\rm {P}_S(G(\rm {X})=g_0)= \rm {P}_T(G(\rm {X})=g_0) = 1$. Therefore, $G(\rm {X})$ is domain-invariant. As a result, the supervised classifier will assign the label $y^* = \operatornamewithlimits{arg\,max}_y \rm {P}_S(\rm {Y}=y)$ to all input examples. This is definitely unacceptable. To give a more intuitive illustration of the above analysis, we offer several empirical studies on Theorem UNKREF8 in Appendix B. When $\rm {P}_S(\rm {Y})\ne \rm {P}_T(\rm {Y})$ and $\rm {P}_S(\rm {X}|\rm {Y}) \ne \rm {P}_T(\rm {X}|\rm {Y})$, we did not obtain such a strong conclusion as Theorem UNKREF8. Instead, we deduced a conflict between the objective of achieving superior classification performance and that of making features domain-invariant. Suppose that $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$ and instances of class $i$ are completely distinguishable from instances of the remaining classes in $G(\rm {X})$, i.e., $\rm {P}(G(\rm {X}=x)|\rm {Y}=i)>0$ implies $\rm {P}(G(\rm {X}=x)|\rm {Y} \ne i)=0$. In DIRL, we hope that $\rm {P}_S(G(\rm {X}))=\rm {P}_T(G(\rm {X}))$. Consider the region $x \in \mathcal {X}_i$, where $\rm {P}(G(\rm {X}=x)|\rm {Y}=i)>0$. According to the above assumption, we know that $\rm {P}(G(\rm {X}=x \in \mathcal {X}_i)|\rm {Y} \ne i) = 0$. Therefore, applying DIRL will force $\rm {P}_S(\rm {Y}=i)\,\rm {P}_S(G(\rm {X}=x)|\rm {Y}=i)=\rm {P}_T(\rm {Y}=i)\,\rm {P}_T(G(\rm {X}=x)|\rm {Y}=i)$ in the region $x \in \mathcal {X}_i$. 
Taking the integral of $x$ over $\mathcal {X}_i$ on both sides of the equation, we have $\rm {P}_S(\rm {Y}=i) = \rm {P}_T(\rm {Y}=i)$. This deduction contradicts the setting that $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$. Therefore, it is impossible for $G(\rm {X})$ to be fully class-separable when it is domain-invariant. Note that the objective of supervised learning is exactly to make $G(\rm {X})$ class-separable. Thus, this indicates a conflict between supervised learning and domain-invariant representation learning. Based on the above analysis, we can conclude that it is impossible to obtain a feature representation $G(X)$ that is class-separable and, at the same time, domain-invariant using the DIRL framework when $\rm {P}(\rm {Y})$ shifts across domains. However, the shift of $\rm {P}(\rm {Y})$ can exist in many cross-domain sentiment analysis tasks. Therefore, this problem of DIRL is worth studying and addressing. <<</Remark.>>> <<</Problem of Domain-Invariant Representation Learning>>> <<<Weighted Domain Invariant Representation Learning>>> According to the above analysis, we propose a weighted version of DIRL to address the problem that the shift of $\rm {P}(\rm {Y})$ causes for DIRL. The key idea of this framework is to first align $\rm {P}(\rm {Y})$ across domains before performing domain-invariant learning, and then take the shift of $\rm {P}(\rm {Y})$ into account in the label prediction procedure. Specifically, it introduces a class weight $\mathbf {w}$ to weigh source domain examples by class. Based on the weighted source domain, the domain shift problem is resolved in two steps. In the first step, it applies DIRL on the target domain and the weighted source domain, aiming to alleviate the influence of the shift of $\rm {P}(\rm {Y})$ during the alignment of $\rm {P}(\rm {X}|\rm {Y})$. In the second step, it uses $\mathbf {w}$ to reweigh the supervised classifier $\rm {P}_S(\rm {Y}|\rm {X})$ obtained in the first step for target domain label prediction. We detail these two steps in §SECREF10 and §SECREF14, respectively. <<<Align @!START@$\rm {P}(\rm {X}|\rm {Y})$@!END@ with Class Weight>>> The motivation behind this practice is to adjust the data distribution of the source domain or the target domain so as to alleviate the shift of $\rm {P}(\rm {Y})$ across domains before applying DIRL. Since we only have labels for the source domain data, we choose to adjust the data distribution of the source domain. To achieve this purpose, we introduce a trainable class weight $\mathbf {w}$ to reweigh source domain examples by class when performing DIRL, with $\mathbf {w}_i > 0$. Specifically, we hope that $\mathbf {w}_i \rm {P}_S(\rm {Y}=i) = \rm {P}_T(\rm {Y}=i)$ for every class $i$, and we denote by $\mathbf {w}^*$ the value of $\mathbf {w}$ that makes this equation hold. We shall see that when $\mathbf {w}=\mathbf {w}^*$, DIRL amounts to aligning $\rm {P}_S(G(\rm {X})|\rm {Y})$ with $\rm {P}_T(G(\rm {X})|\rm {Y})$, free of the shift of $\rm {P}(\rm {Y})$. According to our analysis, we know that due to the shift of $\rm {P}(\rm {Y})$, there is a conflict between the training objectives of the supervised learning loss $\mathcal {L}_{sup}$ and the domain-invariant learning loss $\mathcal {L}_{inv}$, and that the degree of this conflict decreases as $\rm {P}_S(\rm {Y})$ gets close to $\rm {P}_T(\rm {Y})$. Therefore, during model training, $\mathbf {w}$ is expected to be optimized toward $\mathbf {w}^*$, since doing so makes $\rm {P}(\rm {Y})$ of the weighted source domain close to $\rm {P}_T(\rm {Y})$ and thereby resolves the conflict. 
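As a quick sanity check of this claim (our own derivation, written in the paper's notation), reweighting source examples of class $i$ by $\mathbf {w}_i$ defines a weighted source domain $\tilde{\rm {P}}_S(\rm {X},\rm {Y}=i) \propto \mathbf {w}_i \rm {P}_S(\rm {X},\rm {Y}=i)$, and plugging in the ideal weight $\mathbf {w}^*_i = \rm {P}_T(\rm {Y}=i)/\rm {P}_S(\rm {Y}=i)$ gives:

```latex
% Sanity-check derivation (ours): \tilde{P}_S is the class-reweighted source domain.
\tilde{\mathrm{P}}_S(\mathrm{Y}=i)
  = \frac{\mathbf{w}^*_i \, \mathrm{P}_S(\mathrm{Y}=i)}
         {\sum_{j} \mathbf{w}^*_j \, \mathrm{P}_S(\mathrm{Y}=j)}
  = \frac{\mathrm{P}_T(\mathrm{Y}=i)}{\sum_{j} \mathrm{P}_T(\mathrm{Y}=j)}
  = \mathrm{P}_T(\mathrm{Y}=i),
\qquad
\tilde{\mathrm{P}}_S(\mathrm{X} \mid \mathrm{Y}) = \mathrm{P}_S(\mathrm{X} \mid \mathrm{Y}).
```

So the weighted source domain keeps the source class-conditionals but carries the target label distribution; forcing $\tilde{\rm {P}}_S(G(\rm {X}))=\rm {P}_T(G(\rm {X}))$ therefore no longer asks $G$ to absorb a label-distribution mismatch, leaving only the class-conditionals to be aligned.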
We now show how to transfer existing DIRL models to their WDIRL counterparts with the above idea. Let $\mathbb {S}:\rm {P} \rightarrow {R}$ denote a statistic function defined over a distribution $\rm {P}$. For example, the expectation function $\mathbb {E}(\rm {X})$ in $\mathbb {E}(\rm {X}_S) \equiv \mathbb {E}(\rm {X})(\rm {P}_S(\rm {X}))$ is a concrete instaintiation of $\mathbb {S}$. In general, to transfer models from DIRL to WDIRL, we should replace $\mathbb {S}(\rm {P}_S(\rm {X}))$ defined in $\mathcal {L}_{inv}$ with Take the CMD metric as an example. In WDIRL, the revised form of ${\text{CMD}}_K$ is defined by: Here, $\mathbb {E}(\rm {X}_S|\rm {Y}_S=i) \equiv \mathbb {E}(\rm {X})(\rm {P}_S(\rm {X}|\rm {Y}=i))$ denotes the expectation of $\rm {X}$ over distribution $\rm {P}_S(\rm {X}|\rm {Y}=i)$. Note that both $\rm {P}_S(\rm {Y}=i)$ and $\mathbb {E}(\rm {X}_S|\rm {Y}_S=i)$ can be estimated using source labeled data, and $\mathbb {E}(\rm {X}_T)$ can be estimated using target unlabeled data. As for those adversarial-learning-based DIRL methods, e.g., DANN BIBREF2, the revised domain-invariant loss can be precisely defined by: During model training, $D$ is optimized in the direction to minimize $\hat{\mathcal {L}}_d$, while $G$ and $\mathbf {w}$ are optimized to maximize $\hat{\mathcal {L}}_d$. In the following, we denote $\widehat{\text{JSD}}(\rm {P}_S, \rm {P}_T)$ the equivalent loss defined over $G$ for the revised version of domain adversarial learning. The general task loss in WDIRL is defined by: where $\hat{\mathcal {L}}_{inv}$ is a unified representation of the domain-invariant loss in WDIRL, such as $\widehat{\text{CMD}}_K$ and $\widehat{\text{JSD}}(\rm {P}_S, \rm {P}_T)$. <<</Align @!START@$\rm {P}(\rm {X}|\rm {Y})$@!END@ with Class Weight>>> <<<Align @!START@$\rm {P}(\rm {Y}|\rm {X})$@!END@ with Class Weight>>> In the above step, we align $\rm {P}(\rm {X}|\rm {Y})$ across domains by performing domain-invariant learning on the class-weighted source domain and the original target domain. In this step, we deal with the shift of $\rm {P}(\rm {Y})$. Suppose that we have successfully resolved the shift of $\rm {P}(\rm {X}|\rm {Y})$ with $G$, i.e., $\rm {P}_S(G(\rm {X})|\rm {Y})=\rm {P}_T(G(\rm {X})|\rm {Y})$. Then, according to the work of BIBREF29, we have: where $\gamma (\rm {Y}=i)={\rm {P}_T(\rm {Y}=i)}/{\rm {P}_S(\rm {Y}=i)}$. Of course, in most of the real-world tasks, we do not know the value of $\gamma (\rm {Y}=i)$. However, note that $\gamma (\rm {Y}=i)$ is exactly the expected class weight $\mathbf {w}^*_i$. Therefore, a natural practice of this step is to estimate $\gamma (\rm {Y}=i)$ with the obtained $\mathbf {w}_i$ in the first step and estimate $\rm {P}_T(\rm {Y}|G(\rm {X}))$ with: In summary, to transfer methods of the DIRL paradigm to WDIRL, we should: first revise the definition of $\mathcal {L}_{inv}$, obtaining its corresponding WDIRL form $\hat{\mathcal {L}}_{inv}$; then perform supervised learning and domain-invariant representation learning on $\mathcal {D}_S$ and $\mathcal {D}_T$ according to Eq. (DISPLAY_FORM13), obtaining a supervised classifier $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ and a class weight vector $\mathbf {w}$; and finally, adjust $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ using $\mathbf {w}$ according to Eq. (DISPLAY_FORM16) and obtain the target domain classifier $\rm {P}_T(\rm {Y}|\rm {X}; \mathbf {\Phi })$. 
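Putting the two steps together, the following is a condensed sketch (ours, not the released implementation) that keeps only the first-order (mean) term of the weighted CMD for brevity. In the actual model $\mathbf {w}$ is a trainable parameter optimized jointly with $G$; here it is just an array, and the class order of `w` is assumed to match the sorted class labels.

```python
# Condensed sketch of the two WDIRL steps described above (first-order CMD term only).
# w[i] is the class weight for the i-th class in sorted label order.
import numpy as np

def weighted_mean_discrepancy(feat_s, y_s, feat_t, w):
    """|| sum_i w_i * P_S(Y=i) * E[G(X_S)|Y=i]  -  E[G(X_T)] ||_2"""
    classes = np.unique(y_s)
    p_s = np.array([(y_s == c).mean() for c in classes])             # P_S(Y=i) from source labels
    cond_means = np.stack([feat_s[y_s == c].mean(axis=0) for c in classes])
    weighted_source_mean = (w * p_s) @ cond_means                    # class-weighted source mean
    return np.linalg.norm(weighted_source_mean - feat_t.mean(axis=0))

def adjust_with_class_weight(p_s_given_x, w):
    """Step 2: reweight P_S(Y|x) by w and renormalize to get a target-domain posterior."""
    unnormalized = w * p_s_given_x                                    # w_i * P_S(Y=i | x)
    return unnormalized / unnormalized.sum(axis=-1, keepdims=True)

# Toy usage with two classes and 3-dimensional features
rng = np.random.default_rng(0)
feat_s, y_s = rng.random((200, 3)), rng.integers(0, 2, 200)
feat_t = rng.random((150, 3))
w = np.array([1.4, 0.6])                                             # learned / estimated class weight

print(weighted_mean_discrepancy(feat_s, y_s, feat_t, w))
print(adjust_with_class_weight(np.array([0.3, 0.7]), w))
```

In the full model the same class-weighted replacement is applied to every statistic inside $\hat{\mathcal {L}}_{inv}$ (or to the domain-discriminator loss in the DANN variant), $\mathbf {w}$ is learned jointly with the network under the task loss of Eq. (DISPLAY_FORM13), and the second function is our reading of the adjustment in Eq. (DISPLAY_FORM16), with the learned $\mathbf {w}$ standing in for $\gamma $.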
<<</Align @!START@$\rm {P}(\rm {Y}|\rm {X})$@!END@ with Class Weight>>> <<</Weighted Domain Invariant Representation Learning>>> <<<Experiment>>> <<<Experiment Design>>> Through the experiments, we empirically examined our analysis of DIRL and the effectiveness of our proposed solution in dealing with the problem DIRL suffers from. In addition, we studied the impact of each step (described in §SECREF10 and §SECREF14, respectively) on our proposed solution. To perform the study, we carried out a performance comparison between the following models: SO: the source-only model trained using source domain labeled data without any domain adaptation. CMD: the center-momentum-based domain adaptation model BIBREF3 of the original DIRL framework that implements $\mathcal {L}_{inv}$ with $\text{CMD}_K$. DANN: the adversarial-learning-based domain adaptation model BIBREF2 of the original DIRL framework that implements $\mathcal {L}_{inv}$ with $\text{JSD}(\rm {P}_S, \rm {P}_T)$. $\text{CMD}^\dagger $: the weighted version of the CMD model that only applies the first step (described in §SECREF10) of our proposed method. $\text{DANN}^\dagger $: the weighted version of the DANN model that only applies the first step of our proposed method. $\text{CMD}^{\dagger \dagger }$: the weighted version of the CMD model that applies both the first and second (described in §SECREF14) steps of our proposed method. $\text{DANN}^{\dagger \dagger }$: the weighted version of the DANN model that applies both the first and second steps of our proposed method. $\text{CMD}^{*}$: a variant of $\text{CMD}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ (estimated from target labeled data) to $\mathbf {w}$ and fixes this value during model training. $\text{DANN}^{*}$: a variant of $\text{DANN}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ to $\mathbf {w}$ and fixes this value during model training. Intrinsically, SO can provide an empirical lower bound for the domain adaptation methods. $\text{CMD}^{*}$ and $\text{DANN}^{*}$ can provide the empirical upper bounds of $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$, respectively. In addition, by comparing the performance of $\text{CMD}^{*}$ and $\text{DANN}^{*}$ with that of $\text{SO}$, we can know the effectiveness of the DIRL framework when $\rm {P}(\rm {Y})$ does not shift across domains. By comparing $\text{CMD}^\dagger $ with $\text{CMD}$, or comparing $\text{DANN}^\dagger $ with $\text{DANN}$, we can know the effectiveness of the first step of our proposed method. By comparing $\text{CMD}^{\dagger \dagger }$ with $\text{CMD}^{\dagger }$, or comparing $\text{DANN}^{\dagger \dagger }$ with $\text{DANN}^{\dagger }$, we can know the impact of the second step of our proposed method. And finally, by comparing $\text{CMD}^{\dagger \dagger }$ with $\text{CMD}$, or comparing $\text{DANN}^{\dagger \dagger }$ with $\text{DANN}$, we can know the general effectiveness of our proposed solution. <<</Experiment Design>>> <<<Dataset and Task Design>>> We conducted experiments on the Amazon reviews dataset BIBREF9, which is a benchmark dataset in the cross-domain sentiment analysis field. This dataset contains Amazon product reviews from four different product domains: Books (B), DVD (D), Electronics (E), and Kitchen (K) appliances. Each review is originally associated with a rating of 1-5 stars and is encoded as a 5,000-dimensional feature vector of bag-of-words unigrams and bigrams. 
<<<Binary-Class.>>> From this dataset, we constructed 12 binary-class cross-domain sentiment analysis tasks: B$\rightarrow $D, B$\rightarrow $E, B$\rightarrow $K, D$\rightarrow $B, D$\rightarrow $E, D$\rightarrow $K, E$\rightarrow $B, E$\rightarrow $D, E$\rightarrow $K, K$\rightarrow $B, K$\rightarrow $D, K$\rightarrow $E. Following the setting of previous works, we treated a reviews as class `1' if it was ranked up to 3 stars, and as class `2' if it was ranked 4 or 5 stars. For each task, $\mathcal {D}_S$ consisted of 1,000 examples of each class, and $\mathcal {D}_T$ consists of 1500 examples of class `1' and 500 examples of class `2'. In addition, since it is reasonable to assume that $\mathcal {D}_T$ can reveal the distribution of target domain data, we controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$. Using the same label assigning mechanism, we also studied model performance over different degrees of $\rm {P}(\rm {Y})$ shift, which was evaluated by the max value of $\rm {P}_S(\rm {Y}=i)/\rm {P}_T(\rm {Y}=i), \forall i=1, \cdots , L$. Please refer to Appendix C for more detail about the task design for this study. <<</Binary-Class.>>> <<<Multi-Class.>>> We additionally constructed 12 multi-class cross-domain sentiment classification tasks. Tasks were designed to distinguish reviews of 1 or 2 stars (class 1) from those of 4 stars (class 2) and those of 5 stars (class 3). For each task, $\mathcal {D}_S$ contained 1000 examples of each class, and $\mathcal {D}_T$ consisted of 500 examples of class 1, 1500 examples of class 2, and 1000 examples of class 3. Similarly, we also controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$. <<</Multi-Class.>>> <<</Dataset and Task Design>>> <<<Implementation Detail>>> For all studied models, we implemented $G$ and $f$ using the same architectures as those in BIBREF3. For those DANN-based methods (i.e., DANN, $\text{DANN}^{\dagger }$, $\text{DANN}^{\dagger \dagger }$, and $\text{DANN}^{*}$), we implemented the discriminator $D$ using a 50 dimensional hidden layer with relu activation functions and a linear classification layer. Hyper-parameter $K$ of $\text{CMD}_K$ and $\widehat{\text{CMD}}_K$ was set to 5 as suggested by BIBREF3. Model optimization was performed using RmsProp BIBREF30. Initial learning rate of $\mathbf {w}$ was set to 0.01, while that of other parameters was set to 0.005 for all tasks. Hyper-parameter $\alpha $ was set to 1 for all of the tested models. We searched for this value in range $\alpha =[1, \cdots , 10]$ on task B $\rightarrow $ K. Within the search, label distribution was set to be uniform, i.e., $\rm {P}(\rm {Y}=i)=1/L$, for both domain B and K. We chose the value that maximize the performance of CMD on testing data of domain K. You may notice that this practice conflicts with the setting of unsupervised domain adaptation that we do not have labeled data of the target domain for training or developing. However, we argue that this practice would not make it unfair for model comparison since all of the tested models shared the same value of $\alpha $ and $\alpha $ was not directly fine-tuned on any tested task. With the same consideration, for every tested model, we reported its best performance achieved on testing data of the target domain during its training. To initialize $\mathbf {w}$, we used label prediction of the source-only model. 
Specifically, let $\rm {P}_{SO}(\rm {Y}|\rm {X}; \mathbf {\theta }_{SO})$ denote the trained source-only model. We initialized $\mathbf {w}_i$ by: Here, $\mathbb {I}$ denotes the indication function. To offer an intuitive understanding to this strategy, we report performance of WCMD$^{\dagger \dagger }$ over different initializations of $\mathbf {w}$ on 2 within-group (B$\rightarrow $D, E$\rightarrow $K) and 2 cross-group (B$\rightarrow $K, D$\rightarrow $E) binary-class domain adaptation tasks in Figure FIGREF33. Here, we say that domain B and D are of a group, and domain E and K are of another group since B and D are similar, as are E and K, but the two groups are different from one another BIBREF9. Note that $\rm {P}_{S}(\rm {Y}=1)=0.5$ is a constant, which is estimated using source labeled data. From the figure, we can obtain three main observations. First, WCMD$^{\dagger \dagger }$ generally outperformed its CMD counterparts with different initialization of $\mathbf {w}$. Second, it was better to initialize $\mathbf {w}$ with a relatively balanced value, i.e., $\mathbf {w}_i \rm {P}_S(\rm {Y}=i) \rightarrow \frac{1}{L}$ (in this experiment, $L=2$). Finally, $\mathbf {w}^0$ was often a good initialization of $\mathbf {w}$, indicating the effectiveness of the above strategy. <<</Implementation Detail>>> <<<Main Result>>> Table TABREF27 shows model performance on the 12 binary-class cross-domain tasks. From this table, we can obtain the following observations. First, CMD and DANN underperform the source-only model (SO) on all of the 12 tested tasks, indicating that DIRL in the studied situation will degrade the domain adaptation performance rather than improve it. This observation confirms our analysis. Second, $\text{CMD}^{\dagger \dagger }$ consistently outperformed CMD and SO. This observation shows the effectiveness of our proposed method for addressing the problem of the DIRL framework in the studied situation. Similar conclusion can also be obtained by comparing performance of $\text{DANN}^{\dagger \dagger }$ with that of DANN and SO. Third, $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$ consistently outperformed $\text{CMD}$ and DANN, respectively, which shows the effectiveness of the first step of our proposed method. Finally, on most of the tested tasks, $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$ outperforms $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$, respectively. Figure FIGREF35 depicts the relative improvement, e.g., $(\text{Acc}(\text{CMD})-\text{Acc}(\text{SO}))/\text{Acc}(\text{SO})$, of the domain adaptation methods over the SO baseline under different degrees of $\rm {P}(\rm {Y})$ shift, on two binary-class domain adaptation tasks (You can refer to Appendix C for results of the other models on other tasks). From the figure, we can see that the performance of CMD generally got worse as the increase of $\rm {P}(\rm {Y})$ shift. In contrast, our proposed model $\text{CMD}^{\dagger \dagger }$ performed robustly to the varying of $\rm {P}(\rm {Y})$ shift degree. Moreover, it can achieve the near upbound performance characterized by $\text{CMD}^{*}$. This again verified the effectiveness of our solution. Table TABREF34 reports model performance on the 2 within-group (B$\rightarrow $D, E$\rightarrow $K) and the 2 cross-group (B$\rightarrow $K, D$\rightarrow $E) multi-class domain adaptation tasks (You can refer to Appendix D for results on the other tasks). 
From this table, we observe that on some tested tasks, $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$ did not greatly outperform, or even slightly underperformed, $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$, respectively. A possible explanation of this phenomenon is that the distribution of $\mathcal {D}_T$ also differs from that of the target domain testing dataset. Therefore, the value of $\mathbf {w}$ estimated or learned using $\mathcal {D}_T$ is not fully suitable for application to the testing dataset. This explanation is supported by the observation that $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$ also slightly outperform $\text{CMD}^{*}$ and $\text{DANN}^{*}$ on these tasks, respectively. <<</Main Result>>> <<</Experiment>>> <<<Conclusion>>> In this paper, we studied the problem of the popular domain-invariant representation learning (DIRL) framework for domain adaptation when $\rm {P}(\rm {Y})$ changes across domains. To address the problem, we proposed a weighted version of DIRL (WDIRL). We showed that existing methods of the DIRL framework can be easily transferred to our WDIRL framework. Extensive experimental studies on benchmark cross-domain sentiment analysis datasets verified our analysis and showed the effectiveness of our proposed solution. <<</Conclusion>>> <<</Title>>>
{ "references": [ "To achieve this purpose, we introduce a trainable class weight $\\mathbf {w}$ to reweigh source domain examples by class when performing DIRL, with $\\mathbf {w}_i > 0$" ], "type": "extractive" }
1909.08167
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: How is DIRL evaluated? Context: <<<Title>>> Weighed Domain-Invariant Representation Learning for Cross-domain Sentiment Analysis <<<Abstract>>> Cross-domain sentiment analysis is currently a hot topic in the research and engineering areas. One of the most popular frameworks in this field is the domain-invariant representation learning (DIRL) paradigm, which aims to learn a distribution-invariant feature representation across domains. However, in this work, we find out that applying DIRL may harm domain adaptation when the label distribution $\rm{P}(\rm{Y})$ changes across domains. To address this problem, we propose a modification to DIRL, obtaining a novel weighted domain-invariant representation learning (WDIRL) framework. We show that it is easy to transfer existing SOTA DIRL models to WDIRL. Empirical studies on extensive cross-domain sentiment analysis tasks verified our statements and showed the effectiveness of our proposed solution. <<</Abstract>>> <<<Introduction>>> Sentiment analysis aims to predict sentiment polarity of user-generated data with emotional orientation like movie reviews. The exponentially increase of online reviews makes it an interesting topic in research and industrial areas. However, reviews can span so many different domains and the collection and preprocessing of large amounts of data for new domains is often time-consuming and expensive. Therefore, cross-domain sentiment analysis is currently a hot topic, which aims to transfer knowledge from a label-rich source domain (S) to the label-few target domain (T). In recent years, one of the most popular frameworks for cross-domain sentiment analysis is the domain invariant representation learning (DIRL) framework BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. Methods of this framework follow the idea of extracting a domain-invariant feature representation, in which the data distributions of the source and target domains are similar. Based on the resultant representations, they learn the supervised classifier using source rich labeled data. The main difference among these methods is the applied technique to force the feature representations to be domain-invariant. However, in this work, we discover that applying DIRL may harm domain adaptation in the situation that the label distribution $\rm {P}(\rm {Y})$ shifts across domains. Specifically, let $\rm {X}$ and $\rm {Y}$ denote the input and label random variable, respectively, and $G(\rm {X})$ denote the feature representation of $\rm {X}$. We found out that when $\rm {P}(\rm {Y})$ changes across domains while $\rm {P}(\rm {X}|\rm {Y})$ stays the same, forcing $G(\rm {X})$ to be domain-invariant will make $G(\rm {X})$ uninformative to $\rm {Y}$. This will, in turn, harm the generation of the supervised classifier to the target domain. In addition, for the more general condition that both $\rm {P}(\rm {Y})$ and $\rm {P}(\rm {X}|\rm {Y})$ shift across domains, we deduced a conflict between the object of making the classification error small and that of making $G(\rm {X})$ domain-invariant. We argue that the problem is worthy of studying since the shift of $\rm {P}(\rm {Y})$ exists in many real-world cross-domain sentiment analysis tasks BIBREF0. 
For example, the marginal distribution of the sentiment of a product can be affected by the overall social environment and change in different time periods; and for different products, their marginal distributions of the sentiment are naturally considered different. Moreover, there are many factors, such as the original data distribution, data collection time, and data clearing method, that can affect $\rm {P}(\rm {Y})$ of the collected target domain unlabeled dataset. Note that in the real-world cross-domain tasks, we do not know the labels of the collected target domain data. Thus, we cannot previously align its label distribution $\rm {P}_T(\mathbf {Y})$ with that of source domain labeled data $\rm {P}_S(\mathbf {Y})$, as done in many previous works BIBREF0, BIBREF2, BIBREF5, BIBREF4, BIBREF6, BIBREF7. To address the problem of DIRL resulted from the shift of $\rm {P}(\rm {Y})$, we propose a modification to DIRL, obtaining a weighted domain-invariant representation learning (WDIRL) framework. This framework additionally introduces a class weight $\mathbf {w}$ to weigh source domain examples by class, hoping to make $\rm {P}(\rm {Y})$ of the weighted source domain close to that of the target domain. Based on $\mathbf {w}$, it resolves domain shift in two steps. In the first step, it forces the marginal distribution $\rm {P}(\rm {X})$ to be domain-invariant between the target domain and the weighted source domain instead of the original source, obtaining a supervised classifier $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ and a class weight $\mathbf {w}$. In the second step, it resolves the shift of $\rm {P}(\rm {Y}|\rm {X})$ by adjusting $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ using $\mathbf {w}$ for label prediction in the target domain. We detail these two steps in §SECREF4. Moreover, we will illustrate how to transfer existing DIRL models to their WDIRL counterparts, taking the representative metric-based CMD model BIBREF3 and the adversarial-learning-based DANN model BIBREF2 as an example, respectively. In summary, the contributions of this paper include: ($\mathbf {i}$) We theoretically and empirically analyse the problem of DIRL for domain adaptation when the marginal distribution $\rm {P}(\rm {Y})$ shifts across domains. ($\mathbf {ii}$) We proposed a novel method to address the problem and show how to incorporate it with existent DIRL models. ($\mathbf {iii}$) Experimental studies on extensive cross-domain sentiment analysis tasks show that models of our WDIRL framework can greatly outperform their DIRL counterparts. <<</Introduction>>> <<<Preliminary and Related Work>>> <<<Domain Adaptation>>> For expression consistency, in this work, we consider domain adaptation in the unsupervised setting (however, we argue that our analysis and solution also applies to the supervised and semi-supervised domain adaptation settings). In the unsupervised domain adaptation setting, there are two different distributions over $\rm {X} \times \rm {Y}$: the source domain $\rm {P}_S(\rm {X},\rm {Y})$ and the target domain $\rm {P}_T(\rm {X},\rm {Y})$. And there is a labeled data set $\mathcal {D}_S$ drawn $i.i.d$ from $\rm {P}_S(\rm {X},\rm {Y})$ and an unlabeled data set $\mathcal {D}_T$ drawn $i.i.d.$ from the marginal distribution $\rm {P}_T(\rm {X})$: The goal of domain adaptation is to build a classier $f:\rm {X} \rightarrow \rm {Y}$ that has good performance in the target domain using $\mathcal {D}_S$ and $\mathcal {D}_T$. 
For this purpose, many approaches have been proposed from different views, such as instance reweighting BIBREF8, pivot-based information passing BIBREF9, spectral feature alignment BIBREF10, subsampling BIBREF11, and, of course, domain-invariant representation learning BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22. <<</Domain Adaptation>>> <<<Domain Invariant Representation Learning>>> Domain invariant representation learning (DIRL) is a very popular framework for performing domain adaptation in the cross-domain sentiment analysis field BIBREF23, BIBREF4, BIBREF24, BIBREF7. It is heavily motivated by the following theorem BIBREF25. Theorem 1 For a hypothesis $h$, Here, $\mathcal {L}_S(h)$ denotes the expected loss with hypothesis $h$ in the source domain, $\mathcal {L}_T(h)$ denotes the counterpart in the target domain, and $d_1$ is a measure of divergence between two distributions. Based on Theorem UNKREF3, and assuming that performing a feature transform on $\rm {X}$ will not increase the values of the first and third terms on the right side of Ineq. (DISPLAY_FORM4), methods of the DIRL framework apply a feature map $G$ onto $\rm {X}$, hoping to obtain a feature representation $G(\rm {X})$ that has a lower value of ${d}_{1}(\rm {P}_S(G(\rm {X})), \rm {P}_T(G(\rm {X})))$. To this end, different methods have been proposed. These methods can be roughly divided into two directions. The first direction is to design a differentiable metric to explicitly evaluate the discrepancy between two distributions. We refer to methods of this direction as metric-based DIRL methods. A representative work of this direction is the central-moment-based model proposed by BIBREF3. In that work, they proposed a central moment discrepancy (CMD) metric to evaluate the discrepancy between two distributions. Specifically, let $\rm {X}_S$ and $\rm {X}_T$ denote $M$-dimensional random vectors on the compact interval $[a; b]^M$ over the distributions $\rm {P}_S$ and $\rm {P}_T$, respectively. The CMD loss between $\rm {P}_S$ and $\rm {P}_T$ is defined by: Here, $\mathbb {E}(\rm {X})$ denotes the expectation of $\rm {X}$ over the distribution $\rm {P}_S(\rm {X})$, and is the $k$-th central moment, where $\rm {X}_i$ denotes the $i^{th}$ dimensional variable of $\rm {X}$. The second direction is to perform adversarial training between the feature generator $G$ and a domain discriminator $D$. We refer to methods of this direction as adversarial-learning-based methods. As a representative, BIBREF2 trained $D$ to distinguish the domain of a given example $x$ based on its representation $G(x)$. At the same time, they encouraged $G$ to deceive $D$, i.e., to make $D$ unable to distinguish the domain of $x$. More specifically, $D$ was trained to minimize the loss: over its trainable parameters, while in contrast $G$ was trained to maximize $\mathcal {L}_d$. According to the work of BIBREF26, this is equivalent to minimizing the Jensen-Shannon divergence BIBREF27, BIBREF28 $\text{JSD}(\rm {P}_S, \rm {P}_T)$ between $\rm {P}_S(G(\rm {X}))$ and $\rm {P}_T(G(\rm {X}))$ over $G$. Here, for a concise expression, we write $\rm {P}$ as the shorthand for $\rm {P}(G(\rm {X}))$.
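To make the metric-based direction concrete, the following is a minimal NumPy sketch of the CMD loss between two batches of feature vectors, following the description above. The order $K=5$ and the interval bounds $a=0$, $b=1$ used in the toy call are illustrative assumptions, not values fixed by this passage.

```python
import numpy as np

def cmd(x_s, x_t, k_max=5, a=0.0, b=1.0):
    """Central moment discrepancy between two samples of shape (n, M),
    assuming every feature lies in the interval [a, b]."""
    span = abs(b - a)
    mu_s, mu_t = x_s.mean(axis=0), x_t.mean(axis=0)
    loss = np.linalg.norm(mu_s - mu_t) / span          # first-order (mean) term
    for k in range(2, k_max + 1):                      # higher-order central moments
        c_s = ((x_s - mu_s) ** k).mean(axis=0)
        c_t = ((x_t - mu_t) ** k).mean(axis=0)
        loss += np.linalg.norm(c_s - c_t) / span ** k
    return loss

# toy usage: two batches of 500 feature vectors drawn from slightly shifted Gaussians
rng = np.random.default_rng(0)
x_s = rng.normal(0.4, 0.1, size=(500, 50)).clip(0.0, 1.0)
x_t = rng.normal(0.6, 0.1, size=(500, 50)).clip(0.0, 1.0)
print(cmd(x_s, x_t))
```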
The task loss is the combination of the supervised learning loss $\mathcal {L}_{sup}$ and the domain-invariant learning loss $\mathcal {L}_{inv}$, which are defined on $\mathcal {D}_S$ only and on the combination of $\mathcal {D}_S$ and $\mathcal {D}_T$, respectively: Here, $\alpha $ is a hyper-parameter for loss balance, and the aforementioned domain adversarial loss $\text{JSD}(\rm {P}_S, \rm {P}_T)$ and $\text{CMD}_K$ are two concrete forms of $\mathcal {L}_{inv}$. <<</Domain Invariant Representation Learning>>> <<</Preliminary and Related Work>>> <<<Problem of Domain-Invariant Representation Learning>>> In this work, we found that applying DIRL may harm domain adaptation in the situation where $\rm {P}(\rm {Y})$ shifts across domains. Specifically, when $\rm {P}_S(\rm {Y})$ differs from $\rm {P}_T(\rm {Y})$, forcing the feature representations $G(\rm {X})$ to be domain-invariant may increase the value of $\mathcal {L}_S(h)$ in Ineq. (DISPLAY_FORM4) and consequently increase the value of $\mathcal {L}_T(h)$, which means a decrease in target domain performance. In the following, we start our analysis under the condition that $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$. Then, we consider the more general condition that $\rm {P}_S(\rm {X}|\rm {Y})$ also differs from $\rm {P}_T(\rm {X}|\rm {Y})$. When $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$, we have the following theorem. Theorem 2 Given $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$, if $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$ and a feature map $\mathcal {M}$ makes $\rm {P}_S(\mathcal {M}(\rm {X}))=\rm {P}_T(\mathcal {M}(\rm {X}))$, then $\rm {P}_S(\rm {Y}=i|\mathcal {M}(\rm {X}))=\rm {P}_S(\rm {Y}=i)$. Proofs appear in Appendix A. <<<Remark.>>> According to Theorem UNKREF8, we know that when $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$ and $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$, forcing $G(\rm {X})$ to be domain-invariant tends to mix data of class $i$ with data of other classes in the space of $G(\rm {X})$. This will make it difficult for the supervised classifier to distinguish inputs of class $i$ from inputs of the other classes. Consider an extreme case in which every instance $x$ is mapped to the same point $g_0$ in $G(\rm {X})$. In this case, $\rm {P}_S(G(\rm {X})=g_0)= \rm {P}_T(G(\rm {X})=g_0) = 1$. Therefore, $G(\rm {X})$ is domain-invariant. As a result, the supervised classifier will assign the label $y^* = \operatornamewithlimits{arg\,max}_y \rm {P}_S(\rm {Y}=y)$ to all input examples. This is definitely unacceptable. To give a more intuitive illustration of the above analysis, we offer several empirical studies on Theorem UNKREF8 in Appendix B. When $\rm {P}_S(\rm {Y})\ne \rm {P}_T(\rm {Y})$ and $\rm {P}_S(\rm {X}|\rm {Y}) \ne \rm {P}_T(\rm {X}|\rm {Y})$, we did not obtain as strong a conclusion as Theorem UNKREF8. Instead, we deduced a conflict between the objective of achieving superior classification performance and that of making features domain-invariant. Suppose that $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$ and instances of class $i$ are completely distinguishable from instances of the remaining classes in $G(\rm {X})$, i.e.: In DIRL, we hope that: Consider the region $x \in \mathcal {X}_i$, where $\rm {P}(G(\rm {X}=x)|\rm {Y}=i)>0$. According to the above assumption, we know that $\rm {P}(G(\rm {X}=x \in \mathcal {X}_i)|\rm {Y} \ne i) = 0$. Therefore, applying DIRL will force $\rm {P}_S(\rm {Y}=i)\rm {P}_S(G(\rm {X}=x)|\rm {Y}=i) = \rm {P}_T(\rm {Y}=i)\rm {P}_T(G(\rm {X}=x)|\rm {Y}=i)$ in the region $x \in \mathcal {X}_i$.
Integrating both sides of this equation over $x \in \mathcal {X}_i$, we have $\rm {P}_S(\rm {Y}=i) = \rm {P}_T(\rm {Y}=i)$. This deduction contradicts the setting that $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$. Therefore, $G(\rm {X})$ cannot be fully class-separable when it is domain-invariant. Note that the objective of supervised learning is exactly to make $G(\rm {X})$ class-separable. Thus, this indicates a conflict between supervised learning and domain-invariant representation learning. Based on the above analysis, we can conclude that it is impossible to obtain a feature representation $G(X)$ that is class-separable and, at the same time, domain-invariant using the DIRL framework when $\rm {P}(\rm {Y})$ shifts across domains. However, the shift of $\rm {P}(\rm {Y})$ can exist in many cross-domain sentiment analysis tasks. Therefore, it is worth studying how to deal with this problem of DIRL. <<</Remark.>>> <<</Problem of Domain-Invariant Representation Learning>>> <<<Weighted Domain Invariant Representation Learning>>> According to the above analysis, we propose a weighted version of DIRL to address the problem that the shift of $\rm {P}(\rm {Y})$ causes to DIRL. The key idea of this framework is to first align $\rm {P}(\rm {Y})$ across domains before performing domain-invariant learning, and then take into account the shift of $\rm {P}(\rm {Y})$ in the label prediction procedure. Specifically, it introduces a class weight $\mathbf {w}$ to weigh source domain examples by class. Based on the weighted source domain, the domain shift problem is resolved in two steps. In the first step, it applies DIRL on the target domain and the weighted source domain, aiming to alleviate the influence of the shift of $\rm {P}(\rm {Y})$ during the alignment of $\rm {P}(\rm {X}|\rm {Y})$. In the second step, it uses $\mathbf {w}$ to reweigh the supervised classifier $\rm {P}_S(\rm {Y}|\rm {X})$ obtained in the first step for target domain label prediction. We detail these two steps in §SECREF10 and §SECREF14, respectively. <<<Align @!START@$\rm {P}(\rm {X}|\rm {Y})$@!END@ with Class Weight>>> The motivation behind this practice is to adjust the data distribution of the source or the target domain to alleviate the shift of $\rm {P}(\rm {Y})$ across domains before applying DIRL. Since we only have labels for source domain data, we choose to adjust the data distribution of the source domain. To achieve this purpose, we introduce a trainable class weight $\mathbf {w}$ to reweigh source domain examples by class when performing DIRL, with $\mathbf {w}_i > 0$. Specifically, we hope that $\mathbf {w}_i \rm {P}_S(\rm {Y}=i) = \rm {P}_T(\rm {Y}=i)$, and we denote by $\mathbf {w}^*$ the value of $\mathbf {w}$ that makes this equation hold. We shall see that when $\mathbf {w}=\mathbf {w}^*$, DIRL aligns $\rm {P}_S(G(\rm {X})|\rm {Y})$ with $\rm {P}_T(G(\rm {X})|\rm {Y})$ without the shift of $\rm {P}(\rm {Y})$. According to our analysis, we know that due to the shift of $\rm {P}(\rm {Y})$, there is a conflict between the training objectives of the supervised learning loss $\mathcal {L}_{sup}$ and the domain-invariant learning loss $\mathcal {L}_{inv}$. The degree of this conflict will decrease as $\rm {P}_S(\rm {Y})$ gets close to $\rm {P}_T(\rm {Y})$. Therefore, during model training, $\mathbf {w}$ is expected to be optimized toward $\mathbf {w}^*$, since it will make $\rm {P}(\rm {Y})$ of the weighted source domain close to $\rm {P}_T(\rm {Y})$, so as to resolve the conflict.
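As a tiny numerical illustration of the ideal class weight $\mathbf {w}^*$ discussed above (the label distributions below are made-up toy values): weighting source examples by $\mathbf {w}^*_i = \rm {P}_T(\rm {Y}=i)/\rm {P}_S(\rm {Y}=i)$ makes the label distribution of the weighted source domain coincide with that of the target domain, removing the $\rm {P}(\rm {Y})$ shift before domain-invariant learning.

```python
import numpy as np

p_s_y = np.array([0.5, 0.5])    # source label distribution P_S(Y) (toy values)
p_t_y = np.array([0.75, 0.25])  # target label distribution P_T(Y) (toy values)

w_star = p_t_y / p_s_y          # ideal class weight w*_i = P_T(Y=i) / P_S(Y=i)
weighted_p_s_y = w_star * p_s_y # label distribution of the class-weighted source domain

print(w_star)          # [1.5 0.5]
print(weighted_p_s_y)  # [0.75 0.25] -> identical to P_T(Y)
```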
We now show how to transfer existing DIRL models to their WDIRL counterparts with the above idea. Let $\mathbb {S}:\rm {P} \rightarrow \mathbb {R}$ denote a statistic function defined over a distribution $\rm {P}$. For example, the expectation function $\mathbb {E}(\rm {X})$ in $\mathbb {E}(\rm {X}_S) \equiv \mathbb {E}(\rm {X})(\rm {P}_S(\rm {X}))$ is a concrete instantiation of $\mathbb {S}$. In general, to transfer models from DIRL to WDIRL, we should replace $\mathbb {S}(\rm {P}_S(\rm {X}))$ defined in $\mathcal {L}_{inv}$ with its class-weighted counterpart $\sum _i \mathbf {w}_i \rm {P}_S(\rm {Y}=i) \mathbb {S}(\rm {P}_S(\rm {X}|\rm {Y}=i))$. Take the CMD metric as an example. In WDIRL, the revised form of ${\text{CMD}}_K$ is defined by: Here, $\mathbb {E}(\rm {X}_S|\rm {Y}_S=i) \equiv \mathbb {E}(\rm {X})(\rm {P}_S(\rm {X}|\rm {Y}=i))$ denotes the expectation of $\rm {X}$ over the distribution $\rm {P}_S(\rm {X}|\rm {Y}=i)$. Note that both $\rm {P}_S(\rm {Y}=i)$ and $\mathbb {E}(\rm {X}_S|\rm {Y}_S=i)$ can be estimated using source labeled data, and $\mathbb {E}(\rm {X}_T)$ can be estimated using target unlabeled data. As for the adversarial-learning-based DIRL methods, e.g., DANN BIBREF2, the revised domain-invariant loss can be precisely defined by: During model training, $D$ is optimized to minimize $\hat{\mathcal {L}}_d$, while $G$ and $\mathbf {w}$ are optimized to maximize $\hat{\mathcal {L}}_d$. In the following, we denote by $\widehat{\text{JSD}}(\rm {P}_S, \rm {P}_T)$ the equivalent loss defined over $G$ for the revised version of domain adversarial learning. The general task loss in WDIRL is defined by: where $\hat{\mathcal {L}}_{inv}$ is a unified representation of the domain-invariant loss in WDIRL, such as $\widehat{\text{CMD}}_K$ and $\widehat{\text{JSD}}(\rm {P}_S, \rm {P}_T)$. <<</Align @!START@$\rm {P}(\rm {X}|\rm {Y})$@!END@ with Class Weight>>> <<<Align @!START@$\rm {P}(\rm {Y}|\rm {X})$@!END@ with Class Weight>>> In the above step, we align $\rm {P}(\rm {X}|\rm {Y})$ across domains by performing domain-invariant learning on the class-weighted source domain and the original target domain. In this step, we deal with the shift of $\rm {P}(\rm {Y})$. Suppose that we have successfully resolved the shift of $\rm {P}(\rm {X}|\rm {Y})$ with $G$, i.e., $\rm {P}_S(G(\rm {X})|\rm {Y})=\rm {P}_T(G(\rm {X})|\rm {Y})$. Then, according to the work of BIBREF29, we have: where $\gamma (\rm {Y}=i)={\rm {P}_T(\rm {Y}=i)}/{\rm {P}_S(\rm {Y}=i)}$. Of course, in most real-world tasks, we do not know the value of $\gamma (\rm {Y}=i)$. However, note that $\gamma (\rm {Y}=i)$ is exactly the expected class weight $\mathbf {w}^*_i$. Therefore, a natural practice of this step is to estimate $\gamma (\rm {Y}=i)$ with the $\mathbf {w}_i$ obtained in the first step and estimate $\rm {P}_T(\rm {Y}|G(\rm {X}))$ with: In summary, to transfer methods of the DIRL paradigm to WDIRL, we should: first revise the definition of $\mathcal {L}_{inv}$, obtaining its corresponding WDIRL form $\hat{\mathcal {L}}_{inv}$; then perform supervised learning and domain-invariant representation learning on $\mathcal {D}_S$ and $\mathcal {D}_T$ according to Eq. (DISPLAY_FORM13), obtaining a supervised classifier $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ and a class weight vector $\mathbf {w}$; and finally, adjust $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ using $\mathbf {w}$ according to Eq. (DISPLAY_FORM16) and obtain the target domain classifier $\rm {P}_T(\rm {Y}|\rm {X}; \mathbf {\Phi })$.
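Because the revised weighted statistics and the posterior adjustment are given as display equations that are not reproduced in the text above, the following Python sketch only encodes the surrounding description and should be read as an assumption: the source statistic is re-estimated as a class-weighted combination of class-conditional statistics (only the first-moment term is shown), and the target posterior is obtained by rescaling the source posterior with $\mathbf {w}$ and renormalizing.

```python
import numpy as np

def weighted_source_mean(x_s, y_s, w):
    """Class-weighted first-moment statistic of the source domain:
    sum_i w_i * P_S(Y=i) * E[X_S | Y=i]; higher-order CMD terms are analogous."""
    classes = np.arange(len(w))
    p_s_y = np.array([(y_s == c).mean() for c in classes])
    class_means = np.stack([x_s[y_s == c].mean(axis=0) for c in classes])
    return (w * p_s_y) @ class_means   # replaces E(X_S) inside the CMD term

def adjust_posterior_for_label_shift(p_s_y_given_x, w):
    """Second step: P_T(Y=i | x) is proportional to w_i * P_S(Y=i | x)."""
    unnormalized = p_s_y_given_x * w
    return unnormalized / unnormalized.sum(axis=-1, keepdims=True)
```

With $\mathbf {w}=\mathbf {w}^*$ and matched class-conditional distributions, the weighted source mean coincides with the target-domain mean, which is what the first step aims at.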
<<</Align @!START@$\rm {P}(\rm {Y}|\rm {X})$@!END@ with Class Weight>>> <<</Weighted Domain Invariant Representation Learning>>> <<<Experiment>>> <<<Experiment Design>>> Through the experiments, we empirically studied our analysis on DIRL and the effectiveness of our proposed solution in dealing with the problem it suffered from. In addition, we studied the impact of each step described in §SECREF10 and §SECREF14 on our proposed solution. To perform the study, we carried out a performance comparison between the following models: SO: the source-only model trained using source domain labeled data without any domain adaptation. CMD: the central-moment-based domain adaptation model BIBREF3 of the original DIRL framework that implements $\mathcal {L}_{inv}$ with $\text{CMD}_K$. DANN: the adversarial-learning-based domain adaptation model BIBREF2 of the original DIRL framework that implements $\mathcal {L}_{inv}$ with $\text{JSD}(\rm {P}_S, \rm {P}_T)$. $\text{CMD}^\dagger $: the weighted version of the CMD model that only applies the first step (described in §SECREF10) of our proposed method. $\text{DANN}^\dagger $: the weighted version of the DANN model that only applies the first step of our proposed method. $\text{CMD}^{\dagger \dagger }$: the weighted version of the CMD model that applies both the first and second (described in §SECREF14) steps of our proposed method. $\text{DANN}^{\dagger \dagger }$: the weighted version of the DANN model that applies both the first and second steps of our proposed method. $\text{CMD}^{*}$: a variant of $\text{CMD}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ (estimated from target labeled data) to $\mathbf {w}$ and fixes this value during model training. $\text{DANN}^{*}$: a variant of $\text{DANN}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ to $\mathbf {w}$ and fixes this value during model training. Intrinsically, SO provides an empirical lower bound for the domain adaptation methods. $\text{CMD}^{*}$ and $\text{DANN}^{*}$ provide the empirical upper bounds of $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$, respectively. In addition, by comparing the performance of $\text{CMD}^{*}$ and $\text{DANN}^{*}$ with that of $\text{SO}$, we can assess the effectiveness of the DIRL framework when $\rm {P}(\rm {Y})$ does not shift across domains. By comparing $\text{CMD}^\dagger $ with $\text{CMD}$, or comparing $\text{DANN}^\dagger $ with $\text{DANN}$, we can assess the effectiveness of the first step of our proposed method. By comparing $\text{CMD}^{\dagger \dagger }$ with $\text{CMD}^{\dagger }$, or comparing $\text{DANN}^{\dagger \dagger }$ with $\text{DANN}^{\dagger }$, we can assess the impact of the second step of our proposed method. And finally, by comparing $\text{CMD}^{\dagger \dagger }$ with $\text{CMD}$, or comparing $\text{DANN}^{\dagger \dagger }$ with $\text{DANN}$, we can assess the general effectiveness of our proposed solution. <<</Experiment Design>>> <<<Dataset and Task Design>>> We conducted experiments on the Amazon reviews dataset BIBREF9, which is a benchmark dataset in the cross-domain sentiment analysis field. This dataset contains Amazon product reviews from four different product domains: Books (B), DVD (D), Electronics (E), and Kitchen (K) appliances. Each review is originally associated with a rating of 1-5 stars and is encoded as a 5,000-dimensional feature vector of bag-of-words unigrams and bigrams.
<<<Binary-Class.>>> From this dataset, we constructed 12 binary-class cross-domain sentiment analysis tasks: B$\rightarrow $D, B$\rightarrow $E, B$\rightarrow $K, D$\rightarrow $B, D$\rightarrow $E, D$\rightarrow $K, E$\rightarrow $B, E$\rightarrow $D, E$\rightarrow $K, K$\rightarrow $B, K$\rightarrow $D, K$\rightarrow $E. Following the setting of previous works, we treated a review as class `1' if it was rated up to 3 stars, and as class `2' if it was rated 4 or 5 stars. For each task, $\mathcal {D}_S$ consisted of 1,000 examples of each class, and $\mathcal {D}_T$ consisted of 1,500 examples of class `1' and 500 examples of class `2'. In addition, since it is reasonable to assume that $\mathcal {D}_T$ can reveal the distribution of target domain data, we controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$. Using the same label-assigning mechanism, we also studied model performance over different degrees of $\rm {P}(\rm {Y})$ shift, which was measured by the maximum value of $\rm {P}_S(\rm {Y}=i)/\rm {P}_T(\rm {Y}=i), \forall i=1, \cdots , L$. Please refer to Appendix C for more details about the task design for this study. <<</Binary-Class.>>> <<<Multi-Class.>>> We additionally constructed 12 multi-class cross-domain sentiment classification tasks. Tasks were designed to distinguish reviews of 1 or 2 stars (class 1) from those of 4 stars (class 2) and those of 5 stars (class 3). For each task, $\mathcal {D}_S$ contained 1,000 examples of each class, and $\mathcal {D}_T$ consisted of 500 examples of class 1, 1,500 examples of class 2, and 1,000 examples of class 3. Similarly, we also controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$. <<</Multi-Class.>>> <<</Dataset and Task Design>>> <<<Implementation Detail>>> For all studied models, we implemented $G$ and $f$ using the same architectures as those in BIBREF3. For the DANN-based methods (i.e., DANN, $\text{DANN}^{\dagger }$, $\text{DANN}^{\dagger \dagger }$, and $\text{DANN}^{*}$), we implemented the discriminator $D$ using a 50-dimensional hidden layer with ReLU activation functions and a linear classification layer. The hyper-parameter $K$ of $\text{CMD}_K$ and $\widehat{\text{CMD}}_K$ was set to 5, as suggested by BIBREF3. Model optimization was performed using RMSProp BIBREF30. The initial learning rate of $\mathbf {w}$ was set to 0.01, while that of the other parameters was set to 0.005 for all tasks. The hyper-parameter $\alpha $ was set to 1 for all of the tested models. We searched for this value in the range $\alpha \in [1, \cdots , 10]$ on task B $\rightarrow $ K. Within the search, the label distribution was set to be uniform, i.e., $\rm {P}(\rm {Y}=i)=1/L$, for both domains B and K. We chose the value that maximized the performance of CMD on the testing data of domain K. You may notice that this practice conflicts with the unsupervised domain adaptation setting, in which we do not have labeled target domain data for training or development. However, we argue that this practice does not make the model comparison unfair, since all of the tested models shared the same value of $\alpha $ and $\alpha $ was not directly fine-tuned on any tested task. With the same consideration, for every tested model, we reported its best performance achieved on the testing data of the target domain during its training. To initialize $\mathbf {w}$, we used the label predictions of the source-only model.
Specifically, let $\rm {P}_{SO}(\rm {Y}|\rm {X}; \mathbf {\theta }_{SO})$ denote the trained source-only model. We initialized $\mathbf {w}_i$ by: Here, $\mathbb {I}$ denotes the indicator function. To offer an intuitive understanding of this strategy, we report the performance of $\text{CMD}^{\dagger \dagger }$ over different initializations of $\mathbf {w}$ on 2 within-group (B$\rightarrow $D, E$\rightarrow $K) and 2 cross-group (B$\rightarrow $K, D$\rightarrow $E) binary-class domain adaptation tasks in Figure FIGREF33. Here, we say that domains B and D form one group, and domains E and K form another group, since B and D are similar, as are E and K, but the two groups are different from one another BIBREF9. Note that $\rm {P}_{S}(\rm {Y}=1)=0.5$ is a constant, which is estimated using source labeled data. From the figure, we can obtain three main observations. First, $\text{CMD}^{\dagger \dagger }$ generally outperformed its CMD counterparts across different initializations of $\mathbf {w}$. Second, it was better to initialize $\mathbf {w}$ with a relatively balanced value, i.e., $\mathbf {w}_i \rm {P}_S(\rm {Y}=i) \rightarrow \frac{1}{L}$ (in this experiment, $L=2$). Finally, $\mathbf {w}^0$ was often a good initialization of $\mathbf {w}$, indicating the effectiveness of the above strategy. <<</Implementation Detail>>> <<<Main Result>>> Table TABREF27 shows model performance on the 12 binary-class cross-domain tasks. From this table, we can obtain the following observations. First, CMD and DANN underperform the source-only model (SO) on all of the 12 tested tasks, indicating that DIRL in the studied situation will degrade the domain adaptation performance rather than improve it. This observation confirms our analysis. Second, $\text{CMD}^{\dagger \dagger }$ consistently outperformed CMD and SO. This observation shows the effectiveness of our proposed method for addressing the problem of the DIRL framework in the studied situation. A similar conclusion can also be obtained by comparing the performance of $\text{DANN}^{\dagger \dagger }$ with that of DANN and SO. Third, $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$ consistently outperformed $\text{CMD}$ and DANN, respectively, which shows the effectiveness of the first step of our proposed method. Finally, on most of the tested tasks, $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$ outperform $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$, respectively. Figure FIGREF35 depicts the relative improvement, e.g., $(\text{Acc}(\text{CMD})-\text{Acc}(\text{SO}))/\text{Acc}(\text{SO})$, of the domain adaptation methods over the SO baseline under different degrees of $\rm {P}(\rm {Y})$ shift, on two binary-class domain adaptation tasks (you can refer to Appendix C for results of the other models on other tasks). From the figure, we can see that the performance of CMD generally got worse as the degree of $\rm {P}(\rm {Y})$ shift increased. In contrast, our proposed model $\text{CMD}^{\dagger \dagger }$ performed robustly across varying degrees of $\rm {P}(\rm {Y})$ shift. Moreover, it can achieve performance near the upper bound characterized by $\text{CMD}^{*}$. This again verified the effectiveness of our solution. Table TABREF34 reports model performance on the 2 within-group (B$\rightarrow $D, E$\rightarrow $K) and the 2 cross-group (B$\rightarrow $K, D$\rightarrow $E) multi-class domain adaptation tasks (you can refer to Appendix D for results on the other tasks).
From this table, we observe that on some tested tasks, $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$ did not greatly outperform, or even slightly underperformed, $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$, respectively. A possible explanation of this phenomenon is that the distribution of $\mathcal {D}_T$ also differs from that of the target domain testing dataset. Therefore, the value of $\mathbf {w}$ estimated or learned using $\mathcal {D}_T$ is not fully suitable for application to the testing dataset. This explanation is supported by the observation that $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$ also slightly outperform $\text{CMD}^{*}$ and $\text{DANN}^{*}$ on these tasks, respectively. <<</Main Result>>> <<</Experiment>>> <<<Conclusion>>> In this paper, we studied the problem of the popular domain-invariant representation learning (DIRL) framework for domain adaptation when $\rm {P}(\rm {Y})$ changes across domains. To address the problem, we proposed a weighted version of DIRL (WDIRL). We showed that existing methods of the DIRL framework can be easily transferred to our WDIRL framework. Extensive experimental studies on benchmark cross-domain sentiment analysis datasets verified our analysis and showed the effectiveness of our proposed solution. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Through the experiments, we empirically studied our analysis on DIRL and the effectiveness of our proposed solution in dealing with the problem it suffered from." ], "type": "extractive" }
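The initialization of $\mathbf {w}$ mentioned in the Implementation Detail section of the context above is given by a display equation that is not reproduced in the text, so the following sketch is only an assumption consistent with the surrounding description: the target label distribution is estimated from the source-only model's predicted labels on the unlabeled target data and divided elementwise by the source label distribution.

```python
import numpy as np

def init_class_weight(target_pred_labels, p_s_y):
    """w^0_i = (fraction of D_T predicted as class i by the source-only model) / P_S(Y=i).
    target_pred_labels: integer class predictions on D_T; p_s_y: source label distribution."""
    p_t_hat = np.bincount(target_pred_labels, minlength=len(p_s_y)) / len(target_pred_labels)
    return p_t_hat / p_s_y

# toy check: 75% of the target examples predicted as class 0, uniform source labels
print(init_class_weight(np.array([0, 0, 0, 1]), np.array([0.5, 0.5])))  # [1.5 0.5]
```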
1909.04181
Please answer the following question with yes or no based on the given text. You only need to output 'Yes' or 'No' without any additional explanation. Question: Does the paper report F1-scores for the age and language variety tasks? Context: <<<Title>>> BERT-Based Arabic Social Media Author Profiling <<<Abstract>>> We report our models for detecting age, language variety, and gender from social media data in the context of the Arabic author profiling and deception detection shared task (APDA). We build simple models based on pre-trained bidirectional encoders from transformers (BERT). We first fine-tune the pre-trained BERT model on each of the three datasets with shared task released data. Then we augment shared task data with in-house data for gender and dialect, showing the utility of augmenting training data. Our best models on the shared task test data are acquired with a majority voting of various BERT models trained under different data conditions. We acquire 54.72% accuracy for age, 93.75% for dialect, 81.67% for gender, and 40.97% joint accuracy across the three tasks. <<</Abstract>>> <<<Introduction>>> The proliferation of social media has made it possible to collect user data in unprecedented ways. These data can come in the form of usage and behavior (e.g., who likes what on Facebook), network (e.g., who follows a given user on Instagram), and content (e.g., what people post to Twitter). Availability of such data has made it possible to make discoveries about individuals and communities, mobilizing social and psychological research and employing natural language processing methods. In this work, we focus on predicting social media user age, dialect, and gender based on posted language. More specifically, we use a total of 100 tweets from each manually-labeled user to predict each of these attributes. Our dataset comes from the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focus on building simple models using pre-trained bidirectional encoders from transformers (BERT) BIBREF1 under various data conditions. Our results show (1) the utility of augmenting training data, and (2) the benefit of using majority votes from our simple classifiers. In the rest of the paper, we introduce the dataset, followed by our experimental conditions and results. We then provide a literature review and conclude. <<</Introduction>>> <<<Data>>> For the purpose of our experiments, we use data released by the APDA shared task organizers. The dataset is divided into train and test sets by the organizers. The training set is distributed with labels for the three tasks of age, dialect, and gender. Following the standard shared task setup, the test set is distributed without labels, and participants were expected to submit their predictions on the test set. The shared task predictions are expected by the organizers at the level of users. The distribution has 100 tweets for each user, and so each tweet is distributed with a corresponding user id. As such, in total, the distributed training data has 2,250 users, contributing a total of 225,000 tweets. The official task test set contains 72,000 tweets posted by 720 users. For our experiments, we split the training data released by the organizers into a 90% TRAIN set (202,500 tweets from 2,025 users) and a 10% DEV set (22,500 tweets from 225 users). The age task labels come from the tagset {under-25, between-25 and 34, above-35}.
For dialects, the data are labeled with 15 classes, from the set {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. The gender task involves binary labels from the set {male, female}. <<</Data>>> <<<Experiments>>> As explained earlier, the shared task is set up at the user level, where the age, dialect, and gender of each user are the required predictions. In our experiments, we first model the task at the tweet level and then port these predictions to the user level. For our core modelling, we fine-tune BERT on the shared task data. We also introduce to the task an additional in-house dataset labeled with dialect and gender tags, as we will explain below. As a baseline, we use a small gated recurrent unit (GRU) model. We now introduce our tweet-level models. <<<Tweet-Level Models>>> <<<Baseline GRU.>>> Our baseline is a GRU network for each of the three tasks. We use the same network architecture across the 3 tasks. Each network contains a single unidirectional GRU layer with 500 units and an output linear layer. The network is trained end-to-end. Our input embedding layer is initialized with a standard normal distribution, with $\mu =0$ and $\sigma =1$, i.e., $W \sim N(0,1)$. We use a maximum sequence length of 50 tokens, and choose an arbitrary vocabulary size of 100,000 types, where we use the 100,000 most frequent words in TRAIN. To avoid over-fitting, we use dropout BIBREF2 with a rate of 0.5 on the hidden layer. For training, we use the Adam BIBREF3 optimizer with a fixed learning rate of $1e-3$. We employ batch training with a batch size of 32 for this model. We train the network for 15 epochs and save the model at the end of each epoch, choosing the model that achieves the highest accuracy on DEV as our best model. We present our best result on DEV in Table TABREF7. We report all our results using accuracy. Our best model obtains 42.48% for age, 37.50% for dialect, and 57.81% for gender. All models obtain their best results within 2 epochs. <<</Baseline GRU.>>> <<<BERT.>>> For each task, we fine-tune the BERT-Base Multilingual Cased model released by the authors BIBREF1. The model was pre-trained on the Wikipedia of 104 languages (including Arabic) with 12 layers, 768 hidden units each, and 12 attention heads, and has 110M parameters in the entire model. The vocabulary of the model consists of 119,547 shared WordPieces. We fine-tune the model with a maximum sequence length of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 15 epochs. We use the same network architecture and parameters across the 3 tasks. As Table TABREF7 shows, compared with the GRU baseline, BERT is 3.16% better for age, 4.85% better for dialect, and 2.45% better for gender. <<</BERT.>>> <<<Data Augmentation.>>> To further improve the performance of our models, we introduce in-house labeled data that we use to fine-tune BERT. For the gender classification task, we manually label an in-house dataset of 1,100 users with gender tags, including 550 female users and 550 male users. We obtain 162,829 tweets by crawling the 1,100 users' timelines. We combine this new gender dataset with the gender TRAIN data (from the shared task) to obtain an extended dataset, to which we refer as EXTENDED_Gender. For the dialect identification task, we randomly sample 20,000 tweets for each class from an in-house dataset gold labeled with the same 15 classes as the shared task. In this way, we obtain 298,929 tweets (Sudan only has 18,929 tweets).
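As a concrete illustration of the GRU baseline described above, here is a minimal PyTorch sketch. The embedding dimension and the tokenization pipeline are not specified in the text and are therefore assumptions; the number of output classes depends on the task (3 for age, 15 for dialect, 2 for gender).

```python
import torch
import torch.nn as nn

class GRUBaseline(nn.Module):
    def __init__(self, vocab_size=100_000, emb_dim=300, hidden=500, num_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        nn.init.normal_(self.emb.weight, mean=0.0, std=1.0)   # W ~ N(0, 1)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)  # one unidirectional layer
        self.dropout = nn.Dropout(0.5)                        # dropout on the hidden layer
        self.out = nn.Linear(hidden, num_classes)             # output linear layer

    def forward(self, token_ids):                  # token_ids: (batch, up to 50 tokens)
        _, h_n = self.gru(self.emb(token_ids))     # h_n: (1, batch, hidden)
        return self.out(self.dropout(h_n[-1]))     # per-class logits

model = GRUBaseline()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # fixed learning rate
logits = model(torch.randint(0, 100_000, (32, 50)))         # a dummy batch of size 32
print(logits.shape)   # torch.Size([32, 3])
```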
We combine this new dialect data with the shared task dialect TRAIN data to form EXTENDED_Dialect. For both the dialect and gender tasks, we fine-tune BERT on EXTENDED_Dialect and EXTENDED_Gender independently and report performance on DEV. We refer to this iteration of experiments as BERT_EXT. As Table TABREF7 shows, BERT_EXT is 2.18% better than BERT for dialect and 0.75% better than BERT for gender. <<</Data Augmentation.>>> <<</Tweet-Level Models>>> <<<User-Level Models>>> Our aforementioned models make predictions at the tweet level, rather than directly detecting the labels of a user. Hence, we follow the work of Zhang & Abdul-Mageed BIBREF4 to identify user-level labels. For each of the three tasks, we use tweet-level predicted labels (and associated softmax values) as a proxy for user-level labels. For each predicted label, we use the softmax value as a threshold for including only the most confidently predicted tweets. Since in some cases softmax values can be low, we try all threshold values between 0.00 and 0.99 and take a softmax-based majority class as the user-level predicted label, tuning the threshold on our DEV set. Using this method, we acquire the following results at the user level: BERT models obtain an accuracy of 55.56% for age, 96.00% for dialect, and 80.00% for gender. BERT_EXT models achieve 95.56% accuracy for dialect and 84.00% accuracy for gender. <<</User-Level Models>>> <<<APDA@FIRE2019 submission>>> First submission. For the shared task submission, we use the predictions of BERT_EXT as our first submission for gender and dialect, but only BERT for age (since we have no BERT_EXT models for age, as explained earlier). In each case, we acquire results at the tweet level first, then port the labels to the user level as explained in the previous section. For our second and third submitted models, we also follow this method of going from tweet to user level. Second submission. We combine our DEV data with our EXTENDED_Dialect and EXTENDED_Gender data, for dialect and gender respectively, and train our second submissions for the two tasks. For the age second submission, we concatenate the DEV data to TRAIN and fine-tune the BERT model. We refer to the settings for our second submission models collectively as BERT_EXT+DEV. Third submission. Finally, for our third submission, we use a majority vote of (1) the first submission, (2) the second submission, and (3) predictions from our user-level BERT model. These majority-vote models (i.e., our third submission) achieve the best results on the official test data. We acquire 54.72% accuracy for age, 81.67% accuracy for gender, 93.75% accuracy for dialect, and 40.97% joint accuracy. <<</APDA@FIRE2019 submission>>> <<</Experiments>>> <<<Conclusion>>> In this work, we described the models we submitted to the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focused on detecting age, dialect, and gender using BERT models under various data conditions, showing the utility of additional in-house data for the task. We also showed that a majority vote of our models trained under different conditions outperforms single models on the official evaluation. In the future, we will investigate automatically extending training data for these tasks as well as better representation learning methods. <<</Conclusion>>> <<</Title>>>
{ "references": [ "No" ], "type": "boolean" }
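To make the user-level aggregation described in the context above concrete, here is a hedged Python sketch: tweet-level predictions are filtered by a softmax-confidence threshold, a majority vote over the surviving tweets gives the user label, and the threshold is tuned on DEV. The fall-back to all tweets when none clears the threshold is an assumption; the text does not specify that case.

```python
from collections import Counter

def user_label(tweet_preds, tweet_confs, threshold):
    """Majority class over a user's tweets whose softmax confidence clears the threshold."""
    kept = [p for p, c in zip(tweet_preds, tweet_confs) if c >= threshold]
    votes = Counter(kept if kept else tweet_preds)   # fall back to all tweets (assumption)
    return votes.most_common(1)[0][0]

def tune_threshold(dev_users, dev_gold):
    """dev_users: {user_id: (preds, confs)}; returns the threshold with the best DEV accuracy."""
    best_t, best_acc = 0.0, -1.0
    for t in (i / 100 for i in range(100)):          # 0.00, 0.01, ..., 0.99
        preds = {u: user_label(p, c, t) for u, (p, c) in dev_users.items()}
        acc = sum(preds[u] == dev_gold[u] for u in dev_gold) / len(dev_gold)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc
```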
1909.04181
Please answer the following question with yes or no based on the given text. You only need to output 'Yes' or 'No' without any additional explanation. Question: Are the models compared to some baseline models? Context: <<<Title>>> BERT-Based Arabic Social Media Author Profiling <<<Abstract>>> We report our models for detecting age, language variety, and gender from social media data in the context of the Arabic author profiling and deception detection shared task (APDA). We build simple models based on pre-trained bidirectional encoders from transformers (BERT). We first fine-tune the pre-trained BERT model on each of the three datasets with shared task released data. Then we augment shared task data with in-house data for gender and dialect, showing the utility of augmenting training data. Our best models on the shared task test data are acquired with a majority voting of various BERT models trained under different data conditions. We acquire 54.72% accuracy for age, 93.75% for dialect, 81.67% for gender, and 40.97% joint accuracy across the three tasks. <<</Abstract>>> <<<Introduction>>> The proliferation of social media has made it possible to collect user data in unprecedented ways. These data can come in the form of usage and behavior (e.g., who likes what on Facebook), network (e.g., who follows a given user on Instagram), and content (e.g., what people post to Twitter). Availability of such data has made it possible to make discoveries about individuals and communities, mobilizing social and psychological research and employing natural language processing methods. In this work, we focus on predicting social media user age, dialect, and gender based on posted language. More specifically, we use a total of 100 tweets from each manually-labeled user to predict each of these attributes. Our dataset comes from the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focus on building simple models using pre-trained bidirectional encoders from transformers (BERT) BIBREF1 under various data conditions. Our results show (1) the utility of augmenting training data, and (2) the benefit of using majority votes from our simple classifiers. In the rest of the paper, we introduce the dataset, followed by our experimental conditions and results. We then provide a literature review and conclude. <<</Introduction>>> <<<Data>>> For the purpose of our experiments, we use data released by the APDA shared task organizers. The dataset is divided into train and test sets by the organizers. The training set is distributed with labels for the three tasks of age, dialect, and gender. Following the standard shared task setup, the test set is distributed without labels, and participants were expected to submit their predictions on the test set. The shared task predictions are expected by the organizers at the level of users. The distribution has 100 tweets for each user, and so each tweet is distributed with a corresponding user id. As such, in total, the distributed training data has 2,250 users, contributing a total of 225,000 tweets. The official task test set contains 72,000 tweets posted by 720 users. For our experiments, we split the training data released by the organizers into a 90% TRAIN set (202,500 tweets from 2,025 users) and a 10% DEV set (22,500 tweets from 225 users). The age task labels come from the tagset {under-25, between-25 and 34, above-35}.
For dialects, the data are labeled with 15 classes, from the set {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. The gender task involves binary labels from the set {male, female}. <<</Data>>> <<<Experiments>>> As explained earlier, the shared task is set up at the user level, where the age, dialect, and gender of each user are the required predictions. In our experiments, we first model the task at the tweet level and then port these predictions to the user level. For our core modelling, we fine-tune BERT on the shared task data. We also introduce to the task an additional in-house dataset labeled with dialect and gender tags, as we will explain below. As a baseline, we use a small gated recurrent unit (GRU) model. We now introduce our tweet-level models. <<<Tweet-Level Models>>> <<<Baseline GRU.>>> Our baseline is a GRU network for each of the three tasks. We use the same network architecture across the 3 tasks. Each network contains a single unidirectional GRU layer with 500 units and an output linear layer. The network is trained end-to-end. Our input embedding layer is initialized with a standard normal distribution, with $\mu =0$ and $\sigma =1$, i.e., $W \sim N(0,1)$. We use a maximum sequence length of 50 tokens, and choose an arbitrary vocabulary size of 100,000 types, where we use the 100,000 most frequent words in TRAIN. To avoid over-fitting, we use dropout BIBREF2 with a rate of 0.5 on the hidden layer. For training, we use the Adam BIBREF3 optimizer with a fixed learning rate of $1e-3$. We employ batch training with a batch size of 32 for this model. We train the network for 15 epochs and save the model at the end of each epoch, choosing the model that achieves the highest accuracy on DEV as our best model. We present our best result on DEV in Table TABREF7. We report all our results using accuracy. Our best model obtains 42.48% for age, 37.50% for dialect, and 57.81% for gender. All models obtain their best results within 2 epochs. <<</Baseline GRU.>>> <<<BERT.>>> For each task, we fine-tune the BERT-Base Multilingual Cased model released by the authors BIBREF1. The model was pre-trained on the Wikipedia of 104 languages (including Arabic) with 12 layers, 768 hidden units each, and 12 attention heads, and has 110M parameters in the entire model. The vocabulary of the model consists of 119,547 shared WordPieces. We fine-tune the model with a maximum sequence length of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 15 epochs. We use the same network architecture and parameters across the 3 tasks. As Table TABREF7 shows, compared with the GRU baseline, BERT is 3.16% better for age, 4.85% better for dialect, and 2.45% better for gender. <<</BERT.>>> <<<Data Augmentation.>>> To further improve the performance of our models, we introduce in-house labeled data that we use to fine-tune BERT. For the gender classification task, we manually label an in-house dataset of 1,100 users with gender tags, including 550 female users and 550 male users. We obtain 162,829 tweets by crawling the 1,100 users' timelines. We combine this new gender dataset with the gender TRAIN data (from the shared task) to obtain an extended dataset, to which we refer as EXTENDED_Gender. For the dialect identification task, we randomly sample 20,000 tweets for each class from an in-house dataset gold labeled with the same 15 classes as the shared task. In this way, we obtain 298,929 tweets (Sudan only has 18,929 tweets).
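The BERT fine-tuning setup described above can be sketched as follows. The use of the Hugging Face transformers library is an assumption about tooling (the authors' own code is not shown here); the hyper-parameters follow the text: bert-base-multilingual-cased, maximum sequence length 50, batch size 32, learning rate 2e-5, 15 epochs.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=15)  # dialect task
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

tweets = ["tweet text one", "tweet text two"]   # placeholder batch; size 32 in practice
labels = torch.tensor([0, 3])
enc = tokenizer(tweets, truncation=True, max_length=50, padding=True, return_tensors="pt")

model.train()
out = model(**enc, labels=labels)   # cross-entropy loss over the 15 dialect classes
out.loss.backward()                 # one step of the 15-epoch fine-tuning loop
optimizer.step()
optimizer.zero_grad()
```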
We combine this new dialect data with the shared task dialect TRAIN data to form EXTENDED_Dialect. For both the dialect and gender tasks, we fine-tune BERT on EXTENDED_Dialect and EXTENDED_Gender independently and report performance on DEV. We refer to this iteration of experiments as BERT_EXT. As Table TABREF7 shows, BERT_EXT is 2.18% better than BERT for dialect and 0.75% better than BERT for gender. <<</Data Augmentation.>>> <<</Tweet-Level Models>>> <<<User-Level Models>>> Our aforementioned models make predictions at the tweet level, rather than directly detecting the labels of a user. Hence, we follow the work of Zhang & Abdul-Mageed BIBREF4 to identify user-level labels. For each of the three tasks, we use tweet-level predicted labels (and associated softmax values) as a proxy for user-level labels. For each predicted label, we use the softmax value as a threshold for including only the most confidently predicted tweets. Since in some cases softmax values can be low, we try all threshold values between 0.00 and 0.99 and take a softmax-based majority class as the user-level predicted label, tuning the threshold on our DEV set. Using this method, we acquire the following results at the user level: BERT models obtain an accuracy of 55.56% for age, 96.00% for dialect, and 80.00% for gender. BERT_EXT models achieve 95.56% accuracy for dialect and 84.00% accuracy for gender. <<</User-Level Models>>> <<<APDA@FIRE2019 submission>>> First submission. For the shared task submission, we use the predictions of BERT_EXT as our first submission for gender and dialect, but only BERT for age (since we have no BERT_EXT models for age, as explained earlier). In each case, we acquire results at the tweet level first, then port the labels to the user level as explained in the previous section. For our second and third submitted models, we also follow this method of going from tweet to user level. Second submission. We combine our DEV data with our EXTENDED_Dialect and EXTENDED_Gender data, for dialect and gender respectively, and train our second submissions for the two tasks. For the age second submission, we concatenate the DEV data to TRAIN and fine-tune the BERT model. We refer to the settings for our second submission models collectively as BERT_EXT+DEV. Third submission. Finally, for our third submission, we use a majority vote of (1) the first submission, (2) the second submission, and (3) predictions from our user-level BERT model. These majority-vote models (i.e., our third submission) achieve the best results on the official test data. We acquire 54.72% accuracy for age, 81.67% accuracy for gender, 93.75% accuracy for dialect, and 40.97% joint accuracy. <<</APDA@FIRE2019 submission>>> <<</Experiments>>> <<<Conclusion>>> In this work, we described the models we submitted to the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focused on detecting age, dialect, and gender using BERT models under various data conditions, showing the utility of additional in-house data for the task. We also showed that a majority vote of our models trained under different conditions outperforms single models on the official evaluation. In the future, we will investigate automatically extending training data for these tasks as well as better representation learning methods. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Yes" ], "type": "boolean" }
1909.04181
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What are the in-house data employed? Context: <<<Title>>> BERT-Based Arabic Social Media Author Profiling <<<Abstract>>> We report our models for detecting age, language variety, and gender from social media data in the context of the Arabic author profiling and deception detection shared task (APDA). We build simple models based on pre-trained bidirectional encoders from transformers (BERT). We first fine-tune the pre-trained BERT model on each of the three datasets with shared task released data. Then we augment shared task data with in-house data for gender and dialect, showing the utility of augmenting training data. Our best models on the shared task test data are acquired with a majority voting of various BERT models trained under different data conditions. We acquire 54.72% accuracy for age, 93.75% for dialect, 81.67% for gender, and 40.97% joint accuracy across the three tasks. <<</Abstract>>> <<<Introduction>>> The proliferation of social media has made it possible to collect user data in unprecedented ways. These data can come in the form of usage and behavior (e.g., who likes what on Facebook), network (e.g., who follows a given user on Instagram), and content (e.g., what people post to Twitter). Availability of such data has made it possible to make discoveries about individuals and communities, mobilizing social and psychological research and employing natural language processing methods. In this work, we focus on predicting social media user age, dialect, and gender based on posted language. More specifically, we use a total of 100 tweets from each manually-labeled user to predict each of these attributes. Our dataset comes from the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focus on building simple models using pre-trained bidirectional encoders from transformers (BERT) BIBREF1 under various data conditions. Our results show (1) the utility of augmenting training data, and (2) the benefit of using majority votes from our simple classifiers. In the rest of the paper, we introduce the dataset, followed by our experimental conditions and results. We then provide a literature review and conclude. <<</Introduction>>> <<<Data>>> For the purpose of our experiments, we use data released by the APDA shared task organizers. The dataset is divided into train and test sets by the organizers. The training set is distributed with labels for the three tasks of age, dialect, and gender. Following the standard shared task setup, the test set is distributed without labels, and participants were expected to submit their predictions on the test set. The shared task predictions are expected by the organizers at the level of users. The distribution has 100 tweets for each user, and so each tweet is distributed with a corresponding user id. As such, in total, the distributed training data has 2,250 users, contributing a total of 225,000 tweets. The official task test set contains 72,000 tweets posted by 720 users. For our experiments, we split the training data released by the organizers into a 90% TRAIN set (202,500 tweets from 2,025 users) and a 10% DEV set (22,500 tweets from 225 users). The age task labels come from the tagset {under-25, between-25 and 34, above-35}.
For dialects, the data are labeled with 15 classes, from the set {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. The gender task involves binary labels from the set {male, female}. <<</Data>>> <<<Experiments>>> As explained earlier, the shared task is set up at the user level, where the age, dialect, and gender of each user are the required predictions. In our experiments, we first model the task at the tweet level and then port these predictions to the user level. For our core modelling, we fine-tune BERT on the shared task data. We also introduce to the task an additional in-house dataset labeled with dialect and gender tags, as we will explain below. As a baseline, we use a small gated recurrent unit (GRU) model. We now introduce our tweet-level models. <<<Tweet-Level Models>>> <<<Baseline GRU.>>> Our baseline is a GRU network for each of the three tasks. We use the same network architecture across the 3 tasks. Each network contains a single unidirectional GRU layer with 500 units and an output linear layer. The network is trained end-to-end. Our input embedding layer is initialized with a standard normal distribution, with $\mu =0$ and $\sigma =1$, i.e., $W \sim N(0,1)$. We use a maximum sequence length of 50 tokens, and choose an arbitrary vocabulary size of 100,000 types, where we use the 100,000 most frequent words in TRAIN. To avoid over-fitting, we use dropout BIBREF2 with a rate of 0.5 on the hidden layer. For training, we use the Adam BIBREF3 optimizer with a fixed learning rate of $1e-3$. We employ batch training with a batch size of 32 for this model. We train the network for 15 epochs and save the model at the end of each epoch, choosing the model that achieves the highest accuracy on DEV as our best model. We present our best result on DEV in Table TABREF7. We report all our results using accuracy. Our best model obtains 42.48% for age, 37.50% for dialect, and 57.81% for gender. All models obtain their best results within 2 epochs. <<</Baseline GRU.>>> <<<BERT.>>> For each task, we fine-tune the BERT-Base Multilingual Cased model released by the authors BIBREF1. The model was pre-trained on the Wikipedia of 104 languages (including Arabic) with 12 layers, 768 hidden units each, and 12 attention heads, and has 110M parameters in the entire model. The vocabulary of the model consists of 119,547 shared WordPieces. We fine-tune the model with a maximum sequence length of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 15 epochs. We use the same network architecture and parameters across the 3 tasks. As Table TABREF7 shows, compared with the GRU baseline, BERT is 3.16% better for age, 4.85% better for dialect, and 2.45% better for gender. <<</BERT.>>> <<<Data Augmentation.>>> To further improve the performance of our models, we introduce in-house labeled data that we use to fine-tune BERT. For the gender classification task, we manually label an in-house dataset of 1,100 users with gender tags, including 550 female users and 550 male users. We obtain 162,829 tweets by crawling the 1,100 users' timelines. We combine this new gender dataset with the gender TRAIN data (from the shared task) to obtain an extended dataset, to which we refer as EXTENDED_Gender. For the dialect identification task, we randomly sample 20,000 tweets for each class from an in-house dataset gold labeled with the same 15 classes as the shared task. In this way, we obtain 298,929 tweets (Sudan only has 18,929 tweets).
We combine this new dialect data with the shared task dialect TRAIN data to form EXTENDED_Dialect. For both the dialect and gender tasks, we fine-tune BERT on EXTENDED_Dialect and EXTENDED_Gender independently and report performance on DEV. We refer to this iteration of experiments as BERT_EXT. As Table TABREF7 shows, BERT_EXT is 2.18% better than BERT for dialect and 0.75% better than BERT for gender. <<</Data Augmentation.>>> <<</Tweet-Level Models>>> <<<User-Level Models>>> Our aforementioned models make predictions at the tweet level, rather than directly detecting the labels of a user. Hence, we follow the work of Zhang & Abdul-Mageed BIBREF4 to identify user-level labels. For each of the three tasks, we use tweet-level predicted labels (and associated softmax values) as a proxy for user-level labels. For each predicted label, we use the softmax value as a threshold for including only the most confidently predicted tweets. Since in some cases softmax values can be low, we try all threshold values between 0.00 and 0.99 and take a softmax-based majority class as the user-level predicted label, tuning the threshold on our DEV set. Using this method, we acquire the following results at the user level: BERT models obtain an accuracy of 55.56% for age, 96.00% for dialect, and 80.00% for gender. BERT_EXT models achieve 95.56% accuracy for dialect and 84.00% accuracy for gender. <<</User-Level Models>>> <<<APDA@FIRE2019 submission>>> First submission. For the shared task submission, we use the predictions of BERT_EXT as our first submission for gender and dialect, but only BERT for age (since we have no BERT_EXT models for age, as explained earlier). In each case, we acquire results at the tweet level first, then port the labels to the user level as explained in the previous section. For our second and third submitted models, we also follow this method of going from tweet to user level. Second submission. We combine our DEV data with our EXTENDED_Dialect and EXTENDED_Gender data, for dialect and gender respectively, and train our second submissions for the two tasks. For the age second submission, we concatenate the DEV data to TRAIN and fine-tune the BERT model. We refer to the settings for our second submission models collectively as BERT_EXT+DEV. Third submission. Finally, for our third submission, we use a majority vote of (1) the first submission, (2) the second submission, and (3) predictions from our user-level BERT model. These majority-vote models (i.e., our third submission) achieve the best results on the official test data. We acquire 54.72% accuracy for age, 81.67% accuracy for gender, 93.75% accuracy for dialect, and 40.97% joint accuracy. <<</APDA@FIRE2019 submission>>> <<</Experiments>>> <<<Conclusion>>> In this work, we described the models we submitted to the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focused on detecting age, dialect, and gender using BERT models under various data conditions, showing the utility of additional in-house data for the task. We also showed that a majority vote of our models trained under different conditions outperforms single models on the official evaluation. In the future, we will investigate automatically extending training data for these tasks as well as better representation learning methods. <<</Conclusion>>> <<</Title>>>
{ "references": [ "we manually label an in-house dataset of 1,100 users with gender tags,we randomly sample 20,000 tweets for each class from an in-house dataset gold labeled with the same 15 classes as the shared task" ], "type": "extractive" }
1911.06171
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: Which future directions in NLG are discussed? Context: <<<Title>>> Unsupervised Pre-training for Natural Language Generation: A Literature Review <<<Abstract>>> Recently, unsupervised pre-training has been gaining increasing popularity in the realm of computational linguistics, thanks to its surprising success in advancing natural language understanding (NLU) and its potential to effectively exploit large-scale unlabelled corpora. However, despite the success in NLU, the power of unsupervised pre-training is only partially exploited when it comes to natural language generation (NLG). The major obstacle stems from an idiosyncratic nature of NLG: Texts are usually generated based on certain context, which may vary with the target applications. As a result, it is intractable to design a universal architecture for pre-training as in NLU scenarios. Moreover, retaining the knowledge learned from pre-training when learning on the target task is also a non-trivial problem. This review summarizes the recent efforts to enhance NLG systems with unsupervised pre-training, with a special focus on the methods to catalyse the integration of pre-trained models into downstream tasks. They are classified into architecture-based methods and strategy-based methods, based on their way of handling the above obstacle. Discussions are also provided to give further insights into the relationship between these two lines of work, some informative empirical phenomena, as well as some possible directions to which future work can be devoted. <<</Abstract>>> <<<Introduction>>> Unsupervised pre-training has sparked sensational research interest in the natural language processing (NLP) community. This technology provides a promising way to exploit linguistic information from large-scale unlabelled textual data, which can serve as auxiliary prior knowledge to benefit a wide range of NLP applications. In the literature, language modeling (LM) is a prevalent task for pre-training, where the target words are predicted conditioned on a given context. Therefore, it is intuitive to employ the pre-trained LMs for natural language generation, as the pre-training objective naturally accords with the goal of NLG. However, revolutionary improvements are only observed in the field of NLU. The primary factor that impedes the progress of unsupervised pre-training in NLG is an idiosyncratic nature of text generation: Basically, we do not write words from scratch, but instead write them based on particular context, e.g., the source language sentences for translation, the dialog histories for response generation, and the visual scenes for image captioning, among others. In unsupervised pre-training, the task-specific context is not available, which leads to a discrepancy between pre-training and training on the target task. More precisely, the challenges posed by the discrepancy can be reflected in two aspects: First, the diverse context makes it intractable to design a universal representation extractor as in the case of NLU, and the pre-trained language generators may have to modify their inner structures to deal with the task-specific context. Second, the mismatch in data distribution and objective between the two training stages might result in the performance on the pre-training tasks being compromised during fine-tuning, which is dubbed the catastrophic forgetting problem BIBREF0.
In response to the above challenges, two lines of work are proposed by resorting to architecture-based and strategy-based solutions, respectively. Architecture-based methods either try to induce task-specific architecture during pre-training (task-specific methods), or aim at building a general pre-training architecture to fit all downstream tasks (task-agnostic methods). Strategy-based methods depart from the pre-training stage, seeking to take advantage of the pre-trained models during the process of target task learning. The approaches include fine-tuning schedules that elaborately design the control of learning rates for optimization, proxy tasks that leverage labeled data to help the pre-trained model better fit the target data distribution, and knowledge distillation approaches that ditch the paradigm of initialization with pre-trained parameters by adopting the pre-trained model as a teacher network. The remainder of this review is organized as follows: In Section SECREF2, we will introduce the background knowledge about unsupervised pre-training for NLU, followed by a sketch of how the pre-trained models are employed through parameter initialization for NLG in Section SECREF3. In Section SECREF4, we will describe the architecture-based methods, and the strategy-based methods are presented in Section SECREF5. Section SECREF6 provides some in-depth discussions, and Section SECREF7 concludes this review. <<</Introduction>>> <<<Background: Unsupervised Pre-training for NLU>>> Learning fine-grained language representations is a perennial topic in natural language understanding. In restrospect, compelling evidences suggest that good representations can be learned through unsupervised pre-training. Early work focused on word-level representations BIBREF1, BIBREF2, which encodes each word independently. For sentence-level representations, there are roughly two kinds of pre-training objectives, namely discriminative pre-training and generative pre-training. Discriminative pre-training distinguishes context sentence(s) for a given sentence from non-context sentence(s) BIBREF3, BIBREF4, with the aim to capture inter-sentence relationships. Generative pre-training follows the language model paradigm: where $x_{t}$ is the $t^{th}$ word in the textual sequence to generate, $T$ indicates sequence length, $\theta $ stands for learnable parameters, and $C$ is the context information, which is defined by the pre-training objective. ELMo BIBREF5 and GPT (short for Generative Pre-training) BIBREF6 adopt uni-directional Transformer BIBREF7 and bi-directional LSTM BIBREF8 language models, respectively. In this case, the context is defined as $x_{1:t}$ or $x_{t+1:T}$. BERT BIBREF3 is trained with a novel masked language model (MLM), which is a non-autoregressive way of generation. Specifically, MLM randomly replaces a fixed proportion of tokens in each sentence with a special [MASK] token or a random token, which results in a corrupted sentence $X_{\text{mask}}$, and predicts each replaced token based on the same context $X_{\text{mask}}$. To alleviate the inconsistency with target tasks caused by the introduction of [MASK] token, XLNet BIBREF9 introduces permutation-based language model, which conducts autoregressive language modeling over all possible permutations of the original word sequence. This gives rise to a context $C=X_{\mathbf {z}_{1:t-1}}$, where $\mathbf {z}$ is a certain permutation of $[1,2, \ldots , T]$, according to the definitions in BIBREF9. 
BIBREF10 and BIBREF11 pre-trained an encoder-decoder framework to reconstruct the input sentence and the surrounding sentence, respectively, and the encoded input sentence is thereby included in the context $C$. The sentence representations learned by LMs can be used to perform many NLU tasks by adding a simple linear classifier. Despite the objective of language modeling, the pre-trained representations have successfully pushed the state-of-the-art on multiple benchmarks. <<</Background: Unsupervised Pre-training for NLU>>> <<<Unsupervised Pre-training and Parameter Initialization for NLG>>> NLG systems are usually built with an encoder-decoder framework, where the encoder reads the context information and the decoder generates the target text from the encoded vectorial representations. A direct way to utilize the pre-trained models is to initialize part of the encoder (when dealing with textual context) and/or the decoder with pre-trained parameters. For the encoder, pre-training is expected to provide better sentence representations, as we discussed in Section SECREF2. For the decoder, the intuition is to endow the model with some rudimentary ability for text generation. BIBREF12 employed BERT as the encoder for abstractive text summarization, with some additional techniques to help integrate the BERT-initialized encoder with the randomly initialized decoder, which we will explicate in Section SECREF12. GPT-2 BIBREF13 inherited the left-to-right LM pre-training objective from GPT and extended the application to NLG, where the pre-trained LM directly serves as the language generator, with some special symbols to identify task-specific contexts. In the case of zero-shot task transfer, preliminary experiments showed that straightforward adaptation of GPT-2 compares unfavorably with other unsupervised baselines. BIBREF14 is among the first attempts to investigate unsupervised pre-training for sequence to sequence (Seq2Seq) learning. They used pre-trained LSTM-based LMs to initialize the first layer of the encoder and the decoder, which act as representation extractors. An additional LSTM layer, which is randomly initialized, is then added on top of the pre-trained LMs to build the Seq2Seq framework. To make use of the text generation ability of LMs, the output softmax layer of the decoder LM is also retained. Some recent endeavours BIBREF15, BIBREF16 explored multiple combinations of GPT- and BERT-based models to initialize the encoder and the decoder, respectively. Although remarkable results are observed, the separately pre-trained LMs are still inconsistent with the Seq2Seq framework. <<</Unsupervised Pre-training and Parameter Initialization for NLG>>> <<<Architecture-based Methods>>> <<<Inducing Task-Specific Architecture in Pre-training>>> Separately initializing the encoder and the decoder with LMs neglects the interaction between the two modules at the pre-training stage, which is sub-optimal. For NLG tasks that can be modeled as Seq2Seq learning, it is feasible to jointly pre-train the encoder and the decoder. Existing approaches for this purpose can be categorized into three variants: denoising autoencoders (DAEs), conditional masked language models (CMLMs) and sequence to sequence language models (Seq2Seq LMs). <<<Denoising Autoencoder>>> An intuitive way to conduct unsupervised Seq2Seq learning is to train an autoencoder (AE) based on the encoder-decoder framework. Different from AEs, DAEs take a corrupted sentence as input and reconstruct the original sentence.
The advantage is that the corrupted input will force the decoder to extract relevant information from the source side for text generation. To obtain the corrupted sentence, BIBREF17 designed three noising functions: shuffle, delete and replace (the left plot of Figure FIGREF4 gives an illustration), each of which is controlled by a pre-defined probability distribution. To be more specific, each token in the raw sequence is assigned a new index based on a Gaussian distribution $N(0, \sigma )$; the delete and replace operations on a token are determined by a Bernoulli distribution $B(p)$ with a Beta distribution as prior. The three functions are applied to the raw sequences in random order. <<</Denoising Autoencoder>>> <<<Conditional Masked Language Model>>> CMLM BIBREF18 extends the single-model MLM proposed by BIBREF3 to the encoder-decoder setting, where the masked text sequence is read by the encoder, and the decoder only reconstructs the masked tokens, in contrast to the entire sequence in DAEs. As the middle plot of Figure FIGREF4 shows, CMLM masks consecutive tokens, and the unmasked tokens on the encoder side are masked when being fed to the decoder. Following the notations in BIBREF18, let us assume that the tokens with indices from $u$ to $v$ are masked from the raw sentence $X$, which results in $X^{\backslash u: v}$, and $X^{u: v}$ denotes the decoder input. Then, when predicting each masked token $x_{t}$ ($u \le t \le v$), the context is $X^{u: v}_{<t}$ and $X^{\backslash u: v}$. The underlying motivation, as BIBREF18 argued, is to force the encoder to understand the meaning of the unmasked tokens, which is achieved by the encoder-side masks, and to encourage the decoder to refer to the source information rather than the leftward target tokens, which is achieved by the decoder-side masks. <<</Conditional Masked Language Model>>> <<<Sequence to Sequence Language Model>>> Seq2Seq LM BIBREF19 performs Seq2Seq modeling using a single Transformer model, with the concatenation of the source sentence and the target sentence as input. To simulate Seq2Seq learning with encoder-decoder frameworks, the attention span of each target token is constrained to the source tokens and the leftward target tokens, which is achieved by self-attention masks (see the right plot of Figure FIGREF4). In this way, the abilities to extract language representations and to generate text are integrated into a single model. It is worth mentioning that Seq2Seq LM does not auto-regressively generate the target sentence, but instead predicts masked tokens based on the contexts controlled by self-attention masks. In other words, Seq2Seq LM still belongs to the family of MLMs. Apart from Seq2Seq LM, BIBREF19 also explored uni-directional LM and bi-directional LM structures to perform the MLM-based cloze task, and incorporated the three kinds of LMs to build the final pre-training objective. <<</Sequence to Sequence Language Model>>> <<</Inducing Task-Specific Architecture in Pre-training>>> <<<Encoder-Agnostic Architectures for Adaptation>>> Although the Seq2Seq-based pre-training methods exhibit strong performance, they are confined to text-to-text generation. In order to encompass more diverse contexts, some studies began to investigate encoder-agnostic pre-training architectures BIBREF22, BIBREF23. Context Attention and Pseudo Self-Attention are two typical variants presented by BIBREF23, which differ in the way that the task-specific context is injected (see Figure FIGREF11).
Context Attention takes the form of a standard Transformer decoder, with the layer that attends to the encoder outputs being randomly initialized. Pseudo Self-Attention considers the context vectors and the previous layer decoder outputs as an integral input, and the attended results are computed as follows: where $C \in \mathbb {R}^{|C| \times d_{c}}$ and $Y \in \mathbb {R}^{|Y| \times d_{y}}$ are the context vectors and representations of the target textual sequence, respectively. The linear transformation matrices $W^{c}_{k}, W^{c}_{v} \in \mathbb {R}^{|C| \times d_{model}}$ with respect to $C$ are added to project the context to the self-attention space, and $W_{q}, W^{y}_{k}, W^{y}_{v} \in \mathbb {R}^{|Y| \times d_{model}}$ are part of the pre-trained model. Except for the performance on target tasks, an alternative metric to gauge the quality of encoder-agnostic architectures is the degree to which the pre-trained parameters have to change, in order to inject the task-specific context. BIBREF23 compared the parameter changes of Context Attention and Pseudo Self-Attention in the feed forward layer, and discovered that Pseudo Self-Attention is more robust under this evaluation. <<</Encoder-Agnostic Architectures for Adaptation>>> <<</Architecture-based Methods>>> <<<Strategy-based Methods>>> <<<Fine-tuning Schedules for Adaption>>> When the pre-trained model is only a part of the target task system, fine-tuning requires joint learning of the components initialized in different fashion, which can make the training process unstable. The pre-trained model may also suffer from aggravated catastrophic forgetting problem as it has to coordinate with other components during fine-tuning BIBREF24, BIBREF25. From the perspective of optimization, it is unreasonable to schedule the pre-trained components and the newly-introduced components with the same learning rate, considering that the former have already possessed some unique knowledge. A common assumption is that the pre-trained parameters should be updated at a slower learning rate and with smoother decay BIBREF12, BIBREF25. The rationale behind such setting is that fine-tuning with more accurate gradient can prevent the pre-trained parameters from deviating too faraway from the original point, and the newly-introduced components need to quickly converge to the target parameter space. To this end, BIBREF12 adopted two Adam optimizers with different learning rates for the pre-trained encoder and the randomly initialized decoder. The learning rates are scheduled as in BIBREF7 with different warming up steps: where ${warmup}_{\operatorname{Enc/Dec}}$ and $\tilde{l}r_{\operatorname{Enc/Dec}}$ determine the speed of learning rate changes and the max learning rates, respectively. <<</Fine-tuning Schedules for Adaption>>> <<<Proxy Tasks for Adaption>>> Large-scale unlabelled data provides generic linguistic knowledge, but the target tasks have unique data distribution and objectives. An effective way to bridge this gap is to introduce proxy tasks with moderate changes to the pre-training objectives, but at the same time take the labeled data into account BIBREF15, BIBREF20. Translation Language Modeling (TLM) BIBREF15 is a special generalization of MLM in the cross-lingual situation. It leverages the paralleled machine translation corpus for further training of the LMs that are pre-trained on monolingual corpora. 
Specifically, the source language sentence and the corresponding target language sentence are fed to the model in parallel, with random tokens from each language being masked to perform the cloze-style prediction as in MLM. Different from monolingual MLM, TLM encourages word predictions to rely on the interdependence from two languages, therefore the sentence representations learned from separate languages can be well aligned. For some particular NLG tasks, existing proxy tasks designed under the supervised setup can also work with unsupervised pre-training models. For instance, in neural text summarization, the combination of extractive and abstractive objectives can generate better summaries BIBREF26, BIBREF27. Inspired by this, BIBREF12 introduced extractive summarization as a proxy task to fine-tune the pre-trained BERT, before adopting it as the abstractive summarization encoder. Compared with the original BERT features, the representations learned from extractive summarization contain more task-specific information, therefore conveying the meaning of source texts better. <<</Proxy Tasks for Adaption>>> <<<Knowledge Distillation for Adaption>>> The aforementioned methods are diverse in implementation, but share the common idea of employing the pre-trained models through parameter initialization. An alternative way to exploit the pre-trained models is using the knowledge distillation technique BIBREF28. Knowledge distillation is a special form of training, where a student network learns from the supervision signals produced by a teacher network. Taking BERT as an example, the pre-trained MLM contains global information, which can teach the autoregressive Seq2Seq models to “see from the future” BIBREF20. In practice, the probability distribution predicted by BERT is regarded as a soft label to compute the cross-entropy loss function : where $X$, $Y$ and $Y^{masked}$ are the source sequence, the raw target sequence and the masked target sequence, respectively. $\mathcal {V}$ denotes the output vocabulary. $\theta $ indicates the parameters of the student network (Seq2Seq), which are learnable, and $\phi $ indicates the BERT parameters, which are fixed. In this way, the knowledge from unsupervised pre-training can be flexibly transferred to the target tasks, dispensing with the size and architecture limitations. The supervision can also be derived from the hidden representations BIBREF25, with a mean-squared-error (MSE) distillation loss: where $m$ and $n$ are hyper-parameters denoting the layer subscripts. Compared with the probability soft labels, the representation distillation method requires the Seq2Seq model to have the same hidden size with BERT, which is a more strict constrain. Combining the knowledge distillation loss and the standard generative loss for Seq2Seq learning gives rise to the final objective to optimize: where $\alpha $ is the weighting term that balances the contribution of the two kinds of loss functions. <<</Knowledge Distillation for Adaption>>> <<</Strategy-based Methods>>> <<<Discussions>>> <<<The Relationship between Architecture- and Strategy-based Methods>>> We have analysed two major challenges faced by the application of unsupervised pre-training to NLG (see Section SECREF1). On this basis, we introduced existing methodologies from the architecture and strategy considerations. The architecture-based methods are mainly proposed in response to the first challenge. 
Since the architecture of pre-trained model has a significant effect on the downstream task (when the pre-trained parameters are used for initialization), architecture designings have to plan in advance to narrow the discrepancy between pre-training and training on target tasks. This motivation has shown great effectiveness on the Seq2Seq framework BIBREF17, BIBREF18, BIBREF19. The strategy-based methods focus on the second challenge. They take a postprocessing point of view, with the aim to make the best of the pre-trained model at the target task training stage. It is noteworthy that the challenges are not independent inherently, and the two types of methods can actually work as complement to each other. For example, the fine-tuning schedules can alleviate the negative effects caused by the modification of pre-trained structures, and the catastrophic forgetting problem can also seek solution by devising a general task-agnostic architecture. <<</The Relationship between Architecture- and Strategy-based Methods>>> <<<Experimental Phenomenons>>> Existing researches on unsupervised pre-training for NLG are conducted on various tasks for different purposes. Probing into the assorted empirical results may help us discover some interesting phenomenons: The advantage of pre-training gradually diminishes with the increase of labeled data BIBREF14, BIBREF17, BIBREF18. Fixed representations yield better results than fine-tuning in some cases BIBREF24. Overall, pre-training the Seq2Seq encoder outperforms pre-training the decoder BIBREF24, BIBREF17, BIBREF15, BIBREF16. The first two phenomenons attest to the catastrophic forgetting theory. Thanks to the access to large-scale unlabeled corpora, unsupervised pre-training is able to excel at zero/low-shot settings, while the pre-trained models can only achieve few gains when abundant labeled data is available. This can be explained by the high quality of the dataset and the capacity of the task-specific models, which leave little space for improvement. Nonetheless, the increased supervision from labeled data can also influence the performance on pre-training tasks. By fixing the pre-trained parameters, the learned representations will not be affected by the numerous iterations of training on the target task, which makes them work better without fine-tuning. The third phenomenon is kind of counter-intuitive, as the generative pre-training objectives are more similar to the decoder's function. There is no unanimous theory to explain why the encoder is a more important element to pre-train. But this discovery suggests that the pre-trained LMs are more robust when acting as representation extractors, while they are more sensitive the the change of context when acting as conditional language generators. <<</Experimental Phenomenons>>> <<<Future Directions>>> The diversity of NLG applications poses challenges on the employment of unsupervised pre-training, yet it also raises more scientific questions for us to explore. In terms of the future development of this technology, we emphasize the importance of answering four questions: 1) How to introduce unsupervised pre-training into NLG tasks with cross-modal context? 2) How to design a generic pre-training algorithm to fit a wide range of NLG tasks? 3) How to reduce the computing resources required for large-scale pre-training? 4) What aspect of knowledge do the pre-trained models provide for better language generation? NLG tasks can be defined by the context features and mapping functions. 
The introduction of cross-lingual textual features BIBREF15 and task-specific Seq2Seq architectures BIBREF18, BIBREF17, BIBREF19 in the pre-training stage has successfully boosted the performance on text-to-text generation. For NLG tasks concerning multiple modalities, it is conceivable that pre-training methods could also benefit from the joint consideration of cross-modal features. For example, in the vision-and-language field, the learning of cross-modal representations has proven to be highly effective BIBREF29, BIBREF30, but such representations can not yet be extracted from unpaired images and texts for image-grounded text generation, to the best of our knowledge. In NLU, it is possible to pre-train one model to obtain language representations once and for all. As for NLG, a task-agnostic pre-training algorithm should transcend the purpose of representation learning, and consider the general ability for language generation. The notion of “encoder-agnostic adaption” BIBREF23 makes a preliminary step towards this direction, but still remains far from approaching the equivalent performance as its NLU counterparts BIBREF5, BIBREF3, BIBREF6, BIBREF9. Due to the colossal scale of the pre-training corpora, including a large number of parameters is essential to achieve favorable performance. As a result, the model size usually costs at least 8 GPU cards BIBREF19, BIBREF18, BIBREF15 in the pre-training for NLG systems, and it also hinders real-world applications. To reduce the memory consumption problem, existing work resorted to knowledge distillation to transfer the knowledge from a large teacher network to a small student network BIBREF31, BIBREF32, or parameter reduction techniques to prune the model size in a more direct way BIBREF33. However, the research context is limited to the NLU scenarios, and same endeavours are necessary to NLG applications. Another important branch of researches on unsupervised pre-training in NLP try to explain what kind of knowledge can be learned from pre-training. Related work has been done on the basis of both language understanding BIBREF34, BIBREF35 and generation BIBREF36. Specially, BIBREF36 analysed the characters of texts generated from a pre-trained GPT-2 by evaluating them over a wide spectrum of metrics. We argue that deeper understanding the way in which unsupervised pre-training contributes to better text generation, and the intrinsic mechanisms of the pre-trained models are also crucial to future work. <<</Future Directions>>> <<</Discussions>>> <<<Conclusion>>> Unsupervised pre-training has defined the state-of-the-arts on a variety NLP tasks. However, in the field of NLG, the diversity of context information is still impeding the the application of unsupervised pre-training. The major challenges exist in designing model architectures to cater for the assorted context, and retaining the general knowledge learned from pre-training. In this review, we survey the recent unsupervised methods to utilize large-scale corpora for NLG purposes, with a highlight on those aiming at facilitating the integration of pre-trained models with downstream tasks. We propose to classify them into architecture- and strategy-based methods, followed with detailed introductions and discussions of their pros and cons. 
Based on the comparison of these methods and analyses of some informative experimental results from previous publications, we summarize several scientific questions that have not yet been well understood, and suggest that future work pay more attention to these questions. <<</Conclusion>>> <<</Title>>>
{ "references": [ "1) How to introduce unsupervised pre-training into NLG tasks with cross-modal context?,2) How to design a generic pre-training algorithm to fit a wide range of NLG tasks?,3) How to reduce the computing resources required for large-scale pre-training?,4) What aspect of knowledge do the pre-trained models provide for better language generation?" ], "type": "extractive" }
1911.06171
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What experimental phenomena are presented? Context: <<<Title>>> Unsupervised Pre-training for Natural Language Generation: A Literature Review <<<Abstract>>> Recently, unsupervised pre-training is gaining increasing popularity in the realm of computational linguistics, thanks to its surprising success in advancing natural language understanding (NLU) and the potential to effectively exploit large-scale unlabelled corpus. However, regardless of the success in NLU, the power of unsupervised pre-training is only partially excavated when it comes to natural language generation (NLG). The major obstacle stems from an idiosyncratic nature of NLG: Texts are usually generated based on certain context, which may vary with the target applications. As a result, it is intractable to design a universal architecture for pre-training as in NLU scenarios. Moreover, retaining the knowledge learned from pre-training when learning on the target task is also a non-trivial problem. This review summarizes the recent efforts to enhance NLG systems with unsupervised pre-training, with a special focus on the methods to catalyse the integration of pre-trained models into downstream tasks. They are classified into architecture-based methods and strategy-based methods, based on their way of handling the above obstacle. Discussions are also provided to give further insights into the relationship between these two lines of work, some informative empirical phenomenons, as well as some possible directions where future work can be devoted to. <<</Abstract>>> <<<Introduction>>> Unsupervised pre-training has sparked a sensational research interest in the natural language processing (NLP) community. This technology provides a promising way to exploit linguistic information from large-scale unlabelled textual data, which can serve as an auxiliary prior knowledge to benefit a wide range of NLP applications. In the literature, language modeling (LM) is a prevalent task for pre-training, where the target words are predicted conditioned on a given context. Therefore, it is intuitive to employ the pre-trained LMs for natural language generation, as the pre-training objective naturally accords with the goal of NLG. However, revolutionary improvements are only observed in the field of NLU. The primary factor that impedes the progress of unsupervised pre-training in NLG is an idiosyncratic nature of text generation: Basically, we do not write words from scratch, but instead based on particular context, e.g., the source language sentences for translation, the dialog histories for response generation, and the visual scenes for image captioning, among others. In unsupervised pre-training, the task-specific context is not available, which leads to a discrepancy between pre-training and training in the target task. More precisely, the challenges posed by the discrepancy can be reflected in two aspects: First, the diverse context makes it intractable to design a universal representation extractor as in the case of NLU, and the pre-trained language generators may have to modify their inner structures to deal with the task-specific context. Second, the mismatch in data distribution and objective between the two training stages might result in the performance on the pre-training tasks being compromised during fine-tuning, which is dubbed as the catastrophic forgetting problem BIBREF0. 
In response to the above challenges, two lines of work are proposed by resorting to architecture-based and strategy-based solutions, respectively. Architecture-based methods either try to induce task-specific architecture during pre-training (task-specific methods), or aim at building a general pre-training architecture to fit all downstream tasks (task-agnostic methods). Strategy-based methods depart from the pre-training stage, seeking to take advantage of the pre-trained models during the process of target task learning. The approaches include fine-tuning schedules that elaborately design the control of learning rates for optimization, proxy tasks that leverage labeled data to help the pre-trained model better fit the target data distribution, and knowledge distillation approaches that ditch the paradigm of initialization with pre-trained parameters by adopting the pre-trained model as a teacher network. The remainder of this review is organized as follows: In Section SECREF2, we will introduce the background knowledge about unsupervised pre-training for NLU, followed by a sketch of how the pre-trained models are employed through parameter initialization for NLG in Section SECREF3. In Section SECREF4, we will describe the architecture-based methods, and the strategy-based methods are presented in Section SECREF5. Section SECREF6 provides some in-depth discussions, and Section SECREF7 concludes this review. <<</Introduction>>> <<<Background: Unsupervised Pre-training for NLU>>> Learning fine-grained language representations is a perennial topic in natural language understanding. In restrospect, compelling evidences suggest that good representations can be learned through unsupervised pre-training. Early work focused on word-level representations BIBREF1, BIBREF2, which encodes each word independently. For sentence-level representations, there are roughly two kinds of pre-training objectives, namely discriminative pre-training and generative pre-training. Discriminative pre-training distinguishes context sentence(s) for a given sentence from non-context sentence(s) BIBREF3, BIBREF4, with the aim to capture inter-sentence relationships. Generative pre-training follows the language model paradigm: where $x_{t}$ is the $t^{th}$ word in the textual sequence to generate, $T$ indicates sequence length, $\theta $ stands for learnable parameters, and $C$ is the context information, which is defined by the pre-training objective. ELMo BIBREF5 and GPT (short for Generative Pre-training) BIBREF6 adopt uni-directional Transformer BIBREF7 and bi-directional LSTM BIBREF8 language models, respectively. In this case, the context is defined as $x_{1:t}$ or $x_{t+1:T}$. BERT BIBREF3 is trained with a novel masked language model (MLM), which is a non-autoregressive way of generation. Specifically, MLM randomly replaces a fixed proportion of tokens in each sentence with a special [MASK] token or a random token, which results in a corrupted sentence $X_{\text{mask}}$, and predicts each replaced token based on the same context $X_{\text{mask}}$. To alleviate the inconsistency with target tasks caused by the introduction of [MASK] token, XLNet BIBREF9 introduces permutation-based language model, which conducts autoregressive language modeling over all possible permutations of the original word sequence. This gives rise to a context $C=X_{\mathbf {z}_{1:t-1}}$, where $\mathbf {z}$ is a certain permutation of $[1,2, \ldots , T]$, according to the definitions in BIBREF9. 
BIBREF10 and BIBREF11 pre-trained an encoder-decoder framework to reconstruct the input sentence and the surrounding sentence, respectively, and the encoded input sentence is thereby included in the context $C$. The sentence representations learned by LMs can be used to perform many NLU tasks by adding a simple linear classifier. Despite the objective of language modeling, the pre-trained representations have successfully pushed the state-of-the-art on multiple benchmarks. <<</Background: Unsupervised Pre-training for NLU>>> <<<Unsupervised Pre-training and Parameter Initialization for NLG>>> NLG systems are usually built with an encoder-decoder framework, where the encoder reads the context information and the decoder generates the target text from the encoded vectorial representations. A direct way to utilize the pre-trained models is to initialize part of the encoder (when dealing with textual context) and/or the decoder with pre-trained parameters. For the encoder, pre-training is expected to provide better sentence representations, as we discussed in Section SECREF2. For the decoder, the intuition is to endow the model with some rudimentary ability for text generation. BIBREF12 employed BERT as the encoder for abstractive text summarization, with some additional techniques to help integrate the BERT-initialized encoder with the randomly initialized decoder, which we will explicate in Section SECREF12. GPT-2 BIBREF13 inherited the left-to-right LM pre-training objective from GPT and extended the application to NLG, where the pre-trained LM directly serves as the language generator, with some special symbols to identify task-specific contexts. In the case of zero-shot task transfer, preliminary experiments showed that straightforward adaptation of GPT-2 compares unfavorably with other unsupervised baselines. BIBREF14 is among the first attempts to investigate unsupervised pre-training for sequence to sequence (Seq2Seq) learning. They used pre-trained LSTM-based LMs to initialize the first layer of the encoder and the decoder, which act as representation extractors. An additional LSTM layer, which is randomly initialized, is then added on top of the pre-trained LMs to build the Seq2Seq framework. To make use of the text generation ability of LMs, the output softmax layer of the decoder LM is also retained. Some recent endeavours BIBREF15, BIBREF16 explored multiple combinations of GPT- and BERT-based models to initialize the encoder and the decoder, respectively. Although remarkable results are observed, the separately pre-trained LMs are still inconsistent with the Seq2Seq framework. <<</Unsupervised Pre-training and Parameter Initialization for NLG>>> <<<Architecture-based Methods>>> <<<Inducing Task-Specific Architecture in Pre-training>>> Separately initializing the encoder and the decoder with LMs neglects the interaction between the two modules at the pre-training stage, which is sub-optimal. For NLG tasks that can be modeled as Seq2Seq learning, it is feasible to jointly pre-train the encoder and the decoder. Existing approaches for this purpose can be categorized into three variants: denoising autoencoders (DAEs), conditional masked language models (CMLMs) and sequence to sequence language models (Seq2Seq LMs). <<<Denoising Autoencoder>>> An intuitive way to conduct unsupervised Seq2Seq learning is to train an autoencoder (AE) based on the encoder-decoder framework. Different from AEs, DAEs take a corrupted sentence as input and reconstruct the original sentence.
The advantage is that the corrupted input will force the decoder to extract relevant information from the source side for text generation. To obtain the corrupted sentence, BIBREF17 designed three noising functions: shuffle, delete and replace (the left plot of Figure FIGREF4 gives an illustration), each of which is controlled by a pre-defined probability distribution. To be more specific, each token in the raw sequence is assigned a new index based on a Gaussian distribution $N(0, \sigma )$; the delete and replace operations on a token are determined by a Bernoulli distribution $B(p)$ with a Beta distribution as prior. The three functions are applied to the raw sequences in random order. <<</Denoising Autoencoder>>> <<<Conditional Masked Language Model>>> CMLM BIBREF18 extends the single-model MLM proposed by BIBREF3 to the encoder-decoder setting, where the masked text sequence is read by the encoder, and the decoder only reconstructs the masked tokens, in contrast to the entire sequence in DAEs. As the middle plot of Figure FIGREF4 shows, CMLM masks consecutive tokens, and the unmasked tokens on the encoder side are masked when being fed to the decoder. Following the notations in BIBREF18, let us assume that the tokens with indices from $u$ to $v$ are masked from the raw sentence $X$, which results in $X^{\backslash u: v}$, and $X^{u: v}$ denotes the decoder input. Then, when predicting each masked token $x_{t}$ ($u \le t \le v$), the context is $X^{u: v}_{<t}$ and $X^{\backslash u: v}$. The underlying motivation, as BIBREF18 argued, is to force the encoder to understand the meaning of the unmasked tokens, which is achieved by the encoder-side masks, and to encourage the decoder to refer to the source information rather than the leftward target tokens, which is achieved by the decoder-side masks. <<</Conditional Masked Language Model>>> <<<Sequence to Sequence Language Model>>> Seq2Seq LM BIBREF19 performs Seq2Seq modeling using a single Transformer model, with the concatenation of the source sentence and the target sentence as input. To simulate Seq2Seq learning with encoder-decoder frameworks, the attention span of each target token is constrained to the source tokens and the leftward target tokens, which is achieved by self-attention masks (see the right plot of Figure FIGREF4). In this way, the abilities to extract language representations and to generate text are integrated into a single model. It is worth mentioning that Seq2Seq LM does not auto-regressively generate the target sentence, but instead predicts masked tokens based on the contexts controlled by self-attention masks. In other words, Seq2Seq LM still belongs to the family of MLMs. Apart from Seq2Seq LM, BIBREF19 also explored uni-directional LM and bi-directional LM structures to perform the MLM-based cloze task, and incorporated the three kinds of LMs to build the final pre-training objective. <<</Sequence to Sequence Language Model>>> <<</Inducing Task-Specific Architecture in Pre-training>>> <<<Encoder-Agnostic Architectures for Adaptation>>> Although the Seq2Seq-based pre-training methods exhibit strong performance, they are confined to text-to-text generation. In order to encompass more diverse contexts, some studies began to investigate encoder-agnostic pre-training architectures BIBREF22, BIBREF23. Context Attention and Pseudo Self-Attention are two typical variants presented by BIBREF23, which differ in the way that the task-specific context is injected (see Figure FIGREF11).
Context Attention takes the form of a standard Transformer decoder, with the layer that attends to the encoder outputs being randomly initialized. Pseudo Self-Attention considers the context vectors and the previous layer decoder outputs as an integral input, and the attended results are computed as follows: where $C \in \mathbb {R}^{|C| \times d_{c}}$ and $Y \in \mathbb {R}^{|Y| \times d_{y}}$ are the context vectors and representations of the target textual sequence, respectively. The linear transformation matrices $W^{c}_{k}, W^{c}_{v} \in \mathbb {R}^{|C| \times d_{model}}$ with respect to $C$ are added to project the context to the self-attention space, and $W_{q}, W^{y}_{k}, W^{y}_{v} \in \mathbb {R}^{|Y| \times d_{model}}$ are part of the pre-trained model. Except for the performance on target tasks, an alternative metric to gauge the quality of encoder-agnostic architectures is the degree to which the pre-trained parameters have to change, in order to inject the task-specific context. BIBREF23 compared the parameter changes of Context Attention and Pseudo Self-Attention in the feed forward layer, and discovered that Pseudo Self-Attention is more robust under this evaluation. <<</Encoder-Agnostic Architectures for Adaptation>>> <<</Architecture-based Methods>>> <<<Strategy-based Methods>>> <<<Fine-tuning Schedules for Adaption>>> When the pre-trained model is only a part of the target task system, fine-tuning requires joint learning of the components initialized in different fashion, which can make the training process unstable. The pre-trained model may also suffer from aggravated catastrophic forgetting problem as it has to coordinate with other components during fine-tuning BIBREF24, BIBREF25. From the perspective of optimization, it is unreasonable to schedule the pre-trained components and the newly-introduced components with the same learning rate, considering that the former have already possessed some unique knowledge. A common assumption is that the pre-trained parameters should be updated at a slower learning rate and with smoother decay BIBREF12, BIBREF25. The rationale behind such setting is that fine-tuning with more accurate gradient can prevent the pre-trained parameters from deviating too faraway from the original point, and the newly-introduced components need to quickly converge to the target parameter space. To this end, BIBREF12 adopted two Adam optimizers with different learning rates for the pre-trained encoder and the randomly initialized decoder. The learning rates are scheduled as in BIBREF7 with different warming up steps: where ${warmup}_{\operatorname{Enc/Dec}}$ and $\tilde{l}r_{\operatorname{Enc/Dec}}$ determine the speed of learning rate changes and the max learning rates, respectively. <<</Fine-tuning Schedules for Adaption>>> <<<Proxy Tasks for Adaption>>> Large-scale unlabelled data provides generic linguistic knowledge, but the target tasks have unique data distribution and objectives. An effective way to bridge this gap is to introduce proxy tasks with moderate changes to the pre-training objectives, but at the same time take the labeled data into account BIBREF15, BIBREF20. Translation Language Modeling (TLM) BIBREF15 is a special generalization of MLM in the cross-lingual situation. It leverages the paralleled machine translation corpus for further training of the LMs that are pre-trained on monolingual corpora. 
Specifically, the source language sentence and the corresponding target language sentence are fed to the model in parallel, with random tokens from each language being masked to perform the cloze-style prediction as in MLM. Different from monolingual MLM, TLM encourages word predictions to rely on the interdependence from two languages, therefore the sentence representations learned from separate languages can be well aligned. For some particular NLG tasks, existing proxy tasks designed under the supervised setup can also work with unsupervised pre-training models. For instance, in neural text summarization, the combination of extractive and abstractive objectives can generate better summaries BIBREF26, BIBREF27. Inspired by this, BIBREF12 introduced extractive summarization as a proxy task to fine-tune the pre-trained BERT, before adopting it as the abstractive summarization encoder. Compared with the original BERT features, the representations learned from extractive summarization contain more task-specific information, therefore conveying the meaning of source texts better. <<</Proxy Tasks for Adaption>>> <<<Knowledge Distillation for Adaption>>> The aforementioned methods are diverse in implementation, but share the common idea of employing the pre-trained models through parameter initialization. An alternative way to exploit the pre-trained models is using the knowledge distillation technique BIBREF28. Knowledge distillation is a special form of training, where a student network learns from the supervision signals produced by a teacher network. Taking BERT as an example, the pre-trained MLM contains global information, which can teach the autoregressive Seq2Seq models to “see from the future” BIBREF20. In practice, the probability distribution predicted by BERT is regarded as a soft label to compute the cross-entropy loss function : where $X$, $Y$ and $Y^{masked}$ are the source sequence, the raw target sequence and the masked target sequence, respectively. $\mathcal {V}$ denotes the output vocabulary. $\theta $ indicates the parameters of the student network (Seq2Seq), which are learnable, and $\phi $ indicates the BERT parameters, which are fixed. In this way, the knowledge from unsupervised pre-training can be flexibly transferred to the target tasks, dispensing with the size and architecture limitations. The supervision can also be derived from the hidden representations BIBREF25, with a mean-squared-error (MSE) distillation loss: where $m$ and $n$ are hyper-parameters denoting the layer subscripts. Compared with the probability soft labels, the representation distillation method requires the Seq2Seq model to have the same hidden size with BERT, which is a more strict constrain. Combining the knowledge distillation loss and the standard generative loss for Seq2Seq learning gives rise to the final objective to optimize: where $\alpha $ is the weighting term that balances the contribution of the two kinds of loss functions. <<</Knowledge Distillation for Adaption>>> <<</Strategy-based Methods>>> <<<Discussions>>> <<<The Relationship between Architecture- and Strategy-based Methods>>> We have analysed two major challenges faced by the application of unsupervised pre-training to NLG (see Section SECREF1). On this basis, we introduced existing methodologies from the architecture and strategy considerations. The architecture-based methods are mainly proposed in response to the first challenge. 
Since the architecture of pre-trained model has a significant effect on the downstream task (when the pre-trained parameters are used for initialization), architecture designings have to plan in advance to narrow the discrepancy between pre-training and training on target tasks. This motivation has shown great effectiveness on the Seq2Seq framework BIBREF17, BIBREF18, BIBREF19. The strategy-based methods focus on the second challenge. They take a postprocessing point of view, with the aim to make the best of the pre-trained model at the target task training stage. It is noteworthy that the challenges are not independent inherently, and the two types of methods can actually work as complement to each other. For example, the fine-tuning schedules can alleviate the negative effects caused by the modification of pre-trained structures, and the catastrophic forgetting problem can also seek solution by devising a general task-agnostic architecture. <<</The Relationship between Architecture- and Strategy-based Methods>>> <<<Experimental Phenomenons>>> Existing researches on unsupervised pre-training for NLG are conducted on various tasks for different purposes. Probing into the assorted empirical results may help us discover some interesting phenomenons: The advantage of pre-training gradually diminishes with the increase of labeled data BIBREF14, BIBREF17, BIBREF18. Fixed representations yield better results than fine-tuning in some cases BIBREF24. Overall, pre-training the Seq2Seq encoder outperforms pre-training the decoder BIBREF24, BIBREF17, BIBREF15, BIBREF16. The first two phenomenons attest to the catastrophic forgetting theory. Thanks to the access to large-scale unlabeled corpora, unsupervised pre-training is able to excel at zero/low-shot settings, while the pre-trained models can only achieve few gains when abundant labeled data is available. This can be explained by the high quality of the dataset and the capacity of the task-specific models, which leave little space for improvement. Nonetheless, the increased supervision from labeled data can also influence the performance on pre-training tasks. By fixing the pre-trained parameters, the learned representations will not be affected by the numerous iterations of training on the target task, which makes them work better without fine-tuning. The third phenomenon is kind of counter-intuitive, as the generative pre-training objectives are more similar to the decoder's function. There is no unanimous theory to explain why the encoder is a more important element to pre-train. But this discovery suggests that the pre-trained LMs are more robust when acting as representation extractors, while they are more sensitive the the change of context when acting as conditional language generators. <<</Experimental Phenomenons>>> <<<Future Directions>>> The diversity of NLG applications poses challenges on the employment of unsupervised pre-training, yet it also raises more scientific questions for us to explore. In terms of the future development of this technology, we emphasize the importance of answering four questions: 1) How to introduce unsupervised pre-training into NLG tasks with cross-modal context? 2) How to design a generic pre-training algorithm to fit a wide range of NLG tasks? 3) How to reduce the computing resources required for large-scale pre-training? 4) What aspect of knowledge do the pre-trained models provide for better language generation? NLG tasks can be defined by the context features and mapping functions. 
The introduction of cross-lingual textual features BIBREF15 and task-specific Seq2Seq architectures BIBREF18, BIBREF17, BIBREF19 in the pre-training stage has successfully boosted the performance on text-to-text generation. For NLG tasks concerning multiple modalities, it is conceivable that pre-training methods could also benefit from the joint consideration of cross-modal features. For example, in the vision-and-language field, the learning of cross-modal representations has proven to be highly effective BIBREF29, BIBREF30, but such representations can not yet be extracted from unpaired images and texts for image-grounded text generation, to the best of our knowledge. In NLU, it is possible to pre-train one model to obtain language representations once and for all. As for NLG, a task-agnostic pre-training algorithm should transcend the purpose of representation learning, and consider the general ability for language generation. The notion of “encoder-agnostic adaption” BIBREF23 makes a preliminary step towards this direction, but still remains far from approaching the equivalent performance as its NLU counterparts BIBREF5, BIBREF3, BIBREF6, BIBREF9. Due to the colossal scale of the pre-training corpora, including a large number of parameters is essential to achieve favorable performance. As a result, the model size usually costs at least 8 GPU cards BIBREF19, BIBREF18, BIBREF15 in the pre-training for NLG systems, and it also hinders real-world applications. To reduce the memory consumption problem, existing work resorted to knowledge distillation to transfer the knowledge from a large teacher network to a small student network BIBREF31, BIBREF32, or parameter reduction techniques to prune the model size in a more direct way BIBREF33. However, the research context is limited to the NLU scenarios, and same endeavours are necessary to NLG applications. Another important branch of researches on unsupervised pre-training in NLP try to explain what kind of knowledge can be learned from pre-training. Related work has been done on the basis of both language understanding BIBREF34, BIBREF35 and generation BIBREF36. Specially, BIBREF36 analysed the characters of texts generated from a pre-trained GPT-2 by evaluating them over a wide spectrum of metrics. We argue that deeper understanding the way in which unsupervised pre-training contributes to better text generation, and the intrinsic mechanisms of the pre-trained models are also crucial to future work. <<</Future Directions>>> <<</Discussions>>> <<<Conclusion>>> Unsupervised pre-training has defined the state-of-the-arts on a variety NLP tasks. However, in the field of NLG, the diversity of context information is still impeding the the application of unsupervised pre-training. The major challenges exist in designing model architectures to cater for the assorted context, and retaining the general knowledge learned from pre-training. In this review, we survey the recent unsupervised methods to utilize large-scale corpora for NLG purposes, with a highlight on those aiming at facilitating the integration of pre-trained models with downstream tasks. We propose to classify them into architecture- and strategy-based methods, followed with detailed introductions and discussions of their pros and cons. 
Based on the comparison of these methods and analyses of some informative experimental results from previous publications, we summarize several scientific questions that have not yet been well understood, and suggest that future work pay more attention to these questions. <<</Conclusion>>> <<</Title>>>
{ "references": [ "The advantage of pre-training gradually diminishes with the increase of labeled data,Fixed representations yield better results than fine-tuning in some cases,pre-training the Seq2Seq encoder outperforms pre-training the decoder" ], "type": "extractive" }
1911.06171
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: How strategy-based methods handle obstacles in NLG? Context: <<<Title>>> Unsupervised Pre-training for Natural Language Generation: A Literature Review <<<Abstract>>> Recently, unsupervised pre-training is gaining increasing popularity in the realm of computational linguistics, thanks to its surprising success in advancing natural language understanding (NLU) and the potential to effectively exploit large-scale unlabelled corpus. However, regardless of the success in NLU, the power of unsupervised pre-training is only partially excavated when it comes to natural language generation (NLG). The major obstacle stems from an idiosyncratic nature of NLG: Texts are usually generated based on certain context, which may vary with the target applications. As a result, it is intractable to design a universal architecture for pre-training as in NLU scenarios. Moreover, retaining the knowledge learned from pre-training when learning on the target task is also a non-trivial problem. This review summarizes the recent efforts to enhance NLG systems with unsupervised pre-training, with a special focus on the methods to catalyse the integration of pre-trained models into downstream tasks. They are classified into architecture-based methods and strategy-based methods, based on their way of handling the above obstacle. Discussions are also provided to give further insights into the relationship between these two lines of work, some informative empirical phenomenons, as well as some possible directions where future work can be devoted to. <<</Abstract>>> <<<Introduction>>> Unsupervised pre-training has sparked a sensational research interest in the natural language processing (NLP) community. This technology provides a promising way to exploit linguistic information from large-scale unlabelled textual data, which can serve as an auxiliary prior knowledge to benefit a wide range of NLP applications. In the literature, language modeling (LM) is a prevalent task for pre-training, where the target words are predicted conditioned on a given context. Therefore, it is intuitive to employ the pre-trained LMs for natural language generation, as the pre-training objective naturally accords with the goal of NLG. However, revolutionary improvements are only observed in the field of NLU. The primary factor that impedes the progress of unsupervised pre-training in NLG is an idiosyncratic nature of text generation: Basically, we do not write words from scratch, but instead based on particular context, e.g., the source language sentences for translation, the dialog histories for response generation, and the visual scenes for image captioning, among others. In unsupervised pre-training, the task-specific context is not available, which leads to a discrepancy between pre-training and training in the target task. More precisely, the challenges posed by the discrepancy can be reflected in two aspects: First, the diverse context makes it intractable to design a universal representation extractor as in the case of NLU, and the pre-trained language generators may have to modify their inner structures to deal with the task-specific context. Second, the mismatch in data distribution and objective between the two training stages might result in the performance on the pre-training tasks being compromised during fine-tuning, which is dubbed as the catastrophic forgetting problem BIBREF0. 
In response to the above challenges, two lines of work are proposed by resorting to architecture-based and strategy-based solutions, respectively. Architecture-based methods either try to induce task-specific architecture during pre-training (task-specific methods), or aim at building a general pre-training architecture to fit all downstream tasks (task-agnostic methods). Strategy-based methods depart from the pre-training stage, seeking to take advantage of the pre-trained models during the process of target task learning. The approaches include fine-tuning schedules that elaborately design the control of learning rates for optimization, proxy tasks that leverage labeled data to help the pre-trained model better fit the target data distribution, and knowledge distillation approaches that ditch the paradigm of initialization with pre-trained parameters by adopting the pre-trained model as a teacher network. The remainder of this review is organized as follows: In Section SECREF2, we will introduce the background knowledge about unsupervised pre-training for NLU, followed by a sketch of how the pre-trained models are employed through parameter initialization for NLG in Section SECREF3. In Section SECREF4, we will describe the architecture-based methods, and the strategy-based methods are presented in Section SECREF5. Section SECREF6 provides some in-depth discussions, and Section SECREF7 concludes this review. <<</Introduction>>> <<<Background: Unsupervised Pre-training for NLU>>> Learning fine-grained language representations is a perennial topic in natural language understanding. In restrospect, compelling evidences suggest that good representations can be learned through unsupervised pre-training. Early work focused on word-level representations BIBREF1, BIBREF2, which encodes each word independently. For sentence-level representations, there are roughly two kinds of pre-training objectives, namely discriminative pre-training and generative pre-training. Discriminative pre-training distinguishes context sentence(s) for a given sentence from non-context sentence(s) BIBREF3, BIBREF4, with the aim to capture inter-sentence relationships. Generative pre-training follows the language model paradigm: where $x_{t}$ is the $t^{th}$ word in the textual sequence to generate, $T$ indicates sequence length, $\theta $ stands for learnable parameters, and $C$ is the context information, which is defined by the pre-training objective. ELMo BIBREF5 and GPT (short for Generative Pre-training) BIBREF6 adopt uni-directional Transformer BIBREF7 and bi-directional LSTM BIBREF8 language models, respectively. In this case, the context is defined as $x_{1:t}$ or $x_{t+1:T}$. BERT BIBREF3 is trained with a novel masked language model (MLM), which is a non-autoregressive way of generation. Specifically, MLM randomly replaces a fixed proportion of tokens in each sentence with a special [MASK] token or a random token, which results in a corrupted sentence $X_{\text{mask}}$, and predicts each replaced token based on the same context $X_{\text{mask}}$. To alleviate the inconsistency with target tasks caused by the introduction of [MASK] token, XLNet BIBREF9 introduces permutation-based language model, which conducts autoregressive language modeling over all possible permutations of the original word sequence. This gives rise to a context $C=X_{\mathbf {z}_{1:t-1}}$, where $\mathbf {z}$ is a certain permutation of $[1,2, \ldots , T]$, according to the definitions in BIBREF9. 
BIBREF10 and BIBREF11 pre-trained an encoder-decoder framework to reconstruct the input sentence and the surrounding sentence, respectively, and the encoded input sentence thereby is included in the context $C$. The sentence representations learned by LMs can be used to perform many NLU tasks by adding a simple linear classifier. Despite the objective of language modeling, the pre-trained representations and have successfuly pushed the state-of-the-art on multiple benchmarks . <<</Background: Unsupervised Pre-training for NLU>>> <<<Unsupervised Pre-training and Parameter Initialization for NLG>>> NLG systems are usually built with an encoder-decoder framework, where the encoder reads the context information and the decoder generates the target text from the encoded vectorial representations. A direct way to utilize the pre-trained models is to initialize part of the encoder (when dealing with textual context) and/or the decoder with pre-trained parameters. For the encoder, pre-training is expected to provide better sentence representations, as we discussed in Section SECREF2. For the decoder, the intuition is to endow the model with some rudimentary ability for text generation. BIBREF12 employed BERT as the encoder for abstractive text summarization, with some additional techniques to help integrate the BERT-initialized encoder with the randomly initialized decoder, which we will explicate in Section SECREF12. GPT-2 BIBREF13 inherited the left-to-right LM pre-training objective from GPT and extended the application to NLG, where the pre-trained LM directly serves as the language generator, with some special symbols to identify task-specific contexts. In the case of zero-shot task transfer, preliminary experiments showed that straightforward adaption of GPT-2 compares unfavorably with other unsupervised baselines. BIBREF14 is among the first attempts to investigate unsupervised pre-training for sequence to sequence (Seq2Seq) learning. They used pre-trained LSTM-based LMs to initialize the first layer of the encoder and the decoder, which act as representation extractors. An additional LSTM layer, which is randomly initialized, is then added on top of the pre-trained LMs to build the Seq2Seq framework. To make use of the text generation ability of LMs, the output softmax layer of the decoder LM is also retained. Some recent endeavours BIBREF15, BIBREF16 explored multiple combinations of GPT- and BERT-based models to initialize the encoder and the decoder, respectively. Although remarkable results are observed, the separately pre-trained LMs are still inconsistent with the Seq2Seq framework. <<</Unsupervised Pre-training and Parameter Initialization for NLG>>> <<<Architecture-based Methods>>> <<<Inducing Task-Specific Architecture in Pre-training>>> Separately initializing the encoder and the decoder with LMs neglects the interaction between the two modules at the pre-training stage, which is sub-optimal. For NLG tasks that can be modeled as Seq2Seq learning, it is feasible to jointly pre-train the encoder and the decoder. Existing approaches for this sake can be categorized into three variants: Denoising autoencoders (DAEs), conditional masked language models (CMLMs) and sequence to sequence language models (Seq2Seq LMs). <<<Denoising Autoencoder>>> An intuitive way to conduct unsupervised Seq2Seq learning is to train an autoencoder (AE) based on encoder-decoder framework. Different from AEs, DAEs take a corrupted sentence as input and reconstruct the original sentence. 
The advantage is that the corrupted input will force the decoder to extract relevant information from the source side for text generation. To obtain the corrupted sentence, BIBREF17 designed three noising functions: shuffle, delete and replace (the left plot of Figure FIGREF4 gives an illustration), each of which is controlled by a pre-defined probability distribution. To be more specific, each token in the raw sequence is assigned with a new index based on a gaussion distribution $N(0, \sigma )$; the delete and replace operations of a token are determined by a Bernoulli distribution $B(p)$ with Beta distribution as prior. The three functions are applied to the raw sequences in random order. <<</Denoising Autoencoder>>> <<<Conditional Masked Language Model>>> CMLM BIBREF18 extends the single model MLM proposed by BIBREF3 to the encoder-decoder setting, where the masked text sequence is read by the encoder, and the decoder only reconstructs the masked tokens, in construct to the entire sequence in DAEs. As the middle plot of Figure FIGREF4 shows, CMLM masks consecutive tokens , and the unmasked tokens in the encoder side are masked when being feed to the decoder. Following the notations in BIBREF18, let us assume that the tokens with index from $u$ to $v$ are masked from the raw sentence $X$, which results in $X^{\backslash u: v}$, and $X^{u: v}$ denotes the decoder input. Then, when predicting each masked token $x_{t}$ ($u \le t \le v$), the context is $X^{u: v}_{<t}$ and $X^{\backslash u: v}$. The underlying motivation, as BIBREF18 argued, is to force the encoder to understand the meaning of the unmasked tokens, which is achieved by encoder side masks, and encourage the decoder to refer to the source information rather than the leftward target tokens, which is achieved by decoder side masks. <<</Conditional Masked Language Model>>> <<<Sequence to Sequence Language Model>>> Seq2Seq LM BIBREF19 performs Seq2Seq modeling using a single Transformer model, with the concatenation of source sentence and target sentence as input. To simulate Seq2Seq learning with encoder-decoder frameworks, the attention span of each target token is constrained to the source tokens and the leftward target tokens, which is achieved by self-attention masks (see the right plot of Figure FIGREF4). In this way, the ability to extract language representation and generate texts are integrated into a single model. It is worth mentioning that Seq2Seq LM does not auto-regressively generate the target sentence, but instead predicting masked tokens based on the contexts controlled by self-attention masks. In other words, Seq2Seq LM still belongs into the family of MLMs. Apart from Seq2Seq LM, BIBREF19 also explored uni-directional LM and bi-directional LM structures to perform the MLM-based cloze task, and incorporated the three kinds of LMs to build the final pre-training objective. <<</Sequence to Sequence Language Model>>> <<</Inducing Task-Specific Architecture in Pre-training>>> <<<Encoder-Agnostic Architectures for Adaptation>>> Although the Seq2Seq-based pre-training methods exhibit strong performance, they are confined to text-to-text generation. In order to encompass more diverse contexts, some researches began to investigate encoder-agnostic pre-training architectures BIBREF22, BIBREF23. Context Attention and Pseudo Self-Attention are two typical variants presented by BIBREF23, which differ in the way that the task-specific context is injected (see Figure FIGREF11). 
Context Attention takes the form of a standard Transformer decoder, with the layer that attends to the encoder outputs being randomly initialized. Pseudo Self-Attention considers the context vectors and the previous layer decoder outputs as an integral input, and the attended results are computed as follows: where $C \in \mathbb {R}^{|C| \times d_{c}}$ and $Y \in \mathbb {R}^{|Y| \times d_{y}}$ are the context vectors and representations of the target textual sequence, respectively. The linear transformation matrices $W^{c}_{k}, W^{c}_{v} \in \mathbb {R}^{|C| \times d_{model}}$ with respect to $C$ are added to project the context to the self-attention space, and $W_{q}, W^{y}_{k}, W^{y}_{v} \in \mathbb {R}^{|Y| \times d_{model}}$ are part of the pre-trained model. Except for the performance on target tasks, an alternative metric to gauge the quality of encoder-agnostic architectures is the degree to which the pre-trained parameters have to change, in order to inject the task-specific context. BIBREF23 compared the parameter changes of Context Attention and Pseudo Self-Attention in the feed forward layer, and discovered that Pseudo Self-Attention is more robust under this evaluation. <<</Encoder-Agnostic Architectures for Adaptation>>> <<</Architecture-based Methods>>> <<<Strategy-based Methods>>> <<<Fine-tuning Schedules for Adaption>>> When the pre-trained model is only a part of the target task system, fine-tuning requires joint learning of the components initialized in different fashion, which can make the training process unstable. The pre-trained model may also suffer from aggravated catastrophic forgetting problem as it has to coordinate with other components during fine-tuning BIBREF24, BIBREF25. From the perspective of optimization, it is unreasonable to schedule the pre-trained components and the newly-introduced components with the same learning rate, considering that the former have already possessed some unique knowledge. A common assumption is that the pre-trained parameters should be updated at a slower learning rate and with smoother decay BIBREF12, BIBREF25. The rationale behind such setting is that fine-tuning with more accurate gradient can prevent the pre-trained parameters from deviating too faraway from the original point, and the newly-introduced components need to quickly converge to the target parameter space. To this end, BIBREF12 adopted two Adam optimizers with different learning rates for the pre-trained encoder and the randomly initialized decoder. The learning rates are scheduled as in BIBREF7 with different warming up steps: where ${warmup}_{\operatorname{Enc/Dec}}$ and $\tilde{l}r_{\operatorname{Enc/Dec}}$ determine the speed of learning rate changes and the max learning rates, respectively. <<</Fine-tuning Schedules for Adaption>>> <<<Proxy Tasks for Adaption>>> Large-scale unlabelled data provides generic linguistic knowledge, but the target tasks have unique data distribution and objectives. An effective way to bridge this gap is to introduce proxy tasks with moderate changes to the pre-training objectives, but at the same time take the labeled data into account BIBREF15, BIBREF20. Translation Language Modeling (TLM) BIBREF15 is a special generalization of MLM in the cross-lingual situation. It leverages the paralleled machine translation corpus for further training of the LMs that are pre-trained on monolingual corpora. 
Specifically, the source language sentence and the corresponding target language sentence are fed to the model in parallel, with random tokens from each language being masked to perform the cloze-style prediction as in MLM. Different from monolingual MLM, TLM encourages word predictions to rely on the interdependence from two languages, therefore the sentence representations learned from separate languages can be well aligned. For some particular NLG tasks, existing proxy tasks designed under the supervised setup can also work with unsupervised pre-training models. For instance, in neural text summarization, the combination of extractive and abstractive objectives can generate better summaries BIBREF26, BIBREF27. Inspired by this, BIBREF12 introduced extractive summarization as a proxy task to fine-tune the pre-trained BERT, before adopting it as the abstractive summarization encoder. Compared with the original BERT features, the representations learned from extractive summarization contain more task-specific information, therefore conveying the meaning of source texts better. <<</Proxy Tasks for Adaption>>> <<<Knowledge Distillation for Adaption>>> The aforementioned methods are diverse in implementation, but share the common idea of employing the pre-trained models through parameter initialization. An alternative way to exploit the pre-trained models is using the knowledge distillation technique BIBREF28. Knowledge distillation is a special form of training, where a student network learns from the supervision signals produced by a teacher network. Taking BERT as an example, the pre-trained MLM contains global information, which can teach the autoregressive Seq2Seq models to “see from the future” BIBREF20. In practice, the probability distribution predicted by BERT is regarded as a soft label to compute the cross-entropy loss function : where $X$, $Y$ and $Y^{masked}$ are the source sequence, the raw target sequence and the masked target sequence, respectively. $\mathcal {V}$ denotes the output vocabulary. $\theta $ indicates the parameters of the student network (Seq2Seq), which are learnable, and $\phi $ indicates the BERT parameters, which are fixed. In this way, the knowledge from unsupervised pre-training can be flexibly transferred to the target tasks, dispensing with the size and architecture limitations. The supervision can also be derived from the hidden representations BIBREF25, with a mean-squared-error (MSE) distillation loss: where $m$ and $n$ are hyper-parameters denoting the layer subscripts. Compared with the probability soft labels, the representation distillation method requires the Seq2Seq model to have the same hidden size with BERT, which is a more strict constrain. Combining the knowledge distillation loss and the standard generative loss for Seq2Seq learning gives rise to the final objective to optimize: where $\alpha $ is the weighting term that balances the contribution of the two kinds of loss functions. <<</Knowledge Distillation for Adaption>>> <<</Strategy-based Methods>>> <<<Discussions>>> <<<The Relationship between Architecture- and Strategy-based Methods>>> We have analysed two major challenges faced by the application of unsupervised pre-training to NLG (see Section SECREF1). On this basis, we introduced existing methodologies from the architecture and strategy considerations. The architecture-based methods are mainly proposed in response to the first challenge. 
Since the architecture of pre-trained model has a significant effect on the downstream task (when the pre-trained parameters are used for initialization), architecture designings have to plan in advance to narrow the discrepancy between pre-training and training on target tasks. This motivation has shown great effectiveness on the Seq2Seq framework BIBREF17, BIBREF18, BIBREF19. The strategy-based methods focus on the second challenge. They take a postprocessing point of view, with the aim to make the best of the pre-trained model at the target task training stage. It is noteworthy that the challenges are not independent inherently, and the two types of methods can actually work as complement to each other. For example, the fine-tuning schedules can alleviate the negative effects caused by the modification of pre-trained structures, and the catastrophic forgetting problem can also seek solution by devising a general task-agnostic architecture. <<</The Relationship between Architecture- and Strategy-based Methods>>> <<<Experimental Phenomenons>>> Existing researches on unsupervised pre-training for NLG are conducted on various tasks for different purposes. Probing into the assorted empirical results may help us discover some interesting phenomenons: The advantage of pre-training gradually diminishes with the increase of labeled data BIBREF14, BIBREF17, BIBREF18. Fixed representations yield better results than fine-tuning in some cases BIBREF24. Overall, pre-training the Seq2Seq encoder outperforms pre-training the decoder BIBREF24, BIBREF17, BIBREF15, BIBREF16. The first two phenomenons attest to the catastrophic forgetting theory. Thanks to the access to large-scale unlabeled corpora, unsupervised pre-training is able to excel at zero/low-shot settings, while the pre-trained models can only achieve few gains when abundant labeled data is available. This can be explained by the high quality of the dataset and the capacity of the task-specific models, which leave little space for improvement. Nonetheless, the increased supervision from labeled data can also influence the performance on pre-training tasks. By fixing the pre-trained parameters, the learned representations will not be affected by the numerous iterations of training on the target task, which makes them work better without fine-tuning. The third phenomenon is kind of counter-intuitive, as the generative pre-training objectives are more similar to the decoder's function. There is no unanimous theory to explain why the encoder is a more important element to pre-train. But this discovery suggests that the pre-trained LMs are more robust when acting as representation extractors, while they are more sensitive the the change of context when acting as conditional language generators. <<</Experimental Phenomenons>>> <<<Future Directions>>> The diversity of NLG applications poses challenges on the employment of unsupervised pre-training, yet it also raises more scientific questions for us to explore. In terms of the future development of this technology, we emphasize the importance of answering four questions: 1) How to introduce unsupervised pre-training into NLG tasks with cross-modal context? 2) How to design a generic pre-training algorithm to fit a wide range of NLG tasks? 3) How to reduce the computing resources required for large-scale pre-training? 4) What aspect of knowledge do the pre-trained models provide for better language generation? NLG tasks can be defined by the context features and mapping functions. 
The introduction of cross-lingual textual features BIBREF15 and task-specific Seq2Seq architectures BIBREF18, BIBREF17, BIBREF19 in the pre-training stage has successfully boosted the performance on text-to-text generation. For NLG tasks concerning multiple modalities, it is conceivable that pre-training methods could also benefit from the joint consideration of cross-modal features. For example, in the vision-and-language field, the learning of cross-modal representations has proven to be highly effective BIBREF29, BIBREF30, but such representations can not yet be extracted from unpaired images and texts for image-grounded text generation, to the best of our knowledge. In NLU, it is possible to pre-train one model to obtain language representations once and for all. As for NLG, a task-agnostic pre-training algorithm should transcend the purpose of representation learning, and consider the general ability for language generation. The notion of “encoder-agnostic adaption” BIBREF23 makes a preliminary step towards this direction, but still remains far from approaching the equivalent performance as its NLU counterparts BIBREF5, BIBREF3, BIBREF6, BIBREF9. Due to the colossal scale of the pre-training corpora, including a large number of parameters is essential to achieve favorable performance. As a result, the model size usually costs at least 8 GPU cards BIBREF19, BIBREF18, BIBREF15 in the pre-training for NLG systems, and it also hinders real-world applications. To reduce the memory consumption problem, existing work resorted to knowledge distillation to transfer the knowledge from a large teacher network to a small student network BIBREF31, BIBREF32, or parameter reduction techniques to prune the model size in a more direct way BIBREF33. However, the research context is limited to the NLU scenarios, and same endeavours are necessary to NLG applications. Another important branch of researches on unsupervised pre-training in NLP try to explain what kind of knowledge can be learned from pre-training. Related work has been done on the basis of both language understanding BIBREF34, BIBREF35 and generation BIBREF36. Specially, BIBREF36 analysed the characters of texts generated from a pre-trained GPT-2 by evaluating them over a wide spectrum of metrics. We argue that deeper understanding the way in which unsupervised pre-training contributes to better text generation, and the intrinsic mechanisms of the pre-trained models are also crucial to future work. <<</Future Directions>>> <<</Discussions>>> <<<Conclusion>>> Unsupervised pre-training has defined the state-of-the-arts on a variety NLP tasks. However, in the field of NLG, the diversity of context information is still impeding the the application of unsupervised pre-training. The major challenges exist in designing model architectures to cater for the assorted context, and retaining the general knowledge learned from pre-training. In this review, we survey the recent unsupervised methods to utilize large-scale corpora for NLG purposes, with a highlight on those aiming at facilitating the integration of pre-trained models with downstream tasks. We propose to classify them into architecture- and strategy-based methods, followed with detailed introductions and discussions of their pros and cons. 
Based on the comparison of these methods and analyses of some informative experimental results from previous publications, we summarize some scientific questions that have not yet been well understood, and suggest that future work pay closer attention to these questions. <<</Conclusion>>> <<</Title>>>
{ "references": [ "fine-tuning schedules that elaborately design the control of learning rates for optimization,proxy tasks that leverage labeled data to help the pre-trained model better fit the target data distribution,knowledge distillation approaches that ditch the paradigm of initialization with pre-trained parameters by adopting the pre-trained model as a teacher network" ], "type": "extractive" }
1911.06171
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: How architecture-based method handle obstacles in NLG? Context: <<<Title>>> Unsupervised Pre-training for Natural Language Generation: A Literature Review <<<Abstract>>> Recently, unsupervised pre-training is gaining increasing popularity in the realm of computational linguistics, thanks to its surprising success in advancing natural language understanding (NLU) and the potential to effectively exploit large-scale unlabelled corpus. However, regardless of the success in NLU, the power of unsupervised pre-training is only partially excavated when it comes to natural language generation (NLG). The major obstacle stems from an idiosyncratic nature of NLG: Texts are usually generated based on certain context, which may vary with the target applications. As a result, it is intractable to design a universal architecture for pre-training as in NLU scenarios. Moreover, retaining the knowledge learned from pre-training when learning on the target task is also a non-trivial problem. This review summarizes the recent efforts to enhance NLG systems with unsupervised pre-training, with a special focus on the methods to catalyse the integration of pre-trained models into downstream tasks. They are classified into architecture-based methods and strategy-based methods, based on their way of handling the above obstacle. Discussions are also provided to give further insights into the relationship between these two lines of work, some informative empirical phenomenons, as well as some possible directions where future work can be devoted to. <<</Abstract>>> <<<Introduction>>> Unsupervised pre-training has sparked a sensational research interest in the natural language processing (NLP) community. This technology provides a promising way to exploit linguistic information from large-scale unlabelled textual data, which can serve as an auxiliary prior knowledge to benefit a wide range of NLP applications. In the literature, language modeling (LM) is a prevalent task for pre-training, where the target words are predicted conditioned on a given context. Therefore, it is intuitive to employ the pre-trained LMs for natural language generation, as the pre-training objective naturally accords with the goal of NLG. However, revolutionary improvements are only observed in the field of NLU. The primary factor that impedes the progress of unsupervised pre-training in NLG is an idiosyncratic nature of text generation: Basically, we do not write words from scratch, but instead based on particular context, e.g., the source language sentences for translation, the dialog histories for response generation, and the visual scenes for image captioning, among others. In unsupervised pre-training, the task-specific context is not available, which leads to a discrepancy between pre-training and training in the target task. More precisely, the challenges posed by the discrepancy can be reflected in two aspects: First, the diverse context makes it intractable to design a universal representation extractor as in the case of NLU, and the pre-trained language generators may have to modify their inner structures to deal with the task-specific context. Second, the mismatch in data distribution and objective between the two training stages might result in the performance on the pre-training tasks being compromised during fine-tuning, which is dubbed as the catastrophic forgetting problem BIBREF0. 
In response to the above challenges, two lines of work are proposed by resorting to architecture-based and strategy-based solutions, respectively. Architecture-based methods either try to induce task-specific architecture during pre-training (task-specific methods), or aim at building a general pre-training architecture to fit all downstream tasks (task-agnostic methods). Strategy-based methods depart from the pre-training stage, seeking to take advantage of the pre-trained models during the process of target task learning. The approaches include fine-tuning schedules that elaborately design the control of learning rates for optimization, proxy tasks that leverage labeled data to help the pre-trained model better fit the target data distribution, and knowledge distillation approaches that ditch the paradigm of initialization with pre-trained parameters by adopting the pre-trained model as a teacher network. The remainder of this review is organized as follows: In Section SECREF2, we will introduce the background knowledge about unsupervised pre-training for NLU, followed by a sketch of how the pre-trained models are employed through parameter initialization for NLG in Section SECREF3. In Section SECREF4, we will describe the architecture-based methods, and the strategy-based methods are presented in Section SECREF5. Section SECREF6 provides some in-depth discussions, and Section SECREF7 concludes this review. <<</Introduction>>> <<<Background: Unsupervised Pre-training for NLU>>> Learning fine-grained language representations is a perennial topic in natural language understanding. In restrospect, compelling evidences suggest that good representations can be learned through unsupervised pre-training. Early work focused on word-level representations BIBREF1, BIBREF2, which encodes each word independently. For sentence-level representations, there are roughly two kinds of pre-training objectives, namely discriminative pre-training and generative pre-training. Discriminative pre-training distinguishes context sentence(s) for a given sentence from non-context sentence(s) BIBREF3, BIBREF4, with the aim to capture inter-sentence relationships. Generative pre-training follows the language model paradigm: where $x_{t}$ is the $t^{th}$ word in the textual sequence to generate, $T$ indicates sequence length, $\theta $ stands for learnable parameters, and $C$ is the context information, which is defined by the pre-training objective. ELMo BIBREF5 and GPT (short for Generative Pre-training) BIBREF6 adopt uni-directional Transformer BIBREF7 and bi-directional LSTM BIBREF8 language models, respectively. In this case, the context is defined as $x_{1:t}$ or $x_{t+1:T}$. BERT BIBREF3 is trained with a novel masked language model (MLM), which is a non-autoregressive way of generation. Specifically, MLM randomly replaces a fixed proportion of tokens in each sentence with a special [MASK] token or a random token, which results in a corrupted sentence $X_{\text{mask}}$, and predicts each replaced token based on the same context $X_{\text{mask}}$. To alleviate the inconsistency with target tasks caused by the introduction of [MASK] token, XLNet BIBREF9 introduces permutation-based language model, which conducts autoregressive language modeling over all possible permutations of the original word sequence. This gives rise to a context $C=X_{\mathbf {z}_{1:t-1}}$, where $\mathbf {z}$ is a certain permutation of $[1,2, \ldots , T]$, according to the definitions in BIBREF9. 
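For concreteness, the masked language model corruption described above can be sketched in a few lines of Python. This is a minimal illustration rather than a reference implementation; the 80%/10%/10% split among [MASK], random token and unchanged token follows the original BERT recipe and is not spelled out in the paragraph above.

import random

def corrupt_for_mlm(tokens, vocab, mask_prob=0.15, mask_token="[MASK]", rng=random):
    """Corrupt a token sequence for masked-LM pre-training.
    Returns the corrupted sequence X_mask and a dict mapping the selected
    positions to the original tokens that the model must predict."""
    corrupted = list(tokens)
    targets = {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:              # a fixed proportion of tokens is selected
            r = rng.random()
            if r < 0.8:
                corrupted[i] = mask_token         # replaced by the special [MASK] token
            elif r < 0.9:
                corrupted[i] = rng.choice(vocab)  # replaced by a random vocabulary token
            # else: left unchanged (BERT keeps 10% of the selected tokens intact)
            targets[i] = tok                      # predict the original token at this position
    return corrupted, targets

# Usage: every selected position is predicted from the same corrupted context X_mask.
tokens = "the pre-trained model reads the corrupted sentence".split()
x_mask, targets = corrupt_for_mlm(tokens, vocab=tokens)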
BIBREF10 and BIBREF11 pre-trained an encoder-decoder framework to reconstruct the input sentence and the surrounding sentence, respectively, and the encoded input sentence thereby is included in the context $C$. The sentence representations learned by LMs can be used to perform many NLU tasks by adding a simple linear classifier. Despite the objective of language modeling, the pre-trained representations and have successfuly pushed the state-of-the-art on multiple benchmarks . <<</Background: Unsupervised Pre-training for NLU>>> <<<Unsupervised Pre-training and Parameter Initialization for NLG>>> NLG systems are usually built with an encoder-decoder framework, where the encoder reads the context information and the decoder generates the target text from the encoded vectorial representations. A direct way to utilize the pre-trained models is to initialize part of the encoder (when dealing with textual context) and/or the decoder with pre-trained parameters. For the encoder, pre-training is expected to provide better sentence representations, as we discussed in Section SECREF2. For the decoder, the intuition is to endow the model with some rudimentary ability for text generation. BIBREF12 employed BERT as the encoder for abstractive text summarization, with some additional techniques to help integrate the BERT-initialized encoder with the randomly initialized decoder, which we will explicate in Section SECREF12. GPT-2 BIBREF13 inherited the left-to-right LM pre-training objective from GPT and extended the application to NLG, where the pre-trained LM directly serves as the language generator, with some special symbols to identify task-specific contexts. In the case of zero-shot task transfer, preliminary experiments showed that straightforward adaption of GPT-2 compares unfavorably with other unsupervised baselines. BIBREF14 is among the first attempts to investigate unsupervised pre-training for sequence to sequence (Seq2Seq) learning. They used pre-trained LSTM-based LMs to initialize the first layer of the encoder and the decoder, which act as representation extractors. An additional LSTM layer, which is randomly initialized, is then added on top of the pre-trained LMs to build the Seq2Seq framework. To make use of the text generation ability of LMs, the output softmax layer of the decoder LM is also retained. Some recent endeavours BIBREF15, BIBREF16 explored multiple combinations of GPT- and BERT-based models to initialize the encoder and the decoder, respectively. Although remarkable results are observed, the separately pre-trained LMs are still inconsistent with the Seq2Seq framework. <<</Unsupervised Pre-training and Parameter Initialization for NLG>>> <<<Architecture-based Methods>>> <<<Inducing Task-Specific Architecture in Pre-training>>> Separately initializing the encoder and the decoder with LMs neglects the interaction between the two modules at the pre-training stage, which is sub-optimal. For NLG tasks that can be modeled as Seq2Seq learning, it is feasible to jointly pre-train the encoder and the decoder. Existing approaches for this sake can be categorized into three variants: Denoising autoencoders (DAEs), conditional masked language models (CMLMs) and sequence to sequence language models (Seq2Seq LMs). <<<Denoising Autoencoder>>> An intuitive way to conduct unsupervised Seq2Seq learning is to train an autoencoder (AE) based on encoder-decoder framework. Different from AEs, DAEs take a corrupted sentence as input and reconstruct the original sentence. 
The advantage is that the corrupted input will force the decoder to extract relevant information from the source side for text generation. To obtain the corrupted sentence, BIBREF17 designed three noising functions: shuffle, delete and replace (the left plot of Figure FIGREF4 gives an illustration), each of which is controlled by a pre-defined probability distribution. To be more specific, each token in the raw sequence is assigned with a new index based on a gaussion distribution $N(0, \sigma )$; the delete and replace operations of a token are determined by a Bernoulli distribution $B(p)$ with Beta distribution as prior. The three functions are applied to the raw sequences in random order. <<</Denoising Autoencoder>>> <<<Conditional Masked Language Model>>> CMLM BIBREF18 extends the single model MLM proposed by BIBREF3 to the encoder-decoder setting, where the masked text sequence is read by the encoder, and the decoder only reconstructs the masked tokens, in construct to the entire sequence in DAEs. As the middle plot of Figure FIGREF4 shows, CMLM masks consecutive tokens , and the unmasked tokens in the encoder side are masked when being feed to the decoder. Following the notations in BIBREF18, let us assume that the tokens with index from $u$ to $v$ are masked from the raw sentence $X$, which results in $X^{\backslash u: v}$, and $X^{u: v}$ denotes the decoder input. Then, when predicting each masked token $x_{t}$ ($u \le t \le v$), the context is $X^{u: v}_{<t}$ and $X^{\backslash u: v}$. The underlying motivation, as BIBREF18 argued, is to force the encoder to understand the meaning of the unmasked tokens, which is achieved by encoder side masks, and encourage the decoder to refer to the source information rather than the leftward target tokens, which is achieved by decoder side masks. <<</Conditional Masked Language Model>>> <<<Sequence to Sequence Language Model>>> Seq2Seq LM BIBREF19 performs Seq2Seq modeling using a single Transformer model, with the concatenation of source sentence and target sentence as input. To simulate Seq2Seq learning with encoder-decoder frameworks, the attention span of each target token is constrained to the source tokens and the leftward target tokens, which is achieved by self-attention masks (see the right plot of Figure FIGREF4). In this way, the ability to extract language representation and generate texts are integrated into a single model. It is worth mentioning that Seq2Seq LM does not auto-regressively generate the target sentence, but instead predicting masked tokens based on the contexts controlled by self-attention masks. In other words, Seq2Seq LM still belongs into the family of MLMs. Apart from Seq2Seq LM, BIBREF19 also explored uni-directional LM and bi-directional LM structures to perform the MLM-based cloze task, and incorporated the three kinds of LMs to build the final pre-training objective. <<</Sequence to Sequence Language Model>>> <<</Inducing Task-Specific Architecture in Pre-training>>> <<<Encoder-Agnostic Architectures for Adaptation>>> Although the Seq2Seq-based pre-training methods exhibit strong performance, they are confined to text-to-text generation. In order to encompass more diverse contexts, some researches began to investigate encoder-agnostic pre-training architectures BIBREF22, BIBREF23. Context Attention and Pseudo Self-Attention are two typical variants presented by BIBREF23, which differ in the way that the task-specific context is injected (see Figure FIGREF11). 
Context Attention takes the form of a standard Transformer decoder, with the layer that attends to the encoder outputs being randomly initialized. Pseudo Self-Attention considers the context vectors and the previous layer decoder outputs as an integral input, and the attended results are computed as follows: where $C \in \mathbb {R}^{|C| \times d_{c}}$ and $Y \in \mathbb {R}^{|Y| \times d_{y}}$ are the context vectors and representations of the target textual sequence, respectively. The linear transformation matrices $W^{c}_{k}, W^{c}_{v} \in \mathbb {R}^{|C| \times d_{model}}$ with respect to $C$ are added to project the context to the self-attention space, and $W_{q}, W^{y}_{k}, W^{y}_{v} \in \mathbb {R}^{|Y| \times d_{model}}$ are part of the pre-trained model. Except for the performance on target tasks, an alternative metric to gauge the quality of encoder-agnostic architectures is the degree to which the pre-trained parameters have to change, in order to inject the task-specific context. BIBREF23 compared the parameter changes of Context Attention and Pseudo Self-Attention in the feed forward layer, and discovered that Pseudo Self-Attention is more robust under this evaluation. <<</Encoder-Agnostic Architectures for Adaptation>>> <<</Architecture-based Methods>>> <<<Strategy-based Methods>>> <<<Fine-tuning Schedules for Adaption>>> When the pre-trained model is only a part of the target task system, fine-tuning requires joint learning of the components initialized in different fashion, which can make the training process unstable. The pre-trained model may also suffer from aggravated catastrophic forgetting problem as it has to coordinate with other components during fine-tuning BIBREF24, BIBREF25. From the perspective of optimization, it is unreasonable to schedule the pre-trained components and the newly-introduced components with the same learning rate, considering that the former have already possessed some unique knowledge. A common assumption is that the pre-trained parameters should be updated at a slower learning rate and with smoother decay BIBREF12, BIBREF25. The rationale behind such setting is that fine-tuning with more accurate gradient can prevent the pre-trained parameters from deviating too faraway from the original point, and the newly-introduced components need to quickly converge to the target parameter space. To this end, BIBREF12 adopted two Adam optimizers with different learning rates for the pre-trained encoder and the randomly initialized decoder. The learning rates are scheduled as in BIBREF7 with different warming up steps: where ${warmup}_{\operatorname{Enc/Dec}}$ and $\tilde{l}r_{\operatorname{Enc/Dec}}$ determine the speed of learning rate changes and the max learning rates, respectively. <<</Fine-tuning Schedules for Adaption>>> <<<Proxy Tasks for Adaption>>> Large-scale unlabelled data provides generic linguistic knowledge, but the target tasks have unique data distribution and objectives. An effective way to bridge this gap is to introduce proxy tasks with moderate changes to the pre-training objectives, but at the same time take the labeled data into account BIBREF15, BIBREF20. Translation Language Modeling (TLM) BIBREF15 is a special generalization of MLM in the cross-lingual situation. It leverages the paralleled machine translation corpus for further training of the LMs that are pre-trained on monolingual corpora. 
Specifically, the source language sentence and the corresponding target language sentence are fed to the model in parallel, with random tokens from each language being masked to perform the cloze-style prediction as in MLM. Different from monolingual MLM, TLM encourages word predictions to rely on the interdependence from two languages, therefore the sentence representations learned from separate languages can be well aligned. For some particular NLG tasks, existing proxy tasks designed under the supervised setup can also work with unsupervised pre-training models. For instance, in neural text summarization, the combination of extractive and abstractive objectives can generate better summaries BIBREF26, BIBREF27. Inspired by this, BIBREF12 introduced extractive summarization as a proxy task to fine-tune the pre-trained BERT, before adopting it as the abstractive summarization encoder. Compared with the original BERT features, the representations learned from extractive summarization contain more task-specific information, therefore conveying the meaning of source texts better. <<</Proxy Tasks for Adaption>>> <<<Knowledge Distillation for Adaption>>> The aforementioned methods are diverse in implementation, but share the common idea of employing the pre-trained models through parameter initialization. An alternative way to exploit the pre-trained models is using the knowledge distillation technique BIBREF28. Knowledge distillation is a special form of training, where a student network learns from the supervision signals produced by a teacher network. Taking BERT as an example, the pre-trained MLM contains global information, which can teach the autoregressive Seq2Seq models to “see from the future” BIBREF20. In practice, the probability distribution predicted by BERT is regarded as a soft label to compute the cross-entropy loss function : where $X$, $Y$ and $Y^{masked}$ are the source sequence, the raw target sequence and the masked target sequence, respectively. $\mathcal {V}$ denotes the output vocabulary. $\theta $ indicates the parameters of the student network (Seq2Seq), which are learnable, and $\phi $ indicates the BERT parameters, which are fixed. In this way, the knowledge from unsupervised pre-training can be flexibly transferred to the target tasks, dispensing with the size and architecture limitations. The supervision can also be derived from the hidden representations BIBREF25, with a mean-squared-error (MSE) distillation loss: where $m$ and $n$ are hyper-parameters denoting the layer subscripts. Compared with the probability soft labels, the representation distillation method requires the Seq2Seq model to have the same hidden size with BERT, which is a more strict constrain. Combining the knowledge distillation loss and the standard generative loss for Seq2Seq learning gives rise to the final objective to optimize: where $\alpha $ is the weighting term that balances the contribution of the two kinds of loss functions. <<</Knowledge Distillation for Adaption>>> <<</Strategy-based Methods>>> <<<Discussions>>> <<<The Relationship between Architecture- and Strategy-based Methods>>> We have analysed two major challenges faced by the application of unsupervised pre-training to NLG (see Section SECREF1). On this basis, we introduced existing methodologies from the architecture and strategy considerations. The architecture-based methods are mainly proposed in response to the first challenge. 
Since the architecture of pre-trained model has a significant effect on the downstream task (when the pre-trained parameters are used for initialization), architecture designings have to plan in advance to narrow the discrepancy between pre-training and training on target tasks. This motivation has shown great effectiveness on the Seq2Seq framework BIBREF17, BIBREF18, BIBREF19. The strategy-based methods focus on the second challenge. They take a postprocessing point of view, with the aim to make the best of the pre-trained model at the target task training stage. It is noteworthy that the challenges are not independent inherently, and the two types of methods can actually work as complement to each other. For example, the fine-tuning schedules can alleviate the negative effects caused by the modification of pre-trained structures, and the catastrophic forgetting problem can also seek solution by devising a general task-agnostic architecture. <<</The Relationship between Architecture- and Strategy-based Methods>>> <<<Experimental Phenomenons>>> Existing researches on unsupervised pre-training for NLG are conducted on various tasks for different purposes. Probing into the assorted empirical results may help us discover some interesting phenomenons: The advantage of pre-training gradually diminishes with the increase of labeled data BIBREF14, BIBREF17, BIBREF18. Fixed representations yield better results than fine-tuning in some cases BIBREF24. Overall, pre-training the Seq2Seq encoder outperforms pre-training the decoder BIBREF24, BIBREF17, BIBREF15, BIBREF16. The first two phenomenons attest to the catastrophic forgetting theory. Thanks to the access to large-scale unlabeled corpora, unsupervised pre-training is able to excel at zero/low-shot settings, while the pre-trained models can only achieve few gains when abundant labeled data is available. This can be explained by the high quality of the dataset and the capacity of the task-specific models, which leave little space for improvement. Nonetheless, the increased supervision from labeled data can also influence the performance on pre-training tasks. By fixing the pre-trained parameters, the learned representations will not be affected by the numerous iterations of training on the target task, which makes them work better without fine-tuning. The third phenomenon is kind of counter-intuitive, as the generative pre-training objectives are more similar to the decoder's function. There is no unanimous theory to explain why the encoder is a more important element to pre-train. But this discovery suggests that the pre-trained LMs are more robust when acting as representation extractors, while they are more sensitive the the change of context when acting as conditional language generators. <<</Experimental Phenomenons>>> <<<Future Directions>>> The diversity of NLG applications poses challenges on the employment of unsupervised pre-training, yet it also raises more scientific questions for us to explore. In terms of the future development of this technology, we emphasize the importance of answering four questions: 1) How to introduce unsupervised pre-training into NLG tasks with cross-modal context? 2) How to design a generic pre-training algorithm to fit a wide range of NLG tasks? 3) How to reduce the computing resources required for large-scale pre-training? 4) What aspect of knowledge do the pre-trained models provide for better language generation? NLG tasks can be defined by the context features and mapping functions. 
The introduction of cross-lingual textual features BIBREF15 and task-specific Seq2Seq architectures BIBREF18, BIBREF17, BIBREF19 in the pre-training stage has successfully boosted the performance on text-to-text generation. For NLG tasks concerning multiple modalities, it is conceivable that pre-training methods could also benefit from the joint consideration of cross-modal features. For example, in the vision-and-language field, the learning of cross-modal representations has proven to be highly effective BIBREF29, BIBREF30, but such representations can not yet be extracted from unpaired images and texts for image-grounded text generation, to the best of our knowledge. In NLU, it is possible to pre-train one model to obtain language representations once and for all. As for NLG, a task-agnostic pre-training algorithm should transcend the purpose of representation learning, and consider the general ability for language generation. The notion of “encoder-agnostic adaption” BIBREF23 makes a preliminary step towards this direction, but still remains far from approaching the equivalent performance as its NLU counterparts BIBREF5, BIBREF3, BIBREF6, BIBREF9. Due to the colossal scale of the pre-training corpora, including a large number of parameters is essential to achieve favorable performance. As a result, the model size usually costs at least 8 GPU cards BIBREF19, BIBREF18, BIBREF15 in the pre-training for NLG systems, and it also hinders real-world applications. To reduce the memory consumption problem, existing work resorted to knowledge distillation to transfer the knowledge from a large teacher network to a small student network BIBREF31, BIBREF32, or parameter reduction techniques to prune the model size in a more direct way BIBREF33. However, the research context is limited to the NLU scenarios, and same endeavours are necessary to NLG applications. Another important branch of researches on unsupervised pre-training in NLP try to explain what kind of knowledge can be learned from pre-training. Related work has been done on the basis of both language understanding BIBREF34, BIBREF35 and generation BIBREF36. Specially, BIBREF36 analysed the characters of texts generated from a pre-trained GPT-2 by evaluating them over a wide spectrum of metrics. We argue that deeper understanding the way in which unsupervised pre-training contributes to better text generation, and the intrinsic mechanisms of the pre-trained models are also crucial to future work. <<</Future Directions>>> <<</Discussions>>> <<<Conclusion>>> Unsupervised pre-training has defined the state-of-the-arts on a variety NLP tasks. However, in the field of NLG, the diversity of context information is still impeding the the application of unsupervised pre-training. The major challenges exist in designing model architectures to cater for the assorted context, and retaining the general knowledge learned from pre-training. In this review, we survey the recent unsupervised methods to utilize large-scale corpora for NLG purposes, with a highlight on those aiming at facilitating the integration of pre-trained models with downstream tasks. We propose to classify them into architecture- and strategy-based methods, followed with detailed introductions and discussions of their pros and cons. 
Based on the comparison of these methods and analyses of some informative experimental results from previous publications, we summarize some scientific questions that have not yet been well understood, and suggest that future work pay closer attention to these questions. <<</Conclusion>>> <<</Title>>>
{ "references": [ "task-specific architecture during pre-training (task-specific methods),aim at building a general pre-training architecture to fit all downstream tasks (task-agnostic methods)" ], "type": "extractive" }
2002.06053
Please answer the following question with yes or no based on the given text. You only need to output 'Yes' or 'No' without any additional explanation. Question: Are datasets publicly available? Context: <<<Title>>> Exploring Chemical Space using Natural Language Processing Methodologies for Drug Discovery <<<Abstract>>> Text-based representations of chemicals and proteins can be thought of as unstructured languages codified by humans to describe domain-specific knowledge. Advances in natural language processing (NLP) methodologies in the processing of spoken languages accelerated the application of NLP to elucidate hidden knowledge in textual representations of these biochemical entities and then use it to construct models to predict molecular properties or to design novel molecules. This review outlines the impact made by these advances on drug discovery and aims to further the dialogue between medicinal chemists and computer scientists. <<</Abstract>>> <<<Introduction>>> The design and discovery of novel drugs for protein targets is powered by an understanding of the underlying principles of protein-compound interaction. Biochemical methods that measure affinity and biophysical methods that describe the interaction in atomistic level detail have provided valuable information toward a mechanistic explanation for bimolecular recognition BIBREF0. However, more often than not, compounds with drug potential are discovered serendipitously or by phenotypic drug discovery BIBREF1 since this highly specific interaction is still difficult to predict BIBREF2. Protein structure based computational strategies such as docking BIBREF3, ultra-large library docking for discovering new chemotypes BIBREF4, and molecular dynamics simulations BIBREF3 or ligand based strategies such as quantitative structure-activity relationship (QSAR) BIBREF5, BIBREF6, and molecular similarity BIBREF7 have been powerful at narrowing down the list of compounds to be tested experimentally. With the increase in available data, machine learning and deep learning architectures are also starting to play a significant role in cheminformatics and drug discovery BIBREF8. These approaches often require extensive computational resources or they are limited by the availability of 3D information. On the other hand, text based representations of biochemical entities are more readily available as evidenced by the 19,588 biomolecular complexes (3D structures) in PDB-Bind BIBREF9 (accessed on Nov 13, 2019) compared with 561,356 (manually annotated and reviewed) protein sequences in Uniprot BIBREF10 (accessed on Nov 13, 2019) or 97 million compounds in Pubchem BIBREF11 (accessed on Nov 13, 2019). The advances in natural language processing (NLP) methodologies make processing of text based representations of biomolecules an area of intense research interest. The discipline of natural language processing (NLP) comprises a variety of methods that explore a large amount of textual data in order to bring unstructured, latent (or hidden) knowledge to the fore BIBREF12. Advances in this field are beneficial for tasks that use language (textual data) to build insight. 
The languages in the domains of bioinformatics and cheminformatics can be investigated under three categories: (i) natural language (mostly English) that is used in documents such as scientific publications, patents, and web pages, (ii) domain specific language, codified by a systematic set of rules extracted from empirical data and describing the human understanding of that domain (e.g. proteins, chemicals, etc), and (iii) structured forms such as tables, ontologies, knowledge graphs or databases BIBREF13. Processing and extracting information from textual data written in natural languages is one of the major application areas of NLP methodologies in the biomedical domain (also known as BioNLP). Information extracted with BioNLP methods is most often shared in structured databases or knowledge graphs BIBREF14. We refer the reader to the comprehensive review on BioNLP by BIBREF15. Here, we will be focusing on the application of NLP to domain specific, unstructured biochemical textual representations toward exploration of chemical space in drug discovery efforts. We can view the textual representation of biomedical/biochemical entities as a domain-specific language. For instance, a genome sequence is an extensive script of four characters (A, T, G, C) constituting a genomic language. In proteins, the composition of 20 different natural amino acids in varying lengths builds the protein sequences. Post-translational modifications expand this 20 letter alphabet and confer different properties to proteins BIBREF16. For chemicals there are several text based alternatives such as chemical formula, IUPAC International Chemical Identifier (InChI) BIBREF17 and Simplified Molecular Input Line Entry Specification (SMILES) BIBREF18. Today, the era of “big data" boosts the “learning" aspect of computational approaches substantially, with the ever-growing amounts of information provided by publicly available databases such as PubChem BIBREF11, ChEMBL BIBREF19, UniProt BIBREF10. These databases are rich in biochemical domain knowledge that is in textual form, thus building an efficient environment in which NLP-based techniques can thrive. Furthermore, advances in computational power allow the design of more complex methodologies, which in turn drive the fields of machine learning (ML) and NLP. However, biological and chemical interpretability and explainability remain among the major challenges of AI-based approaches. Data management in terms of access, interoperability and reusability are also critical for the development of NLP models that can be shared across disciplines. With this review, we aim to provide an outline of how the field of NLP has influenced the studies in bioinformatics and cheminformatics and the impact it has had over the last decade. Not only are NLP methodologies facilitating processing and exploitation of biochemical text, they also promise an “understanding" of biochemical language to elucidate the underlying principles of bimolecular recognition. NLP technologies are enhancing the biological and chemical knowledge with the final goal of accelerating drug discovery for improving human health. We highlight the significance of an interdisciplinary approach that integrates computer science and natural sciences. 
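As a concrete illustration of the text-based chemical representations mentioned above (chemical formula, InChI, SMILES), the snippet below derives all of them from a single molecule with RDKit, a widely used open-source cheminformatics toolkit. It is a minimal sketch and assumes RDKit is installed and built with InChI support; the aspirin SMILES is used only as an example input.

from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

# Aspirin, given as a SMILES string.
mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")

canonical_smiles = Chem.MolToSmiles(mol)        # RDKit's canonical SMILES (other toolkits may differ)
formula = rdMolDescriptors.CalcMolFormula(mol)  # chemical formula, e.g. C9H8O4
inchi = Chem.MolToInchi(mol)                    # requires an RDKit build with InChI support
inchi_key = Chem.MolToInchiKey(mol)             # hashed, fixed-length identifier

print(canonical_smiles, formula, inchi, inchi_key, sep="\n")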
<<<NLP Basics>>> BIBREF20 describes NLP on three levels: (i) the word level in which the smallest meaningful unit is extracted to define the morphological structure, (ii) the sentence level where grammar and syntactic validity are determined, and (iii) the domain or context level in which the sentences have global meaning. Similarly, our review is organized in three parts in which bio-chemical data is investigated at: (i) word level, (ii) sentence (text) level, and (iii) understanding text and generating meaningful sequences. Table TABREF37 summarizes important NLP concepts related to the processing of biochemical data. We refer to these concepts and explain their applications in the following sections. All NLP technology relates to specific AI architectures. In Table TABREF38 we summarize the main ML and deep learning (DL) architectures that will be mentioned throughout the review. <<</NLP Basics>>> <<</Introduction>>> <<<Biochemical Language Processing>>> The language-like properties of text-based representations of chemicals were recognized more than 50 years ago by Garfield BIBREF21. He proposed a “chemico-linguistic" approach to representing chemical nomenclature with the aim of instructing the computer to draw chemical diagrams. Protein sequence has been an important source of information about protein structure and function since Anfinsen's experiment BIBREF22. Alignment algorithms, such as Needleman-Wunsch BIBREF23 and Smith-Waterman BIBREF24, rely on sequence information to identify functionally or structurally critical elements of proteins (or genes). To make predictions about the structure and function of compounds or proteins, the understanding of these sequences is critical for bioinformatics tasks with the final goal of accelerating drug discovery. Much like a linguist who uses the tools of language to bring out hidden knowledge, biochemical sequences can be processed to propose novel solutions, such as predicting interactions between chemicals and proteins or generating new compounds based on the level of understanding. In this section, we will review the applications of some of the NLP concepts to biochemical data in order to solve bio/cheminformatics problems. <<<Textual Chemical Data>>> Information about chemicals can be found in repositories such as PubChem BIBREF11, which includes information on around 100 million compounds, or Drugbank BIBREF25, which includes information on around 10,000 drugs. The main textual sources used in drug discovery are textual representations of chemicals and proteins. Table TABREF39 lists some sources that store different types of biochemical information. Chemical structures can be represented in different forms: one-dimensional (1D), 2D, and 3D. Table TABREF40 depicts different identifiers/representations of the drug ampicillin. While the 2D and 3D representations are also used in ML based approaches BIBREF8, here we focus on the 1D form, which is the representation commonly used in NLP. <<<IUPAC name>>> The International Union of Pure and Applied Chemistry (IUPAC) scheme (i.e. nomenclature) is used to name compounds following pre-defined rules such that the names of the compounds are unique and consistent with each other (iupac.org/). <<</IUPAC name>>> <<<Chemical Formula>>> The chemical formula is one of the simplest and most widely-known ways of describing chemicals using letters (i.e. element symbols), numbers, parentheses, and (-/+) signs.
This representation gives information about which elements and how many of them are present in the compound. <<</Chemical Formula>>> <<<SMILES>>> The Simplified Molecular Input Entry Specification (SMILES) is a text-based form of describing molecular structures and reactions BIBREF18. SMILES strings can be obtained by traversing the 2D graph representation of the compound and therefore SMILES provides more complex information than the chemical formula. Moreover, due to its textual form, SMILES takes 50% to 70% less space than other representation methods such as an identical connection table (daylight.com/dayhtml/doc/theory/theory.smiles.html). SMILES notation is similar to a language with its own set of rules. Just like it is possible to express the same concept with different words in natural languages, the SMILES notation allows molecules to be represented with more than one unique SMILES. Although this may sound like a significant ambiguity, the possibility of using different SMILES to represent the same molecule was successfully adopted as a data augmentation strategy by various groups (BIBREF26, BIBREF27, BIBREF28). Canonical SMILES can provide a unique SMILES representation. However, different databases such as PubChem and ChEMBL might use different canonicalization algorithms to generate different unique SMILES. OpenSMILES (opensmiles.org/opensmiles.html) is a new platform that aims to universalize the SMILES notation. In isomeric SMILES, isotopism and stereochemistry information of a molecule is encoded using a variety of symbols (“/", “\", “@", “@@"). <<</SMILES>>> <<<DeepSMILES>>> DeepSMILES is a novel SMILES-like notation that was proposed to address two challenges of the SMILES syntax: (i) unbalanced parentheses and (ii) ring closure pairs BIBREF29. It was initially designed to enhance machine/deep-learning based approaches that utilize SMILES data as input (github.com/nextmovesoftware/deepsmiles). DeepSMILES was adopted in a drug-target binding affinity prediction task in which the findings highlighted the efficacy of DeepSMILES over SMILES in terms of identifying undetectable patterns BIBREF30. DeepSMILES was also utilized in a molecule generation task in which it was compared to canonical and randomized SMILES text BIBREF31. Here, the results suggested that DeepSMILES might limit the learning ability of the SMILES-based molecule generation models because its syntax is more grammar sensitive with the ring closure alteration and the use of a single symbol for branching (i.e. “)") introducing longer sequences. <<</DeepSMILES>>> <<<SELFIES>>> SELF-referencIng Embedding Strings (SELFIES) is an alternative sequence-based representation that is built upon “semantically constrained graphs" BIBREF32. Each symbol in a SELFIES sequence indicates a recursive Chomsky-2 type grammar, and can thus be used to convert the sequence representation to a unique graph. SELFIES utilize SMILES syntax to extract words that will correspond to semantically valid graphs (github.com/aspuru-guzik-group/selfies). BIBREF32 compared SELFIES, DeepSMILES and SMILES representations in terms of validity in cases where random character mutations are introduced. The evaluations on the QM9 dataset yielded results in the favor of SELFIES. <<</SELFIES>>> <<<InChI>>> InChI is the IUPAC International Chemical Identifier, which is a non-proprietary and open-source structural representation (inchi-trust.org) BIBREF33. 
The InChIKey is a character-based representation that is generated by hashing the InChI strings in order to shorten them. InChi representation has several layers (each) separated by the “/" symbol. The software that generates InChi is publicly available and InChi does not suffer from ambiguity problems. However, its less complex structure makes the SMILES representation easier to use as shown in a molecular generation study BIBREF34 and in building meaningful chemical representations with a translation-based system BIBREF35. Interestingly, the translation model was able to translate from InChi to canonical SMILES, whereas it failed to translate from canonical SMILES to InChi. BIBREF35 suggested that the complex syntax of InChi made it difficult for the model to generate a correct sequence. <<</InChI>>> <<<SMARTS>>> SMiles ARbitrary Target Specification (SMARTS) is a language that contains specialized symbols and logic operators that enable substructure (pattern) search on SMILES strings BIBREF36. SMARTS can be used in any task that requires pattern matching on a SMILES string such as, querying databases or creating rule dictionaries such as RECAP BIBREF37 and BRICS BIBREF38 to extract fragments from SMILES (daylight.com/dayhtml/doc/theory/theory.smarts.html). <<</SMARTS>>> <<<SMIRKS>>> SMIRKS notation can be used to describe generic reactions (also known as transforms) that comprise one or more changes in atoms and bonds (https://daylight.com/daycgi_tutorials/smirks_examples.html). These transforms are based on “reactant to product" notation, and thus make use of SMILES and SMARTS languages. SMIRKS is utilized in tasks such as constructing an online transform database BIBREF39 and predicting metabolic transformations BIBREF40. A recent study achieves a similar performance to rule-based systems in classifying chemical reactions by learning directly from SMILES text with transforms via neural networks BIBREF41. <<</SMIRKS>>> <<</Textual Chemical Data>>> <<<Identification of Words/Tokens>>> Similar to words in natural languages, we can assume that the “words" of biochemical sequences convey significant information (e.g. folding, function etc) about the entities. In this regard, each compound/protein is analogous to a sentence, and each compound/protein unit is analogous to a word. Therefore, if we can decipher the grammar of biochemical languages, it would be easier to model bio/cheminformatics problems. However, protein and chemical words are not explicitly known and different approaches are needed to extract syntactically and semantically meaningful biochemical word units from these textual information sources (i.e. sequences). Here, we review some of the most common tokenization approaches used to determine the words of biochemical languages. <<<@!START@$k$@!END@-mers (@!START@$n$@!END@-grams)>>> One of the simplest approaches in NLP to extract a small language unit is to use $k$-mers, also known as $n$-grams. $k$-mers indicate $k$ consecutive overlapping characters that are extracted from the sequence with a sliding window approach. “LINGO", which is one of the earliest applications of $k$-mers in cheminformatics, is the name of the overlapping 4-mers that are extracted from SMILES strings BIBREF42. 4-mers of the SMILES of ampicillin, “CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C", can be listed as { `CC1(', `C1(C', `1(C(', ..., `O)O)', `)O)C' }. From a sequence of length $l$, a total of $(l-n)+1$ $k$-mers can be extracted. 
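Extracting these overlapping windows takes only a few lines of code; the sketch below reproduces the LINGO example for the ampicillin SMILES given above. It is an illustration of the sliding-window idea, not the original LINGO implementation.

```python
def kmers(sequence: str, k: int = 4) -> list:
    """Extract overlapping k-mers (n-grams) with a sliding window."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

smiles = "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C"
lingos = kmers(smiles, k=4)            # 4-mers, i.e. LINGOs
print(len(smiles), len(lingos))        # 50 and (50 - 4) + 1 = 47
print(lingos[:3], lingos[-2:])         # ['CC1(', 'C1(C', '1(C('] ['O)O)', ')O)C']
```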
Extracting LINGOs from SMILES is a simple yet powerful idea that has been successfully used to compute molecular similarities, to differentiate between bioisosteric and random molecular pairs BIBREF42 and in a drug-target interaction prediction task BIBREF43, without requiring 2D or 3D information. The results suggested that a SMILES-based approach to compute the similarity of chemicals is not only as good as a 2D-based similarity measurement, but also faster BIBREF43. $k$-mers were successfully utilized as protein BIBREF44 and chemical words BIBREF45 in protein family classification tasks. 3-mers to 5-mers were often considered as the words of the protein sequence. BIBREF46 reported that some 5-mers could be matched to motifs and protein words are most likely a mixture of different $k$-mers. For the protein function prediction task, BIBREF47 decided to choose among the 1000 most frequent words to build the protein vocabulary, whereas BIBREF48 utilized each $k$-mer type separately and showed that 4-mers provided the best performance. In the latter work, instead of using the whole protein sequence, the words were extracted from different length protein segments, which are also long $k$-mers (i.e. 100-mer, 120-mer) with 30 amino-acid gaps. The use of segmented protein sequences yielded better results than using the whole protein sequence, and important and conserved subsequences were highlighted. $k$-mers were also used as features, along with position specific score matrix features, in the protein fold prediction problem BIBREF49. <<</@!START@$k$@!END@-mers (@!START@$n$@!END@-grams)>>> <<<Longest Common Subsequences>>> The identification of the longest common subsequence (LCS) of two sequences is critical for detecting their similarity. When there are multiple sequences, LCSs can point to informative patterns. LCSs extracted from SMILES sequences performed similarly well to 4-mers in chemical similarity calculation BIBREF43. <<</Longest Common Subsequences>>> <<<Maximum Common Substructure>>> BIBREF50 investigated organic chemistry as a language in an interesting study that extracts maximum common substructures (MCS) from the 2D structures of pairs of compounds to build a vocabulary of the molecule corpus. Contrary to the common idea of functional groups (e.g. methyl, ethyl etc.) being “words" of the chemical language, the authors argued that MCSs (i.e. fragments) can be described as the words of the chemical language BIBREF50. A recent work investigated the distribution of these words in different molecule subsets BIBREF51. The “words" followed Zipf's Law, which indicates the relationship between the frequency of a word and its rank (based on the frequency) BIBREF52, similar to most natural languages. Their results also showed that drug “words" are shorter compared to natural product “words". <<</Maximum Common Substructure>>> <<<Minimum Description Length>>> Minimum Description Length (MDL) is an unsupervised compression-based word segmentation technique in which words of an unknown language are detected by compressing the text corpus. In a protein classification task, each protein was assigned to the family in which its sequence is compressed the most, according to the MDL-based representation BIBREF53. BIBREF53 investigated whether the MDL-based words of the proteins show similarities to PROSITE patterns BIBREF54 and showed that less conserved residues were compressed less by the algorithm. 
BIBREF53 also emphasized that the integration of domain knowledge, such as the consideration of the hydrophilic and hydrophobic aminoacids in the words (i.e. grammar building), might prove effective. <<</Minimum Description Length>>> <<<Byte-Pair Encoding>>> Byte-Pair Encoding (BPE) generates words based on high frequency subsequences starting from frequent characters BIBREF55. A recent study adopted a linguistic-inspired approach to predict protein-protein interactions (PPIs) BIBREF56. Their model was built upon “words" (i.e. bio-words) of the protein language, in which BPE was utilized to build the bio-word vocabulary. BIBREF56 suggested that BPE-segmented words indicate a language-like behavior for the protein sequences and reported improved accuracy results compared to using 3-mers as words. <<</Byte-Pair Encoding>>> <<<Pattern-based words>>> Subsequences that are conserved throughout evolution are usually associated with protein structure and function. These conserved sequences can be detected as patterns via multiple sequence alignment (MSA) techniques and Hidden Markov Models (HMM). PROSITE BIBREF54, a public database that provides information on domains and motifs of proteins, uses regular expressions (i.e. RE or regex) to match these subsequences. Protein domains have been investigated for their potential of being the words of the protein language. One earlier study suggested that folded domains could be considered as “phrases/clauses" rather than “words" because of the higher semantic complexity between them BIBREF57. Later, domains were described as the words, and domain architectures as sentences of the language BIBREF58, BIBREF59. Protein domains were treated as the words of multi-domain proteins in order to evaluate the semantic meaning behind the domains BIBREF60. The study supported prior work by BIBREF59 suggesting that domains displayed syntactic and semantic features, but there are only a few multi-domain proteins with more than six domains limiting the use of domains as words to build sentences. Protein domains and motifs have also been utilized as words in different drug discovery tasks such as the prediction of drug-target interaction affinity BIBREF61, BIBREF62. These studies showed that motifs and domains together contribute to the prediction as much as the use of the full protein sequence. SMARTS is a well-known regex-based querying language that is used to identify patterns in a SMILES string. SMARTS has been utilized to build specific rules for small-molecule protonation BIBREF63, to design novel ligands based on the fragments connected to the active site of a target BIBREF64, and to help generate products in reaction prediction BIBREF65. MolBlocks, a molecular fragmentation tool, also adopted SMARTS dictionaries to partition a SMILES string into overlapping fragments BIBREF36. Furthermore, MACCS BIBREF66 and PubChem BIBREF11 Fingerprints (FP) are molecular descriptors that are described as binary vectors based on the absence/presence of substructures that are predefined with SMARTS language. A recent study on protein family clustering uses a ligand-centric representation to describe proteins in which ligands were represented with SMILES-based (i.e. 8-mers) representation, MACCS and Extended Connectivity Fingerprint (ECFP6) BIBREF45. The results indicate that three of the ligand representation approaches provide similar performances for protein family clustering. 
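As a sketch of how such SMARTS-defined presence/absence fingerprints are assembled, the snippet below checks a handful of patterns against the ampicillin SMILES. It assumes RDKit is installed; the three patterns (amide, benzene ring, primary amine) are chosen for illustration and are not taken from the MACCS or PubChem key definitions.

```python
from rdkit import Chem  # assumes RDKit is installed

# Toy SMARTS dictionary; real key sets such as MACCS define 166 curated patterns.
patterns = {
    "amide": Chem.MolFromSmarts("C(=O)N"),
    "benzene": Chem.MolFromSmarts("c1ccccc1"),
    "primary_amine": Chem.MolFromSmarts("[NX3;H2]"),
}

def substructure_fingerprint(smiles: str) -> list:
    """Binary vector: 1 if the molecule contains the SMARTS pattern, else 0."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                      # invalid SMILES
        return [0] * len(patterns)
    return [int(mol.HasSubstructMatch(p)) for p in patterns.values()]

ampicillin = "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C"
print(substructure_fingerprint(ampicillin))   # e.g. [1, 1, 1]
```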
To the best of our knowledge, there is no comprehensive evaluation of the different word extraction techniques except a comparison by BIBREF56 of the performance of BPE-based words against $k$-mers in a PPI prediction task. Such comparison would provide important insights to the bio/cheminformatics community. <<</Pattern-based words>>> <<</Identification of Words/Tokens>>> <<<Text representation>>> The representation of a text (e.g. molecule or protein sequence) aims to capture syntactic, semantic or relational meaning. In the widely used Vector Space Model (VSM), a text is represented by a feature vector of either weighted or un-weighted terms BIBREF67. The terms of this vector may correspond to words, phrases, k-grams, characters, or dimensions in a semantic space such as in the distributed word embedding representation models. The similarity between two texts represented in the vector space model is usually computed using the cosine similarity metric BIBREF68, which corresponds to the cosine of the angle between the two vectors. Similarly to the one-hot encoding scheme BIBREF69, in the traditional bag-of-words BIBREF70 and term frequency-inverse document frequency (TF-IDF) BIBREF71 text representation models, each word corresponds to a different dimension in the vector space. Therefore, the similarity between two words in the vector space is zero, even if they are synonymous or related to each other. In the distributed representation models BIBREF72 on the other hand, words are represented as dense vectors based on their context. Words that occur in similar contexts have similar vector representations. In this subsection, we review these commonly used text representation models with their applications in cheminformatics. <<<Bag-of-words representation>>> In this representation model, a text is represented as a vector of bag-of-words, where the multiplicity of the words is taken into account, but the order of the words in the text is lost BIBREF70. For instance, the SMILES of ampicillin “CC1(C(N2C(S1)C(C2=O)NC(=O)C( C3=CC=CC=C3)N)C(=O)O)C" can be represented as a bag-of 8-mers as follows: {“CC1(C(N2", “C1(C(N2C", “1(C(N2C(", “(C(N2C(S",...,“N)C(=O)O" ,“)C(=O)O)" ,“C(=O)O)C" }. We can vectorize it as $S = [1, 1, 1, 1, ...,1, 1, 1]$ in which each number refers to the frequency of the corresponding 8-mer. Bag-of-words representation was used in molecular similarity computation, in which the SMILES string and the LINGOs extracted from it were treated as the sentence and words, respectively BIBREF42. The unique LINGOs were considered for each pair and a Tanimoto coefficient was used to measure the similarity BIBREF42. Another approach called SMILES Fingerprint (SMIfp) also adopted bag-of-words to create representations of molecules for a ligand-based virtual screening task BIBREF73. SMIfp considered 34 unique symbols in SMILES strings to create a frequency-based vector representation, which was utilized to compute molecular similarity. SMIfp provided comparable results to a chemical representation technique that also incorporated polar group and topological information, as well as atom and bond information, in recovering active compounds amongst decoys BIBREF73. <<</Bag-of-words representation>>> <<<TF-IDF>>> The bag-of-words model, which is based on counting the terms of the sentence/document, might prioritize insignificant but frequent words. 
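A minimal sketch of this count-based representation is given below, using overlapping 8-mers as terms and cosine similarity for the comparison; the second SMILES (aspirin) is an arbitrary choice for illustration.

```python
import math
from collections import Counter

def bag_of_kmers(smiles: str, k: int = 8) -> Counter:
    """Count overlapping k-mers of a SMILES string (word order is discarded)."""
    return Counter(smiles[i:i + k] for i in range(len(smiles) - k + 1))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

ampicillin = "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C"
aspirin = "CC(=O)OC1=CC=CC=C1C(=O)O"   # toy comparison molecule
print(round(cosine(bag_of_kmers(ampicillin), bag_of_kmers(aspirin)), 3))
```

Because the vectors store raw counts, frequent but uninformative 8-mers contribute as much as rare, discriminative ones.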
To overcome this issue, a weighting scheme can be integrated into the vector representation in order to give more importance to the rare terms that might play a key role in detecting similarity between two documents. One popular weighting approach is to use term frequency-inverse document frequency (TF-IDF) BIBREF71. TF refers to the frequency of a term in the document, and IDF denotes the logarithm of the total number of documents over the number of documents in which the term appears. IDF is therefore an indicator of uniqueness. For instance, the IDF of “C3=CC=CC" is lower than that of “(C(N2C(S", which appears in fewer compounds. Therefore, the existence of “(C(N2C(S" in a compound may be more informative. TF-IDF weighting was utilized to assign weights to LINGOs that were extracted from SMILES in order to compute molecule similarity using cosine similarity BIBREF43. Molecular similarities were then used as input for drug-target interaction prediction. A similar performance between TF-IDF weighted LINGO and a graph-based chemical similarity measurement was obtained. BIBREF50 used TF-IDF weighting on chemical bonds to show that bonds with higher TF-IDF scores have a higher probability of breaking. <<</TF-IDF>>> <<<One-hot representation>>> In one-hot representation, for a given vocabulary of a text, each unique word/character is represented with a binary vector that has a 1 in the corresponding position, while the vector positions for the remaining words/characters are filled with 0s BIBREF69. One-hot encoding is fast to build, but might lead to sparse vectors with large dimensions based on the size of the vocabulary (e.g. one million unique words in the vocabulary means one-million-dimensional binary vectors filled with zeros except one). It is a popular choice, especially in machine learning-based bio/cheminformatic studies, to encode different types of information such as SMILES characters BIBREF74, BIBREF75, atom/bond types BIBREF76, BIBREF77 and molecular properties BIBREF78. <<</One-hot representation>>> <<<Distributed representations>>> The one-hot encoding builds discrete representations, and thus does not consider the relationships between words. For instance, the cosine similarity of two different words is 0 even if they are semantically similar. However, if the word (i.e. 8-mer) “(C(N2C(S" frequently appears together with the word “C(C2=O)N" in SMILES strings, this might suggest that they have related “meanings". Furthermore, two words might have similar semantic meanings even though they are syntactically far apart. This is where distributed vector representations come into play. Distributed word embedding models gained popularity with the introduction of Word2Vec BIBREF72 and GloVe BIBREF79. The main motivation behind the Word2Vec model is to build real-valued high-dimensional vectors for each word in the vocabulary based on the context in which they appear. There are two main approaches in Word2Vec: (i) Skip-Gram and (ii) Continuous Bag of Words (CBOW). The aim of the Skip-Gram model is to predict context words given the center word, whereas in CBOW the objective is to predict the target word given the context words. Figure FIGREF32 depicts the Skip-Gram architecture in Word2Vec BIBREF72. For a vocabulary of size $V$, given the target word “2C(S", the model learns to predict two context words. Both target word and context words are represented as one-hot encoded binary vectors of size $V$.
The number of neurons in the hidden layer determines the size of the embedding vectors. The weight matrix between the input layer and the hidden layer stores the embeddings of the vocabulary words. The $i^{th}$ row of the embedding matrix corresponds to the embedding of the $i^{th}$ word. The Word2Vec architecture has inspired a great deal of research in the bio/cheminformatics domains. The Word2Vec algorithm has been successfully applied for determining protein classes BIBREF44 and protein-protein interactions (PPI) BIBREF56. BIBREF44 treated 3-mers as the words of the protein sequence and observed that 3-mers with similar biophysical and biochemical properties clustered together when their embeddings were mapped onto the 2D space. BIBREF56, on the other hand, utilized BPE-based word segmentation (i.e. bio-words) to determine the words. The authors argued that the improved performance for bio-words in the PPI prediction task might be due to the segmentation-based model providing more distinct words than $k$-mers, which include repetitive segments. Another recent study treated multi-domain proteins as sentences in which each domain was recognized as a word BIBREF60. The Word2Vec algorithm was trained on the domains (i.e. PFAM domain identifiers) of eukaryotic protein sequences to learn semantically interpretable representations of them. The domain representations were then investigated in terms of the Gene Ontology (GO) annotations that they inherit. The results indicated that semantically similar domains share similar GO terms. The Word2Vec algorithm was also utilized for representation of chemicals. SMILESVec, a text-based ligand representation technique, utilized Word2Vec to learn embeddings for 8-mers (i.e. chemical words) that are extracted from SMILES strings BIBREF45. SMILESVec was utilized in protein representation such that proteins were represented as the average of the SMILESVec vectors of their interacting ligands. The results indicated comparable performances for ligand-based and sequence based protein representations in protein family/superfamily clustering. Mol2Vec BIBREF80, on the other hand, was based on the identifiers of the substructures (i.e. words of the chemical) that were extracted via Extended Connectivity Fingerprint (ECFP) BIBREF81. The results showed a better performance with Mol2Vec than with the simple Morgan Fingerprint in a solubility prediction task, and a comparable performance to graph-based chemical representation BIBREF82. BIBREF83 also employed the Word2vec model that was trained on the fragments that are extracted from SMILES strings using a graph traversing algorithm. The results favored the distributed fragment-based ligand representation over fragment-based binary vector representation in a ring system clustering task and showed a comparable performance in the prediction of toxicity against Tetrahymena BIBREF83. Figure FIGREF33 illustrates the pipeline of a text-based molecule representation based on $k$-mers. FP2Vec is another method that utilizes embedding representation for molecules, however instead of the Word2Vec algorithm, it depends on a Convolutional Neural Network (CNN) to build molecule representations to be used in toxicity prediction tasks BIBREF84. 
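A condensed sketch of this chemical-word pipeline, in the spirit of SMILESVec though not its original code, is shown below. It assumes gensim (version 4 or later) and a corpus of SMILES strings, and averages the learned word vectors to obtain a molecule-level vector.

```python
from gensim.models import Word2Vec   # assumes gensim >= 4
import numpy as np

def chemical_words(smiles: str, k: int = 8) -> list:
    return [smiles[i:i + k] for i in range(len(smiles) - k + 1)]

# In practice the corpus would contain millions of SMILES (e.g. from ChEMBL/PubChem).
corpus_smiles = ["CC(=O)OC1=CC=CC=C1C(=O)O",
                 "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C"]
sentences = [chemical_words(s) for s in corpus_smiles]

model = Word2Vec(sentences=sentences, vector_size=100, window=5,
                 min_count=1, sg=1)          # sg=1 selects Skip-Gram

def molecule_vector(smiles: str) -> np.ndarray:
    """Average the embeddings of a molecule's chemical words (SMILESVec-style)."""
    words = [w for w in chemical_words(smiles) if w in model.wv]
    return np.mean([model.wv[w] for w in words], axis=0)

print(molecule_vector(corpus_smiles[0]).shape)   # (100,)
```

The resulting molecule vectors can then feed downstream predictors; the convolutional models discussed next learn such representations directly from the raw string instead.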
CNN architectures have also been utilized for drug-target binding affinity prediction BIBREF85 and drug-drug interaction prediction BIBREF75 to build representations for chemicals from raw SMILES strings, as well as for protein fold prediction BIBREF86 to learn representations for proteins from amino-acid sequences. SMILES2Vec adopted different DL architectures (GRU, LSTM, CNN+GRU, and CNN+LSTM) to learn molecule embeddings, which were then used to predict toxicity, affinity and solubility BIBREF87. A CNN+GRU combination was better at the prediction of chemical properties. A recent study compared several DL approaches to investigate the effect of different chemical representations, which were learned through these architectures, on a chemical property prediction problem BIBREF88. The authors also combined DL architectures that were trained on SMILES strings with the MACCS fingerprint, proposing a combined representation for molecules (i.e. CheMixNet). The CheMixNet representation outperformed the other representations that were trained on a single data type such as SMILES2Vec (i.e. SMILES) and Chemception (i.e. 2D graph) BIBREF89. <<</Distributed representations>>> <<</Text representation>>> <<<Text generation>>> Text generation is a primary NLP task, where the aim is to generate grammatically and semantically correct text, with many applications ranging from question answering to machine translation BIBREF90. It is generally formulated as a language modeling task, where a statistical model is trained using a large corpus to predict the distribution of the next word in a given context. In machine translation, the generated text is the translation of an input text in another language. Medicinal chemistry campaigns use methods such as scaffold hopping BIBREF91 or fragment-based drug design BIBREF3 to build and test novel molecules but the chemotype diversity and novelty may be limited. It is possible to explore uncharted chemical space with text generation models, which learn a distribution from the available data (i.e. SMILES language) and generate novel molecules that share similar physicochemical properties with the existing molecules BIBREF74. Molecule generation can then be followed by assessing physicochemical properties of the generated compound or its binding potential to a target protein BIBREF74. For a comprehensive review of molecule generation methodologies, including graph-based models, we refer the reader to the review of BIBREF92. Machine translation models have also been recently adapted to text-based molecule generation, which start with one “language" such as that of reactants and generate a novel text in another “language" such as that of products BIBREF28. Below, we present recent studies on text based molecule generation. RNN models, which learn a probability distribution from a training set of molecules, are commonly used in molecule generation to propose novel molecules similar to the ones in the training data set. For instance, given the SMILES sequence “C(=O", the model would predict the next character to be “)" with a higher probability than “(". The production of valid SMILES strings, however, is a challenge because of the complicated SMILES syntax that utilizes parentheses to indicate branches and ring numbers. The sequential nature of RNNs, which may miss long range dependencies, is a disadvantage of these models BIBREF74. RNN descendants LSTM and GRU, which model long-term dependencies, are better suited for remembering matching rings and branch closures. 
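A skeletal character-level language model of this kind is sketched below; PyTorch is assumed, and tokenization, minibatching, and sampling are omitted. The network is trained to predict the next SMILES character given the characters seen so far, which is the core of the RNN-based generators discussed next.

```python
import torch
import torch.nn as nn

class SmilesLM(nn.Module):
    """Next-character language model over SMILES strings (a minimal sketch)."""
    def __init__(self, vocab_size: int, emb_dim: int = 64, hidden_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):           # tokens: (batch, seq_len) integer ids
        h, _ = self.lstm(self.embed(tokens))
        return self.out(h)               # logits: (batch, seq_len, vocab_size)

# Toy training step: shift the sequence by one position to form the targets.
vocab_size = 40                          # e.g. SMILES characters plus start/end tokens
model = SmilesLM(vocab_size)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

batch = torch.randint(0, vocab_size, (8, 30))       # stand-in for encoded SMILES
optimizer.zero_grad()
logits = model(batch[:, :-1])
loss = loss_fn(logits.reshape(-1, vocab_size), batch[:, 1:].reshape(-1))
loss.backward()
optimizer.step()
```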
Motivated by such a hypothesis, BIBREF74 and BIBREF93 successfully pioneered de novo molecule generation using LSTM architecture to generate valid novel SMILES. BIBREF74 further modified their model to generate target-specific molecules by integrating a target bioactivity prediction step to filter out inactive molecules and then retraining the LSTM network. In another study, transfer learning was adopted to fine-tune an LSTM-based SMILES generation model so that structurally similar leads were generated for targets with few known ligands BIBREF94. BIBREF95 and BIBREF96 used reinforcement learning (RL) to bias their model toward compounds with desired properties. Merk et al. BIBREF97, BIBREF98 fine-tuned their LSTM model on a target-focused library of active molecules and synthesized some novel compounds. BIBREF99 explored how much of the GDB-13 database BIBREF100 they could rediscover by using an RNN-based generative model. The variational Auto-encoder (VAE) is another widely adopted text generation architecture BIBREF101. BIBREF34 adopted this architecture for molecule generation. A traditional auto-encoder encodes the input into the latent space, which is then decoded to reconstruct the input. VAE differs from AE by explicitly defining a probability distribution on the latent space to generate new samples. BIBREF34 hypothesized that the variational part of the system integrates noise to the encoder, so that the decoder can be more robust to the large diversity of molecules. However, the authors also reported that the non-context free property of SMILES caused by matching ring numbers and parentheses might often lead the decoder to generate invalid SMILES strings. A grammar variational auto-encoder (GVAE), where the grammar for SMILES is explicitly defined instead of the auto-encoder learning the grammar itself, was proposed to address this issue BIBREF102. This way, the generation is based on the pre-defined grammar rules and the decoding process generates grammar production rules that should also be grammatically valid. Although syntactic validity would be ensured, the molecules may not have semantic validity (chemical validity). BIBREF103 built upon the VAE BIBREF34 and GVAE BIBREF102 architectures and introduced a syntax-directed variational autoencoder (SD-VAE) model for the molecular generation task. The syntax-direct generative mechanism in the decoder contributed to creating both syntactically and semantically valid SMILES sequences. BIBREF103 compared the latent representations of molecules generated by VAE, GVAE, and SD-VAE, and showed that SD-VAE provided better discriminative features for druglikeness. BIBREF104 proposed an adversarial AE for the same task. Conditional VAEs BIBREF105, BIBREF106 were trained to generate molecules conditioned on a desired property. The challenges that SMILES syntax presents inspired the introduction of new syntax such as DeepSMILES BIBREF29 and SELFIES BIBREF32 (details in Section SECREF3). Generative Adversarial Network (GAN) models generate novel molecules by using two components: the generator network generates novel molecules, and the discriminator network aims to distinguish between the generated molecules and real molecules BIBREF107. In text generation models, the novel molecules are drawn from a distribution, which are then fine-tuned to obtain specific features, whereas adversarial learning utilizes generator and discriminator networks to produce novel molecules BIBREF107, BIBREF108. 
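Whichever generative family is used, a routine sanity check on the output is the fraction of sampled strings that parse into valid molecules. A small sketch is given below, assuming RDKit is installed and using invented strings in place of model samples.

```python
from rdkit import Chem   # assumes RDKit is installed

def validity_rate(smiles_list):
    """Fraction of generated SMILES that RDKit can parse into a molecule."""
    valid = [s for s in smiles_list if Chem.MolFromSmiles(s) is not None]
    return len(valid) / len(smiles_list), valid

generated = ["CC(=O)OC1=CC=CC=C1C(=O)O",   # valid
             "CC1(C(N2C(S1)C(C2=O)N",      # unbalanced parentheses -> invalid
             "C1=CC=CC=C1O"]               # valid (phenol)
rate, valid_smiles = validity_rate(generated)
print(rate)   # 2 of 3 strings parse, so ~0.67
```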
ORGAN BIBREF108, a molecular generation methodology, was built upon a sequence generative adversarial network (SeqGAN) from NLP BIBREF109. ORGAN integrated RL in order to generate molecules with desirable properties such as solubility, druglikeness, and synthesizability by using domain-specific rewards BIBREF108. <<<Machine Translation>>> Machine translation finds use in cheminformatics in “translation" from one language (e.g. reactants) to another (e.g. products). Machine translation is a challenging task because the syntactic and semantic dependencies of each language differ from one another, and this may give rise to ambiguities. Neural Machine Translation (NMT) models benefit from the potential of deep learning architectures to build a statistical model that aims to find the most probable target sequence for an input sequence by learning from a corpus of examples BIBREF110, BIBREF111. The main advantage of NMT models is that they provide an end-to-end system that utilizes a single neural network to convert the source sequence into the target sequence. BIBREF110 refer to their model as a sequence-to-sequence (seq2seq) system that addresses a major limitation of DNNs, namely that they can only work with fixed-dimensionality information as input and output. However, in the machine translation task, the length of the input sequences is not fixed, and the length of the output sequences is not known in advance. NMT models are based on an encoder-decoder architecture that aims to maximize the probability of generating the target sequence (i.e. the most likely correct translation) for the given source sequence. The first encoder-decoder architectures in NMT performed poorly as the sequence length increased, mainly because the encoder mapped the source sequence into a single fixed-length vector. However, a fixed-size representation may be too small to encode all the information required to translate long sequences BIBREF112. To overcome the issue of the fixed context vector (Figure FIGREF35a), a new method was developed, in which every source token was encoded into a memory bank independently (Figure FIGREF35b). The decoder could then selectively focus on parts of this memory bank during translation BIBREF112, BIBREF113. This technique is known as the “attention mechanism" BIBREF114. Inspired by the successes in NMT, the first application of seq2seq models in cheminformatics was for reaction prediction by BIBREF115, who proposed to translate the SMILES strings of reactants and separated reagents to the corresponding product SMILES. The authors hypothesized that the reaction prediction problem can be re-modelled as a translation system in which both the input and the output are sequences. Their model used GRUs for the encoder-decoder and a Bahdanau BIBREF112 attention layer in between. BIBREF116, in contrast, performed the opposite task, single-step retrosynthesis prediction, using a similar encoder-decoder model. When given a product and a reaction class, their model predicted the reactants that would react together to form that product. One major challenge in the retrosynthesis prediction task is the possibility of multiple correct targets, because more than one reactant combination could lead to the same product. Similarly to BIBREF115, BIBREF117 also adopted a seq2seq model to translate precursors into products, utilizing the SMILES representation for the reaction prediction problem. Their model used a different attention mechanism by BIBREF113 and LSTMs in the encoder and decoder.
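At a single decoding step, the attention computation itself is compact. The sketch below uses dot-product (Luong-style) scores over the encoder memory bank and is a simplified stand-in for the attention layers of the cited seq2seq models; PyTorch is assumed, with random tensors in place of encoded SMILES tokens.

```python
import torch
import torch.nn.functional as F

def attention_step(decoder_state, encoder_states):
    """One decoding step of dot-product attention.

    decoder_state:  (batch, hidden)            current decoder hidden state
    encoder_states: (batch, src_len, hidden)   memory bank of encoded source tokens
    Returns the context vector and the attention weights over source tokens.
    """
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2))  # (batch, src_len, 1)
    weights = F.softmax(scores.squeeze(2), dim=1)                   # (batch, src_len)
    context = torch.bmm(weights.unsqueeze(1), encoder_states)       # (batch, 1, hidden)
    return context.squeeze(1), weights

# Example with a reactant SMILES of 20 tokens encoded into 256-dimensional states.
enc = torch.randn(4, 20, 256)
dec = torch.randn(4, 256)
context, alpha = attention_step(dec, enc)
print(context.shape, alpha.shape)   # torch.Size([4, 256]) torch.Size([4, 20])
```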
By visualizing the attention weights, an atom-wise mapping between the product and the reactants could be obtained and used to understand the predictions better. BIBREF117 showed that seq2seq models could compete with graph neural network-based models in the reaction prediction task BIBREF118. A translation model was also employed to learn a data-driven representation of molecules BIBREF35. BIBREF35 translated between two textual representations of a chemical, InChi and SMILES, to extract latent representations that can integrate the semantic “meaning" of the molecule. The results indicated a statistically significant improvement with the latent representations in a ligand-based virtual screening task against fingerprint methods such as ECFP (i.e. Morgan algorithm). NMT architectures were also adopted in a protein function prediction task for the first time, in which “words" that were extracted from protein sequences are translated into GO identifiers using RNNs as encoder and decoder BIBREF47. Although exhibiting a comparable performance to the state-of-the-art protein function prediction methods, the authors argued that the performance of the model could be improved by determining more meaningful “words" such as biologically interpretable fragments. Transformer is an attention-based encoder-decoder architecture that was introduced in NMT by BIBREF119. Although similar to previous studies BIBREF110, BIBREF111, BIBREF112 in terms of adopting an encoder-decoder architecture, Transformer differs from the others because it only consists of attention and feed-forward layers in the encoder and decoder. As transformers do not contain an RNN, positional embeddings are needed to capture order relationships in the sequences. BIBREF28 were the first to adopt the Transformer architecture in cheminformatics and designed a Molecular Transformer for the chemical reaction prediction task. The Molecular Transformer, which was atom-mapping independent, outperformed the other algorithms (e.g. based on a two-step convolutional graph neural network BIBREF120) on commonly used benchmark data sets. Transformer architecture was also adopted to learn representations for chemicals in prediction of drug-target interactions BIBREF121 and molecular properties BIBREF122 in which the proposed systems either outperformed the state-of-the-art systems or obtained comparable results. <<</Machine Translation>>> <<</Text generation>>> <<</Biochemical Language Processing>>> <<<Future Perspectives>>> The increase in the biochemical data available in public databases combined with the advances in computational power and NLP methodologies have given rise to a rapid growth in the publication rate in bio/cheminformatics, especially through pre-print servers. As this interdisciplinary field grows, novel opportunities come hand in hand with novel challenges. <<<Challenges>>> The major challenges that can be observed from investigating these studies can be summarized as follows: (i) the need for universalized benchmarks and metrics, (ii) reproducibility of the published methodologies, (iii) bias in available data, and (iv) biological and chemical interpretability/explainability of the solutions. <<<Benchmarking>>> There are several steps in the drug discovery pipeline, from affinity prediction to the prediction of other chemical properties such as toxicity, and solubility. The use of different datasets and different evaluation metrics makes the assessment of model performance challenging. 
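For affinity prediction alone, reported scores range from error metrics such as RMSE to ranking metrics such as the concordance index; the small sketch below implements these two as an illustration (the choice of metrics and the toy values are illustrative, not a prescribed standard).

```python
import itertools
import math

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def concordance_index(y_true, y_pred):
    """Fraction of comparable pairs whose predicted order matches the true order."""
    pairs = [(i, j) for i, j in itertools.combinations(range(len(y_true)), 2)
             if y_true[i] != y_true[j]]
    concordant = sum(
        1.0 if (y_pred[i] - y_pred[j]) * (y_true[i] - y_true[j]) > 0
        else 0.5 if y_pred[i] == y_pred[j] else 0.0
        for i, j in pairs)
    return concordant / len(pairs)

affinities = [7.1, 5.3, 8.4, 6.0]        # e.g. measured pKd values
predictions = [6.8, 5.9, 8.0, 6.1]       # model outputs; ranking is preserved here
print(rmse(affinities, predictions), concordance_index(affinities, predictions))
```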
Comprehensive benchmarking platforms that can assess the success of different tools are still lacking. A benchmarking environment rigorously brings together the suitable data sets and evaluation methodologies in order to provide a fair comparison between the available tools. Such environments are available for the molecule generation task from MOSES BIBREF123 and GuacaMol BIBREF124. MoleculeNet is a similar attempt to build a benchmarking platform for tasks such as prediction of binding affinity and toxicity BIBREF82. <<</Benchmarking>>> <<<Reproducibility>>> Despite the focus on sharing datasets and source code on popular software development platforms such as GitHub (github.com) or Zenodo (zenodo.org), it is still a challenge to use data or code from other groups. The use of FAIR (Findable, Accessible, Interoperable and Reusable) (meta)data principles can guide the management of scientific data BIBREF125. Automated workflows that are easy to use and do not require programming knowledge encourage the flow of information from one discipline to the other. Platform-free solutions such as Docker (docker.com), in which an image of the source code is saved and can be run without requiring further installation, could accelerate the reproduction process. A recent initiative to provide a unified framework for predictive models in genomics can quickly be adopted by the medicinal chemistry community BIBREF126. <<</Reproducibility>>> <<<Bias in data>>> The available data has two significant sources of bias, one related to the limited sampling of chemical space and the other related to the quality and reproducibility of the data. The lack of information about some regions of the protein/chemical landscape limits the current methodologies to the exploitation of data rather than full exploration. The data on protein-compound interactions is biased toward some privileged molecules or proteins because the protein targets are related to common diseases or the molecules are similar to known actives. Hence, not all of chemical space is sampled, and chemical space is expanded based on the similarity of an active compound to others, which is also referred to as inductive bias BIBREF127. Data about proteins or molecules related to rare diseases is limited, and inactive molecules are frequently not reported. Moreover, some experimental measurements are not reproducible across different labs or conditions, which limits their reliability BIBREF128. BIBREF129 and BIBREF130 have recently discussed the bias factors in dataset composition. Zhang and Lee have also addressed the sources of bias in the data and proposed to use Bayesian deep learning to quantify uncertainty. <<</Bias in data>>> <<<Interpretability>>> The black-box nature of ML/DL methodologies makes assigning meaning to the results difficult. Explainability of an ML model is especially critical in drug discovery to facilitate the use of these findings by medicinal chemists, who can contribute to the knowledge loop. Explainable AI (XAI) is a current challenge that calls for increased interpretability of AI solutions for a given context and includes several factors such as trust, safety, privacy, security, fairness and confidence BIBREF131. Explainability is also critical for the domain experts to assess the reliability of new methodologies. Interpretability is usually classified into two categories: post-hoc (i.e. after) and ante-hoc (i.e. before).
Post-hoc approaches explain the predictions of the model, whereas ante-hoc approaches integrate explainability into the model. Recent studies have already aimed to map the semantic meaning behind the models onto the biochemical description. An attentive pooling network, a two-way attention system that extends the attention mechanism by allowing input nodes to be aware of one another, is one approach that has been employed in drug-target interaction prediction BIBREF132. BIBREF76 showed that mapping activations of hidden neurons in feed-forward neural networks to pharmacophores, or linking atom representations computed by convolutional filters to substructures in a graph-convolution model, are possible ways of integrating explainability into AI-based drug discovery systems. BIBREF133 also demonstrated a novel approach that combines molecule generation and retrosynthesis prediction to generate synthesizable molecules. Integration of such solutions to drug discovery problems will not only be useful for computational researchers but also for the medicinal chemistry community. <<</Interpretability>>> <<</Challenges>>> <<<Opportunities>>> The NLP field has seen tremendous advances in the past five years, starting with the introduction of distributed word embedding algorithms such as Word2Vec BIBREF72 and Glove BIBREF79. The concept of contextualized word embeddings (i.e. ELMo) was introduced soon after BIBREF134. Here, the embedding of the word is not fixed, but changes according to the context (i.e. sentence) in which it appears. These advances continued with more complicated architectures such as Transformer (i.e. Generative Pre-Training or GPT) BIBREF135 and BERT BIBREF136, RoBERTa BIBREF137, GPT2 BIBREF138, Transformer-XL BIBREF139, and XLNet BIBREF140 models. Such models with a focus on context might have significant impact not only on drug discovery, but also on the protein folding problem, which is critical for predicting structural properties of the protein partner. Secondary structure BIBREF141, BIBREF142, BIBREF143, domain boundary BIBREF144 and fold BIBREF49 prediction studies often use sequence information in combination with similarity to available structures. The recent success of AlphaFold BIBREF145 in Critical Assessment of Protein Structure Prediction (CASP) competitions (http://predictioncenter.org/) showed that the enhanced definitions of context, brought about by the advances in machine/deep learning systems, might be useful for capturing the global dependencies in protein sequences to detect interactions between residues separated in sequence space but close together in 3D space BIBREF141. Unsupervised learning can be used on “big" textual data through using language models with attention BIBREF119 and using pre-trained checkpoints from language models BIBREF146. Encoder-decoder architectures have also had significant impact on solving text generation and machine translation problems and were successfully applied to molecule generation problem. As NLP moves forward, the most recent approaches such as Topic-Guided VAE BIBREF90 and knowledge graphs with graph transformers BIBREF147 will easily find application in bio/cheminformatics. Recent NLP models are not domain-specific, and they can help with the generalization of models BIBREF138. Current studies emphasize multi-task learning, which requires the use of DNNs that share parameters to learn more information from related but individual tasks BIBREF148, BIBREF138. 
Combined with the transferability of contextual word representation models, multi-task learning can also provide solutions to drug discovery which has many interwoven tasks, such as chemical property prediction and molecule generation. Language has an important power, not only for daily communication but also for the communication of codified domain knowledge. Deciphering the meaning behind text is the primary purpose of NLP, which inevitably has found its way to bio/cheminformatics. The complicated nature of biochemical text makes understanding the semantic construction of the hidden words all the more challenging and interesting. The applications we discussed in this review provide a broad perspective of how NLP is already integrated with the processing of biochemical text. A common theme in all of these applications is the use of AI-based methodologies that drive and benefit from the NLP field. Novel advances in NLP and ML are providing auspicious results to solving long-standing bio/cheminformatics problems. With this review, we have summarized the impact of NLP on bio/cheminformatics to encourage this already interdisciplinary field to take advantage of recent advances. The communication between researchers from different backgrounds and domains can be enhanced through establishing a common vocabulary toward common goals. This review has been an attempt to facilitate this conversation. <<</Opportunities>>> <<</Future Perspectives>>> <<<Acknowledgement>>> This work is partially supported by TUBITAK (The Scientific and Technological Research Council of Turkey) under grant number 119E133. HO acknowledges TUBITAK-BIDEB 2211 scholarship program and thanks Gökçe Uludoğan for her comments on figures. EO thanks Prof. Amedeo Caflisch for hosting her at the University of Zurich during her sabbatical. <<</Acknowledgement>>> <<</Title>>>
{ "references": [ "Yes" ], "type": "boolean" }
2002.06053
Please answer the following question with yes or no based on the given text. You only need to output 'Yes' or 'No' without any additional explanation. Question: Is there any concrete example in the paper that shows that this approach had huge impact on drug discovery? Context: <<<Title>>> Exploring Chemical Space using Natural Language Processing Methodologies for Drug Discovery <<<Abstract>>> Text-based representations of chemicals and proteins can be thought of as unstructured languages codified by humans to describe domain-specific knowledge. Advances in natural language processing (NLP) methodologies in the processing of spoken languages accelerated the application of NLP to elucidate hidden knowledge in textual representations of these biochemical entities and then use it to construct models to predict molecular properties or to design novel molecules. This review outlines the impact made by these advances on drug discovery and aims to further the dialogue between medicinal chemists and computer scientists. <<</Abstract>>> <<<Introduction>>> The design and discovery of novel drugs for protein targets is powered by an understanding of the underlying principles of protein-compound interaction. Biochemical methods that measure affinity and biophysical methods that describe the interaction in atomistic level detail have provided valuable information toward a mechanistic explanation for bimolecular recognition BIBREF0. However, more often than not, compounds with drug potential are discovered serendipitously or by phenotypic drug discovery BIBREF1 since this highly specific interaction is still difficult to predict BIBREF2. Protein structure based computational strategies such as docking BIBREF3, ultra-large library docking for discovering new chemotypes BIBREF4, and molecular dynamics simulations BIBREF3 or ligand based strategies such as quantitative structure-activity relationship (QSAR) BIBREF5, BIBREF6, and molecular similarity BIBREF7 have been powerful at narrowing down the list of compounds to be tested experimentally. With the increase in available data, machine learning and deep learning architectures are also starting to play a significant role in cheminformatics and drug discovery BIBREF8. These approaches often require extensive computational resources or they are limited by the availability of 3D information. On the other hand, text based representations of biochemical entities are more readily available as evidenced by the 19,588 biomolecular complexes (3D structures) in PDB-Bind BIBREF9 (accessed on Nov 13, 2019) compared with 561,356 (manually annotated and reviewed) protein sequences in Uniprot BIBREF10 (accessed on Nov 13, 2019) or 97 million compounds in Pubchem BIBREF11 (accessed on Nov 13, 2019). The advances in natural language processing (NLP) methodologies make processing of text based representations of biomolecules an area of intense research interest. The discipline of natural language processing (NLP) comprises a variety of methods that explore a large amount of textual data in order to bring unstructured, latent (or hidden) knowledge to the fore BIBREF12. Advances in this field are beneficial for tasks that use language (textual data) to build insight. 
The languages in the domains of bioinformatics and cheminformatics can be investigated under three categories: (i) natural language (mostly English) that is used in documents such as scientific publications, patents, and web pages, (ii) domain specific language, codified by a systematic set of rules extracted from empirical data and describing the human understanding of that domain (e.g. proteins, chemicals, etc), and (iii) structured forms such as tables, ontologies, knowledge graphs or databases BIBREF13. Processing and extracting information from textual data written in natural languages is one of the major application areas of NLP methodologies in the biomedical domain (also known as BioNLP). Information extracted with BioNLP methods is most often shared in structured databases or knowledge graphs BIBREF14. We refer the reader to the comprehensive review on BioNLP by BIBREF15. Here, we will be focusing on the application of NLP to domain specific, unstructured biochemical textual representations toward exploration of chemical space in drug discovery efforts. We can view the textual representation of biomedical/biochemical entities as a domain-specific language. For instance, a genome sequence is an extensive script of four characters (A, T, G, C) constituting a genomic language. In proteins, the composition of 20 different natural amino acids in varying lengths builds the protein sequences. Post-translational modifications expand this 20 letter alphabet and confer different properties to proteins BIBREF16. For chemicals there are several text based alternatives such as chemical formula, IUPAC International Chemical Identifier (InChI) BIBREF17 and Simplified Molecular Input Line Entry Specification (SMILES) BIBREF18. Today, the era of “big data" boosts the “learning" aspect of computational approaches substantially, with the ever-growing amounts of information provided by publicly available databases such as PubChem BIBREF11, ChEMBL BIBREF19, UniProt BIBREF10. These databases are rich in biochemical domain knowledge that is in textual form, thus building an efficient environment in which NLP-based techniques can thrive. Furthermore, advances in computational power allow the design of more complex methodologies, which in turn drive the fields of machine learning (ML) and NLP. However, biological and chemical interpretability and explainability remain among the major challenges of AI-based approaches. Data management in terms of access, interoperability and reusability are also critical for the development of NLP models that can be shared across disciplines. With this review, we aim to provide an outline of how the field of NLP has influenced the studies in bioinformatics and cheminformatics and the impact it has had over the last decade. Not only are NLP methodologies facilitating processing and exploitation of biochemical text, they also promise an “understanding" of biochemical language to elucidate the underlying principles of bimolecular recognition. NLP technologies are enhancing the biological and chemical knowledge with the final goal of accelerating drug discovery for improving human health. We highlight the significance of an interdisciplinary approach that integrates computer science and natural sciences. 
<<<NLP Basics>>> BIBREF20 describes NLP on three levels: (i) the word level in which the smallest meaningful unit is extracted to define the morphological structure, (ii) the sentence level where grammar and syntactic validity are determined, and (iii) the domain or context level in which the sentences have global meaning. Similarly, our review is organized in three parts in which biochemical data is investigated at: (i) word level, (ii) sentence (text) level, and (iii) understanding text and generating meaningful sequences. Table TABREF37 summarizes important NLP concepts related to the processing of biochemical data. We refer to these concepts and explain their applications in the following sections. All NLP technology relates to specific AI architectures. In Table TABREF38 we summarize the main ML and deep learning (DL) architectures that will be mentioned throughout the review. <<</NLP Basics>>> <<</Introduction>>> <<<Biochemical Language Processing>>> The language-like properties of text-based representations of chemicals were recognized more than 50 years ago by Garfield BIBREF21. He proposed a “chemico-linguistic" approach to representing chemical nomenclature with the aim of instructing the computer to draw chemical diagrams. Protein sequence has been an important source of information about protein structure and function since Anfinsen's experiment BIBREF22. Alignment algorithms, such as Needleman-Wunsch BIBREF23 and Smith-Waterman BIBREF24, rely on sequence information to identify functionally or structurally critical elements of proteins (or genes). Understanding these sequences is therefore critical for bioinformatics tasks that aim to predict the structure and function of compounds or proteins, with the final goal of accelerating drug discovery. Much as a linguist uses the tools of language to bring out hidden knowledge, biochemical sequences can be processed to propose novel solutions, such as predicting interactions between chemicals and proteins or generating new compounds based on the level of understanding. In this section, we will review the applications of some of these NLP concepts to biochemical data in order to solve bio/cheminformatics problems. <<<Textual Chemical Data>>> Information about chemicals can be found in repositories such as PubChem BIBREF11, which includes information on around 100 million compounds, or DrugBank BIBREF25, which includes information on around 10,000 drugs. The main textual sources used in drug discovery are textual representations of chemicals and proteins. Table TABREF39 lists some sources that store different types of biochemical information. Chemical structures can be represented in different forms: one-dimensional (1D), 2D, and 3D. Table TABREF40 depicts different identifiers/representations of the drug ampicillin. While the 2D and 3D representations are also used in ML-based approaches BIBREF8, here we focus on the 1D form, which is the representation commonly used in NLP. <<<IUPAC name>>> The International Union of Pure and Applied Chemistry (IUPAC) scheme (i.e. nomenclature) is used to name compounds following pre-defined rules such that the names of the compounds are unique and consistent with each other (iupac.org/). <<</IUPAC name>>> <<<Chemical Formula>>> The chemical formula is one of the simplest and most widely-known ways of describing chemicals using letters (i.e. element symbols), numbers, parentheses, and (-/+) signs.
This representation gives information about which elements and how many of them are present in the compound. <<</Chemical Formula>>> <<<SMILES>>> The Simplified Molecular Input Line Entry Specification (SMILES) is a text-based form of describing molecular structures and reactions BIBREF18. SMILES strings can be obtained by traversing the 2D graph representation of the compound, and therefore SMILES provides more complex information than the chemical formula. Moreover, due to its textual form, SMILES takes 50% to 70% less space than other representation methods such as an equivalent connection table (daylight.com/dayhtml/doc/theory/theory.smiles.html). SMILES notation is similar to a language with its own set of rules. Just as it is possible to express the same concept with different words in natural languages, the SMILES notation allows a molecule to be represented by more than one valid SMILES string. Although this may sound like a significant ambiguity, the possibility of using different SMILES to represent the same molecule was successfully adopted as a data augmentation strategy by various groups (BIBREF26, BIBREF27, BIBREF28). Canonical SMILES can provide a unique SMILES representation. However, different databases such as PubChem and ChEMBL might use different canonicalization algorithms to generate different unique SMILES. OpenSMILES (opensmiles.org/opensmiles.html) is a new platform that aims to universalize the SMILES notation. In isomeric SMILES, isotope and stereochemistry information of a molecule is encoded using a variety of symbols (“/", “\", “@", “@@"). <<</SMILES>>> <<<DeepSMILES>>> DeepSMILES is a novel SMILES-like notation that was proposed to address two challenges of the SMILES syntax: (i) unbalanced parentheses and (ii) ring closure pairs BIBREF29. It was initially designed to enhance machine/deep-learning based approaches that utilize SMILES data as input (github.com/nextmovesoftware/deepsmiles). DeepSMILES was adopted in a drug-target binding affinity prediction task in which the findings highlighted the efficacy of DeepSMILES over SMILES in terms of identifying undetectable patterns BIBREF30. DeepSMILES was also utilized in a molecule generation task in which it was compared to canonical and randomized SMILES text BIBREF31. Here, the results suggested that DeepSMILES might limit the learning ability of SMILES-based molecule generation models because its syntax is more grammar-sensitive, with the ring closure alteration and the use of a single symbol for branching (i.e. “)") introducing longer sequences. <<</DeepSMILES>>> <<<SELFIES>>> SELF-referencIng Embedded Strings (SELFIES) is an alternative sequence-based representation that is built upon “semantically constrained graphs" BIBREF32. Each symbol in a SELFIES sequence refers to a recursive Chomsky type-2 grammar, and can thus be used to convert the sequence representation to a unique graph. SELFIES utilize SMILES syntax to extract words that will correspond to semantically valid graphs (github.com/aspuru-guzik-group/selfies). BIBREF32 compared SELFIES, DeepSMILES and SMILES representations in terms of validity in cases where random character mutations are introduced. The evaluations on the QM9 dataset yielded results in favor of SELFIES. <<</SELFIES>>> <<<InChI>>> InChI is the IUPAC International Chemical Identifier, which is a non-proprietary and open-source structural representation (inchi-trust.org) BIBREF33.
The InChIKey is a character-based representation that is generated by hashing the InChI strings in order to shorten them. The InChI representation has several layers, each separated by the “/" symbol. The software that generates InChI is publicly available, and InChI does not suffer from ambiguity problems. However, the less complex structure of SMILES makes it easier to use, as shown in a molecular generation study BIBREF34 and in building meaningful chemical representations with a translation-based system BIBREF35. Interestingly, the translation model was able to translate from InChI to canonical SMILES, whereas it failed to translate from canonical SMILES to InChI. BIBREF35 suggested that the complex syntax of InChI made it difficult for the model to generate a correct sequence. <<</InChI>>> <<<SMARTS>>> SMiles ARbitrary Target Specification (SMARTS) is a language that contains specialized symbols and logic operators that enable substructure (pattern) search on SMILES strings BIBREF36. SMARTS can be used in any task that requires pattern matching on a SMILES string, such as querying databases or creating rule dictionaries such as RECAP BIBREF37 and BRICS BIBREF38 to extract fragments from SMILES (daylight.com/dayhtml/doc/theory/theory.smarts.html). <<</SMARTS>>> <<<SMIRKS>>> SMIRKS notation can be used to describe generic reactions (also known as transforms) that comprise one or more changes in atoms and bonds (https://daylight.com/daycgi_tutorials/smirks_examples.html). These transforms are based on “reactant to product" notation, and thus make use of the SMILES and SMARTS languages. SMIRKS is utilized in tasks such as constructing an online transform database BIBREF39 and predicting metabolic transformations BIBREF40. A recent study achieves a similar performance to rule-based systems in classifying chemical reactions by learning directly from SMILES text with transforms via neural networks BIBREF41. <<</SMIRKS>>> <<</Textual Chemical Data>>> <<<Identification of Words/Tokens>>> Similar to words in natural languages, we can assume that the “words" of biochemical sequences convey significant information (e.g. folding, function, etc.) about the entities. In this regard, each compound/protein is analogous to a sentence, and each compound/protein unit is analogous to a word. Therefore, if we can decipher the grammar of biochemical languages, it would be easier to model bio/cheminformatics problems. However, protein and chemical words are not explicitly known, and different approaches are needed to extract syntactically and semantically meaningful biochemical word units from these textual information sources (i.e. sequences). Here, we review some of the most common tokenization approaches used to determine the words of biochemical languages. <<<$k$-mers ($n$-grams)>>> One of the simplest approaches in NLP to extract a small language unit is to use $k$-mers, also known as $n$-grams. $k$-mers are $k$ consecutive overlapping characters extracted from the sequence with a sliding-window approach. “LINGO", which is one of the earliest applications of $k$-mers in cheminformatics, is the name of the overlapping 4-mers that are extracted from SMILES strings BIBREF42. The 4-mers of the SMILES of ampicillin, “CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C", can be listed as { `CC1(', `C1(C', `1(C(', ..., `O)O)', `)O)C' }. From a sequence of length $l$, a total of $(l-k)+1$ $k$-mers can be extracted.
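This sliding-window extraction can be written in a few lines of plain Python (a minimal sketch; the SMILES string is the ampicillin example above):

```python
def kmers(sequence, k=4):
    """Return all overlapping k-mers of a string: (l - k) + 1 of them for length l."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

ampicillin = "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C"
lingos = kmers(ampicillin, k=4)                      # LINGOs are the 4-mers of a SMILES
print(lingos[:3], "...", lingos[-2:])                # ['CC1(', 'C1(C', '1(C('] ... ['O)O)', ')O)C']
print(len(lingos) == len(ampicillin) - 4 + 1)        # True
```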
Extracting LINGOs from SMILES is a simple yet powerful idea that has been successfully used to compute molecular similarities, to differentiate between bioisosteric and random molecular pairs BIBREF42, and in a drug-target interaction prediction task BIBREF43, without requiring 2D or 3D information. The results suggested that a SMILES-based approach to compute the similarity of chemicals is not only as good as a 2D-based similarity measurement, but also faster BIBREF43. $k$-mers were successfully utilized as protein BIBREF44 and chemical words BIBREF45 in protein family classification tasks. 3-mers to 5-mers were often considered as the words of the protein sequence. BIBREF46 reported that some 5-mers could be matched to motifs and that protein words are most likely a mixture of different $k$-mers. For the protein function prediction task, BIBREF47 decided to choose among the 1000 most frequent words to build the protein vocabulary, whereas BIBREF48 utilized each $k$-mer type separately and showed that 4-mers provided the best performance. In the latter work, instead of using the whole protein sequence, the words were extracted from protein segments of different lengths, which are themselves long $k$-mers (i.e. 100-mers, 120-mers) with 30 amino-acid gaps. The use of segmented protein sequences yielded better results than using the whole protein sequence, and important and conserved subsequences were highlighted. $k$-mers were also used as features, along with position-specific scoring matrix features, in the protein fold prediction problem BIBREF49. <<</$k$-mers ($n$-grams)>>> <<<Longest Common Subsequences>>> The identification of the longest common subsequence (LCS) of two sequences is critical for detecting their similarity. When there are multiple sequences, LCSs can point to informative patterns. LCSs extracted from SMILES sequences performed comparably to 4-mers in chemical similarity calculation BIBREF43. <<</Longest Common Subsequences>>> <<<Maximum Common Substructure>>> BIBREF50 investigated organic chemistry as a language in an interesting study that extracts maximum common substructures (MCS) from the 2D structures of pairs of compounds to build a vocabulary of the molecule corpus. Contrary to the common idea of functional groups (e.g. methyl, ethyl, etc.) being the “words" of the chemical language, the authors argued that MCSs (i.e. fragments) can be described as the words of the chemical language BIBREF50. A recent work investigated the distribution of these words in different molecule subsets BIBREF51. The “words" followed Zipf's Law, which describes the relationship between the frequency of a word and its rank (based on the frequency) BIBREF52, similar to most natural languages. Their results also showed that drug “words" are shorter compared to natural product “words". <<</Maximum Common Substructure>>> <<<Minimum Description Length>>> Minimum Description Length (MDL) is an unsupervised compression-based word segmentation technique in which the words of an unknown language are detected by compressing the text corpus. In a protein classification task, each protein was assigned to the family in which its sequence is compressed the most, according to the MDL-based representation BIBREF53. BIBREF53 investigated whether the MDL-based words of the proteins show similarities to PROSITE patterns BIBREF54 and showed that less conserved residues were compressed less by the algorithm.
BIBREF53 also emphasized that the integration of domain knowledge, such as the consideration of the hydrophilic and hydrophobic amino acids in the words (i.e. grammar building), might prove effective. <<</Minimum Description Length>>> <<<Byte-Pair Encoding>>> Byte-Pair Encoding (BPE) generates words based on high-frequency subsequences, starting from frequent characters BIBREF55. A recent study adopted a linguistically inspired approach to predict protein-protein interactions (PPIs) BIBREF56. Their model was built upon “words" (i.e. bio-words) of the protein language, in which BPE was utilized to build the bio-word vocabulary. BIBREF56 suggested that BPE-segmented words indicate a language-like behavior for the protein sequences and reported improved accuracy results compared to using 3-mers as words. <<</Byte-Pair Encoding>>> <<<Pattern-based words>>> Subsequences that are conserved throughout evolution are usually associated with protein structure and function. These conserved sequences can be detected as patterns via multiple sequence alignment (MSA) techniques and Hidden Markov Models (HMMs). PROSITE BIBREF54, a public database that provides information on domains and motifs of proteins, uses regular expressions (i.e. RE or regex) to match these subsequences. Protein domains have been investigated for their potential of being the words of the protein language. One earlier study suggested that folded domains could be considered as “phrases/clauses" rather than “words" because of the higher semantic complexity between them BIBREF57. Later, domains were described as the words, and domain architectures as the sentences, of the language BIBREF58, BIBREF59. Protein domains were treated as the words of multi-domain proteins in order to evaluate the semantic meaning behind the domains BIBREF60. The study supported prior work by BIBREF59 suggesting that domains display syntactic and semantic features, but there are only a few multi-domain proteins with more than six domains, limiting the use of domains as words to build sentences. Protein domains and motifs have also been utilized as words in different drug discovery tasks such as the prediction of drug-target interaction affinity BIBREF61, BIBREF62. These studies showed that motifs and domains together contribute to the prediction as much as the use of the full protein sequence. SMARTS is a well-known regex-based querying language that is used to identify patterns in a SMILES string. SMARTS has been utilized to build specific rules for small-molecule protonation BIBREF63, to design novel ligands based on the fragments connected to the active site of a target BIBREF64, and to help generate products in reaction prediction BIBREF65. MolBlocks, a molecular fragmentation tool, also adopted SMARTS dictionaries to partition a SMILES string into overlapping fragments BIBREF36. Furthermore, MACCS BIBREF66 and PubChem BIBREF11 Fingerprints (FP) are molecular descriptors that are described as binary vectors based on the absence/presence of substructures that are predefined with the SMARTS language. A recent study on protein family clustering uses a ligand-centric representation to describe proteins, in which ligands were represented with a SMILES-based (i.e. 8-mer) representation, MACCS, and Extended Connectivity Fingerprint (ECFP6) BIBREF45. The results indicate that the three ligand representation approaches provide similar performances for protein family clustering.
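As a concrete sketch of this kind of SMARTS-based pattern matching (assuming RDKit is installed; the amide query below is only an illustrative pattern, not one taken from the cited rule dictionaries):

```python
from rdkit import Chem
from rdkit.Chem import MACCSkeys

mol = Chem.MolFromSmiles("CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C")  # ampicillin
amide = Chem.MolFromSmarts("[NX3][CX3](=O)")     # illustrative amide substructure query

print(mol.HasSubstructMatch(amide))              # True: ampicillin contains amide bonds
print(mol.GetSubstructMatches(amide))            # atom indices of each matched substructure

maccs = MACCSkeys.GenMACCSKeys(mol)              # 167-bit fingerprint defined via SMARTS keys
print(maccs.GetNumOnBits(), "MACCS bits set")
```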
To the best of our knowledge, there is no comprehensive evaluation of the different word extraction techniques except a comparison by BIBREF56 of the performance of BPE-based words against $k$-mers in a PPI prediction task. Such comparison would provide important insights to the bio/cheminformatics community. <<</Pattern-based words>>> <<</Identification of Words/Tokens>>> <<<Text representation>>> The representation of a text (e.g. molecule or protein sequence) aims to capture syntactic, semantic or relational meaning. In the widely used Vector Space Model (VSM), a text is represented by a feature vector of either weighted or un-weighted terms BIBREF67. The terms of this vector may correspond to words, phrases, k-grams, characters, or dimensions in a semantic space such as in the distributed word embedding representation models. The similarity between two texts represented in the vector space model is usually computed using the cosine similarity metric BIBREF68, which corresponds to the cosine of the angle between the two vectors. Similarly to the one-hot encoding scheme BIBREF69, in the traditional bag-of-words BIBREF70 and term frequency-inverse document frequency (TF-IDF) BIBREF71 text representation models, each word corresponds to a different dimension in the vector space. Therefore, the similarity between two words in the vector space is zero, even if they are synonymous or related to each other. In the distributed representation models BIBREF72 on the other hand, words are represented as dense vectors based on their context. Words that occur in similar contexts have similar vector representations. In this subsection, we review these commonly used text representation models with their applications in cheminformatics. <<<Bag-of-words representation>>> In this representation model, a text is represented as a vector of bag-of-words, where the multiplicity of the words is taken into account, but the order of the words in the text is lost BIBREF70. For instance, the SMILES of ampicillin “CC1(C(N2C(S1)C(C2=O)NC(=O)C( C3=CC=CC=C3)N)C(=O)O)C" can be represented as a bag-of 8-mers as follows: {“CC1(C(N2", “C1(C(N2C", “1(C(N2C(", “(C(N2C(S",...,“N)C(=O)O" ,“)C(=O)O)" ,“C(=O)O)C" }. We can vectorize it as $S = [1, 1, 1, 1, ...,1, 1, 1]$ in which each number refers to the frequency of the corresponding 8-mer. Bag-of-words representation was used in molecular similarity computation, in which the SMILES string and the LINGOs extracted from it were treated as the sentence and words, respectively BIBREF42. The unique LINGOs were considered for each pair and a Tanimoto coefficient was used to measure the similarity BIBREF42. Another approach called SMILES Fingerprint (SMIfp) also adopted bag-of-words to create representations of molecules for a ligand-based virtual screening task BIBREF73. SMIfp considered 34 unique symbols in SMILES strings to create a frequency-based vector representation, which was utilized to compute molecular similarity. SMIfp provided comparable results to a chemical representation technique that also incorporated polar group and topological information, as well as atom and bond information, in recovering active compounds amongst decoys BIBREF73. <<</Bag-of-words representation>>> <<<TF-IDF>>> The bag-of-words model, which is based on counting the terms of the sentence/document, might prioritize insignificant but frequent words. 
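The following plain-Python sketch makes the counting view concrete: it builds an 8-mer frequency vector for ampicillin and a set-based Tanimoto similarity over unique 4-mer LINGOs for a pair of molecules (the second SMILES, aspirin, is an arbitrary example, and the normalization steps of the original LINGO method are omitted). The weighting scheme discussed next rescales exactly this kind of raw count.

```python
from collections import Counter

def kmers(s, k):
    return [s[i:i + k] for i in range(len(s) - k + 1)]

ampicillin = "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C"
aspirin = "CC(=O)Oc1ccccc1C(=O)O"      # illustrative second molecule for the pairwise example

# Bag-of-words: frequency vector over the 8-mer vocabulary of the molecule.
counts = Counter(kmers(ampicillin, 8))
vocab = sorted(counts)
bow_vector = [counts[w] for w in vocab]

# Set-based Tanimoto coefficient over the unique 4-mer LINGOs of the two molecules.
a, b = set(kmers(ampicillin, 4)), set(kmers(aspirin, 4))
tanimoto = len(a & b) / len(a | b)
print(len(vocab), round(tanimoto, 3))
```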
To overcome this frequency bias, a weighting scheme can be integrated into the vector representation in order to give more importance to the rare terms that might play a key role in detecting the similarity between two documents. One popular weighting approach is term frequency-inverse document frequency (TF-IDF) BIBREF71. TF refers to the frequency of a term in the document, and IDF denotes the logarithm of the total number of documents over the number of documents in which the term appears. IDF is therefore an indicator of uniqueness. For instance, the IDF of “C3=CC=CC" is lower than that of “(C(N2C(S", which appears in fewer compounds. Therefore, the existence of “(C(N2C(S" in a compound may be more informative. TF-IDF weighting was utilized to assign weights to LINGOs that were extracted from SMILES in order to compute molecule similarity using cosine similarity BIBREF43. Molecular similarities were then used as input for drug-target interaction prediction. A similar performance between TF-IDF-weighted LINGO and a graph-based chemical similarity measurement was obtained. BIBREF50 used TF-IDF weighting on chemical bonds to show that bonds with higher TF-IDF scores have a higher probability of breaking. <<</TF-IDF>>> <<<One-hot representation>>> In the one-hot representation, for a given vocabulary of a text, each unique word/character is represented with a binary vector that has a 1 in the corresponding position, while the vector positions for the remaining words/characters are filled with 0s BIBREF69. One-hot encoding is fast to build, but might lead to sparse vectors with large dimensions depending on the size of the vocabulary (e.g. one million unique words in the vocabulary means one-million-dimensional binary vectors filled with zeros except for one position). It is a popular choice, especially in machine learning-based bio/cheminformatics studies, to encode different types of information such as SMILES characters BIBREF74, BIBREF75, atom/bond types BIBREF76, BIBREF77 and molecular properties BIBREF78. <<</One-hot representation>>> <<<Distributed representations>>> The one-hot encoding builds discrete representations, and thus does not consider the relationships between words. For instance, the cosine similarity of two different words is 0 even if they are semantically similar. However, if the word (i.e. 8-mer) “(C(N2C(S" frequently appears together with the word “C(C2=O)N" in SMILES strings, this might suggest that they have related “meanings". Furthermore, two words might have similar semantic meanings even though they are syntactically apart. This is where distributed vector representations come into play. Distributed word embedding models gained popularity with the introduction of Word2Vec BIBREF72 and GloVe BIBREF79. The main motivation behind the Word2Vec model is to build real-valued high-dimensional vectors for each word in the vocabulary based on the context in which they appear. There are two main approaches in Word2Vec: (i) Skip-Gram and (ii) Continuous Bag of Words (CBOW). The aim of the Skip-Gram model is to predict context words given the center word, whereas in CBOW the objective is to predict the target word given the context words. Figure FIGREF32 depicts the Skip-Gram architecture in Word2Vec BIBREF72. For a vocabulary of size $V$, given the target word “2C(S", the model learns to predict two context words. Both the target word and the context words are represented as one-hot encoded binary vectors of size $V$.
The number of neurons in the hidden layer determines the size of the embedding vectors. The weight matrix between the input layer and the hidden layer stores the embeddings of the vocabulary words. The $i^{th}$ row of the embedding matrix corresponds to the embedding of the $i^{th}$ word. The Word2Vec architecture has inspired a great deal of research in the bio/cheminformatics domains. The Word2Vec algorithm has been successfully applied for determining protein classes BIBREF44 and protein-protein interactions (PPI) BIBREF56. BIBREF44 treated 3-mers as the words of the protein sequence and observed that 3-mers with similar biophysical and biochemical properties clustered together when their embeddings were mapped onto the 2D space. BIBREF56, on the other hand, utilized BPE-based word segmentation (i.e. bio-words) to determine the words. The authors argued that the improved performance for bio-words in the PPI prediction task might be due to the segmentation-based model providing more distinct words than $k$-mers, which include repetitive segments. Another recent study treated multi-domain proteins as sentences in which each domain was recognized as a word BIBREF60. The Word2Vec algorithm was trained on the domains (i.e. PFAM domain identifiers) of eukaryotic protein sequences to learn semantically interpretable representations of them. The domain representations were then investigated in terms of the Gene Ontology (GO) annotations that they inherit. The results indicated that semantically similar domains share similar GO terms. The Word2Vec algorithm was also utilized for representation of chemicals. SMILESVec, a text-based ligand representation technique, utilized Word2Vec to learn embeddings for 8-mers (i.e. chemical words) that are extracted from SMILES strings BIBREF45. SMILESVec was utilized in protein representation such that proteins were represented as the average of the SMILESVec vectors of their interacting ligands. The results indicated comparable performances for ligand-based and sequence based protein representations in protein family/superfamily clustering. Mol2Vec BIBREF80, on the other hand, was based on the identifiers of the substructures (i.e. words of the chemical) that were extracted via Extended Connectivity Fingerprint (ECFP) BIBREF81. The results showed a better performance with Mol2Vec than with the simple Morgan Fingerprint in a solubility prediction task, and a comparable performance to graph-based chemical representation BIBREF82. BIBREF83 also employed the Word2vec model that was trained on the fragments that are extracted from SMILES strings using a graph traversing algorithm. The results favored the distributed fragment-based ligand representation over fragment-based binary vector representation in a ring system clustering task and showed a comparable performance in the prediction of toxicity against Tetrahymena BIBREF83. Figure FIGREF33 illustrates the pipeline of a text-based molecule representation based on $k$-mers. FP2Vec is another method that utilizes embedding representation for molecules, however instead of the Word2Vec algorithm, it depends on a Convolutional Neural Network (CNN) to build molecule representations to be used in toxicity prediction tasks BIBREF84. 
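In the spirit of these chemical-word embeddings, a minimal sketch with gensim is given below (gensim 4 or later is assumed; the two-molecule corpus is purely illustrative and far too small to produce meaningful vectors):

```python
from gensim.models import Word2Vec

def kmers(s, k=8):
    return [s[i:i + k] for i in range(len(s) - k + 1)]

corpus_smiles = [
    "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C",   # ampicillin
    "CC(=O)Oc1ccccc1C(=O)O",                                 # aspirin (illustrative)
]
sentences = [kmers(s) for s in corpus_smiles]                # each molecule = one "sentence" of 8-mers

model = Word2Vec(sentences, vector_size=64, window=5, min_count=1, sg=1, epochs=50)
first_word = sentences[0][0]
print(model.wv[first_word].shape)                            # (64,)
print(model.wv.most_similar(first_word, topn=3))             # nearest chemical words
```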
CNN architectures have also been utilized for drug-target binding affinity prediction BIBREF85 and drug-drug interaction prediction BIBREF75 to build representations for chemicals from raw SMILES strings, as well as for protein fold prediction BIBREF86 to learn representations for proteins from amino-acid sequences. SMILES2Vec adopted different DL architectures (GRU, LSTM, CNN+GRU, and CNN+LSTM) to learn molecule embeddings, which were then used to predict toxicity, affinity and solubility BIBREF87. A CNN+GRU combination was better at the prediction of chemical properties. A recent study compared several DL approaches to investigate the effect of different chemical representations, which were learned through these architectures, on a chemical property prediction problem BIBREF88. The authors also combined DL architectures that were trained on SMILES strings with the MACCS fingerprint, proposing a combined representation for molecules (i.e. CheMixNet). The CheMixNet representation outperformed the other representations that were trained on a single data type such as SMILES2Vec (i.e. SMILES) and Chemception (i.e. 2D graph) BIBREF89. <<</Distributed representations>>> <<</Text representation>>> <<<Text generation>>> Text generation is a primary NLP task, where the aim is to generate grammatically and semantically correct text, with many applications ranging from question answering to machine translation BIBREF90. It is generally formulated as a language modeling task, where a statistical model is trained using a large corpus to predict the distribution of the next word in a given context. In machine translation, the generated text is the translation of an input text in another language. Medicinal chemistry campaigns use methods such as scaffold hopping BIBREF91 or fragment-based drug design BIBREF3 to build and test novel molecules but the chemotype diversity and novelty may be limited. It is possible to explore uncharted chemical space with text generation models, which learn a distribution from the available data (i.e. SMILES language) and generate novel molecules that share similar physicochemical properties with the existing molecules BIBREF74. Molecule generation can then be followed by assessing physicochemical properties of the generated compound or its binding potential to a target protein BIBREF74. For a comprehensive review of molecule generation methodologies, including graph-based models, we refer the reader to the review of BIBREF92. Machine translation models have also been recently adapted to text-based molecule generation, which start with one “language" such as that of reactants and generate a novel text in another “language" such as that of products BIBREF28. Below, we present recent studies on text based molecule generation. RNN models, which learn a probability distribution from a training set of molecules, are commonly used in molecule generation to propose novel molecules similar to the ones in the training data set. For instance, given the SMILES sequence “C(=O", the model would predict the next character to be “)" with a higher probability than “(". The production of valid SMILES strings, however, is a challenge because of the complicated SMILES syntax that utilizes parentheses to indicate branches and ring numbers. The sequential nature of RNNs, which may miss long range dependencies, is a disadvantage of these models BIBREF74. RNN descendants LSTM and GRU, which model long-term dependencies, are better suited for remembering matching rings and branch closures. 
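A minimal PyTorch sketch of this next-character language-modeling idea follows (the vocabulary, GRU size, and start/end tokens are illustrative choices rather than those of the cited models, and training on a SMILES corpus is omitted):

```python
import torch
import torch.nn as nn

# Hypothetical character vocabulary; a real model would build it from a SMILES corpus.
chars = sorted(set("CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C")) + ["^", "$"]
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}

class SmilesLM(nn.Module):
    """Next-character language model: embedding -> GRU -> logits over characters."""
    def __init__(self, vocab_size, emb=64, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.gru = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x, h=None):
        y, h = self.gru(self.emb(x), h)
        return self.out(y), h

def sample(model, max_len=100, temperature=1.0):
    """Autoregressively sample one string, starting from the '^' start token."""
    model.eval()
    x, h, out = torch.tensor([[stoi["^"]]]), None, []
    with torch.no_grad():
        for _ in range(max_len):
            logits, h = model(x, h)
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            idx = torch.multinomial(probs, 1).item()
            if itos[idx] == "$":                  # end-of-sequence token
                break
            out.append(itos[idx])
            x = torch.tensor([[idx]])
    return "".join(out)

model = SmilesLM(len(chars))
# Training (cross-entropy on next-character prediction over a SMILES corpus) is omitted here.
print(sample(model))   # an untrained model emits essentially random, mostly invalid SMILES
```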
Motivated by such a hypothesis, BIBREF74 and BIBREF93 successfully pioneered de novo molecule generation using LSTM architecture to generate valid novel SMILES. BIBREF74 further modified their model to generate target-specific molecules by integrating a target bioactivity prediction step to filter out inactive molecules and then retraining the LSTM network. In another study, transfer learning was adopted to fine-tune an LSTM-based SMILES generation model so that structurally similar leads were generated for targets with few known ligands BIBREF94. BIBREF95 and BIBREF96 used reinforcement learning (RL) to bias their model toward compounds with desired properties. Merk et al. BIBREF97, BIBREF98 fine-tuned their LSTM model on a target-focused library of active molecules and synthesized some novel compounds. BIBREF99 explored how much of the GDB-13 database BIBREF100 they could rediscover by using an RNN-based generative model. The variational Auto-encoder (VAE) is another widely adopted text generation architecture BIBREF101. BIBREF34 adopted this architecture for molecule generation. A traditional auto-encoder encodes the input into the latent space, which is then decoded to reconstruct the input. VAE differs from AE by explicitly defining a probability distribution on the latent space to generate new samples. BIBREF34 hypothesized that the variational part of the system integrates noise to the encoder, so that the decoder can be more robust to the large diversity of molecules. However, the authors also reported that the non-context free property of SMILES caused by matching ring numbers and parentheses might often lead the decoder to generate invalid SMILES strings. A grammar variational auto-encoder (GVAE), where the grammar for SMILES is explicitly defined instead of the auto-encoder learning the grammar itself, was proposed to address this issue BIBREF102. This way, the generation is based on the pre-defined grammar rules and the decoding process generates grammar production rules that should also be grammatically valid. Although syntactic validity would be ensured, the molecules may not have semantic validity (chemical validity). BIBREF103 built upon the VAE BIBREF34 and GVAE BIBREF102 architectures and introduced a syntax-directed variational autoencoder (SD-VAE) model for the molecular generation task. The syntax-direct generative mechanism in the decoder contributed to creating both syntactically and semantically valid SMILES sequences. BIBREF103 compared the latent representations of molecules generated by VAE, GVAE, and SD-VAE, and showed that SD-VAE provided better discriminative features for druglikeness. BIBREF104 proposed an adversarial AE for the same task. Conditional VAEs BIBREF105, BIBREF106 were trained to generate molecules conditioned on a desired property. The challenges that SMILES syntax presents inspired the introduction of new syntax such as DeepSMILES BIBREF29 and SELFIES BIBREF32 (details in Section SECREF3). Generative Adversarial Network (GAN) models generate novel molecules by using two components: the generator network generates novel molecules, and the discriminator network aims to distinguish between the generated molecules and real molecules BIBREF107. In text generation models, the novel molecules are drawn from a distribution, which are then fine-tuned to obtain specific features, whereas adversarial learning utilizes generator and discriminator networks to produce novel molecules BIBREF107, BIBREF108. 
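Before turning to concrete adversarial systems, the auto-encoding branch described above can be summarized in a compact PyTorch sketch (the layer sizes are arbitrary and this is not the exact architecture of the cited chemical VAE; a grammar- or syntax-directed variant would additionally constrain the decoder):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmilesVAE(nn.Module):
    """Sketch of a sequence VAE: GRU encoder -> (mu, logvar) -> z -> GRU decoder."""
    def __init__(self, vocab_size, emb=64, hidden=256, latent=56):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.z_to_h = nn.Linear(latent, hidden)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x):
        e = self.emb(x)
        _, h = self.encoder(e)                                       # h: (1, batch, hidden)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)      # reparameterization trick
        h0 = torch.tanh(self.z_to_h(z)).unsqueeze(0)
        y, _ = self.decoder(e, h0)        # teacher forcing; a real model shifts inputs by one step
        return self.out(y), mu, logvar

def vae_loss(logits, targets, mu, logvar):
    recon = F.cross_entropy(logits.transpose(1, 2), targets)         # reconstruction term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())    # KL regularizer on the latent
    return recon + kl
```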
ORGAN BIBREF108, a molecular generation methodology, was built upon a sequence generative adversarial network (SeqGAN) from NLP BIBREF109. ORGAN integrated RL in order to generate molecules with desirable properties such as solubility, druglikeness, and synthesizability by using domain-specific rewards BIBREF108. <<<Machine Translation>>> Machine translation finds use in cheminformatics as “translation" from one language (e.g. reactants) to another (e.g. products). Machine translation is a challenging task because the syntactic and semantic dependencies of each language differ from one another, and this may give rise to ambiguities. Neural Machine Translation (NMT) models benefit from the potential of deep learning architectures to build a statistical model that aims to find the most probable target sequence for an input sequence by learning from a corpus of examples BIBREF110, BIBREF111. The main advantage of NMT models is that they provide an end-to-end system that utilizes a single neural network to convert the source sequence into the target sequence. BIBREF110 refer to their model as a sequence-to-sequence (seq2seq) system that addresses a major limitation of DNNs, namely that they can only work with fixed-dimensionality inputs and outputs. However, in the machine translation task, the length of the input sequences is not fixed, and the length of the output sequences is not known in advance. NMT models are based on an encoder-decoder architecture that aims to maximize the probability of generating the target sequence (i.e. the most likely correct translation) for the given source sequence. The first encoder-decoder architectures in NMT performed poorly as the sequence length increased, mainly because the encoder mapped the source sequence into a single fixed-length vector. However, a fixed-size representation may be too small to encode all the information required to translate long sequences BIBREF112. To overcome the issue of the fixed context vector (Figure FIGREF35a), a new method was developed in which every source token was encoded into a memory bank independently (Figure FIGREF35b). The decoder could then selectively focus on parts of this memory bank during translation BIBREF112, BIBREF113. This technique is known as the “attention mechanism" BIBREF114. Inspired by the successes in NMT, the first application of seq2seq models in cheminformatics was for reaction prediction by BIBREF115, who proposed to translate the SMILES strings of reactants and separated reagents to the corresponding product SMILES. The authors hypothesized that the reaction prediction problem can be re-modelled as a translation system in which both the inputs and the outputs are sequences. Their model used GRUs for the encoder-decoder and a Bahdanau BIBREF112 attention layer in between. BIBREF116, in contrast, performed the opposite task, single-step retrosynthesis prediction, using a similar encoder-decoder model. When given a product and a reaction class, their model predicted the reactants that would react together to form that product. One major challenge in the retrosynthesis prediction task is the possibility of multiple correct targets, because more than one reactant combination could lead to the same product. Similarly to BIBREF115, BIBREF117 also adopted a seq2seq model to translate precursors into products, utilizing the SMILES representation for the reaction prediction problem. Their model used a different attention mechanism, proposed by BIBREF113, and LSTMs in the encoder and decoder.
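The core of such an attention layer can be sketched in a few lines of NumPy (a dot-product, Luong-style scoring is used here for brevity; the Bahdanau variant scores with a small feed-forward network instead, and all dimensions below are toy values). The per-source-token weights computed this way are what the visualizations described next are based on.

```python
import numpy as np

def attention(decoder_state, encoder_states):
    """Dot-product attention: weights over source tokens and the resulting context vector."""
    scores = encoder_states @ decoder_state            # one score per source token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                           # softmax over source positions
    context = weights @ encoder_states                 # weighted sum of encoder states
    return weights, context

rng = np.random.default_rng(0)
encoder_states = rng.normal(size=(5, 8))   # hypothetical: 5 reactant tokens, hidden size 8
decoder_state = rng.normal(size=(8,))      # current decoder state
weights, context = attention(decoder_state, encoder_states)
print(weights.round(3), context.shape)     # weights sum to 1; context has shape (8,)
```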
By visualizing the attention weights, an atom-wise mapping between the product and the reactants could be obtained and used to understand the predictions better. BIBREF117 showed that seq2seq models could compete with graph neural network-based models in the reaction prediction task BIBREF118. A translation model was also employed to learn a data-driven representation of molecules BIBREF35. BIBREF35 translated between two textual representations of a chemical, InChi and SMILES, to extract latent representations that can integrate the semantic “meaning" of the molecule. The results indicated a statistically significant improvement with the latent representations in a ligand-based virtual screening task against fingerprint methods such as ECFP (i.e. Morgan algorithm). NMT architectures were also adopted in a protein function prediction task for the first time, in which “words" that were extracted from protein sequences are translated into GO identifiers using RNNs as encoder and decoder BIBREF47. Although exhibiting a comparable performance to the state-of-the-art protein function prediction methods, the authors argued that the performance of the model could be improved by determining more meaningful “words" such as biologically interpretable fragments. Transformer is an attention-based encoder-decoder architecture that was introduced in NMT by BIBREF119. Although similar to previous studies BIBREF110, BIBREF111, BIBREF112 in terms of adopting an encoder-decoder architecture, Transformer differs from the others because it only consists of attention and feed-forward layers in the encoder and decoder. As transformers do not contain an RNN, positional embeddings are needed to capture order relationships in the sequences. BIBREF28 were the first to adopt the Transformer architecture in cheminformatics and designed a Molecular Transformer for the chemical reaction prediction task. The Molecular Transformer, which was atom-mapping independent, outperformed the other algorithms (e.g. based on a two-step convolutional graph neural network BIBREF120) on commonly used benchmark data sets. Transformer architecture was also adopted to learn representations for chemicals in prediction of drug-target interactions BIBREF121 and molecular properties BIBREF122 in which the proposed systems either outperformed the state-of-the-art systems or obtained comparable results. <<</Machine Translation>>> <<</Text generation>>> <<</Biochemical Language Processing>>> <<<Future Perspectives>>> The increase in the biochemical data available in public databases combined with the advances in computational power and NLP methodologies have given rise to a rapid growth in the publication rate in bio/cheminformatics, especially through pre-print servers. As this interdisciplinary field grows, novel opportunities come hand in hand with novel challenges. <<<Challenges>>> The major challenges that can be observed from investigating these studies can be summarized as follows: (i) the need for universalized benchmarks and metrics, (ii) reproducibility of the published methodologies, (iii) bias in available data, and (iv) biological and chemical interpretability/explainability of the solutions. <<<Benchmarking>>> There are several steps in the drug discovery pipeline, from affinity prediction to the prediction of other chemical properties such as toxicity, and solubility. The use of different datasets and different evaluation metrics makes the assessment of model performance challenging. 
Comprehensive benchmarking platforms that can assess the success of different tools are still lacking. A benchmarking environment rigorously brings together suitable data sets and evaluation methodologies in order to provide a fair comparison between the available tools. Such environments are available for the molecule generation task from MOSES BIBREF123 and GuacaMol BIBREF124. MoleculeNet is a similar attempt to build a benchmarking platform for tasks such as the prediction of binding affinity and toxicity BIBREF82. <<</Benchmarking>>> <<<Reproducibility>>> Despite the focus on sharing datasets and source code on popular software development platforms such as GitHub (github.com) or Zenodo (zenodo.org), it is still a challenge to use data or code from other groups. The use of FAIR (Findable, Accessible, Interoperable and Reusable) (meta)data principles can guide the management of scientific data BIBREF125. Automated workflows that are easy to use and do not require programming knowledge encourage the flow of information from one discipline to the other. Platform-free solutions such as Docker (docker.com), in which an image of the source code is saved and can be run without requiring further installation, could accelerate the reproduction process. A recent initiative to provide a unified framework for predictive models in genomics can quickly be adopted by the medicinal chemistry community BIBREF126. <<</Reproducibility>>> <<<Bias in data>>> The available data has two significant sources of bias, one related to the limited sampling of chemical space and the other related to the quality and reproducibility of the data. The lack of information about some regions of the protein/chemical landscape limits the current methodologies to the exploitation of data rather than full exploration. The data on protein-compound interactions is biased toward some privileged molecules or proteins because the protein targets are related to common diseases or the molecules are similar to known actives. Hence, not all of chemical space is sampled, and chemical space is expanded based on the similarity of an active compound to others, which is also referred to as inductive bias BIBREF127. Data about proteins or molecules related to rare diseases is limited, and inactive molecules are frequently not reported. Moreover, some experimental measurements are not reproducible across different labs or conditions, which limits their reliability BIBREF128. BIBREF129 and BIBREF130 have recently discussed the bias factors in dataset composition. Zhang and Lee have also addressed the sources of bias in the data and proposed to use Bayesian deep learning to quantify uncertainty. <<</Bias in data>>> <<<Interpretability>>> The black-box nature of ML/DL methodologies makes assigning meaning to the results difficult. Explainability of an ML model is especially critical in drug discovery to facilitate the use of these findings by medicinal chemists, who can contribute to the knowledge loop. Explainable AI (XAI) is a current challenge that calls for increased interpretability of AI solutions for a given context and includes several factors such as trust, safety, privacy, security, fairness and confidence BIBREF131. Explainability is also critical for domain experts to assess the reliability of new methodologies. Interpretability is usually classified into two categories: post-hoc (i.e. after) and ante-hoc (i.e. before).
Post-hoc approaches explain the predictions of the model, whereas ante-hoc approaches integrate explainability into the model. Recent studies have already aimed to map the semantic meaning behind the models onto the biochemical description. An attentive pooling network, a two-way attention system that extends the attention mechanism by allowing input nodes to be aware of one another, is one approach that has been employed in drug-target interaction prediction BIBREF132. BIBREF76 showed that mapping activations of hidden neurons in feed-forward neural networks to pharmacophores, or linking atom representations computed by convolutional filters to substructures in a graph-convolution model, are possible ways of integrating explainability into AI-based drug discovery systems. BIBREF133 also demonstrated a novel approach that combines molecule generation and retrosynthesis prediction to generate synthesizable molecules. Integration of such solutions to drug discovery problems will not only be useful for computational researchers but also for the medicinal chemistry community. <<</Interpretability>>> <<</Challenges>>> <<<Opportunities>>> The NLP field has seen tremendous advances in the past five years, starting with the introduction of distributed word embedding algorithms such as Word2Vec BIBREF72 and Glove BIBREF79. The concept of contextualized word embeddings (i.e. ELMo) was introduced soon after BIBREF134. Here, the embedding of the word is not fixed, but changes according to the context (i.e. sentence) in which it appears. These advances continued with more complicated architectures such as Transformer (i.e. Generative Pre-Training or GPT) BIBREF135 and BERT BIBREF136, RoBERTa BIBREF137, GPT2 BIBREF138, Transformer-XL BIBREF139, and XLNet BIBREF140 models. Such models with a focus on context might have significant impact not only on drug discovery, but also on the protein folding problem, which is critical for predicting structural properties of the protein partner. Secondary structure BIBREF141, BIBREF142, BIBREF143, domain boundary BIBREF144 and fold BIBREF49 prediction studies often use sequence information in combination with similarity to available structures. The recent success of AlphaFold BIBREF145 in Critical Assessment of Protein Structure Prediction (CASP) competitions (http://predictioncenter.org/) showed that the enhanced definitions of context, brought about by the advances in machine/deep learning systems, might be useful for capturing the global dependencies in protein sequences to detect interactions between residues separated in sequence space but close together in 3D space BIBREF141. Unsupervised learning can be used on “big" textual data through using language models with attention BIBREF119 and using pre-trained checkpoints from language models BIBREF146. Encoder-decoder architectures have also had significant impact on solving text generation and machine translation problems and were successfully applied to molecule generation problem. As NLP moves forward, the most recent approaches such as Topic-Guided VAE BIBREF90 and knowledge graphs with graph transformers BIBREF147 will easily find application in bio/cheminformatics. Recent NLP models are not domain-specific, and they can help with the generalization of models BIBREF138. Current studies emphasize multi-task learning, which requires the use of DNNs that share parameters to learn more information from related but individual tasks BIBREF148, BIBREF138. 
Combined with the transferability of contextual word representation models, multi-task learning can also provide solutions to drug discovery which has many interwoven tasks, such as chemical property prediction and molecule generation. Language has an important power, not only for daily communication but also for the communication of codified domain knowledge. Deciphering the meaning behind text is the primary purpose of NLP, which inevitably has found its way to bio/cheminformatics. The complicated nature of biochemical text makes understanding the semantic construction of the hidden words all the more challenging and interesting. The applications we discussed in this review provide a broad perspective of how NLP is already integrated with the processing of biochemical text. A common theme in all of these applications is the use of AI-based methodologies that drive and benefit from the NLP field. Novel advances in NLP and ML are providing auspicious results to solving long-standing bio/cheminformatics problems. With this review, we have summarized the impact of NLP on bio/cheminformatics to encourage this already interdisciplinary field to take advantage of recent advances. The communication between researchers from different backgrounds and domains can be enhanced through establishing a common vocabulary toward common goals. This review has been an attempt to facilitate this conversation. <<</Opportunities>>> <<</Future Perspectives>>> <<<Acknowledgement>>> This work is partially supported by TUBITAK (The Scientific and Technological Research Council of Turkey) under grant number 119E133. HO acknowledges TUBITAK-BIDEB 2211 scholarship program and thanks Gökçe Uludoğan for her comments on figures. EO thanks Prof. Amedeo Caflisch for hosting her at the University of Zurich during her sabbatical. <<</Acknowledgement>>> <<</Title>>>
{ "references": [ "Yes" ], "type": "boolean" }
1912.07976
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: How much better is performance of the proposed model compared to the state of the art in these various experiments? Context: <<<Title>>> A Multi-task Learning Model for Chinese-oriented Aspect Polarity Classification and Aspect Term Extraction <<<Abstract>>> The aspect-based sentiment analysis (ABSA) task is a multi-grained task of natural language processing and consists of two subtasks: aspect term extraction (ATE) and aspect polarity classification (APC). Most of the existing work focuses on the subtask of inferring aspect term polarity and ignores the significance of aspect term extraction. Moreover, existing research pays little attention to the Chinese-oriented ABSA task. Based on the local context focus (LCF) mechanism, this paper is the first to propose a multi-task learning model for Chinese-oriented aspect-based sentiment analysis, namely LCF-ATEPC. Compared with existing models, this model is capable of extracting aspect terms and inferring aspect term polarities synchronously; moreover, it can analyze both Chinese and English comments simultaneously, and an experiment on a multilingual mixed dataset demonstrated its applicability. By integrating the domain-adapted BERT model, the LCF-ATEPC model achieved state-of-the-art performance on aspect term extraction and aspect polarity classification on four Chinese review datasets. In addition, the experimental results on the most commonly used SemEval-2014 Task 4 Restaurant and Laptop datasets outperform the previous state of the art on the ATE and APC subtasks. <<</Abstract>>> <<<Introduction>>> Aspect-based sentiment analysis BIBREF0, BIBREF1, BIBREF2 (ABSA) is a fine-grained task compared with traditional sentiment analysis, which requires the model to be able to automatically extract the aspects and predict the polarities of all the aspects. For example, given a restaurant review: "The dessert at this restaurant is delicious but the service is poor," a fully designed model for ABSA needs to extract the aspects "dessert" and "service" and correctly reason about their polarity. In this review, the consumers' opinions on "dessert" and "service" are not consistent, with positive and negative sentiment polarity, respectively. Generally, aspects and their polarity need to be manually labeled before running the aspect polarity classification procedure in supervised deep learning models. However, most of the proposed models for aspect-based sentiment analysis tasks only focus on improving the classification accuracy of aspect polarity and ignore research on aspect term extraction. Therefore, when conducting transfer learning on aspect-based sentiment analysis, those models often fall into the dilemma of lacking an aspect extraction method for the targeted task because there is not enough research support. The APC task is a kind of classification problem. Research on the APC task is more abundant than on the ATE task, and a large number of deep learning-based models have been proposed to solve APC problems, such as the models BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8 based on long short-term memory (LSTM) and the methodologies BIBREF9, BIBREF10 based on the transformer BIBREF11.
The purpose of the APC task is to predict the exact sentiment polarity of different aspects in their context, rather than to fuzzily analyze the overall sentiment polarity at the sentence or document level. In the APC task, the polarities are most commonly classified into three categories: positive, negative, and neutral. Sentiment polarity classified at the aspect level can better capture the fine-grained emotional tendency in reviews or tweets, thus providing a more accurate reference for decision-makers. Similar to the named entity recognition (NER) task BIBREF12, the ATE task is a sequence labeling task, which aims to extract aspects from reviews or tweets. In most studies BIBREF13, BIBREF14, BIBREF15, the ATE task is studied independently of the APC task. The ATE task first segments a review into separate tokens and then infers whether each token belongs to any aspect. The tokens may be labeled in different forms in different studies, but most studies have adopted IOB labels to annotate tokens. Aiming to automatically extract aspects from text efficiently and analyze the sentiment polarity of aspects simultaneously, this paper proposes a multi-task learning model for aspect-based sentiment analysis. Multilingual processing is an important research direction of natural language processing. The LCF-ATEPC model proposed in this paper is a novel multilingual and multi-task-oriented model. Apart from achieving state-of-the-art performance on the commonly used SemEval-2014 Task 4 datasets, the experimental results on four Chinese review datasets also validate that this model has a strong ability to expand and adapt to the needs of multilingual tasks. The proposed model is based on multi-head self-attention (MHSA) and integrates the pre-trained BERT BIBREF16 and the local context focus mechanism, namely LCF-ATEPC. By training on a small amount of annotated data of aspects and their polarities, the model can be adapted to a large-scale dataset, automatically extracting the aspects and predicting the sentiment polarities. In this way, the model can discover unknown aspects and avoid the tedious and huge cost of manually annotating all aspects and polarities. This is of great significance for field-specific aspect-based sentiment analysis. The main contributions of this article are as follows: For the first time, this paper studies a multi-task model for the APC and ATE subtasks on multilingual reviews, which provides a new idea for research on Chinese aspect extraction. This paper is the first to apply self-attention and local context focus techniques to the aspect term extraction task, and fully explores their potential in this task. The LCF-ATEPC model proposed in this paper integrates the pre-trained BERT model, significantly improves the performance of both the ATE and APC subtasks, and achieves new state-of-the-art performance, especially on the F1 score of the ATE task. Besides, we adopted a domain-adapted BERT model, trained on a domain-related corpus, in the ABSA joint-task learning model. The experimental results show that the domain-adapted BERT model significantly promotes the performance of the APC task on the three datasets, especially the Restaurant dataset. Finally, we designed and applied dual labels for the input sequence, applicable to the SemEval-2014 and Chinese review datasets of the ABSA joint task: the aspect term label and the sentiment polarity label, respectively (a token-level sketch is given below).
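A token-level sketch of this dual labeling on the running example from the introduction is shown below (the exact tag strings used by LCF-ATEPC may differ; the sketch only illustrates the idea of pairing an IOB-style aspect label with a polarity label for every token):

```python
# Dual labels for "The dessert at this restaurant is delicious but the service is poor".
tokens      = ["The", "dessert", "at", "this", "restaurant", "is", "delicious",
               "but", "the", "service", "is", "poor"]
aspect_tags = ["O",   "B-ASP",   "O",  "O",    "O",          "O",  "O",
               "O",   "O",       "B-ASP", "O", "O"]
polarities  = ["-",   "Positive", "-", "-",    "-",          "-",  "-",
               "-",   "-",       "Negative", "-", "-"]

for token, aspect, polarity in zip(tokens, aspect_tags, polarities):
    print(f"{token:<12}{aspect:<8}{polarity}")
```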
The dual label improves the learning efficiency of the proposed model. <<</Introduction>>> <<<Related Works>>> Most ABSA-oriented methodologies regard the ATE and the APC as independent tasks and focus on one of them. Accordingly, this section introduces the related work on ATE and APC in two parts. <<<Aspect Term Extraction>>> The approaches to the ATE task fall into two categories: the early dictionary-based or rule-based approaches, and methodologies based on machine learning or deep learning. BIBREF17 proposed a new rule-based approach to extracting aspects from product reviews using common sense and sentence dependency trees to detect explicit and implicit aspects. BIBREF18 adopts an unsupervised and domain-independent aspect extraction method that relies on syntactic dependency rules and can select rules automatically. Compared with manually annotating all aspects in the dataset, models for ATE can learn the features of aspects and automatically extract aspects from the text, which greatly saves labor and time. BIBREF19 proposed a model that can extract and cluster aspects simultaneously according to the seed words provided by users for several aspect categories. By classification, synonymous aspects can be grouped into the same category. BIBREF20 proposed the first aspect-oriented deep learning model in opinion mining, which deploys a 7-layer deep convolutional neural network to mark each word in the sentences with opinions as an aspect or non-aspect word. BIBREF21 proposed a new method for aspect term extraction, which utilizes word embeddings to explore the co-occurrence distribution of words and applies an attention mechanism to weaken irrelevant words and further improve the coherence of all aspects. BIBREF22 proposed a deep neural network-based model, namely coupled multilevel attention, which does not require any parser or other pre-processed linguistic resources and provides an end-to-end solution. Besides, the proposed model is a multi-layer attention network, where each layer deploys a pair of attentions. This model allows the aspect terms and opinion terms to be learned interactively and to dually propagate during the training process. For the Chinese-oriented ATE task, a multi-aspect bootstrapping (MAB) method BIBREF23 was proposed to extract the aspects of Chinese restaurant reviews. BIBREF24 introduced machine learning methods to explore and extract aspect terms from Chinese hotel reviews. They chose the optimal feature dimension, feature representation, and maximum entropy (ME) classifier according to the empirical results, and studied the overall effect of aspect extraction. Up to now, MHSA and pre-trained models have not been applied to the ATE task. This paper explores the potential of these new deep learning techniques and network architectures in the ATE task. <<</Aspect Term Extraction>>> <<<Aspect Polarity Classification>>> Aspect polarity classification is another important subtask of ABSA. The approaches designed for the APC task can be categorized into traditional machine learning and recent deep learning methods. The APC task has comprehensively turned to deep neural networks. Therefore, this section mainly introduces approaches based on deep learning techniques. The most commonly applied deep neural network architectures for the APC task are recurrent neural networks (RNNs) BIBREF5, BIBREF6, BIBREF7, BIBREF25, BIBREF26 and convolutional neural networks (CNNs) BIBREF14, BIBREF15, BIBREF27.
TD-LSTM BIBREF5 first divides the context of aspects into left and right parts and models them independently. The attention mechanism BIBREF28 has been adapted to the APC task in the last few years. ATAE-LSTM takes the feature representations of aspects and context words as the input of the model, applies an attention mechanism to dynamically calculate the attention weights according to the relationship between aspects and context words, and finally predicts the polarity of aspects according to the weighted context features. Another LSTM-based model, IAN BIBREF7, is deployed with an attention mechanism and equips two independent LSTM networks to capture the features of the context and the aspect, interactively integrating and learning the inner correlation between the features of the context and the targeted aspects. RAM BIBREF13 is a bi-directional LSTM-based architecture that deploys a multi-layer deep neural network with dedicated memory layers. The multi-layer network utilizes the token features learned via the attention mechanism and GRUs to finally obtain the global semantic features of the text and predict the sentiment polarities of targeted aspects. In order to retard the loss of context features during the training process, TNet BIBREF25 introduced a transformation architecture based on context-preserving transformation (CPT) units. TNet integrates a bidirectional LSTM network and a convolutional neural network and significantly improves the accuracy of sentiment polarity prediction. The multi-grained attention network BIBREF8 (MGAN) is a deep neural network model equipped with a variety of fine-grained attention mechanisms, which it applies to interactively learn the token-level features between aspects and context, making great use of the inherent semantic correlation between aspects and context. BIBREF29 proposed methods for the Chinese-language APC task, which conduct the APC task at the aspect level via three granularities. Two fusion methods for the granularities in the Chinese APC task are introduced and applied. Empirical results show that the proposed methods achieved promising performance on the most commonly used ABSA datasets and four Chinese review datasets. Meanwhile, a joint framework for the aspect sentiment classification subtask and the aspect-opinion pair identification subtask is proposed by BIBREF30, in which external knowledge is considered and injected into the network to alleviate the problem of insufficient training data. The gated alternate neural network (GANN) BIBREF31, proposed for the APC task, aims to solve the shortcomings of traditional RNNs and CNNs. GANN applies the gate truncation RNN (GTR) to learn aspect-dependent sentiment clue representations. BIBREF32 proposed an end-to-end neural network model for the ABSA task based on joint learning, and the experimental results on a Chinese review dataset show that the proposed model works well while conducting the ATE and APC subtasks simultaneously. BERT-SPC is the BERT text pair classification model; it is a variant of BERT adapted to solve the ABSA task in BIBREF9 and achieves high performance. LCF-Bert BIBREF10 proposed a feature-level local context focus mechanism based on self-attention, which can be applied to aspect-level sentiment analysis and many other fine-grained natural language processing tasks. BERT-ADA BIBREF33 shows that although a model pre-trained on a large universal corpus is easy to apply to most tasks and improves their performance, it is not task-specific.
For specific tasks, if the pre-trained BERT is adapted through a fine-tuning process on a task-related corpus, the task performance can be further improved. <<</Aspect Polarity Classification>>> <<</Related Works>>> <<<Methodology>>> Aspect-based sentiment analysis relies on the targeted aspects, yet most existing studies focus on the classification of aspect polarity, leaving the problem of aspect term extraction unaddressed. To propose an effective aspect-based sentiment analysis model based on multi-task learning, we adopt the domain-adapted BERT model from BERT-ADA and integrate the local context focus mechanism into the proposed model. This section introduces the architecture and methodology of LCF-ATEPC, covering the APC module and the ATE module respectively; the contents are organized in the order of the network layer hierarchy. <<<Task Definition>>> <<</Task Definition>>> <<<Model Architecture>>> Aiming at the problem of insufficient research on the aspect term extraction task, a joint deep learning model is designed in this section. This model combines the aspect polarity classification task and the aspect term extraction task, and two independent BERT layers are adopted to model the global context and the local context, respectively. To conduct multi-task training at the same time, the input sequences are tokenized and each token is assigned two kinds of labels: the first label indicates whether the token belongs to an aspect; the second label marks the polarity of the tokens belonging to an aspect. Fig FIGREF18 shows the network architecture of LCF-ATEPC. The local context feature generator (LCFG) unit is on the left and the global context feature generator (GCFG) unit is on the right. Both context feature generator units contain an independent pre-trained BERT layer, $BERT^l$ and $BERT^g$ respectively. The LCFG unit extracts the features of the local context through a local context focus layer and an MHSA encoder. The GCFG unit deploys only one MHSA encoder to learn the global context features. The feature interactive learning (FIL) layer combines the learning of the interaction between local context features and global context features and predicts the sentiment polarity of aspects. The extraction of aspects is based on the features of the global context. <<<BERT-Shared Layer>>> The pre-trained BERT model is designed to improve performance for most NLP tasks, and the LCF-ATEPC model deploys two independent BERT-Shared layers that are aimed at extracting local and global context features. For pre-trained BERT, the fine-tuning learning process is indispensable. Both BERT-Shared layers are regarded as embedding layers, and the fine-tuning process is conducted independently according to the joint loss function of multi-task learning. $X^{l}$ and $X^{g}$ represent the tokenized inputs of the LCFG and the GCFG respectively, from which we obtain the preliminary local and global context features. $O^{l}_{BERT}$ and $O^{g}_{BERT}$ are the output features of the LCFG and the GCFG, respectively, and $BERT^{l}$ and $BERT^{g}$ are the corresponding BERT-shared layers embedded in the LCFG and the GCFG (a minimal sketch of this dual-encoder setup follows). <<</BERT-Shared Layer>>> <<</Model Architecture>>>
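A minimal sketch of the two BERT-shared layers described above, using the Hugging Face transformers library as an assumed implementation choice (the paper does not prescribe one); here $X^{l}$ follows the BERT-BASE input form and $X^{g}$ the BERT-SPC sentence-pair form described later in the Training Details.

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert_l = BertModel.from_pretrained("bert-base-uncased")   # BERT^l inside the LCFG
bert_g = BertModel.from_pretrained("bert-base-uncased")   # BERT^g inside the GCFG

text = "The dessert at this restaurant is delicious but the service is poor"
aspect = "service"
x_l = tokenizer(text, return_tensors="pt")                # "[CLS]" + sequence + "[SEP]"
x_g = tokenizer(text, aspect, return_tensors="pt")        # "[CLS]" + sequence + "[SEP]" + aspect + "[SEP]"

with torch.no_grad():
    o_l_bert = bert_l(**x_l).last_hidden_state            # preliminary local context features O^l_BERT
    o_g_bert = bert_g(**x_g).last_hidden_state            # preliminary global context features O^g_BERT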
<<<Multi-Head Self-Attention>>> Multi-head self-attention is based on multiple scaled-dot attention (SDA), which can be utilized to extract deep semantic features from the context, with the features represented as self-attention scores. The MHSA can avoid the negative influence caused by long-distance dependencies in the context when learning the features. Suppose $X_{SDA}$ is the input feature representation learned by the LCFG. The scaled-dot attention is calculated as $SDA(X_{SDA})=\operatorname{softmax}\left(\frac{Q K^{T}}{\sqrt{d_{k}}}\right) V$, where $Q$, $K$ and $V$ are the abstract matrices packed from the input features of SDA by three weight matrices $W_{q} \in \mathbb {R}^{d_{h} \times d_{q}}$, $W_{k} \in \mathbb {R}^{d_{h} \times d_{k}}$, $W_{v} \in \mathbb {R}^{d_{h} \times d_{v}}$. The MHSA performs multiple scaled-dot attentions in parallel, concatenates the output features, and then transforms the features by multiplying them with a projection matrix $W^{M H}$, i.e., $MHSA(X)=\tanh \left(\left[SDA_{1}(X);\dots ;SDA_{h}(X)\right] W^{M H}\right)$. $h$ represents the number of attention heads and is set to 12. The “;” denotes the feature concatenation of the heads, and $W^{M H} \in \mathbb {R}^{hd_{v} \times d_{h}}$ is the parameter matrix for the projection. Additionally, we apply a $\tanh$ activation function to the MHSA learning process, which significantly enhances the feature-capture capability. <<</Multi-Head Self-Attention>>> <<<Local Context Focus>>> <<<Semantic-Relative Distance>>> The determination of the local context depends on the semantic-relative distance (SRD), which is proposed to determine whether a context word belongs to the local context of a targeted aspect and to help the model capture the local context. Local context is a new concept that can be adapted to most fine-grained NLP tasks. In the ABSA field, existing models generally segment input sequences into aspect sequences and context sequences, treating aspects and context as independent segments and modeling their characteristics separately. Instead of leaving the aspect alone as part of the input, this paper mines the aspect and its local context, because the empirical results show that the local context of the target aspect contains more important information. SRD is a concept based on token-aspect pairs, describing how far a token is from the aspect: it counts the number of tokens between each specific token and a targeted aspect as the SRD of that token-aspect pair. The SRD is calculated as $SRD_{i}=|i-P_{a}|-\lfloor m/2 \rfloor$, where $i$ $(1<i<n)$ is the position of the specific token, $P_{a}$ is the central position of the aspect, $m$ is the length of the targeted aspect, and $SRD_{i}$ represents the SRD between the $i$-th token and the targeted aspect (a small computational sketch follows this subsection). Figure FIGREF30 and Figure FIGREF31 illustrate the two implementations of the local context focus mechanism, the context-feature dynamic mask (CDM) layer and the context-feature dynamic weighting (CDW) layer, respectively. The bottom and top of the figures represent the feature input and output positions (POS) corresponding to each token. The self-attention mechanism treats all tokens equally, so that each token can generate self-attention scores with other tokens through parallel matrix operations. According to the definition of MHSA, the features at the output position corresponding to each token are most closely related to that token itself. After calculating the output of all tokens by the MHSA encoder, the output features of each output position will be masked or attenuated, except that the local context will be retained intact. <<</Semantic-Relative Distance>>>
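The short sketch below computes the SRD of every token with respect to one targeted aspect and collects the positions treated as local context; the function name and the way the aspect's central position is derived are illustrative assumptions consistent with the description above.

import math

def semantic_relative_distance(n_tokens, aspect_start, aspect_len):
    """SRD_i = |i - P_a| - floor(m / 2) for every token position i."""
    p_a = aspect_start + aspect_len // 2                 # assumed central position of the aspect
    return [abs(i - p_a) - math.floor(aspect_len / 2) for i in range(n_tokens)]

alpha = 5                                                # SRD threshold used by default in the experiments
srd = semantic_relative_distance(n_tokens=13, aspect_start=9, aspect_len=1)
local_context = [i for i, d in enumerate(srd) if d <= alpha]   # positions kept intact by CDM/CDW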
<<<Context-features Dynamic Mask>>> Apart from retaining the features of the local context, the CDM layer masks the non-local context features learned by the $BERT^l$ layer. Although it would be easy to directly mask the non-local context words in the input sequence, doing so inevitably discards the features of the non-local context words entirely. With the CDM layer deployed, only a relatively small amount of the semantic context is masked at the corresponding output positions, and the relative representation of context words and aspects with relatively little semantic content is preserved at the corresponding output positions. According to the CDM implementation, the features at all positions of non-local context words are set to zero vectors. In order to avoid an unbalanced distribution of features after the CDM operation, an MHSA encoder is utilized to learn and rebalance the masked local context features. Suppose that $O_{BERT^l}$ is the preliminary output features of $BERT^l$; the local context feature output is then obtained by masking it with a feature masking matrix $M$, where $V_{i}^{m}$ is the mask vector for each token in the input sequence, $\alpha$ is the SRD threshold, and $n$ is the length of the input sequence including the aspect. Tokens whose SRD with respect to the targeted aspect is less than the threshold $\alpha$ form the local context. $E \in \mathbb {R}^{d_{h}}$ represents the ones vector and $O \in \mathbb {R}^{d_{h}}$ is the zeros vector. “$.$” denotes the dot-product operation of the vectors. Finally, the local context features learned by the CDM layer are delivered as $O^{l}$. <<</Context-features Dynamic Mask>>> <<<Context-features Dynamic Weighting>>> Although empirical results show that the CDM achieves excellent performance compared with existing models, we design the CDW to explore the potential of the LCF mechanism. The CDW is another implementation of the LCF mechanism and takes a more modest strategy than the CDM layer, which simply drops the features of the non-local context completely. While the features of the local context are retained intact, the features of the non-local context words are decayed by weights according to their SRD concerning a targeted aspect, where $W$ is the constructed weight matrix and $V_{i}^{w}$ is the weight vector for each non-local context word. Consistently with CDM, $SRD_{i}$ is the SRD between the $i$-th context token and a targeted aspect, $n$ is the length of the input sequence, and $\alpha$ is the SRD threshold. “$.$” denotes the vector dot-product operation, and $O_{C D W}^{l}$ is the output of the CDW layer. The CDM and CDW layers are independent of each other, which means they are alternatives. Both the output features of the CDM and CDW layers are denoted as $O^{l}$. Besides, we also tried concatenating the learned features of the CDM and CDW layers and taking a linear transformation as the features of the local context, where $W^{f}$, $O^{f}$ and $b^{f}$ are the weight matrix, output and bias vector, respectively. The model can choose one of these three approaches to learn the local context features (a tensor-level sketch of CDM and CDW is given below). <<</Context-features Dynamic Weighting>>> <<</Local Context Focus>>>
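The following PyTorch sketch gives one plausible tensor-level reading of the CDM and CDW layers just described; the exact decay formula for CDW is our interpretation of the prose, not code released with the paper.

import torch

def cdm_mask(features, srd, alpha):
    """Context-features dynamic mask: zero the positions outside the local context."""
    keep = (srd <= alpha).float().unsqueeze(-1)          # V_i^m: ones vector inside the threshold, zeros outside
    return features * keep

def cdw_weight(features, srd, alpha):
    """Context-features dynamic weighting: decay non-local positions by their SRD."""
    n = features.size(1)                                 # sequence length
    decay = 1.0 - (srd - alpha).clamp(min=0) / n         # weight 1 inside the threshold, < 1 outside
    return features * decay.unsqueeze(-1)

feats = torch.randn(1, 13, 768)                          # toy batch: 1 sequence, 13 tokens, hidden size 768
srd = torch.tensor([[8., 7., 6., 5., 4., 3., 2., 1., 0., 1., 2., 3., 4.]])
o_l = cdm_mask(feats, srd, alpha=5)                      # or cdw_weight(feats, srd, alpha=5)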
<<<Feature Interactive Learning>>> LCF-ATEPC does not rely only on local context features for sentiment polarity classification, but combines and learns from both the local context features and the global context features to conduct polarity classification. $O^{l}$ and $O^{g}$ are the local context features and global context features, respectively, and $W^{lg} \in \mathbb {R}^{d_{h} \times 2d_{h}}$ and $b^{lg} \in \mathbb {R}^{d_{h}}$ are the corresponding weights and bias vectors. To learn the features of the concatenated vectors, an MHSA encoding process is performed on $O_{dense}^{l g}$. <<</Feature Interactive Learning>>> <<<Aspect Polarity Classifier>>> The aspect polarity classifier performs head-pooling on the learned concatenated context features. Head-pooling extracts the hidden states at the position of the first token in the input sequence; a Softmax operation is then applied to predict the sentiment polarity, where $C$ is the number of sentiment categories and $Y_{polarity}$ represents the polarity predicted by the aspect polarity classifier. <<</Aspect Polarity Classifier>>> <<<Aspect Term Extractor>>> The aspect term extractor first performs a token-level classification for each token. Suppose $T_{i}$ is the feature at the position corresponding to token $T$; then $N$ is the number of token categories and $Y_{term}$ represents the token category inferred by the aspect term extractor. <<</Aspect Term Extractor>>> <<<Training Details>>> The LCFG and the GCFG are based on the BERT-BASE and BERT-SPC models, respectively, and BERT-SPC BIBREF9 significantly improved the performance of APC tasks. In LCF-ATEPC, BERT-SPC only refactors the form of the input sequence compared with the BERT-BASE model. The input sequence of BERT-BASE is formed as “[CLS]” + sequence + “[SEP]”, while it is formed as “[CLS]” + sequence + “[SEP]” + aspect + “[SEP]” for BERT-SPC. Since LCF-ATEPC is a multi-task learning model, we redesigned the form of the data input and adopted dual labels of sentiment polarity and token category. Figure FIGREF55 shows the input samples of the BERT-BASE and BERT-SPC models, respectively. The cross-entropy loss is adopted for the APC and ATE subtasks and $\mathbf {L}_{2}$ regularization is applied in LCF-ATEPC. In the loss function for the APC task, $C$ is the number of polarity categories, $\lambda$ is the $L_{2}$ regularization parameter, and $\Theta$ is the parameter set of LCF-ATEPC. In the loss function for the ATE task, $N$ is the number of token classes and $k$ is the number of tokens in each input sequence. Accordingly, the loss function of LCF-ATEPC combines the losses of the two subtasks (a sketch of this joint objective follows). <<</Training Details>>> <<</Methodology>>>
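A compact sketch of the joint objective described in the Training Details: token-level cross-entropy for ATE, sequence-level cross-entropy for APC, and, as an assumption consistent with the prose, their simple sum as the multi-task loss; the $L_{2}$ term is shown as optimizer weight decay, which is one common way to realize $\lambda \Vert \Theta \Vert^{2}$ in practice.

import torch
import torch.nn.functional as F

def joint_loss(polarity_logits, polarity_gold, token_logits, token_gold):
    """polarity_logits: (B, C); token_logits: (B, T, N); token_gold uses -100 for padding."""
    l_apc = F.cross_entropy(polarity_logits, polarity_gold)
    l_ate = F.cross_entropy(token_logits.reshape(-1, token_logits.size(-1)),
                            token_gold.reshape(-1), ignore_index=-100)
    return l_apc + l_ate                                 # assumed combination of the two subtask losses

# L2 regularization realized as weight decay at the optimizer level (an assumed, typical choice):
# optimizer = torch.optim.Adam(model.parameters(), lr=2e-5, weight_decay=1e-5)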
<<<Experiments>>> <<<Datasets and Hyperparameters Setting>>> To comprehensively evaluate the performance of the proposed model, the experiments were conducted on the three most commonly used ABSA datasets, the Laptops and Restaurant datasets of SemEval-2014 Task4 subtask2 BIBREF0 and an ACL Twitter social dataset BIBREF34. To evaluate our model's capability of processing the Chinese language, we also tested the performance of LCF-ATEPC on four Chinese comment datasets BIBREF35, BIBREF36, BIBREF29 (Car, Phone, Notebook, Camera). We preprocessed the seven datasets: we reformatted the original datasets and annotated each sample with IOB labels for the ATE task and polarity labels for the APC task, respectively. The polarity of each aspect in the Laptops, Restaurant and Twitter datasets may be positive, neutral, or negative, and conflicting polarity labels are not considered. The reviews in the four Chinese datasets have been purged, with each aspect carrying a binary positive or negative polarity. To verify the effectiveness and performance of the LCF-ATEPC models on multilingual data, we built a multilingual dataset by mixing the 7 datasets and adopt it to conduct multilingual-oriented ATE and APC experiments. The table demonstrates the details of these datasets. The sample distribution of these datasets is not balanced. For example, most samples in the Restaurant dataset are positive, while neutral samples account for the majority in the Twitter dataset. Apart from some hyperparameter settings taken from previous research, we also conducted controlled trials and analyzed the experimental results to optimize the hyperparameter settings. The optimal hyperparameters are listed in Table TABREF65. The default SRD setting for all experiments is 5, with additional instructions given for experiments with different SRD values. <<</Datasets and Hyperparameters Setting>>> <<<Compared Methods>>> We compare the LCF-ATEPC model to current state-of-the-art methods. Experimental results show that the proposed model achieves state-of-the-art performance in both the ATE and APC tasks. ATAE-LSTM BIBREF6 is a classical LSTM-based network for the APC task, which applies the attention mechanism to focus on the important words in the context. Besides, ATAE-LSTM appends the aspect embedding to the learned features to make full use of the aspect features. ATAE-LSTM can be adapted to the Chinese review datasets. ATSM-S BIBREF29 is the baseline model of the ATSM variations for the Chinese language-oriented ABSA task. This model learns the sentence and aspect terms at three granularities. GANN is a novel neural network model for the APC task aimed at solving the shortcomings of traditional RNNs and CNNs. GANN applies the Gate Truncation RNN (GTR) to learn informative aspect-dependent sentiment clue representations and obtained state-of-the-art APC performance on the Chinese review datasets. AEN-BERT BIBREF9 is an attentional encoder network based on the pretrained BERT model, which aims to solve aspect polarity classification. BERT-PT BIBREF37 is a BERT-adapted model for the Review Reading Comprehension (RRC) task, a task inspired by machine reading comprehension (MRC), and it can be adapted to the aspect-level sentiment classification task. BERT-BASE BIBREF16 is the basic pretrained BERT model. We adapt it to ABSA multi-task learning, equipping it with the same ability to automatically extract aspect terms and classify aspect polarity as the LCF-ATEPC model. BERT-SPC BIBREF9 is a pretrained BERT model designed for the sentence-pair classification task. Consistent with the basic BERT model, we implemented this model for ABSA multitasking. BERT-ADA BIBREF33 is a domain-adapted BERT-based model proposed for the APC task, which fine-tunes the BERT-BASE model on a task-related corpus. This model obtained state-of-the-art accuracy on the Laptops dataset. LCF-ATEPC is the multi-task learning model for the ATE and APC tasks, which is based on the BERT-SPC model and the local context focus mechanism. LCF-ATE is the variation of the LCF-ATEPC model which is optimized only for the ATE task, and LCF-APC is the variation optimized only for the APC task during the training process. <<</Compared Methods>>> <<<Results Analysis>>> The experiments are conducted in several segments. First, the baseline performance of LCF-ATEPC on all Chinese and English datasets was tested; then the effectiveness of multi-task learning was demonstrated. Finally, the assistance of the domain-adapted BERT model in improving performance was evaluated and the sensitivity of different datasets to SRD was studied. <<<Performance on Chinese Review Datasets>>> Table TABREF70 presents the experimental results of LCF-ATEPC models on the four Chinese review datasets.
<<</Performance on Chinese Review Datasets>>> <<<Performance on SemEval-2014 task4>>> Table TABREF72 lists the main experimental results of the LCF-ATEPC models and compares their performance with other ABSA-oriented models. The LCF-ATEPC models are multilingual-oriented. To demonstrate their ability to simultaneously ingest and analyze reviews in multiple languages, we constructed and experimented with the aforementioned multilingual dataset, and the result on the multilingual mixed dataset illustrates the effectiveness of the LCF-ATEPC models. <<</Performance on SemEval-2014 task4>>> <<</Results Analysis>>> <<<Overall Performance Analysis>>> Many models for ABSA tasks do not take the ATE subtask into account, but there are still some joint models BIBREF38 based on traditional neural network architectures that conduct the APC and ATE tasks simultaneously. Benefiting from the joint training process, the two ABSA subtasks of APC and ATE can promote each other and improve performance. The CDM layer works better on the Twitter dataset because it contains a lot of non-standard grammar usage and language abbreviations, and the local context focus techniques help to infer the polarity of terms. Surprisingly, for the Laptop and Restaurant datasets, guests occasionally hold a unified “global” view in a specific review: if a customer is not satisfied with one aspect, they are likely to criticize the others, and likewise a customer who prefers a restaurant will be tolerant of some small disamenities. Hence, the CDW mechanism performs better on these datasets because it does not completely mask the local context of the other aspects. In the multi-task learning process, the convergence rates of the APC and ATE tasks are different, so the model does not achieve its optimal effect on both at the same time. We build a joint model for the multi-task of ATE and APC based on the BERT-BASE model. After optimizing the model parameters according to the empirical results, the joint model based on BERT-BASE achieved promising performance on all three datasets and even surpassed other proposed BERT-based improved models on some datasets, such as BERT-PT, AEN-BERT, SDGCN-BERT, and so on. Meanwhile, we implement the joint-task model based on BERT-SPC. Compared with the BERT-BASE model, BERT-SPC significantly improves the accuracy and F1 score of aspect polarity classification. In addition, for the first time, BERT-SPC has increased the F1 score of the ATE subtask on three datasets to up to 99%. ATEPC-Fusion is a supplementary scheme of the LCF mechanism that adopts a moderate approach to generating local context features. The experimental results show that its performance is also better than the existing BERT-based models. <<<Effectiveness of Multi-task Learning>>> Keeping the main architecture of the LCF-ATEPC model unchanged, we tried to optimize the parameters for only a single task in the multi-task model, to explore the difference between the optimal performance of a single task and that of the multi-task learning model. Table TABREF76 depicts the performance of the LCF-ATEPC model when performing a single APC or ATE task. Experimental results show that on some datasets the LCF-ATEPC model performs better on the APC or ATE single task than when conducting the ABSA multi-task. In general, the LCF-ATEPC model proposed in this paper is still superior to other ABSA-oriented multi-task models and even to the single-task models aimed at APC or ATE.
When optimizing the model parameters through back-propagation over multiple tasks, the multi-task learning model needs to take into account the multiple loss functions of the different subtasks. So sometimes multi-task learning cannot achieve the best effect that single-task learning does, which is the compromise a multi-task learning model makes when dealing with multiple tasks. <<</Effectiveness of Multi-task Learning>>> <<<Domain-adaption for LCF-ATEPC>>> The BERT-BASE model is trained on a large-scale general corpus, so the fine-tuning process during training is significant and inevitable for BERT-based models. Meanwhile, the commonly benchmarked ABSA datasets are generally small and domain-specific, so the effect of the BERT-BASE model on most ABSA datasets can be further improved through domain adaption. Domain adaption is an effective technique when integrating the pre-trained BERT-BASE model: by further training the BERT-BASE model on a domain-related corpus similar or homologous to the target ABSA dataset, a domain-related pretrained BERT model can be obtained. We adopted the method proposed in BIBREF33 to obtain the domain-adapted pre-trained BERT model based on the corpus of Yelp Dataset Challenge reviews and the Amazon Laptops review dataset BIBREF39. Table TABREF78 shows that the performance of the APC task is significantly improved by the domain-adapted BERT model. The accuracy benchmark on the classical Restaurant dataset reaches more than 90%, which means that LCF-ATEPC is the first ABSA-oriented model to obtain up to 90% accuracy on the Restaurant dataset. In addition, the experimental results on the Laptop dataset also prove the effectiveness of domain adaption and of the domain-adapted BERT model for ABSA multi-task learning. <<</Domain-adaption for LCF-ATEPC>>> <<<SRD Sensitivity on Different Datasets>>> We tested the sensitivity of the SRD threshold on typical Chinese and English ABSA datasets: the Phone dataset and the Restaurant dataset, respectively. Besides, for the evaluation on the Restaurant dataset, we adopted the domain-adapted BERT model as the underlying architecture of the LCF-ATEPC model. The experimental results in Figure FIGREF81 and Figure FIGREF84 are evaluated in the multi-task learning process. For the Chinese Phone dataset, the LCF-ATEPC-CDM model achieves the best APC accuracy and F1 score when the SRD threshold is about 4-5, while the ATE task performance reaches its highest when the SRD threshold is about 1-3. The LCF-ATEPC-CDW model obtains the best APC performance on the Phone dataset when the SRD threshold is 5, while the best ATE F1 score is obtained when the SRD threshold is approximately 7. For the Restaurant dataset, the optimal APC accuracy and F1 score are achieved by LCF-ATEPC-CDM when the SRD threshold is approximately between 4 and 6, while the LCF-ATEPC-CDW model achieves its optimal aspect classification accuracy and F1 score when the SRD threshold is set to 8. However, the F1 score of the ATE task is less sensitive to the SRD threshold, indicating that the aspect polarity classification task provides less assistance to it during the joint learning process. <<</SRD Sensitivity on Different Datasets>>> <<</Overall Performance Analysis>>> <<</Experiments>>> <<<Conclusion>>> The ATE and APC subtasks were treated as independent tasks in previous studies.
Moreover, multi-task learning models for the ATE and APC subtasks have not attracted enough attention from researchers. Besides, research concerning the Chinese language-oriented ABSA task is insufficient, and such models urgently need to be proposed and developed. To address the above problems, this paper proposes LCF-ATEPC, a multi-task learning model for aspect-based sentiment analysis based on the MHSA and LCF mechanisms, and applies pre-trained BERT to the ATE subtask for the first time. The models proposed in this paper are not limited to the Chinese language: they are multilingual and applicable to the classic English review sentiment analysis tasks, such as SemEval-2014 task4. The proposed model can automatically extract aspects from reviews and infer their polarity. Empirical results on three commonly used English datasets and four Chinese review datasets for ABSA tasks show that, compared with all models based on basic BERT, the LCF-ATEPC model achieves state-of-the-art performance on the ATE and APC tasks. <<</Conclusion>>> <<</Title>>>
{ "references": [ "significantly improves the accuracy and F1 score of aspect polarity classification" ], "type": "extractive" }
1912.07976
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What was state of the art on SemEval-2014 task4 Restaurant and Laptop dataset? Context: <<<Title>>> A Multi-task Learning Model for Chinese-oriented Aspect Polarity Classification and Aspect Term Extraction <<<Abstract>>> The aspect-based sentiment analysis (ABSA) task is a multi-grained natural language processing task that consists of two subtasks: aspect term extraction (ATE) and aspect polarity classification (APC). Most of the existing work focuses on the subtask of inferring aspect term polarity and ignores the significance of aspect term extraction. Besides, existing research pays little attention to the Chinese-oriented ABSA task. Based on the local context focus (LCF) mechanism, this paper firstly proposes a multi-task learning model for Chinese-oriented aspect-based sentiment analysis, namely LCF-ATEPC. Compared with existing models, this model is capable of extracting aspect terms and inferring aspect term polarity synchronously; moreover, it is effective at analyzing both Chinese and English comments simultaneously, and the experiment on a multilingual mixed dataset proves its applicability. By integrating the domain-adapted BERT model, the LCF-ATEPC model achieves state-of-the-art performance in aspect term extraction and aspect polarity classification on four Chinese review datasets. Besides, the experimental results on the most commonly used SemEval-2014 task4 Restaurant and Laptop datasets outperform the state of the art on the ATE and APC subtasks. <<</Abstract>>> <<<Introduction>>> Aspect-based sentiment analysis BIBREF0, BIBREF1, BIBREF2 (ABSA) is a fine-grained task compared with traditional sentiment analysis, requiring the model to automatically extract the aspects and predict the polarities of all of them. For example, given a restaurant review: "The dessert at this restaurant is delicious but the service is poor," a fully designed ABSA model needs to extract the aspects "dessert" and "service" and correctly reason about their polarity. In this review, the consumer's opinions on "dessert" and "service" are not consistent, carrying positive and negative sentiment polarity respectively. Generally, aspects and their polarity need to be manually labeled before running the aspect polarity classification procedure in supervised deep learning models. However, most of the proposed models for aspect-based sentiment analysis tasks only focus on improving the classification accuracy of aspect polarity and ignore the research of aspect term extraction. Therefore, when conducting transfer learning on aspect-based sentiment analysis, those models often fall into the dilemma of lacking an aspect extraction method for the targeted task because there is not enough research support. The APC task is a kind of classification problem. Research concerning the APC task is more abundant than that on the ATE task, and a large number of deep learning-based models have been proposed to solve APC problems, such as the models BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8 based on long short-term memory (LSTM) and the methodologies BIBREF9, BIBREF10 based on the transformer BIBREF11. The purpose of the APC task is to predict the exact sentiment polarity of different aspects in their context, rather than to fuzzily analyze the overall sentiment polarity at the sentence or document level.
In the APC task, the polarities are most usually classified into three categories: positive, negative, and neutral. It is obvious that the sentiment polarity classified based on aspects can better mine the fine-grained emotional tendency in reviews or tweets, thus providing a more accurate reference for decision-makers. Similar to the named entity recognition BIBREF12 (NER) task, the ATE task is a sequence labeling task, which aims to extract aspects from the reviews or tweet. In most researches BIBREF13, BIBREF14, BIBREF15, the ATE task is studied independently, away from the APC task. The ATE task first segments a review into separate tokens and then infers whether the tokens belong to any aspect. The tokens may be labeled in different forms in different studies, but most of the studies have adopted the IOB label to annotate tokens. Aiming to automatically extract aspects from the text efficiently and analyze the sentiment polarity of aspects simultaneously, this paper proposes a multi-task learning model for aspect-based sentiment analysis. Multilingual processing is an important research orientation of natural language processing. The LCF-ATEPC model proposed in this paper is a novel multilingual and multi-task-oriented model. Apart from achieving state-of-the-art performance in commonly used SemEval-2014 task4 datasets, the experimental results in four Chinese review datasets also validate that this model has a strong ability to expand and adapt to the needs of multilingual task. The proposed model is based on multi-head self-attention (MHSA) and integrates the pre-trained BERT BIBREF16 and the local context focus mechanism, namely LCF-ATEPC. By training on a small amount of annotated data of aspect and their polarity, the model can be adapted to a large-scale dataset, automatically extracting the aspects and predicting the sentiment polarities. In this way, the model can discover the unknown aspects and avoids the tedious and huge cost of manually annotating all aspects and polarities. It is of great significance for the field-specific aspect-based sentiment analysis. The main contributions of this article are as follows: For the first time, this paper studies the multi-task model of APC subtask and ATE subtask for multilingual reviews, which provides a new idea for the research of Chinese aspect extraction. This paper firstly applies self-attention and local context focus techniques to aspect word extraction task, and fully explore their potential in aspect term extraction task. The LCF-ATEPC model proposed in this paper integrates the pre-trained BERT model, significantly improves both the performance of ATE task and APC subtask, and achieves new state-of-the-art performance especially the F1 score of ATE task. Besides, we adopted the domain-adapted BERT model trained on the domain-related corpus to the ABSA joint-task learning model. The experimental results show that the domain-adapted BERT model significantly promotes the performance of APC tasks on the three datasets, especially the Restaurant dataset. We designed and applied dual labels for the input sequence applicable for the SemEval-2014 and Chinese review datasets of ABSA joint-task, the aspect term label, and the sentiment polarity label, respectively. The dual label improves the learning efficiency of the proposed model. <<</Introduction>>> <<<Related Works>>> Most ABSA-oriented methodologies regard the ATE and the APC as independent tasks and major in one of them. 
Accordingly, this section will introduce the related works of ATE and APC in two parts. <<<Aspect Term Extraction>>> The approaches to ATE tasks are classified into two categories: the early dictionary-based or rule-based approaches, and methodologies based on machine-learning or deep learning. BIBREF17 proposed a new rule-based approach to extracting aspects from product reviews using common sense and sentence dependency trees to detect explicit and implicit aspects. BIBREF18 adopts an unsupervised and domain-independent aspect extraction method that relies on syntactic dependency rules and can selects rules automatically. Compared with manually annotating all aspects in the dataset, the models for ATE can learn the features of aspects and automatically extract aspects in the text, which greatly saves labor and time. BIBREF19 proposed a model that can extract and cluster aspects simultaneously according to the seed words provided by users for several aspect categories. By classification, synonymous aspects can be grouped into the same category. BIBREF20 proposed the first aspect-oriented deep learning model in opinion mining, which deploys a 7-layer deep convolutional neural network to mark each word in the sentences with opinions as an aspect or non-aspect word. BIBREF21 proposed a new method for aspect term extraction, which utilizes word embedding to explore the co-occurrence distribution of words and applies the attention mechanism to weaken the irrelevant words and further improves the coherence of all aspects. BIBREF22 proposed a deep neural network-based model namely coupled multilevel attention, which does not require any parser or other linguistic resources to be pre-processed and provides an end-to-end solution. Besides, the proposed model is a multi-layer attention network, where each layer deploys a pair of attentions. This model allows the aspect terms and opinion terms learned interactively and dual propagate during the training process. For the Chinese-oriented ATE task, a multi-aspect bootstrapping (MAB) method BIBREF23 is proposed to extract the aspects of Chinese restaurant reviews. BIBREF24 introduced machine learning methods to explore and extract aspect terms from Chinese hotel reviews. they chose the optimal feature-dimension, feature representation, and maximum entropy (ME) classifier according to the empirical results, and studied the integral effect of aspect extraction. Up to now, the MHSA and pre-trained model has not been applied in the ATE task. This paper explores the potential of the new techniques of deep learning and new network architecture in the ATE task. <<</Aspect Term Extraction>>> <<<Aspect Polarity Classification>>> Aspect polarity classification is another important subtask of ABSA. The approaches designed for the APC task can be categorized into traditional machine learning and recent deep learning methods.The APC task has been comprehensively turned to the the deep neural networks. Therefore, this section mainly introduces approaches based on deep learning techniques. The most commonly applied deep neural network architectures for APC task are recurrent neural networks BIBREF5, BIBREF6, BIBREF7, BIBREF25, BIBREF26 (RNNs) and convolutional neural networks (CNNs) BIBREF14, BIBREF15, BIBREF27. TD-LSTM BIBREF5 first divides the context of aspects into the left and right parts and modeling for them independently. Attention mechanism BIBREF28 has been adapted to APC task in the last few years. 
ATAE-LSTM takes the feature representation of aspects and context words as the input of the model and applies an attention mechanism to dynamically calculate the attention weight according to the relationship between aspects and context words, and finally predicts the polarity of aspects according to the weighted context features. Another LSTM-based model IAN BIBREF7 deployed with attention mechanism equips two independent LSTM networks to capture the features of the context and aspect, with interactively integrating and learning the inner correlation of the features of context and targeted aspects. The RAM BIBREF13 is a bi-directional LSTM-based architecture deploys a multi-layer deep neural network with dedicated memory layers. The multi-layer network utilizes the token features learned based on the attention mechanism and GRUs to finally obtain the global semantic features of the text to predict the sentiment polarities of targeted aspects. In order to retard the loss of context features during the training process, TNet BIBREF25 introduced a conventional transformation architecture based on context-preserving transformation (CPT) units. TNet integrates the bidirectional LSTM network and convolutional neural network and significantly improves the accuracy of sentiment polarity prediction. Multi-grained attention network BIBREF8 (MGAN) is a new deep neural network model, which equips with a variety of fine-grained attention mechanisms, and applies the fine-grained attention mechanisms to interactively learn the token-level features between aspects and context, making great use of the inherent semantic correlation of aspects and context. BIBREF29 proposed the methods for the Chinese language APC task, which conducted the APC task at the aspect level via three granularities. Two fusion methods for the granularities in the Chinese APC task are introduced and applied. Empirical results show that the proposed methods achieved promising performance on the most commonly used ABSA datasets and four Chinese review datasets. Meanwhile, a joint framework aimed to aspect sentiment classification subtask and aspect-opinion pair identification subtask is proposedby BIBREF30, in which the external knowledge are considered and put into the network to alleviate the problem of insufficient train data. The gated alternate neural network (GANN) BIBREF31 proposed for APC task aimed to solve the shortcomings of traditional RNNs and CNNs. The GANN applied the gate truncation RNN (GTR) to learn the aspect-dependent sentiment clue representations. BIBREF32 proposed an end-to-end neural network model for the ABSA task based on joint learning, and the experimental results on a Chinese review show that the proposed model works fine while conducting ATE and APC subtask simultaneously. BERT-SPC is the BERT text pair classification model, it is a variation model of Bert and is adapted to solve the ABSA task in BIBREF9 and achieve high performance. LCF-Bert BIBREF10 proposed a feature-level local context focus mechanism based on self-attention, which can be applied to aspect level emotion analysis and many other fine-grained natural language processing tasks. BERT-ADA BIBREF33 shows that although the pre-trained model based on a large universal corpus, and is easy to be applied to most tasks and improve performance. Still, it is not task-specific. 
For specific tasks, if the pre-trained BERT is adapted to specific tasks through the fine-tuning process on a task-related corpus, the task performance can be further improved. <<</Aspect Polarity Classification>>> <<</Related Works>>> <<<Methodology>>> Aspect-based sentiment analysis relies on the targeted aspects, and most existing studies focus on the classification of aspect polarity, leaving the problem of aspect term extraction. To propose an effective aspect-based sentiment analysis model based on multi-task learning, we adopted domain-adapted BERT model from BERT-ADA and integrated the local context focus mechanism into the proposed model. This section introduces the architecture and methodology of LCF-ATEPC. This section introduces the methodology of the APC module and the ATE module, respectively. and the contents are organized by order of the network layer hierarchy. <<<Task Definition>>> <<</Task Definition>>> <<<Model Architecture>>> Aiming at the problem of insufficient research on aspect term extraction task, a joint deep learning model is designed in this section. This model combines aspect polarity classification task and aspect term extraction task, and two independent Bert layers are adopted to model the global context and the local context respectively. For conducting multi-task training at the same time, the input sequences are tokenized into different tokens and the each token is assigned two kinds of label. The first label indicates whether the token belongs to an aspect; the second label marks the polarity of the tokens belongs to the aspect. Fig FIGREF18 is the network architecture of LCF-ATEPC. Local context feature generator (LCFG) unit is on the left and a global context feature generator (GCFG) unit is on the right. Both context feature generator units contain an independent pre-trained BERT layer, $BERT^l$ and $BERT^g$ respectively. The LCFG unit extracts the features of the local context by a local context focus layer and a MHSA encoder. The GCFG unit deploys only one MHSA encoder to learn the global context feature. The feature interactive learning (FIL) layer combines the learning of the interaction between local context features and global context features and predicts the sentiment polarity of aspects. The extraction of aspects based on the features of the global context. <<<BERT-Shared Layer>>> The pre-trained BERT model is designed to improve performance for most NLP tasks, and The LCF-ATEPC model deploys two independent BERT-Shared layers that are aimed to extract local and global context features. For pre-trained BERT, the fine-tuning learning process is indispensable. Both BERT-Shared layers are regarded as embedded layers, and the fine-tuning process is conducted independently according to the joint loss function of multi-task learning. $X^{l}$ and $X^{g}$ are used to represent the tokenized inputs of LCFG and GCFG respectively, and we can obtain the preliminary outputs of local and global context features. $O^{l}_{BERT}$ and $O^{g}_{BERT}$ are the output features of the LCFG and the GCFG, respectively. $BERT^{l}$ and $BERT^{g}$ are the corresponding BERT-shared layer embedded in the LCFG and the GCFG respectively. <<</BERT-Shared Layer>>> <<</Model Architecture>>> <<<Multi-Head Self-Attention>>> Multi-head self-attention is based on multiple scale-dot attention (SDA), which can be utilized to extract deep semantic features in the context, and the features are represented in self-attention score. 
The MHSA can avoids the negative influence caused by the long distance dependence of the context when learning the features. Suppose $X_{SDA}$ is the input features learned by the LCFG. The scale-dot attention is calculate as follows: $Q$, $K$ and $V$ are the abstract matrices packed from the input features of SDA by three weight matrices $W_{q} \in \mathbb {R}^{d_{h} \times d_{q}}$, $W_{k} \in \mathbb {R}^{d_{h} \times d_{k}}$, $W_{v} \in \mathbb {R}^{d_{h} \times d_{v}}$. The MHSA performs multiple scaled-dot attention in parallel and concatenate the output features, then transform the features by multiplying a vector $W^{M H}$. $h$ represents the number of the attention heads and equal to 12. The “;” means feature concatenation of each head. $W^{M H} \in \mathbb {R}^{hd_{v} \times d_{h}}$ is the parameter matrices for projection . Additionally, we apply a $\tanh $ activation function for the MHSA learning process, which significantly enhanced feature-capture capability. <<</Multi-Head Self-Attention>>> <<<Local Context Focus>>> <<<Semantic-Relative Distance>>> The determination of local context depends on semantic-relative distance (SRD), which is proposed to determine whether the context word belongs to the local context of a targeted aspect to help the model capture the local context. Local context is a new concept that can be adapted to most fine-grained NLP tasks. In the ABSA field, existing models generally segment input sequences into aspect sequences and context sequences, treat aspects and context as independent segments and model their characteristics separately. Instead of leaving the aspect alone as part of the input, this paper mines the aspect and its local context, because the empirical result shows the local context of the target aspect contains more important information. SRD is a concept based on token-aspect pairs, describing how far a token is from the aspect. It counts the number of tokens between each specific token towards a targeted aspect as the SRD of all token-aspect pairs. The SRD is calculated as: where $i$ $(1<i<n)$ is the position of the specific token, $P_{a}$ is the central position of aspect. $m$ is the length of targeted aspect, and $SRD_{i}$ represents for the SRD between the $ i $-th token and the targeted aspect. Figure FIGREF30 and Figure FIGREF31 are two implementations of the local context focus mechanism, the context-feature dynamic mask (CDM) layer and context-feature dynamic weighting (CDW) layer, respectively. The bottom and top of the figures represent the feature input and output positions (POS) corresponding to each token. The self-attention mechanism treats all tokens equally, so that each token can generate the self-attention score with other tokens through parallel matrix operation. According to the definition of MHSA, the features of the output position corresponding to each token are more closely related to itself. After calculating the output of all tokens by MHSA encoder, the output features of each output position will be masked or attenuated, except that the local context will be retained intact. <<</Semantic-Relative Distance>>> <<<Context-features Dynamic Mask>>> Apart from to the features of the local context, the CDM layer will mask non-local context's features learned by the $BERT^l$ layer. Although it is easy to directly mask the non-local context words in the input sequence, it is inevitable to discard the features of non-local context words. 
With the CDM layer deployed, only a relatively small amount of the semantic context is masked at the corresponding output positions, and the relative representation of context words and aspects with relatively little semantic content is preserved at the corresponding output positions. According to the CDM implementation, the features at all positions of non-local context words are set to zero vectors. In order to avoid an unbalanced distribution of features after the CDM operation, an MHSA encoder is utilized to learn and rebalance the masked local context features. Suppose that $O_{BERT^l}$ is the preliminary output features of $BERT^l$; the local context feature output is then obtained by masking it with a feature masking matrix $M$, where $V_{i}^{m}$ is the mask vector for each token in the input sequence, $\alpha$ is the SRD threshold, and $n$ is the length of the input sequence including the aspect. Tokens whose SRD with respect to the targeted aspect is less than the threshold $\alpha$ form the local context. $E \in \mathbb {R}^{d_{h}}$ represents the ones vector and $O \in \mathbb {R}^{d_{h}}$ is the zeros vector. “$.$” denotes the dot-product operation of the vectors. Finally, the local context features learned by the CDM layer are delivered as $O^{l}$. <<</Context-features Dynamic Mask>>> <<<Context-features Dynamic Weighting>>> Although empirical results show that the CDM achieves excellent performance compared with existing models, we design the CDW to explore the potential of the LCF mechanism. The CDW is another implementation of the LCF mechanism and takes a more modest strategy than the CDM layer, which simply drops the features of the non-local context completely. While the features of the local context are retained intact, the features of the non-local context words are decayed by weights according to their SRD concerning a targeted aspect, where $W$ is the constructed weight matrix and $V_{i}^{w}$ is the weight vector for each non-local context word. Consistently with CDM, $SRD_{i}$ is the SRD between the $i$-th context token and a targeted aspect, $n$ is the length of the input sequence, and $\alpha$ is the SRD threshold. “$.$” denotes the vector dot-product operation, and $O_{C D W}^{l}$ is the output of the CDW layer. The CDM and CDW layers are independent of each other, which means they are alternatives. Both the output features of the CDM and CDW layers are denoted as $O^{l}$. Besides, we also tried concatenating the learned features of the CDM and CDW layers and taking a linear transformation as the features of the local context, where $W^{f}$, $O^{f}$ and $b^{f}$ are the weight matrix, output and bias vector, respectively. The model can choose one of these three approaches to learn the local context features. <<</Context-features Dynamic Weighting>>> <<</Local Context Focus>>> <<<Feature Interactive Learning>>> LCF-ATEPC does not rely only on local context features for sentiment polarity classification, but combines and learns from both the local context features and the global context features to conduct polarity classification. $O^{l}$ and $O^{g}$ are the local context features and global context features, respectively, and $W^{lg} \in \mathbb {R}^{d_{h} \times 2d_{h}}$ and $b^{lg} \in \mathbb {R}^{d_{h}}$ are the corresponding weights and bias vectors. To learn the features of the concatenated vectors, an MHSA encoding process is performed on $O_{dense}^{l g}$.
<<</Feature Interactive Learning>>> <<<Aspect Polarity Classifier>>> The aspect polarity classifier performs head-pooling on the learned concatenated context features. Head-pooling extracts the hidden states at the position of the first token in the input sequence; a Softmax operation is then applied to predict the sentiment polarity, where $C$ is the number of sentiment categories and $Y_{polarity}$ represents the polarity predicted by the aspect polarity classifier. <<</Aspect Polarity Classifier>>> <<<Aspect Term Extractor>>> The aspect term extractor first performs a token-level classification for each token. Suppose $T_{i}$ is the feature at the position corresponding to token $T$; then $N$ is the number of token categories and $Y_{term}$ represents the token category inferred by the aspect term extractor. <<</Aspect Term Extractor>>> <<<Training Details>>> The LCFG and the GCFG are based on the BERT-BASE and BERT-SPC models, respectively, and BERT-SPC BIBREF9 significantly improved the performance of APC tasks. In LCF-ATEPC, BERT-SPC only refactors the form of the input sequence compared with the BERT-BASE model. The input sequence of BERT-BASE is formed as “[CLS]” + sequence + “[SEP]”, while it is formed as “[CLS]” + sequence + “[SEP]” + aspect + “[SEP]” for BERT-SPC. Since LCF-ATEPC is a multi-task learning model, we redesigned the form of the data input and adopted dual labels of sentiment polarity and token category. Figure FIGREF55 shows the input samples of the BERT-BASE and BERT-SPC models, respectively. The cross-entropy loss is adopted for the APC and ATE subtasks and $\mathbf {L}_{2}$ regularization is applied in LCF-ATEPC. In the loss function for the APC task, $C$ is the number of polarity categories, $\lambda$ is the $L_{2}$ regularization parameter, and $\Theta$ is the parameter set of LCF-ATEPC. In the loss function for the ATE task, $N$ is the number of token classes and $k$ is the number of tokens in each input sequence. Accordingly, the loss function of LCF-ATEPC combines the losses of the two subtasks. <<</Training Details>>> <<</Methodology>>> <<<Experiments>>> <<<Datasets and Hyperparameters Setting>>> To comprehensively evaluate the performance of the proposed model, the experiments were conducted on the three most commonly used ABSA datasets, the Laptops and Restaurant datasets of SemEval-2014 Task4 subtask2 BIBREF0 and an ACL Twitter social dataset BIBREF34. To evaluate our model's capability of processing the Chinese language, we also tested the performance of LCF-ATEPC on four Chinese comment datasets BIBREF35, BIBREF36, BIBREF29 (Car, Phone, Notebook, Camera). We preprocessed the seven datasets: we reformatted the original datasets and annotated each sample with IOB labels for the ATE task and polarity labels for the APC task, respectively. The polarity of each aspect in the Laptops, Restaurant and Twitter datasets may be positive, neutral, or negative, and conflicting polarity labels are not considered. The reviews in the four Chinese datasets have been purged, with each aspect carrying a binary positive or negative polarity. To verify the effectiveness and performance of the LCF-ATEPC models on multilingual data, we built a multilingual dataset by mixing the 7 datasets and adopt it to conduct multilingual-oriented ATE and APC experiments. The table demonstrates the details of these datasets. The sample distribution of these datasets is not balanced.
For example, most samples in the restaurant dataset are positive, while the neutral samples in the Twitter dataset account for the majority. Apart from some hyperparameters setting referred to previous researches, we also conducted the controlled trials and analyzed the experimental results to optimize the hyperparameters setting. The superior hyperparameters are listed in Table TABREF65. The default SRD setting for all experiments is 5, with additional instructions for experiments with different SRD. <<</Datasets and Hyperparameters Setting>>> <<<Compared Methods>>> We compare the LCF-ATEPC model to current state-of-the-art methods. Experimental results show that the proposed model achieves state-of-the-art performance both in the ATE and APC tasks. ATAE-LSTM BIBREF6 is a classical LSTM-based network for the APC task, which applies the attention mechanism to focus on the important words in the context. Besides, ATAE-LSTM appends aspect embedding and the learned features to make full use of the aspect features. The ATAE-LSTM can be adapted to the Chinese review datasets. ATSM-S BIBREF29 is a baseline model of the ATSM variations for Chinese language-oriented ABSA task. This model learns the sentence and aspect terms at three perspectives of granularity. GANN is novel neural network model for APC task aimed to solve the shortcomings of traditional RNNs and CNNs. The GANN applied the Gate Truncation RNN (GTR) to learn informative aspect-dependent sentiment clue representations. GANN obtained the state-of-the-art APC performance on the Chinese review datasets. AEN-BERT BIBREF9 is an attentional encoder network based on the pretrained BERT model, which aims to solve the aspect polarity classification. BERT-PT BIBREF37 is a BERT-adapted model for Review Reading Comprehension (RRC) task, a task inspired by machine reading comprehension (MRC), it could be adapted to aspect-level sentiment classification task. BERT-BASE BIBREF16 is the basic pretrained BERT model. We adapt it to ABSA multi-task learning, which equips the same ability to automatically extract aspect terms and classify aspects polarity as LCF-ATEPC model. BERT-SPC BIBREF9 is a pretrained BERT model designed for the sentence-pair classification task. Consistent with the basic BERT model, we implemented this model for ABSA multitasking. BERT-ADA BIBREF33 is a domain-adapted BERT-based model proposed for the APC task, which fine-tuned the BERT-BASE model on task-related corpus. This model obtained state-of-the-art accuracy on the Laptops dataset. LCF-ATEPC is the multi-task learning model for the ATE and APC tasks, which is based on the the BERT-SPC model and local context focus mechanism. LCF-ATE are the variations of the LCF-ATEPC model which only optimize for the ATE task. LCF-APC are the variations of LCF-ATEPC and it only optimize for the APC task during training process. <<</Compared Methods>>> <<<Results Analysis>>> The experiments are conducted in several segments. First, the baseline performance of LCF-ATEPC on all Chinese and English data sets was tested, and then the effectiveness of multi-task learning was demonstrated. Finally, the assistance of domain-adapted BERT model in improving performance was evaluated and the sensitivity of different datasets to SRD was studied. <<<Performance on Chinese Review Datasets>>> Table TABREF70 are the experimental results of LCF-ATEPC models on four Chinese review datasets. 
<<</Performance on Chinese Review Datasets>>> <<<Performance on SemEval-2014 task4>>> Table TABREF72 lists the main experimental results of LCF-ATEPC models to compare the performance with other ABSA-oriented models. The LCF-ATEPC models are multilingual-oriented. To demonstrate its ability to simultaneously input and analyze reviews in multiple languages, we constructed and experimented with a multilingual dataset fore-mentioned. And result on the multilingual mixed dataset illustrates the effectiveness of the LCF-ATEPC models. <<</Performance on SemEval-2014 task4>>> <<</Results Analysis>>> <<<Overall Performance Analysis>>> Many models for ABSA tasks do not take into account the ATE subtask, but there are still some joint models BIBREF38 based on the traditional neural network architecture to conduct the APC and ATE tasks simultaneously. Benefit from the joint training process, the two ABSA subtasks of APC and ATE can promote each other and improve the performance. The CDM layer works better on twitter dataset because there are a lot of non-standard grammar usage and language abbreviations within it, and the local context focus techniques can promote to infer the polarity of terms. Surprisingly, for the Laptop and Restaurant datasets, guests occasionally have a unified “global” view in a specific review. That is, if the customer is not satisfied with one aspect, it is likely to criticize the other. Things will be the same if a customer prefers a restaurant he would be tolerant of some small disamenity, so the CDW mechanism performs better because it does not completely mask the local context of the other aspect. In the multi-task learning process, the convergence rate of APC and ATE tasks is different, so the model does not achieve the optimal effect at the same time. We build a joint model for the multi-task of ATE and APC based on the BERT-BASE model. After optimizing the model parameters according to the empirical result, the joint model based on BERT-BASE achieved hopeful performance on all three datasets and even surpassed other proposed BERT based improved models on some datasets, such as BERT-PT, AEN-BERT, SDGCN-BERT, and so on. Meanwhile, we implement the joint-task model based on BERT-SPC. Compared with the BERT-BASE model, BERT-SPC significantly improves the accuracy and F1 score of aspect polarity classification. In addition, for the first time, BERT-SPC has increased the F1 score of ATE subtask on three datasets up to 99%. ATEPC-Fusion is a supplementary scheme of LCF mechanism, and it adopts a moderate approach to generate local context features. The experimental results show that its performance is also better than the existing BERT-based models. <<<Effectiveness of Multi-task Learning>>> Keeping the main architecture of the LCF-ATEPC model unchanged, we tried to only optimize parameters for a single task in the multi-task model to explore the difference between the optimal performance of a single task and the multi-task learning model . The Figure TABREF76 depicts the performance of the LCF-ATEPC model when performing an single APC or ATE task. Experimental results show that on some datasets the LCF-ATEPC model performs better concerning APC or ATE single task than conducting ABSA multi-task on some datasets. In general, the proposed model LCF-ATEPC proposed in this paper is still superior to other ABSA-oriented multi-task models and even the single-task models aim to APC or ATE. 
When optimizing the model parameters through back-propagation for multiple tasks, the multi-task learning model must take into account the loss functions of the different subtasks. Consequently, multi-task learning sometimes cannot reach the best effect that single-task learning achieves, which is the compromise a multi-task model makes when handling multiple tasks. <<</Effectiveness of Multi-task Learning>>> <<<Domain-adaption for LCF-ATEPC>>> The BERT-BASE model is trained on a large-scale general corpus, so fine-tuning during the training process is significant and inevitable for BERT-based models. Meanwhile, the commonly benchmarked ABSA datasets are generally small and domain-specific, so the effect of the BERT-BASE model on most ABSA datasets can be further improved through domain adaption. Domain adaption is an effective technique when integrating the pre-trained BERT-BASE model: by further training BERT-BASE on a domain-related corpus similar or homologous to the target ABSA dataset, a domain-related pretrained BERT model can be obtained. We adopted the method proposed in BIBREF33 to obtain the domain-adapted pre-trained BERT model, based on the corpus of Yelp Dataset Challenge reviews and the Amazon Laptops review dataset BIBREF39. Table TABREF78 shows that the performance of the APC task is significantly improved by the domain-adapted BERT model. The accuracy on the classical Restaurant benchmark exceeds 90%, which means that LCF-ATEPC is the first ABSA-oriented model to reach 90% accuracy on the Restaurant dataset. In addition, the experimental results on the Laptop dataset also validate the effectiveness of the domain-adapted BERT model for ABSA multi-task learning. <<</Domain-adaption for LCF-ATEPC>>> <<<SRD Sensitivity on Different Datasets>>> We tested the sensitivity of the SRD threshold on a typical Chinese and a typical English ABSA dataset: the Phone dataset and the Restaurant dataset, respectively. For the evaluation on the Restaurant dataset, we adopted the domain-adapted BERT model as the underlying architecture of the LCF-ATEPC model. The experimental results in Figure FIGREF81 and Figure FIGREF84 are evaluated in the multi-task learning process. For the Chinese Phone dataset, the LCF-ATEPC-CDM model achieves the best APC accuracy and F1 score when the SRD threshold is about 4-5, while the best ATE performance is reached when the SRD threshold is about 1-3. The LCF-ATEPC-CDW model obtains the best APC performance on the Phone dataset when the SRD threshold is 5, while its best ATE F1 score is obtained when the SRD threshold is approximately 7. For the Restaurant dataset, the optimal APC accuracy and F1 score are achieved by LCF-ATEPC-CDM when the SRD threshold is approximately between 4 and 6, whereas LCF-ATEPC-CDW achieves its optimal aspect classification accuracy and F1 score when the SRD threshold is set to 8. However, the F1 score of the ATE task is less sensitive to the SRD threshold, indicating that the aspect polarity classification task provides less assistance to it during the joint learning process. <<</SRD Sensitivity on Different Datasets>>> <<</Overall Performance Analysis>>> <<</Experiments>>> <<<Conclusion>>> The ATE and APC subtasks were treated as independent tasks in previous studies.
Moreover, multi-task learning for the ATE and APC subtasks has not attracted enough attention from researchers. Besides, research on the Chinese language-oriented ABSA task is insufficient, and such models urgently need to be proposed and developed. To address these problems, this paper proposes LCF-ATEPC, a multi-task learning model for aspect-based sentiment analysis based on the MHSA and LCF mechanisms, and applies the pre-trained BERT to the ATE subtask for the first time. The proposed models are not restricted to the Chinese language: they are multilingual and applicable to the classic English review sentiment analysis tasks, such as SemEval-2014 Task 4. The proposed model can automatically extract aspects from reviews and infer their polarity. Empirical results on three commonly used English datasets and four Chinese review datasets for ABSA tasks show that, compared with all models based on basic BERT, the LCF-ATEPC model achieves state-of-the-art performance on the ATE and APC tasks. <<</Conclusion>>> <<</Title>>>
{ "references": [ "BERT-ADA,BERT-PT, AEN-BERT, SDGCN-BERT" ], "type": "extractive" }
1912.07976
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What was previous state-of-the-art on four Chinese reviews datasets? Context: <<<Title>>> A Multi-task Learning Model for Chinese-oriented Aspect Polarity Classification and Aspect Term Extraction <<<Abstract>>> Aspect-based sentiment analysis (ABSA) task is a multi-grained task of natural language processing and consists of two subtasks: aspect term extraction (ATE) and aspect polarity classification (APC). Most of the existing work focuses on the subtask of aspect term polarity inferring and ignores the significance of aspect term extraction. Besides, the existing researches do not pay attention to the research of the Chinese-oriented ABSA task. Based on the local context focus (LCF) mechanism, this paper firstly proposes a multi-task learning model for Chinese-oriented aspect-based sentiment analysis, namely LCF-ATEPC. Compared with existing models, this model equips the capability of extracting aspect term and inferring aspect term polarity synchronously, moreover, this model is effective to analyze both Chinese and English comments simultaneously and the experiment on a multilingual mixed dataset proved its availability. By integrating the domain-adapted BERT model, the LCF-ATEPC model achieved the state-of-the-art performance of aspect term extraction and aspect polarity classification in four Chinese review datasets. Besides, the experimental results on the most commonly used SemEval-2014 task4 Restaurant and Laptop datasets outperform the state-of-the-art performance on the ATE and APC subtask. <<</Abstract>>> <<<Introduction>>> Aspect-based sentiment analysis BIBREF0, BIBREF1, BIBREF2 (ABSA) is a fine-grained task compared with traditional sentiment analysis, which requires the model to be able to automatic extract the aspects and predict the polarities of all the aspects. For example, given a restaurant review: "The dessert at this restaurant is delicious but the service is poor," the full-designed model for ABSA needs to extract the aspects "dessert" and "service" and correctly reason about their polarity. In this review, the consumers' opinions on "dessert" and "service" are not consistent, with positive and negative sentiment polarity respectively. Generally, aspects and their polarity need to be manually labeled before running the aspect polarity classification procedure in the supervised deep learning models. However, most of the proposed models for aspect-based sentiment analysis tasks only focus on improving the classification accuracy of aspect polarity and ignore the research of aspect term extraction. Therefore, when conducting transfer learning on aspect-based sentiment analysis, those proposed models often fall into the dilemma of lacking aspect extraction method on targeted tasks because there is not enough research support. The APC task is a kind of classification problem. The researches concerning APC tasks is more abundant than the ATE task, and a large number of deep learning-based models have been proposed to solve APC problems, such as the models BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8 based on long short-term memory (LSTM) and the methodologies BIBREF9, BIBREF10 based on transformer BIBREF11. The purpose of the APC task is to predict the exact sentiment polarity of different aspects in their context, rather than to fuzzily analyze the overall sentiment polarity on the sentence-level or document-level. 
In the APC task, the polarities are most usually classified into three categories: positive, negative, and neutral. It is obvious that the sentiment polarity classified based on aspects can better mine the fine-grained emotional tendency in reviews or tweets, thus providing a more accurate reference for decision-makers. Similar to the named entity recognition BIBREF12 (NER) task, the ATE task is a sequence labeling task, which aims to extract aspects from the reviews or tweet. In most researches BIBREF13, BIBREF14, BIBREF15, the ATE task is studied independently, away from the APC task. The ATE task first segments a review into separate tokens and then infers whether the tokens belong to any aspect. The tokens may be labeled in different forms in different studies, but most of the studies have adopted the IOB label to annotate tokens. Aiming to automatically extract aspects from the text efficiently and analyze the sentiment polarity of aspects simultaneously, this paper proposes a multi-task learning model for aspect-based sentiment analysis. Multilingual processing is an important research orientation of natural language processing. The LCF-ATEPC model proposed in this paper is a novel multilingual and multi-task-oriented model. Apart from achieving state-of-the-art performance in commonly used SemEval-2014 task4 datasets, the experimental results in four Chinese review datasets also validate that this model has a strong ability to expand and adapt to the needs of multilingual task. The proposed model is based on multi-head self-attention (MHSA) and integrates the pre-trained BERT BIBREF16 and the local context focus mechanism, namely LCF-ATEPC. By training on a small amount of annotated data of aspect and their polarity, the model can be adapted to a large-scale dataset, automatically extracting the aspects and predicting the sentiment polarities. In this way, the model can discover the unknown aspects and avoids the tedious and huge cost of manually annotating all aspects and polarities. It is of great significance for the field-specific aspect-based sentiment analysis. The main contributions of this article are as follows: For the first time, this paper studies the multi-task model of APC subtask and ATE subtask for multilingual reviews, which provides a new idea for the research of Chinese aspect extraction. This paper firstly applies self-attention and local context focus techniques to aspect word extraction task, and fully explore their potential in aspect term extraction task. The LCF-ATEPC model proposed in this paper integrates the pre-trained BERT model, significantly improves both the performance of ATE task and APC subtask, and achieves new state-of-the-art performance especially the F1 score of ATE task. Besides, we adopted the domain-adapted BERT model trained on the domain-related corpus to the ABSA joint-task learning model. The experimental results show that the domain-adapted BERT model significantly promotes the performance of APC tasks on the three datasets, especially the Restaurant dataset. We designed and applied dual labels for the input sequence applicable for the SemEval-2014 and Chinese review datasets of ABSA joint-task, the aspect term label, and the sentiment polarity label, respectively. The dual label improves the learning efficiency of the proposed model. <<</Introduction>>> <<<Related Works>>> Most ABSA-oriented methodologies regard the ATE and the APC as independent tasks and major in one of them. 
Accordingly, this section will introduce the related works of ATE and APC in two parts. <<<Aspect Term Extraction>>> The approaches to ATE tasks are classified into two categories: the early dictionary-based or rule-based approaches, and methodologies based on machine-learning or deep learning. BIBREF17 proposed a new rule-based approach to extracting aspects from product reviews using common sense and sentence dependency trees to detect explicit and implicit aspects. BIBREF18 adopts an unsupervised and domain-independent aspect extraction method that relies on syntactic dependency rules and can selects rules automatically. Compared with manually annotating all aspects in the dataset, the models for ATE can learn the features of aspects and automatically extract aspects in the text, which greatly saves labor and time. BIBREF19 proposed a model that can extract and cluster aspects simultaneously according to the seed words provided by users for several aspect categories. By classification, synonymous aspects can be grouped into the same category. BIBREF20 proposed the first aspect-oriented deep learning model in opinion mining, which deploys a 7-layer deep convolutional neural network to mark each word in the sentences with opinions as an aspect or non-aspect word. BIBREF21 proposed a new method for aspect term extraction, which utilizes word embedding to explore the co-occurrence distribution of words and applies the attention mechanism to weaken the irrelevant words and further improves the coherence of all aspects. BIBREF22 proposed a deep neural network-based model namely coupled multilevel attention, which does not require any parser or other linguistic resources to be pre-processed and provides an end-to-end solution. Besides, the proposed model is a multi-layer attention network, where each layer deploys a pair of attentions. This model allows the aspect terms and opinion terms learned interactively and dual propagate during the training process. For the Chinese-oriented ATE task, a multi-aspect bootstrapping (MAB) method BIBREF23 is proposed to extract the aspects of Chinese restaurant reviews. BIBREF24 introduced machine learning methods to explore and extract aspect terms from Chinese hotel reviews. they chose the optimal feature-dimension, feature representation, and maximum entropy (ME) classifier according to the empirical results, and studied the integral effect of aspect extraction. Up to now, the MHSA and pre-trained model has not been applied in the ATE task. This paper explores the potential of the new techniques of deep learning and new network architecture in the ATE task. <<</Aspect Term Extraction>>> <<<Aspect Polarity Classification>>> Aspect polarity classification is another important subtask of ABSA. The approaches designed for the APC task can be categorized into traditional machine learning and recent deep learning methods.The APC task has been comprehensively turned to the the deep neural networks. Therefore, this section mainly introduces approaches based on deep learning techniques. The most commonly applied deep neural network architectures for APC task are recurrent neural networks BIBREF5, BIBREF6, BIBREF7, BIBREF25, BIBREF26 (RNNs) and convolutional neural networks (CNNs) BIBREF14, BIBREF15, BIBREF27. TD-LSTM BIBREF5 first divides the context of aspects into the left and right parts and modeling for them independently. Attention mechanism BIBREF28 has been adapted to APC task in the last few years. 
ATAE-LSTM takes the feature representation of aspects and context words as the input of the model and applies an attention mechanism to dynamically calculate the attention weight according to the relationship between aspects and context words, and finally predicts the polarity of aspects according to the weighted context features. Another LSTM-based model IAN BIBREF7 deployed with attention mechanism equips two independent LSTM networks to capture the features of the context and aspect, with interactively integrating and learning the inner correlation of the features of context and targeted aspects. The RAM BIBREF13 is a bi-directional LSTM-based architecture deploys a multi-layer deep neural network with dedicated memory layers. The multi-layer network utilizes the token features learned based on the attention mechanism and GRUs to finally obtain the global semantic features of the text to predict the sentiment polarities of targeted aspects. In order to retard the loss of context features during the training process, TNet BIBREF25 introduced a conventional transformation architecture based on context-preserving transformation (CPT) units. TNet integrates the bidirectional LSTM network and convolutional neural network and significantly improves the accuracy of sentiment polarity prediction. Multi-grained attention network BIBREF8 (MGAN) is a new deep neural network model, which equips with a variety of fine-grained attention mechanisms, and applies the fine-grained attention mechanisms to interactively learn the token-level features between aspects and context, making great use of the inherent semantic correlation of aspects and context. BIBREF29 proposed the methods for the Chinese language APC task, which conducted the APC task at the aspect level via three granularities. Two fusion methods for the granularities in the Chinese APC task are introduced and applied. Empirical results show that the proposed methods achieved promising performance on the most commonly used ABSA datasets and four Chinese review datasets. Meanwhile, a joint framework aimed to aspect sentiment classification subtask and aspect-opinion pair identification subtask is proposedby BIBREF30, in which the external knowledge are considered and put into the network to alleviate the problem of insufficient train data. The gated alternate neural network (GANN) BIBREF31 proposed for APC task aimed to solve the shortcomings of traditional RNNs and CNNs. The GANN applied the gate truncation RNN (GTR) to learn the aspect-dependent sentiment clue representations. BIBREF32 proposed an end-to-end neural network model for the ABSA task based on joint learning, and the experimental results on a Chinese review show that the proposed model works fine while conducting ATE and APC subtask simultaneously. BERT-SPC is the BERT text pair classification model, it is a variation model of Bert and is adapted to solve the ABSA task in BIBREF9 and achieve high performance. LCF-Bert BIBREF10 proposed a feature-level local context focus mechanism based on self-attention, which can be applied to aspect level emotion analysis and many other fine-grained natural language processing tasks. BERT-ADA BIBREF33 shows that although the pre-trained model based on a large universal corpus, and is easy to be applied to most tasks and improve performance. Still, it is not task-specific. 
For specific tasks, if the pre-trained BERT is adapted to specific tasks through the fine-tuning process on a task-related corpus, the task performance can be further improved. <<</Aspect Polarity Classification>>> <<</Related Works>>> <<<Methodology>>> Aspect-based sentiment analysis relies on the targeted aspects, and most existing studies focus on the classification of aspect polarity, leaving the problem of aspect term extraction. To propose an effective aspect-based sentiment analysis model based on multi-task learning, we adopted domain-adapted BERT model from BERT-ADA and integrated the local context focus mechanism into the proposed model. This section introduces the architecture and methodology of LCF-ATEPC. This section introduces the methodology of the APC module and the ATE module, respectively. and the contents are organized by order of the network layer hierarchy. <<<Task Definition>>> <<</Task Definition>>> <<<Model Architecture>>> Aiming at the problem of insufficient research on aspect term extraction task, a joint deep learning model is designed in this section. This model combines aspect polarity classification task and aspect term extraction task, and two independent Bert layers are adopted to model the global context and the local context respectively. For conducting multi-task training at the same time, the input sequences are tokenized into different tokens and the each token is assigned two kinds of label. The first label indicates whether the token belongs to an aspect; the second label marks the polarity of the tokens belongs to the aspect. Fig FIGREF18 is the network architecture of LCF-ATEPC. Local context feature generator (LCFG) unit is on the left and a global context feature generator (GCFG) unit is on the right. Both context feature generator units contain an independent pre-trained BERT layer, $BERT^l$ and $BERT^g$ respectively. The LCFG unit extracts the features of the local context by a local context focus layer and a MHSA encoder. The GCFG unit deploys only one MHSA encoder to learn the global context feature. The feature interactive learning (FIL) layer combines the learning of the interaction between local context features and global context features and predicts the sentiment polarity of aspects. The extraction of aspects based on the features of the global context. <<<BERT-Shared Layer>>> The pre-trained BERT model is designed to improve performance for most NLP tasks, and The LCF-ATEPC model deploys two independent BERT-Shared layers that are aimed to extract local and global context features. For pre-trained BERT, the fine-tuning learning process is indispensable. Both BERT-Shared layers are regarded as embedded layers, and the fine-tuning process is conducted independently according to the joint loss function of multi-task learning. $X^{l}$ and $X^{g}$ are used to represent the tokenized inputs of LCFG and GCFG respectively, and we can obtain the preliminary outputs of local and global context features. $O^{l}_{BERT}$ and $O^{g}_{BERT}$ are the output features of the LCFG and the GCFG, respectively. $BERT^{l}$ and $BERT^{g}$ are the corresponding BERT-shared layer embedded in the LCFG and the GCFG respectively. <<</BERT-Shared Layer>>> <<</Model Architecture>>> <<<Multi-Head Self-Attention>>> Multi-head self-attention is based on multiple scale-dot attention (SDA), which can be utilized to extract deep semantic features in the context, and the features are represented in self-attention score. 
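Before turning to the MHSA encoder, a toy illustration of the dual-label input scheme just described may help. The tag names, the polarity indices, and the placeholder value for non-aspect tokens are assumptions made for illustration only, not the paper's released annotation format.

```python
# Hypothetical dual-label encoding for the review
# "The dessert at this restaurant is delicious but the service is poor."
tokens          = ["the", "dessert", "at", "this", "restaurant", "is",
                   "delicious", "but", "the", "service", "is", "poor"]
aspect_labels   = ["O", "B-ASP", "O", "O", "O", "O",
                   "O", "O", "O", "B-ASP", "O", "O"]   # IOB aspect-term tags (ATE)
polarity_labels = [-1, 1, -1, -1, -1, -1,
                   -1, -1, -1, 0, -1, -1]              # 1=positive, 0=negative, -1=not an aspect (APC)
```

Each token thus carries an IOB aspect-term label for the ATE head and a polarity label, meaningful only on aspect tokens, for the APC head.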
The MHSA avoids the negative influence of long-distance dependencies in the context when learning features. Suppose $X_{SDA}$ is the input feature representation learned by the LCFG. The scaled-dot attention is calculated as $SDA(Q, K, V) = \operatorname{softmax}\left(\frac{Q K^{\top }}{\sqrt{d_{k}}}\right) V$, where $Q$, $K$ and $V$ are the abstract matrices packed from the input features of the SDA by three weight matrices $W_{q} \in \mathbb {R}^{d_{h} \times d_{q}}$, $W_{k} \in \mathbb {R}^{d_{h} \times d_{k}}$, $W_{v} \in \mathbb {R}^{d_{h} \times d_{v}}$. The MHSA performs multiple scaled-dot attentions in parallel, concatenates their output features, and then transforms the features by multiplying by the matrix $W^{MH}$. Here $h$ denotes the number of attention heads and is set to 12, “;” denotes the concatenation of the features of each head, and $W^{MH} \in \mathbb {R}^{hd_{v} \times d_{h}}$ is the projection matrix. Additionally, we apply a $\tanh $ activation function to the MHSA output, which significantly enhances the feature-capture capability. <<</Multi-Head Self-Attention>>> <<<Local Context Focus>>> <<<Semantic-Relative Distance>>> The determination of the local context depends on the semantic-relative distance (SRD), which is proposed to decide whether a context word belongs to the local context of a targeted aspect and thus helps the model capture the local context. Local context is a new concept that can be adapted to most fine-grained NLP tasks. In the ABSA field, existing models generally segment input sequences into aspect sequences and context sequences, treating aspects and context as independent segments and modeling their characteristics separately. Instead of leaving the aspect alone as part of the input, this paper mines the aspect together with its local context, because empirical results show that the local context of the targeted aspect contains the more important information. SRD is defined on token-aspect pairs and describes how far a token is from the aspect: it counts the number of tokens between each specific token and a targeted aspect. The SRD is calculated as $SRD_{i} = |i - P_{a}| - \lfloor m/2 \rfloor$, where $i$ $(1<i<n)$ is the position of the specific token, $P_{a}$ is the central position of the aspect, $m$ is the length of the targeted aspect, and $SRD_{i}$ denotes the SRD between the $i$-th token and the targeted aspect. Figure FIGREF30 and Figure FIGREF31 illustrate the two implementations of the local context focus mechanism, the context-feature dynamic mask (CDM) layer and the context-feature dynamic weighting (CDW) layer, respectively. The bottom and top of the figures represent the feature input and output positions (POS) corresponding to each token. The self-attention mechanism treats all tokens equally, so each token obtains self-attention scores with all other tokens through parallel matrix operations. According to the definition of MHSA, the features at the output position of each token are most closely related to that token itself. After the MHSA encoder computes the outputs of all tokens, the output features at each position are masked or attenuated, except that the local context is retained intact. <<</Semantic-Relative Distance>>> <<<Context-features Dynamic Mask>>> Apart from retaining the features of the local context, the CDM layer masks the non-local context features learned by the $BERT^l$ layer. Although it would be easy to directly mask the non-local context words in the input sequence, doing so would inevitably discard the features of those words.
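As a minimal sketch of the tanh-activated MHSA encoder described above (the discussion of the CDM layer continues below), the module here uses standard per-head projections; the class name, the layer layout, and the default sizes are assumptions rather than the authors' implementation.

```python
import math
import torch
import torch.nn as nn

class TanhMHSA(nn.Module):
    """Sketch of a multi-head self-attention encoder with a tanh activation
    on the projected output, as described in the surrounding text."""
    def __init__(self, d_h=768, heads=12):
        super().__init__()
        assert d_h % heads == 0
        self.heads, self.d_k = heads, d_h // heads
        self.w_q = nn.Linear(d_h, d_h)
        self.w_k = nn.Linear(d_h, d_h)
        self.w_v = nn.Linear(d_h, d_h)
        self.w_mh = nn.Linear(d_h, d_h)   # projection over the concatenated heads

    def forward(self, x):                 # x: [batch, seq_len, d_h]
        b, n, _ = x.shape
        def split(t):                     # -> [batch, heads, seq_len, d_k]
            return t.view(b, n, self.heads, self.d_k).transpose(1, 2)
        q, k, v = split(self.w_q(x)), split(self.w_k(x)), split(self.w_v(x))
        # scaled dot-product attention per head
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_k)
        out = torch.softmax(scores, dim=-1) @ v
        out = out.transpose(1, 2).reshape(b, n, -1)   # concatenate the heads
        return torch.tanh(self.w_mh(out))             # tanh on the MHSA output
```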
As the CDM layer is deployed, only a relatively small amount of the semantic context itself is masked at the corresponding output positions, while the relational representation between context words and aspects, which carries comparatively little semantics, is preserved at those positions. In the CDM implementation, the features at all positions of non-local context words are set to zero vectors. To avoid an unbalanced feature distribution after the CDM operation, an MHSA encoder is utilized to learn and rebalance the masked local context features. Suppose $O_{BERT^l}$ is the preliminary output feature of $BERT^l$; to mask the features of the non-local context, we define a feature masking matrix $M$ whose columns $V_{i}^{m}$ are the mask vectors for each token in the input sequence, with $V_{i}^{m} = E$ if $SRD_{i} < \alpha $ and $V_{i}^{m} = O$ otherwise, where $\alpha $ is the SRD threshold and $n$ is the length of the input sequence including the aspect. Tokens whose SRD with respect to the targeted aspect is less than the threshold $\alpha $ constitute the local context. $E \in \mathbb {R}^{d_{h}}$ is the ones vector, $O \in \mathbb {R}^{d_{h}}$ is the zeros vector, and “$.$” denotes the dot-product operation of the vectors; the masked features are obtained by the position-wise dot-product of $O_{BERT^l}$ and $M$. Finally, the local context features learned by the CDM layer are delivered as $O^{l}$. <<</Context-features Dynamic Mask>>> <<<Context-features Dynamic Weighting>>> Although empirical results show that the CDM achieves excellent performance compared with existing models, we design the CDW to further explore the potential of the LCF mechanism. The CDW is another implementation of the LCF mechanism and takes a more moderate strategy than the CDM layer, which simply drops the features of the non-local context completely. While the features of the local context are retained intact, the features of the non-local context words are decayed by weights determined by their SRD with respect to the targeted aspect. Here $W$ is the constructed weight matrix and $V_{i}^{w}$ is the weight vector for each non-local context word. Consistent with the CDM, $SRD_{i}$ is the SRD between the $i$-th context token and the targeted aspect, $n$ is the length of the input sequence, $\alpha $ is the SRD threshold, and “$.$” denotes the vector dot-product operation. $O_{CDW}^{l}$ is the output of the CDW layer. The CDM and CDW layers are independent and therefore alternative; the output features of either layer are denoted as $O^{l}$. Besides, we also tried concatenating the learned features of the CDM and CDW layers and applying a linear transformation to obtain the local context features, where $W^{f}$, $O^{f}$ and $b^{f}$ are the weight matrix, the fused output, and the bias vector, respectively. The model can adopt any one of these three approaches to learn the local context features. <<</Context-features Dynamic Weighting>>> <<</Local Context Focus>>> <<<Feature Interactive Learning>>> LCF-ATEPC does not rely only on the local context features for sentiment polarity classification; it combines and learns from both the local context features and the global context features. $O^{l}$ and $O^{g}$ are the local and global context features, respectively, and $W^{lg} \in \mathbb {R}^{d_{h} \times 2d_{h}}$ and $b^{lg} \in \mathbb {R}^{d_{h}}$ are the weight matrix and bias vector of the linear layer that fuses them into $O_{dense}^{lg}$. To learn the features of the concatenated vectors, an MHSA encoding process is performed on $O_{dense}^{lg}$.
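The SRD, CDM, and CDW computations described in the preceding subsections can be sketched as follows. Because the exact decay schedule of the CDW layer is not fully specified here, the linear decay below is an assumption, as are all function and variable names.

```python
import torch

def srd(seq_len, aspect_center, aspect_len):
    """Semantic-relative distance of every position to the targeted aspect."""
    positions = torch.arange(seq_len, dtype=torch.float)
    return (positions - aspect_center).abs() - aspect_len // 2

def cdm_mask(srd_vec, alpha, d_h):
    """Context-feature dynamic mask: ones vectors for local context, zeros otherwise."""
    keep = (srd_vec < alpha).float()                   # [seq_len]
    return keep.unsqueeze(-1).expand(-1, d_h)          # [seq_len, d_h]

def cdw_weights(srd_vec, alpha, d_h):
    """Context-feature dynamic weighting: local context kept intact, the rest
    decayed with distance (a plausible linear decay, assumed here)."""
    n = srd_vec.numel()
    decay = (n - (srd_vec - alpha)) / n
    w = torch.where(srd_vec < alpha, torch.ones_like(srd_vec), decay)
    return w.clamp(min=0.0).unsqueeze(-1).expand(-1, d_h)

# Usage on the preliminary BERT^l features O_BERT_l: [seq_len, d_h]
seq_len, d_h, alpha = 16, 768, 5
O_BERT_l = torch.randn(seq_len, d_h)
d = srd(seq_len, aspect_center=7, aspect_len=2)
O_cdm = O_BERT_l * cdm_mask(d, alpha, d_h)             # masked local-context features
O_cdw = O_BERT_l * cdw_weights(d, alpha, d_h)          # distance-weighted features
```

Either output would then play the role of $O^{l}$ and be concatenated with the global features $O^{g}$ for the linear fusion and MHSA encoding described in this section.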
<<</Feature Interactive Learning>>> <<<Aspect Polarity Classifier>>> Aspect polarity classifier performs a head-pooling on the learned concatenated context features. Head-pooling is to extract the hidden states on the corresponding position of the first token in the input sequence. then a Softmax operation is applied to predict the sentiment polarity. where $C$ is the number of sentiment categories, and $Y_{polarity}$ represents the polarity predicted by aspect polarity classifier. <<</Aspect Polarity Classifier>>> <<<Aspect Term Extractor>>> Aspect term extractor first performs the token-level classification for each token, suppose $T_{i}$ is the features on the corresponding position of token $T$, where $N$ is the number of token categories, and $Y_{term}$ represents the token category inferred by aspect polarity classifier. <<</Aspect Term Extractor>>> <<<Training Details>>> The LCFG and the GCFG are based on the BERT-BASE and BERT-SPC models, respectively. And the BERT-SPC BIBREF9 significantly improved the performance of APC tasks. In LCF-ATEPC, BERT-SPC only refactored the input sequence form compared with BERT-BASE model. The input sequence of BERT-BASE is formed in “[CLS]” + sequence + “[SEP]”, while it is formed in “[CLS]” + sequence + “[SEP]” + aspect + “[SEP]” for BERT-SPC. Since LCF-ATEPC is a multi-task learning model, we redesigned the form of data input and adopted dual labels of sentiment polarity and token category. The Figure FIGREF55 are the input samples of BERT-BASE and BERT-SPC model, respectively. The cross-entropy loss is adopted for APC and ATE subtask and the $\mathbf {L}_{2}$ regularization is applied in LCF-ATEPC, here is the loss function for APC task, where $C$ is the number of polarity categories, $\lambda $ is the $L_{2}$ regularization parameter, and $\Theta $ is the parameter-set of the LCF-ATEPC. The loss function for ATE task is where $N$ is the number of token classes and $k$ is the sum of the tokens in each input sequence. Accordingly, the loss function of LCF-ATEPC is as follows: <<</Training Details>>> <<</Methodology>>> <<<Experiments>>> <<<Datasets and Hyperparameters Setting>>> To comprehensive evaluate the performance of the proposed model, the experiments were conducted in three most commonly used ABSA datasets, the Laptops and Restaurant datasets of SemEval-2014 Task4 subtask2 BIBREF0 and an ACL Twitter social dataset BIBREF34. To evaluate our model capability with processing the Chinese language, we also tested the performance of LCF-ATEPC on four Chinese comment datasets BIBREF35, BIBREF36, BIBREF29 (Car, Phone, Notebook, Camera). We preprocessed the seven datasets. We reformatted the origin dataset and annotated each sample with the IOB labels for ATE task and polarity labels for APC tasks, respectively. The polarity of each aspect on the Laptops, Restaurants and datasets may be positive, neutral, and negative, and the conflicting labels of polarity are not considered. The reviews in the four Chinese datasets have been purged, with each aspect may be positive or negative binary polarity. To verify the effectiveness and performance of LCF-ATEPC models on multilingual datasets, we built a multilingual dataset by mixing the 7 datasets. We adopt this dataset to conduct multilingual-oriented ATE and APC experiments. The table demonstrates the details of these datasets. The samples distribution of those datasets is not balanced. 
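A minimal sketch of the two prediction heads described at the start of this passage (the head-pooling aspect polarity classifier and the token-level aspect term extractor); the hidden size, label counts, and class name are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

class ABSAHeads(nn.Module):
    """Sketch of the two output heads: head-pooling polarity classification over
    the fused (local + global) features and token-level IOB classification over
    the global context features."""
    def __init__(self, d_h=768, num_polarities=3, num_token_labels=3):
        super().__init__()
        self.polarity_head = nn.Linear(d_h, num_polarities)   # C sentiment categories
        self.token_head = nn.Linear(d_h, num_token_labels)    # N token categories

    def forward(self, fused_features, global_features):
        # Head-pooling: hidden state at the first ([CLS]) position of the sequence
        pooled = fused_features[:, 0, :]                 # [batch, d_h]
        polarity_logits = self.polarity_head(pooled)     # [batch, C]
        # Token-level classification for aspect term extraction
        token_logits = self.token_head(global_features)  # [batch, seq_len, N]
        # softmax / argmax would be applied at prediction time
        return polarity_logits, token_logits
```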
For example, most samples in the restaurant dataset are positive, while the neutral samples in the Twitter dataset account for the majority. Apart from some hyperparameters setting referred to previous researches, we also conducted the controlled trials and analyzed the experimental results to optimize the hyperparameters setting. The superior hyperparameters are listed in Table TABREF65. The default SRD setting for all experiments is 5, with additional instructions for experiments with different SRD. <<</Datasets and Hyperparameters Setting>>> <<<Compared Methods>>> We compare the LCF-ATEPC model to current state-of-the-art methods. Experimental results show that the proposed model achieves state-of-the-art performance both in the ATE and APC tasks. ATAE-LSTM BIBREF6 is a classical LSTM-based network for the APC task, which applies the attention mechanism to focus on the important words in the context. Besides, ATAE-LSTM appends aspect embedding and the learned features to make full use of the aspect features. The ATAE-LSTM can be adapted to the Chinese review datasets. ATSM-S BIBREF29 is a baseline model of the ATSM variations for Chinese language-oriented ABSA task. This model learns the sentence and aspect terms at three perspectives of granularity. GANN is novel neural network model for APC task aimed to solve the shortcomings of traditional RNNs and CNNs. The GANN applied the Gate Truncation RNN (GTR) to learn informative aspect-dependent sentiment clue representations. GANN obtained the state-of-the-art APC performance on the Chinese review datasets. AEN-BERT BIBREF9 is an attentional encoder network based on the pretrained BERT model, which aims to solve the aspect polarity classification. BERT-PT BIBREF37 is a BERT-adapted model for Review Reading Comprehension (RRC) task, a task inspired by machine reading comprehension (MRC), it could be adapted to aspect-level sentiment classification task. BERT-BASE BIBREF16 is the basic pretrained BERT model. We adapt it to ABSA multi-task learning, which equips the same ability to automatically extract aspect terms and classify aspects polarity as LCF-ATEPC model. BERT-SPC BIBREF9 is a pretrained BERT model designed for the sentence-pair classification task. Consistent with the basic BERT model, we implemented this model for ABSA multitasking. BERT-ADA BIBREF33 is a domain-adapted BERT-based model proposed for the APC task, which fine-tuned the BERT-BASE model on task-related corpus. This model obtained state-of-the-art accuracy on the Laptops dataset. LCF-ATEPC is the multi-task learning model for the ATE and APC tasks, which is based on the the BERT-SPC model and local context focus mechanism. LCF-ATE are the variations of the LCF-ATEPC model which only optimize for the ATE task. LCF-APC are the variations of LCF-ATEPC and it only optimize for the APC task during training process. <<</Compared Methods>>> <<<Results Analysis>>> The experiments are conducted in several segments. First, the baseline performance of LCF-ATEPC on all Chinese and English data sets was tested, and then the effectiveness of multi-task learning was demonstrated. Finally, the assistance of domain-adapted BERT model in improving performance was evaluated and the sensitivity of different datasets to SRD was studied. <<<Performance on Chinese Review Datasets>>> Table TABREF70 are the experimental results of LCF-ATEPC models on four Chinese review datasets. 
<<</Performance on Chinese Review Datasets>>> <<<Performance on SemEval-2014 task4>>> Table TABREF72 lists the main experimental results of LCF-ATEPC models to compare the performance with other ABSA-oriented models. The LCF-ATEPC models are multilingual-oriented. To demonstrate its ability to simultaneously input and analyze reviews in multiple languages, we constructed and experimented with a multilingual dataset fore-mentioned. And result on the multilingual mixed dataset illustrates the effectiveness of the LCF-ATEPC models. <<</Performance on SemEval-2014 task4>>> <<</Results Analysis>>> <<<Overall Performance Analysis>>> Many models for ABSA tasks do not take into account the ATE subtask, but there are still some joint models BIBREF38 based on the traditional neural network architecture to conduct the APC and ATE tasks simultaneously. Benefit from the joint training process, the two ABSA subtasks of APC and ATE can promote each other and improve the performance. The CDM layer works better on twitter dataset because there are a lot of non-standard grammar usage and language abbreviations within it, and the local context focus techniques can promote to infer the polarity of terms. Surprisingly, for the Laptop and Restaurant datasets, guests occasionally have a unified “global” view in a specific review. That is, if the customer is not satisfied with one aspect, it is likely to criticize the other. Things will be the same if a customer prefers a restaurant he would be tolerant of some small disamenity, so the CDW mechanism performs better because it does not completely mask the local context of the other aspect. In the multi-task learning process, the convergence rate of APC and ATE tasks is different, so the model does not achieve the optimal effect at the same time. We build a joint model for the multi-task of ATE and APC based on the BERT-BASE model. After optimizing the model parameters according to the empirical result, the joint model based on BERT-BASE achieved hopeful performance on all three datasets and even surpassed other proposed BERT based improved models on some datasets, such as BERT-PT, AEN-BERT, SDGCN-BERT, and so on. Meanwhile, we implement the joint-task model based on BERT-SPC. Compared with the BERT-BASE model, BERT-SPC significantly improves the accuracy and F1 score of aspect polarity classification. In addition, for the first time, BERT-SPC has increased the F1 score of ATE subtask on three datasets up to 99%. ATEPC-Fusion is a supplementary scheme of LCF mechanism, and it adopts a moderate approach to generate local context features. The experimental results show that its performance is also better than the existing BERT-based models. <<<Effectiveness of Multi-task Learning>>> Keeping the main architecture of the LCF-ATEPC model unchanged, we tried to only optimize parameters for a single task in the multi-task model to explore the difference between the optimal performance of a single task and the multi-task learning model . The Figure TABREF76 depicts the performance of the LCF-ATEPC model when performing an single APC or ATE task. Experimental results show that on some datasets the LCF-ATEPC model performs better concerning APC or ATE single task than conducting ABSA multi-task on some datasets. In general, the proposed model LCF-ATEPC proposed in this paper is still superior to other ABSA-oriented multi-task models and even the single-task models aim to APC or ATE. 
When optimizing the model parameters for through back-propagation of multiple tasks, the multi-task learning model needs to take into account multiple loss functions of the different subtasks. So sometimes the multi-task learning cannot achieve as the best effect as single-task learning does, which is also the compromise of the multi-task learning model when dealing with multiple tasks. <<</Effectiveness of Multi-task Learning>>> <<<Domain-adaption for LCF-ATEPC>>> The BERT-BASE model is trained on a large-scale general corpus, so the fine-tuning during process during training process is significant and inevitable for BERT-based models. Meanwhile, the ABSA datasets commonly benchmarked are generally small with the domain-specific characteristic, the effect of BERT-BASE model on the most ABSA datasets can be further improved through domain-adaption. Domain adaption is a effective technique while integrating the pre-trained BERT-BASE model. By further training the BERT-BASE model in a domain-related corpus similar to or homologous to the target ABSA dataset, then domain-related pretrained BERT model can be obtained. We adopted the method proposed in BIBREF33 to obtain the domain-adapted pre-trained BERT model based on the corpus of Yelp Dataset Challenge reviews and the amazon Laptops review datasetBIBREF39. Table TABREF78 shows that the performance of APC task significantly improved by domain-adapted BERT model. The accuracy benchmark in the classical Restaurant achieving more than 90%, which means that the LCF-ATEPC is the first ABSA-oriented model obtained up to 90% accuracy on the Restaurant dataset. In addition, experimental result on the Laptop dataset also prove the effectiveness of domain-adaption in multi-task learning. Besides, the experimental results on the laptop dataset also validate the effectiveness of domain-adapted BERT model for ABSA multi-task learning. <<</Domain-adaption for LCF-ATEPC>>> <<<SRD Sensitivity on Different Datasets>>> We tested the sensitivity of SRD threshold on the typical Chinese and English ABSA datasets: the Phone dataset and The Restaurant dataset, respectively. Besides, for the evaluation of the restaurant dataset, we adopted the domain-adapted BERT model as the underlying architecture of the LCF-ATEPC model. The experimental result of Figure FIGREF81, FIGREF84 are evaluated in multi-task learning process. For the Chinese Phone dataset, the LCF-ATEPC-CDM model can achieve the best APC accuracy and F1 score when the SRD threshold is about 4-5, while the best ATE task performance reaches the highest when the SRD threshold is about 1-3. The LCF-ATEPC-CDW model obtains the best APC performance on the Phone dataset when the SRD threshold is 5, while the best ATE F1 score is approximately obtained when the SRD threshold is 7. For the Restaurant dataset, the optimal APC accuracy and F1 score achieved by LCF-ATEPC-CDM while the SRD threshold is approximately between 4 and 6. While the SRD threshold for the LCF-ATEPC-CDW is set to 8, the model achieves the optimal aspect classification accuracy and F1 score. However, the F1 score of the ATE task is less sensitive to the SRD threshold, indicating that aspect polarity classification task has less assistance on it during the joint learning process. <<</SRD Sensitivity on Different Datasets>>> <<</Overall Performance Analysis>>> <<</Experiments>>> <<<Conclusion>>> The ATE and APC subtasks were treated as independent tasks in previous studies. 
Moreover, multi-task learning for the ATE and APC subtasks has not attracted enough attention from researchers. Besides, research on the Chinese language-oriented ABSA task is insufficient, and such models urgently need to be proposed and developed. To address these problems, this paper proposes LCF-ATEPC, a multi-task learning model for aspect-based sentiment analysis based on the MHSA and LCF mechanisms, and applies the pre-trained BERT to the ATE subtask for the first time. The proposed models are not restricted to the Chinese language: they are multilingual and applicable to the classic English review sentiment analysis tasks, such as SemEval-2014 Task 4. The proposed model can automatically extract aspects from reviews and infer their polarity. Empirical results on three commonly used English datasets and four Chinese review datasets for ABSA tasks show that, compared with all models based on basic BERT, the LCF-ATEPC model achieves state-of-the-art performance on the ATE and APC tasks. <<</Conclusion>>> <<</Title>>>
{ "references": [ "GANN obtained the state-of-the-art APC performance on the Chinese review datasets" ], "type": "extractive" }
1912.07976
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: In what four Chinese review datasets does LCF-ATEPC achieves state of the art? Context: <<<Title>>> A Multi-task Learning Model for Chinese-oriented Aspect Polarity Classification and Aspect Term Extraction <<<Abstract>>> Aspect-based sentiment analysis (ABSA) task is a multi-grained task of natural language processing and consists of two subtasks: aspect term extraction (ATE) and aspect polarity classification (APC). Most of the existing work focuses on the subtask of aspect term polarity inferring and ignores the significance of aspect term extraction. Besides, the existing researches do not pay attention to the research of the Chinese-oriented ABSA task. Based on the local context focus (LCF) mechanism, this paper firstly proposes a multi-task learning model for Chinese-oriented aspect-based sentiment analysis, namely LCF-ATEPC. Compared with existing models, this model equips the capability of extracting aspect term and inferring aspect term polarity synchronously, moreover, this model is effective to analyze both Chinese and English comments simultaneously and the experiment on a multilingual mixed dataset proved its availability. By integrating the domain-adapted BERT model, the LCF-ATEPC model achieved the state-of-the-art performance of aspect term extraction and aspect polarity classification in four Chinese review datasets. Besides, the experimental results on the most commonly used SemEval-2014 task4 Restaurant and Laptop datasets outperform the state-of-the-art performance on the ATE and APC subtask. <<</Abstract>>> <<<Introduction>>> Aspect-based sentiment analysis BIBREF0, BIBREF1, BIBREF2 (ABSA) is a fine-grained task compared with traditional sentiment analysis, which requires the model to be able to automatic extract the aspects and predict the polarities of all the aspects. For example, given a restaurant review: "The dessert at this restaurant is delicious but the service is poor," the full-designed model for ABSA needs to extract the aspects "dessert" and "service" and correctly reason about their polarity. In this review, the consumers' opinions on "dessert" and "service" are not consistent, with positive and negative sentiment polarity respectively. Generally, aspects and their polarity need to be manually labeled before running the aspect polarity classification procedure in the supervised deep learning models. However, most of the proposed models for aspect-based sentiment analysis tasks only focus on improving the classification accuracy of aspect polarity and ignore the research of aspect term extraction. Therefore, when conducting transfer learning on aspect-based sentiment analysis, those proposed models often fall into the dilemma of lacking aspect extraction method on targeted tasks because there is not enough research support. The APC task is a kind of classification problem. The researches concerning APC tasks is more abundant than the ATE task, and a large number of deep learning-based models have been proposed to solve APC problems, such as the models BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8 based on long short-term memory (LSTM) and the methodologies BIBREF9, BIBREF10 based on transformer BIBREF11. The purpose of the APC task is to predict the exact sentiment polarity of different aspects in their context, rather than to fuzzily analyze the overall sentiment polarity on the sentence-level or document-level. 
In the APC task, the polarities are most usually classified into three categories: positive, negative, and neutral. It is obvious that the sentiment polarity classified based on aspects can better mine the fine-grained emotional tendency in reviews or tweets, thus providing a more accurate reference for decision-makers. Similar to the named entity recognition BIBREF12 (NER) task, the ATE task is a sequence labeling task, which aims to extract aspects from the reviews or tweet. In most researches BIBREF13, BIBREF14, BIBREF15, the ATE task is studied independently, away from the APC task. The ATE task first segments a review into separate tokens and then infers whether the tokens belong to any aspect. The tokens may be labeled in different forms in different studies, but most of the studies have adopted the IOB label to annotate tokens. Aiming to automatically extract aspects from the text efficiently and analyze the sentiment polarity of aspects simultaneously, this paper proposes a multi-task learning model for aspect-based sentiment analysis. Multilingual processing is an important research orientation of natural language processing. The LCF-ATEPC model proposed in this paper is a novel multilingual and multi-task-oriented model. Apart from achieving state-of-the-art performance in commonly used SemEval-2014 task4 datasets, the experimental results in four Chinese review datasets also validate that this model has a strong ability to expand and adapt to the needs of multilingual task. The proposed model is based on multi-head self-attention (MHSA) and integrates the pre-trained BERT BIBREF16 and the local context focus mechanism, namely LCF-ATEPC. By training on a small amount of annotated data of aspect and their polarity, the model can be adapted to a large-scale dataset, automatically extracting the aspects and predicting the sentiment polarities. In this way, the model can discover the unknown aspects and avoids the tedious and huge cost of manually annotating all aspects and polarities. It is of great significance for the field-specific aspect-based sentiment analysis. The main contributions of this article are as follows: For the first time, this paper studies the multi-task model of APC subtask and ATE subtask for multilingual reviews, which provides a new idea for the research of Chinese aspect extraction. This paper firstly applies self-attention and local context focus techniques to aspect word extraction task, and fully explore their potential in aspect term extraction task. The LCF-ATEPC model proposed in this paper integrates the pre-trained BERT model, significantly improves both the performance of ATE task and APC subtask, and achieves new state-of-the-art performance especially the F1 score of ATE task. Besides, we adopted the domain-adapted BERT model trained on the domain-related corpus to the ABSA joint-task learning model. The experimental results show that the domain-adapted BERT model significantly promotes the performance of APC tasks on the three datasets, especially the Restaurant dataset. We designed and applied dual labels for the input sequence applicable for the SemEval-2014 and Chinese review datasets of ABSA joint-task, the aspect term label, and the sentiment polarity label, respectively. The dual label improves the learning efficiency of the proposed model. <<</Introduction>>> <<<Related Works>>> Most ABSA-oriented methodologies regard the ATE and the APC as independent tasks and major in one of them. 
Accordingly, this section will introduce the related works of ATE and APC in two parts. <<<Aspect Term Extraction>>> The approaches to ATE tasks are classified into two categories: the early dictionary-based or rule-based approaches, and methodologies based on machine-learning or deep learning. BIBREF17 proposed a new rule-based approach to extracting aspects from product reviews using common sense and sentence dependency trees to detect explicit and implicit aspects. BIBREF18 adopts an unsupervised and domain-independent aspect extraction method that relies on syntactic dependency rules and can selects rules automatically. Compared with manually annotating all aspects in the dataset, the models for ATE can learn the features of aspects and automatically extract aspects in the text, which greatly saves labor and time. BIBREF19 proposed a model that can extract and cluster aspects simultaneously according to the seed words provided by users for several aspect categories. By classification, synonymous aspects can be grouped into the same category. BIBREF20 proposed the first aspect-oriented deep learning model in opinion mining, which deploys a 7-layer deep convolutional neural network to mark each word in the sentences with opinions as an aspect or non-aspect word. BIBREF21 proposed a new method for aspect term extraction, which utilizes word embedding to explore the co-occurrence distribution of words and applies the attention mechanism to weaken the irrelevant words and further improves the coherence of all aspects. BIBREF22 proposed a deep neural network-based model namely coupled multilevel attention, which does not require any parser or other linguistic resources to be pre-processed and provides an end-to-end solution. Besides, the proposed model is a multi-layer attention network, where each layer deploys a pair of attentions. This model allows the aspect terms and opinion terms learned interactively and dual propagate during the training process. For the Chinese-oriented ATE task, a multi-aspect bootstrapping (MAB) method BIBREF23 is proposed to extract the aspects of Chinese restaurant reviews. BIBREF24 introduced machine learning methods to explore and extract aspect terms from Chinese hotel reviews. they chose the optimal feature-dimension, feature representation, and maximum entropy (ME) classifier according to the empirical results, and studied the integral effect of aspect extraction. Up to now, the MHSA and pre-trained model has not been applied in the ATE task. This paper explores the potential of the new techniques of deep learning and new network architecture in the ATE task. <<</Aspect Term Extraction>>> <<<Aspect Polarity Classification>>> Aspect polarity classification is another important subtask of ABSA. The approaches designed for the APC task can be categorized into traditional machine learning and recent deep learning methods.The APC task has been comprehensively turned to the the deep neural networks. Therefore, this section mainly introduces approaches based on deep learning techniques. The most commonly applied deep neural network architectures for APC task are recurrent neural networks BIBREF5, BIBREF6, BIBREF7, BIBREF25, BIBREF26 (RNNs) and convolutional neural networks (CNNs) BIBREF14, BIBREF15, BIBREF27. TD-LSTM BIBREF5 first divides the context of aspects into the left and right parts and modeling for them independently. Attention mechanism BIBREF28 has been adapted to APC task in the last few years. 
ATAE-LSTM takes the feature representation of aspects and context words as the input of the model and applies an attention mechanism to dynamically calculate the attention weight according to the relationship between aspects and context words, and finally predicts the polarity of aspects according to the weighted context features. Another LSTM-based model IAN BIBREF7 deployed with attention mechanism equips two independent LSTM networks to capture the features of the context and aspect, with interactively integrating and learning the inner correlation of the features of context and targeted aspects. The RAM BIBREF13 is a bi-directional LSTM-based architecture deploys a multi-layer deep neural network with dedicated memory layers. The multi-layer network utilizes the token features learned based on the attention mechanism and GRUs to finally obtain the global semantic features of the text to predict the sentiment polarities of targeted aspects. In order to retard the loss of context features during the training process, TNet BIBREF25 introduced a conventional transformation architecture based on context-preserving transformation (CPT) units. TNet integrates the bidirectional LSTM network and convolutional neural network and significantly improves the accuracy of sentiment polarity prediction. Multi-grained attention network BIBREF8 (MGAN) is a new deep neural network model, which equips with a variety of fine-grained attention mechanisms, and applies the fine-grained attention mechanisms to interactively learn the token-level features between aspects and context, making great use of the inherent semantic correlation of aspects and context. BIBREF29 proposed the methods for the Chinese language APC task, which conducted the APC task at the aspect level via three granularities. Two fusion methods for the granularities in the Chinese APC task are introduced and applied. Empirical results show that the proposed methods achieved promising performance on the most commonly used ABSA datasets and four Chinese review datasets. Meanwhile, a joint framework aimed to aspect sentiment classification subtask and aspect-opinion pair identification subtask is proposedby BIBREF30, in which the external knowledge are considered and put into the network to alleviate the problem of insufficient train data. The gated alternate neural network (GANN) BIBREF31 proposed for APC task aimed to solve the shortcomings of traditional RNNs and CNNs. The GANN applied the gate truncation RNN (GTR) to learn the aspect-dependent sentiment clue representations. BIBREF32 proposed an end-to-end neural network model for the ABSA task based on joint learning, and the experimental results on a Chinese review show that the proposed model works fine while conducting ATE and APC subtask simultaneously. BERT-SPC is the BERT text pair classification model, it is a variation model of Bert and is adapted to solve the ABSA task in BIBREF9 and achieve high performance. LCF-Bert BIBREF10 proposed a feature-level local context focus mechanism based on self-attention, which can be applied to aspect level emotion analysis and many other fine-grained natural language processing tasks. BERT-ADA BIBREF33 shows that although the pre-trained model based on a large universal corpus, and is easy to be applied to most tasks and improve performance. Still, it is not task-specific. 
For specific tasks, if the pre-trained BERT is adapted to specific tasks through the fine-tuning process on a task-related corpus, the task performance can be further improved. <<</Aspect Polarity Classification>>> <<</Related Works>>> <<<Methodology>>> Aspect-based sentiment analysis relies on the targeted aspects, and most existing studies focus on the classification of aspect polarity, leaving the problem of aspect term extraction. To propose an effective aspect-based sentiment analysis model based on multi-task learning, we adopted domain-adapted BERT model from BERT-ADA and integrated the local context focus mechanism into the proposed model. This section introduces the architecture and methodology of LCF-ATEPC. This section introduces the methodology of the APC module and the ATE module, respectively. and the contents are organized by order of the network layer hierarchy. <<<Task Definition>>> <<</Task Definition>>> <<<Model Architecture>>> Aiming at the problem of insufficient research on aspect term extraction task, a joint deep learning model is designed in this section. This model combines aspect polarity classification task and aspect term extraction task, and two independent Bert layers are adopted to model the global context and the local context respectively. For conducting multi-task training at the same time, the input sequences are tokenized into different tokens and the each token is assigned two kinds of label. The first label indicates whether the token belongs to an aspect; the second label marks the polarity of the tokens belongs to the aspect. Fig FIGREF18 is the network architecture of LCF-ATEPC. Local context feature generator (LCFG) unit is on the left and a global context feature generator (GCFG) unit is on the right. Both context feature generator units contain an independent pre-trained BERT layer, $BERT^l$ and $BERT^g$ respectively. The LCFG unit extracts the features of the local context by a local context focus layer and a MHSA encoder. The GCFG unit deploys only one MHSA encoder to learn the global context feature. The feature interactive learning (FIL) layer combines the learning of the interaction between local context features and global context features and predicts the sentiment polarity of aspects. The extraction of aspects based on the features of the global context. <<<BERT-Shared Layer>>> The pre-trained BERT model is designed to improve performance for most NLP tasks, and The LCF-ATEPC model deploys two independent BERT-Shared layers that are aimed to extract local and global context features. For pre-trained BERT, the fine-tuning learning process is indispensable. Both BERT-Shared layers are regarded as embedded layers, and the fine-tuning process is conducted independently according to the joint loss function of multi-task learning. $X^{l}$ and $X^{g}$ are used to represent the tokenized inputs of LCFG and GCFG respectively, and we can obtain the preliminary outputs of local and global context features. $O^{l}_{BERT}$ and $O^{g}_{BERT}$ are the output features of the LCFG and the GCFG, respectively. $BERT^{l}$ and $BERT^{g}$ are the corresponding BERT-shared layer embedded in the LCFG and the GCFG respectively. <<</BERT-Shared Layer>>> <<</Model Architecture>>> <<<Multi-Head Self-Attention>>> Multi-head self-attention is based on multiple scale-dot attention (SDA), which can be utilized to extract deep semantic features in the context, and the features are represented in self-attention score. 
The MHSA can avoids the negative influence caused by the long distance dependence of the context when learning the features. Suppose $X_{SDA}$ is the input features learned by the LCFG. The scale-dot attention is calculate as follows: $Q$, $K$ and $V$ are the abstract matrices packed from the input features of SDA by three weight matrices $W_{q} \in \mathbb {R}^{d_{h} \times d_{q}}$, $W_{k} \in \mathbb {R}^{d_{h} \times d_{k}}$, $W_{v} \in \mathbb {R}^{d_{h} \times d_{v}}$. The MHSA performs multiple scaled-dot attention in parallel and concatenate the output features, then transform the features by multiplying a vector $W^{M H}$. $h$ represents the number of the attention heads and equal to 12. The “;” means feature concatenation of each head. $W^{M H} \in \mathbb {R}^{hd_{v} \times d_{h}}$ is the parameter matrices for projection . Additionally, we apply a $\tanh $ activation function for the MHSA learning process, which significantly enhanced feature-capture capability. <<</Multi-Head Self-Attention>>> <<<Local Context Focus>>> <<<Semantic-Relative Distance>>> The determination of local context depends on semantic-relative distance (SRD), which is proposed to determine whether the context word belongs to the local context of a targeted aspect to help the model capture the local context. Local context is a new concept that can be adapted to most fine-grained NLP tasks. In the ABSA field, existing models generally segment input sequences into aspect sequences and context sequences, treat aspects and context as independent segments and model their characteristics separately. Instead of leaving the aspect alone as part of the input, this paper mines the aspect and its local context, because the empirical result shows the local context of the target aspect contains more important information. SRD is a concept based on token-aspect pairs, describing how far a token is from the aspect. It counts the number of tokens between each specific token towards a targeted aspect as the SRD of all token-aspect pairs. The SRD is calculated as: where $i$ $(1<i<n)$ is the position of the specific token, $P_{a}$ is the central position of aspect. $m$ is the length of targeted aspect, and $SRD_{i}$ represents for the SRD between the $ i $-th token and the targeted aspect. Figure FIGREF30 and Figure FIGREF31 are two implementations of the local context focus mechanism, the context-feature dynamic mask (CDM) layer and context-feature dynamic weighting (CDW) layer, respectively. The bottom and top of the figures represent the feature input and output positions (POS) corresponding to each token. The self-attention mechanism treats all tokens equally, so that each token can generate the self-attention score with other tokens through parallel matrix operation. According to the definition of MHSA, the features of the output position corresponding to each token are more closely related to itself. After calculating the output of all tokens by MHSA encoder, the output features of each output position will be masked or attenuated, except that the local context will be retained intact. <<</Semantic-Relative Distance>>> <<<Context-features Dynamic Mask>>> Apart from to the features of the local context, the CDM layer will mask non-local context's features learned by the $BERT^l$ layer. Although it is easy to directly mask the non-local context words in the input sequence, it is inevitable to discard the features of non-local context words. 
As the CDM layer is deployed, only a relatively small amount of the semantic context itself will be masked at the corresponding output position. The relative representation of context words and aspects with relatively few semantics is preserved in the corresponding output position. According to the CDM implementation, the features on all the positions of non-local context words will be set to zero vectors. In order to avoid the unbalanced distribution of features after the CDM operation, an MHSA encoder is utilized to learn and rebalance the masked local context features. Suppose that the $O_{BERT^l}$ is the preliminary output features of $BERT^l$, then we get the local context feature output as follows, To mask the features of non-local context, we defines a feature masking matrix $M$, and $ V_{i}^{m} $ is the mask vectors for each token in the input sequence. $\alpha $ is the SRD threshold and $n$ is the length of input sequence including aspect. Tokens whose SRD regarding to the targeted aspect is less than the threshold $\alpha $ are the local contexts. The $E \in \mathbb {R}^{d_{h}}$ represents the ones vector and $O \in \mathbb {R}^{d_{h}}$ is the zeros vectors. “$.$” denotes the dot-product operation of the vectors. Finally the local context features learned by the CDM layer are delivered as $O^{l}$. <<</Context-features Dynamic Mask>>> <<<Context-features Dynamic Weighting>>> Although empirical results show that the CDM has achieved excellent performance compared with existing models, we design the CDW to explore the potential of LCF mechanism. The CDW is another implementation of the LCF mechanism, takes a more modest strategy compared to the CDM layer, which simply drops the features of the non-local context completely. While the features of local context retained intact, the features of the non-local context words will be weighted decay according to their SRD concerning a targeted aspect. where $W$ is the constructed weight matrix and $V_{i}^{w}$ is the weight vector for each non-local context words. Consistently with CDM, $SRD_{i}$ is the SRD between the i-th context token and a targeted aspect. $n$ is the length of the input sequence. $\alpha $ is the SRD threshold. “$.$” denotes the vector dot-product operation. $O_{C D W}^{l}$ is the output of CDW layer. The CDM and CDW layers are independent, which mean they are alternative. Both the output features of CDM and CDW layers are denoted as $O^{l}$. Besides, we tried to concatenate the learned features of CDM and CDW layers and take linear transformation as the features of local context. $W^{f}$, $O^{f}$ and $b^{f}$ are weight matrix and bias vector, respectively. The model can choose one of the three approaches to learn the local context features. <<</Context-features Dynamic Weighting>>> <<</Local Context Focus>>> <<<Feature Interactive Learning>>> LCF-ATEPC does not only rely on local context features for sentiment polarity classification, but combines and learns the local context features and the global context features to conduct polarity classification. $O^{l} $ and $ O^{g}$ are the local context features and global context features, respectively. $ W^{lg} \in \mathbb {R}^{d_{h} \times 2d_{h}}$ and $ b^{lg} \in \mathbb {R}^{d_{h}}$ are the weights and bias vectors, respectively. To learn the features of the concatenated vectors, an MHSA encoding process is performed on the $O_{dense}^{l g}$. 
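To make the local context focus computation above concrete, the following is a minimal sketch (our illustration, not the authors' released code) of the SRD computation and the CDM/CDW operations applied to a matrix of BERT hidden states. Since the excerpt does not reproduce the exact equations, the SRD follows the textual description (distance to the aspect's central position, offset by half the aspect length) and the CDW decay is one plausible linear choice; names such as `alpha` and `aspect_start` are illustrative.

```python
import numpy as np

def semantic_relative_distance(n_tokens, aspect_start, aspect_len):
    """SRD of every token w.r.t. the targeted aspect: distance to the aspect's
    central position, offset by half the aspect length (per the description above)."""
    center = aspect_start + aspect_len / 2.0
    return np.abs(np.arange(n_tokens) - center) - aspect_len / 2.0

def cdm(hidden, srd, alpha):
    """Context-feature Dynamic Mask: keep local-context features (SRD < alpha)
    intact and set the remaining output positions to zero vectors."""
    mask = (srd < alpha).astype(hidden.dtype)
    return hidden * mask[:, None]

def cdw(hidden, srd, alpha):
    """Context-feature Dynamic Weighting: keep local features intact and decay
    non-local features according to their SRD (a plausible linear decay)."""
    n = len(srd)
    w = np.ones(n, dtype=hidden.dtype)
    far = srd >= alpha
    w[far] = np.clip(1.0 - (srd[far] - alpha) / n, 0.0, 1.0)
    return hidden * w[:, None]

# Toy usage: 8 tokens, hidden size 4, aspect spanning token 1 (length 1), alpha = 3.
hidden = np.random.randn(8, 4).astype(np.float32)
srd = semantic_relative_distance(8, aspect_start=1, aspect_len=1)
local_masked = cdm(hidden, srd, alpha=3)      # O^l after CDM
local_weighted = cdw(hidden, srd, alpha=3)    # O^l after CDW
```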
<<</Feature Interactive Learning>>> <<<Aspect Polarity Classifier>>> Aspect polarity classifier performs a head-pooling on the learned concatenated context features. Head-pooling is to extract the hidden states on the corresponding position of the first token in the input sequence. then a Softmax operation is applied to predict the sentiment polarity. where $C$ is the number of sentiment categories, and $Y_{polarity}$ represents the polarity predicted by aspect polarity classifier. <<</Aspect Polarity Classifier>>> <<<Aspect Term Extractor>>> Aspect term extractor first performs the token-level classification for each token, suppose $T_{i}$ is the features on the corresponding position of token $T$, where $N$ is the number of token categories, and $Y_{term}$ represents the token category inferred by aspect polarity classifier. <<</Aspect Term Extractor>>> <<<Training Details>>> The LCFG and the GCFG are based on the BERT-BASE and BERT-SPC models, respectively. And the BERT-SPC BIBREF9 significantly improved the performance of APC tasks. In LCF-ATEPC, BERT-SPC only refactored the input sequence form compared with BERT-BASE model. The input sequence of BERT-BASE is formed in “[CLS]” + sequence + “[SEP]”, while it is formed in “[CLS]” + sequence + “[SEP]” + aspect + “[SEP]” for BERT-SPC. Since LCF-ATEPC is a multi-task learning model, we redesigned the form of data input and adopted dual labels of sentiment polarity and token category. The Figure FIGREF55 are the input samples of BERT-BASE and BERT-SPC model, respectively. The cross-entropy loss is adopted for APC and ATE subtask and the $\mathbf {L}_{2}$ regularization is applied in LCF-ATEPC, here is the loss function for APC task, where $C$ is the number of polarity categories, $\lambda $ is the $L_{2}$ regularization parameter, and $\Theta $ is the parameter-set of the LCF-ATEPC. The loss function for ATE task is where $N$ is the number of token classes and $k$ is the sum of the tokens in each input sequence. Accordingly, the loss function of LCF-ATEPC is as follows: <<</Training Details>>> <<</Methodology>>> <<<Experiments>>> <<<Datasets and Hyperparameters Setting>>> To comprehensive evaluate the performance of the proposed model, the experiments were conducted in three most commonly used ABSA datasets, the Laptops and Restaurant datasets of SemEval-2014 Task4 subtask2 BIBREF0 and an ACL Twitter social dataset BIBREF34. To evaluate our model capability with processing the Chinese language, we also tested the performance of LCF-ATEPC on four Chinese comment datasets BIBREF35, BIBREF36, BIBREF29 (Car, Phone, Notebook, Camera). We preprocessed the seven datasets. We reformatted the origin dataset and annotated each sample with the IOB labels for ATE task and polarity labels for APC tasks, respectively. The polarity of each aspect on the Laptops, Restaurants and datasets may be positive, neutral, and negative, and the conflicting labels of polarity are not considered. The reviews in the four Chinese datasets have been purged, with each aspect may be positive or negative binary polarity. To verify the effectiveness and performance of LCF-ATEPC models on multilingual datasets, we built a multilingual dataset by mixing the 7 datasets. We adopt this dataset to conduct multilingual-oriented ATE and APC experiments. The table demonstrates the details of these datasets. The samples distribution of those datasets is not balanced. 
For example, most samples in the restaurant dataset are positive, while the neutral samples in the Twitter dataset account for the majority. Apart from some hyperparameters setting referred to previous researches, we also conducted the controlled trials and analyzed the experimental results to optimize the hyperparameters setting. The superior hyperparameters are listed in Table TABREF65. The default SRD setting for all experiments is 5, with additional instructions for experiments with different SRD. <<</Datasets and Hyperparameters Setting>>> <<<Compared Methods>>> We compare the LCF-ATEPC model to current state-of-the-art methods. Experimental results show that the proposed model achieves state-of-the-art performance both in the ATE and APC tasks. ATAE-LSTM BIBREF6 is a classical LSTM-based network for the APC task, which applies the attention mechanism to focus on the important words in the context. Besides, ATAE-LSTM appends aspect embedding and the learned features to make full use of the aspect features. The ATAE-LSTM can be adapted to the Chinese review datasets. ATSM-S BIBREF29 is a baseline model of the ATSM variations for Chinese language-oriented ABSA task. This model learns the sentence and aspect terms at three perspectives of granularity. GANN is novel neural network model for APC task aimed to solve the shortcomings of traditional RNNs and CNNs. The GANN applied the Gate Truncation RNN (GTR) to learn informative aspect-dependent sentiment clue representations. GANN obtained the state-of-the-art APC performance on the Chinese review datasets. AEN-BERT BIBREF9 is an attentional encoder network based on the pretrained BERT model, which aims to solve the aspect polarity classification. BERT-PT BIBREF37 is a BERT-adapted model for Review Reading Comprehension (RRC) task, a task inspired by machine reading comprehension (MRC), it could be adapted to aspect-level sentiment classification task. BERT-BASE BIBREF16 is the basic pretrained BERT model. We adapt it to ABSA multi-task learning, which equips the same ability to automatically extract aspect terms and classify aspects polarity as LCF-ATEPC model. BERT-SPC BIBREF9 is a pretrained BERT model designed for the sentence-pair classification task. Consistent with the basic BERT model, we implemented this model for ABSA multitasking. BERT-ADA BIBREF33 is a domain-adapted BERT-based model proposed for the APC task, which fine-tuned the BERT-BASE model on task-related corpus. This model obtained state-of-the-art accuracy on the Laptops dataset. LCF-ATEPC is the multi-task learning model for the ATE and APC tasks, which is based on the the BERT-SPC model and local context focus mechanism. LCF-ATE are the variations of the LCF-ATEPC model which only optimize for the ATE task. LCF-APC are the variations of LCF-ATEPC and it only optimize for the APC task during training process. <<</Compared Methods>>> <<<Results Analysis>>> The experiments are conducted in several segments. First, the baseline performance of LCF-ATEPC on all Chinese and English data sets was tested, and then the effectiveness of multi-task learning was demonstrated. Finally, the assistance of domain-adapted BERT model in improving performance was evaluated and the sensitivity of different datasets to SRD was studied. <<<Performance on Chinese Review Datasets>>> Table TABREF70 are the experimental results of LCF-ATEPC models on four Chinese review datasets. 
<<</Performance on Chinese Review Datasets>>> <<<Performance on SemEval-2014 task4>>> Table TABREF72 lists the main experimental results of LCF-ATEPC models to compare the performance with other ABSA-oriented models. The LCF-ATEPC models are multilingual-oriented. To demonstrate its ability to simultaneously input and analyze reviews in multiple languages, we constructed and experimented with a multilingual dataset fore-mentioned. And result on the multilingual mixed dataset illustrates the effectiveness of the LCF-ATEPC models. <<</Performance on SemEval-2014 task4>>> <<</Results Analysis>>> <<<Overall Performance Analysis>>> Many models for ABSA tasks do not take into account the ATE subtask, but there are still some joint models BIBREF38 based on the traditional neural network architecture to conduct the APC and ATE tasks simultaneously. Benefit from the joint training process, the two ABSA subtasks of APC and ATE can promote each other and improve the performance. The CDM layer works better on twitter dataset because there are a lot of non-standard grammar usage and language abbreviations within it, and the local context focus techniques can promote to infer the polarity of terms. Surprisingly, for the Laptop and Restaurant datasets, guests occasionally have a unified “global” view in a specific review. That is, if the customer is not satisfied with one aspect, it is likely to criticize the other. Things will be the same if a customer prefers a restaurant he would be tolerant of some small disamenity, so the CDW mechanism performs better because it does not completely mask the local context of the other aspect. In the multi-task learning process, the convergence rate of APC and ATE tasks is different, so the model does not achieve the optimal effect at the same time. We build a joint model for the multi-task of ATE and APC based on the BERT-BASE model. After optimizing the model parameters according to the empirical result, the joint model based on BERT-BASE achieved hopeful performance on all three datasets and even surpassed other proposed BERT based improved models on some datasets, such as BERT-PT, AEN-BERT, SDGCN-BERT, and so on. Meanwhile, we implement the joint-task model based on BERT-SPC. Compared with the BERT-BASE model, BERT-SPC significantly improves the accuracy and F1 score of aspect polarity classification. In addition, for the first time, BERT-SPC has increased the F1 score of ATE subtask on three datasets up to 99%. ATEPC-Fusion is a supplementary scheme of LCF mechanism, and it adopts a moderate approach to generate local context features. The experimental results show that its performance is also better than the existing BERT-based models. <<<Effectiveness of Multi-task Learning>>> Keeping the main architecture of the LCF-ATEPC model unchanged, we tried to only optimize parameters for a single task in the multi-task model to explore the difference between the optimal performance of a single task and the multi-task learning model . The Figure TABREF76 depicts the performance of the LCF-ATEPC model when performing an single APC or ATE task. Experimental results show that on some datasets the LCF-ATEPC model performs better concerning APC or ATE single task than conducting ABSA multi-task on some datasets. In general, the proposed model LCF-ATEPC proposed in this paper is still superior to other ABSA-oriented multi-task models and even the single-task models aim to APC or ATE. 
When optimizing the model parameters through back-propagation for multiple tasks, the multi-task learning model needs to take into account the loss functions of the different subtasks. Consequently, multi-task learning sometimes cannot achieve as good an effect as single-task learning does, which is the compromise a multi-task learning model makes when dealing with multiple tasks. <<</Effectiveness of Multi-task Learning>>> <<<Domain-adaption for LCF-ATEPC>>> The BERT-BASE model is trained on a large-scale general corpus, so the fine-tuning process during training is significant and inevitable for BERT-based models. Meanwhile, the commonly benchmarked ABSA datasets are generally small and domain-specific, so the effect of the BERT-BASE model on most ABSA datasets can be further improved through domain adaption. Domain adaption is an effective technique for integrating the pre-trained BERT-BASE model. By further training the BERT-BASE model on a domain-related corpus similar or homologous to the target ABSA dataset, a domain-adapted pre-trained BERT model can be obtained. We adopted the method proposed in BIBREF33 to obtain the domain-adapted pre-trained BERT model based on the corpus of Yelp Dataset Challenge reviews and the Amazon Laptops review dataset BIBREF39. Table TABREF78 shows that the performance of the APC task is significantly improved by the domain-adapted BERT model. The accuracy on the classical Restaurant dataset reaches more than 90%, which means that LCF-ATEPC is the first ABSA-oriented model to obtain up to 90% accuracy on the Restaurant dataset. In addition, the experimental results on the Laptop dataset also validate the effectiveness of the domain-adapted BERT model for ABSA multi-task learning. <<</Domain-adaption for LCF-ATEPC>>> <<<SRD Sensitivity on Different Datasets>>> We tested the sensitivity of the SRD threshold on typical Chinese and English ABSA datasets: the Phone dataset and the Restaurant dataset, respectively. For the evaluation of the Restaurant dataset, we adopted the domain-adapted BERT model as the underlying architecture of the LCF-ATEPC model. The experimental results in Figure FIGREF81 and Figure FIGREF84 are evaluated in the multi-task learning process. For the Chinese Phone dataset, the LCF-ATEPC-CDM model achieves the best APC accuracy and F1 score when the SRD threshold is about 4-5, while the best ATE performance is reached when the SRD threshold is about 1-3. The LCF-ATEPC-CDW model obtains the best APC performance on the Phone dataset when the SRD threshold is 5, while the best ATE F1 score is obtained when the SRD threshold is approximately 7. For the Restaurant dataset, the optimal APC accuracy and F1 score are achieved by LCF-ATEPC-CDM when the SRD threshold is approximately between 4 and 6. When the SRD threshold for LCF-ATEPC-CDW is set to 8, the model achieves the optimal aspect classification accuracy and F1 score. However, the F1 score of the ATE task is less sensitive to the SRD threshold, indicating that the aspect polarity classification task provides less assistance to it during the joint learning process. <<</SRD Sensitivity on Different Datasets>>> <<</Overall Performance Analysis>>> <<</Experiments>>> <<<Conclusion>>> The ATE and APC subtasks were treated as independent tasks in previous studies. 
Moreover, multi-task learning of the ATE and APC subtasks has not attracted enough attention from researchers. In addition, research on the Chinese-oriented ABSA task remains insufficient and urgently needs to be developed. To address these problems, this paper proposes LCF-ATEPC, a multi-task learning model for aspect-based sentiment analysis based on the MHSA and LCF mechanisms, and applies pre-trained BERT to the ATE subtask for the first time. The proposed models are not limited to Chinese: they are multilingual and applicable to the classic English review sentiment analysis task, such as SemEval-2014 Task 4. The proposed model can automatically extract aspects from reviews and infer their polarity. Empirical results on three commonly used English datasets and four Chinese review datasets for ABSA tasks show that, compared with all models based on basic BERT, the LCF-ATEPC model achieves state-of-the-art performance on the ATE and APC tasks. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Car, Phone, Notebook, Camera" ], "type": "extractive" }
1909.09268
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What is the criteria for a good metric? Context: <<<Title>>> Towards Neural Language Evaluators <<<Abstract>>> We review three limitations of BLEU and ROUGE -- the most popular metrics used to assess reference summaries against hypothesis summaries, come up with criteria for what a good metric should behave like and propose concrete ways to use recent Transformers-based Language Models to assess reference summaries against hypothesis summaries. <<</Abstract>>> <<<Introduction>>> Evaluation metrics play a central role in the machine learning community. They direct the efforts of the research community and are used to define the state of the art models. In machine translation and summarization, the two most common metrics used for evaluating similarity between candidate and reference texts are BLEU BIBREF0 and ROUGE BIBREF1. Both approaches rely on counting the matching n-grams in the candidates summary to n-grams in the reference text. BLEU is precision focused while ROUGE is recall focused. These metrics have posed serious limitations and have already been criticized by the academic community.In this work we formulate three criticisms of BLEU and ROUGE, establish criteria that a sound metric should have and propose concrete ways to use recent advances in NLP to design data-driven metric addressing the weaknesses found in BLEU and ROUGE. <<</Introduction>>> <<<Related Work>>> <<<BLEU, ROUGE and n-gram matching approaches>>> BLEU (Bilingual Evaluation Understudy) BIBREF0 and ROUGE BIBREF1 have been used to evaluate many NLP tasks for almost two decades. The general acceptance of these methods depend on many factors including their simplicity and the intuitive interpretability. Yet the main factor is the claim that they highly correlate with human judgement BIBREF0. This has been criticised extensively by the literature and the shortcomings of these methods have been widely studied. Reiter BIBREF2 , in his structured review of BLEU, finds a low correlation between BLEU and human judgment. Callison et al BIBREF3 examines BLEU in the context of machine translation and find that BLEU does neither correlate with human judgment on adequacy(whether the hypothesis sentence adequately captures the meaning of the reference sentence) nor fluency(the quality of language in a sentence). Sulem et al BIBREF4 examines BLEU in the context of text simplification on grammaticality, meaning preservation and simplicity and report BLEU has very low or in some cases negative correlation with human judgment. Considering these results it is a natural step to pursue new avenues for natural language evaluation and with the advent of deep learning using neural networks for this task is a promising step forward. <<</BLEU, ROUGE and n-gram matching approaches>>> <<<Transformers, BERT and GPT>>> Language modeling has become an important NLP technique thanks to the ability to apply it to various NLP tasks as explained in Radford et al BIBREF5. There are two leading architectures for language modeling Recurrent Neural Networks (RNNs)BIBREF6 and Transformers BIBREF7 . RNNs handle the input tokens, words or characters, one by one through time to learn the relationship between them, whereas, transformers receive a segment of tokens and learn the dependencies between them using an attention mechanism. 
<<</Transformers, BERT and GPT>>> <<<Model-based metrics>>> While BLEU and ROUGE are defined in a discrete space new evaluation metric can be defined in this continuous space. BERTscore BIBREF8 uses word embeddings and cosine similarity to create a score array and use greedy matching to maximize the similarity score. Sentence Mover’s Similarity BIBREF9 uses the mover similarity, Wasserstein distance, between sentence embedding generated from averaging the word embeddings in a sentence. Both of these methods report stronger correlations with human judgment and better results when compared to BLEU and ROUGE. While they are using word embeddings BIBREF10 to transfer their sentence in a continuous space they are still using distance metrics to evaluate that sentence. While BLEND BIBREF11 uses an SVM to combine different existing evaluation metrics. One other evaluation method proposed is RUSE BIBREF12 this method proposes embedding both sentences separately and pooling them to a given size. After that they use a pre trained MLP to predict on different tasks. This quality estimator metric is then proposed to be used in language evaluation. Our proposed methodology is to take neural language evaluation beyond architecture specifications. We are proposing a framework in which an evaluators success can be determined. <<</Model-based metrics>>> <<</Related Work>>> <<<Challenges with BLEU and ROUGE>>> In this part, we discuss three significant limitations of BLEU and ROUGE. These metrics can assign: High scores to semantically opposite translations/summaries, Low scores to semantically related translations/summaries and High scores to unintelligible translations/summaries. <<<High score, opposite meanings>>> Suppose that we have a reference summary s1. By adding a few negation terms to s1, one can create a summary s2 which is semantically opposite to s1 but yet has a high BLEU/ROUGE score. <<</High score, opposite meanings>>> <<<Low score, similar meanings>>> In addition not to be sensitive to negation, BLEU and ROUGE score can give low scores to sentences with equivalent meaning. If s2 is a paraphrase of s1, the meaning will be the same ;however, the overlap between words in s1 and s2 will not necessarily be significant. <<</Low score, similar meanings>>> <<<High score, unintelligible sentences>>> A third weakness of BLEU and ROUGE is that in their simplest implementations, they are insensitive to word permutation and can give very high scores to unintelligible sentences. Let s1 be "On a morning, I saw a man running in the street." and s2 be “On morning a, I saw the running a man street”. s2 is not an intelligible sentence. The unigram version of ROUGE and BLEU will give these 2 sentences a score of 1. <<</High score, unintelligible sentences>>> <<<Experiments>>> <<<Experiments with carefully crafted sentences>>> To illustrate our argument, let's consider the following pairs of sentences: In Pair 1: s1 is "For the past two decades, the translation and summarization communities have used ROUGE and BLEU and these metrics have shown to be robust to criticism” s2 is "“For the past two decades, the translation and summarization communities have used ROUGE and BLEU and these metrics have shown not to be robust to criticism”. They differ by adding the negation in s2. In Pair 2: s1 is "On a morning, I saw a man running in the street." and s2 is "In the early hours of the day, I observed one gentleman jogging along the road”. s2 is a paraphrase of s1. 
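As a quick, self-contained illustration of the three limitations discussed above (this sketch is ours, not part of the paper), the clipped unigram overlap at the core of BLEU-1 and ROUGE-1 can be computed directly for the example pairs; the sentence strings below are abridged versions of the pairs in this section.

```python
from collections import Counter

def unigram_overlap(reference, hypothesis):
    """Clipped unigram precision (BLEU-1-style) and recall (ROUGE-1-style)."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    ref_counts, hyp_counts = Counter(ref), Counter(hyp)
    overlap = sum(min(c, ref_counts[w]) for w, c in hyp_counts.items())
    return overlap / len(hyp), overlap / len(ref)   # (precision, recall)

# Pair 1 (abridged): a single negation flips the meaning but barely changes the score.
s1 = "these metrics have shown to be robust to criticism"
s2 = "these metrics have shown not to be robust to criticism"
print(unigram_overlap(s1, s2))   # high precision and recall despite opposite meaning

# Pair 2: a paraphrase with little lexical overlap receives a low score.
s3 = "On a morning , I saw a man running in the street ."
s4 = "In the early hours of the day , I observed one gentleman jogging along the road ."
print(unigram_overlap(s3, s4))   # low overlap despite equivalent meaning

# Word permutation: an unintelligible reordering still gets clipped precision of 1.0.
s5 = "On morning a , I saw the running a man street"
print(unigram_overlap(s3, s5))   # every hypothesis unigram is matched in the reference
```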
<<</Experiments with carefully crafted sentences>>> <<<Semantic similarity experiments>>> To go beyond carefully crafted sentences. We assessed how well BLEU and ROUGE correlated with human judgement of similarity between pairs of paraphrased sentences and compared their performance to a RoBERTa model finetuned for semantic similarity (Table 2). <<</Semantic similarity experiments>>> <<</Experiments>>> <<</Challenges with BLEU and ROUGE>>> <<<Towards a robust data-driven approach>>> <<<Metric Scorecard>>> In our methodology to design new evaluation metrics for comparing reference summaries/translations to hypothesis ones, we established first-principles criteria on what a good evaluator should do. The first one is that it should be highly correlated with human judgement of similarity. The second one is that it should be able to distinguish sentences which are in logical contradiction, logically unrelated or in logical agreement. The third one is that a robust evaluator should also be able to identify unintelligible sentences. The last criteria is that a good evaluation metric should not give high scores to semantically distant sentences and low scores to semantically related sentences. <<</Metric Scorecard>>> <<<Implementing metrics satisfying scorecard>>> <<<Semantic Similarity>>> Starting from the RoBERTa large pre-trained model BIBREF13 , we finetune it to predict sentence similarity on the STS-B benchmark dataset. Given two sentences of text, s1 and s2, the systems need to compute how similar s1 and s2 are, returning a similarity score between 0 and 5. The dataset comprises naturally occurring pairs of sentences drawn from several domains and genres, annotated by crowdsourcing. The benchmark comprises 8628 sentence pairs with 5700 pairs in the training set, 1500 in the development set and 1379 in the test set. <<</Semantic Similarity>>> <<<Logical Equivalence>>> For logical inference, we start with a pretrained RoBERTa BIBREF13 model and finetune it using the Multi-Genre Natural Language Inference Corpus (Williams et al., 2018). It is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis, contradicts the hypothesis, or neither (neutral). The training set includes 393k sentence pairs, development set includes 20k and test set includes 20k. The accuracy of the pre-trained model on the development set is 0.9060. <<</Logical Equivalence>>> <<<Sentence Intelligibility>>> We start with a pretrained roBERTa BIBREF13 model and finetune it using the Corpus of Linguistic Acceptability (CoLA) . It consists of examples of expert English sentence acceptability judgments drawn from 22 books. Each example is a single string of English words annotated with whether it is grammatically possible sentence of English. The training set for CoLA has 10k sentences and the development set includes 1k sentences. The current model gets 67.8 percent accuracy <<</Sentence Intelligibility>>> <<<Rationale for Language Models>>> The overall rationale for using language models fine tuned for specific aspects of the scorecard is that recent work has shown that language models are unsupervised multitask learners BIBREF5 and can rediscover the classical NLP pipeline. By fine tuning them on a specific task, we make them pay attention to the correct level of abstraction corresponding to the scorecard. 
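The scorecard above also suggests a simple programmatic interface. The sketch below is our illustration rather than the paper's code: it assumes three scoring functions are available — for example, the RoBERTa models fine-tuned on STS-B, MNLI, and CoLA described above — and combines their outputs into a per-pair report; all identifiers are placeholders, and the dummy lambdas stand in for real model calls.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ScorecardReport:
    similarity: float      # STS-B-style similarity score in [0, 5]
    entailment: str        # "entailment" | "neutral" | "contradiction"
    hyp_acceptable: bool   # CoLA-style grammaticality of the hypothesis

def evaluate_pair(reference: str,
                  hypothesis: str,
                  similarity_fn: Callable[[str, str], float],
                  nli_fn: Callable[[str, str], str],
                  acceptability_fn: Callable[[str], bool]) -> ScorecardReport:
    """Score a reference/hypothesis pair against the scorecard criteria."""
    return ScorecardReport(
        similarity=similarity_fn(reference, hypothesis),
        entailment=nli_fn(reference, hypothesis),
        hyp_acceptable=acceptability_fn(hypothesis),
    )

# Example usage with dummy scorers standing in for the fine-tuned models.
report = evaluate_pair(
    "I saw a man running in the street.",
    "I observed a gentleman jogging along the road.",
    similarity_fn=lambda r, h: 4.5,
    nli_fn=lambda r, h: "entailment",
    acceptability_fn=lambda h: True,
)
print(report)
```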
<<</Rationale for Language Models>>> <<</Implementing metrics satisfying scorecard>>> <<</Towards a robust data-driven approach>>> <<<Conclusion>>> In this work, we have shown three main limitations of BLEU and ROUGE and proposed a path forward outlining why and how state-of-the-art language models can be used as summary evaluators. Future work includes extending the proposed scorecard, updating the models that best match the scorecard criteria, and assessing published summarization models using that scorecard. <<</Conclusion>>> <<</Title>>>
{ "references": [ "The first one is that it should be highly correlated with human judgement of similarity. The second one is that it should be able to distinguish sentences which are in logical contradiction, logically unrelated or in logical agreement. The third one is that a robust evaluator should also be able to identify unintelligible sentences. The last criteria is that a good evaluation metric should not give high scores to semantically distant sentences and low scores to semantically related sentences." ], "type": "extractive" }
1909.09268
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What are the three limitations? Context: <<<Title>>> Towards Neural Language Evaluators <<<Abstract>>> We review three limitations of BLEU and ROUGE -- the most popular metrics used to assess reference summaries against hypothesis summaries, come up with criteria for what a good metric should behave like and propose concrete ways to use recent Transformers-based Language Models to assess reference summaries against hypothesis summaries. <<</Abstract>>> <<<Introduction>>> Evaluation metrics play a central role in the machine learning community. They direct the efforts of the research community and are used to define the state of the art models. In machine translation and summarization, the two most common metrics used for evaluating similarity between candidate and reference texts are BLEU BIBREF0 and ROUGE BIBREF1. Both approaches rely on counting the matching n-grams in the candidates summary to n-grams in the reference text. BLEU is precision focused while ROUGE is recall focused. These metrics have posed serious limitations and have already been criticized by the academic community.In this work we formulate three criticisms of BLEU and ROUGE, establish criteria that a sound metric should have and propose concrete ways to use recent advances in NLP to design data-driven metric addressing the weaknesses found in BLEU and ROUGE. <<</Introduction>>> <<<Related Work>>> <<<BLEU, ROUGE and n-gram matching approaches>>> BLEU (Bilingual Evaluation Understudy) BIBREF0 and ROUGE BIBREF1 have been used to evaluate many NLP tasks for almost two decades. The general acceptance of these methods depend on many factors including their simplicity and the intuitive interpretability. Yet the main factor is the claim that they highly correlate with human judgement BIBREF0. This has been criticised extensively by the literature and the shortcomings of these methods have been widely studied. Reiter BIBREF2 , in his structured review of BLEU, finds a low correlation between BLEU and human judgment. Callison et al BIBREF3 examines BLEU in the context of machine translation and find that BLEU does neither correlate with human judgment on adequacy(whether the hypothesis sentence adequately captures the meaning of the reference sentence) nor fluency(the quality of language in a sentence). Sulem et al BIBREF4 examines BLEU in the context of text simplification on grammaticality, meaning preservation and simplicity and report BLEU has very low or in some cases negative correlation with human judgment. Considering these results it is a natural step to pursue new avenues for natural language evaluation and with the advent of deep learning using neural networks for this task is a promising step forward. <<</BLEU, ROUGE and n-gram matching approaches>>> <<<Transformers, BERT and GPT>>> Language modeling has become an important NLP technique thanks to the ability to apply it to various NLP tasks as explained in Radford et al BIBREF5. There are two leading architectures for language modeling Recurrent Neural Networks (RNNs)BIBREF6 and Transformers BIBREF7 . RNNs handle the input tokens, words or characters, one by one through time to learn the relationship between them, whereas, transformers receive a segment of tokens and learn the dependencies between them using an attention mechanism. 
<<</Transformers, BERT and GPT>>> <<<Model-based metrics>>> While BLEU and ROUGE are defined in a discrete space new evaluation metric can be defined in this continuous space. BERTscore BIBREF8 uses word embeddings and cosine similarity to create a score array and use greedy matching to maximize the similarity score. Sentence Mover’s Similarity BIBREF9 uses the mover similarity, Wasserstein distance, between sentence embedding generated from averaging the word embeddings in a sentence. Both of these methods report stronger correlations with human judgment and better results when compared to BLEU and ROUGE. While they are using word embeddings BIBREF10 to transfer their sentence in a continuous space they are still using distance metrics to evaluate that sentence. While BLEND BIBREF11 uses an SVM to combine different existing evaluation metrics. One other evaluation method proposed is RUSE BIBREF12 this method proposes embedding both sentences separately and pooling them to a given size. After that they use a pre trained MLP to predict on different tasks. This quality estimator metric is then proposed to be used in language evaluation. Our proposed methodology is to take neural language evaluation beyond architecture specifications. We are proposing a framework in which an evaluators success can be determined. <<</Model-based metrics>>> <<</Related Work>>> <<<Challenges with BLEU and ROUGE>>> In this part, we discuss three significant limitations of BLEU and ROUGE. These metrics can assign: High scores to semantically opposite translations/summaries, Low scores to semantically related translations/summaries and High scores to unintelligible translations/summaries. <<<High score, opposite meanings>>> Suppose that we have a reference summary s1. By adding a few negation terms to s1, one can create a summary s2 which is semantically opposite to s1 but yet has a high BLEU/ROUGE score. <<</High score, opposite meanings>>> <<<Low score, similar meanings>>> In addition not to be sensitive to negation, BLEU and ROUGE score can give low scores to sentences with equivalent meaning. If s2 is a paraphrase of s1, the meaning will be the same ;however, the overlap between words in s1 and s2 will not necessarily be significant. <<</Low score, similar meanings>>> <<<High score, unintelligible sentences>>> A third weakness of BLEU and ROUGE is that in their simplest implementations, they are insensitive to word permutation and can give very high scores to unintelligible sentences. Let s1 be "On a morning, I saw a man running in the street." and s2 be “On morning a, I saw the running a man street”. s2 is not an intelligible sentence. The unigram version of ROUGE and BLEU will give these 2 sentences a score of 1. <<</High score, unintelligible sentences>>> <<<Experiments>>> <<<Experiments with carefully crafted sentences>>> To illustrate our argument, let's consider the following pairs of sentences: In Pair 1: s1 is "For the past two decades, the translation and summarization communities have used ROUGE and BLEU and these metrics have shown to be robust to criticism” s2 is "“For the past two decades, the translation and summarization communities have used ROUGE and BLEU and these metrics have shown not to be robust to criticism”. They differ by adding the negation in s2. In Pair 2: s1 is "On a morning, I saw a man running in the street." and s2 is "In the early hours of the day, I observed one gentleman jogging along the road”. s2 is a paraphrase of s1. 
<<</Experiments with carefully crafted sentences>>> <<<Semantic similarity experiments>>> To go beyond carefully crafted sentences. We assessed how well BLEU and ROUGE correlated with human judgement of similarity between pairs of paraphrased sentences and compared their performance to a RoBERTa model finetuned for semantic similarity (Table 2). <<</Semantic similarity experiments>>> <<</Experiments>>> <<</Challenges with BLEU and ROUGE>>> <<<Towards a robust data-driven approach>>> <<<Metric Scorecard>>> In our methodology to design new evaluation metrics for comparing reference summaries/translations to hypothesis ones, we established first-principles criteria on what a good evaluator should do. The first one is that it should be highly correlated with human judgement of similarity. The second one is that it should be able to distinguish sentences which are in logical contradiction, logically unrelated or in logical agreement. The third one is that a robust evaluator should also be able to identify unintelligible sentences. The last criteria is that a good evaluation metric should not give high scores to semantically distant sentences and low scores to semantically related sentences. <<</Metric Scorecard>>> <<<Implementing metrics satisfying scorecard>>> <<<Semantic Similarity>>> Starting from the RoBERTa large pre-trained model BIBREF13 , we finetune it to predict sentence similarity on the STS-B benchmark dataset. Given two sentences of text, s1 and s2, the systems need to compute how similar s1 and s2 are, returning a similarity score between 0 and 5. The dataset comprises naturally occurring pairs of sentences drawn from several domains and genres, annotated by crowdsourcing. The benchmark comprises 8628 sentence pairs with 5700 pairs in the training set, 1500 in the development set and 1379 in the test set. <<</Semantic Similarity>>> <<<Logical Equivalence>>> For logical inference, we start with a pretrained RoBERTa BIBREF13 model and finetune it using the Multi-Genre Natural Language Inference Corpus (Williams et al., 2018). It is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis, contradicts the hypothesis, or neither (neutral). The training set includes 393k sentence pairs, development set includes 20k and test set includes 20k. The accuracy of the pre-trained model on the development set is 0.9060. <<</Logical Equivalence>>> <<<Sentence Intelligibility>>> We start with a pretrained roBERTa BIBREF13 model and finetune it using the Corpus of Linguistic Acceptability (CoLA) . It consists of examples of expert English sentence acceptability judgments drawn from 22 books. Each example is a single string of English words annotated with whether it is grammatically possible sentence of English. The training set for CoLA has 10k sentences and the development set includes 1k sentences. The current model gets 67.8 percent accuracy <<</Sentence Intelligibility>>> <<<Rationale for Language Models>>> The overall rationale for using language models fine tuned for specific aspects of the scorecard is that recent work has shown that language models are unsupervised multitask learners BIBREF5 and can rediscover the classical NLP pipeline. By fine tuning them on a specific task, we make them pay attention to the correct level of abstraction corresponding to the scorecard. 
<<</Rationale for Language Models>>> <<</Implementing metrics satisfying scorecard>>> <<</Towards a robust data-driven approach>>> <<<Conclusion>>> In this work, we have shown three main limitations of BLEU and ROUGE and proposed a path forward outlining why and how state-of-the-art language models can be used as summary evaluators. Future work includes extending the proposed scorecard, updating the models that best match the scorecard criteria, and assessing published summarization models using that scorecard. <<</Conclusion>>> <<</Title>>>
{ "references": [ "High scores to semantically opposite translations/summaries, Low scores to semantically related translations/summaries and High scores to unintelligible translations/summaries." ], "type": "extractive" }
1910.00194
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: Which language(s) are found in the WSD datasets? Context: <<<Title>>> Improved Word Sense Disambiguation Using Pre-Trained Contextualized Word Representations <<<Abstract>>> Contextualized word representations are able to give different representations for the same word in different contexts, and they have been shown to be effective in downstream natural language processing tasks, such as question answering, named entity recognition, and sentiment analysis. However, evaluation on word sense disambiguation (WSD) in prior work shows that using contextualized word representations does not outperform the state-of-the-art approach that makes use of non-contextualized word embeddings. In this paper, we explore different strategies of integrating pre-trained contextualized word representations and our best strategy achieves accuracies exceeding the best prior published accuracies by significant margins on multiple benchmark WSD datasets. <<</Abstract>>> <<<Introduction>>> Word sense disambiguation (WSD) automatically assigns a pre-defined sense to a word in a text. Different senses of a word reflect different meanings a word has in different contexts. Identifying the correct word sense given a context is crucial in natural language processing (NLP). Unfortunately, while it is easy for a human to infer the correct sense of a word given a context, it is a challenge for NLP systems. As such, WSD is an important task and it has been shown that WSD helps downstream NLP tasks, such as machine translation BIBREF0 and information retrieval BIBREF1. A WSD system assigns a sense to a word by taking into account its context, comprising the other words in the sentence. This can be done through discrete word features, which typically involve surrounding words and collocations trained using a classifier BIBREF2, BIBREF3, BIBREF4, BIBREF5. The classifier can also make use of continuous word representations of the surrounding words BIBREF6, BIBREF7. Neural WSD systems BIBREF8, BIBREF9 feed the continuous word representations into a neural network that captures the whole sentence and the word representation in the sentence. However, in both approaches, the word representations are independent of the context. Recently, pre-trained contextualized word representations BIBREF10, BIBREF11, BIBREF12, BIBREF13 have been shown to improve downstream NLP tasks. Pre-trained contextualized word representations are obtained through neural sentence encoders trained on a huge amount of raw texts. When the resulting sentence encoder is fine-tuned on the downstream task, such as question answering, named entity recognition, and sentiment analysis, with much smaller annotated training data, it has been shown that the trained model, with the pre-trained sentence encoder component, achieves new state-of-the-art results on those tasks. While demonstrating superior performance in downstream NLP tasks, pre-trained contextualized word representations are still reported to give lower accuracy compared to approaches that use non-contextualized word representations BIBREF10, BIBREF12 when evaluated on WSD. This seems counter-intuitive, as a neural sentence encoder better captures the surrounding context that serves as an important cue to disambiguate words. In this paper, we explore different strategies of integrating pre-trained contextualized word representations for WSD. 
Our best strategy outperforms prior methods of incorporating pre-trained contextualized word representations and achieves new state-of-the-art accuracy on multiple benchmark WSD datasets. The following sections are organized as follows. Section SECREF2 presents related work. Section SECREF3 describes our pre-trained contextualized word representation. Section SECREF4 proposes different strategies to incorporate the contextualized word representation for WSD. Section SECREF5 describes our experimental setup. Section SECREF6 presents the experimental results. Section SECREF7 discusses the findings from the experiments. Finally, Section SECREF8 presents the conclusion. <<</Introduction>>> <<<Related Work>>> Continuous word representations in real-valued vectors, or commonly known as word embeddings, have been shown to help improve NLP performance. Initially, exploiting continuous representations was achieved by adding real-valued vectors as classification features BIBREF14. BIBREF6 fine-tuned non-contextualized word embeddings by a feed-forward neural network such that those word embeddings were more suited for WSD. The fine-tuned embeddings were incorporated into an SVM classifier. BIBREF7 explored different strategies of incorporating word embeddings and found that their best strategy involved exponential decay that decreased the contribution of surrounding word features as their distances to the target word increased. The neural sequence tagging approach has also been explored for WSD. BIBREF8 proposed bidirectional long short-term memory (LSTM) BIBREF15 for WSD. They concatenated the hidden states of the forward and backward LSTMs and fed the concatenation into an affine transformation followed by softmax normalization, similar to the approach to incorporate a bidirectional LSTM adopted in sequence labeling tasks such as part-of-speech tagging and named entity recognition BIBREF16. BIBREF9 proposed a self-attention layer on top of the concatenated bidirectional LSTM hidden states for WSD and introduced multi-task learning with part-of-speech tagging and semantic labeling as auxiliary tasks. However, on average across the test sets, their approach did not outperform SVM with word embedding features. Subsequently, BIBREF17 proposed the incorporation of glosses from WordNet in a bidirectional LSTM for WSD, and reported better results than both SVM and prior bidirectional LSTM models. A neural language model (LM) is aimed at predicting a word given its surrounding context. As such, the resulting hidden representation vector captures the context of a word in a sentence. BIBREF10 designed context2vec, which is a one-layer bidirectional LSTM trained to maximize the similarity between the hidden state representation of the LSTM and the target word embedding. BIBREF12 designed ELMo, which is a two-layer bidirectional LSTM language model trained to predict the next word in the forward LSTM and the previous word in the backward LSTM. In both models, WSD was evaluated by nearest neighbor matching between the test and training instance representations. However, despite training on a huge amount of raw texts, the resulting accuracies were still lower than those achieved by WSD approaches with pre-trained non-contextualized word representations. End-to-end neural machine translation (NMT) BIBREF18, BIBREF19 learns to generate an output sequence given an input sequence, using an encoder-decoder model. 
The encoder captures the contextualized representation of the words in the input sentence for the decoder to generate the output sentence. Following this intuition, BIBREF11 trained an encoder-decoder model on parallel texts and obtained pre-trained contextualized word representations from the encoder. <<</Related Work>>> <<<Pre-Trained Contextualized Word Representation>>> The contextualized word representation that we use is BERT BIBREF13, which is a bidirectional transformer encoder model BIBREF20 pre-trained on billions of words of texts. There are two tasks on which the model is trained, i.e., masked word and next sentence prediction. In both tasks, prediction accuracy is determined by the ability of the model to understand the context. A transformer encoder computes the representation of each word through an attention mechanism with respect to the surrounding words. Given a sentence $x^n_1$ of length $n$, the transformer computes the representation of each word $x_i$ through a multi-head attention mechanism, where the query vector is from $x_i$ and the key-value vector pairs are from the surrounding words $x_{i^{\prime }}$ ($1 \le i^{\prime } \le n$). The word representation produced by the transformer captures the contextual information of a word. The attention mechanism can be viewed as mapping a query vector $\mathbf {q}$ and a set of key-value vector pairs $(\mathbf {k}, \mathbf {v})$ to an output vector. The attention function $A(\cdot )$ computes the output vector which is the weighted sum of the value vectors and is defined as: where $\mathbf {K}$ and $\mathbf {V}$ are matrices, containing the key vectors and the value vectors of the words in the sentence respectively, and $\alpha (\mathbf {q}, \mathbf {k}, \rho )$ is a scalar attention weight between $\mathbf {q}$ and $\mathbf {k}$, re-scaled by a scalar $\rho $. Two building blocks for the transformer encoder are the multi-head attention mechanism and the position-wise feed-forward neural network (FFNN). The multi-head attention mechanism with $H$ heads leverages the attention function in Equation DISPLAY_FORM1 as follows: where $\oplus $ denotes concatenation of vectors, $\mathbf {W}_\text{MH} \in \mathbb {R}^{d_\text{model} \times Hd_\mathbf {v}}$, $\mathbf {W}^\mathbf {Q}_\eta , \mathbf {W}^\mathbf {K}_\eta \in \mathbb {R}^{d_\mathbf {k} \times d_\text{model}}$, and $ \mathbf {W}^\mathbf {V}_\eta \in \mathbb {R}^{d_\mathbf {v} \times d_\text{model}}$. The input vector $\mathbf {q} \in \mathbb {R}^{d_\text{model}}$ is the hidden vector for the ambiguous word, while input matrices $\mathbf {K}, \mathbf {V} \in \mathbb {R}^{d_\text{model} \times n}$ are the concatenation of the hidden vectors of all words in the sentence. For each attention head, the dimension of both the query and key vectors is $d_\mathbf {k}$ while the dimension of the value vector is $d_\mathbf {v}$. The encoder model dimension is $d_\text{model}$. The position-wise FFNN performs a non-linear transformation on the attention output corresponding to each input word position as follows: in which the input vector $\mathbf {u} \in \mathbb {R}^{d_\text{model}}$ is transformed to the output vector with dimension $d_\text{model}$ via a series of linear projections with the ReLU activation function. 
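Since the attention equations themselves are not reproduced in this excerpt, the following minimal NumPy sketch illustrates scaled dot-product attention for a single query word and its multi-head composition as described above; the function and variable names are illustrative, and the re-scaling scalar $\rho$ is assumed to take its usual value $1/\sqrt{d_k}$.

```python
import numpy as np

def attention(q, K, V, d_k):
    # q: query vector (d_k,), K: key matrix (n, d_k), V: value matrix (n, d_v)
    scores = K @ q / np.sqrt(d_k)          # one score per context word, re-scaled
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                   # softmax attention weights over the n words
    return alpha @ V                       # weighted sum of the value vectors

def multi_head_attention(h_i, H, W_q, W_k, W_v, W_mh):
    """h_i: hidden vector of the target word (d_model,)
    H: hidden vectors of all words in the sentence (n, d_model)
    W_q, W_k, W_v: per-head projection matrices (lists, one entry per head)
    W_mh: output projection (d_model, num_heads * d_v)"""
    heads = []
    for Wq, Wk, Wv in zip(W_q, W_k, W_v):
        q = Wq @ h_i                       # (d_k,)
        K = H @ Wk.T                       # (n, d_k)
        V = H @ Wv.T                       # (n, d_v)
        heads.append(attention(q, K, V, q.shape[0]))
    return W_mh @ np.concatenate(heads)    # concatenate heads and project back to d_model
```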
For the hidden layer $l$ ($1 \le l \le L$), the self-attention sub-layer output $\mathbf {f}^l_i$ is computed as follows: where LayerNorm refers to layer normalization BIBREF21 and the superscript $l$ and subscript $\mathbf {h}$ indicate that each encoder layer $l$ has an independent set of multi-head attention weight parameters (see Equations DISPLAY_FORM2 and ). The input for the first layer is $\mathbf {h}^0_i = \mathbf {E}(x_i)$, which is the non-contextualized word embedding of $x_i$. The second sub-layer consists of the position-wise fully connected FFNN, computed as: where, similar to self-attention, an independent set of weight parameters in Equation DISPLAY_FORM3 is defined in each layer. <<</Pre-Trained Contextualized Word Representation>>> <<<Incorporating Pre-Trained Contextualized Word Representation>>> As BERT is trained on the masked word prediction task, which is to predict a word given the surrounding (left and right) context, the pre-trained model already captures the context. In this section, we describe different techniques of leveraging BERT for WSD, broadly categorized into nearest neighbor matching and linear projection of hidden layers. <<<Nearest Neighbor Matching>>> A straightforward way to disambiguate word sense is through 1-nearest neighbor matching. We compute the contextualized representation of each word in the training data and the test data through BERT. Given a hidden representation $\mathbf {h}^L_{i}$ at the $L$-th layer for word $x_i$ in the test data, nearest neighbor matching finds a vector $\mathbf {h^*}$ in the $L$-th layer from the training data such that where the sense assigned to $x_i$ is the sense of the word whose contextualized representation is $\mathbf {h^*}$. This WSD technique is adopted in earlier work on contextualized word representations BIBREF10, BIBREF12. <<</Nearest Neighbor Matching>>> <<<Linear Projection of Hidden Layers>>> Apart from nearest neighbor matching, we can perform a linear projection of the hidden vector $\mathbf {h}_i$ by an affine transformation into an output sense vector, with its dimension equal to the number of senses for word $x_i$. The output of this affine transformation is normalized by softmax such that all its values sum to 1. Therefore, the predicted sense $\mathbf {s}_i$ of word $x_i$ is formulated as where $\mathbf {s}_i$ is a vector of predicted sense distribution for word $x_i$, while $\mathbf {W}^{\text{lexelt}(x_i)}$ and $\mathbf {b}^{\text{lexelt}(x_i)}$ are respectively the projection matrix and bias vector specific to the lexical element (lexelt) of word $x_i$, which consists of its lemma and optionally its part-of-speech tag. We choose the sense corresponding to the element of $\mathbf {s}_i$ with the maximum value. Training the linear projection model is done by the back-propagation algorithm, which updates the model parameters to minimize a cost function. Our cost function is the negative log-likelihood of the softmax output value that corresponds to the tagged sense in the training data. In addition, we propose two novel ways of incorporating BERT's hidden representation vectors to compute the sense output vector, which are described in the following sub-subsections. <<<Last Layer Projection>>> Similar to the nearest neighbor matching model, we can feed the hidden vector of BERT in the last layer, $\mathbf {h}^L_i$, into an affine transformation followed by softmax. That is, $\mathbf {h}_i$ in Equation DISPLAY_FORM10 is instantiated by $\mathbf {h}^L_i$. 
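As a concrete illustration of the 1-nearest neighbor matching strategy, here is a small sketch; cosine similarity is an assumption (the excerpt does not show the exact matching criterion), and all names are illustrative.

```python
import numpy as np

def nearest_neighbor_sense(h_test, train_vectors, train_senses):
    """h_test: L-th layer BERT vector of the target word in the test sentence (d_model,)
    train_vectors: (m, d_model) L-th layer vectors of sense-annotated training occurrences
    train_senses: list of m sense labels aligned with train_vectors"""
    a = train_vectors / np.linalg.norm(train_vectors, axis=1, keepdims=True)
    b = h_test / np.linalg.norm(h_test)
    return train_senses[int(np.argmax(a @ b))]   # sense of the most similar training vector
```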
The last layer projection model is illustrated in Figure FIGREF7(a). <<</Last Layer Projection>>> <<<Weighted Sum of Hidden Layers>>> BERT consists of multiple layers stacked one after another. Each layer carries a different representation of word context. Taking into account different hidden layers may help the WSD system to learn from different context information encoded in different layers of BERT. To take into account all layers, we compute the weighted sum of all hidden layers, $\mathbf {h}^l_i$, where $1 \le l \le L$, corresponding to a word position $i$, through attention mechanism. That is, $\mathbf {h}_i$ in Equation DISPLAY_FORM10 is replaced by the following equation: where $\mathbf {m} \in \mathbb {R}^{d_\text{model}}$ is a projection vector to obtain scalar values with the key vectors. The model with weighted sum of all hidden layers is illustrated in Figure FIGREF7(b). <<</Weighted Sum of Hidden Layers>>> <<<Gated Linear Unit>>> While the contextualized representations in the BERT hidden layer vectors are features that determine the word sense, some features are more useful than the others. As such, we propose filtering the vector values by a gating vector whose values range from 0 to 1. This mechanism is known as the gated linear unit (GLU) BIBREF22, which is formulated as where $\mathbf {W}^\mathbf {h}$ and $\mathbf {W}^\mathbf {g}$ are separate projection matrices and $\mathbf {b}^\mathbf {h}$ and $\mathbf {b}^\mathbf {g}$ are separate bias vectors. The symbols $\sigma (\cdot )$ and $\odot $ denote the sigmoid function and element-wise vector multiplication operation respectively. GLU transforms the input vector $\mathbf {h}$ by feeding it to two separate affine transformations. The second transformation is used as the sigmoid gate to filter the input vector, which is summed with the vector after the first affine transformation. <<</Gated Linear Unit>>> <<</Linear Projection of Hidden Layers>>> <<</Incorporating Pre-Trained Contextualized Word Representation>>> <<<Experimental Setup>>> We conduct experiments on various WSD tasks. The description and the statistics for each task are given in Table . For English, a lexical element (lexelt) is defined as a combination of lemma and part-of-speech tag, while for Chinese, it is simply the lemma, following the OntoNotes setup. We exploit English BERT$_\text{BASE}$ for the English tasks and Chinese BERT for the Chinese task. We conduct experiments with different strategies of incorporating BERT as described in Section SECREF4, namely 1-nearest neighbor matching (1-nn) and linear projection. In the latter technique, we explore strategies including simple last layer projection, layer weighting (LW), and gated linear unit (GLU). In the linear projection model, we train the model by the Adam algorithm BIBREF23 with a learning rate of $10^{-3}$. The model parameters are updated per mini-batch of 16 sentences. As update progresses, we pick the best model parameters from a series of neural network updates based on accuracy on a held-out development set, disjoint from the training set. The state-of-the-art supervised WSD approach takes into account features from the neighboring sentences, typically one sentence to the left and one to the right apart from the current sentence that contains the ambiguous words. We also exploit this in our model, as BERT supports inputs with multiple sentences separated by a special [SEP] symbol. 
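Tying together the projection strategies described in the previous section (weighted sum of hidden layers and the gated linear unit), below is a minimal PyTorch sketch of a sense-classification head for the target word; the class is illustrative, the GLU follows the standard formulation of BIBREF22, and in the actual model the output projection is specific to each lexelt, with layer weighting and GLU evaluated by the paper as separate variants rather than one combined head.

```python
import torch
import torch.nn as nn

class SenseProjectionHead(nn.Module):
    def __init__(self, d_model, n_senses):
        super().__init__()
        self.m = nn.Parameter(torch.zeros(d_model))   # layer-scoring vector for layer weighting
        self.w_h = nn.Linear(d_model, d_model)        # GLU: value transformation
        self.w_g = nn.Linear(d_model, d_model)        # GLU: gate transformation
        self.out = nn.Linear(d_model, n_senses)       # lexelt-specific affine projection

    def forward(self, layer_vectors):
        # layer_vectors: (L, d_model) hidden vectors of the target word from all L layers
        alpha = torch.softmax(layer_vectors @ self.m, dim=0)   # attention weights over layers
        h = alpha @ layer_vectors                              # weighted sum of hidden layers
        h = self.w_h(h) * torch.sigmoid(self.w_g(h))           # gated linear unit filter
        return torch.log_softmax(self.out(h), dim=-1)          # predicted sense distribution
```

Training such a head with the negative log-likelihood of the tagged sense corresponds to the cross-entropy objective described above.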
For English all-words WSD, we train our WSD model on SemCor BIBREF24, and test it on Senseval-2 (SE2), Senseval-3 (SE3), SemEval 2013 task 12 (SE13), and SemEval 2015 task 13 (SE15). This common benchmark, which has been annotated with WordNet-3.0 senses BIBREF25, has recently been adopted in English all-words WSD. Following BIBREF9, we choose SemEval 2007 Task 17 (SE07) as our development data to pick the best model parameters after a number of neural network updates, for models that require back-propagation training. We also evaluate on Senseval-2 and Senseval-3 English lexical sample tasks, which come with pre-defined training and test data. For each word type, we pick 20% of the training instances to be our development set and keep the remaining 80% as the actual training data. Through this development set, we determine the number of epochs to use in training. We then re-train the model with the whole training dataset using the number of epochs identified in the initial training step. While WSD is predominantly evaluated on English, we are also interested in evaluating our approach on Chinese, to evaluate the effectiveness of our approach in a different language. We use OntoNotes Release 5.0, which contains a number of annotations including word senses for Chinese. We follow the data setup of BIBREF26 and conduct an evaluation on four genres, i.e., broadcast conversation (BC), broadcast news (BN), magazine (MZ), and newswire (NW), as well as the concatenation of all genres. While the training and development datasets are divided into genres, we train on the concatenation of all genres and test on each individual genre. For Chinese WSD evaluation, we train IMS BIBREF5 on the Chinese OntoNotes dataset as our baseline. We also incorporate pre-trained non-contextualized Chinese word embeddings as IMS features BIBREF6, BIBREF7. The pre-trained word embeddings are obtained by training the word2vec skip-gram model on Chinese Gigaword Fifth Edition, which after automatic word segmentation contains approximately 2 billion words. Following BIBREF6, we incorporate the embedding features of words within a window surrounding the target ambiguous word. In our experiments, we take into account 5 words to the left and right. <<</Experimental Setup>>> <<<Results>>> We present our experimental results and compare them with prior baselines. <<<English All-Words Tasks>>> For English all-words WSD, we compare our approach with three categories of prior approaches. Firstly, we compare our approach with the supervised SVM classifier approach, namely IMS BIBREF5. We compare our approach with both the original IMS without word embedding features and IMS with non-contextualized word embedding features, that is, word2vec with exponential decay BIBREF7. We also compare with SupWSD BIBREF27, which is an alternative implementation of IMS. Secondly, we compare our approach with the neural WSD approaches that leverage bidirectional LSTM (bi-LSTM). These include the bi-LSTM model with attention trained jointly with lexical semantic labeling task BIBREF9 (BiLSTMatt+LEX) and the bi-LSTM model enhanced with gloss representation from WordNet (GAS). Thirdly, we show comparison with prior contextualized word representations for WSD, pre-trained on a large number of texts, namely context2vec BIBREF10 and ELMo BIBREF12. In these two models, WSD is treated as nearest neighbor matching as described in Section SECREF4. Table shows our WSD results in F1 measure. 
It is shown in the table that with the nearest neighbor matching model, BERT outperforms context2vec and ELMo. This shows the effectiveness of BERT's pre-trained contextualized word representation. When we include surrounding sentences, one to the left and one to the right, we get improved F1 scores consistently. We also show that linear projection to the sense output vector further improves WSD performance, by which our best results are achieved. While BERT has been shown to outperform other pre-trained contextualized word representations through the nearest neighbor matching experiments, its potential can be maximized through linear projection to the sense output vector. It is worthwhile to note that our more advanced linear projection, by means of layer weighting (§SECREF12 and gated linear unit (§SECREF14) gives the best F1 scores on all test sets. All our BERT WSD systems outperform gloss-enhanced neural WSD, which has the best overall score among all prior systems. <<</English All-Words Tasks>>> <<<English Lexical Sample Tasks>>> For English lexical sample tasks, we compare our approach with the original IMS BIBREF5 and IMS with non-contextualized word embedding features. The embedding features incorporated into IMS include CW embeddings BIBREF28, obtained from a convolutional language model, fine-tuned (adapted) to WSD BIBREF6 (+adapted CW), and word2vec skip-gram BIBREF29 with exponential decay BIBREF7 (+w2v+expdecay). We also compare our approach with the bi-LSTM, on top of which sense classification is defined BIBREF8, and context2vec BIBREF10, which is a contextualized pre-trained bi-LSTM model trained on 2B words of text. Finally, we also compare with a prior multi-task and semi-supervised WSD approach learned through alternating structure optimization (ASO) BIBREF3, which also utilizes unlabeled data for training. As shown in Table , our BERT-based WSD approach with linear projection model outperforms all prior approaches. context2vec, which is pre-trained on a large amount of texts, performs worse than the prior semi-supervised ASO approach on Senseval-3, while our best result outperforms ASO by a large margin. Neural bi-LSTM performs worse than IMS with non-contextualized word embedding features. Our neural model with pre-trained contextualized word representations outperforms the best result achieved by IMS on both Senseval-2 and Senseval-3. <<</English Lexical Sample Tasks>>> <<<Chinese OntoNotes WSD>>> We compare our approach with IMS without and with word embedding features as the baselines. The results are shown in Table . Across all genres, BERT outperforms the baseline IMS with word embedding (non-contextualized word representation) features BIBREF6. The latter also improves over the original IMS without word embedding features BIBREF5. Linear projection approaches consistently outperform nearest neighbor matching by a significant margin, similar to the results on English WSD tasks. The best overall result for the Chinese OntoNotes test set is achieved by the models with simple projection and hidden layer weighting. <<</Chinese OntoNotes WSD>>> <<</Results>>> <<<Discussion>>> Across all tasks (English all-words, English lexical sample, and Chinese OntoNotes), our experiments demonstrate the effectiveness of BERT over various prior WSD approaches. The best results are consistently obtained by linear projection models, which project the last hidden layer or the weighted sum of all hidden layers to an output sense vector. 
We can view the BERT hidden layer outputs as contextual features, which serve as useful cues in determining the word senses. In fact, the attention mechanism in transformer captures the surrounding words. In prior work like IMS BIBREF5, these contextual cues are captured by the manually-defined surrounding word and collocation features. The features obtained by the hidden vector output are shown to be more effective than the manually-defined features. We introduced two advanced linear projection techniques, namely layer weighting and gated linear unit (GLU). While BIBREF12 showed that the second biLSTM layer results in better WSD accuracy compared to the first layer (nearer to the individual word representation), we showed that taking into account different layers by means of the attention mechanism is useful for WSD. GLU as an activation function has been shown to be effective for better convergence and to overcome the vanishing gradient problem in the convolutional language model BIBREF22. In addition, the GLU gate vector, with values ranging from 0 to 1, can be seen as a filter for the features from the hidden layer vector. Compared with two prior contextualized word representations models, context2vec BIBREF10 and ELMo BIBREF12, BERT achieves higher accuracy. This shows the effectiveness of the attention mechanism used in the transformer model to represent the context. Our BERT WSD models outperform prior neural WSD models by a large margin. These prior neural WSD models perform comparably with IMS with embeddings as classifier features, in addition to the discrete features. While neural WSD approaches BIBREF8, BIBREF9, BIBREF17 exploit non-contextualized word embeddings which are trained on large texts, the hidden layers are trained only on a small amount of labeled data. <<</Discussion>>> <<<Conclusion>>> For the WSD task, we have proposed novel strategies of incorporating BERT, a pre-trained contextualized word representation which effectively captures the context in its hidden vectors. Our experiments show that linear projection of the hidden vectors, coupled with gating to filter the values, gives better results than the prior state of the art. Compared to prior neural and feature-based WSD approaches that make use of non-contextualized word representations, using pre-trained contextualized word representation with our proposed incorporation strategy achieves significantly higher scores. <<</Conclusion>>> <<</Title>>>
{ "references": [ " WSD is predominantly evaluated on English, we are also interested in evaluating our approach on Chinese" ], "type": "extractive" }
1910.00194
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What datasets are used for testing? Context: <<<Title>>> Improved Word Sense Disambiguation Using Pre-Trained Contextualized Word Representations <<<Abstract>>> Contextualized word representations are able to give different representations for the same word in different contexts, and they have been shown to be effective in downstream natural language processing tasks, such as question answering, named entity recognition, and sentiment analysis. However, evaluation on word sense disambiguation (WSD) in prior work shows that using contextualized word representations does not outperform the state-of-the-art approach that makes use of non-contextualized word embeddings. In this paper, we explore different strategies of integrating pre-trained contextualized word representations and our best strategy achieves accuracies exceeding the best prior published accuracies by significant margins on multiple benchmark WSD datasets. <<</Abstract>>> <<<Introduction>>> Word sense disambiguation (WSD) automatically assigns a pre-defined sense to a word in a text. Different senses of a word reflect different meanings a word has in different contexts. Identifying the correct word sense given a context is crucial in natural language processing (NLP). Unfortunately, while it is easy for a human to infer the correct sense of a word given a context, it is a challenge for NLP systems. As such, WSD is an important task and it has been shown that WSD helps downstream NLP tasks, such as machine translation BIBREF0 and information retrieval BIBREF1. A WSD system assigns a sense to a word by taking into account its context, comprising the other words in the sentence. This can be done through discrete word features, which typically involve surrounding words and collocations trained using a classifier BIBREF2, BIBREF3, BIBREF4, BIBREF5. The classifier can also make use of continuous word representations of the surrounding words BIBREF6, BIBREF7. Neural WSD systems BIBREF8, BIBREF9 feed the continuous word representations into a neural network that captures the whole sentence and the word representation in the sentence. However, in both approaches, the word representations are independent of the context. Recently, pre-trained contextualized word representations BIBREF10, BIBREF11, BIBREF12, BIBREF13 have been shown to improve downstream NLP tasks. Pre-trained contextualized word representations are obtained through neural sentence encoders trained on a huge amount of raw texts. When the resulting sentence encoder is fine-tuned on the downstream task, such as question answering, named entity recognition, and sentiment analysis, with much smaller annotated training data, it has been shown that the trained model, with the pre-trained sentence encoder component, achieves new state-of-the-art results on those tasks. While demonstrating superior performance in downstream NLP tasks, pre-trained contextualized word representations are still reported to give lower accuracy compared to approaches that use non-contextualized word representations BIBREF10, BIBREF12 when evaluated on WSD. This seems counter-intuitive, as a neural sentence encoder better captures the surrounding context that serves as an important cue to disambiguate words. In this paper, we explore different strategies of integrating pre-trained contextualized word representations for WSD. 
Our best strategy outperforms prior methods of incorporating pre-trained contextualized word representations and achieves new state-of-the-art accuracy on multiple benchmark WSD datasets. The following sections are organized as follows. Section SECREF2 presents related work. Section SECREF3 describes our pre-trained contextualized word representation. Section SECREF4 proposes different strategies to incorporate the contextualized word representation for WSD. Section SECREF5 describes our experimental setup. Section SECREF6 presents the experimental results. Section SECREF7 discusses the findings from the experiments. Finally, Section SECREF8 presents the conclusion. <<</Introduction>>> <<<Related Work>>> Continuous word representations in real-valued vectors, or commonly known as word embeddings, have been shown to help improve NLP performance. Initially, exploiting continuous representations was achieved by adding real-valued vectors as classification features BIBREF14. BIBREF6 fine-tuned non-contextualized word embeddings by a feed-forward neural network such that those word embeddings were more suited for WSD. The fine-tuned embeddings were incorporated into an SVM classifier. BIBREF7 explored different strategies of incorporating word embeddings and found that their best strategy involved exponential decay that decreased the contribution of surrounding word features as their distances to the target word increased. The neural sequence tagging approach has also been explored for WSD. BIBREF8 proposed bidirectional long short-term memory (LSTM) BIBREF15 for WSD. They concatenated the hidden states of the forward and backward LSTMs and fed the concatenation into an affine transformation followed by softmax normalization, similar to the approach to incorporate a bidirectional LSTM adopted in sequence labeling tasks such as part-of-speech tagging and named entity recognition BIBREF16. BIBREF9 proposed a self-attention layer on top of the concatenated bidirectional LSTM hidden states for WSD and introduced multi-task learning with part-of-speech tagging and semantic labeling as auxiliary tasks. However, on average across the test sets, their approach did not outperform SVM with word embedding features. Subsequently, BIBREF17 proposed the incorporation of glosses from WordNet in a bidirectional LSTM for WSD, and reported better results than both SVM and prior bidirectional LSTM models. A neural language model (LM) is aimed at predicting a word given its surrounding context. As such, the resulting hidden representation vector captures the context of a word in a sentence. BIBREF10 designed context2vec, which is a one-layer bidirectional LSTM trained to maximize the similarity between the hidden state representation of the LSTM and the target word embedding. BIBREF12 designed ELMo, which is a two-layer bidirectional LSTM language model trained to predict the next word in the forward LSTM and the previous word in the backward LSTM. In both models, WSD was evaluated by nearest neighbor matching between the test and training instance representations. However, despite training on a huge amount of raw texts, the resulting accuracies were still lower than those achieved by WSD approaches with pre-trained non-contextualized word representations. End-to-end neural machine translation (NMT) BIBREF18, BIBREF19 learns to generate an output sequence given an input sequence, using an encoder-decoder model. 
The encoder captures the contextualized representation of the words in the input sentence for the decoder to generate the output sentence. Following this intuition, BIBREF11 trained an encoder-decoder model on parallel texts and obtained pre-trained contextualized word representations from the encoder. <<</Related Work>>> <<<Pre-Trained Contextualized Word Representation>>> The contextualized word representation that we use is BERT BIBREF13, which is a bidirectional transformer encoder model BIBREF20 pre-trained on billions of words of texts. There are two tasks on which the model is trained, i.e., masked word and next sentence prediction. In both tasks, prediction accuracy is determined by the ability of the model to understand the context. A transformer encoder computes the representation of each word through an attention mechanism with respect to the surrounding words. Given a sentence $x^n_1$ of length $n$, the transformer computes the representation of each word $x_i$ through a multi-head attention mechanism, where the query vector is from $x_i$ and the key-value vector pairs are from the surrounding words $x_{i^{\prime }}$ ($1 \le i^{\prime } \le n$). The word representation produced by the transformer captures the contextual information of a word. The attention mechanism can be viewed as mapping a query vector $\mathbf {q}$ and a set of key-value vector pairs $(\mathbf {k}, \mathbf {v})$ to an output vector. The attention function $A(\cdot )$ computes the output vector which is the weighted sum of the value vectors and is defined as: where $\mathbf {K}$ and $\mathbf {V}$ are matrices, containing the key vectors and the value vectors of the words in the sentence respectively, and $\alpha (\mathbf {q}, \mathbf {k}, \rho )$ is a scalar attention weight between $\mathbf {q}$ and $\mathbf {k}$, re-scaled by a scalar $\rho $. Two building blocks for the transformer encoder are the multi-head attention mechanism and the position-wise feed-forward neural network (FFNN). The multi-head attention mechanism with $H$ heads leverages the attention function in Equation DISPLAY_FORM1 as follows: where $\oplus $ denotes concatenation of vectors, $\mathbf {W}_\text{MH} \in \mathbb {R}^{d_\text{model} \times Hd_\mathbf {v}}$, $\mathbf {W}^\mathbf {Q}_\eta , \mathbf {W}^\mathbf {K}_\eta \in \mathbb {R}^{d_\mathbf {k} \times d_\text{model}}$, and $ \mathbf {W}^\mathbf {V}_\eta \in \mathbb {R}^{d_\mathbf {v} \times d_\text{model}}$. The input vector $\mathbf {q} \in \mathbb {R}^{d_\text{model}}$ is the hidden vector for the ambiguous word, while input matrices $\mathbf {K}, \mathbf {V} \in \mathbb {R}^{d_\text{model} \times n}$ are the concatenation of the hidden vectors of all words in the sentence. For each attention head, the dimension of both the query and key vectors is $d_\mathbf {k}$ while the dimension of the value vector is $d_\mathbf {v}$. The encoder model dimension is $d_\text{model}$. The position-wise FFNN performs a non-linear transformation on the attention output corresponding to each input word position as follows: in which the input vector $\mathbf {u} \in \mathbb {R}^{d_\text{model}}$ is transformed to the output vector with dimension $d_\text{model}$ via a series of linear projections with the ReLU activation function. 
For the hidden layer $l$ ($1 \le l \le L$), the self-attention sub-layer output $\mathbf {f}^l_i$ is computed as follows: where LayerNorm refers to layer normalization BIBREF21 and the superscript $l$ and subscript $\mathbf {h}$ indicate that each encoder layer $l$ has an independent set of multi-head attention weight parameters (see Equations DISPLAY_FORM2 and ). The input for the first layer is $\mathbf {h}^0_i = \mathbf {E}(x_i)$, which is the non-contextualized word embedding of $x_i$. The second sub-layer consists of the position-wise fully connected FFNN, computed as: where, similar to self-attention, an independent set of weight parameters in Equation DISPLAY_FORM3 is defined in each layer. <<</Pre-Trained Contextualized Word Representation>>> <<<Incorporating Pre-Trained Contextualized Word Representation>>> As BERT is trained on the masked word prediction task, which is to predict a word given the surrounding (left and right) context, the pre-trained model already captures the context. In this section, we describe different techniques of leveraging BERT for WSD, broadly categorized into nearest neighbor matching and linear projection of hidden layers. <<<Nearest Neighbor Matching>>> A straightforward way to disambiguate word sense is through 1-nearest neighbor matching. We compute the contextualized representation of each word in the training data and the test data through BERT. Given a hidden representation $\mathbf {h}^L_{i}$ at the $L$-th layer for word $x_i$ in the test data, nearest neighbor matching finds a vector $\mathbf {h^*}$ in the $L$-th layer from the training data such that where the sense assigned to $x_i$ is the sense of the word whose contextualized representation is $\mathbf {h^*}$. This WSD technique is adopted in earlier work on contextualized word representations BIBREF10, BIBREF12. <<</Nearest Neighbor Matching>>> <<<Linear Projection of Hidden Layers>>> Apart from nearest neighbor matching, we can perform a linear projection of the hidden vector $\mathbf {h}_i$ by an affine transformation into an output sense vector, with its dimension equal to the number of senses for word $x_i$. The output of this affine transformation is normalized by softmax such that all its values sum to 1. Therefore, the predicted sense $\mathbf {s}_i$ of word $x_i$ is formulated as where $\mathbf {s}_i$ is a vector of predicted sense distribution for word $x_i$, while $\mathbf {W}^{\text{lexelt}(x_i)}$ and $\mathbf {b}^{\text{lexelt}(x_i)}$ are respectively the projection matrix and bias vector specific to the lexical element (lexelt) of word $x_i$, which consists of its lemma and optionally its part-of-speech tag. We choose the sense corresponding to the element of $\mathbf {s}_i$ with the maximum value. Training the linear projection model is done by the back-propagation algorithm, which updates the model parameters to minimize a cost function. Our cost function is the negative log-likelihood of the softmax output value that corresponds to the tagged sense in the training data. In addition, we propose two novel ways of incorporating BERT's hidden representation vectors to compute the sense output vector, which are described in the following sub-subsections. <<<Last Layer Projection>>> Similar to the nearest neighbor matching model, we can feed the hidden vector of BERT in the last layer, $\mathbf {h}^L_i$, into an affine transformation followed by softmax. That is, $\mathbf {h}_i$ in Equation DISPLAY_FORM10 is instantiated by $\mathbf {h}^L_i$. 
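Stepping back to the per-layer encoder computation described at the beginning of this section, the sketch below composes the two sub-layers of one encoder layer; the residual connections are an assumption carried over from the standard transformer encoder, since the equations themselves are omitted in this excerpt, and the learned gain and bias of LayerNorm are left out for brevity.

```python
import numpy as np

def layer_norm(x, eps=1e-12):
    # layer normalization over the feature dimension (learned gain and bias omitted)
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def encoder_layer(h_prev, H_prev, multi_head_attention, ffnn):
    """One encoder layer for a single word position.
    h_prev: previous-layer hidden vector of the word (d_model,)
    H_prev: previous-layer hidden vectors of all words (n, d_model)
    multi_head_attention, ffnn: callables holding this layer's weights"""
    f = layer_norm(h_prev + multi_head_attention(h_prev, H_prev))  # self-attention sub-layer
    return layer_norm(f + ffnn(f))                                 # position-wise FFNN sub-layer
```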
The last layer projection model is illustrated in Figure FIGREF7(a). <<</Last Layer Projection>>> <<<Weighted Sum of Hidden Layers>>> BERT consists of multiple layers stacked one after another. Each layer carries a different representation of word context. Taking into account different hidden layers may help the WSD system to learn from different context information encoded in different layers of BERT. To take into account all layers, we compute the weighted sum of all hidden layers, $\mathbf {h}^l_i$, where $1 \le l \le L$, corresponding to a word position $i$, through attention mechanism. That is, $\mathbf {h}_i$ in Equation DISPLAY_FORM10 is replaced by the following equation: where $\mathbf {m} \in \mathbb {R}^{d_\text{model}}$ is a projection vector to obtain scalar values with the key vectors. The model with weighted sum of all hidden layers is illustrated in Figure FIGREF7(b). <<</Weighted Sum of Hidden Layers>>> <<<Gated Linear Unit>>> While the contextualized representations in the BERT hidden layer vectors are features that determine the word sense, some features are more useful than the others. As such, we propose filtering the vector values by a gating vector whose values range from 0 to 1. This mechanism is known as the gated linear unit (GLU) BIBREF22, which is formulated as where $\mathbf {W}^\mathbf {h}$ and $\mathbf {W}^\mathbf {g}$ are separate projection matrices and $\mathbf {b}^\mathbf {h}$ and $\mathbf {b}^\mathbf {g}$ are separate bias vectors. The symbols $\sigma (\cdot )$ and $\odot $ denote the sigmoid function and element-wise vector multiplication operation respectively. GLU transforms the input vector $\mathbf {h}$ by feeding it to two separate affine transformations. The second transformation is used as the sigmoid gate to filter the input vector, which is summed with the vector after the first affine transformation. <<</Gated Linear Unit>>> <<</Linear Projection of Hidden Layers>>> <<</Incorporating Pre-Trained Contextualized Word Representation>>> <<<Experimental Setup>>> We conduct experiments on various WSD tasks. The description and the statistics for each task are given in Table . For English, a lexical element (lexelt) is defined as a combination of lemma and part-of-speech tag, while for Chinese, it is simply the lemma, following the OntoNotes setup. We exploit English BERT$_\text{BASE}$ for the English tasks and Chinese BERT for the Chinese task. We conduct experiments with different strategies of incorporating BERT as described in Section SECREF4, namely 1-nearest neighbor matching (1-nn) and linear projection. In the latter technique, we explore strategies including simple last layer projection, layer weighting (LW), and gated linear unit (GLU). In the linear projection model, we train the model by the Adam algorithm BIBREF23 with a learning rate of $10^{-3}$. The model parameters are updated per mini-batch of 16 sentences. As update progresses, we pick the best model parameters from a series of neural network updates based on accuracy on a held-out development set, disjoint from the training set. The state-of-the-art supervised WSD approach takes into account features from the neighboring sentences, typically one sentence to the left and one to the right apart from the current sentence that contains the ambiguous words. We also exploit this in our model, as BERT supports inputs with multiple sentences separated by a special [SEP] symbol. 
For English all-words WSD, we train our WSD model on SemCor BIBREF24, and test it on Senseval-2 (SE2), Senseval-3 (SE3), SemEval 2013 task 12 (SE13), and SemEval 2015 task 13 (SE15). This common benchmark, which has been annotated with WordNet-3.0 senses BIBREF25, has recently been adopted in English all-words WSD. Following BIBREF9, we choose SemEval 2007 Task 17 (SE07) as our development data to pick the best model parameters after a number of neural network updates, for models that require back-propagation training. We also evaluate on Senseval-2 and Senseval-3 English lexical sample tasks, which come with pre-defined training and test data. For each word type, we pick 20% of the training instances to be our development set and keep the remaining 80% as the actual training data. Through this development set, we determine the number of epochs to use in training. We then re-train the model with the whole training dataset using the number of epochs identified in the initial training step. While WSD is predominantly evaluated on English, we are also interested in evaluating our approach on Chinese, to evaluate the effectiveness of our approach in a different language. We use OntoNotes Release 5.0, which contains a number of annotations including word senses for Chinese. We follow the data setup of BIBREF26 and conduct an evaluation on four genres, i.e., broadcast conversation (BC), broadcast news (BN), magazine (MZ), and newswire (NW), as well as the concatenation of all genres. While the training and development datasets are divided into genres, we train on the concatenation of all genres and test on each individual genre. For Chinese WSD evaluation, we train IMS BIBREF5 on the Chinese OntoNotes dataset as our baseline. We also incorporate pre-trained non-contextualized Chinese word embeddings as IMS features BIBREF6, BIBREF7. The pre-trained word embeddings are obtained by training the word2vec skip-gram model on Chinese Gigaword Fifth Edition, which after automatic word segmentation contains approximately 2 billion words. Following BIBREF6, we incorporate the embedding features of words within a window surrounding the target ambiguous word. In our experiments, we take into account 5 words to the left and right. <<</Experimental Setup>>> <<<Results>>> We present our experimental results and compare them with prior baselines. <<<English All-Words Tasks>>> For English all-words WSD, we compare our approach with three categories of prior approaches. Firstly, we compare our approach with the supervised SVM classifier approach, namely IMS BIBREF5. We compare our approach with both the original IMS without word embedding features and IMS with non-contextualized word embedding features, that is, word2vec with exponential decay BIBREF7. We also compare with SupWSD BIBREF27, which is an alternative implementation of IMS. Secondly, we compare our approach with the neural WSD approaches that leverage bidirectional LSTM (bi-LSTM). These include the bi-LSTM model with attention trained jointly with lexical semantic labeling task BIBREF9 (BiLSTMatt+LEX) and the bi-LSTM model enhanced with gloss representation from WordNet (GAS). Thirdly, we show comparison with prior contextualized word representations for WSD, pre-trained on a large number of texts, namely context2vec BIBREF10 and ELMo BIBREF12. In these two models, WSD is treated as nearest neighbor matching as described in Section SECREF4. Table shows our WSD results in F1 measure. 
It is shown in the table that with the nearest neighbor matching model, BERT outperforms context2vec and ELMo. This shows the effectiveness of BERT's pre-trained contextualized word representation. When we include surrounding sentences, one to the left and one to the right, we get improved F1 scores consistently. We also show that linear projection to the sense output vector further improves WSD performance, by which our best results are achieved. While BERT has been shown to outperform other pre-trained contextualized word representations through the nearest neighbor matching experiments, its potential can be maximized through linear projection to the sense output vector. It is worthwhile to note that our more advanced linear projection, by means of layer weighting (§SECREF12 and gated linear unit (§SECREF14) gives the best F1 scores on all test sets. All our BERT WSD systems outperform gloss-enhanced neural WSD, which has the best overall score among all prior systems. <<</English All-Words Tasks>>> <<<English Lexical Sample Tasks>>> For English lexical sample tasks, we compare our approach with the original IMS BIBREF5 and IMS with non-contextualized word embedding features. The embedding features incorporated into IMS include CW embeddings BIBREF28, obtained from a convolutional language model, fine-tuned (adapted) to WSD BIBREF6 (+adapted CW), and word2vec skip-gram BIBREF29 with exponential decay BIBREF7 (+w2v+expdecay). We also compare our approach with the bi-LSTM, on top of which sense classification is defined BIBREF8, and context2vec BIBREF10, which is a contextualized pre-trained bi-LSTM model trained on 2B words of text. Finally, we also compare with a prior multi-task and semi-supervised WSD approach learned through alternating structure optimization (ASO) BIBREF3, which also utilizes unlabeled data for training. As shown in Table , our BERT-based WSD approach with linear projection model outperforms all prior approaches. context2vec, which is pre-trained on a large amount of texts, performs worse than the prior semi-supervised ASO approach on Senseval-3, while our best result outperforms ASO by a large margin. Neural bi-LSTM performs worse than IMS with non-contextualized word embedding features. Our neural model with pre-trained contextualized word representations outperforms the best result achieved by IMS on both Senseval-2 and Senseval-3. <<</English Lexical Sample Tasks>>> <<<Chinese OntoNotes WSD>>> We compare our approach with IMS without and with word embedding features as the baselines. The results are shown in Table . Across all genres, BERT outperforms the baseline IMS with word embedding (non-contextualized word representation) features BIBREF6. The latter also improves over the original IMS without word embedding features BIBREF5. Linear projection approaches consistently outperform nearest neighbor matching by a significant margin, similar to the results on English WSD tasks. The best overall result for the Chinese OntoNotes test set is achieved by the models with simple projection and hidden layer weighting. <<</Chinese OntoNotes WSD>>> <<</Results>>> <<<Discussion>>> Across all tasks (English all-words, English lexical sample, and Chinese OntoNotes), our experiments demonstrate the effectiveness of BERT over various prior WSD approaches. The best results are consistently obtained by linear projection models, which project the last hidden layer or the weighted sum of all hidden layers to an output sense vector. 
We can view the BERT hidden layer outputs as contextual features, which serve as useful cues in determining the word senses. In fact, the attention mechanism in transformer captures the surrounding words. In prior work like IMS BIBREF5, these contextual cues are captured by the manually-defined surrounding word and collocation features. The features obtained by the hidden vector output are shown to be more effective than the manually-defined features. We introduced two advanced linear projection techniques, namely layer weighting and gated linear unit (GLU). While BIBREF12 showed that the second biLSTM layer results in better WSD accuracy compared to the first layer (nearer to the individual word representation), we showed that taking into account different layers by means of the attention mechanism is useful for WSD. GLU as an activation function has been shown to be effective for better convergence and to overcome the vanishing gradient problem in the convolutional language model BIBREF22. In addition, the GLU gate vector, with values ranging from 0 to 1, can be seen as a filter for the features from the hidden layer vector. Compared with two prior contextualized word representations models, context2vec BIBREF10 and ELMo BIBREF12, BERT achieves higher accuracy. This shows the effectiveness of the attention mechanism used in the transformer model to represent the context. Our BERT WSD models outperform prior neural WSD models by a large margin. These prior neural WSD models perform comparably with IMS with embeddings as classifier features, in addition to the discrete features. While neural WSD approaches BIBREF8, BIBREF9, BIBREF17 exploit non-contextualized word embeddings which are trained on large texts, the hidden layers are trained only on a small amount of labeled data. <<</Discussion>>> <<<Conclusion>>> For the WSD task, we have proposed novel strategies of incorporating BERT, a pre-trained contextualized word representation which effectively captures the context in its hidden vectors. Our experiments show that linear projection of the hidden vectors, coupled with gating to filter the values, gives better results than the prior state of the art. Compared to prior neural and feature-based WSD approaches that make use of non-contextualized word representations, using pre-trained contextualized word representation with our proposed incorporation strategy achieves significantly higher scores. <<</Conclusion>>> <<</Title>>>
{ "references": [ "Senseval-2 (SE2), Senseval-3 (SE3), SemEval 2013 task 12 (SE13), and SemEval 2015 task 13 (SE15),OntoNotes Release 5.0" ], "type": "extractive" }
1908.11860
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: By how much does their model outperform the baseline in the cross-domain evaluation? Context: <<<Title>>> Adapt or Get Left Behind: Domain Adaptation through BERT Language Model Finetuning for Aspect-Target Sentiment Classification <<<Abstract>>> Aspect-Target Sentiment Classification (ATSC) is a subtask of Aspect-Based Sentiment Analysis (ABSA), which has many applications e.g. in e-commerce, where data and insights from reviews can be leveraged to create value for businesses and customers. Recently, deep transfer-learning methods have been applied successfully to a myriad of Natural Language Processing (NLP) tasks, including ATSC. Building on top of the prominent BERT language model, we approach ATSC using a two-step procedure: self-supervised domain-specific BERT language model finetuning, followed by supervised task-specific finetuning. Our findings on how to best exploit domain-specific language model finetuning enable us to produce new state-of-the-art performance on the SemEval 2014 Task 4 restaurants dataset. In addition, to explore the real-world robustness of our models, we perform cross-domain evaluation. We show that a cross-domain adapted BERT language model performs significantly better than strong baseline models like vanilla BERT-base and XLNet-base. Finally, we conduct a case study to interpret model prediction errors. <<</Abstract>>> <<<Introduction>>> Sentiment Analysis (SA) is an active field of research in Natural Language Processing and deals with opinions in text. A typical application of classical SA in an industrial setting would be to classify a document like a product review into positive, negative or neutral sentiment polarity. In contrast to SA, the more fine-grained task of Aspect Based Sentiment Analysis (ABSA) BIBREF0, BIBREF1 aims at finding both the aspect of an entity like a restaurant and the sentiment associated with this aspect. It is important to note that ABSA comes in two variants. We will use the sentence “I love their dumplings” to explain these variants in detail. Both variants are implemented as a two-step procedure. The first variant is comprised of Aspect-Category Detection (ACD) followed by Aspect-Category Sentiment Classification (ACSC). ACD is a multilabel classification task, where a sentence can be associated with a set of predefined aspect categories like "food" and "service" in the restaurants domain. In the second step, ACSC, the sentiment polarity associated with the aspect is classified. For our example sentence the correct result is (“food”, “positive”). The second variant consists of Aspect-Target Extraction (ATE) followed by Aspect-Target Sentiment Classification (ATSC). ATE is a sequence labeling task, where terms like “dumplings” are detected. In the second step, ATSC, the sentiment polarity associated with the aspect-target is determined. In our example the correct result is the tuple ("dumplings", "positive"). In this work, we focus on ATSC. In recent years, specialized neural architectures BIBREF2, BIBREF3 have been developed that substantially improved the modeling of this target-context relationship. More recently, the Natural Language Processing community experienced a substantial shift towards using pre-trained language models BIBREF4, BIBREF5, BIBREF6, BIBREF7 as a base for many down-stream tasks, including ABSA BIBREF8, BIBREF9, BIBREF10. 
We still see huge potential that comes with this trend, which is why we approach the ATSC task using the BERT architecture. As shown by BIBREF9, for the ATSC task the performance of models that were pre-trained on general text corpora is improved substantially by finetuning the model on domain-specific corpora (in their case, review corpora) that have not been used for pre-training BERT or other language models. We extend the work by Xu et al. by further investigating the behavior of finetuning the BERT language model in relation to ATSC performance. In particular, our contributions are: The analysis of the influence of the number of training steps used for BERT language model finetuning on the Aspect-Target Sentiment Classification performance. The findings on how to exploit BERT language model finetuning enable us to achieve new state-of-the-art performance on the SemEval 2014 restaurants dataset. The analysis of cross-domain adaptation between the laptops and restaurants domains. Adaptation is tested by finetuning the BERT language model self-supervised on the target domain and then training it in a supervised way on the ATSC task in the source domain. In addition, the performance of training on the combination of both datasets is measured. <<</Introduction>>> <<<Related Works>>> We separate our discussion of related work into two areas: first, neural methods applied to ATSC that have improved performance solely through model architecture improvements; second, methods that additionally aim to transfer knowledge from semantically related tasks or domains. <<<Architecture Improvements for Aspect-Target Sentiment Classification>>> The datasets typically used for Aspect-Target Sentiment Classification are the SemEval 2014 Task 4 datasets BIBREF1 for the restaurants and laptops domains. Unfortunately, both datasets only have a small number of training examples. One common approach to compensate for insufficient training examples is to invent neural architectures that better model ATSC. For example, in the past a big leap in classification performance was achieved with the use of the Memory Network architecture BIBREF3, which uses memory to remember context words and explicitly models attention over both the target word and context. It was found that making full use of context words improves their model compared to previous models BIBREF2 that make use of left- and right-sided context independently. BIBREF8 proposed Attention Encoder Networks (AEN), a modification to the transformer architecture. The authors split the Multi-Head Attention (MHA) layers into Intra-MHA and Inter-MHA layers in order to model target words and context differently, which results in a more lightweight model compared to the transformer architecture. Another recent performance leap was achieved by BIBREF11, who model dependencies between sentiment words explicitly in sentences with more than one aspect-target by using a graph convolutional neural network. They show that their architecture performs particularly well if multiple aspects are present in a sentence. <<</Architecture Improvements for Aspect-Target Sentiment Classification>>> <<<Knowledge Transfer for Aspect-Target Sentiment Classification Analysis>>> Another approach to compensate for insufficient training examples is to transfer knowledge across domains or across similar tasks. BIBREF12 proposed Multi-Granularity Alignment Networks (MGAN). They use this architecture to transfer knowledge both from an aspect-category classification task and across different domains. 
They built a large-scale aspect-category dataset specifically for this purpose. BIBREF13 transfer knowledge from a document-level sentiment classification task trained on the Amazon review dataset BIBREF14. They successfully apply pre-training by reusing the weights of a Long Short-Term Memory (LSTM) network BIBREF15 that has been trained on the document-level sentiment task. In addition, they apply multi-task learning, where aspect-level and document-level tasks are learned simultaneously by minimizing a joint loss function. Similarly, BIBREF9 introduce a multi-task loss function to simultaneously optimize the BERT model's BIBREF7 pre-training objectives as well as a question answering task. In contrast to the methods described above, which aim to transfer knowledge from a different source task like question answering or document-level sentiment classification, this paper aims at transferring knowledge across different domains by finetuning the BERT language model. <<</Knowledge Transfer for Aspect-Target Sentiment Classification Analysis>>> <<</Related Works>>> <<<Methodology>>> We approach the Aspect-Target Sentiment Classification task using a two-step procedure. We use the pre-trained BERT architecture as a basis. In the first step we finetune the pre-trained weights of the language model further in a self-supervised way on a domain-specific corpus. In the second step we train the finetuned language model in a supervised way on the ATSC end-task. In the following subsections, we discuss the BERT architecture, how we finetune the language model, and how we transform the ATSC task into a BERT sequence-pair classification task BIBREF10. Finally, we discuss the different end-task training and domain-specific finetuning combinations we employ to evaluate our model's generalization performance not only in-domain but also cross-domain. <<<BERT>>> The BERT model builds on many previous innovations: contextualized word representations BIBREF4, the transformer architecture BIBREF16, and pre-training on a language modeling task with subsequent end-to-end finetuning on a downstream task BIBREF5, BIBREF6. Due to being deeply bidirectional, the BERT architecture creates very powerful sequence representations that perform extremely well on many downstream tasks BIBREF7. The main innovation of BERT is that, instead of the usual next-word prediction objective, a different objective is used to train the language model. This objective consists of two parts. The first part is the masked language model objective, where the model learns to predict tokens, which have been randomly masked, from the context. The second part is the next-sequence prediction objective, where the model needs to predict if a sequence $B$ would naturally follow the previous sequence $A$. This objective enables the model to capture long-term dependencies better. Both objectives are discussed in more detail in the next section. As a base for our experiments we use the BERTBASE model, which has been pre-trained by the Google research team. It has the following parameters: 12 layers, 768 hidden dimensions per token and 12 attention heads. It has 110 million parameters in total. For finetuning the BERT language model on a specific domain we use the weights of BERTBASE as a starting point. <<<BERT Language Model Finetuning>>> As the first step of our procedure we perform language model finetuning of the BERT model using domain-specific corpora. Algorithmically, this is equivalent to pre-training. 
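To make this self-supervised finetuning step concrete, below is a minimal sketch of the masked-language-model input preparation; the 15% masking rate follows standard BERT pre-training and, like the Hugging Face transformers usage shown in the comments, is an assumption rather than the authors' released code.

```python
import torch

def mask_tokens(input_ids, mask_token_id, mlm_prob=0.15):
    """Prepare one masked-LM example from a tokenized BERT input sequence.
    Positions with label -100 do not contribute to the loss; in practice,
    special tokens would be excluded from masking and BERT also keeps or
    randomly replaces a fraction of the selected tokens."""
    input_ids = input_ids.clone()
    labels = input_ids.clone()
    selected = torch.rand(input_ids.shape) < mlm_prob
    labels[~selected] = -100              # loss is computed only on masked positions
    input_ids[selected] = mask_token_id
    return input_ids, labels

# Hypothetical usage with the Hugging Face transformers API (an assumption):
#   from transformers import BertForMaskedLM
#   model = BertForMaskedLM.from_pretrained("bert-base-uncased")
#   out = model(input_ids=masked.unsqueeze(0), labels=labels.unsqueeze(0))
#   out.loss.backward()   # step 1: domain LM finetuning; step 2 then trains on ATSC
```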
The domain-specific language model finetuning as an intermediate step to ATSC has been shown by BIBREF9. As an extension to their paper we investigate the limits of language model finetuning in terms of how end-task performance is dependent on the amount of training steps. The training input representation for language model finetuning consists of two sequences $s_A$ and $s_B$ in the format of $"\textrm {[CLS]} \ s_{A} \ \textrm {[SEP]} \ s_{B} \ \textrm {[SEP]}"$, where [CLS] is a dummy token used for downstream classification and [SEP] are separator tokens. <<<Masked Language Model Objective>>> The sequences $A$ and $B$ have tokens randomly masked out in order for the model to learn to predict them. The following example shows why domain-specific finetuning can alleviate the bias from pre-training on a Wikipedia corpus: "The touchscreen is an [MASK] device". In the fact-based context of Wikipedia the [MASK] could be "input" and in the review domain a typical guess could be the general opinion word "amazing". <<</Masked Language Model Objective>>> <<<Next-Sentence Prediction>>> In order to train BERT to capture long-term dependencies better, the model is trained to predict if sequence $B$ follows sequence $A$. If this is the case, sequence A and sequence B are jointly sampled from the same document in the order they are occuring naturally. Otherwise the sequences are sampled randomly from the training corpus. <<</Next-Sentence Prediction>>> <<</BERT Language Model Finetuning>>> <<<Aspect-Target Sentiment Classification>>> The ATSC task aims at classifying sentiment polarity into the three classes positive, negative, neutral with respect to an aspect-target. The input to the classifier is a tokenized sentence $s=s_{1:n}$ and a target $t=s_{j:j+m}$ contained in the sentence, where $j < j+m \le n$. Similar to previous work by BIBREF10, we transform the input into a format compatible with BERT sequence-pair classification tasks: $"\textrm {[CLS]} \ s \ \textrm {[SEP]} \ t \ \textrm {[SEP]}"$. In the BERT architecture the position of the token embeddings is structurally maintained after each Multi-Head Attention layer. Therefore, we refer to the last hidden representation of the [CLS] token as $h_{[CLS]} \in \mathbf {R}^{768 \times 1}$. The number of sentiment polarity classes is three. A distribution $p \in [0,1]^3$ over these classes is predicted using a fully-connected layer with 3 output neurons on top of $h_{[CLS]}$, followed by a softmax activation function where $b \in \mathbf {R}^3$ and $W \in \mathbf {R}^{3 \times 768}$. Cross-entropy is used as the training loss. The way we use BERT for classifying the sentiment polaritites is equivalent to how BERT is used for sequence-pair classification tasks in the original paper BIBREF7. <<</Aspect-Target Sentiment Classification>>> <<<Domain Adaptation through Language Model Finetuning>>> In academia, it is common that the performance of a machine learning model is evaluated in-domain. This means that the model is evaluated on a test set that comes from the same distribution as the training set. In real-world applications this setting is not always valid, as the trained model is used to predict previously unseen data. In order to evaluate the performance of a machine learning model more robustly, its generalization error can be evaluated across different domains, i.e. cross-domain. Additionally, the model itself can be adapted towards a target domain. 
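Returning to the sequence-pair classification setup described in the Aspect-Target Sentiment Classification subsection above, here is a minimal illustrative sketch assuming the HuggingFace transformers library; the stock sequence-classification head (a linear layer on the pooled [CLS] representation) stands in for the linear-plus-softmax head on $h_{[CLS]}$ described in the text, and the example sentence, target, and label mapping are placeholders.

```python
# Illustrative sketch only: ATSC as BERT sequence-pair classification.
# The tokenizer builds "[CLS] sentence [SEP] target [SEP]"; the head yields
# 3 logits (positive / negative / neutral), trained with cross-entropy.
import torch
from transformers import AutoTokenizer, BertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                      num_labels=3)

sentence = "The dumplings were great but the service was slow."  # placeholder
target = "service"                                               # placeholder
inputs = tokenizer(sentence, target, return_tensors="pt")

label = torch.tensor([1])                  # placeholder label id for "negative"
out = model(**inputs, labels=label)        # cross-entropy loss + class logits
probs = torch.softmax(out.logits, dim=-1)  # distribution over the 3 polarities
```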
Adapting the model itself towards a target domain is known as Domain Adaptation, which is a special case of Transductive Transfer Learning in the taxonomy of BIBREF17. Here, it is typically assumed that supervised data for a specific task is only available for a source domain $S$, whereas only unsupervised data is available in the target domain $T$. The goal is to optimize performance of the task in the target domain while transferring task-specific knowledge from the source domain. If we map this framework to our challenge, we define Aspect-Target Sentiment Classification as the transfer task and use BERT language model finetuning for domain adaptation. In terms of which domain the language model is finetuned on, the full transfer procedure can be expressed as $D_{LM} \rightarrow D_{Train} \rightarrow D_{Test}$. Here, $D_{LM}$ stands for the domain on which the language model is finetuned and can take on the values of Restaurants, Laptops or (Restaurants $\cup $ Laptops). The domain for training $D_{Train}$ can take on the same values; for the joint case, the training datasets for laptops and restaurants are simply combined. The domain for testing $D_{Test}$ can only take on the values Restaurants or Laptops. Combining finetuning and training steps gives us nine different evaluation scenarios, which we group into the following four categories: <<</Domain Adaptation through Language Model Finetuning>>> <<<In-Domain Training>>> ATSC is trained on a domain-specific dataset and evaluated on the test set from the same domain. This can be expressed as $D_{LM} \rightarrow T \rightarrow T,$ where $T$ is our target domain and can be either Laptops or Restaurants. It is expected that the performance of the model is best if $D_{LM} = T$. <<</In-Domain Training>>> <<<Cross-Domain Training>>> ATSC is trained on a domain-specific dataset and evaluated on the test set from the other domain. This can be expressed as $D_{LM} \rightarrow S \rightarrow T,$ where $S\ne T$ are source and target domain and can be either Laptops or Restaurants. <<</Cross-Domain Training>>> <<<Cross-Domain Adaptation>>> As a special case of cross-domain training, we expect performance to be optimal if $D_{LM} = T$. This is the Domain Adaptation variant and is written as $T \rightarrow S \rightarrow T.$ <<</Cross-Domain Adaptation>>> <<<Joint-Domain Training>>> ATSC is trained on both domain-specific datasets jointly and evaluated on both test sets independently. This can be expressed as $D_{LM} \rightarrow (S \cup T) \rightarrow T,$ where $S\ne T$ are source and target domain and can either be Laptops or Restaurants. <<</Joint-Domain Training>>> <<</Methodology>>> <<<Experiments>>> In our experiments we aim to answer the following research questions (RQs): RQ1: How does the number of training iterations in the BERT language model finetuning stage influence the ATSC end-task performance? At what point does performance start to improve, and when does it converge? RQ2: When trained in-domain, what ATSC end-task performance can be reached through fully exploited finetuning of the BERT language model? RQ3: When trained cross-domain in the special case of domain adaptation, what ATSC end-task performance can be reached if BERT language model finetuning is fully exploited? <<<Datasets for Classification and Language Model Finetuning>>> We conduct experiments using the two SemEval 2014 Task 4 Subtask 2 datasets BIBREF1 for the laptops and the restaurants domain. The two datasets contain sentences with multiple marked aspect terms that each have an associated 3-level sentiment polarity (positive, neutral or negative).
In the original dataset the conflict label is also present. Here, conflicting labels are dropped for reasons of comparability with BIBREF9. Both datasets are small, detailed statistics are shown in tab:datasets. For BERT language model finetuning we prepare three corpora for the two domains of laptops and restaurants. For the restaurants domain we use Yelp Dataset Challenge reviews and for the laptops domain we use Amazon Laptop reviews BIBREF14. For the laptop domain we filtered out reviews that appear in the SemEval 2014 laptops dataset to avoid training bias for the test data. To be compatible with the next-sentence prediction task used during fine tuning, we removed reviews containing less than two sentences. For the laptop corpus, $1,007,209$ sentences are left after pre-processing. For the restaurants domain more reviews are available, we sampled $10,000,000$ sentences to have a sufficient amount of data for fully exploitet language model finetuning. In order to compensate for the smaller amount of finetuning data in the laptops domain, we finetune for more epochs, 30 epochs in the case of the laptops domain compared to 3 epochs for the restaurants domain, so that the BERT model trains on about 30 million sentences in both cases. This means that 1 sentence can be seen multiple times with a different language model masking. We also create a mixed corpus to jointly finetune both domains. Here, we sample 1 Mio. restaurant reviews and combine them with the laptop reviews. This results in about 2 Mio. reviews that are finetuned for 15 epochs. The exact statistics for the three finetuning corpora are shown in the top of tab:datasets. To be able to reproduce our finetuning corpora, we make the code that is used to generate them available online. <<</Datasets for Classification and Language Model Finetuning>>> <<<Hyperparameters>>> We use BERTBASE (uncased) as the base for all of our experiments, with the exception of XLNetBASE (cased), which is used as one of the baseline models. For the BERT language model finetuning we use 32 bit floating point computations using the Adam optimizer BIBREF18. The batchsize is set to 32 while the learning rate is set to $3\cdot 10^{-5}$. The maximum input sequence length is set to 256 tokens, which amounts to about 4 sentences per sequence on average. As shown in tab:datasets, we finetune the language models on each domain so that the model trains a total of about 30 Mio. sentences (7.5 Mio. sequences). For training the BERT and XLNet models on the down-stream task of ATSC we use mixed 16 bit and 32 bit floating point computations, the Adam optimizer, and a learning rate of $3\cdot 10^{-5}$ and a batchsize of 32. We train the model for a total of 7 epochs. The validation accuracy converges after about 3 epochs of training on all datasets, but training loss still improves after that. It is important to note that all our results reported are the average of 9 runs with different random initializations. This is needed to measure significance of improvements, as the standard deviation in accuray amounts to roughly $1\%$ for all experiments, see fig:acc-dep-lmiterations. <<</Hyperparameters>>> <<<Compared Methods>>> We compare in-domain results to current state of the art methods, which we will now describe briefly. SDGCN-BERT BIBREF11 explicitly models sentiment dependencies for sentences with multiple aspects with a graph convolutional network. This method is current state-of-the-art on the SemEval 2014 laptops dataset. 
AEN-BERT BIBREF8 is an attentional encoder network. When used on top of BERT embeddings this method performs especially well on the laptops dataset. BERT-SPC BIBREF8 is BERT used in sentence-pair classification mode. This is exactly the same method as our BERT-base baseline and therefore, we can cross-check the authors results. BERT-PT BIBREF9 uses multi-task fine-tuning prior to downstream classification, where the BERT language model is finetuned jointly with a question answering task. It performs state-of-the-art on the restaurants dataset prior to this paper. To our knowledge, cross- and joint-domain training on the SemEval 2014 Task 4 datasets has not been analyzed so far. Thus, we compare our method to two very strong baselines: BERT and XLNet. BERT-base BIBREF7 is using the pre-trained BERTBASE embeddings directly on the down-stream task without any domain specific language model finetuning. XLNet-base BIBREF19 is a method also based on general language model pre-training similar to BERT. Instead of randomly masking tokens for pre-training like in BERT a more general permutation objective is used, where all possible variants of masking are fully exploitet. Our models are BERT models whose language model has been finetuned on different domain corpora. BERT-ADA Lapt is the BERT language model finetuned on the laptops domain corpus. BERT-ADA Rest is the BERT language model finetuned on the restaurant domain corpus. BERT-ADA Joint is the BERT language model finetuned on the corpus containing an equal amount of laptops and restaurants reviews. <<</Compared Methods>>> <<<Results Analysis>>> The results of our experiments are shown in fig:acc-dep-lmiterations and tab:results respectively. To answer RQ1, which is concerned with details on domain-specific language model finetuning, we can see in fig:acc-dep-lmiterations that first of all, language model finetuning has a substantial effect on ATSC end-task performance. Secondly, we see that in the laptops domain the performance starts to increase at about 10 Mio. finetuned sentences. This is an interesting insight as one would expect a relation closer to a logarithmic curve. One reason might be that it takes many steps to train knowledge into the BERT language model due to its vast amount of parameters. The model already converges at around 17 Mio. sentences. More finetuning does not improve performance significantly. In addition, we find that different runs have a high variance, the standard deviation amounts to about $1\%$ in accuracy, which justifies averaging over 9 runs to measure differences in model performance reliably. To answer RQ2, which is concerned with in-domain ATSC performance, we see in tab:results that for the in-domain training case, our models BERT-ADA Lapt and BERT-ADA Rest achieve performance close to state-of-the-art on the laptops dataset and new state-of-the-art on the restaurants dataset with accuracies of $79.19\%$ and $87.14\%$, respectively. On the restaurants dataset, this corresponds to an absolute improvement of $2.2\%$ compared to the previous state-of-the-art method BERT-PT. Language model finetuning produces a larger improvement on the restaurants dataset. We think that one reason for that might be that the restaurants domain is underrepresented in the pre-training corpora of BERTBASE. Generally, we find that language model finetuning helps even if the finetuning domain does not match the evaluation domain. 
We think the reason for this might be that the BERT-base model is pre-trained more on knowledge-based corpora like Wikipedia than on text containing opinions. Another finding is that BERT-ADA Joint performs better on the laptops dataset than BERT-ADA Rest, although the number of unique laptop reviews is the same in the laptops and joint corpora. We think that confusion can be created when mixing the domains, but this needs to be investigated further. We also find that the XLNet-base baseline performs generally stronger than BERT-base and even outperforms BERT-ADA Lapt with an accuracy of $79.89\%$ on the laptops dataset. To answer RQ3, which is concerned with domain adaptation, we can see in the grayed out cells in tab:results, which correspond to the cross-domain adaptation case where the BERT language model is trained on the target domain, that domain adaptation works well, with a $2.2\%$ absolute accuracy improvement on the laptops test set and even a $3.6\%$ accuracy improvement on the restaurants test set compared to BERT-base. In general, the ATSC task generalizes well cross-domain, with about a 2-$3\%$ drop in accuracy compared to in-domain training. We think the reason for this might be that syntactic relationships between the aspect-target and the phrase expressing sentiment polarity, as well as knowing the sentiment polarity itself, are sufficient to solve the ATSC task in many cases. For the joint-training case, we find that combining both training datasets improves performance on both test sets. This result is intuitive, as more training data leads to better performance if the domains do not confuse each other. Also interesting for the joint-training case is that the BERT-ADA Joint model performs especially strongly when measured by the Macro-F1 metric. A reason for this might be that the SemEval 2014 datasets are imbalanced due to the dominance of the positive label. It seems that, by finetuning the language model on both domains, the model learns to classify the neutral class much better, especially in the laptops domain. <<</Results Analysis>>> <<</Experiments>>> <<<Conclusion>>> We performed experiments on the task of Aspect-Target Sentiment Classification by first finetuning a pre-trained BERT model on a domain-specific corpus, with subsequent training on the down-stream classification task. We analyzed the behavior of the number of domain-specific BERT language model finetuning steps in relation to the end-task performance. With the findings on how to best exploit BERT language model finetuning, we were able to train high-performing models, one of which even sets a new state of the art on the SemEval 2014 Task 4 restaurants dataset. We further evaluated our models cross-domain to explore the robustness of Aspect-Target Sentiment Classification. We found that, in general, this task transfers well between the laptops and the restaurants domain. As a special case, we ran a cross-domain adaptation experiment, where the BERT language model is specifically finetuned on the target domain. We achieve a significant improvement over unadapted models; a cross-domain adapted model even performs better than a BERT-base model that is trained in-domain. Overall, our findings reveal promising directions for follow-up work. The XLNet-base model performs strongly on the ATSC task. Here, domain-specific finetuning could probably bring significant performance improvements.
Another interesting direction for future work would be to investigate cross-domain behavior for an additional domain like hotels, which is more similar to the restaurants domain. Here, it would be interesting to find out whether the shared nature of these domains results in more confusion or whether they behave synergistically. <<</Conclusion>>> <<</Title>>>
{ "references": [ "$2.2\\%$ absolute accuracy improvement on the laptops test set,$3.6\\%$ accuracy improvement on the restaurants test set" ], "type": "extractive" }
1908.11860
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What are the performance results? Context: <<<Title>>> Adapt or Get Left Behind: Domain Adaptation through BERT Language Model Finetuning for Aspect-Target Sentiment Classification <<<Abstract>>> Aspect-Target Sentiment Classification (ATSC) is a subtask of Aspect-Based Sentiment Analysis (ABSA), which has many applications e.g. in e-commerce, where data and insights from reviews can be leveraged to create value for businesses and customers. Recently, deep transfer-learning methods have been applied successfully to a myriad of Natural Language Processing (NLP) tasks, including ATSC. Building on top of the prominent BERT language model, we approach ATSC using a two-step procedure: self-supervised domain-specific BERT language model finetuning, followed by supervised task-specific finetuning. Our findings on how to best exploit domain-specific language model finetuning enable us to produce new state-of-the-art performance on the SemEval 2014 Task 4 restaurants dataset. In addition, to explore the real-world robustness of our models, we perform cross-domain evaluation. We show that a cross-domain adapted BERT language model performs significantly better than strong baseline models like vanilla BERT-base and XLNet-base. Finally, we conduct a case study to interpret model prediction errors. <<</Abstract>>> <<<Introduction>>> Sentiment Analysis (SA) is an active field of research in Natural Language Processing and deals with opinions in text. A typical application of classical SA in an industrial setting would be to classify a document like a product review into positve, negative or neutral sentiment polarity. In constrast to SA, the more fine-grained task of Aspect Based Sentiment Analysis (ABSA) BIBREF0, BIBREF1 aims at finding both the aspect of an entity like a restaurant and the sentiment associated with this aspect. It is important to note that ABSA comes in two variants. We will use the sentence “I love their dumplings” to explain these variants in detail. Both variants are implemented as a two-step procedure. The first variant is comprised of Aspect-Category Detection (ACD) followed by Aspect-Category Sentiment Classification (ACSC). ACD is a multilabel classification task, where a sentence can be associated with a set of predefined aspect categories like "food" and "service" in the restaurants domain. In the second step, ACSC, the sentiment polarity associated to the aspect is classified. For our example-sentence the correct result is (“food”, “positive”). The second variant consists of Aspect-Target Extraction (ATE) followed by Aspect-Target Sentiment Classification (ATSC). ATE is a sequence labeling task, where terms like “dumplings” are detected. In the second step, ATSC, the sentiment polarity associated to the aspect-target is determined. In our example the correct result is the tuple ("dumplings", "positive"). In this work, we focus on ATSC. In the last years, specialized neural architectures BIBREF2, BIBREF3 have been developed that substantially improved modeling of this target-context relationship. More recently, the Natural Language Processing community experienced a substantial shift towards using pre-trained language models BIBREF4, BIBREF5, BIBREF6, BIBREF7 as a base for many down-stream tasks, including ABSA BIBREF8, BIBREF9, BIBREF10. 
We still see huge potential that comes with this trend, this is why we approach the ATSC task using the BERT architecture. As shown by BIBREF9, for the ATSC task the performance of models that were pre-trained on general text corpora is improved substantially by finetuning the model on domain-specific corpora — in their case review corpora — that have not been used for pre-training BERT, or other language models. We extend the work by Xu et al. by further investigating the behavior of finetuning the BERT language model in relation to ATSC performance. In particular, our contributions are: The analysis of the influence of the amount of training-steps used for BERT language model finetuning on the Aspect-Target Sentiment Classification performance. The findings on how to exploit BERT language model finetuning enables us to achieve new state-of-the-art performance on the SemEval 2014 restaurants dataset. The analysis of cross-domain adaptation between the laptops and restaurants domain. Adaptation is tested by finetuning the BERT language model self-supervised on the target-domain and then supervised training on the ATSC task in the source-domain. In addition, the performance of training on the combination of both datasets is measured. <<</Introduction>>> <<<Related Works>>> We separate our discussion of related work into two areas: First, neural methods applied to ATSC that have improved performance solely by model architecture improvements. Secondly, methods that additionally aim to transfer knowledge from semantically related tasks or domains. <<<Architecture Improvements for Aspect-Target Sentiment Classification>>> The datasets typically used for Aspect-Target Sentiment Classification are the SemEval 2014 Task 4 datasets BIBREF1 for the restaurants and laptops domain. Unfortunately, both datasets only have a small number of training examples. One common approach to compensate for insufficient training examples is to invent neural architectures that better model ATSC. For example, in the past a big leap in classification performance was achieved with the use of the Memory Network architecture BIBREF3, which uses memory to remember context words and explicitly models attention over both the target word and context. It was found that making full use of context words improves their model compared to previous models BIBREF2 that make use of left- and right-sided context independently. BIBREF8 proposed Attention Encoder Networks (AEN), a modification to the transformer architecture. The authors split the Multi-Head Attention (MHA) layers into Intra-MHA and Inter-MHA layers in order to model target words and context differently, which results in a more lightweight model compared to the transformer architecture. Another recent performance leap was achieved by BIBREF11, who model dependencies between sentiment words explicitly in sentences with more than one aspect-target by using a graph convolutional neural network. They show that their architecture performs particularly well if multiple aspects are present in a sentence. <<</Architecture Improvements for Aspect-Target Sentiment Classification>>> <<<Knowledge Transfer for Aspect-Target Sentiment Classification Analysis>>> Another approach to compensate for insufficient training examples is to transfer knowledge across domains or across similar tasks. BIBREF12 proposed Multi-Granularity Alignment Networks (MGAN). They use this architecture to transfer knowledge from both an aspect-category classification task and also across different domains. 
They built a large scale aspect-category dataset specifically for this. BIBREF13 transfer knowledge from a document-level sentiment classification task trained on the amazon review dataset BIBREF14. They successfully apply pre-training by reusing the weights of a Long Short Term Memory (LSTM) network BIBREF15 that has been trained on the document-level sentiment task. In addition, they apply multi-task learning where aspect and document-level tasks are learned simultaneously by minimizing a joint loss function. Similarly, BIBREF9 introduce a multi-task loss function to simultaneously optimize the BERT model's BIBREF7 pre-training objectives as well as a question answering task. In contrast to the methods described above that aim to transfer knowledge from a different source task like question answering or document-level sentiment classification, this paper aims at transferring knowledge across different domains by finetuning the BERT language model. <<</Knowledge Transfer for Aspect-Target Sentiment Classification Analysis>>> <<</Related Works>>> <<<Methodology>>> We approach the Aspect-Target Sentiment Classification task using a two-step procedure. We use the pre-trained BERT architecture as a basis. In the first step we finetune the pre-trained weights of the language model further in a self-supervised way on a domain-specific corpus. In the second step we train the finetuned language model in a supervised way on the ATSC end-task. In the following subsections, we discuss the BERT architecture, how we finetune the language model, and how we transform the ATSC task into a BERT sequence-pair classification task BIBREF10. Finally, we discuss the different end-task training and domain-specific finetuning combinations we employ to evaluate our model's generalization performance not only in-domain but also cross-domain. <<<BERT>>> The BERT model builds on many previous innovations: contextualized word representations BIBREF4, the transformer architecture BIBREF16, and pre-training on a language modeling task with subsequent end-to-end finetuning on a downstream task BIBREF5, BIBREF6. Due to being deeply bidirectional, the BERT architecture creates very powerful sequence representations that perform extremely well on many downstream tasks BIBREF7. The main innovation of BERT is that instead of using the objective of next-word prediction a different objective is used to train the language model. This objective consists of 2 parts. The first part is the masked language model objective, where the model learns to predict tokens, which have been randomly masked, from the context. The second part is the next-sequence prediction objective, where the model needs to predict if a sequence $B$ would naturally follow the previous sequence $A$. This objective enables the model to capture long-term dependencies better. Both objectives are discussed in more detail in the next section. As a base for our experiments we use the BERTBASE model, which has been pre-trained by the Google research team. It has the following parameters: 12 layers, 768 hidden dimensions per token and 12 attention heads. It has 110 Mio. parameters in total. For finetuning the BERT language model on a specific domain we use the weights of BERTBASE as a starting point. <<</BERT>>> <<<BERT Language Model Finetuning>>> As the first step of our procedure we perform language model finetuning of the BERT model using domain-specific corpora. Algorithmically, this is equivalent to pre-training. 
The domain-specific language model finetuning as an intermediate step to ATSC has been shown by BIBREF9. As an extension to their paper we investigate the limits of language model finetuning in terms of how end-task performance is dependent on the amount of training steps. The training input representation for language model finetuning consists of two sequences $s_A$ and $s_B$ in the format of $"\textrm {[CLS]} \ s_{A} \ \textrm {[SEP]} \ s_{B} \ \textrm {[SEP]}"$, where [CLS] is a dummy token used for downstream classification and [SEP] are separator tokens. <<<Masked Language Model Objective>>> The sequences $A$ and $B$ have tokens randomly masked out in order for the model to learn to predict them. The following example shows why domain-specific finetuning can alleviate the bias from pre-training on a Wikipedia corpus: "The touchscreen is an [MASK] device". In the fact-based context of Wikipedia the [MASK] could be "input" and in the review domain a typical guess could be the general opinion word "amazing". <<</Masked Language Model Objective>>> <<<Next-Sentence Prediction>>> In order to train BERT to capture long-term dependencies better, the model is trained to predict if sequence $B$ follows sequence $A$. If this is the case, sequence A and sequence B are jointly sampled from the same document in the order they are occuring naturally. Otherwise the sequences are sampled randomly from the training corpus. <<</Next-Sentence Prediction>>> <<</BERT Language Model Finetuning>>> <<<Aspect-Target Sentiment Classification>>> The ATSC task aims at classifying sentiment polarity into the three classes positive, negative, neutral with respect to an aspect-target. The input to the classifier is a tokenized sentence $s=s_{1:n}$ and a target $t=s_{j:j+m}$ contained in the sentence, where $j < j+m \le n$. Similar to previous work by BIBREF10, we transform the input into a format compatible with BERT sequence-pair classification tasks: $"\textrm {[CLS]} \ s \ \textrm {[SEP]} \ t \ \textrm {[SEP]}"$. In the BERT architecture the position of the token embeddings is structurally maintained after each Multi-Head Attention layer. Therefore, we refer to the last hidden representation of the [CLS] token as $h_{[CLS]} \in \mathbf {R}^{768 \times 1}$. The number of sentiment polarity classes is three. A distribution $p \in [0,1]^3$ over these classes is predicted using a fully-connected layer with 3 output neurons on top of $h_{[CLS]}$, followed by a softmax activation function where $b \in \mathbf {R}^3$ and $W \in \mathbf {R}^{3 \times 768}$. Cross-entropy is used as the training loss. The way we use BERT for classifying the sentiment polaritites is equivalent to how BERT is used for sequence-pair classification tasks in the original paper BIBREF7. <<</Aspect-Target Sentiment Classification>>> <<<Domain Adaptation through Language Model Finetuning>>> In academia, it is common that the performance of a machine learning model is evaluated in-domain. This means that the model is evaluated on a test set that comes from the same distribution as the training set. In real-world applications this setting is not always valid, as the trained model is used to predict previously unseen data. In order to evaluate the performance of a machine learning model more robustly, its generalization error can be evaluated across different domains, i.e. cross-domain. Additionally, the model itself can be adapted towards a target domain. 
This is known as Domain Adaptation, which is a special case of Transductive Transfer Learning in the taxonomy of BIBREF17. Here, it is typically assumed that supervised data for a specific task is only available for a source domain $S$, whereas only unsupervised data is available in the target domain $T$. The goal is to optimize performance of the task in the target domain while transferring task-specific knowledge from the source domain. If we map this framework to our challenge, we define Aspect-Target Sentiment Classification as the transfer task and use BERT language model finetuning for domain adaptation. In terms of which domain the language model is finetuned on, the full transfer procedure can be expressed as $D_{LM} \rightarrow D_{Train} \rightarrow D_{Test}$. Here, $D_{LM}$ stands for the domain on which the language model is finetuned and can take on the values of Restaurants, Laptops or (Restaurants $\cup $ Laptops). The domain for training $D_{Train}$ can take on the same values; for the joint case, the training datasets for laptops and restaurants are simply combined. The domain for testing $D_{Test}$ can only take on the values Restaurants or Laptops. Combining finetuning and training steps gives us nine different evaluation scenarios, which we group into the following four categories: <<</Domain Adaptation through Language Model Finetuning>>> <<<In-Domain Training>>> ATSC is trained on a domain-specific dataset and evaluated on the test set from the same domain. This can be expressed as $D_{LM} \rightarrow T \rightarrow T,$ where $T$ is our target domain and can be either Laptops or Restaurants. It is expected that the performance of the model is best if $D_{LM} = T$. <<</In-Domain Training>>> <<<Cross-Domain Training>>> ATSC is trained on a domain-specific dataset and evaluated on the test set from the other domain. This can be expressed as $D_{LM} \rightarrow S \rightarrow T,$ where $S\ne T$ are source and target domain and can be either Laptops or Restaurants. <<</Cross-Domain Training>>> <<<Cross-Domain Adaptation>>> As a special case of cross-domain training, we expect performance to be optimal if $D_{LM} = T$. This is the Domain Adaptation variant and is written as $T \rightarrow S \rightarrow T.$ <<</Cross-Domain Adaptation>>> <<<Joint-Domain Training>>> ATSC is trained on both domain-specific datasets jointly and evaluated on both test sets independently. This can be expressed as $D_{LM} \rightarrow (S \cup T) \rightarrow T,$ where $S\ne T$ are source and target domain and can either be Laptops or Restaurants. <<</Joint-Domain Training>>> <<</Methodology>>> <<<Experiments>>> In our experiments we aim to answer the following research questions (RQs): RQ1: How does the number of training iterations in the BERT language model finetuning stage influence the ATSC end-task performance? At what point does performance start to improve, and when does it converge? RQ2: When trained in-domain, what ATSC end-task performance can be reached through fully exploited finetuning of the BERT language model? RQ3: When trained cross-domain in the special case of domain adaptation, what ATSC end-task performance can be reached if BERT language model finetuning is fully exploited? <<<Datasets for Classification and Language Model Finetuning>>> We conduct experiments using the two SemEval 2014 Task 4 Subtask 2 datasets BIBREF1 for the laptops and the restaurants domain. The two datasets contain sentences with multiple marked aspect terms that each have an associated 3-level sentiment polarity (positive, neutral or negative).
In the original dataset the conflict label is also present. Here, conflicting labels are dropped for reasons of comparability with BIBREF9. Both datasets are small, detailed statistics are shown in tab:datasets. For BERT language model finetuning we prepare three corpora for the two domains of laptops and restaurants. For the restaurants domain we use Yelp Dataset Challenge reviews and for the laptops domain we use Amazon Laptop reviews BIBREF14. For the laptop domain we filtered out reviews that appear in the SemEval 2014 laptops dataset to avoid training bias for the test data. To be compatible with the next-sentence prediction task used during fine tuning, we removed reviews containing less than two sentences. For the laptop corpus, $1,007,209$ sentences are left after pre-processing. For the restaurants domain more reviews are available, we sampled $10,000,000$ sentences to have a sufficient amount of data for fully exploitet language model finetuning. In order to compensate for the smaller amount of finetuning data in the laptops domain, we finetune for more epochs, 30 epochs in the case of the laptops domain compared to 3 epochs for the restaurants domain, so that the BERT model trains on about 30 million sentences in both cases. This means that 1 sentence can be seen multiple times with a different language model masking. We also create a mixed corpus to jointly finetune both domains. Here, we sample 1 Mio. restaurant reviews and combine them with the laptop reviews. This results in about 2 Mio. reviews that are finetuned for 15 epochs. The exact statistics for the three finetuning corpora are shown in the top of tab:datasets. To be able to reproduce our finetuning corpora, we make the code that is used to generate them available online. <<</Datasets for Classification and Language Model Finetuning>>> <<<Hyperparameters>>> We use BERTBASE (uncased) as the base for all of our experiments, with the exception of XLNetBASE (cased), which is used as one of the baseline models. For the BERT language model finetuning we use 32 bit floating point computations using the Adam optimizer BIBREF18. The batchsize is set to 32 while the learning rate is set to $3\cdot 10^{-5}$. The maximum input sequence length is set to 256 tokens, which amounts to about 4 sentences per sequence on average. As shown in tab:datasets, we finetune the language models on each domain so that the model trains a total of about 30 Mio. sentences (7.5 Mio. sequences). For training the BERT and XLNet models on the down-stream task of ATSC we use mixed 16 bit and 32 bit floating point computations, the Adam optimizer, and a learning rate of $3\cdot 10^{-5}$ and a batchsize of 32. We train the model for a total of 7 epochs. The validation accuracy converges after about 3 epochs of training on all datasets, but training loss still improves after that. It is important to note that all our results reported are the average of 9 runs with different random initializations. This is needed to measure significance of improvements, as the standard deviation in accuray amounts to roughly $1\%$ for all experiments, see fig:acc-dep-lmiterations. <<</Hyperparameters>>> <<<Compared Methods>>> We compare in-domain results to current state of the art methods, which we will now describe briefly. SDGCN-BERT BIBREF11 explicitly models sentiment dependencies for sentences with multiple aspects with a graph convolutional network. This method is current state-of-the-art on the SemEval 2014 laptops dataset. 
AEN-BERT BIBREF8 is an attentional encoder network. When used on top of BERT embeddings this method performs especially well on the laptops dataset. BERT-SPC BIBREF8 is BERT used in sentence-pair classification mode. This is exactly the same method as our BERT-base baseline and therefore, we can cross-check the authors results. BERT-PT BIBREF9 uses multi-task fine-tuning prior to downstream classification, where the BERT language model is finetuned jointly with a question answering task. It performs state-of-the-art on the restaurants dataset prior to this paper. To our knowledge, cross- and joint-domain training on the SemEval 2014 Task 4 datasets has not been analyzed so far. Thus, we compare our method to two very strong baselines: BERT and XLNet. BERT-base BIBREF7 is using the pre-trained BERTBASE embeddings directly on the down-stream task without any domain specific language model finetuning. XLNet-base BIBREF19 is a method also based on general language model pre-training similar to BERT. Instead of randomly masking tokens for pre-training like in BERT a more general permutation objective is used, where all possible variants of masking are fully exploitet. Our models are BERT models whose language model has been finetuned on different domain corpora. BERT-ADA Lapt is the BERT language model finetuned on the laptops domain corpus. BERT-ADA Rest is the BERT language model finetuned on the restaurant domain corpus. BERT-ADA Joint is the BERT language model finetuned on the corpus containing an equal amount of laptops and restaurants reviews. <<</Compared Methods>>> <<<Results Analysis>>> The results of our experiments are shown in fig:acc-dep-lmiterations and tab:results respectively. To answer RQ1, which is concerned with details on domain-specific language model finetuning, we can see in fig:acc-dep-lmiterations that first of all, language model finetuning has a substantial effect on ATSC end-task performance. Secondly, we see that in the laptops domain the performance starts to increase at about 10 Mio. finetuned sentences. This is an interesting insight as one would expect a relation closer to a logarithmic curve. One reason might be that it takes many steps to train knowledge into the BERT language model due to its vast amount of parameters. The model already converges at around 17 Mio. sentences. More finetuning does not improve performance significantly. In addition, we find that different runs have a high variance, the standard deviation amounts to about $1\%$ in accuracy, which justifies averaging over 9 runs to measure differences in model performance reliably. To answer RQ2, which is concerned with in-domain ATSC performance, we see in tab:results that for the in-domain training case, our models BERT-ADA Lapt and BERT-ADA Rest achieve performance close to state-of-the-art on the laptops dataset and new state-of-the-art on the restaurants dataset with accuracies of $79.19\%$ and $87.14\%$, respectively. On the restaurants dataset, this corresponds to an absolute improvement of $2.2\%$ compared to the previous state-of-the-art method BERT-PT. Language model finetuning produces a larger improvement on the restaurants dataset. We think that one reason for that might be that the restaurants domain is underrepresented in the pre-training corpora of BERTBASE. Generally, we find that language model finetuning helps even if the finetuning domain does not match the evaluation domain. 
We think the reason for this might be that the BERT-base model is pre-trained more on knowledge-based corpora like Wikipedia than on text containing opinions. Another finding is that BERT-ADA Joint performs better on the laptops dataset than BERT-ADA Rest, although the number of unique laptop reviews is the same in the laptops and joint corpora. We think that confusion can be created when mixing the domains, but this needs to be investigated further. We also find that the XLNet-base baseline performs generally stronger than BERT-base and even outperforms BERT-ADA Lapt with an accuracy of $79.89\%$ on the laptops dataset. To answer RQ3, which is concerned with domain adaptation, we can see in the grayed out cells in tab:results, which correspond to the cross-domain adaptation case where the BERT language model is trained on the target domain, that domain adaptation works well, with a $2.2\%$ absolute accuracy improvement on the laptops test set and even a $3.6\%$ accuracy improvement on the restaurants test set compared to BERT-base. In general, the ATSC task generalizes well cross-domain, with about a 2-$3\%$ drop in accuracy compared to in-domain training. We think the reason for this might be that syntactic relationships between the aspect-target and the phrase expressing sentiment polarity, as well as knowing the sentiment polarity itself, are sufficient to solve the ATSC task in many cases. For the joint-training case, we find that combining both training datasets improves performance on both test sets. This result is intuitive, as more training data leads to better performance if the domains do not confuse each other. Also interesting for the joint-training case is that the BERT-ADA Joint model performs especially strongly when measured by the Macro-F1 metric. A reason for this might be that the SemEval 2014 datasets are imbalanced due to the dominance of the positive label. It seems that, by finetuning the language model on both domains, the model learns to classify the neutral class much better, especially in the laptops domain. <<</Results Analysis>>> <<</Experiments>>> <<<Conclusion>>> We performed experiments on the task of Aspect-Target Sentiment Classification by first finetuning a pre-trained BERT model on a domain-specific corpus, with subsequent training on the down-stream classification task. We analyzed the behavior of the number of domain-specific BERT language model finetuning steps in relation to the end-task performance. With the findings on how to best exploit BERT language model finetuning, we were able to train high-performing models, one of which even sets a new state of the art on the SemEval 2014 Task 4 restaurants dataset. We further evaluated our models cross-domain to explore the robustness of Aspect-Target Sentiment Classification. We found that, in general, this task transfers well between the laptops and the restaurants domain. As a special case, we ran a cross-domain adaptation experiment, where the BERT language model is specifically finetuned on the target domain. We achieve a significant improvement over unadapted models; a cross-domain adapted model even performs better than a BERT-base model that is trained in-domain. Overall, our findings reveal promising directions for follow-up work. The XLNet-base model performs strongly on the ATSC task. Here, domain-specific finetuning could probably bring significant performance improvements.
Another interesting direction for future work would be to investigate cross-domain behavior for an additional domain like hotels, which is more similar to the restaurants domain. Here, it would be interesting to find out whether the shared nature of these domains results in more confusion or whether they behave synergistically. <<</Conclusion>>> <<</Title>>>
{ "references": [ "results that for the in-domain training case, our models BERT-ADA Lapt and BERT-ADA Rest achieve performance close to state-of-the-art on the laptops dataset,new state-of-the-art on the restaurants dataset with accuracies of $79.19\\%$ and $87.14\\%$, respectively." ], "type": "extractive" }
2002.09758
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What off-the-shelf QA model was used to answer sub-questions? Context: <<<Title>>> Unsupervised Question Decomposition for Question Answering <<<Abstract>>> We aim to improve question answering (QA) by decomposing hard questions into easier sub-questions that existing QA systems can answer. Since collecting labeled decompositions is cumbersome, we propose an unsupervised approach to produce sub-questions. Specifically, by leveraging>10M questions from Common Crawl, we learn to map from the distribution of multi-hop questions to the distribution of single-hop sub-questions. We answer sub-questions with an off-the-shelf QA model and incorporate the resulting answers in a downstream, multi-hop QA system. On a popular multi-hop QA dataset, HotpotQA, we show large improvements over a strong baseline, especially on adversarial and out-of-domain questions. Our method is generally applicable and automatically learns to decompose questions of different classes, while matching the performance of decomposition methods that rely heavily on hand-engineering and annotation. <<</Abstract>>> <<<Introduction>>> Question answering (QA) systems have become remarkably good at answering simple, single-hop questions but still struggle with compositional, multi-hop questions BIBREF0, BIBREF1. In this work, we examine if we can answer hard questions by leveraging our ability to answer simple questions. Specifically, we approach QA by breaking a hard question into a series of sub-questions that can be answered by a simple, single-hop QA system. The system's answers can then be given as input to a downstream QA system to answer the hard question, as shown in Fig. FIGREF1. Our approach thus answers the hard question in multiple, smaller steps, which can be easier than answering the hard question all at once. For example, it may be easier to answer “What profession do H. L. Mencken and Albert Camus have in common?” when given the answers to the sub-questions “What profession does H. L. Mencken have?” and “Who was Albert Camus?” Prior work in learning to decompose questions into sub-questions has relied on extractive heuristics, which generalizes poorly to different domains and question types, and requires human annotation BIBREF2, BIBREF3. In order to scale to any arbitrary question, we would require sophisticated natural language generation capabilities, which often relies on large quantities of high-quality supervised data. Instead, we find that it is possible to learn to decompose questions without supervision. Specifically, we learn to map from the distribution of hard questions to the distribution of simpler questions. First, we automatically construct a noisy, “pseudo-decomposition” for each hard question by retrieving relevant sub-question candidates based on their similarity to the given hard question. We retrieve candidates from a corpus of 10M simple questions that we extracted from Common Crawl. Second, we train neural text generation models on that data with (1) standard sequence-to-sequence learning and (2) unsupervised sequence-to-sequence learning. The latter has the advantage that it can go beyond the noisy pairing between questions and pseudo-decompositions. Fig. FIGREF2 overviews our decomposition approach. We use decompositions to improve multi-hop QA. We first use an off-the-shelf single-hop QA model to answer decomposed sub-questions. 
We then give each sub-question and its answer as additional input to a multi-hop QA model. We test our method on HotpotQA BIBREF0, a popular multi-hop QA benchmark. Our contributions are as follows. First, QA models relying on decompositions improve accuracy over a strong baseline by 3.1 F1 on the original dev set, 11 F1 on the multi-hop dev set from BIBREF4, and 10 F1 on the out-of-domain dev set from BIBREF3. Our most effective decomposition model is a 12-block transformer encoder-decoder BIBREF5 trained using unsupervised sequence-to-sequence learning, involving masked language modeling, denoising, and back-translation objectives BIBREF6. Second, our method is competitive with state-of-the-art methods SAE BIBREF7 and HGN BIBREF8 which leverage strong supervision. Third, we show that our approach automatically learns to generate useful decompositions for all 4 question types in HotpotQA, highlighting the general nature of our approach. In our analysis, we explore how sub-questions improve multi-hop QA, and we provide qualitative examples that highlight how question decomposition adds a form of interpretability to black-box QA models. Our ablations show that each component of our pipeline contributes to QA performance. Overall, we find that it is possible to successfully decompose questions without any supervision and that doing so improves QA. <<</Introduction>>> <<<Method>>> We now formulate the problem and overview our high-level approach, with details in the following section. We aim to leverage a QA model that is accurate on simple questions to answer hard questions, without using supervised question decompositions. Here, we consider simple questions to be “single-hop” questions that require reasoning over one paragraph or piece of evidence, and we consider hard questions to be “multi-hop.” Our aim is then to train a multi-hop QA model $M$ to provide the correct answer $a$ to a multi-hop question $q$ about a given a context $c$ (e.g., several paragraphs). Normally, we would train $M$ to maximize $\log p_M(a | c, q)$. To help $M$, we leverage a single-hop QA model that may be queried with sub-questions $s_1, \dots , s_N$, whose “sub-answers” to each sub-question $a_1, \dots , a_N$ may be provided to the multi-hop QA model. $M$ may then instead maximize the (potentially easier) objective $\log p_M(a | c, q, [s_1, a_1], \dots , [a_N, s_N])$. Supervised decomposition models learn to map each question $q \in Q$ to a decomposition $d = [s_1; \dots ; s_N]$ of $N$ sub-questions $s_n \in S$ using annotated $(q, d)$ examples. In this work, we do not assume access to strong $(q, d)$ supervision. To leverage the single-hop QA model without supervision, we follow a three-stage approach: 1) map a question $q$ into sub-questions $s_1, \dots , s_N$ via unsupervised techniques, 2) find sub-answers $a_1, \dots , a_N$ with the single-hop QA model, and 3) provide $s_1, \dots , s_N$ and $a_1, \dots , a_N$ to help predict $a$. <<<Unsupervised Question Decomposition>>> To train a decomposition model, we need appropriate training data. We assume access to a hard question corpus $Q$ and a simple question corpus $S$. Instead of using supervised $(q, d)$ training examples, we design an algorithm that constructs pseudo-decompositions $d^{\prime }$ to form $(q, d^{\prime })$ pairs from $Q$ and $S$ using an unsupervised approach (§SECREF4). We then train a model to map $q$ to a decomposition. We explore learning to decompose with standard and unsupervised sequence-to-sequence learning (§SECREF6). 
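Before moving on to how pseudo-decompositions are constructed, the three-stage procedure above can be summarized in a short illustrative sketch; `decompose`, `single_hop_qa`, and `multi_hop_qa` are hypothetical stand-ins for the decomposition model, the off-the-shelf single-hop QA model, and the downstream multi-hop QA model, not actual functions from the paper's code.

```python
# Illustrative sketch only: the three-stage pipeline.
# The three callables are hypothetical stand-ins for the trained models.
from typing import Callable, List

def answer_hard_question(question: str,
                         context: str,
                         decompose: Callable[[str], List[str]],
                         single_hop_qa: Callable[[str, str], str],
                         multi_hop_qa: Callable[[str, str, list], str]) -> str:
    # 1) map the hard question q to sub-questions s_1, ..., s_N
    sub_questions = decompose(question)
    # 2) answer each sub-question with the single-hop QA model
    sub_answers = [single_hop_qa(s, context) for s in sub_questions]
    # 3) pass [s_1, a_1], ..., [s_N, a_N] to the multi-hop model as extra input
    return multi_hop_qa(question, context,
                        list(zip(sub_questions, sub_answers)))
```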
<<<Creating Pseudo-Decompositions>>> For each $q \in Q$, we construct a pseudo-decomposition set $d^{\prime } = \lbrace s_1; \dots ; s_N\rbrace $ by retrieving simple question $s$ from $S$. We concatenate all $N$ simple questions in $d^{\prime }$ to form the pseudo-decomposition used downstream. $N$ may be chosen based on the task or vary based on $q$. To retrieve useful simple questions for answering $q$, we face a joint optimization problem. We want sub-questions that are both (i) similar to $q$ according to some metric $f$ and (ii) maximally diverse: <<<Similarity-based Retrieval>>> To retrieve question-relevant sub-questions, we embed any text $t$ into a vector $\mathbf {v}_t$ by summing the FastText vectors BIBREF13 for words in $t$. We use cosine similarity as our similarity metric $f$. Let $q$ be a multi-hop question used to retrieve pseudo-decomposition $(s_1^*, s_2^*)$, and let $\hat{\mathbf {v}}$ be the unit vector of $\mathbf {v}$. Since $N=2$, Eq. DISPLAY_FORM5 reduces to: The last term requires $O(|S|^2)$ comparisons, which is expensive as $|S|$ is large ($>$10M). Instead of solving Eq. (DISPLAY_FORM19) exactly, we find an approximate pseudo-decomposition $(s_1^{\prime }, s_2^{\prime })$ by computing Eq. (DISPLAY_FORM19) over $S^{\prime } = \operatornamewithlimits{topK}_{\lbrace s \in S\rbrace }\left[ \mathbf {\hat{v}}_{q}^{\top } \mathbf {\hat{v}}_s\right]$, using $K=1000$. We use FAISS BIBREF14 to efficiently build $S^{\prime }$. <<</Similarity-based Retrieval>>> <<<Random Retrieval>>> For comparison, we test random pseudo-decompositions, where we randomly retrieve $s_1, \dots , s_N$ by sampling from $S$. USeq2Seq trained on random $d^{\prime } = [s_1; \dots ; s_N]$ should at minimum learn to map $q$ to multiple simple questions. <<</Random Retrieval>>> <<<Editing Pseudo-Decompositions>>> Since the sub-questions are retrieval-based, the sub-questions are often not about the same entities as $q$. As a post-processing step, we replace entities in $(s^{\prime }_1, s^{\prime }_2)$ with entities from $q$. We find all entities in $(s^{\prime }_1, s^{\prime }_2)$ that do not appear in $q$ using spaCy BIBREF15. We replace these entities with a random entity from $q$ with the same type (e.g., “Date” or “Location”) if and only if one exists. We use entity replacement on pseudo-decompositions from both random and similarity-based retrieval. <<</Editing Pseudo-Decompositions>>> <<</Creating Pseudo-Decompositions>>> <<<Learning to Decompose>>> Having now retrieved relevant pseudo-decompositions, we examine different ways to learn to decompose (with implementation details in the following section): <<<No Learning>>> We use pseudo-decompositions directly, employing retrieved sub-questions in downstream QA. <<</No Learning>>> <<<Sequence-to-Sequence (Seq2Seq)>>> We train a Seq2Seq model with parameters $\theta $ to maximize $\log p_{\theta }(d^{\prime } | q)$. <<</Sequence-to-Sequence (Seq2Seq)>>> <<<Unsupervised Sequence-to-Sequence (USeq2Seq)>>> We start with paired $(q, d^{\prime })$ examples but do not learn from the pairing, because the pairing is noisy. We use unsupervised sequence-to-sequence learning to learn a $q \rightarrow d$ mapping instead of training directly on the noisy pairing. <<</Unsupervised Sequence-to-Sequence (USeq2Seq)>>> <<</Learning to Decompose>>> <<</Unsupervised Question Decomposition>>> <<<Answering Sub-Questions>>> To answer the generated sub-questions, we use an off-the-shelf QA model. 
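Looking back at the similarity-based retrieval step above, the following is a minimal illustrative sketch of building the top-$K$ candidate set $S^{\prime }$ with FAISS; the `embed` helper (summing FastText word vectors), the 300-dimensional vectors, and the value of $K$ are assumptions for illustration, not the authors' exact code.

```python
# Illustrative sketch only: top-K retrieval of simple-question candidates.
# `embed(text)` is assumed to return the sum of FastText word vectors;
# L2-normalizing makes inner product equal to cosine similarity.
import numpy as np
import faiss

def build_index(simple_questions, embed, dim=300):
    vecs = np.stack([embed(s) for s in simple_questions]).astype("float32")
    faiss.normalize_L2(vecs)            # unit vectors
    index = faiss.IndexFlatIP(dim)      # inner product == cosine after normalization
    index.add(vecs)
    return index

def topk_candidates(hard_question, index, simple_questions, embed, k=1000):
    q = embed(hard_question).astype("float32").reshape(1, -1)
    faiss.normalize_L2(q)
    _, ids = index.search(q, k)         # K most similar simple questions
    return [simple_questions[i] for i in ids[0]]
```

The pairwise pseudo-decomposition objective would then be evaluated only over this candidate set $S^{\prime }$, as described above.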
The QA model may answer sub-questions using any free-form text (i.e., a word, phrase, sentence, etc.). Any QA model is suitable, so long as it can accurately answer simple questions in $S$. We thus leverage good accuracy on questions in $S$ to help QA models on questions in $Q$. <<</Answering Sub-Questions>>> <<<QA using Decompositions>>> Downstream QA systems may use sub-questions and sub-answers in various ways. We add sub-questions and sub-answers as auxiliary input for a downstream QA model to incorporate in its processing. We now describe the implementation details of our approach outlined above. <<</QA using Decompositions>>> <<</Method>>> <<<Experimental Setup>>> <<<Question Answering Task>>> We test unsupervised decompositions on HotpotQA BIBREF0, a standard benchmark for multi-hop QA. We use HotpotQA's “Distractor Setting,” which provides 10 context paragraphs from Wikipedia. Two (or more) paragraphs contain question-relevant sentences called “supporting facts,” and the remaining paragraphs are irrelevant, “distractor paragraphs.” Answers in HotpotQA are either yes, no, or a span of text in an input paragraph. Accuracy is measured with F1 and Exact Match (EM) scores between the predicted and gold spans. <<</Question Answering Task>>> <<<Unsupervised Decomposition>>> <<<Question Data>>> We use HotpotQA questions as our initial multi-hop, hard question corpus $Q$. We use SQuAD 2 questions as our initial single-hop, simple question corpus $S$. However, our pseudo-decomposition corpus should be large, as the corpus will be used to train neural Seq2Seq models, which are data hungry. A larger $|S|$ will also improve the relevance of retrieved simple questions to the hard question. Thus, we take inspiration from work in machine translation on parallel corpus mining BIBREF9, BIBREF10 and in unsupervised QA BIBREF11. We augment $Q$ and $S$ by mining more questions from Common Crawl. We choose sentences which start with common “wh”-words and end with “?” Next, we train a FastText classifier BIBREF12 to classify between 60K questions sampled from Common Crawl, SQuAD 2, and HotpotQA. Then, we classify Common Crawl questions, adding questions classified as SQuAD 2 questions to $S$ and questions classified as HotpotQA questions to $Q$. Question mining greatly increases the number of single-hop questions (130K $\rightarrow $ 10.1M) and multi-hop questions (90K $\rightarrow $ 2.4M). Thus, our unsupervised approach allows us to make use of far more data than supervised counterparts. <<</Question Data>>> <<<Unsupervised Decomposition Models>>> <<<Pre-training>>> Pre-training is a key ingredient for unsupervised Seq2Seq methods BIBREF16, BIBREF17, so we initialize all decomposition models with the same pre-trained weights, regardless of training method (Seq2Seq or USeq2Seq). We warm-start our pre-training with the pre-trained, English Masked Language Model (MLM) from BIBREF6, a 12-block decoder-only transformer model BIBREF5 trained to predict masked-out words on Toronto Books Corpus BIBREF18 and Wikipedia. We train the model with the MLM objective for one epoch on the augmented corpus $Q$ (2.4 M questions), while also training on decompositions $D$ formed via random retrieval from $S$. For our pre-trained encoder-decoder, we initialize a 6-block encoder with the first 6 MLM blocks, and we initialize a 6-block decoder with the last 6 MLM blocks, randomly initializing the remaining weights as in BIBREF6. 
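A sketch of the question-mining filter from the Question Data paragraph above. The list of "wh"-words and the whitespace handling are assumptions (the excerpt does not give the exact list), and classify stands in for the trained three-way FastText classifier; the label names are placeholders.

import re

# Assumed list of common "wh"-words; the exact set used is not specified in this excerpt.
WH_WORDS = {"what", "which", "who", "whom", "whose", "where", "when", "why", "how"}

def mine_question_candidates(sentences):
    # Keep sentences that start with a common "wh"-word and end with "?".
    kept = []
    for s in sentences:
        s = s.strip()
        first_word = re.split(r"\W+", s.lower(), maxsplit=1)[0]
        if s.endswith("?") and first_word in WH_WORDS:
            kept.append(s)
    return kept

def route_mined_questions(candidates, classify):
    # `classify` stands in for the 3-way classifier over Common Crawl, SQuAD 2,
    # and HotpotQA questions; placeholder labels are used here.
    S, Q = [], []
    for q in candidates:
        label = classify(q)
        if label == "squad2":
            S.append(q)      # treated as a single-hop question
        elif label == "hotpotqa":
            Q.append(q)      # treated as a multi-hop question
    return S, Q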
<<</Pre-training>>> <<<Seq2Seq>>> We fine-tune the pre-trained encoder-decoder using maximum likelihood. We stop training based on validation BLEU BIBREF19 between generated decompositions and pseudo-decompositions. <<</Seq2Seq>>> <<<USeq2Seq>>> We follow the approach by BIBREF6 in unsupervised translation. Training follows two stages: (1) MLM pre-training on the training corpora (described above), followed by (2) training simultaneously with denoising and back-translation objectives. For denoising, we produce a noisy input $\hat{d}$ by randomly masking, dropping, and locally shuffling tokens in $d \sim D$, and we train a model with parameters $\theta $ to maximize $\log p_{\theta }(d | \hat{d})$. We likewise maximize $\log p_{\theta }(q | \hat{q})$. For back-translation, we generate a multi-hop question $\hat{q}$ for a decomposition $d \sim D$, and we maximize $\log p_{\theta }(d | \hat{q})$. Similarly, we maximize $\log p_{\theta }(q | \hat{d})$. To stop training without supervision, we use a modified version of round-trip BLEU BIBREF17 (see Appendix §SECREF56 for details). We train with denoising and back-translation on smaller corpora of HotpotQA questions ($Q$) and their pseudo-decompositions ($D$). <<</USeq2Seq>>> <<</Unsupervised Decomposition Models>>> <<</Unsupervised Decomposition>>> <<<Single-hop Question Answering Model>>> We train our single-hop QA model following prior work from BIBREF3 on HotpotQA. <<<Model Architecture>>> We fine-tune a pre-trained model to take a question and several paragraphs and predicts the answer, similar to the single-hop QA model from BIBREF21. The model computes a separate forward pass on each paragraph (with the question). For each paragraph, the model learns to predict the answer span if the paragraph contains the answer and to predict “no answer” otherwise. We treat yes and no predictions as spans within the passage (prepended to each paragraph), as in BIBREF22 on HotpotQA. During inference, for the final softmax, we consider all paragraphs as a single chunk. Similar to BIBREF23, we subtract a paragraph's “no answer” logit from the logits of all spans in that paragraph, to reduce or increase span probabilities accordingly. In other words, we compute the probability $p(s_p)$ of each span $s_p$ in a paragraph $p \in \lbrace 1, \dots , P \rbrace $ using the predicted span logit $l(s_p)$ and “no answer” paragraph logit $n(p)$ as follows: We use $\textsc {RoBERTa}_{\textsc {LARGE}}$ BIBREF24 as our pre-trained initialization. Later, we also experiment with using the $\textsc {BERT}_{\textsc {BASE}}$ ensemble from BIBREF3. <<</Model Architecture>>> <<<Training Data and Ensembling>>> Similar to BIBREF3, we train an ensemble of 2 single-hop QA models using data from SQuAD 2 and HotpotQA questions labeled as “easy” (single-hop). To ensemble, we average the logits of the two models before predicting the answer. SQuAD is a single-paragraph QA task, so we adapt SQuAD to the multi-paragraph setting by retrieving distractor paragraphs from Wikipedia for each question. We use the TFIDF retriever from DrQA BIBREF25 to retrieve 2 distractor paragraphs, which we add to the input for one model in the ensemble. We drop words from the question with a 5% probability to help the model handle any ill-formed sub-questions. We use the single-hop QA ensemble as a black-box model once trained, never training the model on multi-hop questions. 
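The two-model logit averaging just described, and the span-scoring rule from the Model Architecture paragraph above, can be written out explicitly. This is a sketch that follows the prose rather than the released code: each paragraph's "no answer" logit is subtracted from that paragraph's span logits, and a single softmax is taken over the spans of all paragraphs jointly.

import numpy as np

def ensemble_logits(logits_model_1, logits_model_2):
    # Average the two single-hop models' logits (per paragraph) before predicting.
    return [(a + b) / 2.0 for a, b in zip(logits_model_1, logits_model_2)]

def span_probabilities(span_logits, no_answer_logits):
    # span_logits:      list of 1-D arrays, one per paragraph p, holding l(s_p) for each span.
    # no_answer_logits: 1-D array holding n(p) for each paragraph.
    adjusted = [l - n for l, n in zip(span_logits, no_answer_logits)]   # l(s_p) - n(p)
    flat = np.concatenate(adjusted)
    flat -= flat.max()                             # numerical stability
    probs = np.exp(flat) / np.exp(flat).sum()      # one softmax over all paragraphs as a single chunk
    split_points = np.cumsum([len(a) for a in adjusted])[:-1]
    return np.split(probs, split_points)           # per-paragraph p(s_p), summing to 1 overall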
<<</Training Data and Ensembling>>> <<<Returned Text>>> We have the single-hop QA model return the sentence containing the model's predicted answer span, alongside the sub-questions. Later, we compare against alternatives, i.e., returning the predicted answer span without its context or not returning sub-questions. <<</Returned Text>>> <<<Sub-Answer Confidence>>> Figure FIGREF46 (right) shows that the model's sub-answer confidence correlates with downstream multi-hop QA performance for all HotpotQA dev sets. A low confidence sub-answer may be indicative of (i) an unanswerable or ill-formed sub-question or (ii) a sub-answer that is more likely to be incorrect. In both cases, the single-hop QA model is less likely to retrieve the useful supporting evidence to answer the multi-hop question. <<</Sub-Answer Confidence>>> <<<Changing the Single-hop QA Model>>> We find that our approach is robust to the single-hop QA model that answers sub-questions. We use the $\textsc {BERT}_{\textsc {BASE}}$ ensemble from BIBREF3 as the single-hop QA model. The model performs much worse compared to our $\textsc {RoBERTa}_{\textsc {LARGE}}$ single-hop ensemble when used directly on HotpotQA (56.3 vs. 66.7 F1). However, the model results in comparable QA when used to answer single-hop sub-questions within our larger system (79.9 vs. 80.1 F1 for our $\textsc {RoBERTa}_{\textsc {LARGE}}$ ensemble). <<</Changing the Single-hop QA Model>>> <<</Single-hop Question Answering Model>>> <<<Multi-hop Question Answering Model>>> Our multi-hop QA architecture is identical to the single-hop QA model, but the multi-hop QA model also uses sub-questions and sub-answers as input. We append each (sub-question, sub-answer) pair in order to the multi-hop question along with separator tokens. We train one multi-hop QA model on all of HotpotQA, also including SQuAD 2 examples used to train the single-hop QA model. Later, we experiment with using $\textsc {BERT}_{\textsc {LARGE}}$ and $\textsc {BERT}_{\textsc {BASE}}$ instead of $\textsc {RoBERTa}_{\textsc {LARGE}}$ as the multi-hop QA model. All reported error margins show the mean and std. dev. across 5 multi-hop QA training runs using the same decompositions. <<<Varying the Base Model>>> To understand how decompositions impact performance as the multi-hop QA model gets stronger, we vary the base pre-trained model. Table shows the impact of adding decompositions to $\textsc {BERT}_{\textsc {BASE}}$ , $\textsc {BERT}_{\textsc {LARGE}}$ , and finally $\textsc {RoBERTa}_{\textsc {LARGE}}$ (see Appendix §SECREF64 for hyperparameters). The gain from using decompositions grows with strength of the multi-hop QA model. Decompositions improve QA by 1.2 F1 for a $\textsc {BERT}_{\textsc {BASE}}$ model, by 2.6 F1 for the stronger $\textsc {BERT}_{\textsc {LARGE}}$ model, and by 3.1 F1 for our best $\textsc {RoBERTa}_{\textsc {LARGE}}$ model. <<</Varying the Base Model>>> <<</Multi-hop Question Answering Model>>> <<</Experimental Setup>>> <<<Results on Question Answering>>> We compare variants of our approach that use different learning methods and different pseudo-aligned training sets. As a baseline, we compare RoBERTa with decompositions to a RoBERTa model that does not use decompositions but is identical in all other respects. 
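A sketch of how the multi-hop QA input described above is assembled: each (sub-question, sub-answer) pair is appended, in order, to the multi-hop question with separator tokens. The separator string below is a placeholder; the actual token depends on the tokenizer (e.g., RoBERTa's </s>), and the example sub-answers are illustrative sentences of the kind the single-hop model would return.

def build_multihop_input(question, sub_questions, sub_answers, sep=" [SEP] "):
    # Append each (sub-question, sub-answer) pair in order to the multi-hop question.
    parts = [question]
    for sq, sa in zip(sub_questions, sub_answers):
        parts.append(sq)
        parts.append(sa)
    return sep.join(parts)

# Example:
# build_multihop_input(
#     "What profession do H. L. Mencken and Albert Camus have in common?",
#     ["What profession does H. L. Mencken have?", "Who was Albert Camus?"],
#     ["H. L. Mencken was an American journalist and essayist.",
#      "Albert Camus was a French philosopher, author, and journalist."])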
We train the baseline for 2 epochs, sweeping over batch size $\in \lbrace 64, 128\rbrace $, learning rate $\in \lbrace 1 \times 10^{-5}, 1.5 \times 10^{-5}, 2 \times 10^{-5}, 3 \times 10^{-5}\rbrace $, and weight decay $\in \lbrace 0, 0.1, 0.01, 0.001\rbrace $; we choose the hyperparameters that perform best on our dev set. We then use the best hyperparameters for the baseline to train our RoBERTa models with decompositions. We report results on 3 versions of the dev set: (1) the original version, (2) the multi-hop version from BIBREF4 which created some distractor paragraphs adversarially to test multi-hop reasoning, and (3) the out-of-domain version from BIBREF3 which retrieved distractor paragraphs using the same procedure as the original version, but excluded paragraphs in the original version. <<<Main Results>>> Table shows how unsupervised decompositions affect QA. Our RoBERTa baseline performs quite well on HotpotQA (77.0 F1), despite processing each paragraph separately, which prohibits inter-paragraph reasoning. The result is in line with prior work which found that a version of our baseline QA model using BERT BIBREF26 does well on HotpotQA by exploiting single-hop reasoning shortcuts BIBREF21. We achieve significant gains over our strong baseline by leveraging decompositions from our best decomposition model, trained with USeq2Seq on FastText pseudo-decompositions; we find a 3.1 F1 gain on the original dev set, 11 F1 gain on the multi-hop dev set, and 10 F1 gain on the out-of-domain dev set. Unsupervised decompositions even match the performance of using (within our pipeline) supervised and heuristic decompositions from DecompRC (i.e., 80.1 vs. 79.8 F1 on the original dev set). More generally, all decomposition methods improve QA over the baseline by leveraging the single-hop QA model (“1hop” in Table ). Using FastText pseudo-decompositions as sub-questions directly improves QA over using random sub-questions on the multi-hop set (72.4 vs. 70.9 F1) and out-of-domain set (72.0 vs. 70.7 F1). USeq2Seq on random pseudo-decompositions also improves over the random sub-question baseline (e.g., 79.8 vs. 78.4 F1 on HotpotQA). However, we only find small improvements when training USeq2Seq on FastText vs. Random pseudo-decompositions (e.g., 77.1 vs. 76.5 F1 on the out-of-domain dev set). The best decomposition methods learn with USeq2Seq. Using Seq2Seq to generate decompositions gives similar QA accuracy as the “No Learning” setup, e.g. both approaches achieve 78.9 F1 on the original dev set for FastText pseudo-decompositions. The results are similar perhaps since supervised learning is directly trained to place high probability on pseudo-decompositions. USeq2Seq may improve over Seq2Seq by learning to align hard questions and pseudo-decompositions while ignoring the noisy pairing. After our experimentation, we chose USeq2Seq trained on FastText pseudo-decompositions as the final model, and we submitted the model for hidden test evaluation. Our approach achieved a test F1 of 79.34 and Exact Match (EM) of 66.33. Our approach is competitive with concurrent, state-of-the-art systems SAE BIBREF7 and HGN BIBREF8, which both (unlike our approach) learn from additional, strong supervision about which sentences are necessary to answer the question. <<</Main Results>>> <<<Question Type Breakdown>>> To understand where decompositions help, we break down QA performance across 4 question types from BIBREF3. 
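For concreteness, the baseline sweep described at the start of this passage amounts to a 32-run grid (2 batch sizes x 4 learning rates x 4 weight decays, 2 epochs each); a minimal enumeration:

from itertools import product

batch_sizes    = [64, 128]
learning_rates = [1e-5, 1.5e-5, 2e-5, 3e-5]
weight_decays  = [0, 0.1, 0.01, 0.001]

baseline_grid = [
    {"epochs": 2, "batch_size": b, "lr": lr, "weight_decay": wd}
    for b, lr, wd in product(batch_sizes, learning_rates, weight_decays)
]
# len(baseline_grid) == 32; the best configuration on the dev set is then reused
# to train the RoBERTa models that take decompositions as input.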
“Bridge” questions ask about an entity not explicitly mentioned in the question (“When was Erik Watts' father born?”). “Intersection” questions ask to find an entity that satisfies multiple separate conditions (“Who was on CNBC and Fox News?”). “Comparison” questions ask to compare a property of two entities (“Which is taller, Momhil Sar or K2?”). “Single-hop” questions are likely answerable using single-hop shortcuts or single-paragraph reasoning (“Where is Electric Six from?”). We split the original dev set into the 4 types using the supervised type classifier from BIBREF3. Table shows F1 scores for RoBERTa with and without decompositions across the 4 types. Unsupervised decompositions improve QA across all question types. Our single decomposition model generates useful sub-questions for all question types without special case handling, unlike earlier work from BIBREF3 which handled each question type separately. For single-hop questions, our QA approach does not require falling back to a single-hop QA model and instead learns to leverage decompositions to better answer questions with single-hop shortcuts (76.9 vs. 73.9 F1 without decompositions). <<</Question Type Breakdown>>> <<<Answers to Sub-Questions are Crucial>>> To measure the usefulness of sub-questions and sub-answers, we train the multi-hop QA model with various, ablated inputs, as shown in Table . Sub-answers are crucial to improving QA, as sub-questions with no answers or random answers do not help (76.9 vs. 77.0 F1 for the baseline). Only when sub-answers are provided do we see improved QA, with or without sub-questions (80.1 and 80.2 F1, respectively). It is important to provide the sentence containing the predicted answer span instead of the answer span alone (80.1 vs. 77.8 F1, respectively), though the answer span alone still improves over the baseline (77.0 F1). <<</Answers to Sub-Questions are Crucial>>> <<<How Do Decompositions Help?>>> Decompositions help to answer questions by retrieving important supporting evidence to answer questions. Fig. FIGREF41 shows that multi-hop QA accuracy increases when the sub-answer sentences are the “supporting facts” or sentences needed to answer the question, as annotated by HotpotQA. We retrieve supporting facts without learning to predict them with strong supervision, unlike many state-of-the-art models BIBREF7, BIBREF8, BIBREF22. <<</How Do Decompositions Help?>>> <<<Example Decompositions>>> To illustrate how decompositions help QA, Table shows example sub-questions from our best decomposition model with predicted sub-answers. Sub-questions are single-hop questions relevant to the multi-hop question. The single-hop QA model returns relevant sub-answers, sometimes in spite of grammatical errors (Q1, SQ$_1$) or under-specified questions (Q2, SQ$_1$). The multi-hop QA model then returns an answer consistent with the predicted sub-answers. The decomposition model is largely extractive, copying from the multi-hop question rather than hallucinating new entities, which helps generate relevant sub-questions. To better understand our system, we analyze the model for each stage: decomposition, single-hop QA, and multi-hop QA. <<</Example Decompositions>>> <<</Results on Question Answering>>> <<<Analysis>>> <<<Unsupervised Decomposition Model>>> <<<Intrinsic Evaluation of Decompositions>>> We evaluate the quality of decompositions on other metrics aside from downstream QA. 
To measure the fluency of decompositions, we compute the likelihood of decompositions using the pre-trained GPT-2 language model BIBREF27. We train a classifier on the question-wellformedness dataset of BIBREF28, and we use the classifier to estimate the proportion of sub-questions that are well-formed. We measure how abstractive decompositions are by computing (i) the token Levenstein distance between the multi-hop question and its generated decomposition and (ii) the ratio between the length of the decomposition and the length of the multi-hop question. We compare our best decomposition model against the supervised+heuristic decompositions from DecompRC BIBREF3 in Table . Unsupervised decompositions are both more natural and well-formed than decompositions from DecompRC. Unsupervised decompositions are also closer in edit distance and length to the multi-hop question, consistent with our observation that our decomposition model is largely extractive. <<</Intrinsic Evaluation of Decompositions>>> <<<Quality of Decomposition Model>>> Another way to test the quality of the decomposition model is to test if the model places higher probability on decompositions that are more helpful for downstream QA. We generate $N=5$ hypotheses from our best decomposition model using beam search, and we train a multi-hop QA model to use the $n$th-ranked hypothesis as a question decomposition (Fig. FIGREF46, left). QA accuracy decreases as we use lower probability decompositions, but accuracy remains relatively robust, at most decreasing from 80.1 to 79.3 F1. The limited drop suggests that decompositions are still useful if they are among the model's top hypotheses, another indication that our model is trained well for decomposition. <<</Quality of Decomposition Model>>> <<</Unsupervised Decomposition Model>>> <<</Analysis>>> <<<Related Work>>> Answering complicated questions has been a long-standing challenge in natural language processing. To this end, prior work has explored decomposing questions with supervision or heuristic algorithms. IBM Watson BIBREF29 decomposes questions into sub-questions in multiple ways or not at all. DecompRC BIBREF3 largely frames sub-questions as extractive spans of a multi-hop question, learning to predict span-based sub-questions via supervised learning on human annotations. In other cases, DecompRC decomposes a multi-hop question using a heuristic algorithm, or DecompRC does not decompose at all. Watson and DecompRC use special case handling to decompose different questions, while our algorithm is fully automated and requires minimal hand-engineering. More traditional, semantic parsing methods map questions to compositional programs, whose sub-programs can be viewed as question decompositions in a formal language BIBREF2, BIBREF30. Examples include classical QA systems like SHRDLU BIBREF31 and LUNAR BIBREF32, as well as neural Seq2Seq semantic parsers BIBREF33 and neural module networks BIBREF34, BIBREF35. Such methods usually require strong, program-level supervision to generate programs, as in visual QA BIBREF36 and on HotpotQA BIBREF37. Some models use other forms of strong supervision, e.g. predicting the “supporting evidence” to answer a question annotated by HotpotQA. Such an approach is taken by SAE BIBREF7 and HGN BIBREF8, whose methods may be combined with our approach. Unsupervised decomposition complements strongly and weakly supervised decomposition approaches. 
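A sketch of the abstractiveness measures from the intrinsic evaluation above: token-level edit distance between a multi-hop question and its decomposition, and the decomposition-to-question length ratio. Whitespace tokenization is an assumption made for this sketch, as the excerpt does not specify the tokenizer.

def token_edit_distance(a_tokens, b_tokens):
    # Token-level Levenshtein distance via the standard two-row dynamic program.
    prev = list(range(len(b_tokens) + 1))
    for i, a in enumerate(a_tokens, start=1):
        curr = [i] + [0] * len(b_tokens)
        for j, b in enumerate(b_tokens, start=1):
            cost = 0 if a == b else 1
            curr[j] = min(prev[j] + 1,        # delete a token
                          curr[j - 1] + 1,    # insert a token
                          prev[j - 1] + cost) # substitute a token
        prev = curr
    return prev[-1]

def abstractiveness(question, decomposition):
    # Whitespace tokenization is an assumption; any tokenizer could be substituted.
    q_toks, d_toks = question.split(), decomposition.split()
    return {
        "token_edit_distance": token_edit_distance(q_toks, d_toks),
        "length_ratio": len(d_toks) / max(len(q_toks), 1),
    }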
Our unsupervised approach enables methods to leverage millions of otherwise unusable questions, similar to work on unsupervised QA BIBREF11. When decomposition examples exist, supervised and unsupervised learning can be used in tandem to learn from both labeled and unlabeled examples. Such semi-supervised methods outperform supervised learning for tasks like machine translation BIBREF38. Other work on weakly supervised question generation uses a downstream QA model's accuracy as a signal for learning to generate useful questions. Weakly supervised question generation often uses reinforcement learning BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF43, where an unsupervised initialization can greatly mitigate the issues of exploring from scratch BIBREF44. <<</Related Work>>> <<<Conclusion>>> We proposed an algorithm that decomposes questions without supervision, using 3 stages: (1) learning to decompose using pseudo-decompositions without supervision, (2) answering sub-questions with an off-the-shelf QA system, and (3) answering hard questions more accurately using sub-questions and their answers as additional input. When evaluated on HotpotQA, a standard benchmark for multi-hop QA, our approach significantly improved accuracy over an equivalent model that did not use decompositions. Our approach relies only on the final answer as supervision but works as effectively as state-of-the-art methods that rely on strong supervision, such as supporting fact labels or example decompositions. Qualitatively, we found that unsupervised decomposition resulted in fluent sub-questions whose answers often match the annotated supporting facts in HotpotQA. Our unsupervised decompositions are largely extractive, which is effective for compositional, multi-hop questions but not all complex questions, showing room for future work. Overall, this work opens up exciting avenues for leveraging methods in unsupervised learning and natural language generation to improve the interpretability and generalization of machine learning systems. <<</Conclusion>>> <<</Title>>>
{ "references": [ "$\\textsc {BERT}_{\\textsc {BASE}}$ ensemble from BIBREF3" ], "type": "extractive" }
2002.09758
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: How large is the improvement over the baseline? Context: <<<Title>>> Unsupervised Question Decomposition for Question Answering <<<Abstract>>> We aim to improve question answering (QA) by decomposing hard questions into easier sub-questions that existing QA systems can answer. Since collecting labeled decompositions is cumbersome, we propose an unsupervised approach to produce sub-questions. Specifically, by leveraging>10M questions from Common Crawl, we learn to map from the distribution of multi-hop questions to the distribution of single-hop sub-questions. We answer sub-questions with an off-the-shelf QA model and incorporate the resulting answers in a downstream, multi-hop QA system. On a popular multi-hop QA dataset, HotpotQA, we show large improvements over a strong baseline, especially on adversarial and out-of-domain questions. Our method is generally applicable and automatically learns to decompose questions of different classes, while matching the performance of decomposition methods that rely heavily on hand-engineering and annotation. <<</Abstract>>> <<<Introduction>>> Question answering (QA) systems have become remarkably good at answering simple, single-hop questions but still struggle with compositional, multi-hop questions BIBREF0, BIBREF1. In this work, we examine if we can answer hard questions by leveraging our ability to answer simple questions. Specifically, we approach QA by breaking a hard question into a series of sub-questions that can be answered by a simple, single-hop QA system. The system's answers can then be given as input to a downstream QA system to answer the hard question, as shown in Fig. FIGREF1. Our approach thus answers the hard question in multiple, smaller steps, which can be easier than answering the hard question all at once. For example, it may be easier to answer “What profession do H. L. Mencken and Albert Camus have in common?” when given the answers to the sub-questions “What profession does H. L. Mencken have?” and “Who was Albert Camus?” Prior work in learning to decompose questions into sub-questions has relied on extractive heuristics, which generalizes poorly to different domains and question types, and requires human annotation BIBREF2, BIBREF3. In order to scale to any arbitrary question, we would require sophisticated natural language generation capabilities, which often relies on large quantities of high-quality supervised data. Instead, we find that it is possible to learn to decompose questions without supervision. Specifically, we learn to map from the distribution of hard questions to the distribution of simpler questions. First, we automatically construct a noisy, “pseudo-decomposition” for each hard question by retrieving relevant sub-question candidates based on their similarity to the given hard question. We retrieve candidates from a corpus of 10M simple questions that we extracted from Common Crawl. Second, we train neural text generation models on that data with (1) standard sequence-to-sequence learning and (2) unsupervised sequence-to-sequence learning. The latter has the advantage that it can go beyond the noisy pairing between questions and pseudo-decompositions. Fig. FIGREF2 overviews our decomposition approach. We use decompositions to improve multi-hop QA. We first use an off-the-shelf single-hop QA model to answer decomposed sub-questions. 
We then give each sub-question and its answer as additional input to a multi-hop QA model. We test our method on HotpotQA BIBREF0, a popular multi-hop QA benchmark. Our contributions are as follows. First, QA models relying on decompositions improve accuracy over a strong baseline by 3.1 F1 on the original dev set, 11 F1 on the multi-hop dev set from BIBREF4, and 10 F1 on the out-of-domain dev set from BIBREF3. Our most effective decomposition model is a 12-block transformer encoder-decoder BIBREF5 trained using unsupervised sequence-to-sequence learning, involving masked language modeling, denoising, and back-translation objectives BIBREF6. Second, our method is competitive with state-of-the-art methods SAE BIBREF7 and HGN BIBREF8 which leverage strong supervision. Third, we show that our approach automatically learns to generate useful decompositions for all 4 question types in HotpotQA, highlighting the general nature of our approach. In our analysis, we explore how sub-questions improve multi-hop QA, and we provide qualitative examples that highlight how question decomposition adds a form of interpretability to black-box QA models. Our ablations show that each component of our pipeline contributes to QA performance. Overall, we find that it is possible to successfully decompose questions without any supervision and that doing so improves QA. <<</Introduction>>> <<<Method>>> We now formulate the problem and overview our high-level approach, with details in the following section. We aim to leverage a QA model that is accurate on simple questions to answer hard questions, without using supervised question decompositions. Here, we consider simple questions to be “single-hop” questions that require reasoning over one paragraph or piece of evidence, and we consider hard questions to be “multi-hop.” Our aim is then to train a multi-hop QA model $M$ to provide the correct answer $a$ to a multi-hop question $q$ about a given a context $c$ (e.g., several paragraphs). Normally, we would train $M$ to maximize $\log p_M(a | c, q)$. To help $M$, we leverage a single-hop QA model that may be queried with sub-questions $s_1, \dots , s_N$, whose “sub-answers” to each sub-question $a_1, \dots , a_N$ may be provided to the multi-hop QA model. $M$ may then instead maximize the (potentially easier) objective $\log p_M(a | c, q, [s_1, a_1], \dots , [a_N, s_N])$. Supervised decomposition models learn to map each question $q \in Q$ to a decomposition $d = [s_1; \dots ; s_N]$ of $N$ sub-questions $s_n \in S$ using annotated $(q, d)$ examples. In this work, we do not assume access to strong $(q, d)$ supervision. To leverage the single-hop QA model without supervision, we follow a three-stage approach: 1) map a question $q$ into sub-questions $s_1, \dots , s_N$ via unsupervised techniques, 2) find sub-answers $a_1, \dots , a_N$ with the single-hop QA model, and 3) provide $s_1, \dots , s_N$ and $a_1, \dots , a_N$ to help predict $a$. <<<Unsupervised Question Decomposition>>> To train a decomposition model, we need appropriate training data. We assume access to a hard question corpus $Q$ and a simple question corpus $S$. Instead of using supervised $(q, d)$ training examples, we design an algorithm that constructs pseudo-decompositions $d^{\prime }$ to form $(q, d^{\prime })$ pairs from $Q$ and $S$ using an unsupervised approach (§SECREF4). We then train a model to map $q$ to a decomposition. We explore learning to decompose with standard and unsupervised sequence-to-sequence learning (§SECREF6). 
<<<Creating Pseudo-Decompositions>>> For each $q \in Q$, we construct a pseudo-decomposition set $d^{\prime } = \lbrace s_1; \dots ; s_N\rbrace $ by retrieving simple question $s$ from $S$. We concatenate all $N$ simple questions in $d^{\prime }$ to form the pseudo-decomposition used downstream. $N$ may be chosen based on the task or vary based on $q$. To retrieve useful simple questions for answering $q$, we face a joint optimization problem. We want sub-questions that are both (i) similar to $q$ according to some metric $f$ and (ii) maximally diverse: <<<Similarity-based Retrieval>>> To retrieve question-relevant sub-questions, we embed any text $t$ into a vector $\mathbf {v}_t$ by summing the FastText vectors BIBREF13 for words in $t$. We use cosine similarity as our similarity metric $f$. Let $q$ be a multi-hop question used to retrieve pseudo-decomposition $(s_1^*, s_2^*)$, and let $\hat{\mathbf {v}}$ be the unit vector of $\mathbf {v}$. Since $N=2$, Eq. DISPLAY_FORM5 reduces to: The last term requires $O(|S|^2)$ comparisons, which is expensive as $|S|$ is large ($>$10M). Instead of solving Eq. (DISPLAY_FORM19) exactly, we find an approximate pseudo-decomposition $(s_1^{\prime }, s_2^{\prime })$ by computing Eq. (DISPLAY_FORM19) over $S^{\prime } = \operatornamewithlimits{topK}_{\lbrace s \in S\rbrace }\left[ \mathbf {\hat{v}}_{q}^{\top } \mathbf {\hat{v}}_s\right]$, using $K=1000$. We use FAISS BIBREF14 to efficiently build $S^{\prime }$. <<</Similarity-based Retrieval>>> <<<Random Retrieval>>> For comparison, we test random pseudo-decompositions, where we randomly retrieve $s_1, \dots , s_N$ by sampling from $S$. USeq2Seq trained on random $d^{\prime } = [s_1; \dots ; s_N]$ should at minimum learn to map $q$ to multiple simple questions. <<</Random Retrieval>>> <<<Editing Pseudo-Decompositions>>> Since the sub-questions are retrieval-based, the sub-questions are often not about the same entities as $q$. As a post-processing step, we replace entities in $(s^{\prime }_1, s^{\prime }_2)$ with entities from $q$. We find all entities in $(s^{\prime }_1, s^{\prime }_2)$ that do not appear in $q$ using spaCy BIBREF15. We replace these entities with a random entity from $q$ with the same type (e.g., “Date” or “Location”) if and only if one exists. We use entity replacement on pseudo-decompositions from both random and similarity-based retrieval. <<</Editing Pseudo-Decompositions>>> <<</Creating Pseudo-Decompositions>>> <<<Learning to Decompose>>> Having now retrieved relevant pseudo-decompositions, we examine different ways to learn to decompose (with implementation details in the following section): <<<No Learning>>> We use pseudo-decompositions directly, employing retrieved sub-questions in downstream QA. <<</No Learning>>> <<<Sequence-to-Sequence (Seq2Seq)>>> We train a Seq2Seq model with parameters $\theta $ to maximize $\log p_{\theta }(d^{\prime } | q)$. <<</Sequence-to-Sequence (Seq2Seq)>>> <<<Unsupervised Sequence-to-Sequence (USeq2Seq)>>> We start with paired $(q, d^{\prime })$ examples but do not learn from the pairing, because the pairing is noisy. We use unsupervised sequence-to-sequence learning to learn a $q \rightarrow d$ mapping instead of training directly on the noisy pairing. <<</Unsupervised Sequence-to-Sequence (USeq2Seq)>>> <<</Learning to Decompose>>> <<</Unsupervised Question Decomposition>>> <<<Answering Sub-Questions>>> To answer the generated sub-questions, we use an off-the-shelf QA model. 
The QA model may answer sub-questions using any free-form text (i.e., a word, phrase, sentence, etc.). Any QA model is suitable, so long as it can accurately answer simple questions in $S$. We thus leverage good accuracy on questions in $S$ to help QA models on questions in $Q$. <<</Answering Sub-Questions>>> <<<QA using Decompositions>>> Downstream QA systems may use sub-questions and sub-answers in various ways. We add sub-questions and sub-answers as auxiliary input for a downstream QA model to incorporate in its processing. We now describe the implementation details of our approach outlined above. <<</QA using Decompositions>>> <<</Method>>> <<<Experimental Setup>>> <<<Question Answering Task>>> We test unsupervised decompositions on HotpotQA BIBREF0, a standard benchmark for multi-hop QA. We use HotpotQA's “Distractor Setting,” which provides 10 context paragraphs from Wikipedia. Two (or more) paragraphs contain question-relevant sentences called “supporting facts,” and the remaining paragraphs are irrelevant, “distractor paragraphs.” Answers in HotpotQA are either yes, no, or a span of text in an input paragraph. Accuracy is measured with F1 and Exact Match (EM) scores between the predicted and gold spans. <<</Question Answering Task>>> <<<Unsupervised Decomposition>>> <<<Question Data>>> We use HotpotQA questions as our initial multi-hop, hard question corpus $Q$. We use SQuAD 2 questions as our initial single-hop, simple question corpus $S$. However, our pseudo-decomposition corpus should be large, as the corpus will be used to train neural Seq2Seq models, which are data hungry. A larger $|S|$ will also improve the relevance of retrieved simple questions to the hard question. Thus, we take inspiration from work in machine translation on parallel corpus mining BIBREF9, BIBREF10 and in unsupervised QA BIBREF11. We augment $Q$ and $S$ by mining more questions from Common Crawl. We choose sentences which start with common “wh”-words and end with “?” Next, we train a FastText classifier BIBREF12 to classify between 60K questions sampled from Common Crawl, SQuAD 2, and HotpotQA. Then, we classify Common Crawl questions, adding questions classified as SQuAD 2 questions to $S$ and questions classified as HotpotQA questions to $Q$. Question mining greatly increases the number of single-hop questions (130K $\rightarrow $ 10.1M) and multi-hop questions (90K $\rightarrow $ 2.4M). Thus, our unsupervised approach allows us to make use of far more data than supervised counterparts. <<</Question Data>>> <<<Unsupervised Decomposition Models>>> <<<Pre-training>>> Pre-training is a key ingredient for unsupervised Seq2Seq methods BIBREF16, BIBREF17, so we initialize all decomposition models with the same pre-trained weights, regardless of training method (Seq2Seq or USeq2Seq). We warm-start our pre-training with the pre-trained, English Masked Language Model (MLM) from BIBREF6, a 12-block decoder-only transformer model BIBREF5 trained to predict masked-out words on Toronto Books Corpus BIBREF18 and Wikipedia. We train the model with the MLM objective for one epoch on the augmented corpus $Q$ (2.4 M questions), while also training on decompositions $D$ formed via random retrieval from $S$. For our pre-trained encoder-decoder, we initialize a 6-block encoder with the first 6 MLM blocks, and we initialize a 6-block decoder with the last 6 MLM blocks, randomly initializing the remaining weights as in BIBREF6. 
<<</Pre-training>>> <<<Seq2Seq>>> We fine-tune the pre-trained encoder-decoder using maximum likelihood. We stop training based on validation BLEU BIBREF19 between generated decompositions and pseudo-decompositions. <<</Seq2Seq>>> <<<USeq2Seq>>> We follow the approach by BIBREF6 in unsupervised translation. Training follows two stages: (1) MLM pre-training on the training corpora (described above), followed by (2) training simultaneously with denoising and back-translation objectives. For denoising, we produce a noisy input $\hat{d}$ by randomly masking, dropping, and locally shuffling tokens in $d \sim D$, and we train a model with parameters $\theta $ to maximize $\log p_{\theta }(d | \hat{d})$. We likewise maximize $\log p_{\theta }(q | \hat{q})$. For back-translation, we generate a multi-hop question $\hat{q}$ for a decomposition $d \sim D$, and we maximize $\log p_{\theta }(d | \hat{q})$. Similarly, we maximize $\log p_{\theta }(q | \hat{d})$. To stop training without supervision, we use a modified version of round-trip BLEU BIBREF17 (see Appendix §SECREF56 for details). We train with denoising and back-translation on smaller corpora of HotpotQA questions ($Q$) and their pseudo-decompositions ($D$). <<</USeq2Seq>>> <<</Unsupervised Decomposition Models>>> <<</Unsupervised Decomposition>>> <<<Single-hop Question Answering Model>>> We train our single-hop QA model following prior work from BIBREF3 on HotpotQA. <<<Model Architecture>>> We fine-tune a pre-trained model to take a question and several paragraphs and predicts the answer, similar to the single-hop QA model from BIBREF21. The model computes a separate forward pass on each paragraph (with the question). For each paragraph, the model learns to predict the answer span if the paragraph contains the answer and to predict “no answer” otherwise. We treat yes and no predictions as spans within the passage (prepended to each paragraph), as in BIBREF22 on HotpotQA. During inference, for the final softmax, we consider all paragraphs as a single chunk. Similar to BIBREF23, we subtract a paragraph's “no answer” logit from the logits of all spans in that paragraph, to reduce or increase span probabilities accordingly. In other words, we compute the probability $p(s_p)$ of each span $s_p$ in a paragraph $p \in \lbrace 1, \dots , P \rbrace $ using the predicted span logit $l(s_p)$ and “no answer” paragraph logit $n(p)$ as follows: We use $\textsc {RoBERTa}_{\textsc {LARGE}}$ BIBREF24 as our pre-trained initialization. Later, we also experiment with using the $\textsc {BERT}_{\textsc {BASE}}$ ensemble from BIBREF3. <<</Model Architecture>>> <<<Training Data and Ensembling>>> Similar to BIBREF3, we train an ensemble of 2 single-hop QA models using data from SQuAD 2 and HotpotQA questions labeled as “easy” (single-hop). To ensemble, we average the logits of the two models before predicting the answer. SQuAD is a single-paragraph QA task, so we adapt SQuAD to the multi-paragraph setting by retrieving distractor paragraphs from Wikipedia for each question. We use the TFIDF retriever from DrQA BIBREF25 to retrieve 2 distractor paragraphs, which we add to the input for one model in the ensemble. We drop words from the question with a 5% probability to help the model handle any ill-formed sub-questions. We use the single-hop QA ensemble as a black-box model once trained, never training the model on multi-hop questions. 
<<</Training Data and Ensembling>>> <<<Returned Text>>> We have the single-hop QA model return the sentence containing the model's predicted answer span, alongside the sub-questions. Later, we compare against alternatives, i.e., returning the predicted answer span without its context or not returning sub-questions. <<</Returned Text>>> <<<Sub-Answer Confidence>>> Figure FIGREF46 (right) shows that the model's sub-answer confidence correlates with downstream multi-hop QA performance for all HotpotQA dev sets. A low confidence sub-answer may be indicative of (i) an unanswerable or ill-formed sub-question or (ii) a sub-answer that is more likely to be incorrect. In both cases, the single-hop QA model is less likely to retrieve the useful supporting evidence to answer the multi-hop question. <<</Sub-Answer Confidence>>> <<<Changing the Single-hop QA Model>>> We find that our approach is robust to the single-hop QA model that answers sub-questions. We use the $\textsc {BERT}_{\textsc {BASE}}$ ensemble from BIBREF3 as the single-hop QA model. The model performs much worse compared to our $\textsc {RoBERTa}_{\textsc {LARGE}}$ single-hop ensemble when used directly on HotpotQA (56.3 vs. 66.7 F1). However, the model results in comparable QA when used to answer single-hop sub-questions within our larger system (79.9 vs. 80.1 F1 for our $\textsc {RoBERTa}_{\textsc {LARGE}}$ ensemble). <<</Changing the Single-hop QA Model>>> <<</Single-hop Question Answering Model>>> <<<Multi-hop Question Answering Model>>> Our multi-hop QA architecture is identical to the single-hop QA model, but the multi-hop QA model also uses sub-questions and sub-answers as input. We append each (sub-question, sub-answer) pair in order to the multi-hop question along with separator tokens. We train one multi-hop QA model on all of HotpotQA, also including SQuAD 2 examples used to train the single-hop QA model. Later, we experiment with using $\textsc {BERT}_{\textsc {LARGE}}$ and $\textsc {BERT}_{\textsc {BASE}}$ instead of $\textsc {RoBERTa}_{\textsc {LARGE}}$ as the multi-hop QA model. All reported error margins show the mean and std. dev. across 5 multi-hop QA training runs using the same decompositions. <<<Varying the Base Model>>> To understand how decompositions impact performance as the multi-hop QA model gets stronger, we vary the base pre-trained model. Table shows the impact of adding decompositions to $\textsc {BERT}_{\textsc {BASE}}$ , $\textsc {BERT}_{\textsc {LARGE}}$ , and finally $\textsc {RoBERTa}_{\textsc {LARGE}}$ (see Appendix §SECREF64 for hyperparameters). The gain from using decompositions grows with strength of the multi-hop QA model. Decompositions improve QA by 1.2 F1 for a $\textsc {BERT}_{\textsc {BASE}}$ model, by 2.6 F1 for the stronger $\textsc {BERT}_{\textsc {LARGE}}$ model, and by 3.1 F1 for our best $\textsc {RoBERTa}_{\textsc {LARGE}}$ model. <<</Varying the Base Model>>> <<</Multi-hop Question Answering Model>>> <<</Experimental Setup>>> <<<Results on Question Answering>>> We compare variants of our approach that use different learning methods and different pseudo-aligned training sets. As a baseline, we compare RoBERTa with decompositions to a RoBERTa model that does not use decompositions but is identical in all other respects. 
We train the baseline for 2 epochs, sweeping over batch size $\in \lbrace 64, 128\rbrace $, learning rate $\in \lbrace 1 \times 10^{-5}, 1.5 \times 10^{-5}, 2 \times 10^{-5}, 3 \times 10^{-5}\rbrace $, and weight decay $\in \lbrace 0, 0.1, 0.01, 0.001\rbrace $; we choose the hyperparameters that perform best on our dev set. We then use the best hyperparameters for the baseline to train our RoBERTa models with decompositions. We report results on 3 versions of the dev set: (1) the original version, (2) the multi-hop version from BIBREF4 which created some distractor paragraphs adversarially to test multi-hop reasoning, and (3) the out-of-domain version from BIBREF3 which retrieved distractor paragraphs using the same procedure as the original version, but excluded paragraphs in the original version. <<<Main Results>>> Table shows how unsupervised decompositions affect QA. Our RoBERTa baseline performs quite well on HotpotQA (77.0 F1), despite processing each paragraph separately, which prohibits inter-paragraph reasoning. The result is in line with prior work which found that a version of our baseline QA model using BERT BIBREF26 does well on HotpotQA by exploiting single-hop reasoning shortcuts BIBREF21. We achieve significant gains over our strong baseline by leveraging decompositions from our best decomposition model, trained with USeq2Seq on FastText pseudo-decompositions; we find a 3.1 F1 gain on the original dev set, 11 F1 gain on the multi-hop dev set, and 10 F1 gain on the out-of-domain dev set. Unsupervised decompositions even match the performance of using (within our pipeline) supervised and heuristic decompositions from DecompRC (i.e., 80.1 vs. 79.8 F1 on the original dev set). More generally, all decomposition methods improve QA over the baseline by leveraging the single-hop QA model (“1hop” in Table ). Using FastText pseudo-decompositions as sub-questions directly improves QA over using random sub-questions on the multi-hop set (72.4 vs. 70.9 F1) and out-of-domain set (72.0 vs. 70.7 F1). USeq2Seq on random pseudo-decompositions also improves over the random sub-question baseline (e.g., 79.8 vs. 78.4 F1 on HotpotQA). However, we only find small improvements when training USeq2Seq on FastText vs. Random pseudo-decompositions (e.g., 77.1 vs. 76.5 F1 on the out-of-domain dev set). The best decomposition methods learn with USeq2Seq. Using Seq2Seq to generate decompositions gives similar QA accuracy as the “No Learning” setup, e.g. both approaches achieve 78.9 F1 on the original dev set for FastText pseudo-decompositions. The results are similar perhaps since supervised learning is directly trained to place high probability on pseudo-decompositions. USeq2Seq may improve over Seq2Seq by learning to align hard questions and pseudo-decompositions while ignoring the noisy pairing. After our experimentation, we chose USeq2Seq trained on FastText pseudo-decompositions as the final model, and we submitted the model for hidden test evaluation. Our approach achieved a test F1 of 79.34 and Exact Match (EM) of 66.33. Our approach is competitive with concurrent, state-of-the-art systems SAE BIBREF7 and HGN BIBREF8, which both (unlike our approach) learn from additional, strong supervision about which sentences are necessary to answer the question. <<</Main Results>>> <<<Question Type Breakdown>>> To understand where decompositions help, we break down QA performance across 4 question types from BIBREF3. 
“Bridge” questions ask about an entity not explicitly mentioned in the question (“When was Erik Watts' father born?”). “Intersection” questions ask to find an entity that satisfies multiple separate conditions (“Who was on CNBC and Fox News?”). “Comparison” questions ask to compare a property of two entities (“Which is taller, Momhil Sar or K2?”). “Single-hop” questions are likely answerable using single-hop shortcuts or single-paragraph reasoning (“Where is Electric Six from?”). We split the original dev set into the 4 types using the supervised type classifier from BIBREF3. Table shows F1 scores for RoBERTa with and without decompositions across the 4 types. Unsupervised decompositions improve QA across all question types. Our single decomposition model generates useful sub-questions for all question types without special case handling, unlike earlier work from BIBREF3 which handled each question type separately. For single-hop questions, our QA approach does not require falling back to a single-hop QA model and instead learns to leverage decompositions to better answer questions with single-hop shortcuts (76.9 vs. 73.9 F1 without decompositions). <<</Question Type Breakdown>>> <<<Answers to Sub-Questions are Crucial>>> To measure the usefulness of sub-questions and sub-answers, we train the multi-hop QA model with various, ablated inputs, as shown in Table . Sub-answers are crucial to improving QA, as sub-questions with no answers or random answers do not help (76.9 vs. 77.0 F1 for the baseline). Only when sub-answers are provided do we see improved QA, with or without sub-questions (80.1 and 80.2 F1, respectively). It is important to provide the sentence containing the predicted answer span instead of the answer span alone (80.1 vs. 77.8 F1, respectively), though the answer span alone still improves over the baseline (77.0 F1). <<</Answers to Sub-Questions are Crucial>>> <<<How Do Decompositions Help?>>> Decompositions help to answer questions by retrieving important supporting evidence to answer questions. Fig. FIGREF41 shows that multi-hop QA accuracy increases when the sub-answer sentences are the “supporting facts” or sentences needed to answer the question, as annotated by HotpotQA. We retrieve supporting facts without learning to predict them with strong supervision, unlike many state-of-the-art models BIBREF7, BIBREF8, BIBREF22. <<</How Do Decompositions Help?>>> <<<Example Decompositions>>> To illustrate how decompositions help QA, Table shows example sub-questions from our best decomposition model with predicted sub-answers. Sub-questions are single-hop questions relevant to the multi-hop question. The single-hop QA model returns relevant sub-answers, sometimes in spite of grammatical errors (Q1, SQ$_1$) or under-specified questions (Q2, SQ$_1$). The multi-hop QA model then returns an answer consistent with the predicted sub-answers. The decomposition model is largely extractive, copying from the multi-hop question rather than hallucinating new entities, which helps generate relevant sub-questions. To better understand our system, we analyze the model for each stage: decomposition, single-hop QA, and multi-hop QA. <<</Example Decompositions>>> <<</Results on Question Answering>>> <<<Analysis>>> <<<Unsupervised Decomposition Model>>> <<<Intrinsic Evaluation of Decompositions>>> We evaluate the quality of decompositions on other metrics aside from downstream QA. 
To measure the fluency of decompositions, we compute the likelihood of decompositions using the pre-trained GPT-2 language model BIBREF27. We train a classifier on the question-wellformedness dataset of BIBREF28, and we use the classifier to estimate the proportion of sub-questions that are well-formed. We measure how abstractive decompositions are by computing (i) the token Levenstein distance between the multi-hop question and its generated decomposition and (ii) the ratio between the length of the decomposition and the length of the multi-hop question. We compare our best decomposition model against the supervised+heuristic decompositions from DecompRC BIBREF3 in Table . Unsupervised decompositions are both more natural and well-formed than decompositions from DecompRC. Unsupervised decompositions are also closer in edit distance and length to the multi-hop question, consistent with our observation that our decomposition model is largely extractive. <<</Intrinsic Evaluation of Decompositions>>> <<<Quality of Decomposition Model>>> Another way to test the quality of the decomposition model is to test if the model places higher probability on decompositions that are more helpful for downstream QA. We generate $N=5$ hypotheses from our best decomposition model using beam search, and we train a multi-hop QA model to use the $n$th-ranked hypothesis as a question decomposition (Fig. FIGREF46, left). QA accuracy decreases as we use lower probability decompositions, but accuracy remains relatively robust, at most decreasing from 80.1 to 79.3 F1. The limited drop suggests that decompositions are still useful if they are among the model's top hypotheses, another indication that our model is trained well for decomposition. <<</Quality of Decomposition Model>>> <<</Unsupervised Decomposition Model>>> <<</Analysis>>> <<<Related Work>>> Answering complicated questions has been a long-standing challenge in natural language processing. To this end, prior work has explored decomposing questions with supervision or heuristic algorithms. IBM Watson BIBREF29 decomposes questions into sub-questions in multiple ways or not at all. DecompRC BIBREF3 largely frames sub-questions as extractive spans of a multi-hop question, learning to predict span-based sub-questions via supervised learning on human annotations. In other cases, DecompRC decomposes a multi-hop question using a heuristic algorithm, or DecompRC does not decompose at all. Watson and DecompRC use special case handling to decompose different questions, while our algorithm is fully automated and requires minimal hand-engineering. More traditional, semantic parsing methods map questions to compositional programs, whose sub-programs can be viewed as question decompositions in a formal language BIBREF2, BIBREF30. Examples include classical QA systems like SHRDLU BIBREF31 and LUNAR BIBREF32, as well as neural Seq2Seq semantic parsers BIBREF33 and neural module networks BIBREF34, BIBREF35. Such methods usually require strong, program-level supervision to generate programs, as in visual QA BIBREF36 and on HotpotQA BIBREF37. Some models use other forms of strong supervision, e.g. predicting the “supporting evidence” to answer a question annotated by HotpotQA. Such an approach is taken by SAE BIBREF7 and HGN BIBREF8, whose methods may be combined with our approach. Unsupervised decomposition complements strongly and weakly supervised decomposition approaches. 
Our unsupervised approach enables methods to leverage millions of otherwise unusable questions, similar to work on unsupervised QA BIBREF11. When decomposition examples exist, supervised and unsupervised learning can be used in tandem to learn from both labeled and unlabeled examples. Such semi-supervised methods outperform supervised learning for tasks like machine translation BIBREF38. Other work on weakly supervised question generation uses a downstream QA model's accuracy as a signal for learning to generate useful questions. Weakly supervised question generation often uses reinforcement learning BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF43, where an unsupervised initialization can greatly mitigate the issues of exploring from scratch BIBREF44. <<</Related Work>>> <<<Conclusion>>> We proposed an algorithm that decomposes questions without supervision, using 3 stages: (1) learning to decompose using pseudo-decompositions without supervision, (2) answering sub-questions with an off-the-shelf QA system, and (3) answering hard questions more accurately using sub-questions and their answers as additional input. When evaluated on HotpotQA, a standard benchmark for multi-hop QA, our approach significantly improved accuracy over an equivalent model that did not use decompositions. Our approach relies only on the final answer as supervision but works as effectively as state-of-the-art methods that rely on strong supervision, such as supporting fact labels or example decompositions. Qualitatively, we found that unsupervised decomposition resulted in fluent sub-questions whose answers often match the annotated supporting facts in HotpotQA. Our unsupervised decompositions are largely extractive, which is effective for compositional, multi-hop questions but not all complex questions, showing room for future work. Overall, this work opens up exciting avenues for leveraging methods in unsupervised learning and natural language generation to improve the interpretability and generalization of machine learning systems. <<</Conclusion>>> <<</Title>>>
{ "references": [ "3.1 F1 gain on the original dev set,11 F1 gain on the multi-hop dev set,10 F1 gain on the out-of-domain dev set." ], "type": "extractive" }
2002.09758
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What is the strong baseline that this work outperforms? Context: <<<Title>>> Unsupervised Question Decomposition for Question Answering <<<Abstract>>> We aim to improve question answering (QA) by decomposing hard questions into easier sub-questions that existing QA systems can answer. Since collecting labeled decompositions is cumbersome, we propose an unsupervised approach to produce sub-questions. Specifically, by leveraging>10M questions from Common Crawl, we learn to map from the distribution of multi-hop questions to the distribution of single-hop sub-questions. We answer sub-questions with an off-the-shelf QA model and incorporate the resulting answers in a downstream, multi-hop QA system. On a popular multi-hop QA dataset, HotpotQA, we show large improvements over a strong baseline, especially on adversarial and out-of-domain questions. Our method is generally applicable and automatically learns to decompose questions of different classes, while matching the performance of decomposition methods that rely heavily on hand-engineering and annotation. <<</Abstract>>> <<<Introduction>>> Question answering (QA) systems have become remarkably good at answering simple, single-hop questions but still struggle with compositional, multi-hop questions BIBREF0, BIBREF1. In this work, we examine if we can answer hard questions by leveraging our ability to answer simple questions. Specifically, we approach QA by breaking a hard question into a series of sub-questions that can be answered by a simple, single-hop QA system. The system's answers can then be given as input to a downstream QA system to answer the hard question, as shown in Fig. FIGREF1. Our approach thus answers the hard question in multiple, smaller steps, which can be easier than answering the hard question all at once. For example, it may be easier to answer “What profession do H. L. Mencken and Albert Camus have in common?” when given the answers to the sub-questions “What profession does H. L. Mencken have?” and “Who was Albert Camus?” Prior work in learning to decompose questions into sub-questions has relied on extractive heuristics, which generalizes poorly to different domains and question types, and requires human annotation BIBREF2, BIBREF3. In order to scale to any arbitrary question, we would require sophisticated natural language generation capabilities, which often relies on large quantities of high-quality supervised data. Instead, we find that it is possible to learn to decompose questions without supervision. Specifically, we learn to map from the distribution of hard questions to the distribution of simpler questions. First, we automatically construct a noisy, “pseudo-decomposition” for each hard question by retrieving relevant sub-question candidates based on their similarity to the given hard question. We retrieve candidates from a corpus of 10M simple questions that we extracted from Common Crawl. Second, we train neural text generation models on that data with (1) standard sequence-to-sequence learning and (2) unsupervised sequence-to-sequence learning. The latter has the advantage that it can go beyond the noisy pairing between questions and pseudo-decompositions. Fig. FIGREF2 overviews our decomposition approach. We use decompositions to improve multi-hop QA. We first use an off-the-shelf single-hop QA model to answer decomposed sub-questions. 
We then give each sub-question and its answer as additional input to a multi-hop QA model. We test our method on HotpotQA BIBREF0, a popular multi-hop QA benchmark. Our contributions are as follows. First, QA models relying on decompositions improve accuracy over a strong baseline by 3.1 F1 on the original dev set, 11 F1 on the multi-hop dev set from BIBREF4, and 10 F1 on the out-of-domain dev set from BIBREF3. Our most effective decomposition model is a 12-block transformer encoder-decoder BIBREF5 trained using unsupervised sequence-to-sequence learning, involving masked language modeling, denoising, and back-translation objectives BIBREF6. Second, our method is competitive with state-of-the-art methods SAE BIBREF7 and HGN BIBREF8 which leverage strong supervision. Third, we show that our approach automatically learns to generate useful decompositions for all 4 question types in HotpotQA, highlighting the general nature of our approach. In our analysis, we explore how sub-questions improve multi-hop QA, and we provide qualitative examples that highlight how question decomposition adds a form of interpretability to black-box QA models. Our ablations show that each component of our pipeline contributes to QA performance. Overall, we find that it is possible to successfully decompose questions without any supervision and that doing so improves QA. <<</Introduction>>> <<<Method>>> We now formulate the problem and overview our high-level approach, with details in the following section. We aim to leverage a QA model that is accurate on simple questions to answer hard questions, without using supervised question decompositions. Here, we consider simple questions to be “single-hop” questions that require reasoning over one paragraph or piece of evidence, and we consider hard questions to be “multi-hop.” Our aim is then to train a multi-hop QA model $M$ to provide the correct answer $a$ to a multi-hop question $q$ about a given a context $c$ (e.g., several paragraphs). Normally, we would train $M$ to maximize $\log p_M(a | c, q)$. To help $M$, we leverage a single-hop QA model that may be queried with sub-questions $s_1, \dots , s_N$, whose “sub-answers” to each sub-question $a_1, \dots , a_N$ may be provided to the multi-hop QA model. $M$ may then instead maximize the (potentially easier) objective $\log p_M(a | c, q, [s_1, a_1], \dots , [a_N, s_N])$. Supervised decomposition models learn to map each question $q \in Q$ to a decomposition $d = [s_1; \dots ; s_N]$ of $N$ sub-questions $s_n \in S$ using annotated $(q, d)$ examples. In this work, we do not assume access to strong $(q, d)$ supervision. To leverage the single-hop QA model without supervision, we follow a three-stage approach: 1) map a question $q$ into sub-questions $s_1, \dots , s_N$ via unsupervised techniques, 2) find sub-answers $a_1, \dots , a_N$ with the single-hop QA model, and 3) provide $s_1, \dots , s_N$ and $a_1, \dots , a_N$ to help predict $a$. <<<Unsupervised Question Decomposition>>> To train a decomposition model, we need appropriate training data. We assume access to a hard question corpus $Q$ and a simple question corpus $S$. Instead of using supervised $(q, d)$ training examples, we design an algorithm that constructs pseudo-decompositions $d^{\prime }$ to form $(q, d^{\prime })$ pairs from $Q$ and $S$ using an unsupervised approach (§SECREF4). We then train a model to map $q$ to a decomposition. We explore learning to decompose with standard and unsupervised sequence-to-sequence learning (§SECREF6). 
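A minimal sketch of the three-stage approach just described, written in Python. The three callables and the "[SQ]"/"[SA]" separator strings are illustrative placeholders, not the paper's actual implementation or tokenization.
from typing import Callable, List

def answer_with_decompositions(
    question: str,
    context: str,
    decompose: Callable[[str], List[str]],      # stage 1: q -> [s_1, ..., s_N]
    single_hop_qa: Callable[[str, str], str],   # stage 2: (s_n, c) -> a_n
    multi_hop_qa: Callable[[str, str], str],    # stage 3: augmented q, c -> a
) -> str:
    sub_questions = decompose(question)
    sub_answers = [single_hop_qa(s, context) for s in sub_questions]
    # The downstream model conditions on the original question plus each
    # (sub-question, sub-answer) pair; "[SQ]"/"[SA]" stand in for whatever
    # separator tokens the multi-hop model actually uses.
    augmented = question + " " + " ".join(
        f"[SQ] {s} [SA] {a}" for s, a in zip(sub_questions, sub_answers)
    )
    return multi_hop_qa(augmented, context)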
<<<Creating Pseudo-Decompositions>>> For each $q \in Q$, we construct a pseudo-decomposition set $d^{\prime } = \lbrace s_1; \dots ; s_N\rbrace $ by retrieving simple question $s$ from $S$. We concatenate all $N$ simple questions in $d^{\prime }$ to form the pseudo-decomposition used downstream. $N$ may be chosen based on the task or vary based on $q$. To retrieve useful simple questions for answering $q$, we face a joint optimization problem. We want sub-questions that are both (i) similar to $q$ according to some metric $f$ and (ii) maximally diverse: <<<Similarity-based Retrieval>>> To retrieve question-relevant sub-questions, we embed any text $t$ into a vector $\mathbf {v}_t$ by summing the FastText vectors BIBREF13 for words in $t$. We use cosine similarity as our similarity metric $f$. Let $q$ be a multi-hop question used to retrieve pseudo-decomposition $(s_1^*, s_2^*)$, and let $\hat{\mathbf {v}}$ be the unit vector of $\mathbf {v}$. Since $N=2$, Eq. DISPLAY_FORM5 reduces to: The last term requires $O(|S|^2)$ comparisons, which is expensive as $|S|$ is large ($>$10M). Instead of solving Eq. (DISPLAY_FORM19) exactly, we find an approximate pseudo-decomposition $(s_1^{\prime }, s_2^{\prime })$ by computing Eq. (DISPLAY_FORM19) over $S^{\prime } = \operatornamewithlimits{topK}_{\lbrace s \in S\rbrace }\left[ \mathbf {\hat{v}}_{q}^{\top } \mathbf {\hat{v}}_s\right]$, using $K=1000$. We use FAISS BIBREF14 to efficiently build $S^{\prime }$. <<</Similarity-based Retrieval>>> <<<Random Retrieval>>> For comparison, we test random pseudo-decompositions, where we randomly retrieve $s_1, \dots , s_N$ by sampling from $S$. USeq2Seq trained on random $d^{\prime } = [s_1; \dots ; s_N]$ should at minimum learn to map $q$ to multiple simple questions. <<</Random Retrieval>>> <<<Editing Pseudo-Decompositions>>> Since the sub-questions are retrieval-based, the sub-questions are often not about the same entities as $q$. As a post-processing step, we replace entities in $(s^{\prime }_1, s^{\prime }_2)$ with entities from $q$. We find all entities in $(s^{\prime }_1, s^{\prime }_2)$ that do not appear in $q$ using spaCy BIBREF15. We replace these entities with a random entity from $q$ with the same type (e.g., “Date” or “Location”) if and only if one exists. We use entity replacement on pseudo-decompositions from both random and similarity-based retrieval. <<</Editing Pseudo-Decompositions>>> <<</Creating Pseudo-Decompositions>>> <<<Learning to Decompose>>> Having now retrieved relevant pseudo-decompositions, we examine different ways to learn to decompose (with implementation details in the following section): <<<No Learning>>> We use pseudo-decompositions directly, employing retrieved sub-questions in downstream QA. <<</No Learning>>> <<<Sequence-to-Sequence (Seq2Seq)>>> We train a Seq2Seq model with parameters $\theta $ to maximize $\log p_{\theta }(d^{\prime } | q)$. <<</Sequence-to-Sequence (Seq2Seq)>>> <<<Unsupervised Sequence-to-Sequence (USeq2Seq)>>> We start with paired $(q, d^{\prime })$ examples but do not learn from the pairing, because the pairing is noisy. We use unsupervised sequence-to-sequence learning to learn a $q \rightarrow d$ mapping instead of training directly on the noisy pairing. <<</Unsupervised Sequence-to-Sequence (USeq2Seq)>>> <<</Learning to Decompose>>> <<</Unsupervised Question Decomposition>>> <<<Answering Sub-Questions>>> To answer the generated sub-questions, we use an off-the-shelf QA model. 
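A sketch of the similarity-based retrieval described above, assuming every question has already been embedded by summing its FastText word vectors and unit-normalizing the result. Because Eq. (DISPLAY_FORM19) is not reproduced in this excerpt, the pair score below (relevance to q minus similarity between the two candidates) is only an assumed stand-in for the similar-but-diverse objective, and plain NumPy brute force replaces the FAISS index the paper uses to build the candidate pool.
import numpy as np

def retrieve_pseudo_decomposition(v_q, V_s, k=1000):
    """v_q: (d,) unit vector for the multi-hop question q.
    V_s: (|S|, d) unit vectors for the simple-question corpus S.
    Returns indices (i, j) of an approximate pseudo-decomposition (s_1', s_2')."""
    sims = V_s @ v_q                        # cosine similarity of every s to q
    pool = np.argpartition(-sims, k)[:k]    # candidate pool S' (the paper builds this with FAISS)
    best, best_score = None, -np.inf
    for a in range(len(pool)):
        for b in range(a + 1, len(pool)):
            i, j = pool[a], pool[b]
            # assumed objective: reward similarity to q, penalize redundancy
            score = sims[i] + sims[j] - V_s[i] @ V_s[j]
            if score > best_score:
                best, best_score = (i, j), score
    return best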
The QA model may answer sub-questions using any free-form text (i.e., a word, phrase, sentence, etc.). Any QA model is suitable, so long as it can accurately answer simple questions in $S$. We thus leverage good accuracy on questions in $S$ to help QA models on questions in $Q$. <<</Answering Sub-Questions>>> <<<QA using Decompositions>>> Downstream QA systems may use sub-questions and sub-answers in various ways. We add sub-questions and sub-answers as auxiliary input for a downstream QA model to incorporate in its processing. We now describe the implementation details of our approach outlined above. <<</QA using Decompositions>>> <<</Method>>> <<<Experimental Setup>>> <<<Question Answering Task>>> We test unsupervised decompositions on HotpotQA BIBREF0, a standard benchmark for multi-hop QA. We use HotpotQA's “Distractor Setting,” which provides 10 context paragraphs from Wikipedia. Two (or more) paragraphs contain question-relevant sentences called “supporting facts,” and the remaining paragraphs are irrelevant, “distractor paragraphs.” Answers in HotpotQA are either yes, no, or a span of text in an input paragraph. Accuracy is measured with F1 and Exact Match (EM) scores between the predicted and gold spans. <<</Question Answering Task>>> <<<Unsupervised Decomposition>>> <<<Question Data>>> We use HotpotQA questions as our initial multi-hop, hard question corpus $Q$. We use SQuAD 2 questions as our initial single-hop, simple question corpus $S$. However, our pseudo-decomposition corpus should be large, as the corpus will be used to train neural Seq2Seq models, which are data hungry. A larger $|S|$ will also improve the relevance of retrieved simple questions to the hard question. Thus, we take inspiration from work in machine translation on parallel corpus mining BIBREF9, BIBREF10 and in unsupervised QA BIBREF11. We augment $Q$ and $S$ by mining more questions from Common Crawl. We choose sentences which start with common “wh”-words and end with “?” Next, we train a FastText classifier BIBREF12 to classify between 60K questions sampled from Common Crawl, SQuAD 2, and HotpotQA. Then, we classify Common Crawl questions, adding questions classified as SQuAD 2 questions to $S$ and questions classified as HotpotQA questions to $Q$. Question mining greatly increases the number of single-hop questions (130K $\rightarrow $ 10.1M) and multi-hop questions (90K $\rightarrow $ 2.4M). Thus, our unsupervised approach allows us to make use of far more data than supervised counterparts. <<</Question Data>>> <<<Unsupervised Decomposition Models>>> <<<Pre-training>>> Pre-training is a key ingredient for unsupervised Seq2Seq methods BIBREF16, BIBREF17, so we initialize all decomposition models with the same pre-trained weights, regardless of training method (Seq2Seq or USeq2Seq). We warm-start our pre-training with the pre-trained, English Masked Language Model (MLM) from BIBREF6, a 12-block decoder-only transformer model BIBREF5 trained to predict masked-out words on Toronto Books Corpus BIBREF18 and Wikipedia. We train the model with the MLM objective for one epoch on the augmented corpus $Q$ (2.4 M questions), while also training on decompositions $D$ formed via random retrieval from $S$. For our pre-trained encoder-decoder, we initialize a 6-block encoder with the first 6 MLM blocks, and we initialize a 6-block decoder with the last 6 MLM blocks, randomly initializing the remaining weights as in BIBREF6. 
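A sketch of the Common Crawl question-mining step described in the Question Data paragraph above: keep sentences that start with a "wh"-word and end with "?", then route each one into the simple corpus S or the hard corpus Q with a trained FastText classifier. The label names, file path, and routing rules are illustrative assumptions; only the overall procedure follows the text, and the fasttext Python package is assumed to be available.
import fasttext  # assumes the fasttext Python package is installed

WH_WORDS = ("what", "who", "whom", "whose", "where", "when", "why", "which", "how")

def looks_like_question(sentence: str) -> bool:
    s = sentence.strip()
    return s.endswith("?") and s.lower().startswith(WH_WORDS)

# fastText supervised format: one "__label__<name> <text>" example per line,
# here built from sampled Common Crawl, SQuAD 2, and HotpotQA questions.
classifier = fasttext.train_supervised(input="question_source_train.txt")

def route_question(sentence: str):
    """Return "S" (single-hop corpus), "Q" (multi-hop corpus), or None (discard)."""
    if not looks_like_question(sentence):
        return None
    label = classifier.predict(sentence.strip())[0][0]
    if label == "__label__squad":
        return "S"
    if label == "__label__hotpotqa":
        return "Q"
    return None  # classified as generic Common Crawl text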
<<</Pre-training>>> <<<Seq2Seq>>> We fine-tune the pre-trained encoder-decoder using maximum likelihood. We stop training based on validation BLEU BIBREF19 between generated decompositions and pseudo-decompositions. <<</Seq2Seq>>> <<<USeq2Seq>>> We follow the approach by BIBREF6 in unsupervised translation. Training follows two stages: (1) MLM pre-training on the training corpora (described above), followed by (2) training simultaneously with denoising and back-translation objectives. For denoising, we produce a noisy input $\hat{d}$ by randomly masking, dropping, and locally shuffling tokens in $d \sim D$, and we train a model with parameters $\theta $ to maximize $\log p_{\theta }(d | \hat{d})$. We likewise maximize $\log p_{\theta }(q | \hat{q})$. For back-translation, we generate a multi-hop question $\hat{q}$ for a decomposition $d \sim D$, and we maximize $\log p_{\theta }(d | \hat{q})$. Similarly, we maximize $\log p_{\theta }(q | \hat{d})$. To stop training without supervision, we use a modified version of round-trip BLEU BIBREF17 (see Appendix §SECREF56 for details). We train with denoising and back-translation on smaller corpora of HotpotQA questions ($Q$) and their pseudo-decompositions ($D$). <<</USeq2Seq>>> <<</Unsupervised Decomposition Models>>> <<</Unsupervised Decomposition>>> <<<Single-hop Question Answering Model>>> We train our single-hop QA model following prior work from BIBREF3 on HotpotQA. <<<Model Architecture>>> We fine-tune a pre-trained model to take a question and several paragraphs and predicts the answer, similar to the single-hop QA model from BIBREF21. The model computes a separate forward pass on each paragraph (with the question). For each paragraph, the model learns to predict the answer span if the paragraph contains the answer and to predict “no answer” otherwise. We treat yes and no predictions as spans within the passage (prepended to each paragraph), as in BIBREF22 on HotpotQA. During inference, for the final softmax, we consider all paragraphs as a single chunk. Similar to BIBREF23, we subtract a paragraph's “no answer” logit from the logits of all spans in that paragraph, to reduce or increase span probabilities accordingly. In other words, we compute the probability $p(s_p)$ of each span $s_p$ in a paragraph $p \in \lbrace 1, \dots , P \rbrace $ using the predicted span logit $l(s_p)$ and “no answer” paragraph logit $n(p)$ as follows: We use $\textsc {RoBERTa}_{\textsc {LARGE}}$ BIBREF24 as our pre-trained initialization. Later, we also experiment with using the $\textsc {BERT}_{\textsc {BASE}}$ ensemble from BIBREF3. <<</Model Architecture>>> <<<Training Data and Ensembling>>> Similar to BIBREF3, we train an ensemble of 2 single-hop QA models using data from SQuAD 2 and HotpotQA questions labeled as “easy” (single-hop). To ensemble, we average the logits of the two models before predicting the answer. SQuAD is a single-paragraph QA task, so we adapt SQuAD to the multi-paragraph setting by retrieving distractor paragraphs from Wikipedia for each question. We use the TFIDF retriever from DrQA BIBREF25 to retrieve 2 distractor paragraphs, which we add to the input for one model in the ensemble. We drop words from the question with a 5% probability to help the model handle any ill-formed sub-questions. We use the single-hop QA ensemble as a black-box model once trained, never training the model on multi-hop questions. 
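The span-scoring equation referenced in the Model Architecture paragraph above ("as follows:") is not reproduced in this excerpt. From the surrounding description it is presumably a single softmax over all paragraphs' span logits after subtracting each paragraph's "no answer" logit, i.e. roughly $p(s_p) \propto \exp (l(s_p) - n(p))$. A small NumPy sketch under that assumption:
import numpy as np

def span_probabilities(span_logits, no_answer_logits):
    """span_logits: list of 1-D arrays, one per paragraph, holding l(s_p) for each span.
    no_answer_logits: one "no answer" logit n(p) per paragraph.
    Returns per-paragraph span probabilities from one softmax over all paragraphs."""
    adjusted = [l - n for l, n in zip(span_logits, no_answer_logits)]  # l(s_p) - n(p)
    flat = np.concatenate(adjusted)
    flat -= flat.max()                              # numerical stability
    probs = np.exp(flat) / np.exp(flat).sum()       # softmax over every span in every paragraph
    sizes = np.cumsum([len(a) for a in adjusted])[:-1]
    return np.split(probs, sizes)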
<<</Training Data and Ensembling>>> <<<Returned Text>>> We have the single-hop QA model return the sentence containing the model's predicted answer span, alongside the sub-questions. Later, we compare against alternatives, i.e., returning the predicted answer span without its context or not returning sub-questions. <<</Returned Text>>> <<<Sub-Answer Confidence>>> Figure FIGREF46 (right) shows that the model's sub-answer confidence correlates with downstream multi-hop QA performance for all HotpotQA dev sets. A low confidence sub-answer may be indicative of (i) an unanswerable or ill-formed sub-question or (ii) a sub-answer that is more likely to be incorrect. In both cases, the single-hop QA model is less likely to retrieve the useful supporting evidence to answer the multi-hop question. <<</Sub-Answer Confidence>>> <<<Changing the Single-hop QA Model>>> We find that our approach is robust to the single-hop QA model that answers sub-questions. We use the $\textsc {BERT}_{\textsc {BASE}}$ ensemble from BIBREF3 as the single-hop QA model. The model performs much worse compared to our $\textsc {RoBERTa}_{\textsc {LARGE}}$ single-hop ensemble when used directly on HotpotQA (56.3 vs. 66.7 F1). However, the model results in comparable QA when used to answer single-hop sub-questions within our larger system (79.9 vs. 80.1 F1 for our $\textsc {RoBERTa}_{\textsc {LARGE}}$ ensemble). <<</Changing the Single-hop QA Model>>> <<</Single-hop Question Answering Model>>> <<<Multi-hop Question Answering Model>>> Our multi-hop QA architecture is identical to the single-hop QA model, but the multi-hop QA model also uses sub-questions and sub-answers as input. We append each (sub-question, sub-answer) pair in order to the multi-hop question along with separator tokens. We train one multi-hop QA model on all of HotpotQA, also including SQuAD 2 examples used to train the single-hop QA model. Later, we experiment with using $\textsc {BERT}_{\textsc {LARGE}}$ and $\textsc {BERT}_{\textsc {BASE}}$ instead of $\textsc {RoBERTa}_{\textsc {LARGE}}$ as the multi-hop QA model. All reported error margins show the mean and std. dev. across 5 multi-hop QA training runs using the same decompositions. <<<Varying the Base Model>>> To understand how decompositions impact performance as the multi-hop QA model gets stronger, we vary the base pre-trained model. Table shows the impact of adding decompositions to $\textsc {BERT}_{\textsc {BASE}}$ , $\textsc {BERT}_{\textsc {LARGE}}$ , and finally $\textsc {RoBERTa}_{\textsc {LARGE}}$ (see Appendix §SECREF64 for hyperparameters). The gain from using decompositions grows with strength of the multi-hop QA model. Decompositions improve QA by 1.2 F1 for a $\textsc {BERT}_{\textsc {BASE}}$ model, by 2.6 F1 for the stronger $\textsc {BERT}_{\textsc {LARGE}}$ model, and by 3.1 F1 for our best $\textsc {RoBERTa}_{\textsc {LARGE}}$ model. <<</Varying the Base Model>>> <<</Multi-hop Question Answering Model>>> <<</Experimental Setup>>> <<<Results on Question Answering>>> We compare variants of our approach that use different learning methods and different pseudo-aligned training sets. As a baseline, we compare RoBERTa with decompositions to a RoBERTa model that does not use decompositions but is identical in all other respects. 
We train the baseline for 2 epochs, sweeping over batch size $\in \lbrace 64, 128\rbrace $, learning rate $\in \lbrace 1 \times 10^{-5}, 1.5 \times 10^{-5}, 2 \times 10^{-5}, 3 \times 10^{-5}\rbrace $, and weight decay $\in \lbrace 0, 0.1, 0.01, 0.001\rbrace $; we choose the hyperparameters that perform best on our dev set. We then use the best hyperparameters for the baseline to train our RoBERTa models with decompositions. We report results on 3 versions of the dev set: (1) the original version, (2) the multi-hop version from BIBREF4 which created some distractor paragraphs adversarially to test multi-hop reasoning, and (3) the out-of-domain version from BIBREF3 which retrieved distractor paragraphs using the same procedure as the original version, but excluded paragraphs in the original version. <<<Main Results>>> Table shows how unsupervised decompositions affect QA. Our RoBERTa baseline performs quite well on HotpotQA (77.0 F1), despite processing each paragraph separately, which prohibits inter-paragraph reasoning. The result is in line with prior work which found that a version of our baseline QA model using BERT BIBREF26 does well on HotpotQA by exploiting single-hop reasoning shortcuts BIBREF21. We achieve significant gains over our strong baseline by leveraging decompositions from our best decomposition model, trained with USeq2Seq on FastText pseudo-decompositions; we find a 3.1 F1 gain on the original dev set, 11 F1 gain on the multi-hop dev set, and 10 F1 gain on the out-of-domain dev set. Unsupervised decompositions even match the performance of using (within our pipeline) supervised and heuristic decompositions from DecompRC (i.e., 80.1 vs. 79.8 F1 on the original dev set). More generally, all decomposition methods improve QA over the baseline by leveraging the single-hop QA model (“1hop” in Table ). Using FastText pseudo-decompositions as sub-questions directly improves QA over using random sub-questions on the multi-hop set (72.4 vs. 70.9 F1) and out-of-domain set (72.0 vs. 70.7 F1). USeq2Seq on random pseudo-decompositions also improves over the random sub-question baseline (e.g., 79.8 vs. 78.4 F1 on HotpotQA). However, we only find small improvements when training USeq2Seq on FastText vs. Random pseudo-decompositions (e.g., 77.1 vs. 76.5 F1 on the out-of-domain dev set). The best decomposition methods learn with USeq2Seq. Using Seq2Seq to generate decompositions gives similar QA accuracy as the “No Learning” setup, e.g. both approaches achieve 78.9 F1 on the original dev set for FastText pseudo-decompositions. The results are similar perhaps since supervised learning is directly trained to place high probability on pseudo-decompositions. USeq2Seq may improve over Seq2Seq by learning to align hard questions and pseudo-decompositions while ignoring the noisy pairing. After our experimentation, we chose USeq2Seq trained on FastText pseudo-decompositions as the final model, and we submitted the model for hidden test evaluation. Our approach achieved a test F1 of 79.34 and Exact Match (EM) of 66.33. Our approach is competitive with concurrent, state-of-the-art systems SAE BIBREF7 and HGN BIBREF8, which both (unlike our approach) learn from additional, strong supervision about which sentences are necessary to answer the question. <<</Main Results>>> <<<Question Type Breakdown>>> To understand where decompositions help, we break down QA performance across 4 question types from BIBREF3. 
“Bridge” questions ask about an entity not explicitly mentioned in the question (“When was Erik Watts' father born?”). “Intersection” questions ask to find an entity that satisfies multiple separate conditions (“Who was on CNBC and Fox News?”). “Comparison” questions ask to compare a property of two entities (“Which is taller, Momhil Sar or K2?”). “Single-hop” questions are likely answerable using single-hop shortcuts or single-paragraph reasoning (“Where is Electric Six from?”). We split the original dev set into the 4 types using the supervised type classifier from BIBREF3. Table shows F1 scores for RoBERTa with and without decompositions across the 4 types. Unsupervised decompositions improve QA across all question types. Our single decomposition model generates useful sub-questions for all question types without special case handling, unlike earlier work from BIBREF3 which handled each question type separately. For single-hop questions, our QA approach does not require falling back to a single-hop QA model and instead learns to leverage decompositions to better answer questions with single-hop shortcuts (76.9 vs. 73.9 F1 without decompositions). <<</Question Type Breakdown>>> <<<Answers to Sub-Questions are Crucial>>> To measure the usefulness of sub-questions and sub-answers, we train the multi-hop QA model with various, ablated inputs, as shown in Table . Sub-answers are crucial to improving QA, as sub-questions with no answers or random answers do not help (76.9 vs. 77.0 F1 for the baseline). Only when sub-answers are provided do we see improved QA, with or without sub-questions (80.1 and 80.2 F1, respectively). It is important to provide the sentence containing the predicted answer span instead of the answer span alone (80.1 vs. 77.8 F1, respectively), though the answer span alone still improves over the baseline (77.0 F1). <<</Answers to Sub-Questions are Crucial>>> <<<How Do Decompositions Help?>>> Decompositions help to answer questions by retrieving important supporting evidence to answer questions. Fig. FIGREF41 shows that multi-hop QA accuracy increases when the sub-answer sentences are the “supporting facts” or sentences needed to answer the question, as annotated by HotpotQA. We retrieve supporting facts without learning to predict them with strong supervision, unlike many state-of-the-art models BIBREF7, BIBREF8, BIBREF22. <<</How Do Decompositions Help?>>> <<<Example Decompositions>>> To illustrate how decompositions help QA, Table shows example sub-questions from our best decomposition model with predicted sub-answers. Sub-questions are single-hop questions relevant to the multi-hop question. The single-hop QA model returns relevant sub-answers, sometimes in spite of grammatical errors (Q1, SQ$_1$) or under-specified questions (Q2, SQ$_1$). The multi-hop QA model then returns an answer consistent with the predicted sub-answers. The decomposition model is largely extractive, copying from the multi-hop question rather than hallucinating new entities, which helps generate relevant sub-questions. To better understand our system, we analyze the model for each stage: decomposition, single-hop QA, and multi-hop QA. <<</Example Decompositions>>> <<</Results on Question Answering>>> <<<Analysis>>> <<<Unsupervised Decomposition Model>>> <<<Intrinsic Evaluation of Decompositions>>> We evaluate the quality of decompositions on other metrics aside from downstream QA. 
To measure the fluency of decompositions, we compute the likelihood of decompositions using the pre-trained GPT-2 language model BIBREF27. We train a classifier on the question-wellformedness dataset of BIBREF28, and we use the classifier to estimate the proportion of sub-questions that are well-formed. We measure how abstractive decompositions are by computing (i) the token Levenstein distance between the multi-hop question and its generated decomposition and (ii) the ratio between the length of the decomposition and the length of the multi-hop question. We compare our best decomposition model against the supervised+heuristic decompositions from DecompRC BIBREF3 in Table . Unsupervised decompositions are both more natural and well-formed than decompositions from DecompRC. Unsupervised decompositions are also closer in edit distance and length to the multi-hop question, consistent with our observation that our decomposition model is largely extractive. <<</Intrinsic Evaluation of Decompositions>>> <<<Quality of Decomposition Model>>> Another way to test the quality of the decomposition model is to test if the model places higher probability on decompositions that are more helpful for downstream QA. We generate $N=5$ hypotheses from our best decomposition model using beam search, and we train a multi-hop QA model to use the $n$th-ranked hypothesis as a question decomposition (Fig. FIGREF46, left). QA accuracy decreases as we use lower probability decompositions, but accuracy remains relatively robust, at most decreasing from 80.1 to 79.3 F1. The limited drop suggests that decompositions are still useful if they are among the model's top hypotheses, another indication that our model is trained well for decomposition. <<</Quality of Decomposition Model>>> <<</Unsupervised Decomposition Model>>> <<</Analysis>>> <<<Related Work>>> Answering complicated questions has been a long-standing challenge in natural language processing. To this end, prior work has explored decomposing questions with supervision or heuristic algorithms. IBM Watson BIBREF29 decomposes questions into sub-questions in multiple ways or not at all. DecompRC BIBREF3 largely frames sub-questions as extractive spans of a multi-hop question, learning to predict span-based sub-questions via supervised learning on human annotations. In other cases, DecompRC decomposes a multi-hop question using a heuristic algorithm, or DecompRC does not decompose at all. Watson and DecompRC use special case handling to decompose different questions, while our algorithm is fully automated and requires minimal hand-engineering. More traditional, semantic parsing methods map questions to compositional programs, whose sub-programs can be viewed as question decompositions in a formal language BIBREF2, BIBREF30. Examples include classical QA systems like SHRDLU BIBREF31 and LUNAR BIBREF32, as well as neural Seq2Seq semantic parsers BIBREF33 and neural module networks BIBREF34, BIBREF35. Such methods usually require strong, program-level supervision to generate programs, as in visual QA BIBREF36 and on HotpotQA BIBREF37. Some models use other forms of strong supervision, e.g. predicting the “supporting evidence” to answer a question annotated by HotpotQA. Such an approach is taken by SAE BIBREF7 and HGN BIBREF8, whose methods may be combined with our approach. Unsupervised decomposition complements strongly and weakly supervised decomposition approaches. 
Our unsupervised approach enables methods to leverage millions of otherwise unusable questions, similar to work on unsupervised QA BIBREF11. When decomposition examples exist, supervised and unsupervised learning can be used in tandem to learn from both labeled and unlabeled examples. Such semi-supervised methods outperform supervised learning for tasks like machine translation BIBREF38. Other work on weakly supervised question generation uses a downstream QA model's accuracy as a signal for learning to generate useful questions. Weakly supervised question generation often uses reinforcement learning BIBREF39, BIBREF40, BIBREF41, BIBREF42, BIBREF43, where an unsupervised initialization can greatly mitigate the issues of exploring from scratch BIBREF44. <<</Related Work>>> <<<Conclusion>>> We proposed an algorithm that decomposes questions without supervision, using 3 stages: (1) learning to decompose using pseudo-decompositions without supervision, (2) answering sub-questions with an off-the-shelf QA system, and (3) answering hard questions more accurately using sub-questions and their answers as additional input. When evaluated on HotpotQA, a standard benchmark for multi-hop QA, our approach significantly improved accuracy over an equivalent model that did not use decompositions. Our approach relies only on the final answer as supervision but works as effectively as state-of-the-art methods that rely on strong supervision, such as supporting fact labels or example decompositions. Qualitatively, we found that unsupervised decomposition resulted in fluent sub-questions whose answers often match the annotated supporting facts in HotpotQA. Our unsupervised decompositions are largely extractive, which is effective for compositional, multi-hop questions but not all complex questions, showing room for future work. Overall, this work opens up exciting avenues for leveraging methods in unsupervised learning and natural language generation to improve the interpretability and generalization of machine learning systems. <<</Conclusion>>> <<</Title>>>
{ "references": [ "RoBERTa baseline" ], "type": "extractive" }
1912.08320
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What are the key issues around whether the gold standard data produced in such an annotation is reliable? Context: <<<Title>>> Garbage In, Garbage Out? Do Machine Learning Application Papers in Social Computing Report Where Human-Labeled Training Data Comes From? <<<Abstract>>> Many machine learning projects for new application areas involve teams of humans who label data for a particular purpose, from hiring crowdworkers to the paper's authors labeling the data themselves. Such a task is quite similar to (or a form of) structured content analysis, which is a longstanding methodology in the social sciences and humanities, with many established best practices. In this paper, we investigate to what extent a sample of machine learning application papers in social computing --- specifically papers from ArXiv and traditional publications performing an ML classification task on Twitter data --- give specific details about whether such best practices were followed. Our team conducted multiple rounds of structured content analysis of each paper, making determinations such as: Does the paper report who the labelers were, what their qualifications were, whether they independently labeled the same items, whether inter-rater reliability metrics were disclosed, what level of training and/or instructions were given to labelers, whether compensation for crowdworkers is disclosed, and if the training data is publicly available. We find a wide divergence in whether such practices were followed and documented. Much of machine learning research and education focuses on what is done once a "gold standard" of training data is available, but we discuss issues around the equally-important aspect of whether such data is reliable in the first place. <<</Abstract>>> <<<Introduction>>> Machine learning (ML) has become widely used in many academic fields, as well as across the private and public sector. Supervised machine learning is particularly prevalent, in which training data is collected for a set of entities with known properties (a “ground truth” or “gold standard”), which is used to create a classifier that will make predictions about new entities of the same type. Supervised ML requires high-quality training data to produce high-quality classifiers. “Garbage In, Garbage Out” is a longstanding aphorism in computing about how flawed input data or instructions will produce flawed outputs. BIBREF0, BIBREF1 However, contemporary ML research and education tends to focus less on obtaining and validating such a training dataset, with such considerations often passed over in major textbooks BIBREF2, BIBREF3, BIBREF4. The predominant focus is typically on what is done with the training data to produce a classifier, with heavy emphasis on mathematical foundations and routine use of clean and tidy “toy” datasets. The process of creating a “gold standard” or “ground truth” dataset is routinely black-boxed. Many papers in ML venues are expected to use a standard, public training dataset, with authors comparing various performance metrics on the same dataset. While such a focus on what is done to a training dataset may be appropriate for theoretically-oriented basic research in ML, this is not the case for supervised ML applications. <<<Study overview>>> All approaches of producing a training dataset involve some form of human judgment, albeit at varying levels of granularity. 
In this paper, we investigate and discuss a wide range of issues and concerns around the curation of human-labeled or human-annotated data, in which one or more individuals make discrete assessments of items. We report from a study in which a team of six labelers systematically examined a corpus of supervised machine learning application papers in social computing, specifically those that classified tweets from Twitter for various purposes. For each paper, we recorded what the paper does or does not state about the training data used to produce the classifier presented in the paper. The bulk of the papers we examined were a sample of preprints or postprints published on ArXiV.org, plus a smaller set of published papers sampled from Scopus. We determined whether such papers involved an original classification task using supervised ML, whether the training data labels were produced from human annotation, and if so, the source of the human-labeled dataset (e.g. the paper's authors, Mechanical Turk, recruited experts, no information given, etc.). For all papers in which an original human-labeled dataset was produced, we then made a series of further determinations, including if definitions and/or examples were given to labelers, if labelers independently labeled the same items, if inter-rater reliability metrics were presented, if compensation details for crowdworkers were reported, if a public link to the dataset was available, and more. As our research project was a human-labeling project studying other human-labeling projects, we took care in our own practices. We only have access to the paper reporting about the study and not the actual study itself, and many papers either do not discuss such details at all or without sufficient detail to make a determinations. For example, many papers did note that the study involved the creation of an original human-labeled dataset, but did not specify who labeled it. For some of our items, one of the most common labels we gave was “no information” — which is a concerning issue, given how crucial such information is in understanding the validity of the training dataset and by extension, the validity of the classifier. <<</Study overview>>> <<</Introduction>>> <<<Literature review and motivation>>> <<<A different kind of “black-boxing” in machine learning>>> In the introduction, we noted training data is frequently black-boxed in machine learning research and applications. We use the term “black-boxed” in a different way than it is typically invoked in and beyond the FAT* community, where often refers to interpretability. In that sense, “black-boxing” means that even for experts who have access to the training data and code which created the classifier, it is difficult to understand why the classifier made each decision. In social science and humanities work on “black-boxing” of ML (and other “algorithmic” systems), there is often much elision between issues of interpretability and intentional concealment, as Burrell BIBREF5 notes. A major focus is on public accountability BIBREF6, where many problematic issues can occur behind closed doors. This is even the case with relatively simple forms of analytics and automation — such as if-then statements, linear regressions, or rule-based expert systems BIBREF7, BIBREF8. In contrast, we are concerned with what is and is not taken for granted when developing a classifier. This use is closer to how Latour & Woolgar used it in an ethnographic study of scientific laboratories BIBREF9. 
They discuss how equipment like a mass spectrometer would typically be implicitly trusted to turn samples into signals. However, when the results were drastically unexpected, it could be a problem with the machine or a fundamental breakthrough. Scientists and technicians would have to “open up the black box,” changing their relationship to the equipment to determine if the problem was with the equipment or the prevailing theory. In this view, black-boxing is a relational concept, not an objective property. It is about the orientation people have to the same social-technical systems they routinely work with and rely upon. “Opening up the black box” is not about digging into technical or internal details per se, but a gestalt shift in whether the output of a system is implicitly taken for granted or open for further investigation. In this view, black-boxing is not inherently problematic. The question is more about who gets to be skeptical about data and who is obligated to suspend disbelief, which are also raised in discussions of open science & reproducibility BIBREF10. Operationalization, measurement, and construct validity have long been crucial and contested topics in the social sciences. Within quantitative sub-fields, it is common to have extensive debates about the best way to define and measure a complex concept (e.g. “intelligence”). From a qualitative and Science & Technology Studies perspective, there is extensive work on the practices and implications of various regimes of measurement BIBREF11, BIBREF12, BIBREF13, BIBREF14. In ML, major operationalization decisions can implicitly occur in data labeling. Yet as Jacobs & Wallach note, “[i]n computer science, it is particularly rare to articulate the distinctions between constructs and their operationalizations” BIBREF15. This is concerning, because “many well-studied harms [in ML] are direct results of a mismatch between the constructs purported to be measured and their operationalizations” BIBREF15. <<</A different kind of “black-boxing” in machine learning>>> <<<Content analysis>>> Creating human-labeled training datasets for machine learning often looks like content analysis, a well-established methodology in the humanities and the social sciences (particularly literature, communication studies, and linguistics), which also has versions used in the life, ecological, and medical sciences. Content analysis has taken many forms over the past century, from more positivist methods that formally establish structural ways of evaluating content to more interpretivist methods that embrace ambiguity and multiple interpretations, such as grounded theory BIBREF16. The intersection of ML and interpretivist approaches is outside of the scope of this article, but it is an emerging area of interest BIBREF17. Today, structured content analysis (also called “closed coding”) is used to turn qualitative or unstructured data of all kinds into structured and/or quantitative data, including media texts, free-form survey responses, interview transcripts, and video recordings. Projects usually involve teams of “coders” (also called “annotators”, “labelers”, or “reviewers”), with human labor required to “code”, “annotate”, or “label” a corpus of items. (Note that we use such terms interchangeably in this paper.) In one textbook, content analysis is described as a “systematic and replicable” BIBREF18 method with several best practices: A “coding scheme” is defined, which is a set of labels, annotations, or codes that items in the corpus may have. 
Schemes include formal definitions or procedures, and often include examples, particularly for borderline cases. Next, coders are trained with the coding scheme, which typically involves interactive feedback. Training sometimes results in changes to the coding scheme, in which the first round becomes a pilot test. Then, annotators independently review at least a portion of the same items throughout the entire process, with a calculation of “inter-annotator agreement” or “inter-rater reliability.” Finally, there is a process of “reconciliation” for disagreements, which is sometimes by majority vote without discussion and other times discussion-based. Structured content analysis is a difficult, complicated, and labor-intensive process, requiring many different forms of expertise on the part of both the coders and those who manage them. Historically, teams of students have often performed such work. With the rise of crowdwork platforms like Amazon Mechanical Turk, crowdworkers are often used for content analysis tasks, which are often similar to other kinds of common crowdworking tasks. Google's reCAPTCHA BIBREF19 is a Turing test in which users perform annotation tasks to prove their humanness — which initially involved transcribing scanned phrases from books, but now involves image labeling for autonomous vehicles. There are major qualitative data analysis software tools that scaffold the content analysis process to varying degrees, such as MAXQDA or NVivo, which have support for inter-annotator agreement metrics. There have also been many new software platforms developed to support more micro-level annotation or labeling at scale, including in citizen science, linguistics, content moderation, and more general-purpose use cases BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25. For example, the Zooniverse BIBREF26 provides a common platform for citizen science projects across different domain application areas, which let volunteers make judgements about items, which are aggregated and reconciled in various ways. <<</Content analysis>>> <<<Meta-research and methods papers in linguistics and crowdsourcing>>> Our paper is also in conversation with various meta-research and standardization efforts in linguistics, crowdsourcing, and other related disciplines. Linguistics and Natural Language Processing have long struggled with issues around standardization and reliability of linguistic tagging. Linguistics researchers have long developed best practices for corpus annotation BIBREF27, including recent work about using crowdworkers BIBREF28. Annotated corpus projects often release guidelines and reflections about their process. For example, the Linguistic Data Consortium's guidelines for annotation of English-language entities (version 6.6) is 72 single-spaced pages BIBREF29. A universal problem of standardization is that there are often too many standards and not enough enforcement. As BIBREF30 notes, 33-81% of linguistics/NLP papers in various venues do not even mention the name of the language being studied (usually English). A meta-research study found only 1 in 9 qualitative papers in Human-Computer Interaction reported inter-rater reliability metrics BIBREF31. 
Another related area are meta-research and methods papers focused on identifying or preventing low-effort responses from crowdworkers — sometimes called “spam” or “random” responses, or alternatively ”fraudsters” or ”cheaters.” Rates of “self-agreement” are often used, determining if the same person labels the same item differently at a later stage. One paper BIBREF32 examined 17 crowdsourced datasets for sentiment analysis and found none had self-agreement rates (Krippendorf's alpha) above 0.8, with some lower than 0.5. Another paper recommends the self-agreement strategy in conjunction with asking crowdworkers to give a short explanation of their response, even if the response is never actually examined. BIBREF33. One highly-cited paper BIBREF34 proposes a strategy in which crowdworkers are given some items with known labels (a gold/ground truth), and those who answer incorrectly are successively given more items with known labels, with a Bayesian approach to identifying those who are answering randomly. <<</Meta-research and methods papers in linguistics and crowdsourcing>>> <<<The data documentation movements>>> Our paper is also in conversation with two related movements in computationally-supported knowledge production that have surfaced issues around documentation. First, we see connections with the broader open science and reproducibility movements. Open science is focused on a range of strategies, including open access research publications, educational materials, software tools, datasets, and analysis code BIBREF35. The reproducibility movement is deeply linked to the open science movement, focusing on getting researchers to release everything that is necessary for others to perform the same tasks needed to get the exact same results BIBREF36, BIBREF10. This increasingly includes pushing for high standards for releasing protocols, datasets, and analysis code. As more funders and journals are requiring releasing data, the issue of good documentation for data and protocols is rising BIBREF37, BIBREF38. There are also intersecting literatures on systems for capturing information in ML data flows and supply chains BIBREF39, BIBREF40, BIBREF41, as well as supporting data cleaning BIBREF42, BIBREF43. These issues have long been discussed in the fields of library and information science, particularly in Research Data Management BIBREF44, BIBREF45, BIBREF46, BIBREF47. A major related movement is in and around the FATML field, with many recent papers proposing training data documentation in the context of ML. Various approaches, analogies, and metaphors have been taken in this area, including “datasheets for datasets” BIBREF48, ”model cards” BIBREF49, “data statements” BIBREF30, “nutrition labels” BIBREF50, a “bill of materials” BIBREF51, “data labels” BIBREF52, and “supplier declarations of conformity” BIBREF53. Many go far beyond the concerns we have raised around human-labeled training data, as some are also (or primarily) concerned with documenting other forms of training data, model performance and accuracy, bias, considerations of ethics and potential impacts, and more. We discuss how our findings relate to this broader emerging area more in the concluding discussion. 
<<</The data documentation movements>>> <<</Literature review and motivation>>> <<<Data and methods>>> <<<Data: machine learning papers performing classification tasks on Twitter data>>> Our goal was to find a corpus of papers that were using original human annotation or labeling to produce a new training dataset for supervised ML. We restricted our corpus to papers whose classifiers were trained on data from Twitter, for various reasons: First, we did attempt to produce a broader corpus of supervised ML application papers, but found our search queries in academic search engines would either 1) be so broad that most papers were non-applied / theoretical papers or papers re-using public pre-labeled datasets; or 2) that the results were so narrow they excluded many canonical papers in this area, which made us suspect that they were non-representative samples. Sampling to papers using Twitter data has strategic benefits for this kind of initial study. Data from Twitter is of interest to scholars from a variety of disciplines and topical interest areas, in addition to those who have an inherent interest in Twitter as a social media site. As we detail in appendix section SECREF45, the papers represented political science, public health, NLP, sentiment analysis, cybersecurity, content moderation, hate speech, information quality, demographic profiling, and more. We drew the main corpus of ML application papers from ArXiV, the oldest and most established “preprint” repositories, originally for researchers to share papers prior to peer review. Today, ArXiV is widely used to share both drafts of papers that have not (yet) passed peer review (“preprints”) and final versions of papers that have passed peer review (often called “postprints”). Users submit to any number of disciplinary categories and subcategories. Subcategory moderators perform a cursory review to catch spam, blatant hoaxes, and miscategorized papers, but do not review papers for soundness or validity. We sampled all papers published in the Computer Science subcategories of Artificial Intelligence (cs.AI), Machine Learning (cs.LG), Social and Information Networks (cs.SI), Computational Linguistics (cs.CL), Computers and Society (cs.CY), Information Retrieval (cs.IR), and Computer Vision (CS.CV), the Statistics subcategory of Machine Learning (stat.ML), and Social Physics (physics.soc-ph). We filtered for papers in which the title or abstract included at least one of the words “machine learning”, “classif*”, or “supervi*” (case insensitive). We then filtered to papers in which the title or abstract included at least “twitter” or “tweet” (case insensitive), which resulted in 494 papers. We used the same query on Elsevier's Scopus database of peer-reviewed articles, selecting 30 randomly sampled articles, which mostly selected from conference proceedings. One paper from the Scopus sample was corrupted, so only 29 papers were examined. ArXiV is likely not a representative sample of all ML publications. However, we chose it because ArXiV papers are widely accessible to the public, indexed in Google Scholar and other scholarly databases, and are generally considered citeable publications. The fact that many ArXiV papers are not peer-reviewed and that papers posted are not likely representative samples of ML research is worth considering when reflecting on the generalizability of our findings. 
However, given that such papers are routinely discussed in both academic literature and the popular press means that issues with their reporting of training data is just as crucial. Sampling from ArXiv also lets us examine papers at various stages in the peer-review cycle, breaking out preprints not (yet) published, preprints of later published papers, and postprints of published works. The appendix details both corpora, including an analysis of the topics and fields of papers (in SECREF47), an analysis of the publishers and publication types (e.g. an early preprint of a journal article, a final postprint of a conference proceeding, a preprint never published) and publishers (in SECREF50 and SECREF47). The final dataset can be found on GitHub and Zenodo. <<</Data: machine learning papers performing classification tasks on Twitter data>>> <<<Labeling team, training, and workflow>>> Our labeling team included one research scientist who led the project (RSG) and undergraduate research assistants, who worked for course credit as part of an university-sponsored research experience program (KY, YY, MD, JQ, RT, and JH). The project began with five students for one semester, four of whom continued on the project for the second semester. A sixth student replaced the student who did not continue. All students had some coursework in computer science and/or data science, with a range of prior experience in machine learning in both a classroom and applied setting. Students' majors and minors included Electrical Engineering & Computer Science, Data Science, Statistics, and Linguistics. The labeling workflow was that each week, a set of papers were randomly sampled each week from the unlabled set of 494 ArXiV papers in the corpus. For two weeks, the 30 sampled papers from Scopus were selected. The five students independently reviewed and labeled the same papers each week, using a different web-based spreadsheet to record labels. The team leader synthesized labels and identified disagreement. The team met in person each week to discuss cases of disagreement, working to build a consensus about the proper label (as opposed to purely majority vote). The team leader facilitated these discussions and had the final say when a consensus could not be reached. The papers labeled for the first two weeks were in a training period, in which the team worked on a different set of papers not included in the dataset. In these initial weeks, the team learned the coding schema and the reconciliation process, which were further refined. <<</Labeling team, training, and workflow>>> <<<Second round verification and reconciliation>>> After 164 papers were labeled by five annotators, we conducted a second round of verification. This was necessary both because there were some disagreements in labeling and changes made to the coding schema (discussed in appendix SECREF54). All labels for all 164 papers were independently re-examined by at least two of the six team members. Annotators were given a summary of the original labels in the first round and were instructed to review all papers, being mindful of how the schema and instructions had changed. We then aggregated, reconciled, and verified labels in the same way as in the first round. For papers where there was no substantive disagreement on any question between those who re-examined it in the second round, the paper's labels were considered to be final. 
For papers where there was any substantive disagreement on any question, the paper was either discussed to consensus in the same manner as in the first round or decided by the team leader. The final schema and instructions are in the appendix, section SECREF57. Finally, we cleaned up issues with labels around implicit or blank values using rule-based scripts. We learned our process involved some ambiguities around whether a subsequent value needed to be filled in. For example, if a paper was not using crowdworkers, then the instructions for our schema were that the question about crowdworker compensation was to remain blank. However, we found we had cases where “reported crowdworker compensation” was “no” for papers that did not use crowdworkers. This would be concerning had we had a “yes” for such a variable, but found no such cases. We recoded questions about pre-screening for crowdwork platforms (implied by using crowdworkers in original human annotation source) and the number of human annotators. We measured interrater reliability metrics using mean percent total agreement, or the proportion of cases where all labelers initially gave the same label. This is a more stringent metric than Fleiss's kappa and Krippendorf's alpha, and our data does not fit the assumptions for those widely-used metrics. IRR rates for round one were relatively low: across all questions, the mean percent total agreement was 66.67%, with the lowest question having a rate of 38.2%. IRR rates for round two were quite higher: the mean percent total agreement across all questions was 84.80% and the lowest agreement score was 63.4% (for “used external human annotation”, which we discuss later). We are confident about our labeling process, especially because these individual ratings were followed by an expert-adjudicated discussion-based reconciliation process, rather than simply counting majority votes. We detail more information and reflection about interrater reliability in appendix section SECREF52. <<</Second round verification and reconciliation>>> <<<Raw and normalized information scores>>> We quantified the information about training data in papers, developing a raw and normalized information score, as different studies demanded different levels of information. For example, our question about whether inter-annotator agreement metrics were reported is only applicable for papers involving multiple annotators. Our questions about whether prescreening was used for crowdwork platforms or whether crowdworker compensation was reported is only relevant for projects using crowdworkers. However, some kinds of information are relevant to all papers that involve original human annotation: who the annotators are (annotation source), annotator training, formal instructions or definitions were given, the number of annotators involved, whether multiple annotators examined the same items, or a link to a publicly-available dataset. For raw scores, papers involving original human annotation received one point each for reporting the six items mentioned above. In addition, they received one point per question if they included information for each of the two questions about crowdworkers if the project used crowdworkers, and one point if they reported inter-annotator metrics if the project used multiple annotators per item. For the normalized score, the raw score was divided by the highest possible raw score. We only calculated scores for papers involving original human annotation. 
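Two small sketches of the quantities defined in this part of the paper: the mean percent total agreement used for inter-rater reliability, and the raw/normalized information score with its conditional items. Field and variable names are illustrative and do not necessarily match the released dataset's columns; "highest possible raw score" is read here as the per-paper applicable maximum.
def mean_percent_total_agreement(labels_by_question):
    """labels_by_question: {question: [labels given by each annotator, per paper]}.
    For each question, the share of papers where every annotator initially gave
    the same label; the metric is the mean of those shares across questions."""
    rates = [
        sum(len(set(item)) == 1 for item in items) / len(items)
        for items in labels_by_question.values()
    ]
    return sum(rates) / len(rates)

def information_scores(paper):
    """paper: dict of final labels for one paper that used original human annotation.
    Returns (raw_score, normalized_score)."""
    always_applicable = [
        "annotation_source_reported", "annotator_training_reported",
        "instructions_or_definitions_reported", "number_of_annotators_reported",
        "multiple_annotators_per_item_reported", "public_dataset_link_reported",
    ]
    raw = sum(bool(paper.get(f)) for f in always_applicable)
    max_possible = len(always_applicable)
    if paper.get("used_crowdworkers"):              # two extra reportable items
        raw += bool(paper.get("crowdworker_prescreening_reported"))
        raw += bool(paper.get("crowdworker_compensation_reported"))
        max_possible += 2
    if paper.get("multiple_annotators_per_item"):   # IRR reporting only applies here
        raw += bool(paper.get("inter_annotator_agreement_reported"))
        max_possible += 1
    return raw, raw / max_possible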
Finally, we conducted an analysis of information scores by various bibliometric factors, which required determining such factors for all papers. For all ArXiv papers, we determined whether the PDF was a pre-print not (yet) published in another venue, a post-print identical in content to a published version, or a pre-print version of a paper published elsewhere with different content. For all Scopus papers and ArXiv post-prints, we also determined the publisher. We detail these in appendix SECREF47. <<</Raw and normalized information scores>>> <<</Data and methods>>> <<<Findings>>> <<<Original classification task>>> The first question was whether the paper was conducting an original classification task using supervised machine learning. Our keyword-based process of generating the corpus included many papers not in this scope. However, defining the boundaries of supervised ML and classification tasks is difficult, particularly for papers that are long, complex, and ambiguously worded. We found that some papers claimed to be using ML, but when we examined the details, these did not fall under our definition. We defined machine learning broadly, using a common working definition in which machine learning includes any automated process that does not exclusively rely on explicit rules and in which the performance of a task increases with additional data. This includes simple linear regressions, for example, and there is much debate about if and when simple linear regressions are a form of ML. However, as we were also looking for classification tasks, linear regressions were only included if they were used to make a prediction within a set of defined classes. We defined an “original” classifier to mean a classifier the authors made based on new or old data, which excludes the exclusive use of pre-trained classifiers or models. As table TABREF13 shows, the overwhelming majority of papers in our dataset were involved in an original classification task. We placed 5 papers in the “unsure” category — meaning they did not give enough detail for us to make this determination, or that they were complex boundary cases. One of the “unsure” cases clearly used labels from human annotation, and so we answered the subsequent questions, which is why the counts in Table 2 add up to 143 (as well as some other seeming disparities in later questions).
Another set of borderline cases concerned papers where no human annotation was involved in the curation of the training dataset used to build the classifier, but human annotation was used for validation purposes. We did not consider these to involve human annotation as we originally defined it in our schema, even though the same issues arise with equal significance for the validity of such research. <<</Labels from human annotation>>> <<<Used original human annotation and external human annotation>>> Our next two questions were about whether papers that used human annotation used original human annotation, which we defined as a process in which the paper's authors obtained new labels from humans for items. It is common in ML research to re-use public datasets, and many papers in our corpus did so. We also found 10 papers in which external and original human annotation were combined to create a new training dataset. For these reasons, we modified our schema to ask separate questions for original and external human annotation data, to capture all three cases (using only original, only external, or both). Tables TABREF17 and TABREF17 show the breakdown for both questions. We only answered the subsequent questions about the human annotation process for the papers producing an original human annotated dataset. <<</Used original human annotation and external human annotation>>> <<<Original human annotation source>>> Our next question asked who the annotators were, for the 74 papers that used original human annotation. The possible options were: the paper's authors, Amazon Mechanical Turk, other crowdworking platforms, experts/professionals, other, and no information. We took phrases like “we labeled” (with no other details) to be an implicit declaration that the paper's authors did the labeling. If the paper discussed labelers' qualifications for the task beyond an average person, we labeled it as “experts / professionals.” For example, some of our boundary cases involved recruiting students to label sentiment. One study involved labeling tweets with both English and Hindi text and noted that the students were fluent in both languages – which we considered to be in the “experts / professionals” category. Another paper we included in this category recruited students to label tweets with emojis, noting that the recruited students “are knowledgeable with the context of use of emojis.” As table TABREF19 shows, we found a diversity of approaches to the recruitment of human annotators. The plurality of papers involved the paper's authors doing the annotation work themselves. The next highest category was “no information,” which was found in almost a quarter of the papers using original human annotation. The “experts / professionals” category was far more common than we expected, although we took any claim of expertise for granted. Crowdworkers constituted a far smaller proportion than we expected, with Amazon Mechanical Turk and other platforms collectively comprising about 15% of papers. Almost all of the other crowdworking platforms specified were CrowdFlower/FigureEight, with one paper using oDesk. <<</Original human annotation source>>> <<<Number of human annotators>>> Our instructions for the question about the number of human annotators were not precise, and this question had one of the lower levels of inter-rater reliability. If the paper included information about the number of human annotators, the instructions were to record that number and to leave the field blank if no information was given.
Most of the disagreement arose from differences in how papers report the number of annotators used. For example, some papers specified the total number of humans who worked on the project annotating items, while others only specified how many annotators were used per item (particularly for those using crowdworkers), and a few reported both. Some involved a closed set of annotators who all examined the same set of items, similar to how our team operated. Other papers involved an open set of annotators, particularly drawn from crowdworking platforms, but had a consistent number of annotators who reviewed each item. Due to these inconsistencies, we computationally re-coded responses into whether any information about the number of human annotators was present. Both are important aspects to discuss, although it is arguably more important to discuss the number of annotators who reviewed each item. In general, having more annotators review each item provides a more robust way of determining the validity of the entire process, although this also requires calculating inter-annotator agreement metrics. As table TABREF21 shows, a slim majority of papers using original human annotation specified the number of annotators involved in some way. Based on our experiences, papers discussing the number of annotators typically fell into two categories: 1) a small closed team (more often 2-3, sometimes 4-6) that were either the papers' authors or recruited directly by the authors, who tended to perform the same amount of work for the duration of the project; or 2) a medium to large (25-500) open set of annotators, typically but not necessarily recruited through a crowdworking platform, who each performed highly variable amounts of work. <<</Number of human annotators>>> <<<Formal definitions and instructions>>> Our next question was about whether instructions or guidelines with formal definitions or examples were reportedly given to annotators. Formal definitions and concrete examples are both important, as they help annotators understand how the researchers have operationalized the concept in question and determine edge cases. With no or ambiguous definitions/examples, there could be fundamental misunderstandings that are not captured by inter-annotator agreement metrics, if all annotators share the same misunderstanding. We defined two levels: giving no instructions beyond the text of a question, and giving definitions for each label and/or concrete examples. The paper had to describe or refer to the instructions given (or include them in supplemental materials); otherwise, we categorized it as “no information”. Some borderline cases involved authors labeling the dataset themselves, where the paper presented a formal definition, but only implied that it informed the labeling – which we took to be a formal definition. As table TABREF23 shows, the plurality of papers did not provide enough information to make a determination (it is rare for authors to say they did not do something), but 43.2% provided definitions or examples. <<</Formal definitions and instructions>>> <<<Training for human annotators>>> We defined training for human annotators as some kind of interactive process in which the annotators have the opportunity to receive feedback and/or dialogue about the annotation process. We identified this as a distinct category from both the qualifications of the annotators and the instructions given to annotators, which are examined in other questions.
Training typically involved some kind of live session or ongoing meeting in which annotators' progress was evaluated and/or discussed, where annotators had the chance to ask questions or receive feedback on why certain determinations did or did not match definitions or a schema. We used our own team's process as an example of this, and found several papers that used a similar roundtable process, which went into detail about interactions between team members. Cases in which the paper only specified that annotators were given a video or a detailed schema to review were not considered training details, as this was a one-way process and counted as definitions/instructions. The overwhelming majority of papers did not discuss such issues, as table TABREF25 shows, with 15% of papers involving a training session. Because we had a quite strict definition for what constitutes training (versus what many may think of around “trained annotators”), this is expected. We also are not all that concerned with this low number, as there are many tasks that likely do not require specialized training — unlike our project, which required both specific expertise in an area and with our complicated schema. <<</Training for human annotators>>> <<<Pre-screening for crowdwork platforms>>> Crowdwork platforms let employers pre-screen or test for traits, skills, or performance metrics, which significantly narrows the pool of crowdworkers. For example, “project-specific pre-screening” involves offering a sample task with known outcomes: if the crowdworker passed, they would be invited to annotate more items. 5 of the 11 papers using crowdworkers reported using this approach. Platforms also often have location-based screening (e.g. US-only), which 2 papers reported using. Some crowdwork platforms have a qualification for workers who have a positive track record based on total employer ratings (e.g. AMT Master). Platforms also offer generic skills-based tests for certain kinds of work (e.g. CrowdFlower's Skill Tests). These last two qualifications were in our coding schema, but no papers reported using them. <<</Pre-screening for crowdwork platforms>>> <<<Multiple annotator overlap and reporting inter-annotator agreement>>> Our next two questions were about using multiple annotators to review the same items (multiple annotator overlap) and whether inter-annotator agreement metrics were reported. Having multiple independent annotators is typically a foundational best practice in structured content analysis, so that the integrity of the annotations and the schema can be evaluated (although see BIBREF31). For multiple annotator overlap, our definitions required papers state whether all or some of the items were labeled by multiple labelers, otherwise “no information” was recorded. Then, for papers that did multiple annotator overlap, we examined whether any inter-annotator agreement metric was reported. We did find one paper that did not explicitly state that multiple labelers overlapped, but did report inter-annotator agreement metrics. This implicitly means that at least some of the items were labeled by multiple labelers, but for consistency, we keep the “no information” label for this case. We did not record what kind of inter-annotator metric was used, such as Cohen's kappa or Krippendorff's alpha, but many different metrics were used. We also did not record what the exact statistic was, although we did notice a wide variation in what was considered an acceptable or unacceptable score for inter-annotator agreement. 
For multiple annotator overlap, table TABREF29 shows that just under half of all papers that involved an original human annotation task did not provide explicit information one way or the other about whether multiple annotators reviewed each item. This includes the one paper that reported inter-annotator agreement metrics but did not specify whether overlap was for all items or some items. Only three papers explicitly stated that there was no overlap among annotators, and so it is quite likely that the papers that did not specify such information did not engage in such a practice. For the 37 papers that did involve some kind of multiple annotator overlap, the overwhelming majority of this subsample (84%) involved multiple annotation of all items, rather than only some items. We also found that for papers that did involve some kind of multiple overlap, the large majority of them (approximately 70%) did report some metric of inter-annotator agreement, as table TABREF29 indicates. <<</Multiple annotator overlap and reporting inter-annotator agreement>>> <<<Reported crowdworker compensation>>> Crowdworking is often used because of the low cost, which can be far below minimum wage in certain countries. Researchers and crowdworkers have been organizing around issues related to the exploitation of crowdworkers in research, advocating ethical practices including fair pay BIBREF54. We examined all papers involving crowdworkers for any indication of compensation, and found that none mentioned compensation. We did find that some papers using other sources of human annotation (e.g. students) discussed compensation for annotators, but this was not in our original schema. <<</Reported crowdworker compensation>>> <<<Link to dataset available>>> Our final question was about whether the paper contained a link to the original human annotated training dataset. Note that this question was only answered for papers involving some kind of original or novel human annotation, and papers that were exclusively re-using an existing open or public dataset were left blank to avoid double-counting. We did not follow such links or verify that such data was actually available. As table TABREF32 shows, the overwhelming majority of papers did not include such a link, with 8 papers (10.81%) that used original human-annotated training datasets linking to such data. Given the time, labor, expertise, and funding involved in creating original human annotated datasets, authors may be hesitant to release such data until they feel they have published as many papers as they can. <<</Link to dataset available>>> <<</Findings>>> <<<Paper information scores>>> The raw and normalized information scores (see section SECREF10 for methodology) were calculated for all papers that involved original human annotation. As previously discussed, our corpora represent a likely non-representative sample of ML research, even if bounded to social computing. Our relatively small sample sizes combined with the number of multiple comparisons would mean that thresholds for statistical significance would need to be quite high. Instead, we present these results to help provide an initial framework and limited results on this issue, intended to help inform a broader and more systematic evaluation of the ML literature. We do observe quite varying ranges and distributions of information scores, which does give evidence to the claim that there is substantial and wide variation in the practices around human annotation, training data curation, and research documentation.
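The summary statistics and score distributions reported in the next subsection can be reproduced from a table of per-paper scores. Below is a minimal sketch using pandas and Matplotlib (two of the libraries listed in the appendix), assuming a hypothetical `scores` DataFrame with `raw_score` and `normalized_score` columns; this is illustrative and not our exact analysis notebooks.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical per-paper scores; in practice these would be loaded from the
# released dataset of papers that involved original human annotation.
scores = pd.DataFrame({
    "raw_score": [1, 2, 2, 5, 5, 6, 3, 0, 7],
    "normalized_score": [0.17, 0.25, 0.22, 0.71, 0.63, 0.75, 0.43, 0.0, 0.78],
})

# Mean, median, and standard deviation for both scores.
print(scores.agg(["mean", "median", "std"]))

# Side-by-side histograms of raw and normalized information scores.
fig, (ax_raw, ax_norm) = plt.subplots(1, 2, figsize=(10, 4))
ax_raw.hist(scores["raw_score"], bins=range(0, 11))
ax_raw.set_xlabel("Raw information score")
ax_norm.hist(scores["normalized_score"], bins=10, range=(0, 1))
ax_norm.set_xlabel("Normalized information score")
plt.tight_layout()
plt.show()
```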
<<<Overall distributions of information scores>>> Figure FIGREF34 shows histograms for raw and normalized information scores, which both suggest a bimodal distribution, with fewer papers at both extremes and around the median. This suggests that there are roughly two populations of researchers, with one centered around raw scores of 1-2 and normalized scores of 0.25, and one centered around raw scores of 5 and normalized scores of 0.7. The normalized information score ranged from 0 to 1, with 6 papers having a normalized score of 0 and only 1 paper with a score of 1. The raw information score ranged from 0 to 7, with no paper receiving a full score of 8 or 9, which would have required a study involving crowdworkers, multiple overlap, and open datasets. Overall, the mean normalized information score was 0.441, with a median of 0.429 and a standard deviation of 0.261. The mean raw score was 3.15, with a median of 3.0 and a standard deviation of 2.05. <<</Overall distributions of information scores>>> <<<Information scores by corpus and publication type>>> Figure FIGREF37 shows two boxplots of normalized information scores that are based on different intersecting categories of publication type and status. The left figure compares scores in four categories: all papers in the Scopus sample (non-ArXived), ArXiv preprints that were never (or are not yet) published, ArXiv postprints of traditional publications, and ArXiv preprints of traditional publications. The category with the lowest median score is papers from the Scopus sample, which is followed closely by ArXiv preprints never published, although preprints never published had a much larger IQR and standard deviation. Postprints of publications had a similar IQR and standard deviation to preprints never published, but a much higher median score. Preprints of publications had a similar median score to postprints, but with a much smaller IQR and standard deviation. The right-hand figure plots publication types for the combined corpora. Conference proceedings and ArXiv preprints never published have somewhat similar medians and IQRs, with journal articles having a higher median of 0.5 and a much narrower IQR. While we hesitate to draw generalizable conclusions, we see these findings as indicating that a wide range of factors is potentially at play. <<</Information scores by corpus and publication type>>> <<<Information scores by publisher>>> Figure FIGREF39 shows boxplots for normalized information scores by publisher, split between papers sampled from ArXiv and Scopus. The boxplots are ordered by the median score per publisher. Among papers in the ArXiv corpus, those that were pre- or post-prints of papers published by the professional societies Association for Computing Machinery (ACM) or Association for Computational Linguistics (ACL) tied for the highest median scores of 0.667, with similar IQRs. These were followed by Springer and Elsevier, with respective medians of 0.625 and 0.603 and narrower IQRs. ArXiv preprints not published elsewhere had a median score of 0.381 and the highest IQR and standard deviation (0.289), suggesting that this category represents a wide range of papers. The publishers at the lower end of the scale included AAAI, with a median of 0.444 and a narrower IQR, and IEEE, with a median of 0.226 and the second-highest IQR and standard deviation (0.327). Curiously, papers from the Scopus corpus show different results per publisher, with the median scores of all publishers lower in the Scopus corpus than in the ArXiv corpus.
Given the small number of papers in the Scopus sample, we hesitate to draw general conclusions, but suspect this indicates differences between academic authors in general and those who post postprints to ArXiv. <<</Information scores by publisher>>> <<</Paper information scores>>> <<<Concluding discussion>>> <<<Implications>>> Based on our findings and experiences in this project, we believe human annotation should be considered a core aspect of the research process, with as much attention, care, and concern placed on the annotation process as is currently placed on performance-based metrics like F1 scores. Our findings — while preliminary, descriptive, and limited in scope — tell us that there is much room for improvement. This paper also makes steps towards more large-scale and systematic analyses of the research landscape, as well as towards standards and best practices for researchers and reviewers. Institutions like journals, funders, and disciplinary societies have a major role to play in solutions to these issues. Most publications have strict length maximums, and many papers we scored highly spent a page or more describing their process. Reviewer expectations are crucial in any discussion of the reporting of methodological details in research publications. It could be that some authors did include such details, but were asked to take them out and add other material instead. Authors have incentives to be less open about the messiness inherent in research, as this may open them up to additional criticism. We see many parallels here to issues around reproducibility and open science, which are increasingly being tackled by universal requirements from journals and funders, rather than relying on individuals to change norms. Such research guidelines are common, including the COREQ standard for qualitative data analysis reporting BIBREF55, which is required by some journals. A number of proposed standards have been created around datasets for ML BIBREF48, BIBREF49, BIBREF30, BIBREF50, BIBREF51, BIBREF52, BIBREF53, which are often framed as potential ways to mitigate bias and improve transparency and accountability. Several of these are broader proposals around reporting information about ML classifiers and models, which include various aspects beyond our study. In fact, given the recent explosion of proposals for structured disclosure or transparency documents around ML, the Partnership on AI has created the “ABOUT ML” working group to arrive at a common format or standard BIBREF56. From our perspective, it is important to frame this issue as one of research validity and integrity: what kind of information about training data is needed for researchers, reviewers, and readers to have confidence in the model or classifier? As we observed in our discussions, we became skeptical about papers that did not adequately describe their human annotation processes. However, human annotation is a broad and diverse category of analytical activity, encompassing a wide range of structured human judgment brought to bear on items, some far more straightforward and some far more complex than others. We saw a wide range of papers engaged in various forms of annotation or labeling, even though we bounded our study to papers using data from Twitter. One important distinguishing factor is the difficulty of the task and the level of specific knowledge needed to complete it, which can vary significantly. Another key distinction may be between when there is expected to be only one “right” answer and when there might be many valid answers.
Most importantly, we would not want a straightforward checklist to overdetermine issues of model integrity. A number of papers we read were missing details that we thought were crucial for understanding that particular study, but that would not make sense to require of the majority of papers we examined. If a checklist were created, it should not be seen as an end in itself. The classic principle of scientific replicability could be a useful heuristic: does the paper provide enough information about the labeling process such that any reader could (with sufficient resources and access to the same kind of human annotators) conduct a substantively identical human annotation process on their own? We also see a role for technical solutions to help scaffold adherence to these best practices. For example, major qualitative data analysis platforms like MAXQDA or NVivo have built-in support for inter-annotator agreement metrics. Several crowdsourcing and citizen science platforms for data labeling are built to support reconciliation for disagreements. Automated workflow, pipeline, and provenance tracking are growing topics in ML, although such tools tend to focus more on model building and tuning, taking data as given. We recommend such projects include human annotation as a first-class element, with customization as needed. Finally, our own experience in this human annotation project studying human annotation projects has shown us the costs and benefits of taking an intensive, detailed, collaborative, and multi-stage approach to human annotation. On the one hand, we believe that after going through such a long process, we have not only better data, but also a much better contextual understanding of our object of study. Yet on the other hand, even though struggling over the labels and labeling process is an opportunity, our time- and labor-intensive process did have a direct tradeoff with the number of items we were able to annotate. These issues and tradeoffs are important for ML researchers to discuss when designing their own projects and evaluating others. <<</Implications>>> <<<Limitations and future work>>> Our study has limitations, as we only examined a sample of publications in the ML application space. First, we only examined papers performing a classification task on tweets, which is likely not a representative sample of ML application publications. We would expect to find different results in different domain application areas. Papers in medicine and health may have substantially different practices around reporting training data, due to strict reporting standards in clinical trials and related areas. We also generally examined papers posted on ArXiv (in addition to 30 papers sampled from Scopus), and ArXiv is likely not a representative sample of academic publications. ArXiv papers are self-submitted and represent a range of publication stages, from drafts not submitted for review to preprints under peer review and postprints that have passed peer review. Future work should examine different kinds of stratified random samples to examine differences between various publishers, publication types, disciplines, topics, and other factors. Our study only examined a subset of the kinds of issues that scholars and practitioners in ML are examining when they call for greater transparency and accountability through documentation of datasets and models. We did not record what the exact rates of inter-annotator agreement were.
In particular, we did not record information about the reconciliation or adjudication process for projects which involve multiple overlap (e.g. majority rule, talking to consensus), which we have personally found to be a crucial and difficult process. Other questions we considered but did not include were: the demographics of the labelers, the number of labelers (total and per item), compensation beyond crowdworkers, whether instructions or screenshot of the labeling interface was included, and whether labelers had the option to choose “unsure” (vs. being forced to choose a label). We leave this for future work, but also found that each additional question made it more difficult for labelers. We also considered but did not have our team give a holistic score indicating their confidence in the paper (e.g. a 1-5 score, like those used in some peer reviewing processes). Our study also has limitations that any human annotation project has, and we gained much empathy around the difficulties of human annotation. Our process is not perfect, and as we have analyzed our data, we have identified cases that make us want to change our schema even further or reclassify boundary cases. In future work, we would also recommend using a more structured and constrained system for annotation to capture the text that annotators use to justify their answers to various questions. ML papers are very long and complex, such that our reconciliation and adjudication process was very time-consuming. Finally, we only have access to what the publications say about the work they did, and not the work itself. Future work could improve on this through other methods, such as ethnographic studies of ML practitioners. <<</Limitations and future work>>> <<</Concluding discussion>>> <<<Appendix>>> The appendix appears following the references section. This work was funded in part by the Gordon & Betty Moore Foundation (Grant GBMF3834) and Alfred P. Sloan Foundation (Grant 2013-10-27), as part of the Moore-Sloan Data Science Environments grant to UC-Berkeley. This work was also supported by UC-Berkeley's Undergraduate Research Apprenticeship Program (URAP). We thank many members of UC-Berkeley's Algorithmic Fairness & Opacity Group (AFOG) for providing invaluable feedback on this project. <<<Dataset/corpus details>>> <<<Keyword labels>>> To capture the topical and disciplinary diversity of papers in our corpus, we assigned one or more keyword labels to each paper, intended to capture topical, domain, disciplinary, and methodological qualities about the study. A paper seeking to classify tweets for spam and phishing in Turkish might include the labels: spam detection; phishing detection; cybersecurity; non-English. A study seeking to classify whether users are tweeting in support or opposition of a protest might have the keywords: user profiling; political science; protests; stance detection; public opinion. As part of the annotation and labeling process, all five annotators gave each paper a short description of what was being classified or predicted. The project lead aggregated these independent descriptions and additionally examined the paper title, abstract, and text. The project lead — who has extensive knowledge and experience of the various disciplines in the social computing space — then conducted a two-stage thematic coding process. A first pass involved open (or free-form) coding for all papers, with the goal of creating a typology of keywords. 
The list of keywords was then refined and consolidated, and a second pass was conducted on all of the items to re-label them as appropriate. Papers could have multiple keywords. The distribution is plotted in Figure FIGREF46, which is broken out by papers that were using original human annotation (e.g. a new labeled training dataset) versus either theoretical papers or papers exclusively re-using a public or external dataset (see section SECREF16). This shows that the most common keywords were user profiling (a broader keyword that includes demographic prediction and classification of users into various categories), public opinion (a broader keyword that includes using Twitter to obtain beliefs or opinions, typically about political or cultural topics), and then two NLP methodologies of sentiment analysis and topic identification. The keyword “social networks” was used for any paper that either made substantive use of the network structure (e.g. follower graphs) as a feature, or tried to predict it. This figure also shows that our corpus includes papers from a wide range of fields and sub-fields across disciplines, including a number of papers on cybersecurity (including bot/human detection, phishing detection, and spam detection), public health and epidemiology, hate speech and content moderation, human geography, computer vision, political science, and crisis informatics. Papers using non-English languages were also represented in our corpus. <<</Keyword labels>>> <<<Distribution of paper types in the corpus>>> For each of our 164 papers, we needed to determine various bibliometric factors. For papers in the ArXiv sample, the most important of these is whether the file uploaded to ArXiv is a version of a paper published in a more traditional venue, and if so, whether the ArXiv version is a pre-print submitted prior to peer-review (and has different content than the published version) or if it is a post-print that is identical in content to the published version. Many authors upload a paper to ArXiv when they submit it to a journal, others upload the accepted manuscript that has passed peer-review but has not been formatted and typeset by the publisher, and others upload the exact “camera-ready” version published by the publishers. ArXiv also lets authors upload new versions; some update the paper at each of these stages as they progress through the publishing process, others only upload a final version, and some only upload the pre-review version and never update it to the published version. To make this determination, the project lead first manually searched for the exact text of the title in Google Scholar, which consolidates multiple versions of papers with the same title. Papers that only had versions in ArXiv, ArXiv mirrors (such as adsabs), other e-print repositories like ResearchGate, personal websites, or institutional repositories were labeled as “Preprint never published.” For papers that also appeared in any kind of publication venue or publishing library (such as the ACM, IEEE, AAAI, or ACL digital libraries), the project lead recorded the publication venue and publisher, then downloaded the published version. In some workshops and smaller conferences, the “publisher” was a single website just for the event, which lacked ISSNs or DOIs. These were considered to be published as conference or workshop proceedings, if there was a public list of all the papers presented at the event with links to all of the papers.
There was only one case in which there were two or more publications with the exact same title by the same authors, which involved a 2-page archived extended abstract for a poster in an earlier conference proceeding and a full paper in a later conference proceeding. For this case, we chose the full paper in the later venue. The project lead then compared the version uploaded to ArXiv with the published version. As this was done after the labeling process, for papers where the author uploaded multiple versions to ArXiv, we took care to examine the version our labelers examined. If there were any differences in substantive content, the paper was labeled as “Preprint of” and then an appropriate description of the venue, such as “refereed conference proceeding” or “refereed journal article.” If there were no differences in the substantive content of the paper, the paper was labeled as “Postprint of” and then the venue description. Changes in reference style or ordering, page layout, typesetting, the size or color of figures, or moving the same text between footnotes and inline parentheticals were not considered to be substantive content changes. However, even a single character typo fix to the main body text, a single added or removed reference, or a change to a figure's caption constituted a substantive content change. Table TABREF48 shows the distribution of paper types. Because there was only one dissertation in the sample, which also was not using original human annotation, we excluded this category from the aggregate analyses by paper type shown in the results section. <<</Distribution of paper types in the corpus>>> <<<Distribution of publishers in corpus>>> For each paper in the Scopus samples and each paper in the ArXiv corpus that was a pre-print or post-print of a published paper, we also collected information about the journal and publisher. There were 80 different journals, conference proceedings, or workshops represented, with the top venues being the proceedings of SocInfo with 6 papers and the proceedings of ASONAM (Advances in Social Network Analysis and Mining) with 4 papers. Six venues had 3 publications each, which were all conference proceedings: AAAI ICWSM, ELRA LREC, ACM CIKM, ACM WWW, and IEEE Big Data. The distribution of publishers is presented in table TABREF49, which is broken out by papers in the ArXiv and Scopus corpus. The distribution of papers by years is shown in table TABREF49. <<</Distribution of publishers in corpus>>> <<</Dataset/corpus details>>> <<<Methods and analysis details>>> <<<Inter-annotator agreement>>> In the first round, 5 annotators examined each paper independently, then met to discuss papers with disagreement. Table TABREF53 shows for each question, what percent of items were given the same label by all annotators (with number of annotators being recoded for the presence or absence of any information). Cases where no annotator answered the question because it was not relevant (e.g. crowdworker compensation for non-crowdworker projects) were not included in such a calculation, which would have increased such rates even more, but this would be somewhat disingenuous. We report percent complete agreement among all raters for each question; for each item, what percent were given the same rating by all raters? We believe this is a more appropriate and straightforward metric for our project. This is due to the fact that our data does not necessarily meet the particular assumptions of other widely used two statistical estimators for 3+ raters. 
Fleiss's kappa and Krippendorf's alpha are widely used because they take into account the possibilities that raters made decisions based on random chance. However, this requires assuming a uniform prior possibility of such a random distribution, which generally only applies if each possible response by raters is equally likely BIBREF64, BIBREF61. This is the case in balanced datasets, but we observed widely skewed distributions. The rates of proportional agreement were not high enough in the first round for us to be confident, which is likely due to a variety of factors. First, in contrast to most of the papers we examined, our project involved annotators answering 13 different questions for each item, which adds significant complexity to the process. Second, machine learning publications are also some of the more difficult pieces of content to make determinations around, as the definitions and boundaries of various concepts are often relatively undefined and contested across the many academic disciplines. In particular, our lowest rate for the second round was in the external human annotation question, which was added between the first and second round, and appears to still have some ambiguity. We observed substantial increases in agreement between round one and two, although this also is likely confounded by the fact that all five annotators reviewed every item in round one, but only two or three reviewed every item in round two. We should note that as our approach was a human annotation research project studying human annotation research projects, this has given us much empathy for how difficult such a task is. We also acknowledge that our project involves the same kind of “black boxing” we discussed in the literature review, in which a messy process of multiple rounds of human annotations is reduced to a gold standard. However, we do believe in being open about our process, and our data for both rounds of annotation and the final dataset will be available upon publication. The overall question for any study involving structured human annotation is whether the entire annotation, integration, review, and reconciliation process ultimately results in high confidence for the final dataset. The standard approach of human annotation checked by inter-rater reliability treats individual humans as instruments that turn phenomena in the world into structured data. If there is a high degree of inter-rater reliability, then each individual human can generally be trusted to make the same determination. If this is the case, then either reconciliation can easily take place through a majority vote process involving no discussion, or if rates are quite high, then only a subset of items need to be reviewed multiple times. In contrast, what our first round of inter-rater reliability metrics told us was that we were not the same kinds of standardized instruments that turn the same inputs into the same outputs. This does not bode well if we were conducting a single-stage mechanical majority-rule reconciliation process, and certainly would be unwise if we only had a single individual annotate each paper. For such a reason, we did not rely on such easier processes of reconciliation and demanded all papers be annotated by multiple individuals and discussed in a group setting moderated by the lead research scientist. 
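As a concrete illustration of the percent complete agreement metric discussed in this subsection, the sketch below shows how it can be computed from a label matrix with one row per paper and one column per annotator for a given question. The DataFrame, column names, and example labels are hypothetical; this is not our exact analysis code.

```python
import pandas as pd

def percent_total_agreement(labels: pd.DataFrame) -> float:
    """Proportion of items for which every annotator who answered gave the same label.

    `labels` has one row per item and one column per annotator; NaN cells
    (question skipped or not applicable) are ignored, and items no one
    answered are excluded from the denominator.
    """
    answered = labels.dropna(how="all")  # skip items with no answers at all
    agree = answered.apply(lambda row: row.dropna().nunique() == 1, axis=1)
    return agree.mean()

# Example with three annotators labeling five papers on one question:
example = pd.DataFrame({
    "annotator_1": ["yes", "yes", "no", "unsure", "yes"],
    "annotator_2": ["yes", "no", "no", "unsure", "yes"],
    "annotator_3": ["yes", "no", "no", "unsure", None],
})
print(percent_total_agreement(example))  # 0.8, i.e. 80% complete agreement
```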
Furthermore, because our approach was largely focused on identifying the presence of various kinds of information within long-form publications, this is a different kind of human judgment than is involved in common tasks using human annotators in social computing, such as social media content moderation, sentiment analysis, or image labeling. In such tasks, annotated items are typically much smaller and tend to be evaluated holistically, with disagreements arising from annotators who looked at the same information and made different determinations. In contrast, we reflected that in our reconciliation process, most of the time when annotators disagreed, it was because some annotators had caught a piece of information in the paper that others had not seen. A common occurrence was that one of the annotators would point out a particular paragraph, the other annotators who had initially disagreed would read it, and they would then remark that they had missed that part and would like to change their answer. That said, there were cases wherein annotators were reading the same sections of the paper and still arriving at different answers, which was often either 1) because the paper was giving ambiguous, incomplete, or implicit information, or 2) because there was a fundamental difference in interpreting the coding schema, which required updating the schema or the examples in it. For such reasons, we are relatively confident that if, after our two rounds of annotation and the reconciliation process, no individual member of our team has identified the presence of such information, then it is quite likely it is not present in the paper. <<</Inter-annotator agreement>>> <<<Changes to the coding schema>>> Unlike in some approaches to structured content analysis, the coding schema was open to revision if needed during this first round. Some difficult edge cases led to the refinement of the schema approximately half-way through this round of the labeling. The schema was developed on a web-based word processing platform and included examples of difficult edge cases, which were added as they were identified in team meetings. The document detailed each question, a formal definition or explanation of the question, the list of possible permitted labels, and various examples that illustrated difficult or edge cases. The coding schema was modified only in cases where backward compatibility could be maintained with prior labeling work. This typically involved taking a question which had many granular possible labels and consolidating the possible labels into a smaller number of broader labels. For example, the question about whether instructions were given to human annotators originally involved specifying whether the instructions included a formal definition, examples, or both. This was revised to only specify “instructions with formal definition or examples.” Similarly, training for human annotators originally included a more granular list of possible training circumstances, plus “no information”, “other”, and “unsure”. Because of the difficulty of gaining consensus on these different forms of training and the relatively small number of papers that gave any details whatsoever about annotator training (as well as no papers that explicitly stated no training had occurred), these were reduced to “some training details”, “no information”, and “unsure” (see Table TABREF55). In addition, three questions were added halfway through the first round of the annotation process.
First, a question was added about whether the paper used an external human-annotated dataset or not, which was added to clarify the question about whether original human annotation was used. This was added after a paper was discussed where an external human-annotated dataset was combined with an original human-annotated dataset. Two other questions were added about whether the paper contains a link to the training dataset and whether details about crowdworker compensation were included for projects using crowdworkers. These were both relatively straightforward questions, with relatively few incidences across our dataset. All papers had all questions answered in the second round. <<</Changes to the coding schema>>> <<</Methods and analysis details>>> <<<Software used>>> All computational analysis and scripting was conducted in Python 3.7 BIBREF66, using the following libraries: Pandas dataframes BIBREF60 for data parsing and transformation; SciPy BIBREF58 and NumPy BIBREF65 for quantitative computations; and Matplotlib BIBREF57 and Seaborn BIBREF67 for visualization. Analysis was conducted in Jupyter Notebooks BIBREF59 using the IPython BIBREF62 kernels. Datasets and Jupyter Notebooks for data collection and analysis will be made available upon publication, which are made to run on Binder BIBREF63. <<</Software used>>> <<<Coding schema, examples, and instructions>>> A final version of our coding schema and instructions is below: 1. Original classification task: Is the paper presenting its own original classifier that is trying to predict something? “Original” means a new classifier they made based on new or old data, not anything about the novelty or innovation in the problem area. Machine learning involves any process that does not have explicit or formal rules, where performance increases with more data. Classification involves predicting cases on a defined set of categories. Prediction is required, but not enough. Linear regressions might be included if the regression is used to make a classification, but making predictions for a linear variable is not. Predicting income or age brackets is classification, predicting raw income or age is not. Example: analyzing statistics about the kinds of words people use on social media is not a classification task at all. Example: predicting location is a classification task if it is from work, school, home, or other, but not if it is an infinite/undefined number of locations. Example: This paper (https://ieeexplore.ieee.org/document/7937783) was framed as not an original classification task (more algorithm performance), but they did create an original classifier. This can also be an “unsure” – which is 100% OK to answer. Example: Literature review papers that include classification papers aren't in this, if they didn't actually build a classifier. Example: if there is a supervised classification task that is part of a broader process, this counts, focus on that. If no, skip the following questions. 2. Classification outcome: What is the general type of problem or outcome that the classifier is trying to predict? Keep it short if possible. For example: sentiment, gender, human/bot, hate speech, political affiliation. 3. Labels from human annotation: Is the classifier at least in part trained on labeled data that humans made for the purpose of the classification problem? This includes re-using existing data from human judgments, if it was for the same purpose as the classifier. This does not include clever re-using of metadata. 
Do a quick CTRL-F for “manual” and “annot” if you don't see anything, just to be sure. If not, skip the following questions about human annotation. Example: ISideWith paper on political stances was labels from human annotation, just not original. They took the labels from elsewhere and filled in the gaps (more on that in next Q). Example: Buying followers and seeing who follows (1411.4299.pdf) is not human annotation. Example: Generating (smart) simulated datasets from metadata is not human annotation. Example: 1612.08207.pdf is not annotation when looking up political affiliation of politicians from an external database, even though it is manual work. No judgment is involved. Example: 1709.01895.pdf is labels from human annotation, even though it is semi-automated. They identified hashtags that they believe universally correspond to certain political stances. There is a form of human judgment here, although in that paper, they don't define or explain it. Example: Evaluation using human annotation is not annotation for ML, if the annotation wasn't used to make the classifier. (1710.07394.pdf) Example: If they are using human annotation just to have confidence that a machine-annotated dataset is as good as a human annotated one, but the human annotated dataset isn't actually used to train the classifier, it is *not* using human annotation for ML. (1605.05195.pdf) 4. Used original human annotation: Did the project involve creating new human-labeled data, or was it exclusively re-using an existing dataset? Yes No Unsure Papers may have a mix of new and old human labeled data, or new human labeled data and non-human labeled data. If there is any new human annotation, say yes. New human annotation must be systematic, not filling in the gaps of another dataset. Example: ISideWith paper on political stances is *not* original human annotation, even though they did some manual original research to fill the gap. If the methods section is too vague to not tell, then leave as unsure (example: 1801.06294.pdf) 4.5. Used external human annotation data: Did the project use an already existing dataset from human labeled data? Yes No Unsure If they are using external human annotated data, skip the remaining questions: 5. Original human annotation source: Who were the human annotators? Drop-down options are: Amazon Mechanical Turk (AMT, Turkers) Any other crowdworking platform (Crowdflower / Figure8) The paper's authors Academic experts / professionals in the area No information in the paper Other Unsure For academic experts or professionals in the area, this is independent from the kinds of specific training they received for the task at hand. Think of “the area” broadly, so if it is something about healthcare and nurses were recruited, that would be professionals in the area, even if they don't say anything about the nurses having specific training in the annotation task at hand. If it doesn't easily fit into these or uses multiple sources, add them in the next column. Example: “We develop a mechanism to help three volunteers analyze each collected user manually” -- put other, if that is all they say Example: If it just says “we annotated...” then assume it is only the paper's authors unless otherwise stated. 6. Number of human annotators: Put the number if stated, if not, leave blank. 7. Training for human annotators: Did the annotators receive interactive training for this specific annotation task / research project? Training involves some kind of interactive feedback. 
Simply being given formal instructions or guidelines is not training. Prior professional expertise is not training. Options include: Some kind of training is mentioned No information in the paper Unsure Example: It is not considered training if there was prescreening, unless they were told what they got right and wrong or other debriefing. Not training if they just gave people with high accuracy more work. Example: This paper had a minimum acceptable statement for some training information, with only these lines: “The labeling was done by four volunteers, who were carefully instructed on the definitions in Section 3. The volunteers agree on more than 90% of the labels, and any labeling differences in the remaining accounts are resolved by consensus.” 8. Formal instructions/guidelines: What documents were the annotators given to help them? This document you are in right now is an example of formal instructions with definitions and examples. No instructions beyond question text Instructions include formal definition or examples No information in paper (or not enough to decide) Unsure Example of a paper showing examples: “we asked crowdsourcing workers to assign the `relevant' label if the tweet conveys/reports information useful for crisis response such as a report of injured or dead people, some kind of infrastructure damage, urgent needs of affected people, donations requests or offers, otherwise assign the `non-relevant' label” 9. Prescreening for crowdwork platforms Leave blank if this is not applicable. No prescreening (must state this) Previous platform performance qualification (e.g. AMT Master) Generic skills-based qualification (e.g. AMT Premium) Location qualification Project-specific prescreening: researchers had known ground truth and only invited No information Unsure 10. Multiple annotator overlap: Did the annotators label at least some of the same items? Yes, for all items Yes, for some items No Unsure No information If it says there was overlap but not info to say all or some, put unsure. 11. Reported inter-annotator agreement: Leave blank if there was no overlap. Is a metric of inter-annotator agreement or intercoder reliability reported? It may be called Krippendorf's alpha, Cohen's kappa, F1 score, or other things. Yes No Unsure 12. Reported crowdworker compensation: If using crowdworkers to annotate, did they say how much the annotators were paid for their work? Leave blank if crowdworkers were not used. Yes No Unsure 13. Link to dataset available: Is there a link in the paper to the dataset they used? Yes No Unsure <<</Coding schema, examples, and instructions>>> <<</Appendix>>> <<</Title>>>
{ "references": [ " only 1 in 9 qualitative papers in Human-Computer Interaction reported inter-rater reliability metrics,low-effort responses from crowdworkers" ], "type": "extractive" }
1912.08320
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: How were the machine learning papers from ArXiv sampled? Context: <<<Title>>> Garbage In, Garbage Out? Do Machine Learning Application Papers in Social Computing Report Where Human-Labeled Training Data Comes From? <<<Abstract>>> Many machine learning projects for new application areas involve teams of humans who label data for a particular purpose, from hiring crowdworkers to the paper's authors labeling the data themselves. Such a task is quite similar to (or a form of) structured content analysis, which is a longstanding methodology in the social sciences and humanities, with many established best practices. In this paper, we investigate to what extent a sample of machine learning application papers in social computing --- specifically papers from ArXiv and traditional publications performing an ML classification task on Twitter data --- give specific details about whether such best practices were followed. Our team conducted multiple rounds of structured content analysis of each paper, making determinations such as: Does the paper report who the labelers were, what their qualifications were, whether they independently labeled the same items, whether inter-rater reliability metrics were disclosed, what level of training and/or instructions were given to labelers, whether compensation for crowdworkers is disclosed, and if the training data is publicly available. We find a wide divergence in whether such practices were followed and documented. Much of machine learning research and education focuses on what is done once a "gold standard" of training data is available, but we discuss issues around the equally-important aspect of whether such data is reliable in the first place. <<</Abstract>>> <<<Introduction>>> Machine learning (ML) has become widely used in many academic fields, as well as across the private and public sector. Supervised machine learning is particularly prevalent, in which training data is collected for a set of entities with known properties (a “ground truth” or “gold standard”), which is used to create a classifier that will make predictions about new entities of the same type. Supervised ML requires high-quality training data to produce high-quality classifiers. “Garbage In, Garbage Out” is a longstanding aphorism in computing about how flawed input data or instructions will produce flawed outputs. BIBREF0, BIBREF1 However, contemporary ML research and education tends to focus less on obtaining and validating such a training dataset, with such considerations often passed over in major textbooks BIBREF2, BIBREF3, BIBREF4. The predominant focus is typically on what is done with the training data to produce a classifier, with heavy emphasis on mathematical foundations and routine use of clean and tidy “toy” datasets. The process of creating a “gold standard” or “ground truth” dataset is routinely black-boxed. Many papers in ML venues are expected to use a standard, public training dataset, with authors comparing various performance metrics on the same dataset. While such a focus on what is done to a training dataset may be appropriate for theoretically-oriented basic research in ML, this is not the case for supervised ML applications. <<<Study overview>>> All approaches of producing a training dataset involve some form of human judgment, albeit at varying levels of granularity. 
In this paper, we investigate and discuss a wide range of issues and concerns around the curation of human-labeled or human-annotated data, in which one or more individuals make discrete assessments of items. We report from a study in which a team of six labelers systematically examined a corpus of supervised machine learning application papers in social computing, specifically those that classified tweets from Twitter for various purposes. For each paper, we recorded what the paper does or does not state about the training data used to produce the classifier presented in the paper. The bulk of the papers we examined were a sample of preprints or postprints published on ArXiV.org, plus a smaller set of published papers sampled from Scopus. We determined whether such papers involved an original classification task using supervised ML, whether the training data labels were produced from human annotation, and if so, the source of the human-labeled dataset (e.g. the paper's authors, Mechanical Turk, recruited experts, no information given, etc.). For all papers in which an original human-labeled dataset was produced, we then made a series of further determinations, including if definitions and/or examples were given to labelers, if labelers independently labeled the same items, if inter-rater reliability metrics were presented, if compensation details for crowdworkers were reported, if a public link to the dataset was available, and more. As our research project was a human-labeling project studying other human-labeling projects, we took care in our own practices. We only have access to the paper reporting about the study and not the actual study itself, and many papers either do not discuss such details at all or do so without sufficient detail to make such determinations. For example, many papers did note that the study involved the creation of an original human-labeled dataset, but did not specify who labeled it. For some of our items, one of the most common labels we gave was “no information” — which is a concerning issue, given how crucial such information is in understanding the validity of the training dataset and by extension, the validity of the classifier. <<</Study overview>>> <<</Introduction>>> <<<Literature review and motivation>>> <<<A different kind of “black-boxing” in machine learning>>> In the introduction, we noted training data is frequently black-boxed in machine learning research and applications. We use the term “black-boxed” in a different way than it is typically invoked in and beyond the FAT* community, where it often refers to interpretability. In that sense, “black-boxing” means that even for experts who have access to the training data and code which created the classifier, it is difficult to understand why the classifier made each decision. In social science and humanities work on “black-boxing” of ML (and other “algorithmic” systems), there is often much elision between issues of interpretability and intentional concealment, as Burrell BIBREF5 notes. A major focus is on public accountability BIBREF6, where many problematic issues can occur behind closed doors. This is even the case with relatively simple forms of analytics and automation — such as if-then statements, linear regressions, or rule-based expert systems BIBREF7, BIBREF8. In contrast, we are concerned with what is and is not taken for granted when developing a classifier. This use is closer to how Latour & Woolgar used it in an ethnographic study of scientific laboratories BIBREF9.
They discuss how equipment like a mass spectrometer would typically be implicitly trusted to turn samples into signals. However, when the results were drastically unexpected, it could be a problem with the machine or a fundamental breakthrough. Scientists and technicians would have to “open up the black box,” changing their relationship to the equipment to determine if the problem was with the equipment or the prevailing theory. In this view, black-boxing is a relational concept, not an objective property. It is about the orientation people have to the same social-technical systems they routinely work with and rely upon. “Opening up the black box” is not about digging into technical or internal details per se, but a gestalt shift in whether the output of a system is implicitly taken for granted or open for further investigation. In this view, black-boxing is not inherently problematic. The question is more about who gets to be skeptical about data and who is obligated to suspend disbelief, which are also raised in discussions of open science & reproducibility BIBREF10. Operationalization, measurement, and construct validity have long been crucial and contested topics in the social sciences. Within quantitative sub-fields, it is common to have extensive debates about the best way to define and measure a complex concept (e.g. “intelligence”). From a qualitative and Science & Technology Studies perspective, there is extensive work on the practices and implications of various regimes of measurement BIBREF11, BIBREF12, BIBREF13, BIBREF14. In ML, major operationalization decisions can implicitly occur in data labeling. Yet as Jacobs & Wallach note, “[i]n computer science, it is particularly rare to articulate the distinctions between constructs and their operationalizations” BIBREF15. This is concerning, because “many well-studied harms [in ML] are direct results of a mismatch between the constructs purported to be measured and their operationalizations” BIBREF15. <<</A different kind of “black-boxing” in machine learning>>> <<<Content analysis>>> Creating human-labeled training datasets for machine learning often looks like content analysis, a well-established methodology in the humanities and the social sciences (particularly literature, communication studies, and linguistics), which also has versions used in the life, ecological, and medical sciences. Content analysis has taken many forms over the past century, from more positivist methods that formally establish structural ways of evaluating content to more interpretivist methods that embrace ambiguity and multiple interpretations, such as grounded theory BIBREF16. The intersection of ML and interpretivist approaches is outside of the scope of this article, but it is an emerging area of interest BIBREF17. Today, structured content analysis (also called “closed coding”) is used to turn qualitative or unstructured data of all kinds into structured and/or quantitative data, including media texts, free-form survey responses, interview transcripts, and video recordings. Projects usually involve teams of “coders” (also called “annotators”, “labelers”, or “reviewers”), with human labor required to “code”, “annotate”, or “label” a corpus of items. (Note that we use such terms interchangeably in this paper.) In one textbook, content analysis is described as a “systematic and replicable” BIBREF18 method with several best practices: A “coding scheme” is defined, which is a set of labels, annotations, or codes that items in the corpus may have. 
Schemes include formal definitions or procedures, and often include examples, particularly for borderline cases. Next, coders are trained with the coding scheme, which typically involves interactive feedback. Training sometimes results in changes to the coding scheme, in which the first round becomes a pilot test. Then, annotators independently review at least a portion of the same items throughout the entire process, with a calculation of “inter-annotator agreement” or “inter-rater reliability.” Finally, there is a process of “reconciliation” for disagreements, which is sometimes by majority vote without discussion and other times discussion-based. Structured content analysis is a difficult, complicated, and labor-intensive process, requiring many different forms of expertise on the part of both the coders and those who manage them. Historically, teams of students have often performed such work. With the rise of crowdwork platforms like Amazon Mechanical Turk, crowdworkers are often used for content analysis tasks, which are often similar to other kinds of common crowdworking tasks. Google's reCAPTCHA BIBREF19 is a Turing test in which users perform annotation tasks to prove their humanness — which initially involved transcribing scanned phrases from books, but now involves image labeling for autonomous vehicles. There are major qualitative data analysis software tools that scaffold the content analysis process to varying degrees, such as MAXQDA or NVivo, which have support for inter-annotator agreement metrics. There have also been many new software platforms developed to support more micro-level annotation or labeling at scale, including in citizen science, linguistics, content moderation, and more general-purpose use cases BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25. For example, the Zooniverse BIBREF26 provides a common platform for citizen science projects across different domain application areas, which let volunteers make judgements about items, which are aggregated and reconciled in various ways. <<</Content analysis>>> <<<Meta-research and methods papers in linguistics and crowdsourcing>>> Our paper is also in conversation with various meta-research and standardization efforts in linguistics, crowdsourcing, and other related disciplines. Linguistics and Natural Language Processing have long struggled with issues around standardization and reliability of linguistic tagging. Linguistics researchers have long developed best practices for corpus annotation BIBREF27, including recent work about using crowdworkers BIBREF28. Annotated corpus projects often release guidelines and reflections about their process. For example, the Linguistic Data Consortium's guidelines for annotation of English-language entities (version 6.6) is 72 single-spaced pages BIBREF29. A universal problem of standardization is that there are often too many standards and not enough enforcement. As BIBREF30 notes, 33-81% of linguistics/NLP papers in various venues do not even mention the name of the language being studied (usually English). A meta-research study found only 1 in 9 qualitative papers in Human-Computer Interaction reported inter-rater reliability metrics BIBREF31. 
Another related area is meta-research and methods papers focused on identifying or preventing low-effort responses from crowdworkers — sometimes called “spam” or “random” responses, or alternatively “fraudsters” or “cheaters.” Rates of “self-agreement” are often used, determining if the same person labels the same item differently at a later stage. One paper BIBREF32 examined 17 crowdsourced datasets for sentiment analysis and found none had self-agreement rates (Krippendorff's alpha) above 0.8, with some lower than 0.5. Another paper recommends the self-agreement strategy in conjunction with asking crowdworkers to give a short explanation of their response, even if the response is never actually examined BIBREF33. One highly-cited paper BIBREF34 proposes a strategy in which crowdworkers are given some items with known labels (a gold/ground truth), and those who answer incorrectly are successively given more items with known labels, with a Bayesian approach to identifying those who are answering randomly. <<</Meta-research and methods papers in linguistics and crowdsourcing>>> <<<The data documentation movements>>> Our paper is also in conversation with two related movements in computationally-supported knowledge production that have surfaced issues around documentation. First, we see connections with the broader open science and reproducibility movements. Open science is focused on a range of strategies, including open access research publications, educational materials, software tools, datasets, and analysis code BIBREF35. The reproducibility movement is deeply linked to the open science movement, focusing on getting researchers to release everything that is necessary for others to perform the same tasks needed to get the exact same results BIBREF36, BIBREF10. This increasingly includes pushing for high standards for releasing protocols, datasets, and analysis code. As more funders and journals are requiring the release of data, the issue of good documentation for data and protocols is rising BIBREF37, BIBREF38. There are also intersecting literatures on systems for capturing information in ML data flows and supply chains BIBREF39, BIBREF40, BIBREF41, as well as supporting data cleaning BIBREF42, BIBREF43. These issues have long been discussed in the fields of library and information science, particularly in Research Data Management BIBREF44, BIBREF45, BIBREF46, BIBREF47. A major related movement is in and around the FATML field, with many recent papers proposing training data documentation in the context of ML. Various approaches, analogies, and metaphors have been taken in this area, including “datasheets for datasets” BIBREF48, “model cards” BIBREF49, “data statements” BIBREF30, “nutrition labels” BIBREF50, a “bill of materials” BIBREF51, “data labels” BIBREF52, and “supplier declarations of conformity” BIBREF53. Many go far beyond the concerns we have raised around human-labeled training data, as some are also (or primarily) concerned with documenting other forms of training data, model performance and accuracy, bias, considerations of ethics and potential impacts, and more. We discuss how our findings relate to this broader emerging area further in the concluding discussion.
<<</The data documentation movements>>> <<</Literature review and motivation>>> <<<Data and methods>>> <<<Data: machine learning papers performing classification tasks on Twitter data>>> Our goal was to find a corpus of papers that were using original human annotation or labeling to produce a new training dataset for supervised ML. We restricted our corpus to papers whose classifiers were trained on data from Twitter, for various reasons: First, we did attempt to produce a broader corpus of supervised ML application papers, but found our search queries in academic search engines would either 1) be so broad that most papers were non-applied / theoretical papers or papers re-using public pre-labeled datasets; or 2) be so narrow that they excluded many canonical papers in this area, which made us suspect that they were non-representative samples. Restricting our sample to papers using Twitter data has strategic benefits for this kind of initial study. Data from Twitter is of interest to scholars from a variety of disciplines and topical interest areas, in addition to those who have an inherent interest in Twitter as a social media site. As we detail in appendix section SECREF45, the papers represented political science, public health, NLP, sentiment analysis, cybersecurity, content moderation, hate speech, information quality, demographic profiling, and more. We drew the main corpus of ML application papers from ArXiV, one of the oldest and most established “preprint” repositories, originally created for researchers to share papers prior to peer review. Today, ArXiV is widely used to share both drafts of papers that have not (yet) passed peer review (“preprints”) and final versions of papers that have passed peer review (often called “postprints”). Users submit to any number of disciplinary categories and subcategories. Subcategory moderators perform a cursory review to catch spam, blatant hoaxes, and miscategorized papers, but do not review papers for soundness or validity. We sampled all papers published in the Computer Science subcategories of Artificial Intelligence (cs.AI), Machine Learning (cs.LG), Social and Information Networks (cs.SI), Computational Linguistics (cs.CL), Computers and Society (cs.CY), Information Retrieval (cs.IR), and Computer Vision (CS.CV), the Statistics subcategory of Machine Learning (stat.ML), and Social Physics (physics.soc-ph). We filtered for papers in which the title or abstract included at least one of the words “machine learning”, “classif*”, or “supervi*” (case insensitive). We then filtered to papers in which the title or abstract included at least “twitter” or “tweet” (case insensitive), which resulted in 494 papers. We used the same query on Elsevier's Scopus database of peer-reviewed articles, selecting 30 randomly sampled articles, which were mostly drawn from conference proceedings. One paper from the Scopus sample was corrupted, so only 29 papers were examined. ArXiV is likely not a representative sample of all ML publications. However, we chose it because ArXiV papers are widely accessible to the public, indexed in Google Scholar and other scholarly databases, and are generally considered citeable publications. The fact that many ArXiV papers are not peer-reviewed and that papers posted are not likely representative samples of ML research is worth considering when reflecting on the generalizability of our findings.
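As a rough illustration of the keyword filtering step described above, the following minimal Python sketch applies the same title/abstract criteria to a list of paper records; the field names and sample records are hypothetical, the wildcard forms “classif*” and “supervi*” are approximated here by substring matches, and the actual study ran these queries through the ArXiV and Scopus search interfaces rather than through code like this.

import re

# ML-related and Twitter-related keyword patterns from the sampling procedure,
# matched case-insensitively against the title or abstract.
ML_TERMS = re.compile(r"machine learning|classif|supervi", re.IGNORECASE)
TWITTER_TERMS = re.compile(r"twitter|tweet", re.IGNORECASE)

def matches_corpus_criteria(paper):
    """Return True if a paper's title or abstract satisfies both keyword filters."""
    text = "{} {}".format(paper.get("title", ""), paper.get("abstract", ""))
    return bool(ML_TERMS.search(text)) and bool(TWITTER_TERMS.search(text))

# Hypothetical records for illustration only.
papers = [
    {"title": "Supervised classification of tweets for stance detection", "abstract": "..."},
    {"title": "A survey of graph neural networks", "abstract": "..."},
]
corpus = [p for p in papers if matches_corpus_criteria(p)]
print(len(corpus))  # 1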
However, given that such papers are routinely discussed in both the academic literature and the popular press, issues with their reporting of training data are just as crucial. Sampling from ArXiv also lets us examine papers at various stages in the peer-review cycle, breaking out preprints not (yet) published, preprints of later published papers, and postprints of published works. The appendix details both corpora, including an analysis of the topics and fields of papers (in SECREF47), and an analysis of publication types (e.g. an early preprint of a journal article, a final postprint of a conference proceeding, a preprint never published) and publishers (in SECREF50 and SECREF47). The final dataset can be found on GitHub and Zenodo. <<</Data: machine learning papers performing classification tasks on Twitter data>>> <<<Labeling team, training, and workflow>>> Our labeling team included one research scientist who led the project (RSG) and undergraduate research assistants, who worked for course credit as part of a university-sponsored research experience program (KY, YY, MD, JQ, RT, and JH). The project began with five students for one semester, four of whom continued on the project for the second semester. A sixth student replaced the student who did not continue. All students had some coursework in computer science and/or data science, with a range of prior experience in machine learning in both a classroom and applied setting. Students' majors and minors included Electrical Engineering & Computer Science, Data Science, Statistics, and Linguistics. The labeling workflow was that each week, a set of papers was randomly sampled from the unlabeled set of 494 ArXiV papers in the corpus. For two weeks, the 30 sampled papers from Scopus were selected. The five students independently reviewed and labeled the same papers each week, using a different web-based spreadsheet to record labels. The team leader synthesized labels and identified disagreement. The team met in person each week to discuss cases of disagreement, working to build a consensus about the proper label (as opposed to purely majority vote). The team leader facilitated these discussions and had the final say when a consensus could not be reached. The papers labeled for the first two weeks were in a training period, in which the team worked on a different set of papers not included in the dataset. In these initial weeks, the team learned the coding schema and the reconciliation process, which were further refined. <<</Labeling team, training, and workflow>>> <<<Second round verification and reconciliation>>> After 164 papers were labeled by five annotators, we conducted a second round of verification. This was necessary both because there were some disagreements in labeling and because changes had been made to the coding schema (discussed in appendix SECREF54). All labels for all 164 papers were independently re-examined by at least two of the six team members. Annotators were given a summary of the original labels in the first round and were instructed to review all papers, being mindful of how the schema and instructions had changed. We then aggregated, reconciled, and verified labels in the same way as in the first round. For papers where there was no substantive disagreement on any question between those who re-examined it in the second round, the paper's labels were considered to be final.
For papers where there was any substantive disagreement on any question, the paper was either discussed to consensus in the same manner as in the first round or decided by the team leader. The final schema and instructions are in the appendix, section SECREF57. Finally, we cleaned up issues with labels around implicit or blank values using rule-based scripts. We learned our process involved some ambiguities around whether a subsequent value needed to be filled in. For example, if a paper was not using crowdworkers, then the instructions for our schema were that the question about crowdworker compensation was to remain blank. However, we found we had cases where “reported crowdworker compensation” was “no” for papers that did not use crowdworkers. This would be concerning had we had a “yes” for such a variable, but we found no such cases. We recoded questions about pre-screening for crowdwork platforms (implied by the use of crowdworkers in the original human annotation source) and the number of human annotators. We measured interrater reliability metrics using mean percent total agreement, or the proportion of cases where all labelers initially gave the same label. This is a more stringent metric than Fleiss's kappa and Krippendorff's alpha, and our data does not fit the assumptions for those widely-used metrics. IRR rates for round one were relatively low: across all questions, the mean percent total agreement was 66.67%, with the lowest question having a rate of 38.2%. IRR rates for round two were considerably higher: the mean percent total agreement across all questions was 84.80% and the lowest agreement score was 63.4% (for “used external human annotation”, which we discuss later). We are confident about our labeling process, especially because these individual ratings were followed by an expert-adjudicated discussion-based reconciliation process, rather than simply counting majority votes. We detail more information and reflection about interrater reliability in appendix section SECREF52. <<</Second round verification and reconciliation>>> <<<Raw and normalized information scores>>> We quantified the information about training data in papers, developing a raw and normalized information score, as different studies demanded different levels of information. For example, our question about whether inter-annotator agreement metrics were reported is only applicable for papers involving multiple annotators. Our questions about whether prescreening was used for crowdwork platforms or whether crowdworker compensation was reported are only relevant for projects using crowdworkers. However, some kinds of information are relevant to all papers that involve original human annotation: who the annotators are (annotation source), whether annotator training took place, whether formal instructions or definitions were given, the number of annotators involved, whether multiple annotators examined the same items, and whether a link to a publicly-available dataset was provided. For raw scores, papers involving original human annotation received one point each for reporting the six items mentioned above. In addition, if the project used crowdworkers, they received one point for each of the two crowdworker questions for which information was included, and if the project used multiple annotators per item, they received one point if they reported inter-annotator metrics. For the normalized score, the raw score was divided by the highest possible raw score. We only calculated scores for papers involving original human annotation.
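To make the scoring scheme concrete, the following minimal Python sketch computes a raw and normalized information score for a single paper record; the boolean field names are hypothetical stand-ins for the labels in our schema, not the actual variable names in our released dataset.

# The six items scored for every paper with original human annotation.
BASE_ITEMS = [
    "annotation_source_reported",
    "annotator_training_reported",
    "instructions_or_definitions_reported",
    "number_of_annotators_reported",
    "multiple_annotator_overlap_reported",
    "dataset_link_provided",
]

def information_scores(paper):
    """Return (raw score, normalized score) for one labeled paper record."""
    raw = sum(1 for item in BASE_ITEMS if paper.get(item))
    max_possible = len(BASE_ITEMS)
    # Two extra items apply only to projects that used crowdworkers.
    if paper.get("used_crowdworkers"):
        raw += int(bool(paper.get("crowdworker_prescreening_reported")))
        raw += int(bool(paper.get("crowdworker_compensation_reported")))
        max_possible += 2
    # One extra item applies only when multiple annotators labeled the same items.
    if paper.get("multiple_annotator_overlap"):
        raw += int(bool(paper.get("interannotator_metrics_reported")))
        max_possible += 1
    return raw, raw / max_possible

raw, norm = information_scores({
    "annotation_source_reported": True,
    "instructions_or_definitions_reported": True,
    "multiple_annotator_overlap": True,
    "multiple_annotator_overlap_reported": True,
    "interannotator_metrics_reported": True,
})
print(raw, round(norm, 3))  # 4 0.571

Under this scheme, the maximum possible raw score is 9, for a study involving crowdworkers, multiple annotator overlap, and a released dataset, which matches the later observation that no paper reached a raw score of 8 or 9.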
Finally, we conducted an analysis of information scores by various bibliometric factors, which required determining such factors for all papers. For all ArXiV papers, we determined whether the PDF was a pre-print not (yet) published in another venue, a post-print identical in content to a published version, or a pre-print version of a paper published elsewhere with different content. For all Scopus papers and ArXiV post-prints, we also determined the publisher. We detail these in appendix SECREF47. <<</Raw and normalized information scores>>> <<</Data and methods>>> <<<Findings>>> <<<Original classification task>>> The first question was whether the paper was conducting an original classification task using supervised machine learning. Our keyword-based process of generating the corpus included many papers not in this scope. However, defining the boundaries of supervised ML and classification tasks is difficult, particularly for papers that are long, complex, and ambiguously worded. We found that some papers claimed to be using ML, but when we examined the details, these did not fall under our definition. We defined machine learning broadly, using a common working definition in which machine learning includes any automated process that does not exclusively rely on explicit rules, in which the performance of a task increases with additional data. This includes simple linear regressions, for example, and there is much debate about if and when simple linear regressions are a form of ML. However, as we were also looking for classification tasks, linear regressions were only included if they were used to make a prediction over a set of defined classes. We defined an “original” classifier to mean a classifier the authors made based on new or old data, which excludes the exclusive use of pre-trained classifiers or models. As table TABREF13 shows, the overwhelming majority of papers in our dataset were involved in an original classification task. We placed 5 papers in the “unsure” category — meaning they did not give enough detail for us to make this determination, or that they were complex boundary cases. One of the “unsure” cases clearly used labels from human annotation, and so we answered the subsequent questions, which is why the counts in Table 2 add up to 143 (and explains some other seeming disparities in later questions). <<</Original classification task>>> <<<Labels from human annotation>>> One of the major issues we had to come to a consensus around was whether a paper used labels from human annotation. We observed a wide range of cases in which human judgment was brought to bear on the curation of training data. Our final definition required that “the classifier [was] at least in part trained on labeled data that humans made for the purpose of the classification problem.” We decided on a working definition that excluded many “clever uses of metadata” from this category, but did allow some cases of “self-annotation” from social media, which were typically the most borderline cases on the other side. For example, one case that we decided did count as human annotation used specific politically-inflected hashtags to automatically label tweets as for or against a position, for use in stance detection (e.g. #ProChoice versus #ProLife). However, these cases of self-annotation would all be considered external human annotation rather than original human annotation, and so the subsequent questions about the annotation process would not be applicable.
Another set of borderline cases involved papers where no human annotation was involved in the curation of the training dataset that was used to build the classifier, but human annotation was used for validation purposes. We did not consider these to involve human annotation as we originally defined it in our schema, even though the same issues arise with equal significance for the validity of such research. <<</Labels from human annotation>>> <<<Used original human annotation and external human annotation>>> Our next two questions were about whether papers that used human annotation used original human annotation, which we defined as a process in which the paper's authors obtained new labels from humans for items. It is common in ML research to re-use public datasets, and many of the papers in our corpus did so. We also found 10 papers in which external and original human annotation were combined to create a new training dataset. For these reasons, we modified our schema to ask separate questions for original and external human annotation data, to capture all three cases (using only original, only external, or both). Tables TABREF17 and TABREF17 show the breakdown for both questions. We only answered the subsequent questions about the human annotation process for the papers producing an original human annotated dataset. <<</Used original human annotation and external human annotation>>> <<<Original human annotation source>>> Our next question asked who the annotators were, for the 74 papers that used original human annotation. The possible options were: the paper's authors, Amazon Mechanical Turk, other crowdworking platforms, experts/professionals, other, and no information. We took phrases like “we labeled” (with no other details) to be an implicit declaration that the paper's authors did the labeling. If the paper discussed labelers' qualifications for the task beyond those of an average person, we labeled it as “experts / professionals.” For example, some of our boundary cases involved recruiting students to label sentiment. One study involved labeling tweets with both English and Hindi text and noted that the students were fluent in both languages – which we considered to be in the “experts / professionals” category. Another paper we included in this category recruited students to label tweets with emojis, noting that the recruited students “are knowledgeable with the context of use of emojis.” As table TABREF19 shows, we found a diversity of approaches to the recruitment of human annotators. The plurality of papers involved the paper's authors doing the annotation work themselves. The next highest category was “no information,” which was found in almost a quarter of the papers using original human annotation. The experts / professionals category was far higher than we expected, although we took any claim of expertise for granted. Crowdworkers constituted a far smaller proportion than we expected, with Amazon Mechanical Turk and other platforms collectively comprising about 15% of papers. Almost all of the other crowdworking platforms specified were CrowdFlower/FigureEight, with one paper using oDesk. <<</Original human annotation source>>> <<<Number of human annotators>>> Our instructions for the question about the number of human annotators were not precise and had one of the lower levels of inter-rater reliability. If the paper included information about the number of human annotators, the instructions were to put such a number, leaving the field blank for no information.
Most of the disagreement was from differences around how papers report the number of annotators used. For example, some papers specified the total number of humans who worked on the project annotating items, while others only specified how many annotators were used per item (particularly for those using crowdworkers), and a few reported both. Some involved a closed set of annotators who all examined the same set of items, similar to how our team operated. Other papers involved an open set of annotators, particularly drawn from crowdworking platforms, but had a consistent number of annotators who reviewed each item. Due to these inconsistencies, we computationally re-coded responses into the presence or absence of information about the number of human annotators. These are both important aspects to discuss, although it is arguably more important to discuss the number of annotators who reviewed each item. In general, having more annotators review each item provides a more robust way of determining the validity of the entire process, although this also requires calculating inter-annotator agreement metrics. As table TABREF21 shows, a slim majority of papers using original human annotation specified the number of annotators involved in some way. Based on our experiences, we typically noticed that papers discussing the number of annotators often fell into two categories: 1) a small closed team (more often 2-3, sometimes 4-6) that were either the papers' authors or recruited directly by the authors, who tended to perform the same amount of work for the duration of the project; or 2) a medium to large (25-500) open set of annotators, typically but not necessarily recruited through a crowdworking platform, who each performed highly variable amounts of work. <<</Number of human annotators>>> <<<Formal definitions and instructions>>> Our next question was about whether instructions or guidelines with formal definitions or examples were reportedly given to annotators. Formal definitions and concrete examples are both important, as they help annotators understand how the researchers have operationalized the concept in question and determine edge cases. With no or ambiguous definitions/examples, there could be fundamental misunderstandings that are not captured by inter-annotator agreement metrics, if all annotators share the same misunderstandings. We defined two levels: giving no instructions beyond the text of a question, and giving definitions for each label and/or concrete examples. The paper had to describe or refer to the instructions given (or include them in supplemental materials); otherwise, we categorized it as "No Information". Some borderline cases involved authors labeling the dataset themselves, where the paper presented a formal definition, but only implied that it informed the labeling – which we took to be a formal definition. As table TABREF23 shows, the plurality of papers did not provide enough information to make a determination (it is rare for authors to say they did not do something), but 43.2% provided definitions or examples. <<</Formal definitions and instructions>>> <<<Training for human annotators>>> We defined training for human annotators to involve some kind of interactive process in which the annotators have the opportunity to receive some kind of feedback and/or dialogue about the annotation process. We identified this as a distinct category from both the qualifications of the annotators and the instructions given to annotators, which are examined in other questions.
Training typically involved some kind of live session or ongoing meeting in which annotators' progress was evaluated and/or discussed, where annotators had the chance to ask questions or receive feedback on why certain determinations did or did not match definitions or a schema. We used our own team's process as an example of this, and found several papers that used a similar roundtable process, which went into detail about interactions between team members. Cases in which the paper only specified that annotators were given a video or a detailed schema to review were not considered training details, as this was a one-way process and counted as definitions/instructions. The overwhelming majority of papers did not discuss such issues, as table TABREF25 shows, with 15% of papers involving a training session. Because we had a quite strict definition for what constitutes training (versus what many may think of around “trained annotators”), this is expected. We also are not all that concerned with this low number, as there are many tasks that likely do not require specialized training — unlike our project, which required both specific expertise in an area and familiarity with our complicated schema. <<</Training for human annotators>>> <<<Pre-screening for crowdwork platforms>>> Crowdwork platforms let employers pre-screen or test for traits, skills, or performance metrics, which significantly narrows the pool of crowdworkers. For example, “project-specific pre-screening” involves offering a sample task with known outcomes: if the crowdworker passed, they would be invited to annotate more items. 5 of the 11 papers using crowdworkers reported using this approach. Platforms also often have location-based screening (e.g. US-only), which 2 papers reported using. Some crowdwork platforms have a qualification for workers who have a positive track record based on total employer ratings (e.g. AMT Master). Platforms also offer generic skills-based tests for certain kinds of work (e.g. CrowdFlower's Skill Tests). These last two qualifications were in our coding schema, but no papers reported using them. <<</Pre-screening for crowdwork platforms>>> <<<Multiple annotator overlap and reporting inter-annotator agreement>>> Our next two questions were about using multiple annotators to review the same items (multiple annotator overlap) and whether inter-annotator agreement metrics were reported. Having multiple independent annotators is typically a foundational best practice in structured content analysis, so that the integrity of the annotations and the schema can be evaluated (although see BIBREF31). For multiple annotator overlap, our definitions required that papers state whether all or some of the items were labeled by multiple labelers; otherwise, “no information” was recorded. Then, for papers that did have multiple annotator overlap, we examined whether any inter-annotator agreement metric was reported. We did find one paper that did not explicitly state that multiple labelers overlapped, but did report inter-annotator agreement metrics. This implicitly means that at least some of the items were labeled by multiple labelers, but for consistency, we kept the “no information” label for this case. We did not record what kind of inter-annotator metric was used, such as Cohen's kappa or Krippendorff's alpha, but many different metrics were used. We also did not record what the exact statistic was, although we did notice a wide variation in what was considered an acceptable or unacceptable score for inter-annotator agreement.
For multiple annotator overlap, table TABREF29 shows that just under half of all papers that involved an original human annotation task did not provide explicit information one way or the other about whether multiple annotators reviewed each item. This includes the one paper that reported inter-annotator agreement metrics, but did not specify whether overlap was for all items or some items. Only three papers explicitly stated that there was no overlap among annotators, and so it is quite likely that the papers that did not specify such information did not engage in such a practice. For the 37 papers that did involve some kind of multiple annotator overlap, the overwhelming majority of this subsample (84%) involved multiple annotation of all items, rather than only some items. We also found that for papers that did involve some kind of multiple overlap, the large majority of them (~70%) did report some metric of inter-annotator agreement, as table TABREF29 indicates. <<</Multiple annotator overlap and reporting inter-annotator agreement>>> <<<Reported crowdworker compensation>>> Crowdworking is often used because of the low cost, which can be far below minimum wage in certain countries. Researchers and crowdworkers have been organizing around issues related to the exploitation of crowdworkers in research, advocating ethical practices including fair pay BIBREF54. We examined all papers involving crowdworkers for any indication of compensation, and found that zero mentioned compensation. We did find that some papers using other sources of human annotation (e.g. students) discussed compensation for annotators, but this was not in our original schema. <<</Reported crowdworker compensation>>> <<<Link to dataset available>>> Our final question was about whether the paper contained a link to the original human annotated training dataset. Note that this question was only answered for papers involving some kind of original or novel human annotation, and papers that were exclusively re-using an existing open or public dataset were left blank to avoid double-counting. We did not follow such links or verify that such data was actually available. As table TABREF32 shows, the overwhelming majority of papers did not include such a link, with only 8 papers (10.81%) that used original human-annotated training datasets linking to such data. Given the time, labor, expertise, and funding involved in creating original human annotated datasets, authors may be hesitant to release such data until they feel they have published as many papers as they can. <<</Link to dataset available>>> <<</Findings>>> <<<Paper information scores>>> The raw and normalized information scores (see section SECREF10 for methodology) were calculated for all papers that involved original human annotation. As previously discussed, our corpora represent a likely non-representative sample of ML research, even if bounded to social computing. Our relatively small sample sizes combined with the number of multiple comparisons would mean that thresholds for statistical significance would need to be quite high. Instead, we present these results to help provide an initial framework and limited results on this issue, intended to help inform a broader and more systematic evaluation of the ML literature. We do observe quite varying ranges and distributions of information scores, which does give evidence to the claim that there is substantial and wide variation in the practices around human annotation, training data curation, and research documentation.
<<<Overall distributions of information scores>>> Figure FIGREF34 shows histograms for raw and normalized information scores, which both suggest a bimodal distribution, with fewer papers at both extremes and at the median. This suggests that there are roughly two populations of researchers, with one centered around raw scores of 1-2 and normalized scores of 0.25 and one centered around raw scores of 5 and normalized scores of 0.7. The normalized information score ranged from 0 to 1, with 6 papers having a normalized score of 0 and only 1 paper with a score of 1. The raw information score ranged from 0 to 7, with no paper receiving a full score of 8 or 9, which would have required a study involving crowdworkers, multiple overlap, and open datasets. Overall, the mean normalized information score was 0.441, with a median of 0.429 and a standard deviation of 0.261. The mean raw score was 3.15, with a median of 3.0 and a standard deviation of 2.05. <<</Overall distributions of information scores>>> <<<Information scores by corpus and publication type>>> Figure FIGREF37 shows two boxplots of normalized information scores that are based on different intersecting categories of publication type and status. The left figure compares scores in four categories: all papers in the Scopus sample (non-ArXived), ArXiv preprints that were never (or are not yet) published, ArXiv postprints of traditional publications, and ArXiv preprints of traditional publications. The category with the lowest median score is papers from the Scopus sample, followed closely by ArXiv preprints never published, although preprints never published had a much larger IQR and standard deviation. Postprints of publications had a similar IQR and standard deviation as preprints never published, but a much higher median score. Preprints of publications had a similar median score as postprints, but with a much smaller IQR and standard deviation. The right-hand figure plots publication types for the combined corpora. Conference proceedings and ArXiv preprints never published have somewhat similar medians and IQRs, with journal articles having a higher median of 0.5 and a much narrower IQR. While we hesitate to draw generalizable conclusions, we see these findings as indicating a wide range of factors potentially at play. <<</Information scores by corpus and publication type>>> <<<Information scores by publisher>>> Figure FIGREF39 shows boxplots for normalized information scores by publisher, split between papers sampled from ArXiv and Scopus. The boxplots are ordered by the median score per publisher. In papers in the ArXiv corpus, those that were pre- or post-prints of papers published by the professional societies Association for Computing Machinery (ACM) or Association for Computational Linguistics (ACL) tied for the highest median scores of 0.667, with similar IQRs. These were followed by Springer and Elsevier, with respective medians 0.625 and 0.603 and narrower IQRs. ArXiv preprints not published elsewhere had a median score of 0.381 and the highest IQR and standard deviation (0.289), suggesting that this category represents a wide range of papers. The publishers at the lower end of the scale included AAAI, with a median of 0.444 and a narrower IQR, and IEEE, with a median of 0.226 and the second-highest IQR and standard deviation (0.327). Curiously, papers from the Scopus corpus show different results per publisher, with the median scores of all publishers lower in the Scopus corpus than in the ArXiv corpus.
Given the small number of papers in the Scopus sample, we hesitate to draw general conclusions, but suspect it indicates differences between all academic authors and those who post ArXiv postprints. <<</Information scores by publisher>>> <<</Paper information scores>>> <<<Concluding discussion>>> <<<Implications>>> Based on our findings and experiences in this project, we believe human annotation should be considered a core aspect of the research process, with as much attention, care, and concern placed on the annotation process as is currently placed on performance-based metrics like F1 scores. Our findings — while preliminary, descriptive, and limited in scope — tell us that there is much room for improvement. This paper also takes steps towards more large-scale and systematic analyses of the research landscape, as well as towards standards and best practices for researchers and reviewers. Institutions like journals, funders, and disciplinary societies have a major role to play in solutions to these issues. Most publications have strict length maximums, and many papers we scored highly spent a page or more describing their process. Reviewer expectations are crucial in any discussion of the reporting of methodological details in research publications. It could be that some authors did include such details, but were asked to take them out and add other material instead. Authors have incentives to be less open about the messiness inherent in research, as this may open them up to additional criticism. We see many parallels here to issues around reproducibility and open science, which are increasingly being tackled by universal requirements from journals and funders, rather than relying on individuals to change norms. Such research guidelines are common, including the COREQ standard for qualitative data analysis reporting BIBREF55, a requirement by some journals. A number of proposed standards have been created around datasets for ML BIBREF48, BIBREF49, BIBREF30, BIBREF50, BIBREF51, BIBREF52, BIBREF53, which are often framed as potential ways to mitigate bias and improve transparency and accountability. Several of these are broader proposals around reporting information about ML classifiers and models, which include various aspects beyond our study. In fact, given the recent explosion of proposals for structured disclosure or transparency documents around ML, the Partnership on AI has recently created the “ABOUT ML” working group to arrive at a common format or standard BIBREF56. From our perspective, it is important to frame this issue as one of research validity and integrity: what kind of information about training data is needed for researchers, reviewers, and readers to have confidence in the model or classifier? As we observed in our discussions, we became skeptical about papers that did not adequately describe their human annotation processes. However, human annotation is a broad and diverse category of analytical activity, encompassing a wide range of structured human judgment brought to bear on items, some far more straightforward than others. We saw a wide range of papers that were engaged in various forms of annotation or labeling, even though we bounded our study to papers using data from Twitter. One important distinguishing factor is the difficulty of the task and the level of specific knowledge needed to complete it, which can vary significantly. Another key distinction may be between when there is expected to be only one `right' answer and when there might be many valid answers.
Most importantly, we would not want a straightforward checklist to overdetermine issues of model integrity. A number of papers we read were missing details we thought were crucial for understanding that study, but would not make sense for a majority of papers we examined. If a checklist was created, it should not be seen as an end in itself. The classic principle of scientific replicability could be a useful heuristic: does the paper provide enough information about the labeling process such that any reader could (with sufficient resources and access to the same kind of human annotators) conduct a substantively identical human annotation process on their own? We also see a role for technical solutions to help scaffold adherence to these best practices. For example, major qualitative data analysis platforms like MAXQDA or NVivo have built-in support for inter-annotator agreement metrics. Several crowdsourcing and citizen science platforms for data labeling are built to support reconciliation for disagreements. Automated workflow, pipeline, and provenance tracking is an increasing topic in ML, although these can focus more on model building and tuning, taking data as given. We recommend such projects include human annotation as a first-class element, with customization as needed. Finally, our own experience in this human annotation project studying human annotation projects has shown us the costs and benefits of taking an intensive, detailed, collaborative, and multi-stage approach to human annotation. On one side, we believe that after going through such a long process, we have not only better data, but also a much better contextual understanding of our object of study. Yet on the other hand, even though struggling over the labels and labeling process is an opportunity, our time- and labor-intensive process did have a direct tradeoff with the number of items we were able to annotate. These issues and tradeoffs are important for ML researchers to discuss when designing their own projects and evaluating others. <<</Implications>>> <<<Limitations and future work>>> Our study has limitations, as we only examined a sample of publications in the ML application space. First, we only examined papers that performing a classification task on tweets, which is likely not a representative sample of ML application publications. We would expect to find different results in different domain application areas. Papers in medicine and health may have substantially different practices around reporting training data, due to strict reporting standards in clinical trials and related areas. We also generally examined papers that are posted on ArXiV (in addition to 30 papers sampled from Scopus) and ArXiV is likely to not be a representative sample of academic publications. ArXiV papers are self-submitted and represent a range of publication stages, from drafts not submitted to review, preprints in peer review, and postprints that have passed peer review. Future work should examine different kinds of stratified random samples to examine differences between various publishers, publication types, disciplines, topics, and other factors. Our study only examined a set of the kinds of issues that scholars and practitioners in ML are examining when they call for greater transparency and accountability through documentation of datasets and models. We have not recorded information about what exactly the rates of inter-annotator agreement are. 
In particular, we did not record information about the reconciliation or adjudication process for projects which involve multiple overlap (e.g. majority rule, talking to consensus), which we have personally found to be a crucial and difficult process. Other questions we considered but did not include were: the demographics of the labelers, the number of labelers (total and per item), compensation beyond crowdworkers, whether instructions or screenshot of the labeling interface was included, and whether labelers had the option to choose “unsure” (vs. being forced to choose a label). We leave this for future work, but also found that each additional question made it more difficult for labelers. We also considered but did not have our team give a holistic score indicating their confidence in the paper (e.g. a 1-5 score, like those used in some peer reviewing processes). Our study also has limitations that any human annotation project has, and we gained much empathy around the difficulties of human annotation. Our process is not perfect, and as we have analyzed our data, we have identified cases that make us want to change our schema even further or reclassify boundary cases. In future work, we would also recommend using a more structured and constrained system for annotation to capture the text that annotators use to justify their answers to various questions. ML papers are very long and complex, such that our reconciliation and adjudication process was very time-consuming. Finally, we only have access to what the publications say about the work they did, and not the work itself. Future work could improve on this through other methods, such as ethnographic studies of ML practitioners. <<</Limitations and future work>>> <<</Concluding discussion>>> <<<Appendix>>> The appendix appears following the references section. This work was funded in part by the Gordon & Betty Moore Foundation (Grant GBMF3834) and Alfred P. Sloan Foundation (Grant 2013-10-27), as part of the Moore-Sloan Data Science Environments grant to UC-Berkeley. This work was also supported by UC-Berkeley's Undergraduate Research Apprenticeship Program (URAP). We thank many members of UC-Berkeley's Algorithmic Fairness & Opacity Group (AFOG) for providing invaluable feedback on this project. <<<Dataset/corpus details>>> <<<Keyword labels>>> To capture the topical and disciplinary diversity of papers in our corpus, we assigned one or more keyword labels to each paper, intended to capture topical, domain, disciplinary, and methodological qualities about the study. A paper seeking to classify tweets for spam and phishing in Turkish might include the labels: spam detection; phishing detection; cybersecurity; non-English. A study seeking to classify whether users are tweeting in support or opposition of a protest might have the keywords: user profiling; political science; protests; stance detection; public opinion. As part of the annotation and labeling process, all five annotators gave each paper a short description of what was being classified or predicted. The project lead aggregated these independent descriptions and additionally examined the paper title, abstract, and text. The project lead — who has extensive knowledge and experience of the various disciplines in the social computing space — then conducted a two-stage thematic coding process. A first pass involved open (or free-form) coding for all papers, with the goal of creating a typology of keywords. 
The list of keywords was then refined and consolidated, and a second pass was conducted on all of the items to re-label them as appropriate. Papers could have multiple keywords. The distribution is plotted in Figure FIGREF46, which is broken out by papers that were using original human annotation (e.g. a new labeled training dataset) versus either theoretical papers or papers exclusively re-using a public or external dataset (see section SECREF16). This shows that the most common keywords were user profiling (a broader keyword that includes demographic prediction and classification of users into various categories), public opinion (a broader keyword that includes using Twitter to obtain beliefs or opinions, typically about political or cultural topics), and then the two NLP methodologies of sentiment analysis and topic identification. The keyword "social networks" was used for any paper that either made substantive use of the network structure (e.g. follower graphs) as a feature, or tried to predict it. This figure also shows that our corpus includes papers from a wide range of fields and sub-fields across disciplines, including a number of papers on cybersecurity (including bot/human detection, phishing detection, and spam detection), public health and epidemiology, hate speech and content moderation, human geography, computer vision, political science, and crisis informatics. Papers using non-English languages were also represented in our corpus. <<</Keyword labels>>> <<<Distribution of paper types in the corpus>>> For each of our 164 papers, we needed to determine various bibliometric factors. For papers in the ArXiv sample, the most important of these is whether the file uploaded to ArXiV is a version of a paper published in a more traditional venue, and if so, whether the ArXiV version is a pre-print submitted prior to peer-review (and has different content than the published version) or if it is a post-print that is identical in content to the published version. Many authors upload a paper to ArXiv when they submit it to a journal; others upload the accepted manuscript that has passed peer-review but has not been formatted and typeset by the publisher; and others upload the exact “camera-ready” version published by the publishers. ArXiV also lets authors upload new versions; some will update each of these versions as they progress through the publishing process, others will only upload a final version, and some only upload the pre-review version and do not update the version in ArXiv to the published version. To do this, the project lead first manually searched for the exact text of the title in Google Scholar, which consolidates multiple versions of papers with the same title. Papers that only had versions in ArXiv, ArXiv mirrors (such as adsabs), other e-print repositories like ResearchGate, personal websites, or institutional repositories were labeled as “Preprint never published.” For papers that also appeared in any kind of publication venue or publishing library (such as the ACM, IEEE, AAAI, or ACL digital libraries), the project lead recorded the publication venue and publisher, then downloaded the published version. In some workshops and smaller conferences, the “publisher” was a single website just for the event, which lacked ISSNs or DOIs. These were considered to be published as conference or workshop proceedings, if there was a public list of all the papers presented at the event with links to all of the papers.
There was only one case in which there were two or more publications with the exact same title by the same authors, which involved a 2-page archived extended abstract for a poster in an earlier conference proceeding and a full paper in a later conference proceeding. For this case, we chose the full paper in the later venue. The project lead then compared the version uploaded to ArXiv with the published version. As this was done after the labeling process, for papers where the author uploaded multiple versions to ArXiv, we took care to examine the version our labelers examined. If there were any differences in substantive content, the paper was labeled as “Preprint of” and then an appropriate description of the venue, such as “refereed conference proceeding” or “refereed journal article.” If there were no differences in the substantive content of the paper, the paper was labeled as “Postprint of” and then the venue description. Changes in reference style or ordering, page layout, typesetting, the size or color of figures, or moving the same text between footnotes and inline parentheticals were not considered to be substantive content changes. However, even a single character typo fix to the main body text, a single added or removed reference, or a change to a figure's caption constituted a substantive content change. Table TABREF48 shows the distribution of paper types. Because there was only one dissertation in the sample, which also was not using original human annotation, we excluded this category from the aggregate analyses by paper type shown in the results section. <<</Distribution of paper types in the corpus>>> <<<Distribution of publishers in corpus>>> For each paper in the Scopus samples and each paper in the ArXiv corpus that was a pre-print or post-print of a published paper, we also collected information about the journal and publisher. There were 80 different journals, conference proceedings, or workshops represented, with the top venues being the proceedings of SocInfo with 6 papers and the proceedings of ASONAM (Advances in Social Network Analysis and Mining) with 4 papers. Six venues had 3 publications each, which were all conference proceedings: AAAI ICWSM, ELRA LREC, ACM CIKM, ACM WWW, and IEEE Big Data. The distribution of publishers is presented in table TABREF49, which is broken out by papers in the ArXiv and Scopus corpus. The distribution of papers by years is shown in table TABREF49. <<</Distribution of publishers in corpus>>> <<</Dataset/corpus details>>> <<<Methods and analysis details>>> <<<Inter-annotator agreement>>> In the first round, 5 annotators examined each paper independently, then met to discuss papers with disagreement. Table TABREF53 shows for each question, what percent of items were given the same label by all annotators (with number of annotators being recoded for the presence or absence of any information). Cases where no annotator answered the question because it was not relevant (e.g. crowdworker compensation for non-crowdworker projects) were not included in such a calculation, which would have increased such rates even more, but this would be somewhat disingenuous. We report percent complete agreement among all raters for each question; for each item, what percent were given the same rating by all raters? We believe this is a more appropriate and straightforward metric for our project. This is due to the fact that our data does not necessarily meet the particular assumptions of other widely used two statistical estimators for 3+ raters. 
Fleiss's kappa and Krippendorf's alpha are widely used because they take into account the possibilities that raters made decisions based on random chance. However, this requires assuming a uniform prior possibility of such a random distribution, which generally only applies if each possible response by raters is equally likely BIBREF64, BIBREF61. This is the case in balanced datasets, but we observed widely skewed distributions. The rates of proportional agreement were not high enough in the first round for us to be confident, which is likely due to a variety of factors. First, in contrast to most of the papers we examined, our project involved annotators answering 13 different questions for each item, which adds significant complexity to the process. Second, machine learning publications are also some of the more difficult pieces of content to make determinations around, as the definitions and boundaries of various concepts are often relatively undefined and contested across the many academic disciplines. In particular, our lowest rate for the second round was in the external human annotation question, which was added between the first and second round, and appears to still have some ambiguity. We observed substantial increases in agreement between round one and two, although this also is likely confounded by the fact that all five annotators reviewed every item in round one, but only two or three reviewed every item in round two. We should note that as our approach was a human annotation research project studying human annotation research projects, this has given us much empathy for how difficult such a task is. We also acknowledge that our project involves the same kind of “black boxing” we discussed in the literature review, in which a messy process of multiple rounds of human annotations is reduced to a gold standard. However, we do believe in being open about our process, and our data for both rounds of annotation and the final dataset will be available upon publication. The overall question for any study involving structured human annotation is whether the entire annotation, integration, review, and reconciliation process ultimately results in high confidence for the final dataset. The standard approach of human annotation checked by inter-rater reliability treats individual humans as instruments that turn phenomena in the world into structured data. If there is a high degree of inter-rater reliability, then each individual human can generally be trusted to make the same determination. If this is the case, then either reconciliation can easily take place through a majority vote process involving no discussion, or if rates are quite high, then only a subset of items need to be reviewed multiple times. In contrast, what our first round of inter-rater reliability metrics told us was that we were not the same kinds of standardized instruments that turn the same inputs into the same outputs. This does not bode well if we were conducting a single-stage mechanical majority-rule reconciliation process, and certainly would be unwise if we only had a single individual annotate each paper. For such a reason, we did not rely on such easier processes of reconciliation and demanded all papers be annotated by multiple individuals and discussed in a group setting moderated by the lead research scientist. 
Furthermore, because our approach was largely focused on identifying the presence of various kinds of information within long-form publications, this is a different kind of human judgment than is involved in common tasks using human annotators in social computing, such as social media content moderation, sentiment analysis, or image labeling. Typically, annotated items are much smaller and tend to be evaluated holistically, with disagreements arising from annotators who looked at the same information and made different determinations. In contrast, we reflected that in our reconciliation process, most of the time when annotators disagreed, it was because some annotators had caught a piece of information in the paper that others had not seen. There was a common occurrence wherein one of the annotators would point out a particular paragraph, the other annotators who had initially disagreed would read it, and then remark that they had missed that part and would like to change their answer. That said, there were cases wherein annotators were reading the same sections of the paper and still arriving at different answers, which was often either 1) because the paper was giving ambiguous, incomplete, or implicit information, or 2) because there was a fundamental difference in interpretation of the coding schema, which required updating the schema or the examples in it. For such reasons, we are relatively confident that if, after our two rounds of annotation and the reconciliation process, no individual member of our team has identified the presence of such information, then it is quite likely it is not present in the paper. <<</Inter-annotator agreement>>> <<<Changes to the coding schema>>> Unlike in some approaches to structured content analysis, the coding schema was open to revision if needed during this first round. Some difficult edge cases led to the refinement of the schema approximately half-way through this round of the labeling. The schema was developed on a web-based word processing platform, which also included examples of difficult edge cases, which were added as they were identified in team meetings. The document detailed each question, a formal definition or explanation of the question, the list of possible permitted labels, and various examples that illustrated difficult or edge cases. The coding schema was modified only in cases where backward compatibility could be maintained with prior labeling work. This typically involved taking a question which had many granular possible labels and consolidating the possible labels into a smaller number of broader labels. For example, the question about whether instructions were given to human annotators originally involved specifying whether the instructions included a formal definition, examples, or both. This was revised to only specify “instructions with formal definition or examples.” Similarly, training for human annotators originally included a more granular list of possible training circumstances, plus “no information”, “other”, and “unsure”. Because of the difficulty of gaining consensus on these different forms of training and the relatively small number of papers that gave any details whatsoever about annotator training (as well as no papers that explicitly stated no training had occurred), these were reduced to “some training details”, “no information”, and “unsure” (see Table TABREF55). In addition, three questions were added halfway through the first round of the annotation process.
First, a question was added about whether the paper used an external human-annotated dataset or not, which was added to clarify the question about whether original human annotation was used. This was added after a paper was discussed where an external human-annotated dataset was combined with an original human-annotated dataset. Two other questions were added about whether the paper contains a link to the training dataset and whether details about crowdworker compensation were included for projects using crowdworkers. These were both relatively straightforward questions, with relatively few incidences across our dataset. All papers had all questions answered in the second round. <<</Changes to the coding schema>>> <<</Methods and analysis details>>> <<<Software used>>> All computational analysis and scripting was conducted in Python 3.7 BIBREF66, using the following libraries: Pandas dataframes BIBREF60 for data parsing and transformation; SciPy BIBREF58 and NumPy BIBREF65 for quantitative computations; and Matplotlib BIBREF57 and Seaborn BIBREF67 for visualization. Analysis was conducted in Jupyter Notebooks BIBREF59 using the IPython BIBREF62 kernels. Datasets and Jupyter Notebooks for data collection and analysis will be made available upon publication, which are made to run on Binder BIBREF63. <<</Software used>>> <<<Coding schema, examples, and instructions>>> A final version of our coding schema and instructions is below: 1. Original classification task: Is the paper presenting its own original classifier that is trying to predict something? “Original” means a new classifier they made based on new or old data, not anything about the novelty or innovation in the problem area. Machine learning involves any process that does not have explicit or formal rules, where performance increases with more data. Classification involves predicting cases on a defined set of categories. Prediction is required, but not enough. Linear regressions might be included if the regression is used to make a classification, but making predictions for a linear variable is not. Predicting income or age brackets is classification, predicting raw income or age is not. Example: analyzing statistics about the kinds of words people use on social media is not a classification task at all. Example: predicting location is a classification task if it is from work, school, home, or other, but not if it is an infinite/undefined number of locations. Example: This paper (https://ieeexplore.ieee.org/document/7937783) was framed as not an original classification task (more algorithm performance), but they did create an original classifier. This can also be an “unsure” – which is 100% OK to answer. Example: Literature review papers that include classification papers aren't in this, if they didn't actually build a classifier. Example: if there is a supervised classification task that is part of a broader process, this counts, focus on that. If no, skip the following questions. 2. Classification outcome: What is the general type of problem or outcome that the classifier is trying to predict? Keep it short if possible. For example: sentiment, gender, human/bot, hate speech, political affiliation. 3. Labels from human annotation: Is the classifier at least in part trained on labeled data that humans made for the purpose of the classification problem? This includes re-using existing data from human judgments, if it was for the same purpose as the classifier. This does not include clever re-using of metadata. 
Do a quick CTRL-F for “manual” and “annot” if you don't see anything, just to be sure. If not, skip the following questions about human annotation. Example: ISideWith paper on political stances was labels from human annotation, just not original. They took the labels from elsewhere and filled in the gaps (more on that in next Q). Example: Buying followers and seeing who follows (1411.4299.pdf) is not human annotation. Example: Generating (smart) simulated datasets from metadata is not human annotation. Example: 1612.08207.pdf is not annotation when looking up political affiliation of politicians from an external database, even though it is manual work. No judgment is involved. Example: 1709.01895.pdf is labels from human annotation, even though it is semi-automated. They identified hashtags that they believe universally correspond to certain political stances. There is a form of human judgment here, although in that paper, they don't define or explain it. Example: Evaluation using human annotation is not annotation for ML, if the annotation wasn't used to make the classifier. (1710.07394.pdf) Example: If they are using human annotation just to have confidence that a machine-annotated dataset is as good as a human annotated one, but the human annotated dataset isn't actually used to train the classifier, it is *not* using human annotation for ML. (1605.05195.pdf) 4. Used original human annotation: Did the project involve creating new human-labeled data, or was it exclusively re-using an existing dataset? Yes No Unsure Papers may have a mix of new and old human labeled data, or new human labeled data and non-human labeled data. If there is any new human annotation, say yes. New human annotation must be systematic, not filling in the gaps of another dataset. Example: ISideWith paper on political stances is *not* original human annotation, even though they did some manual original research to fill the gap. If the methods section is too vague to not tell, then leave as unsure (example: 1801.06294.pdf) 4.5. Used external human annotation data: Did the project use an already existing dataset from human labeled data? Yes No Unsure If they are using external human annotated data, skip the remaining questions: 5. Original human annotation source: Who were the human annotators? Drop-down options are: Amazon Mechanical Turk (AMT, Turkers) Any other crowdworking platform (Crowdflower / Figure8) The paper's authors Academic experts / professionals in the area No information in the paper Other Unsure For academic experts or professionals in the area, this is independent from the kinds of specific training they received for the task at hand. Think of “the area” broadly, so if it is something about healthcare and nurses were recruited, that would be professionals in the area, even if they don't say anything about the nurses having specific training in the annotation task at hand. If it doesn't easily fit into these or uses multiple sources, add them in the next column. Example: “We develop a mechanism to help three volunteers analyze each collected user manually” -- put other, if that is all they say Example: If it just says “we annotated...” then assume it is only the paper's authors unless otherwise stated. 6. Number of human annotators: Put the number if stated, if not, leave blank. 7. Training for human annotators: Did the annotators receive interactive training for this specific annotation task / research project? Training involves some kind of interactive feedback. 
Simply being given formal instructions or guidelines is not training. Prior professional expertise is not training. Options include: Some kind of training is mentioned No information in the paper Unsure Example: It is not considered training if there was prescreening, unless they were told what they got right and wrong or other debriefing. Not training if they just gave people with high accuracy more work. Example: This paper had a minimum acceptable statement for some training information, with only these lines: “The labeling was done by four volunteers, who were carefully instructed on the definitions in Section 3. The volunteers agree on more than 90% of the labels, and any labeling differences in the remaining accounts are resolved by consensus.” 8. Formal instructions/guidelines: What documents were the annotators given to help them? This document you are in right now is an example of formal instructions with definitions and examples. No instructions beyond question text Instructions include formal definition or examples No information in paper (or not enough to decide) Unsure Example of a paper showing examples: “we asked crowdsourcing workers to assign the `relevant' label if the tweet conveys/reports information useful for crisis response such as a report of injured or dead people, some kind of infrastructure damage, urgent needs of affected people, donations requests or offers, otherwise assign the `non-relevant' label” 9. Prescreening for crowdwork platforms Leave blank if this is not applicable. No prescreening (must state this) Previous platform performance qualification (e.g. AMT Master) Generic skills-based qualification (e.g. AMT Premium) Location qualification Project-specific prescreening: researchers had known ground truth and only invited No information Unsure 10. Multiple annotator overlap: Did the annotators label at least some of the same items? Yes, for all items Yes, for some items No Unsure No information If it says there was overlap but not info to say all or some, put unsure. 11. Reported inter-annotator agreement: Leave blank if there was no overlap. Is a metric of inter-annotator agreement or intercoder reliability reported? It may be called Krippendorf's alpha, Cohen's kappa, F1 score, or other things. Yes No Unsure 12. Reported crowdworker compensation: If using crowdworkers to annotate, did they say how much the annotators were paid for their work? Leave blank if crowdworkers were not used. Yes No Unsure 13. Link to dataset available: Is there a link in the paper to the dataset they used? Yes No Unsure <<</Coding schema, examples, and instructions>>> <<</Appendix>>> <<</Title>>>
{ "references": [ "sampled all papers published in the Computer Science subcategories of Artificial Intelligence (cs.AI), Machine Learning (cs.LG), Social and Information Networks (cs.SI), Computational Linguistics (cs.CL), Computers and Society (cs.CY), Information Retrieval (cs.IR), and Computer Vision (CS.CV), the Statistics subcategory of Machine Learning (stat.ML), and Social Physics (physics.soc-ph),filtered for papers in which the title or abstract included at least one of the words “machine learning”, “classif*”, or “supervi*” (case insensitive),filtered to papers in which the title or abstract included at least “twitter” or “tweet” (case insensitive)" ], "type": "extractive" }
1912.08320
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What are the core best practices of structured content analysis? Context: <<<Title>>> Garbage In, Garbage Out? Do Machine Learning Application Papers in Social Computing Report Where Human-Labeled Training Data Comes From? <<<Abstract>>> Many machine learning projects for new application areas involve teams of humans who label data for a particular purpose, from hiring crowdworkers to the paper's authors labeling the data themselves. Such a task is quite similar to (or a form of) structured content analysis, which is a longstanding methodology in the social sciences and humanities, with many established best practices. In this paper, we investigate to what extent a sample of machine learning application papers in social computing --- specifically papers from ArXiv and traditional publications performing an ML classification task on Twitter data --- give specific details about whether such best practices were followed. Our team conducted multiple rounds of structured content analysis of each paper, making determinations such as: Does the paper report who the labelers were, what their qualifications were, whether they independently labeled the same items, whether inter-rater reliability metrics were disclosed, what level of training and/or instructions were given to labelers, whether compensation for crowdworkers is disclosed, and if the training data is publicly available. We find a wide divergence in whether such practices were followed and documented. Much of machine learning research and education focuses on what is done once a "gold standard" of training data is available, but we discuss issues around the equally-important aspect of whether such data is reliable in the first place. <<</Abstract>>> <<<Introduction>>> Machine learning (ML) has become widely used in many academic fields, as well as across the private and public sector. Supervised machine learning is particularly prevalent, in which training data is collected for a set of entities with known properties (a “ground truth” or “gold standard”), which is used to create a classifier that will make predictions about new entities of the same type. Supervised ML requires high-quality training data to produce high-quality classifiers. “Garbage In, Garbage Out” is a longstanding aphorism in computing about how flawed input data or instructions will produce flawed outputs. BIBREF0, BIBREF1 However, contemporary ML research and education tends to focus less on obtaining and validating such a training dataset, with such considerations often passed over in major textbooks BIBREF2, BIBREF3, BIBREF4. The predominant focus is typically on what is done with the training data to produce a classifier, with heavy emphasis on mathematical foundations and routine use of clean and tidy “toy” datasets. The process of creating a “gold standard” or “ground truth” dataset is routinely black-boxed. Many papers in ML venues are expected to use a standard, public training dataset, with authors comparing various performance metrics on the same dataset. While such a focus on what is done to a training dataset may be appropriate for theoretically-oriented basic research in ML, this is not the case for supervised ML applications. <<<Study overview>>> All approaches of producing a training dataset involve some form of human judgment, albeit at varying levels of granularity. 
In this paper, we investigate and discuss a wide range of issues and concerns around the curation of human-labeled or human-annotated data, in which one or more individuals make discrete assessments of items. We report from a study in which a team of six labelers systematically examined a corpus of supervised machine learning application papers in social computing, specifically those that classified tweets from Twitter for various purposes. For each paper, we recorded what the paper does or does not state about the training data used to produce the classifier presented in the paper. The bulk of the papers we examined were a sample of preprints or postprints published on ArXiV.org, plus a smaller set of published papers sampled from Scopus. We determined whether such papers involved an original classification task using supervised ML, whether the training data labels were produced from human annotation, and if so, the source of the human-labeled dataset (e.g. the paper's authors, Mechanical Turk, recruited experts, no information given, etc.). For all papers in which an original human-labeled dataset was produced, we then made a series of further determinations, including if definitions and/or examples were given to labelers, if labelers independently labeled the same items, if inter-rater reliability metrics were presented, if compensation details for crowdworkers were reported, if a public link to the dataset was available, and more. As our research project was a human-labeling project studying other human-labeling projects, we took care in our own practices. We only have access to the paper reporting about the study and not the actual study itself, and many papers either do not discuss such details at all or do so without sufficient detail to make a determination. For example, many papers did note that the study involved the creation of an original human-labeled dataset, but did not specify who labeled it. For some of our items, one of the most common labels we gave was “no information” — which is a concerning issue, given how crucial such information is in understanding the validity of the training dataset and by extension, the validity of the classifier. <<</Study overview>>> <<</Introduction>>> <<<Literature review and motivation>>> <<<A different kind of “black-boxing” in machine learning>>> In the introduction, we noted training data is frequently black-boxed in machine learning research and applications. We use the term “black-boxed” in a different way than it is typically invoked in and beyond the FAT* community, where it often refers to interpretability. In that sense, “black-boxing” means that even for experts who have access to the training data and code which created the classifier, it is difficult to understand why the classifier made each decision. In social science and humanities work on “black-boxing” of ML (and other “algorithmic” systems), there is often much elision between issues of interpretability and intentional concealment, as Burrell BIBREF5 notes. A major focus is on public accountability BIBREF6, where many problematic issues can occur behind closed doors. This is even the case with relatively simple forms of analytics and automation — such as if-then statements, linear regressions, or rule-based expert systems BIBREF7, BIBREF8. In contrast, we are concerned with what is and is not taken for granted when developing a classifier. This use is closer to how Latour & Woolgar used it in an ethnographic study of scientific laboratories BIBREF9.
They discuss how equipment like a mass spectrometer would typically be implicitly trusted to turn samples into signals. However, when the results were drastically unexpected, it could be a problem with the machine or a fundamental breakthrough. Scientists and technicians would have to “open up the black box,” changing their relationship to the equipment to determine if the problem was with the equipment or the prevailing theory. In this view, black-boxing is a relational concept, not an objective property. It is about the orientation people have to the same social-technical systems they routinely work with and rely upon. “Opening up the black box” is not about digging into technical or internal details per se, but a gestalt shift in whether the output of a system is implicitly taken for granted or open for further investigation. In this view, black-boxing is not inherently problematic. The question is more about who gets to be skeptical about data and who is obligated to suspend disbelief, which are also raised in discussions of open science & reproducibility BIBREF10. Operationalization, measurement, and construct validity have long been crucial and contested topics in the social sciences. Within quantitative sub-fields, it is common to have extensive debates about the best way to define and measure a complex concept (e.g. “intelligence”). From a qualitative and Science & Technology Studies perspective, there is extensive work on the practices and implications of various regimes of measurement BIBREF11, BIBREF12, BIBREF13, BIBREF14. In ML, major operationalization decisions can implicitly occur in data labeling. Yet as Jacobs & Wallach note, “[i]n computer science, it is particularly rare to articulate the distinctions between constructs and their operationalizations” BIBREF15. This is concerning, because “many well-studied harms [in ML] are direct results of a mismatch between the constructs purported to be measured and their operationalizations” BIBREF15. <<</A different kind of “black-boxing” in machine learning>>> <<<Content analysis>>> Creating human-labeled training datasets for machine learning often looks like content analysis, a well-established methodology in the humanities and the social sciences (particularly literature, communication studies, and linguistics), which also has versions used in the life, ecological, and medical sciences. Content analysis has taken many forms over the past century, from more positivist methods that formally establish structural ways of evaluating content to more interpretivist methods that embrace ambiguity and multiple interpretations, such as grounded theory BIBREF16. The intersection of ML and interpretivist approaches is outside of the scope of this article, but it is an emerging area of interest BIBREF17. Today, structured content analysis (also called “closed coding”) is used to turn qualitative or unstructured data of all kinds into structured and/or quantitative data, including media texts, free-form survey responses, interview transcripts, and video recordings. Projects usually involve teams of “coders” (also called “annotators”, “labelers”, or “reviewers”), with human labor required to “code”, “annotate”, or “label” a corpus of items. (Note that we use such terms interchangeably in this paper.) In one textbook, content analysis is described as a “systematic and replicable” BIBREF18 method with several best practices: A “coding scheme” is defined, which is a set of labels, annotations, or codes that items in the corpus may have. 
Schemes include formal definitions or procedures, and often include examples, particularly for borderline cases. Next, coders are trained with the coding scheme, which typically involves interactive feedback. Training sometimes results in changes to the coding scheme, in which the first round becomes a pilot test. Then, annotators independently review at least a portion of the same items throughout the entire process, with a calculation of “inter-annotator agreement” or “inter-rater reliability.” Finally, there is a process of “reconciliation” for disagreements, which is sometimes by majority vote without discussion and other times discussion-based. Structured content analysis is a difficult, complicated, and labor-intensive process, requiring many different forms of expertise on the part of both the coders and those who manage them. Historically, teams of students have often performed such work. With the rise of crowdwork platforms like Amazon Mechanical Turk, crowdworkers are often used for content analysis tasks, which are often similar to other kinds of common crowdworking tasks. Google's reCAPTCHA BIBREF19 is a Turing test in which users perform annotation tasks to prove their humanness — which initially involved transcribing scanned phrases from books, but now involves image labeling for autonomous vehicles. There are major qualitative data analysis software tools that scaffold the content analysis process to varying degrees, such as MAXQDA or NVivo, which have support for inter-annotator agreement metrics. There have also been many new software platforms developed to support more micro-level annotation or labeling at scale, including in citizen science, linguistics, content moderation, and more general-purpose use cases BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25. For example, the Zooniverse BIBREF26 provides a common platform for citizen science projects across different domain application areas, which let volunteers make judgements about items, which are aggregated and reconciled in various ways. <<</Content analysis>>> <<<Meta-research and methods papers in linguistics and crowdsourcing>>> Our paper is also in conversation with various meta-research and standardization efforts in linguistics, crowdsourcing, and other related disciplines. Linguistics and Natural Language Processing have long struggled with issues around standardization and reliability of linguistic tagging. Linguistics researchers have long developed best practices for corpus annotation BIBREF27, including recent work about using crowdworkers BIBREF28. Annotated corpus projects often release guidelines and reflections about their process. For example, the Linguistic Data Consortium's guidelines for annotation of English-language entities (version 6.6) is 72 single-spaced pages BIBREF29. A universal problem of standardization is that there are often too many standards and not enough enforcement. As BIBREF30 notes, 33-81% of linguistics/NLP papers in various venues do not even mention the name of the language being studied (usually English). A meta-research study found only 1 in 9 qualitative papers in Human-Computer Interaction reported inter-rater reliability metrics BIBREF31. 
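To make the overlap, agreement, and reconciliation steps described above concrete, here is a minimal sketch of how a team might compute percent complete agreement and a mechanical majority-vote reconciliation over a long-format table of labels. It is written in Python with pandas (the stack reported in this paper's software appendix); the column names (item_id, coder, label) and the toy data are hypothetical and not drawn from any particular project.

```python
from collections import Counter

import pandas as pd

# Hypothetical long-format label table: one row per (item, coder) judgment.
labels = pd.DataFrame(
    {
        "item_id": [1, 1, 1, 2, 2, 2, 3, 3, 3],
        "coder": ["A", "B", "C", "A", "B", "C", "A", "B", "C"],
        "label": ["yes", "yes", "yes", "no", "yes", "no", "unsure", "no", "no"],
    }
)

# Percent complete agreement: the share of items for which every coder who
# reviewed the item gave exactly the same label.
per_item_agreement = labels.groupby("item_id")["label"].nunique().eq(1)
percent_complete_agreement = per_item_agreement.mean()

def majority_vote(item_labels: pd.Series) -> str:
    # Naive majority-rule reconciliation with no discussion; ties are flagged
    # for group adjudication rather than decided arbitrarily.
    counts = Counter(item_labels)
    top_label, top_count = counts.most_common(1)[0]
    tied = [lab for lab, n in counts.items() if n == top_count]
    return top_label if len(tied) == 1 else "needs_discussion"

reconciled = labels.groupby("item_id")["label"].apply(majority_vote)

print(f"Percent complete agreement: {percent_complete_agreement:.0%}")
print(reconciled)
```

As the best practices above note, many projects (including the one reported in this paper) replace the purely mechanical majority vote with discussion-based reconciliation, particularly for ties and persistent disagreements.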
Another related area is meta-research and methods papers focused on identifying or preventing low-effort responses from crowdworkers — sometimes called “spam” or “random” responses, or alternatively “fraudsters” or “cheaters.” Rates of “self-agreement” are often used, determining if the same person labels the same item differently at a later stage. One paper BIBREF32 examined 17 crowdsourced datasets for sentiment analysis and found none had self-agreement rates (Krippendorf's alpha) above 0.8, with some lower than 0.5. Another paper recommends the self-agreement strategy in conjunction with asking crowdworkers to give a short explanation of their response, even if the response is never actually examined BIBREF33. One highly-cited paper BIBREF34 proposes a strategy in which crowdworkers are given some items with known labels (a gold/ground truth), and those who answer incorrectly are successively given more items with known labels, with a Bayesian approach to identifying those who are answering randomly. <<</Meta-research and methods papers in linguistics and crowdsourcing>>> <<<The data documentation movements>>> Our paper is also in conversation with two related movements in computationally-supported knowledge production that have surfaced issues around documentation. First, we see connections with the broader open science and reproducibility movements. Open science is focused on a range of strategies, including open access research publications, educational materials, software tools, datasets, and analysis code BIBREF35. The reproducibility movement is deeply linked to the open science movement, focusing on getting researchers to release everything that is necessary for others to perform the same tasks needed to get the exact same results BIBREF36, BIBREF10. This increasingly includes pushing for high standards for releasing protocols, datasets, and analysis code. As more funders and journals require that data be released, the issue of good documentation for data and protocols is rising BIBREF37, BIBREF38. There are also intersecting literatures on systems for capturing information in ML data flows and supply chains BIBREF39, BIBREF40, BIBREF41, as well as supporting data cleaning BIBREF42, BIBREF43. These issues have long been discussed in the fields of library and information science, particularly in Research Data Management BIBREF44, BIBREF45, BIBREF46, BIBREF47. A major related movement is in and around the FATML field, with many recent papers proposing training data documentation in the context of ML. Various approaches, analogies, and metaphors have been taken in this area, including “datasheets for datasets” BIBREF48, “model cards” BIBREF49, “data statements” BIBREF30, “nutrition labels” BIBREF50, a “bill of materials” BIBREF51, “data labels” BIBREF52, and “supplier declarations of conformity” BIBREF53. Many go far beyond the concerns we have raised around human-labeled training data, as some are also (or primarily) concerned with documenting other forms of training data, model performance and accuracy, bias, considerations of ethics and potential impacts, and more. We discuss how our findings relate to this broader emerging area further in the concluding discussion.
<<</The data documentation movements>>> <<</Literature review and motivation>>> <<<Data and methods>>> <<<Data: machine learning papers performing classification tasks on Twitter data>>> Our goal was to find a corpus of papers that were using original human annotation or labeling to produce a new training dataset for supervised ML. We restricted our corpus to papers whose classifiers were trained on data from Twitter, for various reasons: First, we did attempt to produce a broader corpus of supervised ML application papers, but found our search queries in academic search engines would either 1) be so broad that most papers were non-applied / theoretical papers or papers re-using public pre-labeled datasets, or 2) be so narrow that they excluded many canonical papers in this area, which made us suspect that they were non-representative samples. Restricting our sample to papers using Twitter data has strategic benefits for this kind of initial study. Data from Twitter is of interest to scholars from a variety of disciplines and topical interest areas, in addition to those who have an inherent interest in Twitter as a social media site. As we detail in appendix section SECREF45, the papers represented political science, public health, NLP, sentiment analysis, cybersecurity, content moderation, hate speech, information quality, demographic profiling, and more. We drew the main corpus of ML application papers from ArXiV, one of the oldest and most established “preprint” repositories, originally created for researchers to share papers prior to peer review. Today, ArXiV is widely used to share both drafts of papers that have not (yet) passed peer review (“preprints”) and final versions of papers that have passed peer review (often called “postprints”). Users submit to any number of disciplinary categories and subcategories. Subcategory moderators perform a cursory review to catch spam, blatant hoaxes, and miscategorized papers, but do not review papers for soundness or validity. We sampled all papers published in the Computer Science subcategories of Artificial Intelligence (cs.AI), Machine Learning (cs.LG), Social and Information Networks (cs.SI), Computational Linguistics (cs.CL), Computers and Society (cs.CY), Information Retrieval (cs.IR), and Computer Vision (CS.CV), the Statistics subcategory of Machine Learning (stat.ML), and Social Physics (physics.soc-ph). We filtered for papers in which the title or abstract included at least one of the words “machine learning”, “classif*”, or “supervi*” (case insensitive). We then filtered to papers in which the title or abstract included at least “twitter” or “tweet” (case insensitive), which resulted in 494 papers. We used the same query on Elsevier's Scopus database of peer-reviewed articles, selecting 30 randomly sampled articles, which were mostly from conference proceedings. One paper from the Scopus sample was corrupted, so only 29 papers were examined. ArXiV is likely not a representative sample of all ML publications. However, we chose it because ArXiV papers are widely accessible to the public, indexed in Google Scholar and other scholarly databases, and are generally considered citeable publications. The fact that many ArXiV papers are not peer-reviewed and that papers posted are not likely representative samples of ML research is worth considering when reflecting on the generalizability of our findings.
However, given that such papers are routinely discussed in both the academic literature and the popular press, issues with their reporting of training data are just as crucial. Sampling from ArXiv also lets us examine papers at various stages in the peer-review cycle, breaking out preprints not (yet) published, preprints of later published papers, and postprints of published works. The appendix details both corpora, including an analysis of the topics and fields of papers (in SECREF47), and an analysis of the publication types (e.g. an early preprint of a journal article, a final postprint of a conference proceeding, a preprint never published) and publishers (in SECREF50 and SECREF47). The final dataset can be found on GitHub and Zenodo. <<</Data: machine learning papers performing classification tasks on Twitter data>>> <<<Labeling team, training, and workflow>>> Our labeling team included one research scientist who led the project (RSG) and undergraduate research assistants, who worked for course credit as part of a university-sponsored research experience program (KY, YY, MD, JQ, RT, and JH). The project began with five students for one semester, four of whom continued on the project for the second semester. A sixth student replaced the student who did not continue. All students had some coursework in computer science and/or data science, with a range of prior experience in machine learning in both a classroom and applied setting. Students' majors and minors included Electrical Engineering & Computer Science, Data Science, Statistics, and Linguistics. The labeling workflow was that each week, a set of papers was randomly sampled from the unlabeled set of 494 ArXiV papers in the corpus. For two weeks, the 30 sampled papers from Scopus were selected. The five students independently reviewed and labeled the same papers each week, using a different web-based spreadsheet to record labels. The team leader synthesized labels and identified disagreement. The team met in person each week to discuss cases of disagreement, working to build a consensus about the proper label (as opposed to purely majority vote). The team leader facilitated these discussions and had the final say when a consensus could not be reached. The papers labeled for the first two weeks were in a training period, in which the team worked on a different set of papers not included in the dataset. In these initial weeks, the team learned the coding schema and the reconciliation process, which were further refined. <<</Labeling team, training, and workflow>>> <<<Second round verification and reconciliation>>> After 164 papers were labeled by five annotators, we conducted a second round of verification. This was necessary both because there were some disagreements in labeling and because changes had been made to the coding schema (discussed in appendix SECREF54). All labels for all 164 papers were independently re-examined by at least two of the six team members. Annotators were given a summary of the original labels in the first round and were instructed to review all papers, being mindful of how the schema and instructions had changed. We then aggregated, reconciled, and verified labels in the same way as in the first round. For papers where there was no substantive disagreement on any question between those who re-examined it in the second round, the paper's labels were considered to be final.
For papers where there was any substantive disagreement on any question, the paper was either discussed to consensus in the same manner as in the first round or decided by the team leader. The final schema and instructions are in the appendix, section SECREF57. Finally, we cleaned up issues with labels around implicit or blank values using rule-based scripts. We learned our process involved some ambiguities around whether a subsequent value needed to be filled in. For example, if a paper was not using crowdworkers, then the instructions for our schema were that the question about crowdworker compensation was to remain blank. However, we found we had cases where “reported crowdworker compensation” was “no” for papers that did not use crowdworkers. This would be concerning had we had a “yes” for such a variable, but found no such cases. We recoded questions about pre-screening for crowdwork platforms (implied by using crowdworkers in original human annotation source) and the number of human annotators. We measured interrater reliability metrics using mean percent total agreement, or the proportion of cases where all labelers initially gave the same label. This is a more stringent metric than Fleiss's kappa and Krippendorf's alpha, and our data does not fit the assumptions for those widely-used metrics. IRR rates for round one were relatively low: across all questions, the mean percent total agreement was 66.67%, with the lowest question having a rate of 38.2%. IRR rates for round two were quite higher: the mean percent total agreement across all questions was 84.80% and the lowest agreement score was 63.4% (for “used external human annotation”, which we discuss later). We are confident about our labeling process, especially because these individual ratings were followed by an expert-adjudicated discussion-based reconciliation process, rather than simply counting majority votes. We detail more information and reflection about interrater reliability in appendix section SECREF52. <<</Second round verification and reconciliation>>> <<<Raw and normalized information scores>>> We quantified the information about training data in papers, developing a raw and normalized information score, as different studies demanded different levels of information. For example, our question about whether inter-annotator agreement metrics were reported is only applicable for papers involving multiple annotators. Our questions about whether prescreening was used for crowdwork platforms or whether crowdworker compensation was reported is only relevant for projects using crowdworkers. However, some kinds of information are relevant to all papers that involve original human annotation: who the annotators are (annotation source), annotator training, formal instructions or definitions were given, the number of annotators involved, whether multiple annotators examined the same items, or a link to a publicly-available dataset. For raw scores, papers involving original human annotation received one point each for reporting the six items mentioned above. In addition, they received one point per question if they included information for each of the two questions about crowdworkers if the project used crowdworkers, and one point if they reported inter-annotator metrics if the project used multiple annotators per item. For the normalized score, the raw score was divided by the highest possible raw score. We only calculated scores for papers involving original human annotation. 
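As a rough illustration of the scoring just described, the following sketch computes a raw and a normalized information score per paper. This is not the authors' released analysis code, and the boolean column names are hypothetical stand-ins for the labels in the final dataset; it simply encodes the rule that the six generally applicable items are always scored, the two crowdworker questions are only scored when crowdworkers were used, and the inter-annotator agreement question is only scored when there was multiple-annotator overlap.

```python
import pandas as pd

# Hypothetical per-paper flags: True means the paper reported that information.
papers = pd.DataFrame(
    {
        "reported_source": [True, False],
        "reported_training": [False, False],
        "reported_instructions": [True, True],
        "reported_num_annotators": [True, False],
        "reported_overlap": [True, False],
        "reported_dataset_link": [False, False],
        "used_crowdworkers": [True, False],
        "reported_prescreening": [True, False],
        "reported_compensation": [False, False],
        "had_overlap": [True, False],
        "reported_irr": [True, False],
    }
)

BASE_ITEMS = [
    "reported_source",
    "reported_training",
    "reported_instructions",
    "reported_num_annotators",
    "reported_overlap",
    "reported_dataset_link",
]

def info_scores(row: pd.Series) -> pd.Series:
    # One point for each of the six items relevant to every paper that
    # produced an original human-annotated dataset.
    raw = sum(bool(row[item]) for item in BASE_ITEMS)
    max_possible = len(BASE_ITEMS)
    # Two additional possible points for papers that used crowdworkers.
    if row["used_crowdworkers"]:
        raw += bool(row["reported_prescreening"]) + bool(row["reported_compensation"])
        max_possible += 2
    # One additional possible point for papers with multiple-annotator overlap.
    if row["had_overlap"]:
        raw += bool(row["reported_irr"])
        max_possible += 1
    return pd.Series({"raw_score": raw, "normalized_score": raw / max_possible})

scores = papers.apply(info_scores, axis=1)
print(scores)
```

The normalization simply divides each paper's raw score by the maximum score that paper could have earned, so papers using crowdworkers or multiple-annotator overlap are scored out of a larger possible total.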
Finally, we conducted an analysis of information scores by various bibliometric factors, which required determining such factors for all papers. For all ArXiV papers, we determined whether the PDF was a pre-print not (yet) published in another venue, a post-print identical in content to a published version, or a pre-print version of a paper published elsewhere with different content. For all Scopus papers and ArXiV post-prints, we also determined the publisher. We detail these in appendix SECREF47. <<</Raw and normalized information scores>>> <<</Data and methods>>> <<<Findings>>> <<<Original classification task>>> The first question was whether the paper was conducting an original classification task using supervised machine learning. Our keyword-based process of generating the corpus included many papers not in this scope. However, defining the boundaries of supervised ML and classification tasks is difficult, particularly for papers that are long, complex, and ambiguously worded. We found that some papers claimed to be using ML, but when we examined the details, these did not fall under our definition. We defined machine learning broadly, using a common working definition in which machine learning includes any automated process that does not exclusively rely on explicit rules, in which the performance of a task increases with additional data. This includes simple linear regressions, for example, and there is much debate about if and when simple linear regressions are a form of ML. However, as we were also looking for classification tasks, linear regressions were only included if they were used to make a prediction in a set of defined classes. We defined an “original” classifier to mean a classifier the authors made based on new or old data, which excludes the exclusive use of pre-trained classifiers or models. As table TABREF13 shows, the overwhelming majority of papers in our dataset were involved in an original classification task. We placed 5 papers in the “unsure” category — meaning they did not give enough detail for us to make this determination, or that they were complex boundary cases. One of the “unsure” cases clearly used labels from human annotation, and so we answered the subsequent questions, which is why the counts in Table 2 add up to 143 (as well as some other seeming disparities in later questions). <<</Original classification task>>> <<<Labels from human annotation>>> One of the major issues we had to come to a consensus around was whether a paper used labels from human annotation. We observed a wide range of cases in which human judgment was brought to bear on the curation of training data. Our final definition required that “the classifier [was] at least in part trained on labeled data that humans made for the purpose of the classification problem.” We decided on a working definition that excluded many “clever uses of metadata” from this category, but did allow some cases of “self-annotation” from social media, which were typically the most borderline cases on the other side. For example, one case that we decided was human annotation involved using specific politically-inflected hashtags to automatically label tweets as for or against a position, for use in stance detection (e.g. #ProChoice versus #ProLife). However, these cases of self-annotation would all be considered external human annotation rather than original human annotation, and so the subsequent questions about the annotation process would not be applicable.
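The hashtag-based “self-annotation” boundary case described above can be illustrated with a minimal sketch; the hashtags, example tweets, and stance labels here are purely illustrative and are not taken from any specific paper in the corpus.

```python
from typing import Optional

import pandas as pd

# Hypothetical tweets; in the boundary cases discussed above, the label comes
# from hashtags the researchers believe reliably signal a stance, rather than
# from annotators reading and judging each tweet.
tweets = pd.DataFrame(
    {
        "text": [
            "My body, my choice #ProChoice",
            "Marching this weekend #ProLife",
            "Watching the debate tonight",
        ]
    }
)

HASHTAG_TO_STANCE = {"#prochoice": "support", "#prolife": "oppose"}

def self_annotate(text: str) -> Optional[str]:
    # Assign a stance only if exactly one known stance is signaled; otherwise
    # leave the tweet unlabeled rather than guessing.
    stances = {stance for tag, stance in HASHTAG_TO_STANCE.items() if tag in text.lower()}
    return stances.pop() if len(stances) == 1 else None

tweets["stance"] = tweets["text"].map(self_annotate)
print(tweets)
```

Because the labels come from researcher-chosen metadata rather than from annotators reading each item, cases like this were treated as external rather than original human annotation under the schema, and the downstream questions about the annotation process did not apply.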
Another set of borderline cases involved papers where no human annotation was involved in the curation of the training dataset that was used to build the classifier, but human annotation was used for validation purposes. We did not consider these to involve human annotation as we originally defined it in our schema, even though the same issues arise with equal significance for the validity of such research. <<</Labels from human annotation>>> <<<Used original human annotation and external human annotation>>> Our next two questions were about whether papers that used human annotation used original human annotation, which we defined as a process in which the paper's authors obtained new labels from humans for items. It is common in ML research to re-use public datasets, and many of the papers in our corpus did so. We also found 10 papers in which external and original human annotation were combined to create a new training dataset. For these reasons, we modified our schema to ask separate questions for original and external human annotation data, to capture all three cases (using only original, only external, or both). Tables TABREF17 and TABREF17 show the breakdown for both questions. We only answered the subsequent questions about the human annotation process for the papers producing an original human annotated dataset. <<</Used original human annotation and external human annotation>>> <<<Original human annotation source>>> Our next question asked who the annotators were, for the 74 papers that used original human annotation. The possible options were: the paper's authors, Amazon Mechanical Turk, other crowdworking platforms, experts/professionals, other, and no information. We took phrases like “we labeled” (with no other details) to be an implicit declaration that the paper's authors did the labeling. If the paper discussed labelers' qualifications for the task beyond an average person, we labeled it as “experts / professionals.” For example, some of our boundary cases involved recruiting students to label sentiment. One study involved labeling tweets with both English and Hindi text and noted that the students were fluent in both languages – which we considered to be in the “experts / professionals” category. Another paper we included in this category recruited students to label tweets with emojis, noting that the recruited students “are knowledgeable with the context of use of emojis.” As table TABREF19 shows, we found a diversity of approaches to the recruitment of human annotators. The plurality of papers involved the paper's authors doing the annotation work themselves. The next highest category was “no information,” which was found in almost a quarter of the papers using original human annotation. The experts / professionals category was far higher than we expected, although we took any claim of expertise for granted. Crowdworkers constituted a far smaller proportion than we expected, with Amazon Mechanical Turk and other platforms collectively comprising about 15% of papers. Almost all of the other crowdworking platforms specified were CrowdFlower/FigureEight, with one paper using oDesk. <<</Original human annotation source>>> <<<Number of human annotators>>> Our instructions for the question about the number of human annotators were not precise and had one of the lower levels of inter-rater reliability. If the paper included information about the number of human annotators, the instructions were to put such a number, leaving the field blank for no information.
Most of the disagreement was from differences around how papers report the number of annotators used. For example, some papers specified the total number of humans who worked on the project annotating items, while others only specified how many annotators were used per item (particularly for those using crowdworkers), and a few reported both. Some involved a closed set of annotators who all examined the same set of items, similar to how our team operated. Other papers involved an open set of annotators, particularly drawn from crowdworking platforms, but had a consistent number of annotators who reviewed each item. Due to these inconsistencies, we computationally re-coded responses into the presence of information about the number of human annotators. These are both important aspects to discuss, although it is arguably more important to discuss the number of annotators who reviewed each item. In general, having more annotators review each item provides a more robust way of determining the validity of the entire process, although this also requires calculating inter-annotator agreement metrics. As table TABREF21 shows, a slim majority of papers using original human annotation specified the number of annotators involved in some way. Based on our experiences, we typically noticed that papers discussing the number of annotators often fell into two categories: 1) a small closed team (more often 2-3, sometimes 4-6) that were either the papers' authors or recruited directly by the authors, who tended to perform the same amount of work for the duration of the project; or 2) a medium to large (25-500) open set of annotators, typically but not necessarily recruited through a crowdworking platform, who each performed highly variable amounts of work. <<</Number of human annotators>>> <<<Formal definitions and instructions>>> Our next question was about whether instructions or guidelines with formal definitions or examples were reportedly given to annotators. Formal definitions and concrete examples are both important, as they help annotators understand how the researchers have operationalized the concept in question and determine edge cases. With no or ambiguous definitions/examples, there could be fundamental misunderstandings that are not captured by inter-annotator agreement metrics, if all annotators share the same misunderstandings. We defined two levels: giving no instructions beyond the text of a question, then giving definitions for each label and/or concrete examples. The paper had to describe or refer to the instructions given (or include them in supplemental materials); otherwise, we categorized it as “No Information”. Some borderline cases involved authors labeling the dataset themselves, where the paper presented a formal definition, but only implied that it informed the labeling – which we took to be a formal definition. As table TABREF23 shows, the plurality of papers did not provide enough information to make a determination (it is rare for authors to say they did not do something), but 43.2% provided definitions or examples. <<</Formal definitions and instructions>>> <<<Training for human annotators>>> We defined training for human annotators to involve some kind of interactive process in which the annotators have the opportunity to receive some kind of feedback and/or dialogue about the annotation process. We identified this as a distinct category from both the qualifications of the annotators and the instructions given to annotators, which are examined in other questions.
Training typically involved some kind of live session or ongoing meeting in which annotators' progress was evaluated and/or discussed, where annotators had the chance to ask questions or receive feedback on why certain determinations did or did not match definitions or a schema. We used our own team's process as an example of this, and found several papers that used a similar roundtable process, which went into detail about interactions between team members. Cases in which the paper only specified that annotators were given a video or a detailed schema to review were not considered training details, as this was a one-way process and counted as definitions/instructions. The overwhelming majority of papers did not discuss such issues, as table TABREF25 shows, with 15% of papers involving a training session. Because we had a quite strict definition for what constitutes training (versus what many may think of around “trained annotators”), this is expected. We also are not all that concerned with this low number, as there are many tasks that likely do not require specialized training — unlike our project, which required both specific expertise in an area and with our complicated schema. <<</Training for human annotators>>> <<<Pre-screening for crowdwork platforms>>> Crowdwork platforms let employers pre-screen or test for traits, skills, or performance metrics, which significantly narrows the pool of crowdworkers. For example, “project-specific pre-screening” involves offering a sample task with known outcomes: if the crowdworker passed, they would be invited to annotate more items. 5 of the 11 papers using crowdworkers reported using this approach. Platforms also often have location-based screening (e.g. US-only), which 2 papers reported using. Some crowdwork platforms have a qualification for workers who have a positive track record based on total employer ratings (e.g. AMT Master). Platforms also offer generic skills-based tests for certain kinds of work (e.g. CrowdFlower's Skill Tests). These last two qualifications were in our coding schema, but no papers reported using them. <<</Pre-screening for crowdwork platforms>>> <<<Multiple annotator overlap and reporting inter-annotator agreement>>> Our next two questions were about using multiple annotators to review the same items (multiple annotator overlap) and whether inter-annotator agreement metrics were reported. Having multiple independent annotators is typically a foundational best practice in structured content analysis, so that the integrity of the annotations and the schema can be evaluated (although see BIBREF31). For multiple annotator overlap, our definitions required papers state whether all or some of the items were labeled by multiple labelers, otherwise “no information” was recorded. Then, for papers that did multiple annotator overlap, we examined whether any inter-annotator agreement metric was reported. We did find one paper that did not explicitly state that multiple labelers overlapped, but did report inter-annotator agreement metrics. This implicitly means that at least some of the items were labeled by multiple labelers, but for consistency, we keep the “no information” label for this case. We did not record what kind of inter-annotator metric was used, such as Cohen's kappa or Krippendorff's alpha, but many different metrics were used. We also did not record what the exact statistic was, although we did notice a wide variation in what was considered an acceptable or unacceptable score for inter-annotator agreement. 
For multiple annotator overlap, table TABREF29 shows that just under half of all papers that involved an original human annotation task did not provide explicit information one way or the other about whether multiple annotators reviewed each item. This includes the one paper that reported inter-annotator agreement metrics but did not specify whether overlap was for all items or some items. Only three papers explicitly stated that there was no overlap among annotators, and so it is quite likely that the papers that did not specify such information did not engage in such a practice. For the 37 papers that did involve some kind of multiple annotator overlap, the overwhelming majority of this subsample (84%) involved multiple annotation of all items, rather than only some items. We also found that for papers that did involve some kind of multiple overlap, the large majority of them (about 70%) did report some metric of inter-annotator agreement, as table TABREF29 indicates. <<</Multiple annotator overlap and reporting inter-annotator agreement>>> <<<Reported crowdworker compensation>>> Crowdworking is often used because of its low cost, with effective pay rates that can be far below minimum wage in certain countries. Researchers and crowdworkers have been organizing around issues related to the exploitation of crowdworkers in research, advocating ethical practices including fair pay BIBREF54. We examined all papers involving crowdworkers for any indication of compensation, and found that none mentioned compensation. We did find that some papers using other sources of human annotation (e.g. students) discussed compensation for annotators, but this was not in our original schema. <<</Reported crowdworker compensation>>> <<<Link to dataset available>>> Our final question was about whether the paper contained a link to the original human-annotated training dataset. Note that this question was only answered for papers involving some kind of original or novel human annotation, and papers that were exclusively re-using an existing open or public dataset were left blank to avoid double-counting. We did not follow such links or verify that such data was actually available. As table TABREF32 shows, the overwhelming majority of papers did not include such a link, with only 8 papers (10.81%) that used original human-annotated training datasets linking to such data. Given the time, labor, expertise, and funding involved in creating original human-annotated datasets, authors may be hesitant to release such data until they feel they have published as many papers as they can. <<</Link to dataset available>>> <<</Findings>>> <<<Paper information scores>>> The raw and normalized information scores (see section SECREF10 for methodology) were calculated for all papers that involved original human annotation. As previously discussed, our corpora represent a likely non-representative sample of ML research, even if bounded to social computing. Our relatively small sample sizes combined with the number of multiple comparisons would mean that thresholds for statistical significance would need to be quite high. Instead, we present these results to help provide an initial framework and limited results on this issue, intended to help inform a broader and more systematic evaluation of the ML literature. We do observe quite varying ranges and distributions of information scores, which gives evidence for the claim that there is substantial and wide variation in the practices around human annotation, training data curation, and research documentation.
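To make these scores concrete, the following is a minimal sketch of how a raw and normalized information score could be computed, under our own simplifying assumptions rather than the exact procedure of section SECREF10: the raw score counts the documentation questions for which a paper provides substantive information, and the normalized score divides by the number of questions applicable to that paper. The question names and answers below are hypothetical.

# Hypothetical per-paper answers; None marks questions that are not applicable
# (e.g. crowdworker compensation for a paper that did not use crowdworkers).
paper = {
    "annotation_source": "authors",
    "number_of_annotators": "no information",
    "formal_instructions": "definitions or examples",
    "annotator_training": "no information",
    "crowdwork_prescreening": None,    # not applicable: no crowdworkers used
    "multiple_overlap": "all items",
    "agreement_metric": "reported",
    "crowdworker_compensation": None,  # not applicable: no crowdworkers used
    "dataset_link": "no information",
}

applicable = [answer for answer in paper.values() if answer is not None]
raw_score = sum(answer != "no information" for answer in applicable)
normalized_score = raw_score / len(applicable)
print(raw_score, round(normalized_score, 3))  # 4 0.571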
<<<Overall distributions of information scores>>> Figure FIGREF34 shows histograms for raw and normalized information scores, which both suggest a bimodal distribution, with fewer papers at both extremes and around the median. This suggests that there are roughly two populations of researchers, with one centered around raw scores of 1-2 and normalized scores of 0.25 and one centered around raw scores of 5 and normalized scores of 0.7. The normalized information score ranged from 0 to 1, with 6 papers having a normalized score of 0 and only 1 paper with a score of 1. The raw information score ranged from 0 to 7, with no paper receiving a full score of 8 or 9, which would have required a study involving crowdworkers, multiple overlap, and open datasets. Overall, the mean normalized information score was 0.441, with a median of 0.429 and a standard deviation of 0.261. The mean raw score was 3.15, with a median of 3.0 and a standard deviation of 2.05. <<</Overall distributions of information scores>>> <<<Information scores by corpus and publication type>>> Figure FIGREF37 shows two boxplots of normalized information scores that are based on different intersecting categories of publication type and status. The left figure compares scores in four categories: all papers in the Scopus sample (non-ArXived), ArXiv preprints that were never (or are not yet) published, ArXiv postprints of traditional publications, and ArXiv preprints of traditional publications. The category with the lowest median score is papers from the Scopus sample, which is followed closely by ArXiv preprints never published, although preprints never published had a much larger IQR and standard deviation. Postprints of publications had a similar IQR and standard deviation to preprints never published, but a much higher median score. Preprints of publications had a similar median score as postprints, but with a much smaller IQR and standard deviation. The right-hand figure plots publication types for the combined corpora. Conference proceedings and ArXiv preprints never published have somewhat similar medians and IQRs, with journal articles having a higher median of 0.5 and a much narrower IQR. While we hesitate to draw generalizable conclusions, we see these findings as indicating a wide range of factors potentially at play. <<</Information scores by corpus and publication type>>> <<<Information scores by publisher>>> Figure FIGREF39 shows boxplots for normalized information scores by publisher, split between papers sampled from ArXiv and Scopus. The boxplots are ordered by the median score per publisher. Among papers in the ArXiv corpus, those that were pre- or post-prints of papers published by the professional societies Association for Computing Machinery (ACM) or Association for Computational Linguistics (ACL) tied for the highest median scores of 0.667, with similar IQRs. These were followed by Springer and Elsevier, with respective medians of 0.625 and 0.603 and narrower IQRs. ArXiv preprints not published elsewhere had a median score of 0.381 and the highest IQR and standard deviation (0.289), suggesting that this category represents a wide range of papers. The publishers at the lower end of the scale included AAAI, with a median of 0.444 and a narrower IQR, and IEEE, with a median of 0.226 and the second-highest IQR and standard deviation (0.327). Curiously, papers from the Scopus corpus show different results per publisher, with the median scores of all publishers lower in the Scopus corpus than in the ArXiv corpus.
Given the small number of papers in the Scopus sample, we hesitate to draw general conclusions, but suspect this indicates differences between academic authors in general and those who post ArXiv postprints. <<</Information scores by publisher>>> <<</Paper information scores>>> <<<Concluding discussion>>> <<<Implications>>> Based on our findings and experiences in this project, we believe human annotation should be considered a core aspect of the research process, with as much attention, care, and concern placed on the annotation process as is currently placed on performance-based metrics like F1 scores. Our findings — while preliminary, descriptive, and limited in scope — tell us that there is much room for improvement. This paper also takes steps towards more large-scale and systematic analyses of the research landscape, as well as towards standards and best practices for researchers and reviewers. Institutions like journals, funders, and disciplinary societies have a major role to play in solutions to these issues. Most publications have strict length maximums, and many papers we scored highly spent a page or more describing their process. Reviewer expectations are crucial in any discussion of the reporting of methodological details in research publications. It could be that some authors did include such details, but were asked to take them out and add other material instead. Authors have incentives to be less open about the messiness inherent in research, as this may open them up to additional criticism. We see many parallels here to issues around reproducibility and open science, which are increasingly being tackled by universal requirements from journals and funders, rather than relying on individuals to change norms. Such research guidelines are common, including the COREQ standard for qualitative data analysis reporting BIBREF55, a requirement by some journals. A number of proposed standards have been created around datasets for ML BIBREF48, BIBREF49, BIBREF30, BIBREF50, BIBREF51, BIBREF52, BIBREF53, which are often framed as potential ways to mitigate bias and improve transparency and accountability. Several of these are broader proposals around reporting information about ML classifiers and models, which include various aspects beyond our study. In fact, given the recent explosion of proposals for structured disclosure or transparency documents around ML, the Partnership on AI has recently created the “ABOUT ML” working group to arrive at a common format or standard BIBREF56. From our perspective, it is important to frame this issue as one of research validity and integrity: what kind of information about training data is needed for researchers, reviewers, and readers to have confidence in the model or classifier? As we observed in our discussions, we became skeptical about papers that did not adequately describe their human annotation processes. However, human annotation is a broad and diverse category of analytical activity, encompassing a wide range of structured human judgment brought to bear on items, some of it far more straightforward than others. We saw a wide range of papers engaged in various forms of annotation or labeling, even though we bounded our study to papers using data from Twitter. One important distinguishing factor is the difficulty of the task and the level of specific knowledge needed to complete it, which can vary significantly. Another key distinction may be between tasks where only one `right' answer is expected and tasks where there might be many valid answers.
Most importantly, we would not want a straightforward checklist to overdetermine issues of model integrity. A number of papers we read were missing details we thought were crucial for understanding that particular study, but which would not make sense to require of the majority of papers we examined. If a checklist were created, it should not be seen as an end in itself. The classic principle of scientific replicability could be a useful heuristic: does the paper provide enough information about the labeling process such that any reader could (with sufficient resources and access to the same kind of human annotators) conduct a substantively identical human annotation process on their own? We also see a role for technical solutions to help scaffold adherence to these best practices. For example, major qualitative data analysis platforms like MAXQDA or NVivo have built-in support for inter-annotator agreement metrics. Several crowdsourcing and citizen science platforms for data labeling are built to support reconciliation for disagreements. Automated workflow, pipeline, and provenance tracking is an increasingly common topic in ML, although such efforts can focus more on model building and tuning, taking the data as given. We recommend such projects include human annotation as a first-class element, with customization as needed. Finally, our own experience in this human annotation project studying human annotation projects has shown us the costs and benefits of taking an intensive, detailed, collaborative, and multi-stage approach to human annotation. On the one hand, we believe that after going through such a long process, we have not only better data, but also a much better contextual understanding of our object of study. Yet on the other hand, even though struggling over the labels and labeling process is an opportunity, our time- and labor-intensive process did have a direct tradeoff with the number of items we were able to annotate. These issues and tradeoffs are important for ML researchers to discuss when designing their own projects and evaluating those of others. <<</Implications>>> <<<Limitations and future work>>> Our study has limitations, as we only examined a sample of publications in the ML application space. First, we only examined papers performing a classification task on tweets, which is likely not a representative sample of ML application publications. We would expect to find different results in different domain application areas. Papers in medicine and health may have substantially different practices around reporting training data, due to strict reporting standards in clinical trials and related areas. We also generally examined papers that are posted on ArXiV (in addition to 30 papers sampled from Scopus), and ArXiV is likely not a representative sample of academic publications. ArXiV papers are self-submitted and represent a range of publication stages, from drafts not submitted for review, to preprints under peer review, to postprints that have passed peer review. Future work should examine different kinds of stratified random samples to investigate differences between various publishers, publication types, disciplines, topics, and other factors. Our study only examined a subset of the kinds of issues that scholars and practitioners in ML raise when they call for greater transparency and accountability through documentation of datasets and models. We also did not record the exact rates of inter-annotator agreement reported.
In particular, we did not record information about the reconciliation or adjudication process for projects which involve multiple overlap (e.g. majority rule, talking to consensus), which we have personally found to be a crucial and difficult process. Other questions we considered but did not include were: the demographics of the labelers, the number of labelers (total and per item), compensation beyond crowdworkers, whether instructions or screenshot of the labeling interface was included, and whether labelers had the option to choose “unsure” (vs. being forced to choose a label). We leave this for future work, but also found that each additional question made it more difficult for labelers. We also considered but did not have our team give a holistic score indicating their confidence in the paper (e.g. a 1-5 score, like those used in some peer reviewing processes). Our study also has limitations that any human annotation project has, and we gained much empathy around the difficulties of human annotation. Our process is not perfect, and as we have analyzed our data, we have identified cases that make us want to change our schema even further or reclassify boundary cases. In future work, we would also recommend using a more structured and constrained system for annotation to capture the text that annotators use to justify their answers to various questions. ML papers are very long and complex, such that our reconciliation and adjudication process was very time-consuming. Finally, we only have access to what the publications say about the work they did, and not the work itself. Future work could improve on this through other methods, such as ethnographic studies of ML practitioners. <<</Limitations and future work>>> <<</Concluding discussion>>> <<<Appendix>>> The appendix appears following the references section. This work was funded in part by the Gordon & Betty Moore Foundation (Grant GBMF3834) and Alfred P. Sloan Foundation (Grant 2013-10-27), as part of the Moore-Sloan Data Science Environments grant to UC-Berkeley. This work was also supported by UC-Berkeley's Undergraduate Research Apprenticeship Program (URAP). We thank many members of UC-Berkeley's Algorithmic Fairness & Opacity Group (AFOG) for providing invaluable feedback on this project. <<<Dataset/corpus details>>> <<<Keyword labels>>> To capture the topical and disciplinary diversity of papers in our corpus, we assigned one or more keyword labels to each paper, intended to capture topical, domain, disciplinary, and methodological qualities about the study. A paper seeking to classify tweets for spam and phishing in Turkish might include the labels: spam detection; phishing detection; cybersecurity; non-English. A study seeking to classify whether users are tweeting in support or opposition of a protest might have the keywords: user profiling; political science; protests; stance detection; public opinion. As part of the annotation and labeling process, all five annotators gave each paper a short description of what was being classified or predicted. The project lead aggregated these independent descriptions and additionally examined the paper title, abstract, and text. The project lead — who has extensive knowledge and experience of the various disciplines in the social computing space — then conducted a two-stage thematic coding process. A first pass involved open (or free-form) coding for all papers, with the goal of creating a typology of keywords. 
The list of keywords was then refined and consolidated, and a second pass was conducted on all of the items to re-label them as appropriate. Papers could have multiple keywords. The distribution is plotted in Figure FIGREF46, which is broken out by papers that were using original human annotation (e.g. a new labeled training dataset) versus either theoretical papers or papers exclusively re-using a public or external dataset (see section SECREF16). This shows that the most common keywords were user profiling (a broader keyword that includes demographic prediction and classification of users into various categories), public opinion (a broader keyword that includes using Twitter to obtain beliefs or opinions, typically about political or cultural topics), and then the two NLP methodologies of sentiment analysis and topic identification. The keyword “social networks” was used for any paper that either made substantive use of the network structure (e.g. follower graphs) as a feature, or tried to predict it. This figure also shows that our corpus includes papers from a wide range of fields and sub-fields across disciplines, including a number of papers on cybersecurity (such as bot/human detection, phishing detection, and spam detection), public health and epidemiology, hate speech and content moderation, human geography, computer vision, political science, and crisis informatics. Papers using non-English languages were also represented in our corpus. <<</Keyword labels>>> <<<Distribution of paper types in the corpus>>> For each of our 164 papers, we needed to determine various bibliometric factors. For papers in the ArXiv sample, the most important of these is whether the file uploaded to ArXiV is a version of a paper published in a more traditional venue, and if so, whether the ArXiV version is a pre-print submitted prior to peer-review (and has different content than the published version) or if it is a post-print that is identical in content to the published version. Many authors upload a paper to ArXiv when they submit it to a journal, others upload the accepted manuscript that has passed peer-review but has not been formatted and typeset by the publisher, and others upload the exact “camera-ready” version published by the publishers. ArXiV also lets authors upload new versions; some will update each of these versions as they progress through the publishing process, others will only upload a final version, and some only upload the pre-review version and do not update the version in ArXiv to the published version. To determine this, the project lead first manually searched for the exact text of the title in Google Scholar, which consolidates multiple versions of papers with the same title. Papers that only had versions in ArXiv, ArXiv mirrors (such as adsabs), other e-print repositories like ResearchGate, personal websites, or institutional repositories were labeled as “Preprint never published.” For papers that also appeared in any kind of publication venue or publishing library (such as the ACM, IEEE, AAAI, or ACL digital libraries), the project lead recorded the publication venue and publisher, then downloaded the published version. In some workshops and smaller conferences, the “publisher” was a single website just for the event, which lacked ISSNs or DOIs. These were considered to be published as conference or workshop proceedings, if there was a public list of all the papers presented at the event with links to all of the papers.
There was only one case in which there were two or more publications with the exact same title by the same authors, which involved a 2-page archived extended abstract for a poster in an earlier conference proceeding and a full paper in a later conference proceeding. For this case, we chose the full paper in the later venue. The project lead then compared the version uploaded to ArXiv with the published version. As this was done after the labeling process, for papers where the author uploaded multiple versions to ArXiv, we took care to examine the version our labelers examined. If there were any differences in substantive content, the paper was labeled as “Preprint of” and then an appropriate description of the venue, such as “refereed conference proceeding” or “refereed journal article.” If there were no differences in the substantive content of the paper, the paper was labeled as “Postprint of” and then the venue description. Changes in reference style or ordering, page layout, typesetting, the size or color of figures, or moving the same text between footnotes and inline parentheticals were not considered to be substantive content changes. However, even a single character typo fix to the main body text, a single added or removed reference, or a change to a figure's caption constituted a substantive content change. Table TABREF48 shows the distribution of paper types. Because there was only one dissertation in the sample, which also was not using original human annotation, we excluded this category from the aggregate analyses by paper type shown in the results section. <<</Distribution of paper types in the corpus>>> <<<Distribution of publishers in corpus>>> For each paper in the Scopus sample and each paper in the ArXiv corpus that was a pre-print or post-print of a published paper, we also collected information about the journal and publisher. There were 80 different journals, conference proceedings, or workshops represented, with the top venues being the proceedings of SocInfo with 6 papers and the proceedings of ASONAM (Advances in Social Networks Analysis and Mining) with 4 papers. Six venues had 3 publications each, which were all conference proceedings: AAAI ICWSM, ELRA LREC, ACM CIKM, ACM WWW, and IEEE Big Data. The distribution of publishers is presented in table TABREF49, which is broken out by papers in the ArXiv and Scopus corpora. The distribution of papers by year is shown in table TABREF49. <<</Distribution of publishers in corpus>>> <<</Dataset/corpus details>>> <<<Methods and analysis details>>> <<<Inter-annotator agreement>>> In the first round, 5 annotators examined each paper independently, then met to discuss papers with disagreement. Table TABREF53 shows, for each question, what percent of items were given the same label by all annotators (with the number-of-annotators question recoded to the presence or absence of any information). Cases where no annotator answered the question because it was not relevant (e.g. crowdworker compensation for non-crowdworker projects) were not included in this calculation; including them would have increased agreement rates even further, but somewhat disingenuously. We report percent complete agreement among all raters for each question: the percent of items for which all raters gave the same rating. We believe this is a more appropriate and straightforward metric for our project. This is due to the fact that our data does not necessarily meet the particular assumptions of the two other widely used statistical estimators for 3+ raters.
Fleiss's kappa and Krippendorf's alpha are widely used because they take into account the possibilities that raters made decisions based on random chance. However, this requires assuming a uniform prior possibility of such a random distribution, which generally only applies if each possible response by raters is equally likely BIBREF64, BIBREF61. This is the case in balanced datasets, but we observed widely skewed distributions. The rates of proportional agreement were not high enough in the first round for us to be confident, which is likely due to a variety of factors. First, in contrast to most of the papers we examined, our project involved annotators answering 13 different questions for each item, which adds significant complexity to the process. Second, machine learning publications are also some of the more difficult pieces of content to make determinations around, as the definitions and boundaries of various concepts are often relatively undefined and contested across the many academic disciplines. In particular, our lowest rate for the second round was in the external human annotation question, which was added between the first and second round, and appears to still have some ambiguity. We observed substantial increases in agreement between round one and two, although this also is likely confounded by the fact that all five annotators reviewed every item in round one, but only two or three reviewed every item in round two. We should note that as our approach was a human annotation research project studying human annotation research projects, this has given us much empathy for how difficult such a task is. We also acknowledge that our project involves the same kind of “black boxing” we discussed in the literature review, in which a messy process of multiple rounds of human annotations is reduced to a gold standard. However, we do believe in being open about our process, and our data for both rounds of annotation and the final dataset will be available upon publication. The overall question for any study involving structured human annotation is whether the entire annotation, integration, review, and reconciliation process ultimately results in high confidence for the final dataset. The standard approach of human annotation checked by inter-rater reliability treats individual humans as instruments that turn phenomena in the world into structured data. If there is a high degree of inter-rater reliability, then each individual human can generally be trusted to make the same determination. If this is the case, then either reconciliation can easily take place through a majority vote process involving no discussion, or if rates are quite high, then only a subset of items need to be reviewed multiple times. In contrast, what our first round of inter-rater reliability metrics told us was that we were not the same kinds of standardized instruments that turn the same inputs into the same outputs. This does not bode well if we were conducting a single-stage mechanical majority-rule reconciliation process, and certainly would be unwise if we only had a single individual annotate each paper. For such a reason, we did not rely on such easier processes of reconciliation and demanded all papers be annotated by multiple individuals and discussed in a group setting moderated by the lead research scientist. 
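To make the percent complete agreement metric described above concrete, here is a minimal pandas sketch; the column names and the toy annotations are hypothetical, not drawn from our released data.

import pandas as pd

# Hypothetical long-format annotations: one row per (question, paper, rater) judgment.
ratings = pd.DataFrame({
    "question": ["labels_from_human_annotation"] * 6 + ["annotation_source"] * 6,
    "paper_id": [1, 1, 1, 2, 2, 2, 1, 1, 1, 2, 2, 2],
    "rater":    ["a", "b", "c"] * 4,
    "label":    ["yes", "yes", "yes", "yes", "no", "yes",
                 "authors", "authors", "authors", "turk", "turk", "turk"],
})

# For each question: the share of papers on which every rater gave the same label.
per_item = ratings.groupby(["question", "paper_id"])["label"].nunique().eq(1)
percent_complete_agreement = per_item.groupby(level="question").mean()
print(percent_complete_agreement)
# annotation_source               1.0
# labels_from_human_annotation    0.5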
Furthermore, because our approach was largely focused on identifying the presence of various kinds of information within long-form publications, this is a different kind of human judgment than is involved in common tasks using human annotators in social computing, such as social media content moderation, sentiment analysis, or image labeling. Typically, annotated items are much smaller and tend to be evaluated holistically, with disagreements arising from annotators who looked at the same information and made different determinations. In contrast, we reflected that in our reconciliation process, most of the time when annotators disagreed, it was because some annotators had caught a piece of information in the paper that others had not seen. There was a common occurrence wherein one of the annotators would point out a particular paragraph, the other annotators who had initially disagreed would read it, and then remark that they had missed that part and would like to change their answer. That said, there were cases wherein annotators were reading the same sections of the paper and still arriving at different answers, which was often either 1) because the paper was giving ambiguous, incomplete, or implicit information, or 2) because there was a fundamental difference in interpretation of the coding schema, which required updating the schema or the examples in it. For such reasons, we are relatively confident that if, after our two rounds of annotation and the reconciliation process, no individual member of our team has identified the presence of such information, then it is quite likely it is not present in the paper. <<</Inter-annotator agreement>>> <<<Changes to the coding schema>>> Unlike in some approaches to structured content analysis, the coding schema was open to revision if needed during this first round. Some difficult edge cases led to the refinement of the schema approximately half-way through this round of the labeling. The schema was developed on a web-based word processing platform and included examples of difficult edge cases, which were added as they were identified in team meetings. The document detailed each question, a formal definition or explanation of the question, the list of possible permitted labels, and various examples that illustrated difficult or edge cases. The coding schema was modified only in cases where backward compatibility could be maintained with prior labeling work. This typically involved taking a question which had many granular possible labels and consolidating the possible labels into a smaller number of broader labels. For example, the question about whether instructions were given to human annotators originally involved specifying whether the instructions included a formal definition, examples, or both. This was revised to only specify “instructions with formal definition or examples.” Similarly, training for human annotators originally included a more granular list of possible training circumstances, plus “no information”, “other”, and “unsure”. Because of the difficulty of gaining consensus on these different forms of training and the relatively small number of papers that gave any details whatsoever about annotator training (as well as the fact that no papers explicitly stated that no training had occurred), these were reduced to “some training details”, “no information”, and “unsure” (see Table TABREF55). In addition, three questions were added halfway through the first round of the annotation process.
First, a question was added about whether the paper used an external human-annotated dataset or not, which was added to clarify the question about whether original human annotation was used. This was added after a paper was discussed where an external human-annotated dataset was combined with an original human-annotated dataset. Two other questions were added about whether the paper contains a link to the training dataset and whether details about crowdworker compensation were included for projects using crowdworkers. These were both relatively straightforward questions, with relatively few incidences across our dataset. All papers had all questions answered in the second round. <<</Changes to the coding schema>>> <<</Methods and analysis details>>> <<<Software used>>> All computational analysis and scripting was conducted in Python 3.7 BIBREF66, using the following libraries: Pandas dataframes BIBREF60 for data parsing and transformation; SciPy BIBREF58 and NumPy BIBREF65 for quantitative computations; and Matplotlib BIBREF57 and Seaborn BIBREF67 for visualization. Analysis was conducted in Jupyter Notebooks BIBREF59 using the IPython BIBREF62 kernels. Datasets and Jupyter Notebooks for data collection and analysis will be made available upon publication, which are made to run on Binder BIBREF63. <<</Software used>>> <<<Coding schema, examples, and instructions>>> A final version of our coding schema and instructions is below: 1. Original classification task: Is the paper presenting its own original classifier that is trying to predict something? “Original” means a new classifier they made based on new or old data, not anything about the novelty or innovation in the problem area. Machine learning involves any process that does not have explicit or formal rules, where performance increases with more data. Classification involves predicting cases on a defined set of categories. Prediction is required, but not enough. Linear regressions might be included if the regression is used to make a classification, but making predictions for a linear variable is not. Predicting income or age brackets is classification, predicting raw income or age is not. Example: analyzing statistics about the kinds of words people use on social media is not a classification task at all. Example: predicting location is a classification task if it is from work, school, home, or other, but not if it is an infinite/undefined number of locations. Example: This paper (https://ieeexplore.ieee.org/document/7937783) was framed as not an original classification task (more algorithm performance), but they did create an original classifier. This can also be an “unsure” – which is 100% OK to answer. Example: Literature review papers that include classification papers aren't in this, if they didn't actually build a classifier. Example: if there is a supervised classification task that is part of a broader process, this counts, focus on that. If no, skip the following questions. 2. Classification outcome: What is the general type of problem or outcome that the classifier is trying to predict? Keep it short if possible. For example: sentiment, gender, human/bot, hate speech, political affiliation. 3. Labels from human annotation: Is the classifier at least in part trained on labeled data that humans made for the purpose of the classification problem? This includes re-using existing data from human judgments, if it was for the same purpose as the classifier. This does not include clever re-using of metadata. 
Do a quick CTRL-F for “manual” and “annot” if you don't see anything, just to be sure. If not, skip the following questions about human annotation. Example: ISideWith paper on political stances was labels from human annotation, just not original. They took the labels from elsewhere and filled in the gaps (more on that in next Q). Example: Buying followers and seeing who follows (1411.4299.pdf) is not human annotation. Example: Generating (smart) simulated datasets from metadata is not human annotation. Example: 1612.08207.pdf is not annotation when looking up political affiliation of politicians from an external database, even though it is manual work. No judgment is involved. Example: 1709.01895.pdf is labels from human annotation, even though it is semi-automated. They identified hashtags that they believe universally correspond to certain political stances. There is a form of human judgment here, although in that paper, they don't define or explain it. Example: Evaluation using human annotation is not annotation for ML, if the annotation wasn't used to make the classifier. (1710.07394.pdf) Example: If they are using human annotation just to have confidence that a machine-annotated dataset is as good as a human annotated one, but the human annotated dataset isn't actually used to train the classifier, it is *not* using human annotation for ML. (1605.05195.pdf) 4. Used original human annotation: Did the project involve creating new human-labeled data, or was it exclusively re-using an existing dataset? Yes No Unsure Papers may have a mix of new and old human labeled data, or new human labeled data and non-human labeled data. If there is any new human annotation, say yes. New human annotation must be systematic, not filling in the gaps of another dataset. Example: ISideWith paper on political stances is *not* original human annotation, even though they did some manual original research to fill the gap. If the methods section is too vague to not tell, then leave as unsure (example: 1801.06294.pdf) 4.5. Used external human annotation data: Did the project use an already existing dataset from human labeled data? Yes No Unsure If they are using external human annotated data, skip the remaining questions: 5. Original human annotation source: Who were the human annotators? Drop-down options are: Amazon Mechanical Turk (AMT, Turkers) Any other crowdworking platform (Crowdflower / Figure8) The paper's authors Academic experts / professionals in the area No information in the paper Other Unsure For academic experts or professionals in the area, this is independent from the kinds of specific training they received for the task at hand. Think of “the area” broadly, so if it is something about healthcare and nurses were recruited, that would be professionals in the area, even if they don't say anything about the nurses having specific training in the annotation task at hand. If it doesn't easily fit into these or uses multiple sources, add them in the next column. Example: “We develop a mechanism to help three volunteers analyze each collected user manually” -- put other, if that is all they say Example: If it just says “we annotated...” then assume it is only the paper's authors unless otherwise stated. 6. Number of human annotators: Put the number if stated, if not, leave blank. 7. Training for human annotators: Did the annotators receive interactive training for this specific annotation task / research project? Training involves some kind of interactive feedback. 
Simply being given formal instructions or guidelines is not training. Prior professional expertise is not training. Options include: Some kind of training is mentioned No information in the paper Unsure Example: It is not considered training if there was prescreening, unless they were told what they got right and wrong or other debriefing. Not training if they just gave people with high accuracy more work. Example: This paper had a minimum acceptable statement for some training information, with only these lines: “The labeling was done by four volunteers, who were carefully instructed on the definitions in Section 3. The volunteers agree on more than 90% of the labels, and any labeling differences in the remaining accounts are resolved by consensus.” 8. Formal instructions/guidelines: What documents were the annotators given to help them? This document you are in right now is an example of formal instructions with definitions and examples. No instructions beyond question text Instructions include formal definition or examples No information in paper (or not enough to decide) Unsure Example of a paper showing examples: “we asked crowdsourcing workers to assign the `relevant' label if the tweet conveys/reports information useful for crisis response such as a report of injured or dead people, some kind of infrastructure damage, urgent needs of affected people, donations requests or offers, otherwise assign the `non-relevant' label” 9. Prescreening for crowdwork platforms Leave blank if this is not applicable. No prescreening (must state this) Previous platform performance qualification (e.g. AMT Master) Generic skills-based qualification (e.g. AMT Premium) Location qualification Project-specific prescreening: researchers had known ground truth and only invited No information Unsure 10. Multiple annotator overlap: Did the annotators label at least some of the same items? Yes, for all items Yes, for some items No Unsure No information If it says there was overlap but not info to say all or some, put unsure. 11. Reported inter-annotator agreement: Leave blank if there was no overlap. Is a metric of inter-annotator agreement or intercoder reliability reported? It may be called Krippendorf's alpha, Cohen's kappa, F1 score, or other things. Yes No Unsure 12. Reported crowdworker compensation: If using crowdworkers to annotate, did they say how much the annotators were paid for their work? Leave blank if crowdworkers were not used. Yes No Unsure 13. Link to dataset available: Is there a link in the paper to the dataset they used? Yes No Unsure <<</Coding schema, examples, and instructions>>> <<</Appendix>>> <<</Title>>>
{ "references": [ "“coding scheme” is defined,coders are trained with the coding scheme,Training sometimes results in changes to the coding scheme,calculation of “inter-annotator agreement” or “inter-rater reliability.”,there is a process of “reconciliation” for disagreements" ], "type": "extractive" }
1912.08320
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: In what sense is data annotation similar to structured content analysis? Context: <<<Title>>> Garbage In, Garbage Out? Do Machine Learning Application Papers in Social Computing Report Where Human-Labeled Training Data Comes From? <<<Abstract>>> Many machine learning projects for new application areas involve teams of humans who label data for a particular purpose, from hiring crowdworkers to the paper's authors labeling the data themselves. Such a task is quite similar to (or a form of) structured content analysis, which is a longstanding methodology in the social sciences and humanities, with many established best practices. In this paper, we investigate to what extent a sample of machine learning application papers in social computing --- specifically papers from ArXiv and traditional publications performing an ML classification task on Twitter data --- give specific details about whether such best practices were followed. Our team conducted multiple rounds of structured content analysis of each paper, making determinations such as: Does the paper report who the labelers were, what their qualifications were, whether they independently labeled the same items, whether inter-rater reliability metrics were disclosed, what level of training and/or instructions were given to labelers, whether compensation for crowdworkers is disclosed, and if the training data is publicly available. We find a wide divergence in whether such practices were followed and documented. Much of machine learning research and education focuses on what is done once a "gold standard" of training data is available, but we discuss issues around the equally-important aspect of whether such data is reliable in the first place. <<</Abstract>>> <<<Introduction>>> Machine learning (ML) has become widely used in many academic fields, as well as across the private and public sector. Supervised machine learning is particularly prevalent, in which training data is collected for a set of entities with known properties (a “ground truth” or “gold standard”), which is used to create a classifier that will make predictions about new entities of the same type. Supervised ML requires high-quality training data to produce high-quality classifiers. “Garbage In, Garbage Out” is a longstanding aphorism in computing about how flawed input data or instructions will produce flawed outputs. BIBREF0, BIBREF1 However, contemporary ML research and education tends to focus less on obtaining and validating such a training dataset, with such considerations often passed over in major textbooks BIBREF2, BIBREF3, BIBREF4. The predominant focus is typically on what is done with the training data to produce a classifier, with heavy emphasis on mathematical foundations and routine use of clean and tidy “toy” datasets. The process of creating a “gold standard” or “ground truth” dataset is routinely black-boxed. Many papers in ML venues are expected to use a standard, public training dataset, with authors comparing various performance metrics on the same dataset. While such a focus on what is done to a training dataset may be appropriate for theoretically-oriented basic research in ML, this is not the case for supervised ML applications. <<<Study overview>>> All approaches of producing a training dataset involve some form of human judgment, albeit at varying levels of granularity. 
In this paper, we investigate and discuss a wide range of issues and concerns around the curation of human-labeled or human-annotated data, in which one or more individuals make discrete assessments of items. We report from a study in which a team of six labelers systematically examined a corpus of supervised machine learning application papers in social computing, specifically those that classified tweets from Twitter for various purposes. For each paper, we recorded what the paper does or does not state about the training data used to produce the classifier presented in the paper. The bulk of the papers we examined were a sample of preprints or postprints published on ArXiV.org, plus a smaller set of published papers sampled from Scopus. We determined whether such papers involved an original classification task using supervised ML, whether the training data labels were produced from human annotation, and if so, the source of the human-labeled dataset (e.g. the paper's authors, Mechanical Turk, recruited experts, no information given, etc.). For all papers in which an original human-labeled dataset was produced, we then made a series of further determinations, including if definitions and/or examples were given to labelers, if labelers independently labeled the same items, if inter-rater reliability metrics were presented, if compensation details for crowdworkers were reported, if a public link to the dataset was available, and more. As our research project was a human-labeling project studying other human-labeling projects, we took care in our own practices. We only have access to the paper reporting about the study and not the actual study itself, and many papers either do not discuss such details at all or do not discuss them in sufficient detail to make a determination. For example, many papers did note that the study involved the creation of an original human-labeled dataset, but did not specify who labeled it. For some of our items, one of the most common labels we gave was “no information” — which is a concerning issue, given how crucial such information is in understanding the validity of the training dataset and by extension, the validity of the classifier.
They discuss how equipment like a mass spectrometer would typically be implicitly trusted to turn samples into signals. However, when the results were drastically unexpected, it could be a problem with the machine or a fundamental breakthrough. Scientists and technicians would have to “open up the black box,” changing their relationship to the equipment to determine if the problem was with the equipment or the prevailing theory. In this view, black-boxing is a relational concept, not an objective property. It is about the orientation people have to the same social-technical systems they routinely work with and rely upon. “Opening up the black box” is not about digging into technical or internal details per se, but a gestalt shift in whether the output of a system is implicitly taken for granted or open for further investigation. In this view, black-boxing is not inherently problematic. The question is more about who gets to be skeptical about data and who is obligated to suspend disbelief, which are also raised in discussions of open science & reproducibility BIBREF10. Operationalization, measurement, and construct validity have long been crucial and contested topics in the social sciences. Within quantitative sub-fields, it is common to have extensive debates about the best way to define and measure a complex concept (e.g. “intelligence”). From a qualitative and Science & Technology Studies perspective, there is extensive work on the practices and implications of various regimes of measurement BIBREF11, BIBREF12, BIBREF13, BIBREF14. In ML, major operationalization decisions can implicitly occur in data labeling. Yet as Jacobs & Wallach note, “[i]n computer science, it is particularly rare to articulate the distinctions between constructs and their operationalizations” BIBREF15. This is concerning, because “many well-studied harms [in ML] are direct results of a mismatch between the constructs purported to be measured and their operationalizations” BIBREF15. <<</A different kind of “black-boxing” in machine learning>>> <<<Content analysis>>> Creating human-labeled training datasets for machine learning often looks like content analysis, a well-established methodology in the humanities and the social sciences (particularly literature, communication studies, and linguistics), which also has versions used in the life, ecological, and medical sciences. Content analysis has taken many forms over the past century, from more positivist methods that formally establish structural ways of evaluating content to more interpretivist methods that embrace ambiguity and multiple interpretations, such as grounded theory BIBREF16. The intersection of ML and interpretivist approaches is outside of the scope of this article, but it is an emerging area of interest BIBREF17. Today, structured content analysis (also called “closed coding”) is used to turn qualitative or unstructured data of all kinds into structured and/or quantitative data, including media texts, free-form survey responses, interview transcripts, and video recordings. Projects usually involve teams of “coders” (also called “annotators”, “labelers”, or “reviewers”), with human labor required to “code”, “annotate”, or “label” a corpus of items. (Note that we use such terms interchangeably in this paper.) In one textbook, content analysis is described as a “systematic and replicable” BIBREF18 method with several best practices: A “coding scheme” is defined, which is a set of labels, annotations, or codes that items in the corpus may have. 
Schemes include formal definitions or procedures, and often include examples, particularly for borderline cases. Next, coders are trained with the coding scheme, which typically involves interactive feedback. Training sometimes results in changes to the coding scheme, in which the first round becomes a pilot test. Then, annotators independently review at least a portion of the same items throughout the entire process, with a calculation of “inter-annotator agreement” or “inter-rater reliability.” Finally, there is a process of “reconciliation” for disagreements, which is sometimes by majority vote without discussion and other times discussion-based. Structured content analysis is a difficult, complicated, and labor-intensive process, requiring many different forms of expertise on the part of both the coders and those who manage them. Historically, teams of students have often performed such work. With the rise of crowdwork platforms like Amazon Mechanical Turk, crowdworkers are often used for content analysis tasks, which are often similar to other kinds of common crowdworking tasks. Google's reCAPTCHA BIBREF19 is a Turing test in which users perform annotation tasks to prove their humanness — which initially involved transcribing scanned phrases from books, but now involves image labeling for autonomous vehicles. There are major qualitative data analysis software tools that scaffold the content analysis process to varying degrees, such as MAXQDA or NVivo, which have support for inter-annotator agreement metrics. There have also been many new software platforms developed to support more micro-level annotation or labeling at scale, including in citizen science, linguistics, content moderation, and more general-purpose use cases BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25. For example, the Zooniverse BIBREF26 provides a common platform for citizen science projects across different domain application areas, which let volunteers make judgements about items, which are aggregated and reconciled in various ways. <<</Content analysis>>> <<<Meta-research and methods papers in linguistics and crowdsourcing>>> Our paper is also in conversation with various meta-research and standardization efforts in linguistics, crowdsourcing, and other related disciplines. Linguistics and Natural Language Processing have long struggled with issues around standardization and reliability of linguistic tagging. Linguistics researchers have long developed best practices for corpus annotation BIBREF27, including recent work about using crowdworkers BIBREF28. Annotated corpus projects often release guidelines and reflections about their process. For example, the Linguistic Data Consortium's guidelines for annotation of English-language entities (version 6.6) is 72 single-spaced pages BIBREF29. A universal problem of standardization is that there are often too many standards and not enough enforcement. As BIBREF30 notes, 33-81% of linguistics/NLP papers in various venues do not even mention the name of the language being studied (usually English). A meta-research study found only 1 in 9 qualitative papers in Human-Computer Interaction reported inter-rater reliability metrics BIBREF31. 
Another related area is meta-research and methods papers focused on identifying or preventing low-effort responses from crowdworkers — sometimes called “spam” or “random” responses, or alternatively “fraudsters” or “cheaters.” Rates of “self-agreement” are often used to determine whether the same person labels the same item differently at a later stage. One paper BIBREF32 examined 17 crowdsourced datasets for sentiment analysis and found none had self-agreement rates (Krippendorff's alpha) above 0.8, with some lower than 0.5. Another paper recommends the self-agreement strategy in conjunction with asking crowdworkers to give a short explanation of their response, even if the response is never actually examined BIBREF33. One highly-cited paper BIBREF34 proposes a strategy in which crowdworkers are given some items with known labels (a gold/ground truth), and those who answer incorrectly are successively given more items with known labels, with a Bayesian approach to identifying those who are answering randomly. <<</Meta-research and methods papers in linguistics and crowdsourcing>>> <<<The data documentation movements>>> Our paper is also in conversation with two related movements in computationally-supported knowledge production that have surfaced issues around documentation. First, we see connections with the broader open science and reproducibility movements. Open science is focused on a range of strategies, including open access research publications, educational materials, software tools, datasets, and analysis code BIBREF35. The reproducibility movement is deeply linked to the open science movement, focusing on getting researchers to release everything that is necessary for others to perform the same tasks needed to get the exact same results BIBREF36, BIBREF10. This increasingly includes pushing for high standards for releasing protocols, datasets, and analysis code. As more funders and journals require the release of data, the issue of good documentation for data and protocols is becoming more prominent BIBREF37, BIBREF38. There are also intersecting literatures on systems for capturing information in ML data flows and supply chains BIBREF39, BIBREF40, BIBREF41, as well as supporting data cleaning BIBREF42, BIBREF43. These issues have long been discussed in the fields of library and information science, particularly in Research Data Management BIBREF44, BIBREF45, BIBREF46, BIBREF47. A major related movement is in and around the FATML field, with many recent papers proposing training data documentation in the context of ML. Various approaches, analogies, and metaphors have been taken in this area, including “datasheets for datasets” BIBREF48, “model cards” BIBREF49, “data statements” BIBREF30, “nutrition labels” BIBREF50, a “bill of materials” BIBREF51, “data labels” BIBREF52, and “supplier declarations of conformity” BIBREF53. Many go far beyond the concerns we have raised around human-labeled training data, as some are also (or primarily) concerned with documenting other forms of training data, model performance and accuracy, bias, considerations of ethics and potential impacts, and more. We discuss how our findings relate to this broader emerging area in the concluding discussion.
<<</The data documentation movements>>> <<</Literature review and motivation>>> <<<Data and methods>>> <<<Data: machine learning papers performing classification tasks on Twitter data>>> Our goal was to find a corpus of papers that were using original human annotation or labeling to produce a new training dataset for supervised ML. We restricted our corpus to papers whose classifiers were trained on data from Twitter, for various reasons: First, we did attempt to produce a broader corpus of supervised ML application papers, but found our search queries in academic search engines would either 1) be so broad that most papers were non-applied / theoretical papers or papers re-using public pre-labeled datasets, or 2) be so narrow that they excluded many canonical papers in this area, which made us suspect that they were non-representative samples. Restricting our sample to papers using Twitter data has strategic benefits for this kind of initial study. Data from Twitter is of interest to scholars from a variety of disciplines and topical interest areas, in addition to those who have an inherent interest in Twitter as a social media site. As we detail in appendix section SECREF45, the papers represented political science, public health, NLP, sentiment analysis, cybersecurity, content moderation, hate speech, information quality, demographic profiling, and more. We drew the main corpus of ML application papers from ArXiv, the oldest and most established “preprint” repository, originally created for researchers to share papers prior to peer review. Today, ArXiv is widely used to share both drafts of papers that have not (yet) passed peer review (“preprints”) and final versions of papers that have passed peer review (often called “postprints”). Users submit to any number of disciplinary categories and subcategories. Subcategory moderators perform a cursory review to catch spam, blatant hoaxes, and miscategorized papers, but do not review papers for soundness or validity. We sampled all papers published in the Computer Science subcategories of Artificial Intelligence (cs.AI), Machine Learning (cs.LG), Social and Information Networks (cs.SI), Computational Linguistics (cs.CL), Computers and Society (cs.CY), Information Retrieval (cs.IR), and Computer Vision (cs.CV), the Statistics subcategory of Machine Learning (stat.ML), and Social Physics (physics.soc-ph). We filtered for papers in which the title or abstract included at least one of the words “machine learning”, “classif*”, or “supervi*” (case insensitive). We then filtered to papers in which the title or abstract also included “twitter” or “tweet” (case insensitive), which resulted in 494 papers. We used the same query on Elsevier's Scopus database of peer-reviewed articles, selecting 30 randomly sampled articles, most of which were from conference proceedings. One paper from the Scopus sample was corrupted, so only 29 papers were examined. ArXiv is likely not a representative sample of all ML publications. However, we chose it because ArXiv papers are widely accessible to the public, indexed in Google Scholar and other scholarly databases, and are generally considered citeable publications. The fact that many ArXiv papers are not peer-reviewed and that posted papers are not likely a representative sample of ML research is worth considering when reflecting on the generalizability of our findings.
However, the fact that such papers are routinely discussed in both the academic literature and the popular press means that issues with their reporting of training data are just as crucial. Sampling from ArXiv also lets us examine papers at various stages in the peer-review cycle, breaking out preprints not (yet) published, preprints of later published papers, and postprints of published works. The appendix details both corpora, including an analysis of the topics and fields of papers (in SECREF47) and an analysis of publication types (e.g. an early preprint of a journal article, a final postprint of a conference proceeding, a preprint never published) and publishers (in SECREF50 and SECREF47). The final dataset can be found on GitHub and Zenodo. <<</Data: machine learning papers performing classification tasks on Twitter data>>> <<<Labeling team, training, and workflow>>> Our labeling team included one research scientist who led the project (RSG) and undergraduate research assistants, who worked for course credit as part of a university-sponsored research experience program (KY, YY, MD, JQ, RT, and JH). The project began with five students for one semester, four of whom continued on the project for the second semester. A sixth student replaced the student who did not continue. All students had some coursework in computer science and/or data science, with a range of prior experience in machine learning in both classroom and applied settings. Students' majors and minors included Electrical Engineering & Computer Science, Data Science, Statistics, and Linguistics. The labeling workflow was that each week, a set of papers was randomly sampled from the unlabeled set of 494 ArXiv papers in the corpus. In two of those weeks, the 30 papers sampled from Scopus were selected instead. The five students independently reviewed and labeled the same papers each week, each using a separate web-based spreadsheet to record labels. The team leader synthesized labels and identified disagreement. The team met in person each week to discuss cases of disagreement, working to build a consensus about the proper label (as opposed to a purely majority vote). The team leader facilitated these discussions and had the final say when a consensus could not be reached. The first two weeks were a training period, in which the team worked on a different set of papers not included in the final dataset. In these initial weeks, the team learned the coding schema and the reconciliation process, which were further refined. <<</Labeling team, training, and workflow>>> <<<Second round verification and reconciliation>>> After 164 papers were labeled by five annotators, we conducted a second round of verification. This was necessary both because there were some disagreements in labeling and because changes had been made to the coding schema (discussed in appendix SECREF54). All labels for all 164 papers were independently re-examined by at least two of the six team members. Annotators were given a summary of the original labels from the first round and were instructed to review all papers, being mindful of how the schema and instructions had changed. We then aggregated, reconciled, and verified labels in the same way as in the first round. For papers where there was no substantive disagreement on any question between those who re-examined it in the second round, the paper's labels were considered to be final.
Papers with any substantive disagreement on any question were either discussed to consensus in the same manner as in the first round or decided by the team leader. The final schema and instructions are in the appendix, section SECREF57. Finally, we cleaned up issues with labels around implicit or blank values using rule-based scripts. We learned our process involved some ambiguities around whether a subsequent value needed to be filled in. For example, if a paper was not using crowdworkers, then the instructions for our schema were that the question about crowdworker compensation was to remain blank. However, we found we had cases where “reported crowdworker compensation” was “no” for papers that did not use crowdworkers. This would have been concerning had we had a “yes” for such a variable, but we found no such cases. We recoded the questions about pre-screening for crowdwork platforms (implied by the use of crowdworkers in the original human annotation source) and the number of human annotators. We measured interrater reliability using mean percent total agreement, or the proportion of cases where all labelers initially gave the same label. This is a more stringent metric than Fleiss's kappa and Krippendorff's alpha, and our data does not fit the assumptions for those widely-used metrics. IRR rates for round one were relatively low: across all questions, the mean percent total agreement was 66.67%, with the lowest question having a rate of 38.2%. IRR rates for round two were substantially higher: the mean percent total agreement across all questions was 84.80% and the lowest agreement score was 63.4% (for “used external human annotation”, which we discuss later). We are confident about our labeling process, especially because these individual ratings were followed by an expert-adjudicated, discussion-based reconciliation process, rather than simply counting majority votes. We detail more information and reflection about interrater reliability in appendix section SECREF52. <<</Second round verification and reconciliation>>> <<<Raw and normalized information scores>>> We quantified the information about training data in papers, developing a raw and a normalized information score, as different studies demanded different levels of information. For example, our question about whether inter-annotator agreement metrics were reported is only applicable for papers involving multiple annotators. Our questions about whether pre-screening was used for crowdwork platforms and whether crowdworker compensation was reported are only relevant for projects using crowdworkers. However, some kinds of information are relevant to all papers that involve original human annotation: who the annotators are (annotation source), whether annotator training took place, whether formal instructions or definitions were given, the number of annotators involved, whether multiple annotators examined the same items, and whether a link to a publicly-available dataset was provided. For raw scores, papers involving original human annotation received one point each for reporting the six items mentioned above. In addition, projects using crowdworkers received one point for each of the two crowdworker questions they reported information on, and projects using multiple annotators per item received one point if they reported inter-annotator agreement metrics. For the normalized score, the raw score was divided by the highest possible raw score for that paper. We only calculated scores for papers involving original human annotation.
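To make this scoring logic concrete, the following is a minimal sketch of how the raw and normalized information scores can be computed; the column names and the example paper are hypothetical stand-ins for our per-paper labels rather than the exact field names in our released dataset.

```python
import pandas as pd

# Items relevant to every paper that involves original human annotation.
UNIVERSAL_ITEMS = [
    "reported_annotation_source",     # who the annotators were
    "reported_annotator_training",    # whether training took place
    "reported_instructions",          # formal definitions or examples
    "reported_number_of_annotators",  # how many annotators were involved
    "reported_multiple_overlap",      # multiple annotators per item
    "reported_dataset_link",          # link to the labeled dataset
]
CROWDWORK_ITEMS = ["reported_prescreening", "reported_compensation"]
IRR_ITEM = "reported_interannotator_agreement"

def information_scores(row: pd.Series) -> pd.Series:
    """Return the raw score and the normalized score (raw / highest possible for this paper)."""
    raw = sum(bool(row[c]) for c in UNIVERSAL_ITEMS)
    max_possible = len(UNIVERSAL_ITEMS)
    if row["used_crowdworkers"]:
        raw += sum(bool(row[c]) for c in CROWDWORK_ITEMS)
        max_possible += len(CROWDWORK_ITEMS)
    if row["used_multiple_annotators"]:
        raw += bool(row[IRR_ITEM])
        max_possible += 1
    return pd.Series({"raw_score": raw, "normalized_score": raw / max_possible})

# A hypothetical paper: no crowdworkers, multiple annotators, 4 of 7 applicable items reported.
paper = pd.Series({
    "reported_annotation_source": True, "reported_annotator_training": False,
    "reported_instructions": True, "reported_number_of_annotators": True,
    "reported_multiple_overlap": True, "reported_dataset_link": False,
    "reported_prescreening": False, "reported_compensation": False,
    "reported_interannotator_agreement": False,
    "used_crowdworkers": False, "used_multiple_annotators": True,
})
print(information_scores(paper))  # raw_score 4.0, normalized_score ≈ 0.571
```

Normalizing by each paper's own maximum keeps scores comparable across papers for which different questions apply.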
Finally, we conducted an analysis of information scores by various bibliometric factors, which required determining such factors for all papers. For all ArXiv papers, we determined whether the PDF was a pre-print not (yet) published in another venue, a post-print identical in content to a published version, or a pre-print version of a paper published elsewhere with different content. For all Scopus papers and ArXiv post-prints, we also determined the publisher. We detail these in appendix SECREF47. <<</Raw and normalized information scores>>> <<</Data and methods>>> <<<Findings>>> <<<Original classification task>>> The first question was whether the paper was conducting an original classification task using supervised machine learning. Our keyword-based process of generating the corpus included many papers not in this scope. However, defining the boundaries of supervised ML and classification tasks is difficult, particularly for papers that are long, complex, and ambiguously worded. We found that some papers claimed to be using ML, but when we examined the details, these did not fall under our definition. We defined machine learning broadly, using a common working definition in which machine learning includes any automated process that does not exclusively rely on explicit rules, in which the performance of a task increases with additional data. This includes simple linear regressions, for example, and there is much debate about whether and when simple linear regressions are a form of ML. However, as we were also looking for classification tasks, linear regressions were only included if they were used to make a prediction within a set of defined classes. We defined an “original” classifier to mean a classifier the authors made based on new or old data, which excludes the exclusive use of pre-trained classifiers or models. As table TABREF13 shows, the overwhelming majority of papers in our dataset were involved in an original classification task. We placed 5 papers in the “unsure” category — meaning they did not give enough detail for us to make this determination, or that they were complex boundary cases. One of the “unsure” cases clearly used labels from human annotation, and so we answered the subsequent questions, which is why the counts in Table 2 add up to 143 (as well as some other seeming disparities in later questions). <<</Original classification task>>> <<<Labels from human annotation>>> One of the major issues we had to come to a consensus around was whether a paper used labels from human annotation. We observed a wide range of cases in which human judgment was brought to bear on the curation of training data. Our final definition required that “the classifier [was] at least in part trained on labeled data that humans made for the purpose of the classification problem.” We decided on a working definition that excluded many “clever uses of metadata” from this category, but did allow some cases of “self-annotation” from social media, which were typically the most borderline cases on the other side. For example, one case we decided did count as human annotation used specific politically-inflected hashtags to automatically label tweets as for or against a position, for use in stance detection (e.g. #ProChoice versus #ProLife). However, these cases of self-annotation would all be considered external human annotation rather than original human annotation, and so the subsequent questions about the annotation process would not be applicable.
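To make this kind of hashtag-based “self-annotation” concrete, the following is a minimal sketch of the general strategy described above; the hashtags, label names, and tweets are hypothetical illustrations, not the code of any specific paper in our corpus.

```python
from typing import Optional

# Hypothetical mapping from politically-inflected hashtags to stance labels.
HASHTAG_LABELS = {
    "#prochoice": "supports",
    "#prolife": "opposes",
}

def weak_label(tweet_text: str) -> Optional[str]:
    """Assign a stance label if the tweet's known hashtags map to exactly one label; otherwise None."""
    text = tweet_text.lower()
    matched = {label for tag, label in HASHTAG_LABELS.items() if tag in text}
    return matched.pop() if len(matched) == 1 else None  # drop ambiguous or unmatched tweets

tweets = [
    "Marching downtown with friends today #ProChoice",
    "New poll numbers out this morning",
    "Hard to believe this is still being debated #ProLife",
]
print([weak_label(t) for t in tweets])  # ['supports', None, 'opposes']
```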
Another set of borderline cases involved papers where no human annotation was involved in the curation of the training dataset that was used to build the classifier, but human annotation was used for validation purposes. We did not consider these to involve human annotation as we originally defined it in our schema, even though the same issues arise with equal significance for the validity of such research. <<</Labels from human annotation>>> <<<Used original human annotation and external human annotation>>> Our next two questions were about whether papers that used human annotation used original human annotation, which we defined as a process in which the paper's authors obtained new labels from humans for items. It is common in ML research to re-use public datasets, and many of the papers in our corpus did so. We also found 10 papers in which external and original human annotation were combined to create a new training dataset. For these reasons, we modified our schema to ask separate questions for original and external human annotation data, to capture all three cases (using only original, only external, or both). Tables TABREF17 and TABREF17 show the breakdown for both questions. We only answered the subsequent questions about the human annotation process for the papers producing an original human annotated dataset. <<</Used original human annotation and external human annotation>>> <<<Original human annotation source>>> Our next question asked who the annotators were, for the 74 papers that used original human annotation. The possible options were: the paper's authors, Amazon Mechanical Turk, other crowdworking platforms, experts/professionals, other, and no information. We took phrases like “we labeled” (with no other details) to be an implicit declaration that the paper's authors did the labeling. If the paper discussed labelers' qualifications for the task beyond those of an average person, we labeled it as “experts / professionals.” For example, some of our boundary cases involved recruiting students to label sentiment. One study involved labeling tweets with both English and Hindi text and noted that the students were fluent in both languages – which we considered to be in the “experts / professionals” category. Another paper we included in this category recruited students to label tweets with emojis, noting that the recruited students “are knowledgeable with the context of use of emojis.” As table TABREF19 shows, we found a diversity of approaches to the recruitment of human annotators. The plurality of papers involved the paper's authors doing the annotation work themselves. The next highest category was “no information,” which was found in almost a quarter of the papers using original human annotation. The experts / professionals category was far more common than we expected, although we accepted any claim of expertise at face value. Crowdworkers constituted a far smaller proportion than we expected, with Amazon Mechanical Turk and other platforms collectively comprising about 15% of papers. Almost all of the other crowdworking platforms specified were CrowdFlower/FigureEight, with one paper using oDesk. <<</Original human annotation source>>> <<<Number of human annotators>>> Our instructions for the question about the number of human annotators were not precise, and this question had one of the lower levels of inter-rater reliability. If the paper included information about the number of human annotators, the instructions were to record that number, leaving the field blank if no information was given.
Most of the disagreement was from differences around how papers report the number of annotators used. For example, some papers specified the total number of humans who worked on the project annotating items, while others only specified how many annotators were used per item (particularly for those using crowdworkers), and a few reported both. Some involved a closed set of annotators who all examined the same set of items, similar to how our team operated. Other papers involved an open set of annotators, particularly drawn from crowdworking platforms, but had a consistent number of annotators who reviewed each item. Due to these inconsistencies, we computationally re-coded responses into a binary indicator of whether any information about the number of human annotators was present. Both aspects are important to discuss, although it is arguably more important to discuss the number of annotators who reviewed each item. In general, having more annotators review each item provides a more robust way of determining the validity of the entire process, although this also requires calculating inter-annotator agreement metrics. As table TABREF21 shows, a slim majority of papers using original human annotation specified the number of annotators involved in some way. Based on our experiences, papers discussing the number of annotators typically fell into two categories: 1) a small closed team (more often 2-3, sometimes 4-6) that were either the papers' authors or recruited directly by the authors, who tended to perform the same amount of work for the duration of the project; or 2) a medium to large (25-500) open set of annotators, typically but not necessarily recruited through a crowdworking platform, who each performed highly variable amounts of work. <<</Number of human annotators>>> <<<Formal definitions and instructions>>> Our next question was about whether instructions or guidelines with formal definitions or examples were reportedly given to annotators. Formal definitions and concrete examples are both important, as they help annotators understand how the researchers have operationalized the concept in question and determine edge cases. With no or ambiguous definitions/examples, there could be fundamental misunderstandings that are not captured by inter-annotator agreement metrics, if all annotators share the same misunderstanding. We defined two levels: no instructions beyond the text of a question, and instructions with definitions for each label and/or concrete examples. The paper had to describe or refer to the instructions given (or include them in supplemental materials); otherwise, we categorized it as “no information.” Some borderline cases involved authors labeling the dataset themselves, where the paper presented a formal definition but only implied that it informed the labeling – which we counted as providing a formal definition. As table TABREF23 shows, the plurality of papers did not provide enough information to make a determination (it is rare for authors to say they did not do something), but 43.2% provided definitions or examples.
Training typically involved some kind of live session or ongoing meeting in which annotators' progress was evaluated and/or discussed, where annotators had the chance to ask questions or receive feedback on why certain determinations did or did not match definitions or a schema. We used our own team's process as an example of this, and found several papers that used a similar roundtable process, which went into detail about interactions between team members. Cases in which the paper only specified that annotators were given a video or a detailed schema to review were not considered training details, as this was a one-way process and counted as definitions/instructions. The overwhelming majority of papers did not discuss such issues, as table TABREF25 shows, with 15% of papers involving a training session. Because we had a quite strict definition for what constitutes training (versus what many may think of around “trained annotators”), this is expected. We also are not all that concerned with this low number, as there are many tasks that likely do not require specialized training — unlike our project, which required both specific expertise in an area and with our complicated schema. <<</Training for human annotators>>> <<<Pre-screening for crowdwork platforms>>> Crowdwork platforms let employers pre-screen or test for traits, skills, or performance metrics, which significantly narrows the pool of crowdworkers. For example, “project-specific pre-screening” involves offering a sample task with known outcomes: if the crowdworker passed, they would be invited to annotate more items. 5 of the 11 papers using crowdworkers reported using this approach. Platforms also often have location-based screening (e.g. US-only), which 2 papers reported using. Some crowdwork platforms have a qualification for workers who have a positive track record based on total employer ratings (e.g. AMT Master). Platforms also offer generic skills-based tests for certain kinds of work (e.g. CrowdFlower's Skill Tests). These last two qualifications were in our coding schema, but no papers reported using them. <<</Pre-screening for crowdwork platforms>>> <<<Multiple annotator overlap and reporting inter-annotator agreement>>> Our next two questions were about using multiple annotators to review the same items (multiple annotator overlap) and whether inter-annotator agreement metrics were reported. Having multiple independent annotators is typically a foundational best practice in structured content analysis, so that the integrity of the annotations and the schema can be evaluated (although see BIBREF31). For multiple annotator overlap, our definitions required papers state whether all or some of the items were labeled by multiple labelers, otherwise “no information” was recorded. Then, for papers that did multiple annotator overlap, we examined whether any inter-annotator agreement metric was reported. We did find one paper that did not explicitly state that multiple labelers overlapped, but did report inter-annotator agreement metrics. This implicitly means that at least some of the items were labeled by multiple labelers, but for consistency, we keep the “no information” label for this case. We did not record what kind of inter-annotator metric was used, such as Cohen's kappa or Krippendorff's alpha, but many different metrics were used. We also did not record what the exact statistic was, although we did notice a wide variation in what was considered an acceptable or unacceptable score for inter-annotator agreement. 
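For readers unfamiliar with these metrics, the following is a purely illustrative sketch of Cohen's kappa for two annotators, one of the chance-corrected metrics papers in this category commonly reported; the labels are hypothetical, and computing kappa was not part of our own coding process.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    labels = np.union1d(a, b)
    p_observed = np.mean(a == b)
    # Expected agreement if each rater labeled at random from their own marginal distribution.
    p_expected = sum(np.mean(a == label) * np.mean(b == label) for label in labels)
    return (p_observed - p_expected) / (1 - p_expected)

rater_a = ["pos", "neg", "pos", "neu", "pos", "neg"]
rater_b = ["pos", "neg", "neu", "neu", "pos", "pos"]
print(round(cohens_kappa(rater_a, rater_b), 3))  # 0.478
```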
For multiple annotator overlap, table TABREF29 shows that just under half of all papers that involved an original human annotation task did not provide explicit information one way or the other about whether multiple annotators reviewed each item. This includes the one paper that reported inter-annotator agreement metrics but did not specify whether overlap was for all items or some items. Only three papers explicitly stated that there was no overlap among annotators, and so it is quite likely that the papers that did not specify such information did not engage in such a practice. For the 37 papers that did involve some kind of multiple annotator overlap, the overwhelming majority of this subsample (84%) involved multiple annotation of all items, rather than only some items. We also found that for papers that did involve some kind of multiple overlap, the large majority of them (roughly 70%) did report some metric of inter-annotator agreement, as table TABREF29 indicates. <<</Multiple annotator overlap and reporting inter-annotator agreement>>> <<<Reported crowdworker compensation>>> Crowdworking is often used because of the low cost, which can be far below minimum wage in certain countries. Researchers and crowdworkers have been organizing around issues related to the exploitation of crowdworkers in research, advocating ethical practices including fair pay BIBREF54. We examined all papers involving crowdworkers for any indication of compensation, and found that none mentioned compensation. We did find that some papers using other sources of human annotation (e.g. students) discussed compensation for annotators, but this was not in our original schema. <<</Reported crowdworker compensation>>> <<<Link to dataset available>>> Our final question was about whether the paper contained a link to the original human-annotated training dataset. Note that this question was only answered for papers involving some kind of original or novel human annotation, and papers that were exclusively re-using an existing open or public dataset were left blank to avoid double-counting. We did not follow such links or verify that such data was actually available. As table TABREF32 shows, the overwhelming majority of papers did not include such a link, with 8 papers (10.81%) using original human-annotated training datasets linking to such data. Given the time, labor, expertise, and funding involved in creating original human annotated datasets, authors may be hesitant to release such data until they feel they have published as many papers as they can. <<</Link to dataset available>>> <<</Findings>>> <<<Paper information scores>>> The raw and normalized information scores (see section SECREF10 for methodology) were calculated for all papers that involved original human annotation. As previously discussed, our corpora represent a likely non-representative sample of ML research, even if bounded to social computing. Our relatively small sample sizes combined with the number of multiple comparisons would mean that thresholds for statistical significance would need to be quite high. Instead, we present these results to help provide an initial framework and limited results on this issue, intended to help inform a broader and more systematic evaluation of the ML literature. We do observe quite varying ranges and distributions of information scores, which does give evidence to the claim that there is substantial and wide variation in the practices around human annotation, training data curation, and research documentation.
<<<Overall distributions of information scores>>> Figure FIGREF34 shows histograms for raw and normalized information scores, which both suggest a bimodal distribution, with fewer papers at both extremes and around the median. This suggests that there are roughly two populations of researchers, with one centered around raw scores of 1-2 and normalized scores of 0.25 and one centered around raw scores of 5 and normalized scores of 0.7. The normalized information score ranged from 0 to 1, with 6 papers having a normalized score of 0 and only 1 paper with a score of 1. The raw information score ranged from 0 to 7, with no paper receiving a full score of 8 or 9, which would have required a study involving crowdworkers, multiple overlap, and open datasets. Overall, the mean normalized information score was 0.441, with a median of 0.429 and a standard deviation of 0.261. The mean raw score was 3.15, with a median of 3.0 and a standard deviation of 2.05. <<</Overall distributions of information scores>>> <<<Information scores by corpus and publication type>>> Figure FIGREF37 shows two boxplots of normalized information scores that are based on different intersecting categories of publication type and status. The left figure compares scores in four categories: all papers in the Scopus sample (non-ArXived), ArXiv preprints never (or not yet) published, ArXiv postprints of traditional publications, and ArXiv preprints of traditional publications. The category with the lowest median score is the Scopus sample, which is followed closely by ArXiv preprints never published, although preprints never published had a much larger IQR and standard deviation. Postprints of publications had a similar IQR and standard deviation to preprints never published, but a much higher median score. Preprints of publications had a similar median score to postprints, but with a much smaller IQR and standard deviation. The righthand figure plots publication types for the combined corpora. Conference proceedings and ArXiv preprints never published have somewhat similar medians and IQRs, with journal articles having a higher median of 0.5 and a much narrower IQR. While we hesitate to draw generalizable conclusions, we see these findings as indicating a wide range of factors potentially at play. <<</Information scores by corpus and publication type>>> <<<Information scores by publisher>>> Figure FIGREF39 shows boxplots for normalized information scores by publisher, split between papers sampled from ArXiv and Scopus. The boxplots are ordered by the median score per publisher. Among papers in the ArXiv corpus, those that were pre- or post-prints of papers published by the professional societies Association for Computing Machinery (ACM) or Association for Computational Linguistics (ACL) tied for the highest median scores of 0.667, with similar IQRs. These were followed by Springer and Elsevier, with respective medians of 0.625 and 0.603 and narrower IQRs. ArXiv preprints not published elsewhere had a median score of 0.381 and the highest IQR and standard deviation (0.289), suggesting that this category represents a wide range of papers. The publishers at the lower end of the scale included AAAI, with a median of 0.444 and a narrower IQR, and IEEE, with a median of 0.226 and the second-highest IQR and standard deviation (0.327). Curiously, papers from the Scopus corpus show different results per publisher, with the median scores of all publishers lower in the Scopus corpus than in the ArXiv corpus.
Given the small number of papers in the Scopus sample, we hesitate to draw general conclusions, but suspect this indicates differences between academic authors in general and those who post postprints to ArXiv. <<</Information scores by publisher>>> <<</Paper information scores>>> <<<Concluding discussion>>> <<<Implications>>> Based on our findings and experiences in this project, we believe human annotation should be considered a core aspect of the research process, with as much attention, care, and concern placed on the annotation process as is currently placed on performance-based metrics like F1 scores. Our findings — while preliminary, descriptive, and limited in scope — tell us that there is much room for improvement. This paper also takes steps towards more large-scale and systematic analyses of the research landscape, as well as towards standards and best practices for researchers and reviewers. Institutions like journals, funders, and disciplinary societies have a major role to play in solutions to these issues. Most publications have strict length maximums, and many papers we scored highly spent a page or more describing their process. Reviewer expectations are crucial in any discussion of the reporting of methodological details in research publications. It could be that some authors did include such details, but were asked to take them out and add other material instead. Authors have incentives to be less open about the messiness inherent in research, as this may open them up to additional criticism. We see many parallels here to issues around reproducibility and open science, which are increasingly being tackled by universal requirements from journals and funders, rather than relying on individuals to change norms. Such research guidelines are common, including the COREQ standard for qualitative data analysis reporting BIBREF55, a requirement by some journals. A number of proposed standards have been created around datasets for ML BIBREF48, BIBREF49, BIBREF30, BIBREF50, BIBREF51, BIBREF52, BIBREF53, which are often framed as potential ways to mitigate bias and improve transparency and accountability. Several of these are broader proposals around reporting information about ML classifiers and models, which include various aspects beyond our study. In fact, given the recent explosion of proposals for structured disclosure or transparency documents around ML, the Partnership on AI has recently created the “ABOUT ML” working group to arrive at a common format or standard BIBREF56. From our perspective, it is important to frame this issue as one of research validity and integrity: what kind of information about training data is needed for researchers, reviewers, and readers to have confidence in the model or classifier? As we observed in our discussions, we became skeptical about papers that did not adequately describe their human annotation processes. However, human annotation is a broad and diverse category of analytical activity, encompassing a wide range of structured human judgment brought to bear on items, some far more straightforward and others far more complex. We saw a wide range of papers engaged in various forms of annotation or labeling, even though we bounded our study to papers using data from Twitter. One important distinguishing factor is the difficulty of the task and the level of specific knowledge needed to complete it, which can vary significantly. Another key distinction may be between when there is expected to be only one `right' answer and when there might be many valid answers.
Most importantly, we would not want a straightforward checklist to overdetermine issues of model integrity. A number of papers we read were missing details we thought were crucial for understanding that particular study, but which would not make sense as requirements for the majority of papers we examined. If a checklist were created, it should not be seen as an end in itself. The classic principle of scientific replicability could be a useful heuristic: does the paper provide enough information about the labeling process such that any reader could (with sufficient resources and access to the same kind of human annotators) conduct a substantively identical human annotation process on their own? We also see a role for technical solutions to help scaffold adherence to these best practices. For example, major qualitative data analysis platforms like MAXQDA or NVivo have built-in support for inter-annotator agreement metrics. Several crowdsourcing and citizen science platforms for data labeling are built to support reconciliation for disagreements. Automated workflow, pipeline, and provenance tracking is a growing topic in ML, although such systems often focus more on model building and tuning, taking data as given. We recommend such projects include human annotation as a first-class element, with customization as needed. Finally, our own experience in this human annotation project studying human annotation projects has shown us the costs and benefits of taking an intensive, detailed, collaborative, and multi-stage approach to human annotation. On one hand, we believe that after going through such a long process, we have not only better data, but also a much better contextual understanding of our object of study. Yet on the other hand, even though struggling over the labels and the labeling process is an opportunity, our time- and labor-intensive process did have a direct tradeoff with the number of items we were able to annotate. These issues and tradeoffs are important for ML researchers to discuss when designing their own projects and evaluating others. <<</Implications>>> <<<Limitations and future work>>> Our study has limitations, as we only examined a sample of publications in the ML application space. First, we only examined papers performing a classification task on tweets, which is likely not a representative sample of ML application publications. We would expect to find different results in different domain application areas. Papers in medicine and health may have substantially different practices around reporting training data, due to strict reporting standards in clinical trials and related areas. We also generally examined papers posted on ArXiv (in addition to 30 papers sampled from Scopus), and ArXiv is likely not a representative sample of academic publications. ArXiv papers are self-submitted and represent a range of publication stages, from drafts not yet submitted for review, to preprints under peer review, to postprints that have passed peer review. Future work should examine different kinds of stratified random samples to examine differences between various publishers, publication types, disciplines, topics, and other factors. Our study only examined a subset of the kinds of issues that scholars and practitioners in ML are examining when they call for greater transparency and accountability through documentation of datasets and models. We did not record information about what exactly the reported rates of inter-annotator agreement were.
In particular, we did not record information about the reconciliation or adjudication process for projects which involve multiple overlap (e.g. majority rule, talking to consensus), which we have personally found to be a crucial and difficult process. Other questions we considered but did not include were: the demographics of the labelers, the number of labelers (total and per item), compensation beyond crowdworkers, whether instructions or screenshot of the labeling interface was included, and whether labelers had the option to choose “unsure” (vs. being forced to choose a label). We leave this for future work, but also found that each additional question made it more difficult for labelers. We also considered but did not have our team give a holistic score indicating their confidence in the paper (e.g. a 1-5 score, like those used in some peer reviewing processes). Our study also has limitations that any human annotation project has, and we gained much empathy around the difficulties of human annotation. Our process is not perfect, and as we have analyzed our data, we have identified cases that make us want to change our schema even further or reclassify boundary cases. In future work, we would also recommend using a more structured and constrained system for annotation to capture the text that annotators use to justify their answers to various questions. ML papers are very long and complex, such that our reconciliation and adjudication process was very time-consuming. Finally, we only have access to what the publications say about the work they did, and not the work itself. Future work could improve on this through other methods, such as ethnographic studies of ML practitioners. <<</Limitations and future work>>> <<</Concluding discussion>>> <<<Appendix>>> The appendix appears following the references section. This work was funded in part by the Gordon & Betty Moore Foundation (Grant GBMF3834) and Alfred P. Sloan Foundation (Grant 2013-10-27), as part of the Moore-Sloan Data Science Environments grant to UC-Berkeley. This work was also supported by UC-Berkeley's Undergraduate Research Apprenticeship Program (URAP). We thank many members of UC-Berkeley's Algorithmic Fairness & Opacity Group (AFOG) for providing invaluable feedback on this project. <<<Dataset/corpus details>>> <<<Keyword labels>>> To capture the topical and disciplinary diversity of papers in our corpus, we assigned one or more keyword labels to each paper, intended to capture topical, domain, disciplinary, and methodological qualities about the study. A paper seeking to classify tweets for spam and phishing in Turkish might include the labels: spam detection; phishing detection; cybersecurity; non-English. A study seeking to classify whether users are tweeting in support or opposition of a protest might have the keywords: user profiling; political science; protests; stance detection; public opinion. As part of the annotation and labeling process, all five annotators gave each paper a short description of what was being classified or predicted. The project lead aggregated these independent descriptions and additionally examined the paper title, abstract, and text. The project lead — who has extensive knowledge and experience of the various disciplines in the social computing space — then conducted a two-stage thematic coding process. A first pass involved open (or free-form) coding for all papers, with the goal of creating a typology of keywords. 
The list of keywords was then refined and consolidated, and a second pass was conducted on all of the items to re-label them as appropriate. Papers could have multiple keywords. The distribution is plotted in Figure FIGREF46, which is broken out by papers that were using original human annotation (e.g. a new labeled training dataset) versus either theoretical papers or papers exclusively re-using a public or external dataset (see section SECREF16). This shows that the most common keywords were user profiling (a broader keyword that includes demographic prediction and classification of users into various categories), public opinion (a broader keyword that includes using Twitter to obtain beliefs or opinions, typically about political or cultural topics), and then the two NLP methodologies of sentiment analysis and topic identification. The keyword "social networks" was used for any paper that either made substantive use of the network structure (e.g. follower graphs) as a feature, or tried to predict it. This figure also shows that our corpus includes papers from a wide range of fields and sub-fields across disciplines, including a number of papers on cybersecurity (including bot/human detection, phishing detection, and spam detection), public health and epidemiology, hate speech and content moderation, human geography, computer vision, political science, and crisis informatics. Papers using non-English languages were also represented in our corpus. <<</Keyword labels>>> <<<Distribution of paper types in the corpus>>> For each of our 164 papers, we needed to determine various bibliometric factors. For papers in the ArXiv sample, the most important of these is whether the file uploaded to ArXiv is a version of a paper published in a more traditional venue, and if so, whether the ArXiv version is a pre-print submitted prior to peer review (and has different content than the published version) or a post-print that is identical in content to the published version. Many authors upload a paper to ArXiv when they submit it to a journal, others upload the accepted manuscript that has passed peer review but has not been formatted and typeset by the publisher, and others upload the exact “camera-ready” version published by the publishers. ArXiv also lets authors upload new versions; some will update each of these versions as they progress through the publishing process, others will only upload a final version, and some only upload the pre-review version and do not update the version in ArXiv to the published version. To make this determination, the project lead first manually searched for the exact text of the title in Google Scholar, which consolidates multiple versions of papers with the same title. Papers that only had versions in ArXiv, ArXiv mirrors (such as adsabs), other e-print repositories like ResearchGate, personal websites, or institutional repositories were labeled as “Preprint never published.” For papers that also appeared in any kind of publication venue or publishing library (such as the ACM, IEEE, AAAI, or ACL digital libraries), the project lead recorded the publication venue and publisher, then downloaded the published version. In some workshops and smaller conferences, the “publisher” was a single website just for the event, which lacked ISSNs or DOIs. These were considered to be published as conference or workshop proceedings, if there was a public list of all the papers presented at the event with links to all of the papers.
There was only one case in which there were two or more publications with the exact same title by the same authors, which involved a 2-page archived extended abstract for a poster in an earlier conference proceeding and a full paper in a later conference proceeding. For this case, we chose the full paper in the later venue. The project lead then compared the version uploaded to ArXiv with the published version. As this was done after the labeling process, for papers where the author uploaded multiple versions to ArXiv, we took care to examine the version our labelers examined. If there were any differences in substantive content, the paper was labeled as “Preprint of” and then an appropriate description of the venue, such as “refereed conference proceeding” or “refereed journal article.” If there were no differences in the substantive content of the paper, the paper was labeled as “Postprint of” and then the venue description. Changes in reference style or ordering, page layout, typesetting, the size or color of figures, or moving the same text between footnotes and inline parentheticals were not considered to be substantive content changes. However, even a single character typo fix to the main body text, a single added or removed reference, or a change to a figure's caption constituted a substantive content change. Table TABREF48 shows the distribution of paper types. Because there was only one dissertation in the sample, which also was not using original human annotation, we excluded this category from the aggregate analyses by paper type shown in the results section. <<</Distribution of paper types in the corpus>>> <<<Distribution of publishers in corpus>>> For each paper in the Scopus sample and each paper in the ArXiv corpus that was a pre-print or post-print of a published paper, we also collected information about the journal and publisher. There were 80 different journals, conference proceedings, or workshops represented, with the top venues being the proceedings of SocInfo with 6 papers and the proceedings of ASONAM (Advances in Social Networks Analysis and Mining) with 4 papers. Six venues had 3 publications each, which were all conference proceedings: AAAI ICWSM, ELRA LREC, ACM CIKM, ACM WWW, and IEEE Big Data. The distribution of publishers is presented in table TABREF49, which is broken out by papers in the ArXiv and Scopus corpora. The distribution of papers by year is shown in table TABREF49. <<</Distribution of publishers in corpus>>> <<</Dataset/corpus details>>> <<<Methods and analysis details>>> <<<Inter-annotator agreement>>> In the first round, 5 annotators examined each paper independently, then met to discuss papers with disagreement. Table TABREF53 shows, for each question, what percent of items were given the same label by all annotators (with the number of annotators recoded to the presence or absence of any information). Cases where no annotator answered the question because it was not relevant (e.g. crowdworker compensation for non-crowdworker projects) were not included in this calculation; including them would have increased the rates even more, but doing so would be somewhat disingenuous. We report percent complete agreement among all raters for each question: the percent of items that were given the same rating by all raters. We believe this is a more appropriate and straightforward metric for our project. This is due to the fact that our data does not necessarily meet the particular assumptions of two other widely used statistical estimators for 3+ raters.
Fleiss's kappa and Krippendorff's alpha are widely used because they take into account the possibility that raters made decisions based on random chance. However, this requires assuming a uniform prior probability of such a random distribution, which generally only applies if each possible response by raters is equally likely BIBREF64, BIBREF61. This is the case in balanced datasets, but we observed widely skewed distributions. The rates of proportional agreement were not high enough in the first round for us to be confident, which is likely due to a variety of factors. First, in contrast to most of the papers we examined, our project involved annotators answering 13 different questions for each item, which adds significant complexity to the process. Second, machine learning publications are also some of the more difficult pieces of content to make determinations around, as the definitions and boundaries of various concepts are often relatively undefined and contested across the many academic disciplines. In particular, our lowest rate for the second round was in the external human annotation question, which was added between the first and second round, and appears to still have some ambiguity. We observed substantial increases in agreement between rounds one and two, although this is also likely confounded by the fact that all five annotators reviewed every item in round one, but only two or three reviewed every item in round two. We should note that because our approach was a human annotation research project studying human annotation research projects, it has given us much empathy for how difficult such a task is. We also acknowledge that our project involves the same kind of “black boxing” we discussed in the literature review, in which a messy process of multiple rounds of human annotations is reduced to a gold standard. However, we do believe in being open about our process, and our data for both rounds of annotation and the final dataset will be available upon publication. The overall question for any study involving structured human annotation is whether the entire annotation, integration, review, and reconciliation process ultimately results in high confidence for the final dataset. The standard approach of human annotation checked by inter-rater reliability treats individual humans as instruments that turn phenomena in the world into structured data. If there is a high degree of inter-rater reliability, then each individual human can generally be trusted to make the same determination. If this is the case, then either reconciliation can easily take place through a majority vote process involving no discussion, or, if rates are quite high, then only a subset of items need to be reviewed multiple times. In contrast, what our first round of inter-rater reliability metrics told us was that we were not the same kinds of standardized instruments that turn the same inputs into the same outputs. This would not bode well if we were conducting a single-stage mechanical majority-rule reconciliation process, and it certainly would be unwise if we only had a single individual annotate each paper. For this reason, we did not rely on such easier processes of reconciliation and demanded that all papers be annotated by multiple individuals and discussed in a group setting moderated by the lead research scientist.
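To make the metric concrete, the following is a minimal sketch of the percent complete agreement calculation described above; the dataframe is a hypothetical stand-in for our per-annotator labels, not our actual data.

```python
import pandas as pd

# One row per (paper, annotator) pair; one column per schema question (hypothetical values).
labels = pd.DataFrame({
    "paper_id":  [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "annotator": ["A", "B", "C"] * 3,
    "original_task":    ["yes", "yes", "yes", "no", "no", "yes", "yes", "yes", "yes"],
    "human_annotation": ["yes", "unsure", "yes", "no", "no", "no", "unsure", "yes", "yes"],
})

def percent_total_agreement(df: pd.DataFrame, question: str) -> float:
    """Proportion of papers for which all annotators gave the same label on `question`."""
    per_paper = df.groupby("paper_id")[question].nunique()
    return float((per_paper == 1).mean())

questions = ["original_task", "human_annotation"]
rates = {q: percent_total_agreement(labels, q) for q in questions}
print(rates)                             # ≈ {'original_task': 0.67, 'human_annotation': 0.33}
print(sum(rates.values()) / len(rates))  # mean percent total agreement across questions
```

Because this statistic counts an item as agreement only when every annotator matches, it is more stringent than pairwise or majority-based measures.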
Furthermore, because our approach was largely focused on identifying the presence of various kinds of information within long-form publications, this is a different kind of human judgment than is involved in common tasks using human annotators in social computing, such as social media content moderation, sentiment analysis, or image labeling. Typically, annotated items are much smaller and tend to be evaluated holistically, with disagreements arising from annotators who looked at the same information and made different determinations. In contrast, we reflected that in our reconciliation process, most of the time when annotators disagreed, it was because some annotators had caught a piece of information in the paper that others had not seen. There was a common occurrence wherein one of the annotators would point out a particular paragraph, the other annotators who had initially disagreed would read it, and then remark that they had missed that part and would like to change their answer. That said, there were cases wherein annotators were reading the same sections of the paper and still arriving at different answers, which was often either 1) because the paper was giving ambiguous, incomplete, or implicit information, or 2) because there was a fundamental difference in interpretation of the coding schema, which required updating the schema or the examples in it. For such reasons, we are relatively confident that if, after our two rounds of annotation and the reconciliation process, no individual member of our team has identified the presence of such information, then it is quite likely it is not present in the paper. <<</Inter-annotator agreement>>> <<<Changes to the coding schema>>> Unlike in some approaches to structured content analysis, the coding schema was open to revision if needed during this first round. Some difficult edge cases led to the refinement of the schema approximately half-way through this round of the labeling. The schema was developed on a web-based word processing platform, which also included examples of difficult edge cases, which were added as they were identified in team meetings. The document detailed each question, a formal definition or explanation of the question, the list of possible permitted labels, and various examples that illustrated difficult or edge cases. The coding schema was modified only in cases where backward compatibility could be maintained with prior labeling work. This typically involved taking a question which had many granular possible labels and consolidating the possible labels into a smaller number of broader labels. For example, the question about whether instructions were given to human annotators originally involved specifying whether the instructions included a formal definition, examples, or both. This was revised to only specify “instructions with formal definition or examples.” Similarly, training for human annotators originally included a more granular list of possible training circumstances, plus “no information”, “other”, and “unsure”. Because of the difficulty of gaining consensus on these different forms of training and the relatively small number of papers that gave any details whatsoever about annotator training (as well as the fact that no papers explicitly stated that no training had occurred), these were reduced to “some training details”, “no information”, and “unsure” (see Table TABREF55). In addition, three questions were added halfway through the first round of the annotation process.
First, a question was added about whether the paper used an external human-annotated dataset, in order to clarify the question about whether original human annotation was used. This was added after a paper was discussed in which an external human-annotated dataset was combined with an original human-annotated dataset. Two other questions were added about whether the paper contains a link to the training dataset and whether details about crowdworker compensation were included for projects using crowdworkers. These were both relatively straightforward questions, with few occurrences across our dataset. All papers had all questions answered in the second round. <<</Changes to the coding schema>>> <<</Methods and analysis details>>> <<<Software used>>> All computational analysis and scripting were conducted in Python 3.7 BIBREF66, using the following libraries: Pandas dataframes BIBREF60 for data parsing and transformation; SciPy BIBREF58 and NumPy BIBREF65 for quantitative computations; and Matplotlib BIBREF57 and Seaborn BIBREF67 for visualization. Analysis was conducted in Jupyter Notebooks BIBREF59 using the IPython BIBREF62 kernel. Datasets and Jupyter Notebooks for data collection and analysis, which are made to run on Binder BIBREF63, will be made available upon publication. <<</Software used>>> <<<Coding schema, examples, and instructions>>> A final version of our coding schema and instructions is below: 1. Original classification task: Is the paper presenting its own original classifier that is trying to predict something? “Original” means a new classifier they made based on new or old data, not anything about the novelty or innovation in the problem area. Machine learning involves any process that does not have explicit or formal rules, where performance increases with more data. Classification involves predicting cases on a defined set of categories. Prediction is required, but not sufficient on its own. Linear regressions might be included if the regression is used to make a classification, but making predictions for a continuous variable is not. Predicting income or age brackets is classification, predicting raw income or age is not. Example: analyzing statistics about the kinds of words people use on social media is not a classification task at all. Example: predicting location is a classification task if it is from work, school, home, or other, but not if it is an infinite/undefined number of locations. Example: This paper (https://ieeexplore.ieee.org/document/7937783) was framed as not an original classification task (more algorithm performance), but they did create an original classifier. This can also be an “unsure” – which is 100% OK to answer. Example: Literature review papers that include classification papers are not included here if they didn't actually build a classifier. Example: if there is a supervised classification task that is part of a broader process, this counts; focus on that. If no, skip the following questions. 2. Classification outcome: What is the general type of problem or outcome that the classifier is trying to predict? Keep it short if possible. For example: sentiment, gender, human/bot, hate speech, political affiliation. 3. Labels from human annotation: Is the classifier at least in part trained on labeled data that humans made for the purpose of the classification problem? This includes re-using existing data from human judgments, if it was for the same purpose as the classifier. This does not include clever re-using of metadata.
Do a quick CTRL-F for “manual” and “annot” if you don't see anything, just to be sure. If not, skip the following questions about human annotation. Example: the ISideWith paper on political stances used labels from human annotation, just not original ones. They took the labels from elsewhere and filled in the gaps (more on that in the next Q). Example: Buying followers and seeing who follows (1411.4299.pdf) is not human annotation. Example: Generating (smart) simulated datasets from metadata is not human annotation. Example: 1612.08207.pdf is not annotation; looking up the political affiliation of politicians in an external database is manual work, but no judgment is involved. Example: 1709.01895.pdf does use labels from human annotation, even though it is semi-automated. They identified hashtags that they believe universally correspond to certain political stances. There is a form of human judgment here, although in that paper, they don't define or explain it. Example: Evaluation using human annotation is not annotation for ML if the annotation wasn't used to make the classifier. (1710.07394.pdf) Example: If they are using human annotation just to have confidence that a machine-annotated dataset is as good as a human-annotated one, but the human-annotated dataset isn't actually used to train the classifier, it is *not* using human annotation for ML. (1605.05195.pdf) 4. Used original human annotation: Did the project involve creating new human-labeled data, or was it exclusively re-using an existing dataset? Yes No Unsure Papers may have a mix of new and old human-labeled data, or new human-labeled data and non-human-labeled data. If there is any new human annotation, say yes. New human annotation must be systematic, not filling in the gaps of another dataset. Example: the ISideWith paper on political stances is *not* original human annotation, even though they did some manual original research to fill the gap. If the methods section is too vague to tell, then leave as unsure (example: 1801.06294.pdf). 4.5. Used external human annotation data: Did the project use an already existing human-labeled dataset? Yes No Unsure If they are using external human annotated data, skip the remaining questions: 5. Original human annotation source: Who were the human annotators? Drop-down options are: Amazon Mechanical Turk (AMT, Turkers) Any other crowdworking platform (Crowdflower / Figure8) The paper's authors Academic experts / professionals in the area No information in the paper Other Unsure For academic experts or professionals in the area, this is independent of the kinds of specific training they received for the task at hand. Think of “the area” broadly, so if it is something about healthcare and nurses were recruited, that would be professionals in the area, even if they don't say anything about the nurses having specific training in the annotation task at hand. If it doesn't easily fit into these or uses multiple sources, add them in the next column. Example: “We develop a mechanism to help three volunteers analyze each collected user manually” -- put “other” if that is all they say. Example: If it just says “we annotated...” then assume it is only the paper's authors unless otherwise stated. 6. Number of human annotators: Put the number if stated; if not, leave blank. 7. Training for human annotators: Did the annotators receive interactive training for this specific annotation task / research project? Training involves some kind of interactive feedback.
Simply being given formal instructions or guidelines is not training. Prior professional expertise is not training. Options include: Some kind of training is mentioned No information in the paper Unsure Example: It is not considered training if there was prescreening, unless they were told what they got right and wrong or received other debriefing. Not training if they just gave people with high accuracy more work. Example: This paper had a minimum acceptable statement for some training information, with only these lines: “The labeling was done by four volunteers, who were carefully instructed on the definitions in Section 3. The volunteers agree on more than 90% of the labels, and any labeling differences in the remaining accounts are resolved by consensus.” 8. Formal instructions/guidelines: What documents were the annotators given to help them? This document you are in right now is an example of formal instructions with definitions and examples. No instructions beyond question text Instructions include formal definition or examples No information in paper (or not enough to decide) Unsure Example of a paper showing examples: “we asked crowdsourcing workers to assign the `relevant' label if the tweet conveys/reports information useful for crisis response such as a report of injured or dead people, some kind of infrastructure damage, urgent needs of affected people, donations requests or offers, otherwise assign the `non-relevant' label” 9. Prescreening for crowdwork platforms: Leave blank if this is not applicable. No prescreening (must state this) Previous platform performance qualification (e.g. AMT Master) Generic skills-based qualification (e.g. AMT Premium) Location qualification Project-specific prescreening: researchers had known ground truth and only invited No information Unsure 10. Multiple annotator overlap: Did the annotators label at least some of the same items? Yes, for all items Yes, for some items No Unsure No information If it says there was overlap but not enough info to say all or some, put unsure. 11. Reported inter-annotator agreement: Leave blank if there was no overlap. Is a metric of inter-annotator agreement or intercoder reliability reported? It may be called Krippendorff's alpha, Cohen's kappa, F1 score, or other things. Yes No Unsure 12. Reported crowdworker compensation: If using crowdworkers to annotate, did they say how much the annotators were paid for their work? Leave blank if crowdworkers were not used. Yes No Unsure 13. Link to dataset available: Is there a link in the paper to the dataset they used? Yes No Unsure <<</Coding schema, examples, and instructions>>> <<</Appendix>>> <<</Title>>>
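For readers who want to reuse the codebook above in their own tooling, a condensed, machine-readable sketch of a few of its questions is shown below; the field names, the truncation to a handful of questions, and the validation helper are our own illustrative choices rather than a released artifact.

# Hypothetical structured rendering of part of the codebook above.
CODING_SCHEMA = [
    {"id": 1, "question": "Original classification task", "labels": ["yes", "no", "unsure"]},
    {"id": 4, "question": "Used original human annotation", "labels": ["yes", "no", "unsure"]},
    {"id": 7, "question": "Training for human annotators",
     "labels": ["some training details", "no information", "unsure"]},
    {"id": 10, "question": "Multiple annotator overlap",
     "labels": ["yes, for all items", "yes, for some items", "no", "unsure", "no information"]},
    {"id": 13, "question": "Link to dataset available", "labels": ["yes", "no", "unsure"]},
]

def validate(answers):
    # Check that every recorded answer uses one of the permitted labels for its question.
    permitted = {q["id"]: set(q["labels"]) for q in CODING_SCHEMA}
    return all(ans in permitted[qid] for qid, ans in answers.items())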
{ "references": [ "structured content analysis (also called “closed coding”) is used to turn qualitative or unstructured data of all kinds into structured and/or quantitative data,Projects usually involve teams of “coders” (also called “annotators”, “labelers”, or “reviewers”), with human labor required to “code”, “annotate”, or “label” a corpus of items." ], "type": "extractive" }
2002.05058
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: How they add human prefference annotation to fine-tuning process? Context: <<<Title>>> Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models <<<Abstract>>> Automated evaluation of open domain natural language generation (NLG) models remains a challenge and widely used metrics such as BLEU and Perplexity can be misleading in some cases. In our paper, we propose to evaluate natural language generation models by learning to compare a pair of generated sentences by fine-tuning BERT, which has been shown to have good natural language understanding ability. We also propose to evaluate the model-level quality of NLG models with sample-level comparison results with skill rating system. While able to be trained in a fully self-supervised fashion, our model can be further fine-tuned with a little amount of human preference annotation to better imitate human judgment. In addition to evaluating trained models, we propose to apply our model as a performance indicator during training for better hyperparameter tuning and early-stopping. We evaluate our approach on both story generation and chit-chat dialogue response generation. Experimental results show that our model correlates better with human preference compared with previous automated evaluation approaches. Training with the proposed metric yields better performance in human evaluation, which further demonstrates the effectiveness of the proposed model. <<</Abstract>>> <<<Introduction>>> Recent advances in sequence-to-sequence learning architecture BIBREF0 and the transformer model BIBREF1 have raised increasing interest in natural language generation (NLG) tasks, including story generation BIBREF2, open-domain dialogue response generation BIBREF3 and abstractive summarization BIBREF4. Despite the fast advances of models, there remains a huge gap in the evaluation of NLG models and it is hard to measure the progress due to the lack of good evaluation metrics. While perplexity is a good measure of how well a model fits some data, it does not measure performance at the desired task. Word overlap based metrics such as BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7 capture quality better than the perplexity and are useful in translation and summarization. However, they still correlate poorly with human evaluation BIBREF8 in open domain text generation tasks including story generation and dialogue response generation because two equally good generated texts may have no n-gram overlap. Human evaluation is generally considered to be the gold standard evaluation, however, it does not scale well as it is generally expensive and time-consuming to conduct human evaluation. Apart from measuring relative progress between different models, automated evaluation metrics also play an important role in the training stage of NLG models. It is a common practice to tune the model hyperparameter, detect convergence, perform early-stopping, and select the best checkpoints based on the model's performance on automated evaluation metrics. While acceptable for tasks where automated metrics correlate well with human evaluations, including machine translation and text summarization, this can be erroneous and result in sub-optimal training in open domain NLG tasks because available automated metrics correlate poorly with human evaluation, as demonstrated in the experimental section of this paper. 
To tackle the aforementioned problems, in this paper, we propose a self-supervised approach with transfer learning to learn to compare the quality of two samples as an automated comparative Turing test. The motivation of our approach is that we can better assess the quality of generated samples or trained NLG model by comparing it with another one. Our model is a text pair classification model trained to compare the task-specific quality of two samples, which is then used to evaluate the quality of trained NLG models. As human preference annotation is generally expensive, our model is designed to be able to perform self-supervised training using only generated samples and gold reference samples without human preference annotation. When human preference annotation is available, our model can be further fine-tuned to better imitate human judgment. To evaluate the model-level quality of NLG models based on pairwise comparison in sample-level, we adopt the skill rating system similar to ELO BIBREF9 and Trueskill BIBREF10, which is a method for assigning a numerical skill to players in a player-vs-player game, given a win-loss record of games played. In our scenario, the players are NLG models to be evaluated and a higher rating indicates a better model. The skill rating system makes it possible to evaluate all n models without needing to run $n^{2}$ matches and is able to take into account the amount of new information each comparison provides. The contribution of this paper is threefold: We propose a “learning to compare” model to better assess the quality of text generated by NLG models based on pairwise comparison. Our model is able to transfer natural language understanding knowledge from BERT by fine-tuning in a self-supervised way while also able to be further fine-tuned with human preference annotation. Once trained, our model is able to perform inter-model comparison without the need for gold references, which greatly enlarges the potentially available test set and reduces the potential risk of overfitting the reference in the test set. We propose to use the skill rating system to perform model-level evaluation based on the sample-level evaluation information provided by our pairwise comparison model. The skill rating system is more efficient and accurate than several baseline approaches. We conduct experiments on both story generation task and open domain dialogue response generation task. Experimental results show that our approach correlates better with human evaluation on both datasets. Moreover, we show that using automated metrics such as BLEU to perform hyperparameter tuning and early-stopping results in sub-optimal model and our approach helps alleviate this problem. <<</Introduction>>> <<<Related Work>>> Evaluation of NLG models has been a long-standing open problem. While human evaluation may be ideal, it is generally expensive to conduct and does not scale well. Various automated evaluation approaches are proposed to facilitate the development and evaluation of NLG models. We summarize these evaluation approaches below. Text Overlap Metrics, including BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7, are the most popular metrics employed in the evaluation of NLG models. They evaluate generated text by comparing the similarity between the generated text and human written references. 
While this works well in tasks where the diversity of acceptable output is limited, such as machine translation and text summarization, text overlap metrics have been shown to have weak or no correlation with human judgments in open domain natural language generation tasks BIBREF8. There are two major drawbacks of these metrics. First, text overlap metrics cannot distinguish minor variations in a generated text that may make the sentence no longer grammatically correct or semantically meaningful. Second, there may exist multiple equally good outputs for a given input, and comparing against one gold reference can be erroneous. Perplexity is commonly used to evaluate the quality of a language model. It measures how well a probability distribution predicts a sample and captures the degree of uncertainty in the model. It is used to evaluate models in open-domain NLG tasks such as story generation BIBREF2 and open domain dialogue systems. However, “how likely a sentence is to be generated by a given model” may not be comparable across different models and does not indicate the quality of the sentence. Parameterized Metrics learn a parameterized model to evaluate generated text. Adversarial evaluation models BIBREF11, BIBREF12 assign a score based on how easy it is to distinguish the dialogue model's responses from human responses. However, training such a discriminator can be difficult, as the binary classification task can easily be over-fitted, which leads to poor generalizability BIBREF11. Moreover, the information we get from the discriminator accuracy is limited, as we cannot compare the quality of two generated sentences when they both succeed or both fail in fooling the discriminator. A recent study shows that the discriminator accuracy does not correlate well with human preference BIBREF13. The Automated Dialogue Evaluation Model (ADEM) BIBREF14 is another parameterized metric proposed for dialogue system evaluation. It learns to score a generated dialogue response based on the context and the human written reference. However, it requires human-annotated scores for generated sentences. It is generally hard to design appropriate questions for crowdsourcing these scores, which makes the annotation very expensive to obtain, and the inter-annotator agreement score is only moderate BIBREF14. As a result, the training data is limited and noisy, which makes the scoring task even harder. It can be problematic when comparing models of similar quality. In addition, this model is designed only for evaluating dialogue response generation models. More recently, embedding-similarity-based metrics such as HUSE BIBREF15 and BERTScore BIBREF16 have been proposed. These metrics alleviate the first problem of text overlap metrics by modeling semantic similarity better. However, they cannot address the response diversity problem and thus are only suitable for machine translation and text summarization. Another line of research on NLG evaluation is to unify human evaluation with statistical evaluation BIBREF17, BIBREF18. These works are orthogonal to our paper as they mainly focus on the combination of human evaluation and automated evaluation. Another work related to our research is the skill rating system, which evaluates players by observing a record of wins and losses of multiple players and inferring the value of a latent, unobserved skill variable for each player that explains the records of wins and losses. It was first adopted to evaluate GANs BIBREF19 for synthesizing images BIBREF20 by pitting generators against discriminators.
Their approach is an approximation of skill rating as the original skill rating system requires game played by two symmetric players, while in their system the players are asymmetric. Their approach does not include the “tie” option, thus can not distinguish cases where the discriminator is confident enough or not. More importantly, their approach is only designed for evaluating GANs while our approach can be used for any NLG models. <<</Related Work>>> <<<Methodology>>> We present the proposed approach in this section. We begin with the sample-level pairwise comparison model. Afterwards, we introduce how to adopt the skill rating system to perform model-level evaluation of NLG models. <<<Learning to Compare>>> The proposed comparative evaluator is a text pair relation classifier which is trained to compare the task-specific quality of two samples. The motivation of evaluating one sample by comparing it with another sample is drawn from the insight learned when conducting human evaluation for NLG models. We find that when comparing two NLG models, instead of asking human annotator to assign scores separately for samples generated by different models, which resembles the case in the ADEM model BIBREF14, it is much easier for human annotators to directly compare one sample generated by the first model against another sample from the second model pairwisely and compute the win/loss rate. The comparison-based evaluation may also be more accurate, which is demonstrated by a higher inter-annotator agreement score in our preliminary experiments. The comparative evaluator learns a total order of sample quality by classifying whether the first compared sample is better ($>$), worse ($<$), or indistinguishable ($\approx $) in terms of its quality compared with another sample. In this way, our model encodes the inductive bias that sometimes two samples can have similar quality and it is hard and unreliable to choose the better sample. By giving our model the third “tie” option, it can explicitly express its uncertainty and choose its preference only when being confident enough. This design choice is motivated by the practice that adding the “tie” option for human annotator when performing pairwise human evaluation can often make the comparison easier and more reliable. For a text sample, our comparative evaluator can provide a more informative assessment than the binary discriminative evaluator because one evaluated sample can receive multiple feedback from the comparative evaluator by comparing it with multiple other samples. In contrast, the discriminative evaluator can only evaluate a sample once, which is more likely to suffer from the inherent uncertainty of the evaluator. We propose two approaches to construct pairwise training examples for training a comparative evaluator. The first approach generates strong supervision examples. It is based on the intuition that human written references are generally of better quality than machine-generated samples, and it is hard to tell the difference in term of the quality when two compared samples are both generated by machines or human written reference. We denote $S_{+}$$/$$S_{-}$ as the set of real/generated samples. For a real sample $s_{+}\in S_{+}$ and a generated sample $s_{-}\in S_{-}$, we assign the label “better ($>$)” to the pair ($s_+$, $s_-$) and “worse ($<$)” to ($s_-$, $s_+$). 
For two samples both drawn from the real data or both from the generated samples, we assign the label “indistinguishable ($\approx $)” to such pairs (i.e., ($s_+^i$, $s_+^j$) and ($s_-^i$, $s_-^j$)). For a training set with $n$ real samples and $n$ generated samples, we can construct $\binom{2n}{2}$ pairwise training examples for the comparative evaluator, allowing us to enhance the generalization ability and introduce more informative learning signals than with the standard real/fake binary discriminative evaluator. Note that when constructing a sample pair ($s_-^i$, $s_-^j$), $s_-^i$ and $s_-^j$ are sampled from the same checkpoint of the same model in order to ensure that they are of similar quality in expectation. One problem of the strong supervision approach is that it always labels two generated samples as indistinguishable. However, during inference, the input of the comparative evaluator is a pair of two generated samples from different models. It thus requires the model to capture the quality relation in the training examples and generalize well enough to successfully compare two samples rather than simply classifying them as indistinguishable, which would provide relatively little information for evaluating NLG models. To tackle this problem, we propose an approach that constructs weak supervision examples for training the comparative evaluator. The intuition of our weak supervision approach is that during training, the quality of the NLG model keeps improving until convergence. Given two checkpoints of the same model, we can thus consider samples generated by the more recent checkpoint to be of better quality than samples generated by the earlier version of the same model. This approach is considered weak supervision because the model quality may not improve monotonically, and sometimes it is hard to decide whether the model has begun to overfit the training data and its quality has started to decline. To minimize the noise introduced by these problems, we empirically set the minimal margin between two selected checkpoints to be $10\%$ of the total training iterations and do not select two “almost converged” checkpoints. The construction of training samples is similar to that of the first approach. In addition, motivated by the fact that the larger the quality margin between the two selected versions of the model, the easier it is for the comparative evaluator to learn to distinguish the training examples, we propose to use curriculum learning BIBREF21 by feeding the comparative evaluator sample pairs with a larger margin (i.e., more training iterations between the two selected checkpoints) during the initial training stage and gradually decreasing the margin to let the model learn to capture smaller quality differences. Moreover, when human preference annotation is available, we can additionally fine-tune the comparative evaluator with human annotations. The comparative evaluator is trained with a maximum likelihood estimation (MLE) objective, as described in eq DISPLAY_FORM6, where $\mathcal {X}$ is the set of pairwise training examples constructed as described above, $Q(x_1, x_2) \in \lbrace >,<,\approx \rbrace $ is the true label for the pair ($x_1$, $x_2$), and $D_\phi ^q(x_1, x_2)$ is the probability of the comparative discriminator's prediction being $q$ ($q \in \lbrace >,<,\approx \rbrace $) for the pair ($x_1$, $x_2$).
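As a concrete illustration of the pair construction described above, the following sketch builds strong-supervision training pairs from reference and generated samples; the function and label names are our own, the weak-supervision and curriculum variants are omitted, and the actual BERT fine-tuning code is not shown.

import itertools
import random

BETTER, WORSE, TIE = ">", "<", "~"  # our own encoding of the three relation classes

def strong_supervision_pairs(real_samples, generated_samples):
    # Encodes the inductive bias that references beat generated samples, while
    # pairs drawn from the same source are labeled indistinguishable.
    pairs = []
    for r, g in itertools.product(real_samples, generated_samples):
        pairs.append((r, g, BETTER))
        pairs.append((g, r, WORSE))
    for a, b in itertools.combinations(real_samples, 2):
        pairs.append((a, b, TIE))
    # Assumes the generated samples all come from the same checkpoint of the same
    # model, so that same-source pairs are of similar quality in expectation.
    for a, b in itertools.combinations(generated_samples, 2):
        pairs.append((a, b, TIE))
    random.shuffle(pairs)
    return pairs

# The comparative evaluator is then a three-way sentence-pair classifier trained
# with standard cross-entropy (maximum likelihood) over these relation labels.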
As comparing the quality of generated text requires good natural language understanding ability and our comparative evaluator is formulated as a sentence pair classification model, we propose to fine-tune BERT BIBREF22 as the comparative evaluator; the architecture of the resulting comparative evaluator is illustrated in Figure 1. Note that the compared samples A and B are based on the same context, which ensures that they are comparable. <<</Learning to Compare>>> <<<Skill Rating>>> In player-vs-player games such as chess or tennis, skill rating systems such as Elo BIBREF9 or Glicko2 BIBREF23 evaluate players by observing a record of wins and losses of multiple players and inferring the value of a latent, unobserved skill variable for each player that explains the records of wins and losses. We adopt the skill rating system for model-level evaluation of NLG models. By taking the trained comparative evaluator as the “playground” and NLG models as “players”, a “player-vs-player” game is played by sampling one output from each NLG model conditioned on the same input, and the game outcome is decided by the comparative evaluator. Following previous work BIBREF20, in our paper, we use the Glicko2 system BIBREF23. The employed system can be summarized as follows: each player's skill rating is represented as a Gaussian distribution, with a mean and standard deviation representing the current state of the evidence about their “true” skill rating. As we evaluate frozen snapshots of NLG models, we disabled an irrelevant feature of Glicko2 that increases uncertainty about a human player’s skill when they have not participated in a match for some time. Another difference is that conventional skill rating systems do not support the “tie” option, which is important for the system to be stable and reliable in our case because the evaluator is not perfect. To incorporate this feature, we follow the intuition that a player's skill rating should be increased when it draws with another player with a higher skill rating, and vice versa. We use a simple rule that, when a player draws with a higher-rated (lower-rated) player, increases (decreases) its skill rating by a ratio (e.g., 0.1) of the change it would receive for a win (loss). In our experiments, skill rating is performed by randomly sampling two compared models, simulating a “game” between the two selected models by drawing one sample from each model and comparing them with the comparative evaluator, and then updating the skill ratings of the selected models according to the outcome. This procedure is performed iteratively until convergence, which is defined as the ordering of the compared models' skill ratings remaining unchanged after each model has been selected at least 50 times. While the sampling procedure could be optimized by Bayesian optimization BIBREF24 or multi-armed bandit algorithms BIBREF25, we choose to keep the method as simple as possible and use random sampling. <<</Skill Rating>>> <<</Methodology>>> <<<Experiments>>> We set up experiments in order to answer the following research questions: RQ1: Can the comparative evaluator correlate better with human preference at the sample level than previous automated metrics when evaluating open domain NLG models? RQ2: Can the comparative evaluator correlate better with human preference at the model level, so that our approach can better measure progress on open domain NLG?
RQ3: Given that existing approaches fail to correlate well with human preference, whether and to what extent does this problem affect the quality of the final NLG model when performing hyperparameter search and early-stopping? RQ4: If the previous problem exists, can the proposed comparative evaluator reduce this problem? <<<Experimental Settings>>> <<<Datasets>>> We evaluate the effectiveness of the proposed approach on two open domain natural language generation tasks: story generation and open domain dialogue response generation. For story generation, we use the WritingPrompts dataset released by BIBREF2. The WritingPrompts dataset is a large dataset of 303,358 human-generated stories paired with writing prompts from an online forum. NLG models are trained by taking writing prompts as input and generating the whole story. The average length of prompts is 28.4 words and the average length of stories is 734.5 words, which makes human evaluation very expensive; better automated metrics are thus critical. For the open domain dialogue response generation task, we use the Dailydialog dataset BIBREF26, which consists of dialogues that resemble daily conversations across multiple topics. It comprises 13k dialogues with an average of 7.9 turns per dialog. <<</Datasets>>> <<<Compared Models and Metrics>>> As our objective is to evaluate the evaluators rather than to compare state-of-the-art models, we choose three representative sequence-to-sequence architectures: the LSTM BIBREF27 seq2seq, Convolutional seq2seq BIBREF28, and transformer BIBREF1 models. We compare models with different architectures, hyperparameter choices, and early-stopping criteria with different automated metrics, as well as human evaluation. Regarding the evaluation metric (and the criteria for hyperparameter choice and early-stopping), we compare the proposed approach with the discriminative evaluator, BLEU score (average of 2-, 3-, 4-grams), perplexity, and ADEM. When evaluating generated stories, we cut off the story at the nearest sentence for stories longer than 250 words. The proposed comparative evaluator is employed for choosing hyperparameters by performing skill rating among all models trained with different hyperparameter choices. For early-stopping, as incrementally performing skill rating is computationally expensive, we propose to perform n (e.g., 1000) pairwise comparisons between the samples generated by the latest checkpoint and the previous k (e.g., 2) checkpoints, and to stop training when the winning rate of the latest checkpoint stays smaller than its losing rate for 5 iterations, as sketched below. <<</Compared Models and Metrics>>> <<<Detail of Parameterized Evaluators>>> The proposed comparative evaluator is trained by fine-tuning BERT-large as a sentence-pair classifier. To ensure fair evaluation, we also train the discriminative evaluator by fine-tuning BERT. For ADEM, we adopt its original implementation, as its architecture is relatively complicated. In addition, we perform an ablation study by evaluating variants of the comparative evaluator trained without strong supervision examples, without weak supervision examples, without fine-tuning on human preference annotations, and without transferring from BERT. <<</Detail of Parameterized Evaluators>>> <<<Human Evaluation Procedure>>> As human evaluation is expensive, sample-level evaluation is performed jointly with model-level evaluation, which is also used for evaluating the ability of different metrics to perform hyperparameter search and early-stopping.
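The early-stopping rule described above can be sketched as follows; the comparator interface and the default values (mirroring the n, k, and patience figures quoted above) are assumptions for illustration rather than the released implementation.

import random

def win_loss_rates(compare, latest_samples, previous_samples, n=1000):
    # `compare` is the trained comparative evaluator and returns ">", "<" or "~"
    # for a pair of samples; `previous_samples` pools the previous k checkpoints.
    wins = losses = 0
    for _ in range(n):
        outcome = compare(random.choice(latest_samples), random.choice(previous_samples))
        wins += outcome == ">"
        losses += outcome == "<"
    return wins / n, losses / n

class EarlyStopper:
    # Stop once the latest checkpoint keeps losing to the recent checkpoints.
    def __init__(self, patience=5):
        self.patience, self.bad_iters = patience, 0

    def update(self, win_rate, loss_rate):
        self.bad_iters = self.bad_iters + 1 if win_rate < loss_rate else 0
        return self.bad_iters >= self.patience  # True means stop training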
Concretely, we perform 10 groups of evaluations for hyperparameter selection and early-stopping with the five compared automated metrics. In each evaluation, each of the five compared metrics is used to select the best hyperparameter combination or early-stopping checkpoint with the other variants fixed. We choose to perform score-based human evaluation for four reasons: 1) the ADEM baseline requires human-annotated scores as training examples, 2) we can construct up to $\binom{2n}{2}$ training examples for our comparative evaluator with $n$ human-annotated scores, 3) score-based human evaluation facilitates the evaluation of correlation scores, and 4) as none of the other metrics perform pairwise comparison, using pairwise human evaluation would likely be biased toward our approach. We sample 20 generated samples from each model (out of 5) of the 20 evaluation groups. We invite 20 human annotators, all graduate students with good English language proficiency, to score these samples. Each annotator scores one sample from each model, such that each model is uniformly evaluated. Scores range from 1 to 5; a higher score indicates better overall sample quality. Based on the experimental results of BIBREF14, we do not ask annotators to provide specific scores for fluency or informativeness. To test the inter-annotator agreement, we additionally ask them to evaluate another 40 generated samples, of which 20 samples are scored from 1 to 5 and another 20 are evaluated based on pairwise comparison with 4 other generated samples and scored from 1 to 5 based on how many times they are considered to be better than a reference sample. We get an inter-annotator agreement score of $\kappa =0.53$ for direct scoring and $\kappa =0.76$ for pairwise comparison, which validates our intuition that evaluation by comparison may be more accurate. These additional human annotations are used as training data for ADEM and the comparative evaluator. <<</Human Evaluation Procedure>>> <<</Experimental Settings>>> <<<Experimental Designs & Results>>> <<<RQ1: Sample-Level Correlation>>> To test the correlation of different automated metrics with human preference, we employ the different metrics to score the collected 2000 samples and calculate their Pearson and Spearman correlation with human scores. For the comparative evaluator, as the evaluation is performed pairwise and no absolute score is available, we use two different approaches to get an absolute score for each sample: 1) we sample 50 common references from machine-generated samples for each task and compare each sample with all references using the comparative evaluator; a sample gets 3 points when it beats a reference, 1 point when it draws with the reference, and 0 points when it loses; 2) we adopt the skill rating system by regarding each sample as an NLG model which always outputs that same sample and use each sample's skill rating as its score. To keep the computational budget roughly the same, we fix the number of plays in skill rating to 10,000. The experimental results are summarized in Table 1. We can see that the proposed comparative evaluator correlates far better with human judgment than BLEU and perplexity. When compared with recently proposed parameterized metrics, including the adversarial evaluator and ADEM, our model consistently outperforms them by a large margin, which demonstrates that our comparison-based evaluation metric is able to evaluate sample quality more accurately.
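For intuition about the skill-rating procedure referred to above, here is a deliberately simplified Elo-style update loop that includes the tie rule from the methodology section; the paper actually uses Glicko2, so the K-factor, the 0.1 tie ratio, and the function names here are schematic assumptions rather than the exact system.

import random

def elo_delta(rating, opp_rating, score, k=32):
    # Standard Elo change for one player; score is 1.0 for a win, 0.0 for a loss.
    expected = 1.0 / (1.0 + 10 ** ((opp_rating - rating) / 400.0))
    return k * (score - expected)

def update_ratings(ratings, a, b, outcome, k=32, tie_ratio=0.1):
    # outcome is ">", "<" or "~" from the comparative evaluator's view of (a, b).
    if outcome in (">", "<"):
        s_a = 1.0 if outcome == ">" else 0.0
        d_a = elo_delta(ratings[a], ratings[b], s_a, k)
        d_b = elo_delta(ratings[b], ratings[a], 1.0 - s_a, k)
        ratings[a] += d_a
        ratings[b] += d_b
    else:
        # Tie rule: the lower-rated player gains (and the higher-rated player loses)
        # a small fraction of the update it would receive for a win (resp. loss).
        low, high = (a, b) if ratings[a] <= ratings[b] else (b, a)
        d_low = elo_delta(ratings[low], ratings[high], 1.0, k)
        d_high = elo_delta(ratings[high], ratings[low], 0.0, k)
        ratings[low] += tie_ratio * d_low
        ratings[high] += tie_ratio * d_high

# Tournament loop: repeatedly pick two players at random, let the evaluator judge one
# sample from each on a shared input, update ratings, and stop once the ranking is
# stable with each player selected at least 50 times.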
In addition, we find that evaluating generated samples by comparing them with a set of randomly selected samples or by using sample-level skill rating performs almost equally well. This is not surprising, as the main advantage of the employed skill rating is its ability to handle the inherent variance of players (i.e., NLG models), and this variance does not exist when we regard a sample as a model which always generates the same sample. <<</RQ1: Sample-Level Correlation>>> <<<RQ2: Model-Level Correlation>>> As for model-level evaluation, we employ the average score of the evaluated 100 samples as each model's score and calculate their correlation with human scores. For the comparative evaluator, we compare three different approaches to get a model-level score: 1) we calculate the average reference-based score (method 1 for sample-level comparison) of each sample as the model-level score, 2) we calculate the average skill rating of each sample obtained in the experiments of RQ1 as the model-level score, and 3) we use the proposed skill rating system to get a model-level skill rating for each compared model. Results are shown in Table 2. We can see that the proposed comparative evaluator with skill rating significantly outperforms all compared baselines, including the comparative evaluator with averaged sample-level scores. This demonstrates the effectiveness of the skill rating system for performing model-level comparison based on pairwise sample-level evaluation. In addition, the poor correlation of conventional evaluation metrics, including BLEU and perplexity, with human judgment demonstrates the necessity of better automated evaluation metrics for open domain NLG evaluation. <<</RQ2: Model-Level Correlation>>> <<<RQ3&4: Automated Metrics for Model Training>>> We further investigate the impact of imperfect metrics on training NLG models. As described in the human evaluation procedure, we perform 10 runs to test the reliability of each metric when used to perform hyperparameter tuning and early-stopping, respectively. In each run, we select the best hyperparameter combination or early-stopping checkpoint based on each of the five compared metrics. Human evaluation is then employed to identify the best choice. We evaluate the performance of each metric by how many times (out of 10) it succeeds in selecting the best hyperparameter combination or early-stopping checkpoint (out of 4) and by the average human-annotated score of its selected models. The results are shown in Table 3. We can see that conventional automated metrics perform poorly and lead to sub-optimal results when performing hyperparameter search and selecting the best-performing checkpoints. Switching the evaluation metric from BLEU or perplexity to the proposed comparative evaluator yields non-negligible improvements without changing the model architecture or training objective. While previous work on NLG evaluation mostly focuses on the evaluation stage and does not explore the influence of imperfect metrics during model training, our experiments demonstrate the existence of this problem and show that the proposed method can, to some extent, alleviate it. <<</RQ3&4: Automated Metrics for Model Training>>> <<</Experimental Designs & Results>>> <<<Qualitative Analysis>>> We present several comparison examples from the Dailydialog dataset for qualitative analysis of the proposed comparative evaluator. From the first example, we can see that the comparative evaluator is capable of identifying that generic and dull responses (e.g., “I don't know”) should be considered of worse quality.
The second example suggests that our approach handles the diversity of possible responses well, as it regards both the positive response and the negative response as valid. These examples may provide some insight into why the proposed metric correlates better with human preference. <<</Qualitative Analysis>>> <<<Ablation Study>>> To better understand the proposed comparative evaluator and analyze the relative importance of its different components, we conduct an ablation study with several variants of the proposed model: w/o comparison: Evaluating generated samples without comparison, which degenerates to the adversarial evaluation method. w/o strong supervision: Training the comparative evaluator without “strong supervision”, which models the inductive bias that human written reference samples are generally of better quality than those generated by NLG models. w/o weak supervision: Training without “weak supervision”, which models the inductive bias that the quality of NLG models generally improves during training. w/o human preference annotation: Training without human-annotated preference data (i.e., only with strong and weak supervision). w/o tie option: The variant of the comparative evaluator where the model must select the better sample rather than being able to admit its uncertainty. w/o BERT: The variant where the model is trained from scratch instead of fine-tuning BERT. We evaluate these model variants on the Dailydialog dataset. Results are presented in Table 5. We can see that comparison-based evaluation is very effective, as our model correlates much better with human preference than the adversarial evaluator. The tie option is also very important, as it prevents the comparative evaluator from making uncertain decisions and models the inductive bias that samples generated by the same model are generally of similar quality, which may help our model generalize better. As for the different sources of training examples, we find that human preference annotation is the most important, which is not surprising. In addition, we find that the proposed weak supervision also helps, but is of smaller relative importance compared with strong supervision. This may be due to the fact that examples constructed by the weak supervision approach may contain a lot of noise. We can also see that our model correlates well with human preference even without training on human preference annotation; this is very important in practice, as human annotations are not always available. Finally, we find transferring the natural language understanding ability from BERT to be very important for the final performance. <<</Ablation Study>>> <<</Experiments>>> <<<Discussion and Conclusion>>> In this paper, we present a novel comparison-based parameterized automated evaluation metric for evaluating open domain NLG models. The proposed model is based on the intuition that we can better evaluate the quality of a sample by comparing it with other samples. Our approach allows the model to admit its uncertainty via the “tie” option. We adopt the skill rating system to perform model-level evaluation based on sample-level pairwise comparison. By transferring pretrained natural language understanding knowledge from BERT and fine-tuning with strong and weak supervision examples as well as human preference annotations, our model correlates better with human judgment than the other compared metrics.
In addition, we find that, when used as evaluation metrics, conventional metrics such as BLEU and perplexity may adversely affect the training stage of NLG models, as they can lead to sub-optimal hyperparameter choices and checkpoint selection. Our model, in contrast, is much more reliable for making these choices. <<</Discussion and Conclusion>>> <<</Title>>>
{ "references": [ "human preference annotation is available,$Q(x_1, x_2) \\in \\lbrace >,<,\\approx \\rbrace $ is the true label for the pair" ], "type": "extractive" }
2002.05058
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What previous automated evalution approaches authors mention? Context: <<<Title>>> Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models <<<Abstract>>> Automated evaluation of open domain natural language generation (NLG) models remains a challenge and widely used metrics such as BLEU and Perplexity can be misleading in some cases. In our paper, we propose to evaluate natural language generation models by learning to compare a pair of generated sentences by fine-tuning BERT, which has been shown to have good natural language understanding ability. We also propose to evaluate the model-level quality of NLG models with sample-level comparison results with skill rating system. While able to be trained in a fully self-supervised fashion, our model can be further fine-tuned with a little amount of human preference annotation to better imitate human judgment. In addition to evaluating trained models, we propose to apply our model as a performance indicator during training for better hyperparameter tuning and early-stopping. We evaluate our approach on both story generation and chit-chat dialogue response generation. Experimental results show that our model correlates better with human preference compared with previous automated evaluation approaches. Training with the proposed metric yields better performance in human evaluation, which further demonstrates the effectiveness of the proposed model. <<</Abstract>>> <<<Introduction>>> Recent advances in sequence-to-sequence learning architecture BIBREF0 and the transformer model BIBREF1 have raised increasing interest in natural language generation (NLG) tasks, including story generation BIBREF2, open-domain dialogue response generation BIBREF3 and abstractive summarization BIBREF4. Despite the fast advances of models, there remains a huge gap in the evaluation of NLG models and it is hard to measure the progress due to the lack of good evaluation metrics. While perplexity is a good measure of how well a model fits some data, it does not measure performance at the desired task. Word overlap based metrics such as BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7 capture quality better than the perplexity and are useful in translation and summarization. However, they still correlate poorly with human evaluation BIBREF8 in open domain text generation tasks including story generation and dialogue response generation because two equally good generated texts may have no n-gram overlap. Human evaluation is generally considered to be the gold standard evaluation, however, it does not scale well as it is generally expensive and time-consuming to conduct human evaluation. Apart from measuring relative progress between different models, automated evaluation metrics also play an important role in the training stage of NLG models. It is a common practice to tune the model hyperparameter, detect convergence, perform early-stopping, and select the best checkpoints based on the model's performance on automated evaluation metrics. While acceptable for tasks where automated metrics correlate well with human evaluations, including machine translation and text summarization, this can be erroneous and result in sub-optimal training in open domain NLG tasks because available automated metrics correlate poorly with human evaluation, as demonstrated in the experimental section of this paper. 
To tackle the aforementioned problems, in this paper, we propose a self-supervised approach with transfer learning to learn to compare the quality of two samples as an automated comparative Turing test. The motivation of our approach is that we can better assess the quality of generated samples or trained NLG model by comparing it with another one. Our model is a text pair classification model trained to compare the task-specific quality of two samples, which is then used to evaluate the quality of trained NLG models. As human preference annotation is generally expensive, our model is designed to be able to perform self-supervised training using only generated samples and gold reference samples without human preference annotation. When human preference annotation is available, our model can be further fine-tuned to better imitate human judgment. To evaluate the model-level quality of NLG models based on pairwise comparison in sample-level, we adopt the skill rating system similar to ELO BIBREF9 and Trueskill BIBREF10, which is a method for assigning a numerical skill to players in a player-vs-player game, given a win-loss record of games played. In our scenario, the players are NLG models to be evaluated and a higher rating indicates a better model. The skill rating system makes it possible to evaluate all n models without needing to run $n^{2}$ matches and is able to take into account the amount of new information each comparison provides. The contribution of this paper is threefold: We propose a “learning to compare” model to better assess the quality of text generated by NLG models based on pairwise comparison. Our model is able to transfer natural language understanding knowledge from BERT by fine-tuning in a self-supervised way while also able to be further fine-tuned with human preference annotation. Once trained, our model is able to perform inter-model comparison without the need for gold references, which greatly enlarges the potentially available test set and reduces the potential risk of overfitting the reference in the test set. We propose to use the skill rating system to perform model-level evaluation based on the sample-level evaluation information provided by our pairwise comparison model. The skill rating system is more efficient and accurate than several baseline approaches. We conduct experiments on both story generation task and open domain dialogue response generation task. Experimental results show that our approach correlates better with human evaluation on both datasets. Moreover, we show that using automated metrics such as BLEU to perform hyperparameter tuning and early-stopping results in sub-optimal model and our approach helps alleviate this problem. <<</Introduction>>> <<<Related Work>>> Evaluation of NLG models has been a long-standing open problem. While human evaluation may be ideal, it is generally expensive to conduct and does not scale well. Various automated evaluation approaches are proposed to facilitate the development and evaluation of NLG models. We summarize these evaluation approaches below. Text Overlap Metrics, including BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7, are the most popular metrics employed in the evaluation of NLG models. They evaluate generated text by comparing the similarity between the generated text and human written references. 
While this works well in tasks where the diversity of acceptable output is limited, such as machine translation and text summarization, text overlap metrics have been shown to have weak or no correlation with human judgments in open domain natural language generation tasks BIBREF8. There are two major drawbacks of these metrics. First, text overlap metrics cannot distinguish minor variations in a generated text that may make the sentence no longer grammatically correct or semantically meaningful. Second, there may exist multiple equally good outputs for a given input, and comparing against one gold reference can be erroneous. Perplexity is commonly used to evaluate the quality of a language model. It measures how well a probability distribution predicts a sample and captures the degree of uncertainty in the model. It is used to evaluate models in open-domain NLG tasks such as story generation BIBREF2 and open domain dialogue systems. However, “how likely a sentence is to be generated by a given model” may not be comparable across different models and does not indicate the quality of the sentence. Parameterized Metrics learn a parameterized model to evaluate generated text. Adversarial evaluation models BIBREF11, BIBREF12 assign a score based on how easy it is to distinguish the dialogue model's responses from human responses. However, training such a discriminator can be difficult, as the binary classification task can easily be over-fitted, which leads to poor generalizability BIBREF11. Moreover, the information we get from the discriminator accuracy is limited, as we cannot compare the quality of two generated sentences when they both succeed or both fail in fooling the discriminator. A recent study shows that the discriminator accuracy does not correlate well with human preference BIBREF13. The Automated Dialogue Evaluation Model (ADEM) BIBREF14 is another parameterized metric proposed for dialogue system evaluation. It learns to score a generated dialogue response based on the context and the human written reference. However, it requires human-annotated scores for generated sentences. It is generally hard to design appropriate questions for crowdsourcing these scores, which makes the annotation very expensive to obtain, and the inter-annotator agreement score is only moderate BIBREF14. As a result, the training data is limited and noisy, which makes the scoring task even harder. It can be problematic when comparing models of similar quality. In addition, this model is designed only for evaluating dialogue response generation models. More recently, embedding-similarity-based metrics such as HUSE BIBREF15 and BERTScore BIBREF16 have been proposed. These metrics alleviate the first problem of text overlap metrics by modeling semantic similarity better. However, they cannot address the response diversity problem and thus are only suitable for machine translation and text summarization. Another line of research on NLG evaluation is to unify human evaluation with statistical evaluation BIBREF17, BIBREF18. These works are orthogonal to our paper as they mainly focus on the combination of human evaluation and automated evaluation. Another work related to our research is the skill rating system, which evaluates players by observing a record of wins and losses of multiple players and inferring the value of a latent, unobserved skill variable for each player that explains the records of wins and losses. It was first adopted to evaluate GANs BIBREF19 for synthesizing images BIBREF20 by pitting generators against discriminators.
Their approach is an approximation of skill rating as the original skill rating system requires game played by two symmetric players, while in their system the players are asymmetric. Their approach does not include the “tie” option, thus can not distinguish cases where the discriminator is confident enough or not. More importantly, their approach is only designed for evaluating GANs while our approach can be used for any NLG models. <<</Related Work>>> <<<Methodology>>> We present the proposed approach in this section. We begin with the sample-level pairwise comparison model. Afterwards, we introduce how to adopt the skill rating system to perform model-level evaluation of NLG models. <<<Learning to Compare>>> The proposed comparative evaluator is a text pair relation classifier which is trained to compare the task-specific quality of two samples. The motivation of evaluating one sample by comparing it with another sample is drawn from the insight learned when conducting human evaluation for NLG models. We find that when comparing two NLG models, instead of asking human annotator to assign scores separately for samples generated by different models, which resembles the case in the ADEM model BIBREF14, it is much easier for human annotators to directly compare one sample generated by the first model against another sample from the second model pairwisely and compute the win/loss rate. The comparison-based evaluation may also be more accurate, which is demonstrated by a higher inter-annotator agreement score in our preliminary experiments. The comparative evaluator learns a total order of sample quality by classifying whether the first compared sample is better ($>$), worse ($<$), or indistinguishable ($\approx $) in terms of its quality compared with another sample. In this way, our model encodes the inductive bias that sometimes two samples can have similar quality and it is hard and unreliable to choose the better sample. By giving our model the third “tie” option, it can explicitly express its uncertainty and choose its preference only when being confident enough. This design choice is motivated by the practice that adding the “tie” option for human annotator when performing pairwise human evaluation can often make the comparison easier and more reliable. For a text sample, our comparative evaluator can provide a more informative assessment than the binary discriminative evaluator because one evaluated sample can receive multiple feedback from the comparative evaluator by comparing it with multiple other samples. In contrast, the discriminative evaluator can only evaluate a sample once, which is more likely to suffer from the inherent uncertainty of the evaluator. We propose two approaches to construct pairwise training examples for training a comparative evaluator. The first approach generates strong supervision examples. It is based on the intuition that human written references are generally of better quality than machine-generated samples, and it is hard to tell the difference in term of the quality when two compared samples are both generated by machines or human written reference. We denote $S_{+}$$/$$S_{-}$ as the set of real/generated samples. For a real sample $s_{+}\in S_{+}$ and a generated sample $s_{-}\in S_{-}$, we assign the label “better ($>$)” to the pair ($s_+$, $s_-$) and “worse ($<$)” to ($s_-$, $s_+$). 
For two samples both from real data or from the generated samples, we assign the label “indistinguishable ($\approx $)” to such pairs (i.e., ($s_+^i$, $s_+^j$) and ($s_-^i$, $s_-^j$)). For a training set with $n$ real samples and $n$ generated samples, we can construct $\binom{2n}{2}$ pairwise training examples for the comparative evaluator, which enhances the generalization ability and introduces more informative learning signals than the standard real/fake binary discriminative evaluator. Note that when constructing a sample pair ($s_-^i$, $s_-^j$), $s_-^i$ and $s_-^j$ are sampled from the same checkpoint of the same model in order to ensure that they are of similar quality in expectation. One problem of the strong supervision approach is that it always labels two generated samples as indistinguishable. However, during inference, the input of the comparative evaluator is a pair of generated samples from different models. Thus it requires the model to capture the quality relation in the training examples and generalize well enough to successfully compare two samples rather than simply classifying them as indistinguishable, which would provide relatively little information for evaluating NLG models. To tackle this problem, we propose an approach to construct weak supervision examples for training the comparative evaluator. The intuition of our weak supervision approach is that during training, the quality of the NLG model keeps improving until convergence. Given two checkpoints of the same model, we can thus consider samples generated by the more recent checkpoint to be of better quality than samples generated by the earlier version of the same model. This approach is considered weak supervision because the model quality may not improve monotonically and it is sometimes hard to decide whether the model has begun to overfit the training data and its quality has started to decline. To minimize the noise introduced by these problems, we empirically set the minimal margin between two selected checkpoints to be $10\%$ of the total training iterations and do not select two “almost converged” checkpoints. The construction of training samples is similar to the first approach. In addition, motivated by the fact that the larger the quality margin between the two selected versions of the model, the easier it is for the comparative evaluator to learn to distinguish the training examples, we propose to use curriculum learning BIBREF21 by feeding the comparative evaluator with sample pairs with a larger margin (i.e. more training iterations between the two selected checkpoints) during the initial training stage and gradually decreasing the margin to let the model learn to capture smaller quality differences. Moreover, when human preference annotation is available, we can additionally fine-tune the comparative evaluator with human annotations. The comparative evaluator is trained with a maximum likelihood estimation (MLE) objective, i.e., by maximizing $\mathcal {L}(\phi ) = \sum _{(x_1, x_2) \in \mathcal {X}} \log D_\phi ^{Q(x_1, x_2)}(x_1, x_2)$, where $\mathcal {X}$ is the set of pairwise training examples constructed as described above, $Q(x_1, x_2) \in \lbrace >,<,\approx \rbrace $ is the true label for the pair ($x_1$, $x_2$), and $D_\phi ^q(x_1, x_2)$ is the probability of the comparative discriminator's prediction being $q$ ($q \in \lbrace >,<,\approx \rbrace $) for the pair ($x_1$, $x_2$).
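To make the objective above concrete, here is a minimal PyTorch sketch of the three-way comparison loss (a sketch only, not the authors' implementation): the `ComparisonHead` module, the pooled pair representation, and the label mapping are hypothetical stand-ins, and minimizing the cross-entropy below is equivalent to maximizing the log-likelihood $\mathcal {L}(\phi )$ over a batch. In the paper the pair representation would come from BERT, as described next.

```python
import torch
import torch.nn as nn

# Label set for the comparative evaluator: better, worse, indistinguishable.
LABELS = {">": 0, "<": 1, "~": 2}

class ComparisonHead(nn.Module):
    """Hypothetical 3-way classification head over a pooled pair representation
    (e.g., the [CLS] vector of an encoder run on the concatenated pair)."""
    def __init__(self, hidden_size=768, num_labels=3):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, pooled_pair_repr):
        # pooled_pair_repr: (batch, hidden_size) -> logits over {>, <, ~}
        return self.classifier(pooled_pair_repr)

head = ComparisonHead()
# CrossEntropyLoss is the mean negative log-likelihood over the batch, so
# minimizing it maximizes sum_{(x1,x2)} log D_phi^{Q(x1,x2)}(x1, x2).
loss_fn = nn.CrossEntropyLoss()

# Toy batch: random pair representations and gold comparison labels.
pair_repr = torch.randn(4, 768)
gold = torch.tensor([LABELS[">"], LABELS["<"], LABELS["~"], LABELS[">"]])

loss = loss_fn(head(pair_repr), gold)
loss.backward()
```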
As comparing the quality of generated text requires good natural language understanding ability and our comparative evaluator is formulated as a sentence pair classification model, we propose to fine-tune BERT BIBREF22 as the comparative evaluator, the architecture of the resulting comparative evaluator is illustrated by Figure 1. Note that the compared sample A and B are based on the same context, which ensures that they are comparable. <<</Learning to Compare>>> <<<Skill Rating>>> In player-vs-player games such as chess or tennis, skill rating systems such as Elo BIBREF9 or Glicko2 BIBREF23 evaluate players by observing a record of wins and losses of multiple players and inferring the value of a latent, unobserved skill variable for each player that explains the records of wins and losses. We adopt the skill rating system for model-level evaluation of NLG models. By taking the trained comparative evaluator as the “playground” and NLG models as “player”, the “player-vs-player” game is played by sampling one output sample from each NLG model conditioning on the same input and the game output is decided by the comparative evaluator. Following previous work BIBREF20, in our paper, we use the Glicko2 system BIBREF23. The employed system can be summarized as follows: each player's skill rating is represented as a Gaussian distribution, with a mean and standard deviation, representing the current state of the evidence about their “true” skill rating. As we evaluate frozen snapshots of NLG models, we disabled an irrelevant feature of Glicko2 that increases uncertainty about a human player’s skill when they have not participated in a match for some time. Another difference is that conventional skill rating systems do not support the “tie” option, which is important for the system to be stable and reliable in our case because the evaluator is not perfect. To incorporate this feature, we follow the intuition that a player's skill rating should be increased when it draws with another player with a higher skill rating and vice versa. We come up with a simple rule which increases/decreases the skill rating of one player by a ratio (e.g. 0.1) of the changes in its skill rating when it wins/loses if it draws with another player with higher/lower skill rating. In our experiments, the skill rating is performed by randomly sampling two compared models, simulating a “game” between two selected models by sampling one sample from each model and comparing them with the comparative evaluator, and then updating the skill rating of selected models according to the outcome. This procedure is performed iteratively until convergence, which is defined as the order of skill ratings of compared models keeps the same after each model is selected at least 50 times. While the sampling procedure can be optimized by bayesian optimization BIBREF24 or multi-armed bandit algorithms BIBREF25, we choose to keep the method as simple as possible and use random sampling. <<</Skill Rating>>> <<</Methodology>>> <<<Experiments>>> We set up experiments in order to answer the following research questions: RQ1: Can the comparative evaluator correlate better with human preference in sample-level than previous automated metrics when evaluating open domain NLG models? RQ2: Can the comparative evaluator correlate better with human preference in model-level, so that our approach can measure the progress on open domain NLG better? 
RQ3: As existing approaches fail to correlate well with human preference, whether and to what extent this problem affects the quality of the final NLG model when performing hyperparameter search and early-stopping? RQ4: If the previous problem exists, can proposed comparative evaluator reduce this problem? <<<Experimental Settings>>> <<<Datasets>>> We evaluate the effectiveness of the proposed approach on two open domain natural language generation tasks: story generation and open domain dialogue response generation. For story generation, we use the WritingPrompts dataset released by BIBREF2. The WritingPrompts dataset is a large dataset of 303,358 human-generated stories paired with writing prompts from an online forum. NLG models are trained by taking writing prompts as input and generating the whole story. The average length of prompts is 28.4 and the average length of stories is 734.5 words, which makes human evaluation very expensive and better automated metrics are thus critical. For open domain dialogue response generation task, we use the Dailydialog dataset BIBREF26, which consists of dialogues that resemble daily conversations across multiple topics. It comprises of 13k dialogues with an average of 7.9 turns per dialog. <<</Datasets>>> <<<Compared Models and Metrics>>> As our objective is to evaluate the evaluators rather than comparing state-of-the-art models, we choose three representative sequence-to-sequence architectures: LSTM BIBREF27 seq2seq, Convolutional seq2seq BIBREF28, and transformer BIBREF1 model. We compare models with different architectures, hyperparameter choices, and early-stopping criteria with different automated metrics, as well as human evaluation. Regarding the evaluation metric (and criteria for choosing hyperparameter choice and early-stopping), we compare the proposed approach with the discriminative evaluator, BLEU score (average of 2-, 3-, 4-grams), perplexity, and ADEM. When evaluating generated stories, we cut off the story at the nearest sentence for stories longer than 250 words. The proposed comparative evaluator is employed for choosing hyperparameter by performing skill rating among all models trained with different hyperparameter choices. For early-stopping, as incrementally performing skill rating is computationally expensive, we propose to perform n (e.g. 1000) pairwise comparison between the samples generated by the latest checkpoint and the previous k (e.g. 2) checkpoints and stop training when the wining rate of latest checkpoint keeps being smaller than its losing rate for 5 iterations. <<</Compared Models and Metrics>>> <<<Detail of Parameterized Evaluators>>> The proposed comparative evaluator is trained by fine-tuning BERT-large as a sentence-pair classifier. To ensure fair evaluation, we also train the discriminative evaluator by fine-tuning BERT. For ADEM, we adopt its original implementation as its architecture is relatively complicated. In addition, we perform ablation study by evaluating three variants of the comparative evaluator where it is trained without strong supervision examples, without weak supervision examples, without fine-tuning with human preference annotations, and without transferring from BERT. <<</Detail of Parameterized Evaluators>>> <<<Human Evaluation Procedure>>> As human evaluation is expensive, sample-level evaluation is performed jointly with model-level evaluation, which is also used for evaluating the ability of different metrics for performing hyperparameter search and early-stopping. 
Concretely, we perform 10 groups of evaluations for performing hyperparameter selecting and early-stopping with five compared automated metrics. In each evaluation, each of the five compared metrics is used to select the best hyperparameter combination or early-stopping checkpoint with other variants fixed. We choose to perform score-based human evaluation for four reasons: 1) the ADEM baseline requires human-annotated score as training examples, 2) we can construct up to $\binom{2n}{2}$ training examples for our comparative evaluator with $n$ human-annotated scores, 3) score-based human evaluation facilitates the evaluation of correlation scores, and 4) as all other metrics do not perform pairwise comparison, using pairwise human evaluation will likely be biased toward our approach. We sample 20 generated samples from each model (out of 5) of the 20 evaluation groups. We invite 20 human annotators which are all graduate students with good English language proficiency to score these samples. Each annotator scores one sample from each model, such that each model is uniformly evaluated. The score scales from 1 to 5, higher score indicates better overall sample quality. According to experimental results from BIBREF14, we do not ask annotators to provide specific scores for fluency or informativeness. To test the inner-annotator agreement score, we additionally ask them to evaluate another 40 generated samples, of which 20 samples are scored from 1 to 5 and another 20 are evaluated based on pairwise comparison with 4 other generated samples and scored to 1-5 based on how many times they are considered to be better than a reference sample. We get an inter-annotator agreement score $\kappa =0.53$ for directly scoring and $\kappa =0.76$ with pairwise comparison, which validates our intuition that evaluation by comparison may be more accurate. These additional human annotations are used as training data for ADEM and the comparative evaluator. <<</Human Evaluation Procedure>>> <<</Experimental Settings>>> <<<Experimental Designs & Results>>> <<<RQ1: Sample-Level Correlation>>> To test the correlation of different automated metrics with respect to human preference, we employ different metrics to score the collected 2000 samples and calculate their Pearson and Spearman correlation with human scores. For comparative evaluator, as the evaluation is performed pairwisely and no absolute score is available, we use two different approaches to get an absolute score for each sample: 1) we sample 50 common references from machine-generated samples for each task and compare each sample with all references by the comparative evaluator. A sample gets 3 points when beats a reference, 1 point when draws with the reference, and get 0 point when loses, 2) we adopt skill rating system by regarding each sample as an NLG model which always outputs the same sample and use the skill rating for each sample as its score. To ensure the computational budget to be roughly the same, we fix the number of plays in skill rating to 10,000. The experimental results are summarized in Table 1. We can see that the proposed comparative evaluator correlates far better with human judgment than BLEU and perplexity. When compared with recently proposed parameterized metrics including adversarial evaluator and ADEM, our model consistently outperforms them by a large margin, which demonstrates that our comparison-based evaluation metric is able to evaluate sample quality more accurately. 
In addition, we find that evaluating generated samples by comparing it with a set of randomly selected samples or using sample-level skill rating performs almost equally well. This is not surprising as the employed skill rating is able to handle the inherent variance of players (i.e. NLG models). As this variance does not exist when we regard a sample as a model which always generates the same sample. <<</RQ1: Sample-Level Correlation>>> <<<RQ2: Model-Level Correlation>>> As for model-level evaluation, we employ the average score of the evaluated 100 samples as each model's score and calculate their correlation with human scores. For comparative evaluator, we propose three different approaches to get an absolute score for each sample: 1) we calculate the average reference-based score (method 1 for sample-level comparison) of each sample as model-level score, 2) we calculate the average skill rating of each sample obtained in the experiments of RQ1 as model-level score, 2) we use the proposed skill rating system to get a model-level skill rating for each compared model. Results are shown in Table 2. We can see that the proposed comparative evaluator with skill rating significantly outperforms all compared baselines, including comparative evaluator with averaged sample-level scores. This demonstrates the effectiveness of the skill rating system for performing model-level comparison with pairwise sample-level evaluation. In addition, the poor correlation between conventional evaluation metrics including BLEU and perplexity demonstrates the necessity of better automated evaluation metrics in open domain NLG evaluation. <<</RQ2: Model-Level Correlation>>> <<<RQ3&4: Automated Metrics for Model Training>>> We further investigate the impact of imperfect metrics on training NLG models. As described in the human evaluation procedure, we perform 10 runs to test the reliability of each metric when used to perform hyperparameter tuning and early-stopping respectively. In each run, we select the best hyperparameter combination or early-stopping checkpoint based on each of the five compared metrics. Human evaluation is then employed to identify the best choice. We evaluate the performance of each metric by how many times (out of 10) they succeeded in selecting the best hyperparameter combination or early-stopping checkpoint (out of 4) and the average human-annotated score for their selected models. The results are shown in Table 3. We can see that conventional automated metrics perform poorly and result in sub-optimal result when performing hyperparameter search and selecting the best performing checkpoints. Converting evaluation metric from BLEU or perplexity to the proposed comparative evaluator can yield non-neglectable improvements without changing model architecture or training objective. While previous work on NLG evaluation mostly focuses on the evaluation stage and does not explore the influence of imperfect metrics during model training, our experiments demonstrate the existence of this problem and that the proposed method can, to some extent, alleviate this problem. <<</RQ3&4: Automated Metrics for Model Training>>> <<</Experimental Designs & Results>>> <<<Qualitative Analysis>>> We present several comparison examples in the Dailydialog dataset for qualitative analysis of the proposed comparative evaluator. From the first example, we can see that the comparative evaluator is capable of identifying that generic and dull responses (i.e. “I don't know”) should be considered as of worse quality. 
The second example suggests that our approach handles the diversity in possible responses well, as it regards both positive response and negative response as valid responses. Hopefully, these examples may provide us with some insights about why the proposed metric correlates better with human preference. <<</Qualitative Analysis>>> <<<Ablation Study>>> To better understand the proposed comparative evaluator and analyze the relative importance of its different components, we conduct an ablation study with several variants of the proposed model: w/o comparison: Evaluating generated samples without comparison, which degrades to the adversarial evaluation method. w/o strong supervision: Training the comparative evaluator without “strong supervision”, which models the inductive bias that human written reference samples are generally of better quality compared with that generated by NLG models. w/o weak supervision: Training without “weak supervision”, which models the inductive bias that the quality of NLG models generally improves during training. w/o human preference annotation Training without human annotated preference data (i.e. only with strong and weak supervision). w/o tie option The variant of comparative evaluator where the model must select the better sample rather than able to admit its uncertainty. w/o BERT The variant where the model is trained from scratch instead of fine-tuning BERT. We evaluate these model variants on the Dailydialog dataset. Results are presented in Table 5. We can see that comparison-based evaluation is very effective as our model correlates much better than adversarial evaluator. The tie option is also very important as it can prevent the comparative evaluator from making uncertain decision and model the inductive bias that samples generated by the same model are generally of similar quality, which may help our model generalize better. As for different sources of training examples, we find that human preference annotation is the most important, which is not surprising. In addition, we find that the proposed weak supervision also helps, but is of smaller relative importance compared with strong supervision. This may be due to the fact that examples constructed by the weak supervision approach may contain a lot of noise. We can also see that our model correlates well with human preference without training with human preference annotation, this is very important in practice as human annotations are not always available. Finally, we find that transferring the natural language understanding ability from BERT to be very important for the final performance. <<</Ablation Study>>> <<</Experiments>>> <<<Discussion and Conclusion>>> In this paper, we present a novel comparison-based parameterized automated evaluation metric for evaluating open domain NLG models. The proposed model is based on the intuition that we can better evaluate the quality of a sample by comparing it with other samples. Our model allows the model to admit its uncertainty with the “tie” option. We adopt the skill rating system to perform model-level evaluation based on sample-level pairwise comparison. By transferring pretrained natural language understanding knowledge from BERT and fine-tuning with strong and weak supervision examples and human preference annotations, our model correlates better with human judgment than other compared metrics. 
In addition, we find that when used as evaluation metrics, conventional metrics such as BLEU and perplexity may affect the training stage of NLG models as they may lead to sub-optimal hyperparameter choice and checkpoint selection. Our model, in contrast, is much more reliable when performing these choices. <<</Discussion and Conclusion>>> <<</Title>>>
{ "references": [ "Text Overlap Metrics, including BLEU,Perplexity,Parameterized Metrics" ], "type": "extractive" }
2002.05058
Please answer the following question with yes or no based on the given text. You only need to output 'Yes' or 'No' without any additional explanation. Question: Do the authors suggest that proposed metric replace human evaluation on this task? Context: <<<Title>>> Learning to Compare for Better Training and Evaluation of Open Domain Natural Language Generation Models <<<Abstract>>> Automated evaluation of open domain natural language generation (NLG) models remains a challenge and widely used metrics such as BLEU and Perplexity can be misleading in some cases. In our paper, we propose to evaluate natural language generation models by learning to compare a pair of generated sentences by fine-tuning BERT, which has been shown to have good natural language understanding ability. We also propose to evaluate the model-level quality of NLG models with sample-level comparison results with skill rating system. While able to be trained in a fully self-supervised fashion, our model can be further fine-tuned with a little amount of human preference annotation to better imitate human judgment. In addition to evaluating trained models, we propose to apply our model as a performance indicator during training for better hyperparameter tuning and early-stopping. We evaluate our approach on both story generation and chit-chat dialogue response generation. Experimental results show that our model correlates better with human preference compared with previous automated evaluation approaches. Training with the proposed metric yields better performance in human evaluation, which further demonstrates the effectiveness of the proposed model. <<</Abstract>>> <<<Introduction>>> Recent advances in sequence-to-sequence learning architecture BIBREF0 and the transformer model BIBREF1 have raised increasing interest in natural language generation (NLG) tasks, including story generation BIBREF2, open-domain dialogue response generation BIBREF3 and abstractive summarization BIBREF4. Despite the fast advances of models, there remains a huge gap in the evaluation of NLG models and it is hard to measure the progress due to the lack of good evaluation metrics. While perplexity is a good measure of how well a model fits some data, it does not measure performance at the desired task. Word overlap based metrics such as BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7 capture quality better than the perplexity and are useful in translation and summarization. However, they still correlate poorly with human evaluation BIBREF8 in open domain text generation tasks including story generation and dialogue response generation because two equally good generated texts may have no n-gram overlap. Human evaluation is generally considered to be the gold standard evaluation, however, it does not scale well as it is generally expensive and time-consuming to conduct human evaluation. Apart from measuring relative progress between different models, automated evaluation metrics also play an important role in the training stage of NLG models. It is a common practice to tune the model hyperparameter, detect convergence, perform early-stopping, and select the best checkpoints based on the model's performance on automated evaluation metrics. 
While acceptable for tasks where automated metrics correlate well with human evaluations, including machine translation and text summarization, this can be erroneous and result in sub-optimal training in open domain NLG tasks because available automated metrics correlate poorly with human evaluation, as demonstrated in the experimental section of this paper. To tackle the aforementioned problems, in this paper, we propose a self-supervised approach with transfer learning to learn to compare the quality of two samples as an automated comparative Turing test. The motivation of our approach is that we can better assess the quality of generated samples or trained NLG model by comparing it with another one. Our model is a text pair classification model trained to compare the task-specific quality of two samples, which is then used to evaluate the quality of trained NLG models. As human preference annotation is generally expensive, our model is designed to be able to perform self-supervised training using only generated samples and gold reference samples without human preference annotation. When human preference annotation is available, our model can be further fine-tuned to better imitate human judgment. To evaluate the model-level quality of NLG models based on pairwise comparison in sample-level, we adopt the skill rating system similar to ELO BIBREF9 and Trueskill BIBREF10, which is a method for assigning a numerical skill to players in a player-vs-player game, given a win-loss record of games played. In our scenario, the players are NLG models to be evaluated and a higher rating indicates a better model. The skill rating system makes it possible to evaluate all n models without needing to run $n^{2}$ matches and is able to take into account the amount of new information each comparison provides. The contribution of this paper is threefold: We propose a “learning to compare” model to better assess the quality of text generated by NLG models based on pairwise comparison. Our model is able to transfer natural language understanding knowledge from BERT by fine-tuning in a self-supervised way while also able to be further fine-tuned with human preference annotation. Once trained, our model is able to perform inter-model comparison without the need for gold references, which greatly enlarges the potentially available test set and reduces the potential risk of overfitting the reference in the test set. We propose to use the skill rating system to perform model-level evaluation based on the sample-level evaluation information provided by our pairwise comparison model. The skill rating system is more efficient and accurate than several baseline approaches. We conduct experiments on both story generation task and open domain dialogue response generation task. Experimental results show that our approach correlates better with human evaluation on both datasets. Moreover, we show that using automated metrics such as BLEU to perform hyperparameter tuning and early-stopping results in sub-optimal model and our approach helps alleviate this problem. <<</Introduction>>> <<<Related Work>>> Evaluation of NLG models has been a long-standing open problem. While human evaluation may be ideal, it is generally expensive to conduct and does not scale well. Various automated evaluation approaches are proposed to facilitate the development and evaluation of NLG models. We summarize these evaluation approaches below. 
Text Overlap Metrics, including BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7, are the most popular metrics employed in the evaluation of NLG models. They evaluate generated text by comparing the similarity between the generated text and human written references. While this works well in tasks where the diversity of acceptable output is limited, such as machine translation and text summarization, text overlap metrics are shown to have weak or no correlation with human judgments in open domain natural language generation tasks BIBREF8. There are two major drawbacks in these metrics. First, text overlap metrics can not distinguish minor variations in a generated text which may make the sentence not equally grammatically correct or semantically meaningful. Second, there may exist multiple equally good outputs for the given input and comparing against one gold reference can be erroneous. Perplexity is commonly used to evaluate the quality of a language model. It measures how well a probability distribution predicts a sample and captures the degree of uncertainty in the model. It is used to evaluate models in open-domain NLG tasks such as story generation BIBREF2 and open domain dialogue systems. However, “how likely a sentence is generated by a given model” may not be comparable across different models and does not indicate the quality of the sentence. Parameterized Metrics learn a parameterized model to evaluate generated text. Adversarial evaluation models BIBREF11, BIBREF12 assigns a score based on how easy it is to distinguish the dialogue model responses from human responses. However, training such a discriminator can be difficult as the binary classification task can be easily over-fitted and leads to poor generalizability BIBREF11. Moreover, the information we get from the discriminator accuracy is limited as we can not compare the quality of two generated sentences when they both succeed or fail in fooling the discriminator. Recent study shows that the discriminator accuracy does not correlate well with human preference BIBREF13. Automated Dialogue Evaluation Model (ADEM) BIBREF14 is another parameterized metric proposed for dialogue system evaluation. It learns to score a generated dialogue response based on the context and the human written reference. However, it requires human-annotated scores for generated sentences. It is generally hard to design appropriate questions for crowdsourcing these scores, which makes the annotation very expensive to get and the inter-annotator agreement score is only moderate BIBREF14. As a result, the training data is limited and noisy, which makes the scoring task even harder. It can be problematic when comparing models with similar quality. In addition, this model is designed only for evaluating dialogue response generation models. More recently, embedding similarity based metrics such as HUSE BIBREF15 and BERTScore BIBREF16. These metrics alleviate the first problem of text overlap metrics by modeling semantic similarity better. However, they can not address the response diversity problem and thus are only suitable for machine translation and text summarization. Another line of research on NLG evaluation is to unify human evaluation with statistical evaluation BIBREF17, BIBREF18. These works are orthogonal to our paper as they mainly focus on the combination of human evaluation and automated evaluation. 
Another related work of our research is the skill rating system, which evaluates players by observing a record of wins and losses of multiple players and inferring the value of a latent, unobserved skill variable for each player that explains the records of wins and losses. It is first adopted to evaluate GANs BIBREF19 for synthesizing images BIBREF20 by competing generators against discriminators. Their approach is an approximation of skill rating as the original skill rating system requires game played by two symmetric players, while in their system the players are asymmetric. Their approach does not include the “tie” option, thus can not distinguish cases where the discriminator is confident enough or not. More importantly, their approach is only designed for evaluating GANs while our approach can be used for any NLG models. <<</Related Work>>> <<<Methodology>>> We present the proposed approach in this section. We begin with the sample-level pairwise comparison model. Afterwards, we introduce how to adopt the skill rating system to perform model-level evaluation of NLG models. <<<Learning to Compare>>> The proposed comparative evaluator is a text pair relation classifier which is trained to compare the task-specific quality of two samples. The motivation of evaluating one sample by comparing it with another sample is drawn from the insight learned when conducting human evaluation for NLG models. We find that when comparing two NLG models, instead of asking human annotator to assign scores separately for samples generated by different models, which resembles the case in the ADEM model BIBREF14, it is much easier for human annotators to directly compare one sample generated by the first model against another sample from the second model pairwisely and compute the win/loss rate. The comparison-based evaluation may also be more accurate, which is demonstrated by a higher inter-annotator agreement score in our preliminary experiments. The comparative evaluator learns a total order of sample quality by classifying whether the first compared sample is better ($>$), worse ($<$), or indistinguishable ($\approx $) in terms of its quality compared with another sample. In this way, our model encodes the inductive bias that sometimes two samples can have similar quality and it is hard and unreliable to choose the better sample. By giving our model the third “tie” option, it can explicitly express its uncertainty and choose its preference only when being confident enough. This design choice is motivated by the practice that adding the “tie” option for human annotator when performing pairwise human evaluation can often make the comparison easier and more reliable. For a text sample, our comparative evaluator can provide a more informative assessment than the binary discriminative evaluator because one evaluated sample can receive multiple feedback from the comparative evaluator by comparing it with multiple other samples. In contrast, the discriminative evaluator can only evaluate a sample once, which is more likely to suffer from the inherent uncertainty of the evaluator. We propose two approaches to construct pairwise training examples for training a comparative evaluator. The first approach generates strong supervision examples. It is based on the intuition that human written references are generally of better quality than machine-generated samples, and it is hard to tell the difference in term of the quality when two compared samples are both generated by machines or human written reference. 
We denote $S_{+}$$/$$S_{-}$ as the set of real/generated samples. For a real sample $s_{+}\in S_{+}$ and a generated sample $s_{-}\in S_{-}$, we assign the label “better ($>$)” to the pair ($s_+$, $s_-$) and “worse ($<$)” to ($s_-$, $s_+$). For two samples both from real data or from the generated samples, we assign the label “indistinguishable ($\approx $)” to such pairs (i.e., ($s_+^i$, $s_+^j$) and ($s_-^i$, $s_-^j$)). For a training set with $n$ real samples and $n$ generated samples, we can construct $\binom{2n}{2}$ pairwise training examples for the comparative evaluator, allowing to enhance the generalization ability and introduce more informative learning signals than the standard real/fake binary discriminative evaluator. Note that when constructing a sample pair ($s_-^i$, $s_-^j$), $s_-^i$ and $s_-^j$ are sampled from the same checkpoint of the same model in order to ensure that they are of similar quality in expectation. One problem of the strong supervision approach is that it always labels two generated samples as indistinguishable. However, during inference, the input of the comparative evaluator is a pair of two generated samples from different models. Thus it requires the model to capture the quality relation in training examples and generalize well to successfully compare two samples rather than simply classifying them as indistinguishable, which provides relatively less information for evaluating NLG models. To tackle this problem, we propose an approach to construct weak supervision examples for training the comparative evaluator. The intuition of our weak supervision approach is that during training, the quality of the NLG model keeps improving until convergence. Given two checkpoints of the same model, we can thus consider samples generated by the more recent checkpoint are of better quality compared with samples generated by the earlier version of the same model. This approach is considered to be weak supervision because the model quality may not improve monotonically and sometimes it is hard to decide whether the model begins to overfit the training data and its quality starts to decline. To minimize the noise introduced by these problems, we empirically set the minimal margin between two selected checkpoints to be $10\%$ of the total training iteration and do not select two “almost converged” checkpoints. The construction of training samples is similar to the first approach. In addition, motivated by the fact that the larger the margin between the quality two selected version of the model, the easier for the comparative evaluator to learn to distinguish the training examples, we propose to use curriculum learning BIBREF21 by feeding the comparative evaluator with sample pairs with larger margin (i.e. more training iterations between two selected checkpoints) during initial training stage and gradually decrease the margin to let the model gradually learn to capture smaller quality differences. Moreover, when human preference annotation is available, we can additionally fine-tune the comparative evaluator with human annotations. 
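To illustrate how the strong- and weak-supervision pairs described above can be assembled, the following is a rough Python sketch. The function names and label symbols are ours, not the paper's, and details such as drawing generated pairs from a single checkpoint and the curriculum schedule over checkpoint margins are only noted in comments.

```python
import itertools

BETTER, WORSE, TIE = ">", "<", "~"

def strong_supervision_pairs(real_samples, generated_samples):
    """Strong supervision: human references beat generated samples, and
    same-source pairs are labeled as ties. `generated_samples` should come
    from one checkpoint of one model so ties are plausible in expectation."""
    pairs = []
    for s_pos in real_samples:
        for s_neg in generated_samples:
            pairs.append((s_pos, s_neg, BETTER))
            pairs.append((s_neg, s_pos, WORSE))
    for a, b in itertools.combinations(real_samples, 2):
        pairs.append((a, b, TIE))
    for a, b in itertools.combinations(generated_samples, 2):
        pairs.append((a, b, TIE))
    return pairs

def weak_supervision_pairs(early_ckpt_samples, late_ckpt_samples):
    """Weak supervision: samples from a later checkpoint are assumed better
    than samples from a sufficiently earlier checkpoint of the same model
    (e.g., at least 10% of the training iterations apart)."""
    pairs = []
    for late in late_ckpt_samples:
        for early in early_ckpt_samples:
            pairs.append((late, early, BETTER))
            pairs.append((early, late, WORSE))
    return pairs

# Curriculum learning sketch: start training on pairs whose checkpoints are
# far apart, then gradually shrink the margin between the two checkpoints.
```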
The comparative evaluator is trained with a maximum likelihood estimation (MLE) objective, i.e., by maximizing $\mathcal {L}(\phi ) = \sum _{(x_1, x_2) \in \mathcal {X}} \log D_\phi ^{Q(x_1, x_2)}(x_1, x_2)$, where $\mathcal {X}$ is the set of pairwise training examples constructed as described above, $Q(x_1, x_2) \in \lbrace >,<,\approx \rbrace $ is the true label for the pair ($x_1$, $x_2$), and $D_\phi ^q(x_1, x_2)$ is the probability of the comparative discriminator's prediction being $q$ ($q \in \lbrace >,<,\approx \rbrace $) for the pair ($x_1$, $x_2$). As comparing the quality of generated text requires good natural language understanding ability and our comparative evaluator is formulated as a sentence pair classification model, we propose to fine-tune BERT BIBREF22 as the comparative evaluator; the architecture of the resulting comparative evaluator is illustrated in Figure 1. Note that the compared samples A and B are based on the same context, which ensures that they are comparable. <<</Learning to Compare>>> <<<Skill Rating>>> In player-vs-player games such as chess or tennis, skill rating systems such as Elo BIBREF9 or Glicko2 BIBREF23 evaluate players by observing a record of wins and losses of multiple players and inferring the value of a latent, unobserved skill variable for each player that explains the records of wins and losses. We adopt the skill rating system for model-level evaluation of NLG models. By taking the trained comparative evaluator as the “playground” and NLG models as “players”, the “player-vs-player” game is played by sampling one output from each NLG model conditioned on the same input, and the game outcome is decided by the comparative evaluator. Following previous work BIBREF20, in our paper, we use the Glicko2 system BIBREF23. The employed system can be summarized as follows: each player's skill rating is represented as a Gaussian distribution, with a mean and standard deviation representing the current state of the evidence about their “true” skill rating. As we evaluate frozen snapshots of NLG models, we disabled an irrelevant feature of Glicko2 that increases uncertainty about a human player's skill when they have not participated in a match for some time. Another difference is that conventional skill rating systems do not support the “tie” option, which is important for the system to be stable and reliable in our case because the evaluator is not perfect. To incorporate this feature, we follow the intuition that a player's skill rating should be increased when it draws with another player with a higher skill rating and vice versa. We come up with a simple rule that, when a player draws with a higher-/lower-rated player, increases/decreases its skill rating by a ratio (e.g. 0.1) of the change it would receive for a win/loss. In our experiments, the skill rating is performed by randomly sampling two compared models, simulating a “game” between the two selected models by sampling one sample from each model and comparing them with the comparative evaluator, and then updating the skill ratings of the selected models according to the outcome. This procedure is performed iteratively until convergence, which is defined as the point where the order of the skill ratings of the compared models stays the same after each model has been selected at least 50 times. While the sampling procedure could be optimized by Bayesian optimization BIBREF24 or multi-armed bandit algorithms BIBREF25, we choose to keep the method as simple as possible and use random sampling.
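The rating loop just described can be sketched as follows. This is an illustrative simplification: it uses an Elo-style update rather than the full Glicko2 system the paper employs, the `generate` and `evaluator` interfaces are hypothetical, and it runs for a fixed number of games instead of the order-stability convergence check described above. The draw rule follows the ratio-based adjustment (e.g., 0.1 of the corresponding win/loss update).

```python
import random

K = 16           # Elo-style step size (illustrative; the paper uses Glicko2)
TIE_RATIO = 0.1  # fraction of the win/loss update applied on a draw

def expected_score(r_a, r_b):
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(ratings, a, b, outcome):
    """outcome: 1.0 if model `a` wins, 0.0 if it loses, 0.5 for a tie."""
    e_a = expected_score(ratings[a], ratings[b])
    if outcome == 0.5:
        # A draw moves the lower-rated player up (and the higher-rated one
        # down) by TIE_RATIO times the update it would get for a win (loss).
        delta = TIE_RATIO * K * ((1.0 - e_a) if ratings[a] < ratings[b] else -e_a)
        ratings[a] += delta
        ratings[b] -= delta
    else:
        ratings[a] += K * (outcome - e_a)
        ratings[b] += K * ((1.0 - outcome) - (1.0 - e_a))

def skill_rate(models, generate, evaluator, prompts, num_games=10000):
    """models: list of model names; generate(model, prompt) -> text;
    evaluator(text_a, text_b) -> '>', '<' or '~' (hypothetical interfaces).
    The paper iterates until the rating order is stable rather than for a
    fixed number of games."""
    ratings = {m: 1500.0 for m in models}
    for _ in range(num_games):
        a, b = random.sample(models, 2)
        prompt = random.choice(prompts)
        verdict = evaluator(generate(a, prompt), generate(b, prompt))
        outcome = {">": 1.0, "<": 0.0, "~": 0.5}[verdict]
        update(ratings, a, b, outcome)
    return ratings
```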
<<</Skill Rating>>> <<</Methodology>>> <<<Experiments>>> We set up experiments in order to answer the following research questions: RQ1: Can the comparative evaluator correlate better with human preference in sample-level than previous automated metrics when evaluating open domain NLG models? RQ2: Can the comparative evaluator correlate better with human preference in model-level, so that our approach can measure the progress on open domain NLG better? RQ3: As existing approaches fail to correlate well with human preference, whether and to what extent this problem affects the quality of the final NLG model when performing hyperparameter search and early-stopping? RQ4: If the previous problem exists, can proposed comparative evaluator reduce this problem? <<<Experimental Settings>>> <<<Datasets>>> We evaluate the effectiveness of the proposed approach on two open domain natural language generation tasks: story generation and open domain dialogue response generation. For story generation, we use the WritingPrompts dataset released by BIBREF2. The WritingPrompts dataset is a large dataset of 303,358 human-generated stories paired with writing prompts from an online forum. NLG models are trained by taking writing prompts as input and generating the whole story. The average length of prompts is 28.4 and the average length of stories is 734.5 words, which makes human evaluation very expensive and better automated metrics are thus critical. For open domain dialogue response generation task, we use the Dailydialog dataset BIBREF26, which consists of dialogues that resemble daily conversations across multiple topics. It comprises of 13k dialogues with an average of 7.9 turns per dialog. <<</Datasets>>> <<<Compared Models and Metrics>>> As our objective is to evaluate the evaluators rather than comparing state-of-the-art models, we choose three representative sequence-to-sequence architectures: LSTM BIBREF27 seq2seq, Convolutional seq2seq BIBREF28, and transformer BIBREF1 model. We compare models with different architectures, hyperparameter choices, and early-stopping criteria with different automated metrics, as well as human evaluation. Regarding the evaluation metric (and criteria for choosing hyperparameter choice and early-stopping), we compare the proposed approach with the discriminative evaluator, BLEU score (average of 2-, 3-, 4-grams), perplexity, and ADEM. When evaluating generated stories, we cut off the story at the nearest sentence for stories longer than 250 words. The proposed comparative evaluator is employed for choosing hyperparameter by performing skill rating among all models trained with different hyperparameter choices. For early-stopping, as incrementally performing skill rating is computationally expensive, we propose to perform n (e.g. 1000) pairwise comparison between the samples generated by the latest checkpoint and the previous k (e.g. 2) checkpoints and stop training when the wining rate of latest checkpoint keeps being smaller than its losing rate for 5 iterations. <<</Compared Models and Metrics>>> <<<Detail of Parameterized Evaluators>>> The proposed comparative evaluator is trained by fine-tuning BERT-large as a sentence-pair classifier. To ensure fair evaluation, we also train the discriminative evaluator by fine-tuning BERT. For ADEM, we adopt its original implementation as its architecture is relatively complicated. 
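Referring back to the early-stopping criterion described above (comparing samples from the latest checkpoint against the previous k checkpoints and stopping once its win rate stays below its loss rate for several iterations), a minimal sketch might look as follows; the `evaluator` interface and the bookkeeping dictionary are hypothetical.

```python
def should_stop(evaluator, latest_samples, prev_ckpt_samples_list,
                patience_state, patience=5):
    """Early-stopping sketch: pairwise-compare the latest checkpoint's samples
    with samples from the previous k checkpoints, and stop once the latest
    checkpoint's win count stays below its loss count `patience` times in a
    row. `evaluator(a, b)` returns '>', '<' or '~' (hypothetical)."""
    wins = losses = 0
    for prev_samples in prev_ckpt_samples_list:              # k previous checkpoints
        for new, old in zip(latest_samples, prev_samples):   # n comparisons in total
            verdict = evaluator(new, old)
            if verdict == ">":
                wins += 1
            elif verdict == "<":
                losses += 1
    if wins < losses:
        patience_state["streak"] = patience_state.get("streak", 0) + 1
    else:
        patience_state["streak"] = 0
    return patience_state["streak"] >= patience
```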
In addition, we perform ablation study by evaluating three variants of the comparative evaluator where it is trained without strong supervision examples, without weak supervision examples, without fine-tuning with human preference annotations, and without transferring from BERT. <<</Detail of Parameterized Evaluators>>> <<<Human Evaluation Procedure>>> As human evaluation is expensive, sample-level evaluation is performed jointly with model-level evaluation, which is also used for evaluating the ability of different metrics for performing hyperparameter search and early-stopping. Concretely, we perform 10 groups of evaluations for performing hyperparameter selecting and early-stopping with five compared automated metrics. In each evaluation, each of the five compared metrics is used to select the best hyperparameter combination or early-stopping checkpoint with other variants fixed. We choose to perform score-based human evaluation for four reasons: 1) the ADEM baseline requires human-annotated score as training examples, 2) we can construct up to $\binom{2n}{2}$ training examples for our comparative evaluator with $n$ human-annotated scores, 3) score-based human evaluation facilitates the evaluation of correlation scores, and 4) as all other metrics do not perform pairwise comparison, using pairwise human evaluation will likely be biased toward our approach. We sample 20 generated samples from each model (out of 5) of the 20 evaluation groups. We invite 20 human annotators which are all graduate students with good English language proficiency to score these samples. Each annotator scores one sample from each model, such that each model is uniformly evaluated. The score scales from 1 to 5, higher score indicates better overall sample quality. According to experimental results from BIBREF14, we do not ask annotators to provide specific scores for fluency or informativeness. To test the inner-annotator agreement score, we additionally ask them to evaluate another 40 generated samples, of which 20 samples are scored from 1 to 5 and another 20 are evaluated based on pairwise comparison with 4 other generated samples and scored to 1-5 based on how many times they are considered to be better than a reference sample. We get an inter-annotator agreement score $\kappa =0.53$ for directly scoring and $\kappa =0.76$ with pairwise comparison, which validates our intuition that evaluation by comparison may be more accurate. These additional human annotations are used as training data for ADEM and the comparative evaluator. <<</Human Evaluation Procedure>>> <<</Experimental Settings>>> <<<Experimental Designs & Results>>> <<<RQ1: Sample-Level Correlation>>> To test the correlation of different automated metrics with respect to human preference, we employ different metrics to score the collected 2000 samples and calculate their Pearson and Spearman correlation with human scores. For comparative evaluator, as the evaluation is performed pairwisely and no absolute score is available, we use two different approaches to get an absolute score for each sample: 1) we sample 50 common references from machine-generated samples for each task and compare each sample with all references by the comparative evaluator. A sample gets 3 points when beats a reference, 1 point when draws with the reference, and get 0 point when loses, 2) we adopt skill rating system by regarding each sample as an NLG model which always outputs the same sample and use the skill rating for each sample as its score. 
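As a small sketch of the first sample-level scoring scheme above (3/1/0 points against a pool of common references) and of the correlation computation, assuming a hypothetical `evaluator(a, b)` that returns '>', '<' or '~' and that SciPy is available:

```python
from scipy.stats import pearsonr, spearmanr

def reference_based_score(sample, references, evaluator):
    """3 points per reference beaten, 1 per draw, 0 per loss."""
    points = {">": 3, "~": 1, "<": 0}
    return sum(points[evaluator(sample, ref)] for ref in references)

def correlation_with_humans(metric_scores, human_scores):
    """Pearson and Spearman correlation between a metric and human scores."""
    return {
        "pearson": pearsonr(metric_scores, human_scores)[0],
        "spearman": spearmanr(metric_scores, human_scores)[0],
    }
```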
To ensure the computational budget to be roughly the same, we fix the number of plays in skill rating to 10,000. The experimental results are summarized in Table 1. We can see that the proposed comparative evaluator correlates far better with human judgment than BLEU and perplexity. When compared with recently proposed parameterized metrics including adversarial evaluator and ADEM, our model consistently outperforms them by a large margin, which demonstrates that our comparison-based evaluation metric is able to evaluate sample quality more accurately. In addition, we find that evaluating generated samples by comparing it with a set of randomly selected samples or using sample-level skill rating performs almost equally well. This is not surprising as the employed skill rating is able to handle the inherent variance of players (i.e. NLG models). As this variance does not exist when we regard a sample as a model which always generates the same sample. <<</RQ1: Sample-Level Correlation>>> <<<RQ2: Model-Level Correlation>>> As for model-level evaluation, we employ the average score of the evaluated 100 samples as each model's score and calculate their correlation with human scores. For comparative evaluator, we propose three different approaches to get an absolute score for each sample: 1) we calculate the average reference-based score (method 1 for sample-level comparison) of each sample as model-level score, 2) we calculate the average skill rating of each sample obtained in the experiments of RQ1 as model-level score, 2) we use the proposed skill rating system to get a model-level skill rating for each compared model. Results are shown in Table 2. We can see that the proposed comparative evaluator with skill rating significantly outperforms all compared baselines, including comparative evaluator with averaged sample-level scores. This demonstrates the effectiveness of the skill rating system for performing model-level comparison with pairwise sample-level evaluation. In addition, the poor correlation between conventional evaluation metrics including BLEU and perplexity demonstrates the necessity of better automated evaluation metrics in open domain NLG evaluation. <<</RQ2: Model-Level Correlation>>> <<<RQ3&4: Automated Metrics for Model Training>>> We further investigate the impact of imperfect metrics on training NLG models. As described in the human evaluation procedure, we perform 10 runs to test the reliability of each metric when used to perform hyperparameter tuning and early-stopping respectively. In each run, we select the best hyperparameter combination or early-stopping checkpoint based on each of the five compared metrics. Human evaluation is then employed to identify the best choice. We evaluate the performance of each metric by how many times (out of 10) they succeeded in selecting the best hyperparameter combination or early-stopping checkpoint (out of 4) and the average human-annotated score for their selected models. The results are shown in Table 3. We can see that conventional automated metrics perform poorly and result in sub-optimal result when performing hyperparameter search and selecting the best performing checkpoints. Converting evaluation metric from BLEU or perplexity to the proposed comparative evaluator can yield non-neglectable improvements without changing model architecture or training objective. 
While previous work on NLG evaluation mostly focuses on the evaluation stage and does not explore the influence of imperfect metrics during model training, our experiments demonstrate the existence of this problem and that the proposed method can, to some extent, alleviate this problem. <<</RQ3&4: Automated Metrics for Model Training>>> <<</Experimental Designs & Results>>> <<<Qualitative Analysis>>> We present several comparison examples in the Dailydialog dataset for qualitative analysis of the proposed comparative evaluator. From the first example, we can see that the comparative evaluator is capable of identifying that generic and dull responses (i.e. “I don't know”) should be considered as of worse quality. The second example suggests that our approach handles the diversity in possible responses well, as it regards both positive response and negative response as valid responses. Hopefully, these examples may provide us with some insights about why the proposed metric correlates better with human preference. <<</Qualitative Analysis>>> <<<Ablation Study>>> To better understand the proposed comparative evaluator and analyze the relative importance of its different components, we conduct an ablation study with several variants of the proposed model: w/o comparison: Evaluating generated samples without comparison, which degrades to the adversarial evaluation method. w/o strong supervision: Training the comparative evaluator without “strong supervision”, which models the inductive bias that human written reference samples are generally of better quality compared with that generated by NLG models. w/o weak supervision: Training without “weak supervision”, which models the inductive bias that the quality of NLG models generally improves during training. w/o human preference annotation Training without human annotated preference data (i.e. only with strong and weak supervision). w/o tie option The variant of comparative evaluator where the model must select the better sample rather than able to admit its uncertainty. w/o BERT The variant where the model is trained from scratch instead of fine-tuning BERT. We evaluate these model variants on the Dailydialog dataset. Results are presented in Table 5. We can see that comparison-based evaluation is very effective as our model correlates much better than adversarial evaluator. The tie option is also very important as it can prevent the comparative evaluator from making uncertain decision and model the inductive bias that samples generated by the same model are generally of similar quality, which may help our model generalize better. As for different sources of training examples, we find that human preference annotation is the most important, which is not surprising. In addition, we find that the proposed weak supervision also helps, but is of smaller relative importance compared with strong supervision. This may be due to the fact that examples constructed by the weak supervision approach may contain a lot of noise. We can also see that our model correlates well with human preference without training with human preference annotation, this is very important in practice as human annotations are not always available. Finally, we find that transferring the natural language understanding ability from BERT to be very important for the final performance. 
<<</Ablation Study>>> <<</Experiments>>> <<<Discussion and Conclusion>>> In this paper, we present a novel comparison-based parameterized automated evaluation metric for evaluating open domain NLG models. The proposed model is based on the intuition that we can better evaluate the quality of a sample by comparing it with other samples. Our model allows the model to admit its uncertainty with the “tie” option. We adopt the skill rating system to perform model-level evaluation based on sample-level pairwise comparison. By transferring pretrained natural language understanding knowledge from BERT and fine-tuning with strong and weak supervision examples and human preference annotations, our model correlates better with human judgment than other compared metrics. In addition, we find that when used as evaluation metrics, conventional metrics such as BLEU and perplexity may affect the training stage of NLG models as they may lead to sub-optimal hyperparameter choice and checkpoint selection. Our model, in contrast, is much more reliable when performing these choices. <<</Discussion and Conclusion>>> <<</Title>>>
{ "references": [ "No" ], "type": "boolean" }
2002.06675
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: How big are improvements with multilingual ASR training vs single language training? Context: <<<Title>>> Speech Corpus of Ainu Folklore and End-to-end Speech Recognition for Ainu Language <<<Abstract>>> Ainu is an unwritten language that has been spoken by Ainu people who are one of the ethnic groups in Japan. It is recognized as critically endangered by UNESCO and archiving and documentation of its language heritage is of paramount importance. Although a considerable amount of voice recordings of Ainu folklore has been produced and accumulated to save their culture, only a quite limited parts of them are transcribed so far. Thus, we started a project of automatic speech recognition (ASR) for the Ainu language in order to contribute to the development of annotated language archives. In this paper, we report speech corpus development and the structure and performance of end-to-end ASR for Ainu. We investigated four modeling units (phone, syllable, word piece, and word) and found that the syllable-based model performed best in terms of both word and phone recognition accuracy, which were about 60% and over 85% respectively in speaker-open condition. Furthermore, word and phone accuracy of 80% and 90% has been achieved in a speaker-closed setting. We also found out that a multilingual ASR training with additional speech corpora of English and Japanese further improves the speaker-open test accuracy. <<</Abstract>>> <<<Introduction>>> Automatic speech recognition (ASR) technology has been made a dramatic progress and is currently brought to a pratical levels of performance assisted by large speech corpora and the introduction of deep learning techniques. However, this is not the case for low-resource languages which do not have large corpora like English and Japanese have. There are about 5,000 languages in the world over half of which are faced with the danger of extinction. Therefore, constructing ASR systems for these endangered languages is an important issue. The Ainu are an indigenous people of northern Japan and Sakhakin in Russia, but their language has been fading away ever since the Meiji Restoration and Modernization. On the other hand, active efforts to preserve their culture have been initiated by the Government of Japan, and exceptionally large oral recordings have been made. Nevertheless, a majority of the recordings have not been transcribed and utilized effectively. Since transcribing them requires expertise in the Ainu language, not so many people are able to work on this task. Hence, there is a strong demand for an ASR system for the Ainu language. We started a project of Ainu ASR and this article is the first report of this project. We have built an Ainu speech corpus based on data provided by the Ainu Museum and the Nibutani Ainu Culture Museum. The oral recordings in this data consist of folklore and folk songs, and we chose the former to construct the ASR model. The end-to-end method of speech recognition has been proposed recently and has achieved performance comparable to that of the conventional DNN-HMM hybrid modeling BIBREF0, BIBREF1, BIBREF2. End-to-end systems do not have a complex hierarchical structure and do not require expertise in target languages such as their phonology and morphology. In this study we adopt the attention mechanism BIBREF3, BIBREF4 and combine it with Connectionist Temporal Classification (CTC) BIBREF5, BIBREF6. 
In this work, we investigate the modeling unit and the utilization of corpora of other languages. <<</Introduction>>> <<<Overview of the Ainu Language>>> This section briefly overviews the background of the data collection, the Ainu language, and its writing system. After that, we describe how Ainu recordings are classified and review previous works dealing with the Ainu language. <<<Background>>> The Ainu people had a total population of about 20,000 in the mid-19th century BIBREF7, and they used to live widely distributed in the area that includes Hokkaido, Sakhalin, and the Kuril Islands. The number of native speakers, however, rapidly decreased through the assimilation policy after the late 19th century. At present, there are fewer than 10 native speakers, and UNESCO listed their language as critically endangered in 2009 BIBREF8. In response to this situation, Ainu folklore and songs have been actively recorded since the late 20th century in efforts initiated by the Government of Japan. For example, the Ainu Museum started audio recording of Ainu folklore in 1976 with the cooperation of a few Ainu elders, which resulted in the collection of speech data with a total duration of roughly 700 hours. This kind of data should be a key to the understanding of Ainu culture, but most of it is not transcribed and fully studied yet. <<</Background>>> <<<The Ainu Language and its Writing System>>> The Ainu language is an agglutinative language and has some similarities to Japanese. However, its genealogical relationship with other languages has not been clearly understood yet. Among its features, such as closed syllables and personal verbal affixes, one important feature is that there are many compound words. For example, the word atuykorkamuy (meaning “a sea turtle”) can be disassembled into atuy (“the sea”), kor (“to have”), and kamuy (“god”). Although the Ainu people did not traditionally have a writing system, the Ainu language is currently written following the examples in the reference book “Akor itak” BIBREF9. With this writing system, it is transcribed with sixteen Roman letters {a, c, e, h, i, k, m, n, o, p, r, s, t, u, w, y}. Since each of these letters corresponds to a unique pronunciation, we call them “phones” for convenience. In addition, the symbol {=} is used for connecting a verb and a personal affix, and { ' } is used to represent the pharyngeal stop. For the purpose of transcribing recordings, the consonant symbols {b, d, g, z} are additionally used to transcribe Japanese sounds the speakers utter. The symbols { _ , __ } are used to transcribe drops and liaisons of phones. An example is shown below. <<</The Ainu Language and its Writing System>>> <<<Types of Ainu Recordings>>> The Ainu oral traditions are classified into three types: “yukar” (heroic epics), “kamuy yukar” (mythic epics), and “uwepeker” (prose tales). Yukar and kamuy yukar are recited in rhythm, while uwepeker is not. In this study, we focus on the prose tales as the first step. <<</Types of Ainu Recordings>>> <<<Previous Work>>> There have so far been a few studies dealing with the Ainu language. ainulrec built a dependency treebank in the scheme of Universal Dependencies. postag developed tools for part-of-speech (POS) tagging and word segmentation. Ainu speech recognition was tried by ainutrans with 2.5 hours of Ainu folklore data, even though the Ainu language was not their main target. Their phone error rate was about 40%, which is not an accuracy level suitable for practical use yet.
It appears that there has not been a substantial Ainu speech recognition study yet that utilizes corpora of a reasonable size. Therefore, our first step was to build a speech corpus for ASR based on the data sets provided by the Ainu Museum and the Nibutani Ainu Culture Museum. <<</Previous Work>>> <<</Overview of the Ainu Language>>> <<<Ainu Speech Corpus>>> In this section, we explain the content of the data sets and how we modified it for our ASR corpus. <<<Numbers of Speakers and Episodes>>> The corpus we have prepared for ASR in this study is composed of text and speech. Table 1 shows the number of episodes and the total speech duration for each speaker. Among the total of eight speakers, the data of the speakers KM and UT is from the Ainu Museum, and the rest is from the Nibutani Ainu Culture Museum. All speakers are female. The length of the recording for a speaker varies depending on the circumstances at the recording times. A sample text and its English translation are shown in Table 2. <<</Numbers of Speakers and Episodes>>> <<<Data Annotation>>> For efficient training of the ASR model, we have made some modifications to the provided data. First, from the transcripts explained in Section 2.1, the symbols {_ , __ , '} have been removed, as seen in the example below. Though the equal symbol (`=') does not represent a sound, we keep it because it is used in almost all of the Ainu documents and provides grammatical information. To train an ASR system, the speech data needs to be segmented into a set of manageable chunks. For the ease of automatic processing, we chose to segment speech into inter-pausal units (IPUs) BIBREF10, each of which is a stretch of speech bounded by pauses. The number of IPUs for each speaker is shown in Table 1. <<</Data Annotation>>> <<</Ainu Speech Corpus>>> <<<End-to-end Speech Recognition>>> In this section, the two approaches to end-to-end speech recognition that we adopt in this work are summarized. Then, we introduce the four modeling units we investigate, i.e., phone, syllable, word piece, and word. We also discuss the multilingual training that we adopt for tackling the low-resource problem. <<<End-to-end Modeling>>> End-to-end models have an architecture much simpler than that of conventional DNN-HMM hybrid models. Since they predict character or word symbols directly from acoustic features, pronunciation dictionaries and language modeling are not required explicitly. In this paper, we utilize two kinds of end-to-end models, namely, Connectionist Temporal Classification (CTC) and the attention-based encoder-decoder model. CTC augments the output symbol set with the “blank” symbol `$\phi $'. It outputs symbols by contracting frame-wise outputs from recurrent neural networks (RNNs). This is done by first collapsing repeated symbols and then removing all blank symbols, as in the following example: a a $\phi $ b b $\phi $ b $\rightarrow $ a b b. The probability of an output sequence $\mathbf {L}$ for an input acoustic feature sequence $\mathbf {X}$, where $|\mathbf {L}| < |\mathbf {X}|$, is defined as $P(\mathbf {L}|\mathbf {X}) = \sum _{\pi \in \mathcal {B}^{-1}(\mathbf {L})} P(\pi |\mathbf {X})$ (1). $\mathcal {B}$ is a function to contract the outputs of RNNs, so $\mathcal {B}^{-1}(\mathbf {L})$ means the set of symbol sequences that are reduced to $\mathbf {L}$. The model is trained to maximize (1). The attention-based encoder-decoder model is another method for mapping between two sequences with different lengths. It has two RNNs called the “encoder” and the “decoder”.
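To make the CTC contraction function $\mathcal {B}$ described above concrete, here is a minimal sketch in Python; the function name, the string used for the blank symbol, and the example sequence are illustrative assumptions, not details from the paper's implementation.

# Minimal sketch of the CTC contraction: collapse runs of identical symbols,
# then remove all blank symbols ("phi" stands for the CTC blank).
def ctc_collapse(frame_outputs, blank="phi"):
    collapsed = []
    previous = None
    for symbol in frame_outputs:
        if symbol != previous:   # keep only the first symbol of each run
            collapsed.append(symbol)
        previous = symbol
    return [s for s in collapsed if s != blank]

# Example: ['a', 'a', 'phi', 'b', 'b', 'phi', 'b'] -> ['a', 'b', 'b']
print(ctc_collapse(["a", "a", "phi", "b", "b", "phi", "b"]))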
In the naive encoder-decoder model, the encoder converts the input sequence into a single context vector, which is the last hidden state of the encoder RNN, from which the decoder infers output symbols. In an attention-based model, the context vector $\mathbf {c}_l$ at the $l$-th decoding step is the sum of the products of all encoder outputs $h_1, ... , h_\mathrm {T}$ and the $l$-th attention weights $\alpha _{1,l}, ... , \alpha _{\mathrm {T},l}$, as shown in (2): $\mathbf {c}_l = \sum _{t=1}^{\mathrm {T}} \alpha _{t,l} h_t$ (2). Here, $\mathrm {T}$ is the length of the encoder output. The attention weights $\alpha _{1,l}, ... , \alpha _{\mathrm {T},l}$ indicate the relative importance of the encoder output frames for the $l$-th decoding step, and the model parameters that generate these weights are determined through end-to-end training. In our model, the attention-based model and the CTC share the encoder and are optimized simultaneously, as shown in Figure 1 BIBREF11. Long Short-Term Memory (LSTM) BIBREF12 is used for the RNNs in the encoder and the decoder. <<</End-to-end Modeling>>> <<<Modeling Units>>> In conventional DNN-HMM hybrid modeling, the acoustic model outputs probabilities of triphone states for each acoustic feature, which are then converted into the most likely word sequence. An end-to-end model, on the other hand, has some degree of freedom in the modeling unit other than phones, and there are some studies that use characters or words as a unit BIBREF13, BIBREF14. A word-unit-based end-to-end model can take long context into consideration at inference time, but it has a data sparsity problem due to its large vocabulary size. Though a phone-unit-based model does not have such a problem, it cannot capture such long context. Which unit to adopt depends on the size of the available corpora. In addition to these two units, a word piece unit, which is defined by automatically dividing a word into frequent parts, has been proposed BIBREF15, BIBREF16, and its vocabulary size can be determined almost freely. In this paper, we investigate the modeling unit for end-to-end Ainu speech recognition since the optimal unit for a corpus of this size is not obvious BIBREF17. It is presupposed that all units can be converted into word units automatically. The candidates are phone, syllable, word piece (WP), and word. Examples of them are shown in Table 3, and the details of each unit are described below. <<<Phone>>> As mentioned in Section 2.1, we regard the Roman letters as phones. `=' and the special symbol `$\langle $wb$\rangle $', which means a word boundary, are added to make it possible to convert the output into a sequence of words like the `original' in Table 3. <<</Phone>>> <<<Syllable>>> A syllable of the Ainu language takes the form of either V, CV, VC, or CVC, where `C' and `V' mean consonant and vowel, respectively. The phones {a, e, i, o, u} are vowels and the rest of the Roman letters in Section 2.2 are consonants. In this work, every word is divided into syllables by the following procedure. (1) A word with a single letter is unchanged. (2) Two consecutive Cs or Vs are given a syllable boundary between them: R$^*${CC, VV}R$^*$ $\rightarrow $ R$^*${C-C, V-V}R$^*$ (R $\in $ {C, V}). (3) Put a syllable boundary after the segment-initial V if it is followed by at least two phones: VCR$^+$ $\rightarrow $ V-CR$^+$. (4) Put a syllable boundary after CV repeatedly from left to right until only CV or CVC is left: (CV)$^*${CV, CVC} $\rightarrow $ (CV-)$^*${CV, CVC}. In addition, `=' and `$\langle $wb$\rangle $' are added as explained in Section 4.2.1.
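The syllabification procedure above can be captured in a short script. The following is a minimal sketch in Python; it is an illustrative reimplementation of the four rules (ignoring `=' and the word-boundary symbol), not the authors' code, and it shares the limitation discussed next, namely that the boundaries are not always morphologically correct.

VOWELS = set("aeiou")

def split_ainu_syllables(word):
    # Rule 1: a single-letter word is unchanged.
    if len(word) == 1:
        return [word]
    # Rule 2: put a boundary between two consecutive consonants or vowels.
    segments, current = [], word[0]
    for prev, cur in zip(word, word[1:]):
        if (prev in VOWELS) == (cur in VOWELS):
            segments.append(current)
            current = cur
        else:
            current += cur
    segments.append(current)
    syllables = []
    for seg in segments:
        # Rule 3: split off a segment-initial vowel followed by two or more phones.
        if seg[0] in VOWELS and len(seg) >= 3:
            syllables.append(seg[0])
            seg = seg[1:]
        # Rule 4: peel off CV from the left until only CV or CVC remains.
        while len(seg) > 3:
            syllables.append(seg[:2])
            seg = seg[2:]
        syllables.append(seg)
    return syllables

print(split_ainu_syllables("isermakus"))  # -> ['i', 'ser', 'ma', 'kus']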
This procedure does not always generate a morphologically relevant syllable segmentation. For example, the word isermakus (meaning “(for a god) to protect from behind”) is divided as i-ser-ma-kus, but the right syllabification is i-ser-mak-us. <<</Syllable>>> <<<Word Piece>>> Byte pair encoding (BPE) BIBREF18 and unigram language modeling BIBREF19 are alternative methods for dividing a word into word pieces. The former repeatedly replaces the most common character pair with a new single symbol until the vocabulary reaches the intended size. The latter decides the segmentation so as to maximize the likelihood of the sequence. We adopt the latter and use the open-source software SentencePiece BIBREF20. With this tool, `$\langle $wb$\rangle $' and other units are often merged to constitute a single piece, as seen in Table 3. <<</Word Piece>>> <<<Word>>> The original text can be segmented into words separated by spaces. To make the vocabulary smaller for ease of training, `=' is treated as a word and infrequent words are replaced with the special label `$\langle $unk$\rangle $'. As seen in Table 3, `a=saha' is dealt with as three words (`a', `=', `saha') and the word `kokopan' is replaced with `$\langle $unk$\rangle $'. <<</Word>>> <<</Modeling Units>>> <<<Multilingual Training>>> When a sufficient amount of data is not available for the target language, ASR model training can be enhanced by taking advantage of data from other languages BIBREF21, BIBREF22. There are some similarities between the Ainu and Japanese languages BIBREF23. For instance, both have almost the same set of vowels and do not have consonant clusters (like `str' in `strike' in English). Hence, multilingual training with a Japanese corpus is expected to be effective. In addition, an English corpus is used for the purpose of comparison. The corpora used are the JNAS corpus BIBREF24 (in Japanese) and the WSJ corpus BIBREF25 (in English). JNAS comprises roughly 80 hours of speech from 320 speakers, and WSJ has about 70 hours of speech from 280 speakers. In the multilingual training, the encoder and the attention module are shared among the Ainu ASR model and the models for the other languages, and they are trained using data for all languages. Figure 2 shows the architecture for multilingual learning with two corpora. When the input acoustic features are from the Ainu ASR corpus, they go through the shared encoder and attention module and are delivered into the decoder on the left side in Figure 2 as a context vector. In this case, the right-side decoder is not trained. <<</Multilingual Training>>> <<</End-to-end Speech Recognition>>> <<<Experimental Evaluation>>> In this section, the settings and results of the ASR experiments are described and discussed. <<<Data Setup>>> The ASR experiments were performed in the speaker-open condition as well as the speaker-closed condition. In the speaker-closed condition, two episodes were set aside from each speaker as development and test sets. As a result, the total sizes of the development and test sets turn out to be 1585 IPUs spanning 2 hours and 23 minutes and 1841 IPUs spanning 2 hours and 48 minutes, respectively. The ASR model is trained with the rest of the data. In the speaker-open condition, all the data except for the test speaker's were used for training. As it would be difficult to train the model if all of the data of speaker KM or UT were removed, experiments using their speaker-open conditions were not conducted.
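As a concrete illustration of the two evaluation conditions in the Data Setup above, the sketch below builds both kinds of splits in Python; the episode data structure and the helper names are hypothetical conveniences for illustration, not details taken from the paper.

# Each episode is assumed to be a dict like {"speaker": "KM", "episode_id": 3, ...}.
def speaker_closed_split(episodes, held_out_per_speaker=2):
    # Hold out a fixed number of episodes per speaker for dev/test and
    # train on the remaining episodes of the same speakers.
    by_speaker = {}
    for ep in episodes:
        by_speaker.setdefault(ep["speaker"], []).append(ep)
    train, heldout = [], []
    for eps in by_speaker.values():
        heldout.extend(eps[:held_out_per_speaker])
        train.extend(eps[held_out_per_speaker:])
    return train, heldout

def speaker_open_split(episodes, test_speaker):
    # Leave one speaker out entirely: test on that speaker and
    # train on all data from the remaining speakers.
    train = [ep for ep in episodes if ep["speaker"] != test_speaker]
    test = [ep for ep in episodes if ep["speaker"] == test_speaker]
    return train, test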
<<</Data Setup>>> <<<Experimental Setting>>> The input acoustic features were 120-dimensional vectors made by frame stacking BIBREF26 three 40-dimensional log-mel filter bank features at contiguous time frames. The window length and the frame shift were set to 25 ms and 10 ms, respectively. The encoder was composed of five BiLSTM layers, and the attention-based decoder had a single LSTM layer. Each LSTM had 320 cells, and their weights were randomly initialized using a uniform distribution DBLP:journals/corr/HeZR015, with biases set to zero. The fully connected layers were initialized following $\mathcal {U}{(-0.1, 0.1)}$. Weight decay BIBREF27 with a rate of $10^{-5}$ and dropout BIBREF28 following $\mathcal {B}e(0.2)$ were used to alleviate overfitting. The parameters were optimized with Adam BIBREF29. The learning rate was $10^{-3}$ at first and was multiplied by $10^{-1}$ at the beginning of the 31st and 36th epochs BIBREF30. The mini-batch size was 30, and the utterances (IPUs) were sorted in ascending order of length. To stabilize the training, we removed utterances longer than 12 seconds. The loss function of the model was a linear sum of the losses from the CTC and the attention-based decoder (i.e., $\lambda \mathcal {L}_{\rm CTC} + (1 - \lambda ) \mathcal {L}_{\rm att}$), where $\lambda $ was set to 0.5. Throughout all experiments, phone labels were used to train the auxiliary CTC task because it is reported that a hierarchical architecture, using few and general labels in the auxiliary task, improves the performance BIBREF31. Strictly speaking, the number of units for each modeling unit type depends on the training set, but there are roughly 25 phone, 500 syllable, and 5,000 word units, including special symbols that represent the start and end of a sentence. Words occurring less than twice were replaced with `$\langle $unk$\rangle $'. The vocabulary size for word piece modeling was set to 500. These settings were based on the results of preliminary experiments with the development set. For the multilingual training, we made three training scripts by concatenating the Ainu script with those of the other languages (JNAS, WSJ, and JNAS plus WSJ). The model was trained on these scripts until the 30th epoch. From the 31$^{\rm {st}}$ to the 40th epoch, the model was fine-tuned on the Ainu script. Phone units were used for JNAS and WSJ throughout the experiments. <<</Experimental Setting>>> <<<Results>>> Table 4 shows the phone error rates (PERs) and word error rates (WERs) for the speaker-closed and speaker-open settings. The `average' is weighted by the number of tokens in the ground-truth transcriptions of the speaker-wise evaluation sets. The word recognition accuracy reached about 80% in the speaker-closed setting. In the speaker-open setting, it was 60% on average and varied greatly from speaker to speaker (from 50% to 70%). The best phone accuracies in the speaker-closed and speaker-open settings were about 94% and 86%. Regardless of the setting, the syllable-based modeling yielded the best WER and PER. This suggests that syllables provide reasonable coverage and constraints for the Ainu language in a corpus of this size. The PERs of the word unit model were larger than those of the other units. This is because the word model often outputs the `$\langle $unk$\rangle $' symbol while the other unit models are able to output symbols similar in sound, as in the example below. In this example, the PER of the syllable model is 5% and that of the word model is 30% even though the WERs are the same. (The output of the syllable model is rewritten into words using the `$\langle $wb$\rangle $' symbol.)
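For reference, the PER and WER figures discussed here are conventionally computed from the Levenshtein distance between the reference and hypothesis token sequences; the minimal sketch below shows that standard computation and is not the paper's evaluation script.

def edit_distance(ref, hyp):
    # Standard Levenshtein distance over token sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)]

def error_rate(ref_tokens, hyp_tokens):
    # WER when the tokens are words, PER when the tokens are phones.
    return edit_distance(ref_tokens, hyp_tokens) / max(len(ref_tokens), 1)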
WERs are generally much larger than PERs, and this gap is further aggravated for the Ainu language. This is because, as mentioned in Section 2.1, the Ainu language has a lot of compound words, and the model may be confused about whether the output is multiple words or a single compound word. The actual outputs frequently contain errors, as in the example below. The WER of this example is 57% though the PER is zero. The results of multilingual training, in which the modeling unit is the syllable, are presented in Table 5. All error rates are weighted averages over all evaluated speakers. Here, `+ both' represents the result of training with both the JNAS and WSJ corpora. The multilingual training is effective in the speaker-open setting, providing a relative WER improvement of 10%. The JNAS corpus was more helpful than the WSJ corpus because of the similarities between the Ainu and Japanese languages. <<</Results>>> <<</Experimental Evaluation>>> <<<Summary>>> In this study, we first developed a speech corpus for Ainu ASR and then, using the end-to-end model with CTC and the attention mechanism, compared four modeling units: phones, syllables, word pieces, and words. The best performance was obtained with the syllable unit, with which the WERs in the speaker-closed and speaker-open settings were about 20% and 40%, respectively, while the PERs were about 6% and 14%. Multilingual training using the JNAS corpus improved the performance in the speaker-open setting. Future tasks include reducing the between-speaker performance differences by using speaker adaptation techniques. <<</Summary>>> <<<Acknowledgement>>> The data sets used in this study were provided by the Ainu Museum and the Nibutani Ainu Culture Museum. The authors would like to thank Prof. Osami Okuda of Sapporo Gakuin University for his useful advice on the Ainu language. <<</Acknowledgement>>> <<</Title>>>
{ "references": [ "relative WER improvement of 10%." ], "type": "extractive" }
2002.06675
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What is the difference between speaker-open and speaker-closed setting? Context: <<<Title>>> Speech Corpus of Ainu Folklore and End-to-end Speech Recognition for Ainu Language <<<Abstract>>> Ainu is an unwritten language that has been spoken by the Ainu people, who are one of the ethnic groups in Japan. It is recognized as critically endangered by UNESCO, and archiving and documentation of its language heritage are of paramount importance. Although a considerable amount of voice recordings of Ainu folklore has been produced and accumulated to save their culture, only a quite limited portion of them has been transcribed so far. Thus, we started a project of automatic speech recognition (ASR) for the Ainu language in order to contribute to the development of annotated language archives. In this paper, we report speech corpus development and the structure and performance of end-to-end ASR for Ainu. We investigated four modeling units (phone, syllable, word piece, and word) and found that the syllable-based model performed best in terms of both word and phone recognition accuracy, which were about 60% and over 85%, respectively, in the speaker-open condition. Furthermore, word and phone accuracies of 80% and 90% have been achieved in a speaker-closed setting. We also found that multilingual ASR training with additional speech corpora of English and Japanese further improves the speaker-open test accuracy. <<</Abstract>>> <<<Introduction>>> Automatic speech recognition (ASR) technology has made dramatic progress and has been brought to a practical level of performance, assisted by large speech corpora and the introduction of deep learning techniques. However, this is not the case for low-resource languages, which do not have large corpora such as those available for English and Japanese. There are about 5,000 languages in the world, over half of which are faced with the danger of extinction. Therefore, constructing ASR systems for these endangered languages is an important issue. The Ainu are an indigenous people of northern Japan and Sakhalin in Russia, but their language has been fading away ever since the Meiji Restoration and Modernization. On the other hand, active efforts to preserve their culture have been initiated by the Government of Japan, and an exceptionally large amount of oral recordings has been made. Nevertheless, a majority of the recordings have not been transcribed and utilized effectively. Since transcribing them requires expertise in the Ainu language, not many people are able to work on this task. Hence, there is a strong demand for an ASR system for the Ainu language. We started a project of Ainu ASR, and this article is the first report of this project. We have built an Ainu speech corpus based on data provided by the Ainu Museum and the Nibutani Ainu Culture Museum. The oral recordings in this data consist of folklore and folk songs, and we chose the former to construct the ASR model. The end-to-end method of speech recognition has been proposed recently and has achieved performance comparable to that of the conventional DNN-HMM hybrid modeling BIBREF0, BIBREF1, BIBREF2. End-to-end systems do not have a complex hierarchical structure and do not require expertise in target languages such as their phonology and morphology. In this study, we adopt the attention mechanism BIBREF3, BIBREF4 and combine it with Connectionist Temporal Classification (CTC) BIBREF5, BIBREF6.
In this work, we investigate the modeling unit and the utilization of corpora of other languages. <<</Introduction>>> <<<Overview of the Ainu Language>>> This section briefly overviews the background of the data collection, the Ainu language, and its writing system. After that, we describe how Ainu recordings are classified and review previous works dealing with the Ainu language. <<<Background>>> The Ainu people had a total population of about 20,000 in the mid-19th century BIBREF7, and they used to live widely distributed in the area that includes Hokkaido, Sakhalin, and the Kuril Islands. The number of native speakers, however, rapidly decreased through the assimilation policy after the late 19th century. At present, there are fewer than 10 native speakers, and UNESCO listed their language as critically endangered in 2009 BIBREF8. In response to this situation, Ainu folklore and songs have been actively recorded since the late 20th century in efforts initiated by the Government of Japan. For example, the Ainu Museum started audio recording of Ainu folklore in 1976 with the cooperation of a few Ainu elders, which resulted in the collection of speech data with a total duration of roughly 700 hours. This kind of data should be a key to the understanding of Ainu culture, but most of it is not transcribed and fully studied yet. <<</Background>>> <<<The Ainu Language and its Writing System>>> The Ainu language is an agglutinative language and has some similarities to Japanese. However, its genealogical relationship with other languages has not been clearly understood yet. Among its features, such as closed syllables and personal verbal affixes, one important feature is that there are many compound words. For example, the word atuykorkamuy (meaning “a sea turtle”) can be disassembled into atuy (“the sea”), kor (“to have”), and kamuy (“god”). Although the Ainu people did not traditionally have a writing system, the Ainu language is currently written following the examples in the reference book “Akor itak” BIBREF9. With this writing system, it is transcribed with sixteen Roman letters {a, c, e, h, i, k, m, n, o, p, r, s, t, u, w, y}. Since each of these letters corresponds to a unique pronunciation, we call them “phones” for convenience. In addition, the symbol {=} is used for connecting a verb and a personal affix, and { ' } is used to represent the pharyngeal stop. For the purpose of transcribing recordings, the consonant symbols {b, d, g, z} are additionally used to transcribe Japanese sounds the speakers utter. The symbols { _ , __ } are used to transcribe drops and liaisons of phones. An example is shown below. <<</The Ainu Language and its Writing System>>> <<<Types of Ainu Recordings>>> The Ainu oral traditions are classified into three types: “yukar” (heroic epics), “kamuy yukar” (mythic epics), and “uwepeker” (prose tales). Yukar and kamuy yukar are recited in rhythm, while uwepeker is not. In this study, we focus on the prose tales as the first step. <<</Types of Ainu Recordings>>> <<<Previous Work>>> There have so far been a few studies dealing with the Ainu language. ainulrec built a dependency treebank in the scheme of Universal Dependencies. postag developed tools for part-of-speech (POS) tagging and word segmentation. Ainu speech recognition was tried by ainutrans with 2.5 hours of Ainu folklore data, even though the Ainu language was not their main target. Their phone error rate was about 40%, which is not an accuracy level suitable for practical use yet.
It appears that there has not been a substantial Ainu speech recognition study yet that utilizes corpora of a reasonable size. Therefore, our first step was to build a speech corpus for ASR based on the data sets provided by the Ainu Museum and the Nibutani Ainu Culture Museum. <<</Previous Work>>> <<</Overview of the Ainu Language>>> <<<Ainu Speech Corpus>>> In this section, we explain the content of the data sets and how we modified it for our ASR corpus. <<<Numbers of Speakers and Episodes>>> The corpus we have prepared for ASR in this study is composed of text and speech. Table 1 shows the number of episodes and the total speech duration for each speaker. Among the total of eight speakers, the data of the speakers KM and UT is from the Ainu Museum, and the rest is from the Nibutani Ainu Culture Museum. All speakers are female. The length of the recording for a speaker varies depending on the circumstances at the recording times. A sample text and its English translation are shown in Table 2. <<</Numbers of Speakers and Episodes>>> <<<Data Annotation>>> For efficient training of the ASR model, we have made some modifications to the provided data. First, from the transcripts explained in Section 2.1, the symbols {_ , __ , '} have been removed, as seen in the example below. Though the equal symbol (`=') does not represent a sound, we keep it because it is used in almost all of the Ainu documents and provides grammatical information. To train an ASR system, the speech data needs to be segmented into a set of manageable chunks. For the ease of automatic processing, we chose to segment speech into inter-pausal units (IPUs) BIBREF10, each of which is a stretch of speech bounded by pauses. The number of IPUs for each speaker is shown in Table 1. <<</Data Annotation>>> <<</Ainu Speech Corpus>>> <<<End-to-end Speech Recognition>>> In this section, the two approaches to end-to-end speech recognition that we adopt in this work are summarized. Then, we introduce the four modeling units we investigate, i.e., phone, syllable, word piece, and word. We also discuss the multilingual training that we adopt for tackling the low-resource problem. <<<End-to-end Modeling>>> End-to-end models have an architecture much simpler than that of conventional DNN-HMM hybrid models. Since they predict character or word symbols directly from acoustic features, pronunciation dictionaries and language modeling are not required explicitly. In this paper, we utilize two kinds of end-to-end models, namely, Connectionist Temporal Classification (CTC) and the attention-based encoder-decoder model. CTC augments the output symbol set with the “blank” symbol `$\phi $'. It outputs symbols by contracting frame-wise outputs from recurrent neural networks (RNNs). This is done by first collapsing repeated symbols and then removing all blank symbols, as in the following example: a a $\phi $ b b $\phi $ b $\rightarrow $ a b b. The probability of an output sequence $\mathbf {L}$ for an input acoustic feature sequence $\mathbf {X}$, where $|\mathbf {L}| < |\mathbf {X}|$, is defined as $P(\mathbf {L}|\mathbf {X}) = \sum _{\pi \in \mathcal {B}^{-1}(\mathbf {L})} P(\pi |\mathbf {X})$ (1). $\mathcal {B}$ is a function to contract the outputs of RNNs, so $\mathcal {B}^{-1}(\mathbf {L})$ means the set of symbol sequences that are reduced to $\mathbf {L}$. The model is trained to maximize (1). The attention-based encoder-decoder model is another method for mapping between two sequences with different lengths. It has two RNNs called the “encoder” and the “decoder”.
In the naive encoder-decoder model, the encoder converts the input sequence into a single context vector, which is the last hidden state of the encoder RNN, from which the decoder infers output symbols. In an attention-based model, the context vector $\mathbf {c}_l$ at the $l$-th decoding step is the sum of the products of all encoder outputs $h_1, ... , h_\mathrm {T}$ and the $l$-th attention weights $\alpha _{1,l}, ... , \alpha _{\mathrm {T},l}$, as shown in (2): $\mathbf {c}_l = \sum _{t=1}^{\mathrm {T}} \alpha _{t,l} h_t$ (2). Here, $\mathrm {T}$ is the length of the encoder output. The attention weights $\alpha _{1,l}, ... , \alpha _{\mathrm {T},l}$ indicate the relative importance of the encoder output frames for the $l$-th decoding step, and the model parameters that generate these weights are determined through end-to-end training. In our model, the attention-based model and the CTC share the encoder and are optimized simultaneously, as shown in Figure 1 BIBREF11. Long Short-Term Memory (LSTM) BIBREF12 is used for the RNNs in the encoder and the decoder. <<</End-to-end Modeling>>> <<<Modeling Units>>> In conventional DNN-HMM hybrid modeling, the acoustic model outputs probabilities of triphone states for each acoustic feature, which are then converted into the most likely word sequence. An end-to-end model, on the other hand, has some degree of freedom in the modeling unit other than phones, and there are some studies that use characters or words as a unit BIBREF13, BIBREF14. A word-unit-based end-to-end model can take long context into consideration at inference time, but it has a data sparsity problem due to its large vocabulary size. Though a phone-unit-based model does not have such a problem, it cannot capture such long context. Which unit to adopt depends on the size of the available corpora. In addition to these two units, a word piece unit, which is defined by automatically dividing a word into frequent parts, has been proposed BIBREF15, BIBREF16, and its vocabulary size can be determined almost freely. In this paper, we investigate the modeling unit for end-to-end Ainu speech recognition since the optimal unit for a corpus of this size is not obvious BIBREF17. It is presupposed that all units can be converted into word units automatically. The candidates are phone, syllable, word piece (WP), and word. Examples of them are shown in Table 3, and the details of each unit are described below. <<<Phone>>> As mentioned in Section 2.1, we regard the Roman letters as phones. `=' and the special symbol `$\langle $wb$\rangle $', which means a word boundary, are added to make it possible to convert the output into a sequence of words like the `original' in Table 3. <<</Phone>>> <<<Syllable>>> A syllable of the Ainu language takes the form of either V, CV, VC, or CVC, where `C' and `V' mean consonant and vowel, respectively. The phones {a, e, i, o, u} are vowels and the rest of the Roman letters in Section 2.2 are consonants. In this work, every word is divided into syllables by the following procedure. (1) A word with a single letter is unchanged. (2) Two consecutive Cs or Vs are given a syllable boundary between them: R$^*${CC, VV}R$^*$ $\rightarrow $ R$^*${C-C, V-V}R$^*$ (R $\in $ {C, V}). (3) Put a syllable boundary after the segment-initial V if it is followed by at least two phones: VCR$^+$ $\rightarrow $ V-CR$^+$. (4) Put a syllable boundary after CV repeatedly from left to right until only CV or CVC is left: (CV)$^*${CV, CVC} $\rightarrow $ (CV-)$^*${CV, CVC}. In addition, `=' and `$\langle $wb$\rangle $' are added as explained in Section 4.2.1.
This procedure does not always generate a morphologically relevant syllable segmentation. For example, the word isermakus (meaning “(for a god) to protect from behind”) is divided as i-ser-ma-kus, but the right syllabification is i-ser-mak-us. <<</Syllable>>> <<<Word Piece>>> Byte pair encoding (BPE) BIBREF18 and unigram language modeling BIBREF19 are alternative methods for dividing a word into word pieces. The former repeatedly replaces the most common character pair with a new single symbol until the vocabulary reaches the intended size. The latter decides the segmentation so as to maximize the likelihood of the sequence. We adopt the latter and use the open-source software SentencePiece BIBREF20. With this tool, `$\langle $wb$\rangle $' and other units are often merged to constitute a single piece, as seen in Table 3. <<</Word Piece>>> <<<Word>>> The original text can be segmented into words separated by spaces. To make the vocabulary smaller for ease of training, `=' is treated as a word and infrequent words are replaced with the special label `$\langle $unk$\rangle $'. As seen in Table 3, `a=saha' is dealt with as three words (`a', `=', `saha') and the word `kokopan' is replaced with `$\langle $unk$\rangle $'. <<</Word>>> <<</Modeling Units>>> <<<Multilingual Training>>> When a sufficient amount of data is not available for the target language, ASR model training can be enhanced by taking advantage of data from other languages BIBREF21, BIBREF22. There are some similarities between the Ainu and Japanese languages BIBREF23. For instance, both have almost the same set of vowels and do not have consonant clusters (like `str' in `strike' in English). Hence, multilingual training with a Japanese corpus is expected to be effective. In addition, an English corpus is used for the purpose of comparison. The corpora used are the JNAS corpus BIBREF24 (in Japanese) and the WSJ corpus BIBREF25 (in English). JNAS comprises roughly 80 hours of speech from 320 speakers, and WSJ has about 70 hours of speech from 280 speakers. In the multilingual training, the encoder and the attention module are shared among the Ainu ASR model and the models for the other languages, and they are trained using data for all languages. Figure 2 shows the architecture for multilingual learning with two corpora. When the input acoustic features are from the Ainu ASR corpus, they go through the shared encoder and attention module and are delivered into the decoder on the left side in Figure 2 as a context vector. In this case, the right-side decoder is not trained. <<</Multilingual Training>>> <<</End-to-end Speech Recognition>>> <<<Experimental Evaluation>>> In this section, the settings and results of the ASR experiments are described and discussed. <<<Data Setup>>> The ASR experiments were performed in the speaker-open condition as well as the speaker-closed condition. In the speaker-closed condition, two episodes were set aside from each speaker as development and test sets. As a result, the total sizes of the development and test sets turn out to be 1585 IPUs spanning 2 hours and 23 minutes and 1841 IPUs spanning 2 hours and 48 minutes, respectively. The ASR model is trained with the rest of the data. In the speaker-open condition, all the data except for the test speaker's were used for training. As it would be difficult to train the model if all of the data of speaker KM or UT were removed, experiments using their speaker-open conditions were not conducted.
<<</Data Setup>>> <<<Experimental Setting>>> The input acoustic features were 120-dimensional vectors made by frame stacking BIBREF26 three 40-dimensional log-mel filter bank features at contiguous time frames. The window length and the frame shift were set to 25 ms and 10 ms, respectively. The encoder was composed of five BiLSTM layers, and the attention-based decoder had a single LSTM layer. Each LSTM had 320 cells, and their weights were randomly initialized using a uniform distribution DBLP:journals/corr/HeZR015, with biases set to zero. The fully connected layers were initialized following $\mathcal {U}{(-0.1, 0.1)}$. Weight decay BIBREF27 with a rate of $10^{-5}$ and dropout BIBREF28 following $\mathcal {B}e(0.2)$ were used to alleviate overfitting. The parameters were optimized with Adam BIBREF29. The learning rate was $10^{-3}$ at first and was multiplied by $10^{-1}$ at the beginning of the 31st and 36th epochs BIBREF30. The mini-batch size was 30, and the utterances (IPUs) were sorted in ascending order of length. To stabilize the training, we removed utterances longer than 12 seconds. The loss function of the model was a linear sum of the losses from the CTC and the attention-based decoder (i.e., $\lambda \mathcal {L}_{\rm CTC} + (1 - \lambda ) \mathcal {L}_{\rm att}$), where $\lambda $ was set to 0.5. Throughout all experiments, phone labels were used to train the auxiliary CTC task because it is reported that a hierarchical architecture, using few and general labels in the auxiliary task, improves the performance BIBREF31. Strictly speaking, the number of units for each modeling unit type depends on the training set, but there are roughly 25 phone, 500 syllable, and 5,000 word units, including special symbols that represent the start and end of a sentence. Words occurring less than twice were replaced with `$\langle $unk$\rangle $'. The vocabulary size for word piece modeling was set to 500. These settings were based on the results of preliminary experiments with the development set. For the multilingual training, we made three training scripts by concatenating the Ainu script with those of the other languages (JNAS, WSJ, and JNAS plus WSJ). The model was trained on these scripts until the 30th epoch. From the 31$^{\rm {st}}$ to the 40th epoch, the model was fine-tuned on the Ainu script. Phone units were used for JNAS and WSJ throughout the experiments. <<</Experimental Setting>>> <<<Results>>> Table 4 shows the phone error rates (PERs) and word error rates (WERs) for the speaker-closed and speaker-open settings. The `average' is weighted by the number of tokens in the ground-truth transcriptions of the speaker-wise evaluation sets. The word recognition accuracy reached about 80% in the speaker-closed setting. In the speaker-open setting, it was 60% on average and varied greatly from speaker to speaker (from 50% to 70%). The best phone accuracies in the speaker-closed and speaker-open settings were about 94% and 86%. Regardless of the setting, the syllable-based modeling yielded the best WER and PER. This suggests that syllables provide reasonable coverage and constraints for the Ainu language in a corpus of this size. The PERs of the word unit model were larger than those of the other units. This is because the word model often outputs the `$\langle $unk$\rangle $' symbol while the other unit models are able to output symbols similar in sound, as in the example below. In this example, the PER of the syllable model is 5% and that of the word model is 30% even though the WERs are the same. (The output of the syllable model is rewritten into words using the `$\langle $wb$\rangle $' symbol.)
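To summarize the experimental setting above in one place, the sketch below collects the stated hyper-parameters into a Python dictionary and shows the joint loss; the dictionary layout and function names are illustrative, and the exact interpolation form of the loss is an assumption consistent with the standard hybrid CTC/attention formulation.

# Hyper-parameters gathered from the experimental setting described above.
ASR_CONFIG = {
    "feature_dim": 120,           # three stacked 40-dim log-mel filter bank frames
    "window_ms": 25, "shift_ms": 10,
    "encoder_bilstm_layers": 5, "decoder_lstm_layers": 1, "lstm_cells": 320,
    "dropout": 0.2, "weight_decay": 1e-5,
    "optimizer": "adam", "initial_lr": 1e-3,
    "lr_decay_epochs": (31, 36), "lr_decay_factor": 0.1,
    "batch_size": 30, "max_utterance_sec": 12,
    "ctc_weight": 0.5,            # lambda in the joint loss
}

def joint_loss(ctc_loss, attention_loss, lam=ASR_CONFIG["ctc_weight"]):
    # Linear combination of the CTC and attention-decoder losses with lambda = 0.5.
    return lam * ctc_loss + (1.0 - lam) * attention_loss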
WERs are generally much larger than PERs, and this gap is further aggravated for the Ainu language. This is because, as mentioned in Section 2.1, the Ainu language has a lot of compound words, and the model may be confused about whether the output is multiple words or a single compound word. The actual outputs frequently contain errors, as in the example below. The WER of this example is 57% though the PER is zero. The results of multilingual training, in which the modeling unit is the syllable, are presented in Table 5. All error rates are weighted averages over all evaluated speakers. Here, `+ both' represents the result of training with both the JNAS and WSJ corpora. The multilingual training is effective in the speaker-open setting, providing a relative WER improvement of 10%. The JNAS corpus was more helpful than the WSJ corpus because of the similarities between the Ainu and Japanese languages. <<</Results>>> <<</Experimental Evaluation>>> <<<Summary>>> In this study, we first developed a speech corpus for Ainu ASR and then, using the end-to-end model with CTC and the attention mechanism, compared four modeling units: phones, syllables, word pieces, and words. The best performance was obtained with the syllable unit, with which the WERs in the speaker-closed and speaker-open settings were about 20% and 40%, respectively, while the PERs were about 6% and 14%. Multilingual training using the JNAS corpus improved the performance in the speaker-open setting. Future tasks include reducing the between-speaker performance differences by using speaker adaptation techniques. <<</Summary>>> <<<Acknowledgement>>> The data sets used in this study were provided by the Ainu Museum and the Nibutani Ainu Culture Museum. The authors would like to thank Prof. Osami Okuda of Sapporo Gakuin University for his useful advice on the Ainu language. <<</Acknowledgement>>> <<</Title>>>
{ "references": [ "In the speaker-closed condition, two episodes were set aside from each speaker as development and test sets.,In the speaker-open condition, all the data except for the test speaker's were used for training" ], "type": "extractive" }
1909.08041
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: How do they train the retrieval modules? Context: <<<Title>>> Revealing the Importance of Semantic Retrieval for Machine Reading at Scale <<<Abstract>>> Machine Reading at Scale (MRS) is a challenging task in which a system is given an input query and is asked to produce a precise output by "reading" information from a large knowledge base. The task has gained popularity with its natural combination of information retrieval (IR) and machine comprehension (MC). Advancements in representation learning have led to separate progress in both IR and MC; however, very few studies have examined the relationship and combined design of retrieval and comprehension at different levels of granularity for the development of MRS systems. In this work, we give general guidelines on system design for MRS by proposing a simple yet effective pipeline system with special consideration on hierarchical semantic retrieval at both the paragraph and sentence levels, and their potential effects on the downstream task. The system is evaluated on both fact verification and open-domain multihop QA, achieving state-of-the-art results on the leaderboard test sets of both FEVER and HOTPOTQA. To further demonstrate the importance of semantic retrieval, we present ablation and analysis studies to quantify the contribution of neural retrieval modules at both the paragraph level and the sentence level, and illustrate that intermediate semantic retrieval modules are vital not only for effectively filtering upstream information and thus saving downstream computation, but also for shaping the upstream data distribution and providing better data for downstream modeling. Code/data made publicly available at: this https URL <<</Abstract>>> <<<Introduction>>> Extracting external textual knowledge for machine comprehension systems has long been an important yet challenging problem. Success requires not only precise retrieval of the relevant information sparsely stored in a large knowledge source but also a deep understanding of both the selected knowledge and the input query to give the corresponding output. Initiated by chen2017drqa, the task was termed Machine Reading at Scale (MRS), seeking to provide a challenging situation where machines are required to do both semantic retrieval and comprehension at different levels of granularity for the final downstream task. Progress on MRS has been made by improving individual IR or comprehension sub-modules with recent advancements in representation learning BIBREF0, BIBREF1, BIBREF2. However, partially due to the lack of annotated data for intermediate retrieval in an MRS setting, the evaluations were done mainly on the final downstream task, with much less consideration of the intermediate retrieval performance. This led to the convention that upstream retrieval modules mostly focus on getting better coverage of the downstream information such that the upper bound of the downstream score can be improved, rather than on finding more exact information. This convention is misaligned with the nature of MRS, where equal effort should be put into emphasizing the models' joint performance and optimizing the relationship between the semantic retrieval and the downstream comprehension sub-tasks.
Hence, to shed light on the importance of semantic retrieval for downstream comprehension tasks, we start by establishing a simple yet effective hierarchical pipeline system for MRS using Wikipedia as the external knowledge source. The system is composed of a term-based retrieval module, two neural modules for paragraph-level retrieval and sentence-level retrieval, and a neural downstream task module. We evaluated the system on two recent large-scale open-domain benchmarks for fact verification and multi-hop QA, namely FEVER BIBREF3 and HotpotQA BIBREF4, in which retrieval performance can also be evaluated accurately since intermediate annotations on evidence are provided. Our system achieves state-of-the-art results with 45.32% for answer EM and 25.14% joint EM on HotpotQA (8% absolute improvement on answer EM and doubling the joint EM over the previous best results) and with 67.26% on FEVER score (3% absolute improvement over previously published systems). We then provide empirical studies to validate our design decisions. Specifically, we prove the necessity of both paragraph-level retrieval and sentence-level retrieval for maintaining good performance, and further illustrate that a better semantic retrieval module is not only beneficial to achieving high recall and keeping a high upper bound for the downstream task, but also plays an important role in shaping the downstream data distribution and providing more relevant and high-quality data for downstream sub-module training and inference. These mechanisms are vital for a good MRS system on both QA and fact verification. <<</Introduction>>> <<<Related Work>>> Machine Reading at Scale: First proposed and formalized in chen2017drqa, MRS has gained popularity with an increasing amount of work on both dataset collection BIBREF5, BIBREF6 and MRS model development BIBREF7, BIBREF8, BIBREF9. In some previous work BIBREF10, paragraph-level retrieval modules were mainly for improving the recall of required information, while in some other works BIBREF4, sentence-level retrieval modules were merely for solving the auxiliary sentence selection task. In our work, we focus on revealing the relationship between semantic retrieval at different granularity levels and the downstream comprehension task. To the best of our knowledge, we are the first to apply and optimize neural semantic retrieval at both the paragraph and sentence levels for MRS. Automatic Fact Checking: Recent work BIBREF11 formalized the task of automatic fact checking from the viewpoint of machine learning and NLP. The release of FEVER BIBREF3 stimulated many recent developments BIBREF12, BIBREF13, BIBREF14 of data-driven neural networks for automatic fact checking. We also consider this task as MRS because it shares almost the same setup, except that the downstream task is verification or natural language inference (NLI) rather than QA. Information Retrieval: The success of deep neural networks has inspired their application to information retrieval (IR) tasks BIBREF15, BIBREF16, BIBREF17, BIBREF18. In typical IR settings, systems are required to retrieve and rank BIBREF19 elements from a collection of documents based on their relevance to the query. This setting might be very different from the retrieval in MRS, where systems are asked to select facts needed to answer a question or verify a statement. We refer to the retrieval in MRS as Semantic Retrieval since it emphasizes semantic understanding.
<<</Related Work>>> <<<Method>>> In previous works, an MRS system can be complicated, with different sub-components processing different retrieval and comprehension sub-tasks at different levels of granularity, and with some sub-components intertwined. For interpretability considerations, we used a unified pipeline setup. The overview of the system is in Fig. FIGREF2. To be specific, we formulate the MRS system as a function that maps an input tuple $(q, \mathbf {K})$ to an output tuple $(\hat{y}, \mathbf {S})$, where $q$ indicates the input query, $\mathbf {K}$ is the textual KB, $\hat{y}$ is the output prediction, and $\mathbf {S}$ is the set of supporting sentences selected from Wikipedia. Let $\mathbf {E}$ denote the set of necessary evidence or facts selected from $\mathbf {K}$ for the prediction. For a QA task, $q$ is the input question and $\hat{y}$ is the predicted answer. For a verification task, $q$ is the input claim and $\hat{y}$ is the predicted truthfulness of the input claim. For all tasks, $\mathbf {K}$ is Wikipedia. The system procedure is listed below: (1) Term-Based Retrieval: To begin with, we used a combination of the TF-IDF method and a rule-based keyword matching method to narrow the scope from the whole of Wikipedia down to a set of related paragraphs; this is a standard procedure in MRS BIBREF20, BIBREF10, BIBREF12. The focus of this step is to efficiently select a candidate set $\mathbf {P_I}$ that covers the information as much as possible ($\mathbf {P_I} \subset \mathbf {K}$) while keeping the size of the set small enough for downstream processing. (2) Paragraph-Level Neural Retrieval: After obtaining the initial set, we compare each paragraph in $\mathbf {P_I}$ with the input query $q$ using a neural model (which will be explained later in Sec SECREF4). The outputs of the neural model are treated as the relatedness scores between the input query and the paragraphs. The scores are used to sort all the upstream paragraphs. Then, $\mathbf {P_I}$ is narrowed to a new set $\mathbf {P_N}$ ($\mathbf {P_N} \subset \mathbf {P_I}$) by selecting the top $k_p$ paragraphs having relatedness scores higher than some threshold value $h_p$ (going out from the P-Level grey box in Fig. FIGREF2). $k_p$ and $h_p$ are chosen to keep a good balance between the recall and precision of the paragraph retrieval. (3) Sentence-Level Neural Retrieval: Next, we select the evidence at the sentence level by decomposing all the paragraphs in $\mathbf {P_N}$ into sentences. Similarly, each sentence is compared with the query using a neural model (see details in Sec SECREF4), and we obtain a set of sentences $\mathbf {S} \subset \mathbf {P_N}$ for the downstream task by choosing the top $k_s$ sentences with output scores higher than some threshold $h_s$ (S-Level grey box in Fig. FIGREF2). During evaluation, $\mathbf {S}$ is often evaluated against some ground-truth sentence set denoted as $\mathbf {E}$. (4) Downstream Modeling: At the final step, we simply applied task-specific neural models (e.g., QA and NLI) on the concatenation of all the sentences in $\mathbf {S}$ and the query, obtaining the final output $\hat{y}$. In some experiments, we modified the setup for certain analysis or ablation purposes, which will be explained individually in Sec SECREF6. <<<Modeling and Training>>> Throughout all our experiments, we used BERT-Base BIBREF2 to provide state-of-the-art contextualized modeling of the input text.
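To make steps (2) and (3) concrete, here is a minimal sketch in Python of the shared top-k-plus-threshold filtering that both neural retrieval levels apply; the scorer and sentence-splitter callables are abstract stand-ins, and all names are illustrative rather than taken from the released code.

def filter_by_score(candidates, scores, k, h):
    # Keep the top-k candidates whose relatedness score exceeds threshold h.
    ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
    return [cand for cand, score in ranked if score > h][:k]

def hierarchical_retrieval(query, p_initial, score_paragraphs, score_sentences,
                           split_into_sentences, k_p, h_p, k_s, h_s):
    # Step (2): narrow the term-based candidate set P_I down to P_N.
    p_n = filter_by_score(p_initial, score_paragraphs(query, p_initial), k_p, h_p)
    # Step (3): decompose P_N into sentences and select the evidence set S.
    sentences = [s for para in p_n for s in split_into_sentences(para)]
    evidence = filter_by_score(sentences, score_sentences(query, sentences), k_s, h_s)
    return p_n, evidence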
Semantic Retrieval: We treated the neural semantic retrieval at both the paragraph and sentence level as binary classification problems with models' parameters updated by minimizing binary cross entropy loss. To be specific, we fed the query and context into BERT as “[$\mathit {CLS}$] $Query$ [$\mathit {SEP}$] $Context$ [$\mathit {SEP}$]”. We applied an affine layer and a sigmoid activation on the last-layer output of the [$\mathit {CLS}$] token to obtain a scalar relatedness score. The parameters were updated with the objective function $-\sum _{i \in \mathbf {T}^{p/s}_{pos}} \log (\hat{p}_i) - \sum _{i \in \mathbf {T}^{p/s}_{neg}} \log (1 - \hat{p}_i)$, where $\hat{p}_i$ is the output of the model, $\mathbf {T}^{p/s}_{pos}$ is the positive set and $\mathbf {T}^{p/s}_{neg}$ is the negative set. As shown in Fig. FIGREF2, at sentence level, ground-truth sentences were served as positive examples while other sentences from upstream retrieved set were served as negative examples. Similarly at the paragraph-level, paragraphs having any ground-truth sentence were used as positive examples and other paragraphs from the upstream term-based retrieval processes were used as negative examples. QA: We followed devlin2018bert for QA span prediction modeling. To correctly handle yes-or-no questions in HotpotQA, we fed the two additional “$\mathit {yes}$” and “$\mathit {no}$” tokens between [$\mathit {CLS}$] and the $Query$, where the supervision was given to the second or the third token when the answer is “yes” or “no”, such that they can compete with all other predicted spans. The parameters of the neural QA model were trained to maximize the log probabilities of the true start and end indexes, $\sum _{i} \big ( \log (\hat{y}^s_i) + \log (\hat{y}^e_i) \big )$, where $\hat{y}^s_i$ and $\hat{y}^e_i$ are the predicted probabilities of the ground-truth start and end positions for the $i$th example, respectively. It is worth noting that we used ground truth supporting sentences plus some other sentences sampled from upstream retrieved set as the context for training the QA module such that it will adapt to the upstream data distribution during inference. Fact Verification: Following Thorne18Fever, we formulate downstream fact verification as the 3-way natural language inference (NLI) classification problem BIBREF21, BIBREF22 and train the model with 3-way cross entropy loss. The input format is the same as that of semantic retrieval and the objective is $\mathcal {J}_{ver} = -\sum _{i} \mathbf {y}_i \cdot \log (\hat{\mathbf {y}}_i)$, where $\hat{\mathbf {y}}_i \in \mathbf {R^3}$ denotes the model's output for the three verification labels, and $\mathbf {y}_i$ is a one-hot embedding for the ground-truth label. For verifiable queries, we used ground truth evidential sentences plus some other sentences sampled from upstream retrieved set as new evidential context for NLI. For non-verifiable queries, we only used sentences sampled from upstream retrieved set as context because those queries are not associated with ground truth evidential sentences. This detail is important for the model to identify non-verifiable queries and will be explained more in Sec SECREF6. Additional training details and hyper-parameter selections are in the Appendix (Sec. SECREF8; Table TABREF27). It is worth noting that each sub-module in the system relies on its preceding sub-module to provide data both for training and inference. This means that there will be upstream data distribution misalignment if we train a sub-module in isolation without considering the properties of its preceding upstream module. The problem is similar to the concept of internal covariate shift BIBREF23, where the distribution of each layer's inputs changes inside a neural network.
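A minimal sketch of the retrieval scoring head and its binary cross-entropy objective described above, written in PyTorch; the encoder is treated as a black box that returns the last-layer [CLS] vector, and the class and variable names are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn as nn

class RelevanceHead(nn.Module):
    # Affine layer plus sigmoid on the [CLS] representation, giving a scalar score.
    def __init__(self, hidden_size=768):
        super().__init__()
        self.affine = nn.Linear(hidden_size, 1)

    def forward(self, cls_vectors):              # (batch, hidden_size)
        return torch.sigmoid(self.affine(cls_vectors)).squeeze(-1)

def retrieval_loss(scores, labels):
    # Binary cross-entropy over positive (1) and negative (0) query-context pairs.
    return nn.functional.binary_cross_entropy(scores, labels.float())

# Usage sketch: cls_vectors would come from BERT-Base's last layer for the
# query-context inputs; labels mark ground-truth paragraphs/sentences as
# positives and other upstream-retrieved candidates as negatives.
head = RelevanceHead()
dummy_cls = torch.randn(4, 768)
dummy_labels = torch.tensor([1.0, 0.0, 0.0, 1.0])
loss = retrieval_loss(head(dummy_cls), dummy_labels)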
Therefore, it makes sense to study this issue in a joint MRS setting rather than in a typical supervised learning setting, where training and test data tend to be fixed and modules are isolated. We release our code and the organized data both for reproducibility and for providing an off-the-shelf testbed to facilitate future research on MRS. <<</Modeling and Training>>> <<</Method>>> <<<Experimental Setup>>> MRS requires a system not only to retrieve relevant content from textual KBs but also to possess enough understanding ability to solve the downstream task. To understand the impact or importance of semantic retrieval on the downstream comprehension, we established a unified experimental setup that involves two different downstream tasks, i.e., multi-hop QA and fact verification. <<<Tasks and Datasets>>> HotpotQA: This dataset is a recent large-scale QA dataset that brings in new features: (1) the questions require finding and reasoning over multiple documents; (2) the questions are diverse and not limited to pre-existing KBs; (3) it offers a new comparison question type BIBREF4. We experimented with our system on HotpotQA in the fullwiki setting, where a system must find the answer to a question in the scope of the entire Wikipedia, an ideal MRS setup. The sizes of the train, dev and test splits are 90,564, 7,405, and 7,405. More importantly, HotpotQA also provides human-annotated sentence-level supporting facts that are needed to answer each question. Those intermediate annotations enable evaluation of models' joint ability on both fact retrieval and answer span prediction, facilitating our direct analysis of the explainable predictions and their relation to the upstream retrieval. FEVER: The Fact Extraction and VERification dataset BIBREF3 is a recent dataset collected to facilitate automatic fact checking. The work also proposes a benchmark task in which, given an arbitrary input claim, candidate systems are asked to select evidential sentences from Wikipedia and label the claim as either Support, Refute, or Not Enough Info, if the claim can be verified to be true, false, or non-verifiable, respectively, based on the evidence. The sizes of the train, dev and test splits are 145,449, 19,998, and 9,998. Similar to HotpotQA, the dataset provides annotated sentence-level facts needed for the verification. These intermediate annotations can provide an accurate evaluation of the results of semantic retrieval and thus suit well the analysis of the effects of the retrieval modules on downstream verification. As in chen2017drqa, we use Wikipedia as our unique knowledge base because it is a comprehensive and self-evolving information source often used to facilitate intelligent systems. Moreover, as Wikipedia is the source for both HotpotQA and FEVER, it helps standardize any further analysis of the effects of semantic retrieval on the two different downstream tasks. <<</Tasks and Datasets>>> <<<Metrics>>> Following Thorne18Fever and yang2018hotpotqa, we used the annotated sentence-level facts to calculate the F1, Precision, and Recall scores for evaluating sentence-level retrieval. Similarly, we labeled all the paragraphs that contain any ground-truth fact as ground-truth paragraphs and used the same three metrics for paragraph-level retrieval evaluation. For HotpotQA, following yang2018hotpotqa, we used exact match (EM) and F1 metrics for QA span prediction evaluation, and used the joint EM and F1 to evaluate models' joint performance on both retrieval and QA.
The joint EM and F1 are calculated as: $P_j = P_a \cdot P_s; R_j = R_a \cdot R_s; F_j = \frac{2P_j \cdot R_j}{P_j + R_j}; \text{EM}_j = \text{EM}_a \cdot \text{EM}_s$, where $P$, $R$, and $\text{EM}$ denote precision, recall and EM; the subscripts $a$ and $s$ indicate that the scores are for the answer span and the supporting facts. For the FEVER task, following Thorne18Fever, we used the Label Accuracy for evaluating downstream verification and the FEVER Score for joint performance. The FEVER Score awards one point for each example with the correct predicted label only if all ground truth facts are contained in the predicted fact set, which may have at most 5 elements. We also used the Oracle Score for the two retrieval modules. The scores were proposed in nie2019combining and indicate the upper bound of the final FEVER Score at one intermediate layer, assuming all downstream modules are perfect. All scores are averaged over examples in the whole evaluation set. <<</Metrics>>> <<</Experimental Setup>>> <<<Results on Benchmarks>>> We chose the best system based on the dev set, and used it for submitting private test predictions on both FEVER and HotpotQA. As can be seen in Table TABREF8, with the proposed hierarchical system design, the whole pipeline system achieves a new state-of-the-art on HotpotQA with large-margin improvements on all the metrics. More specifically, the biggest improvement comes from the EM for the supporting facts, which in turn leads to a doubling of the joint EM over previous best results. The scores for answer predictions are also higher than all previous best results, with a $\sim $8-point absolute increase on EM and $\sim $9 absolute points on F1. All the improvements are consistent between test and dev set evaluation. Similarly for FEVER, we show the F1 for evidence, the Label Accuracy, and the FEVER Score (same as the benchmark evaluation) for models in Table TABREF9. Our system obtained substantially higher scores than all previously published results, with a $\sim $4 and $\sim $3 points absolute improvement on Label Accuracy and FEVER Score. In particular, the system gains 74.62 on the evidence F1, 22 points greater than that of the second-best system, demonstrating its ability at semantic retrieval. Previous systems BIBREF24, BIBREF4 on HotpotQA treat supporting fact retrieval (sentence-level retrieval) just as an auxiliary task for providing extra model explainability. In nie2019combining, although they used a similar three-stage system for FEVER, they only applied one neural retrieval module at the sentence level, which potentially weakens its retrieval ability. Both of these previous best systems are different from our fully hierarchical pipeline approach. These observations lead to the assumption that the performance gain comes mainly from the hierarchical retrieval and its positive effects on the downstream modules. Therefore, to validate the system design decisions in Sec SECREF3 and reveal the importance of semantic retrieval for the downstream tasks, we conducted a series of ablation and analysis experiments on all the modules. We started by examining the necessity of both paragraph and sentence retrieval and give insights on why both of them matter. <<</Results on Benchmarks>>> <<<Analysis and Ablations>>> Intuitively, both the paragraph-level and sentence-level retrieval sub-modules help speed up the downstream processing.
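The joint metrics defined at the top of this subsection can be transcribed directly. The helper below (function and argument names are ours) computes the per-example joint scores before they are averaged over the evaluation set.

```python
def joint_scores(p_ans, r_ans, em_ans, p_sup, r_sup, em_sup):
    """Joint precision/recall/F1/EM for one example, as defined above:
    elementwise products of the answer-span and supporting-fact scores."""
    p_j = p_ans * p_sup
    r_j = r_ans * r_sup
    f1_j = 0.0 if (p_j + r_j) == 0 else 2 * p_j * r_j / (p_j + r_j)
    em_j = em_ans * em_sup
    return p_j, r_j, f1_j, em_j
```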
More importantly, since downstream modules were trained on data sampled from upstream modules, both neural retrieval sub-modules also play an implicit but important role in controlling the immediate retrieval distribution, i.e., the distribution of set $\mathbf {P_N}$ and set $\mathbf {S}$ (as shown in Fig. FIGREF2), and in providing better inference and training data for downstream modules. <<<Ablation Studies>>> <<<Setups:>>> To reveal the importance of the neural retrieval modules at both the paragraph and sentence level for maintaining the performance of the overall system, we removed either of them and examined the consequences. Because the removal of a module in the pipeline might change the distribution of the input of the downstream modules, we re-trained all the downstream modules accordingly. To be specific, in the system without the paragraph-level neural retrieval module, we re-trained the sentence-level retrieval module with negative sentences directly sampled from the term-based retrieval set and then also re-trained the downstream QA or verification module. In the system without the sentence-level neural retrieval module, we re-trained the downstream QA or verification module by sampling data from both the ground truth set and the set retrieved directly from the paragraph-level module. We tested the simplified systems on both FEVER and HotpotQA. <<</Setups:>>> <<<Results:>>> Tables TABREF13 and TABREF14 show the ablation results for the two neural retrieval modules at both the paragraph and sentence level on HotpotQA and FEVER. To begin with, we can see that removing the paragraph-level retrieval module significantly reduces the precision of sentence-level retrieval and the corresponding F1 on both tasks. More importantly, this loss of retrieval precision also leads to substantial decreases in all the downstream scores on both the QA and verification tasks, in spite of their higher upper-bound and recall scores. This indicates that the negative effects on the downstream modules induced by the omission of paragraph-level retrieval cannot be remedied by the sentence-level retrieval module, and that focusing semantic retrieval merely on improving the recall or the upper bound of the final score risks jeopardizing the performance of the overall system. Next, the removal of the sentence-level retrieval module induces a $\sim $2-point drop on EM and F1 in the QA task, and a $\sim $15-point drop on FEVER Score in the verification task. This suggests that rather than just enhancing explainability for QA, the sentence-level retrieval module can also help pinpoint relevant information and reduce the noise in the evidence that might otherwise distract the downstream comprehension module. Another interesting finding is that without the sentence-level retrieval module, the QA module suffers much less than the verification module; conversely, the removal of the paragraph-level neural retrieval module induces an 11-point drop on answer EM compared to a $\sim $9-point drop on Label Accuracy in the verification task. This seems to indicate that the downstream QA module relies more on the upstream paragraph-level retrieval whereas the verification module relies more on the upstream sentence-level retrieval. Finally, we also evaluated the F1 score on FEVER for each classification label and observed a significant drop of F1 on the Not Enough Info category without the retrieval module, meaning that semantic retrieval is vital for the downstream verification module's discriminative ability on the Not Enough Info label.
<<</Results:>>> <<</Ablation Studies>>> <<<Sub-Module Change Analysis>>> To further study the effects of upstream semantic retrieval on downstream tasks, we changed the training or inference data between intermediate layers and then examined how this modification affects the downstream performance. <<<Effects of Paragraph-level Retrieval>>> We fixed $h_p=0$ (the value achieving the best performance), re-trained all the downstream parameters, and tracked their performance as $k_p$ (the number of selected paragraphs) was changed from 1 to 12. Increasing $k_p$ means potentially higher coverage of the answer but more noise in the retrieved facts. Fig. FIGREF17 shows the results. As can be seen, the EM scores for supporting fact retrieval, answer prediction, and joint performance increase sharply when $k_p$ is changed from 1 to 2. This is consistent with the fact that at least two paragraphs are required to answer each question in HotpotQA. Then, after the peak, every score decreases as $k_p$ becomes larger, except the recall of supporting facts, which peaks when $k_p=4$. This indicates that even though the neural sentence-level retrieval module possesses a certain level of ability to select correct facts from noisier upstream information, the final QA module is more sensitive to upstream data and fails to maintain the overall system performance. Moreover, the reduction in answer EM and joint EM suggests that it might be risky to give downstream modules too much information at the granularity of a paragraph. <<</Effects of Paragraph-level Retrieval>>> <<<Effects of Sentence-level Retrieval>>> Similarly, to study the effects of the neural sentence-level retrieval module on the downstream QA and verification modules, we fixed $k_s$ to be 5 and varied $h_s$ from 0.1 to 0.9 with a 0.1 interval. Then, we re-trained the downstream QA and verification modules with each $h_s$ value and experimented on both HotpotQA and FEVER. Question Answering: Fig. FIGREF18 shows the trend of performance. Intuitively, the precision increases while the recall decreases as the system becomes more strict about the retrieved sentences. The EM scores for supporting fact retrieval and joint performance reach their highest values when $h_s=0.5$, a natural balancing point between precision and recall. More interestingly, the EM score for answer prediction peaks at $h_s=0.2$, where the recall is higher than the precision. This misalignment between answer prediction performance and retrieval performance indicates that, unlike the observation at the paragraph level, the downstream QA module is able to withstand a certain amount of noise at the sentence level and benefit from a higher recall. Fact Verification: Fig. FIGREF19 shows the trends for Label Accuracy, FEVER Score, and Evidence F1 as the upstream sentence-level threshold $h_s$ is modified. We observed that the general trend is similar to that of the QA task, where both the Label Accuracy and FEVER Score peak at $h_s=0.2$ whereas the retrieval F1 peaks at $h_s=0.5$. Note that, although the downstream verification module could take advantage of a higher recall, it is more sensitive to sentence-level retrieval compared to the QA module in HotpotQA. More detailed results are in the Appendix. <<</Effects of Sentence-level Retrieval>>> <<</Sub-Module Change Analysis>>> <<<Answer Breakdown>>> We further sampled 200 examples from HotpotQA and manually tagged them according to several common answer types BIBREF4. The proportion of different answer types is shown in Figure FIGREF24.
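Returning to the sentence-level filtering rule swept above (fixing $k_s=5$ and varying $h_s$), a minimal sketch of the selection step and the sweep skeleton follows; the function and variable names are ours, and the re-training of downstream modules at each threshold is only indicated as a placeholder.

```python
def select_sentences(scored_sentences, k_s=5, h_s=0.5):
    """Keep at most k_s sentences whose retrieval score exceeds the threshold h_s,
    mirroring the sentence-level filtering rule described in the Method section.
    `scored_sentences` is a list of (sentence, score) pairs from the neural scorer."""
    kept = [(s, p) for s, p in scored_sentences if p > h_s]
    kept.sort(key=lambda pair: pair[1], reverse=True)
    return [s for s, _ in kept[:k_s]]


# Sweep skeleton for the h_s analysis above; in the actual experiments the
# downstream QA/verification modules are re-trained for each threshold value.
for h_s in [round(0.1 * i, 1) for i in range(1, 10)]:
    # evidence = select_sentences(scores_for_one_query, k_s=5, h_s=h_s)
    # ... re-train and evaluate the downstream QA or verification module ...
    pass
```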
The performance of the system on each answer type is shown in Table TABREF23. The most frequent answer type is 'Person' (24%) and the least frequent answer type is 'Event' (2%). It is also interesting to note that the model performs best on Yes/No questions, as shown in Table TABREF23, reaching an accuracy of 70.6%. <<</Answer Breakdown>>> <<<Examples>>> Fig. FIGREF26 shows an example that is correctly handled by the full pipeline system but not by the system without the paragraph-level retrieval module. We can see that, without paragraph-level retrieval, it is very difficult to filter out the distracting sentence at the sentence level, either by the sentence retrieval module or by the QA module. The above findings on both FEVER and HotpotQA give us some important guidelines for MRS: (1) a paragraph-level retrieval module is imperative; (2) the downstream task module is able to tolerate a certain amount of noise from sentence-level retrieval; (3) modifications at the paragraph-level retrieval stage can cause cascading effects on the downstream task. <<</Examples>>> <<</Analysis and Ablations>>> <<<Conclusion>>> We proposed a simple yet effective hierarchical pipeline system that achieves state-of-the-art results on two MRS tasks. Ablation studies demonstrate the importance of semantic retrieval at both the paragraph and sentence levels in the MRS system. The work can give general guidelines on MRS modeling and inspire future research on the relationship between semantic retrieval and downstream comprehension in a joint setting. <<</Conclusion>>> <<</Title>>>
{ "references": [ "We treated the neural semantic retrieval at both the paragraph and sentence level as binary classification problems with models' parameters updated by minimizing binary cross entropy loss." ], "type": "extractive" }
1909.08041
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: How do they model the neural retrieval modules? Context: <<<Title>>> Revealing the Importance of Semantic Retrieval for Machine Reading at Scale <<<Abstract>>> Machine Reading at Scale (MRS) is a challenging task in which a system is given an input query and is asked to produce a precise output by "reading" information from a large knowledge base. The task has gained popularity with its natural combination of information retrieval (IR) and machine comprehension (MC). Advancements in representation learning have led to separated progress in both IR and MC; however, very few studies have examined the relationship and combined design of retrieval and comprehension at different levels of granularity, for development of MRS systems. In this work, we give general guidelines on system design for MRS by proposing a simple yet effective pipeline system with special consideration on hierarchical semantic retrieval at both paragraph and sentence level, and their potential effects on the downstream task. The system is evaluated on both fact verification and open-domain multihop QA, achieving state-of-the-art results on the leaderboard test sets of both FEVER and HOTPOTQA. To further demonstrate the importance of semantic retrieval, we present ablation and analysis studies to quantify the contribution of neural retrieval modules at both paragraph-level and sentence-level, and illustrate that intermediate semantic retrieval modules are vital for not only effectively filtering upstream information and thus saving downstream computation, but also for shaping upstream data distribution and providing better data for downstream modeling. Code/data made publicly available at: this https URL <<</Abstract>>> <<<Introduction>>> Extracting external textual knowledge for machine comprehensive systems has long been an important yet challenging problem. Success requires not only precise retrieval of the relevant information sparsely restored in a large knowledge source but also a deep understanding of both the selected knowledge and the input query to give the corresponding output. Initiated by chen2017drqa, the task was termed as Machine Reading at Scale (MRS), seeking to provide a challenging situation where machines are required to do both semantic retrieval and comprehension at different levels of granularity for the final downstream task. Progress on MRS has been made by improving individual IR or comprehension sub-modules with recent advancements on representative learning BIBREF0, BIBREF1, BIBREF2. However, partially due to the lack of annotated data for intermediate retrieval in an MRS setting, the evaluations were done mainly on the final downstream task and with much less consideration on the intermediate retrieval performance. This led to the convention that upstream retrieval modules mostly focus on getting better coverage of the downstream information such that the upper-bound of the downstream score can be improved, rather than finding more exact information. This convention is misaligned with the nature of MRS where equal effort should be put in emphasizing the models' joint performance and optimizing the relationship between the semantic retrieval and the downstream comprehension sub-tasks. 
Hence, to shed light on the importance of semantic retrieval for downstream comprehension tasks, we start by establishing a simple yet effective hierarchical pipeline system for MRS using Wikipedia as the external knowledge source. The system is composed of a term-based retrieval module, two neural modules for both paragraph-level retrieval and sentence-level retrieval, and a neural downstream task module. We evaluated the system on two recent large-scale open domain benchmarks for fact verification and multi-hop QA, namely FEVER BIBREF3 and HotpotQA BIBREF4, in which retrieval performance can also be evaluated accurately since intermediate annotations on evidences are provided. Our system achieves the start-of-the-art results with 45.32% for answer EM and 25.14% joint EM on HotpotQA (8% absolute improvement on answer EM and doubling the joint EM over the previous best results) and with 67.26% on FEVER score (3% absolute improvement over previously published systems). We then provide empirical studies to validate design decisions. Specifically, we prove the necessity of both paragraph-level retrieval and sentence-level retrieval for maintaining good performance, and further illustrate that a better semantic retrieval module not only is beneficial to achieving high recall and keeping high upper bound for downstream task, but also plays an important role in shaping the downstream data distribution and providing more relevant and high-quality data for downstream sub-module training and inference. These mechanisms are vital for a good MRS system on both QA and fact verification. <<</Introduction>>> <<<Related Work>>> Machine Reading at Scale First proposed and formalized in chen2017drqa, MRS has gained popularity with increasing amount of work on both dataset collection BIBREF5, BIBREF6 and MRS model developments BIBREF7, BIBREF8, BIBREF9. In some previous work BIBREF10, paragraph-level retrieval modules were mainly for improving the recall of required information, while in some other works BIBREF4, sentence-level retrieval modules were merely for solving the auxiliary sentence selection task. In our work, we focus on revealing the relationship between semantic retrieval at different granularity levels and the downstream comprehension task. To the best of our knowledge, we are the first to apply and optimize neural semantic retrieval at both paragraph and sentence levels for MRS. Automatic Fact Checking: Recent work BIBREF11 formalized the task of automatic fact checking from the viewpoint of machine learning and NLP. The release of FEVER BIBREF3 stimulates many recent developments BIBREF12, BIBREF13, BIBREF14 on data-driven neural networks for automatic fact checking. We consider the task also as MRS because they share almost the same setup except that the downstream task is verification or natural language inference (NLI) rather than QA. Information Retrieval Success in deep neural networks inspires their application to information retrieval (IR) tasks BIBREF15, BIBREF16, BIBREF17, BIBREF18. In typical IR settings, systems are required to retrieve and rank BIBREF19 elements from a collection of documents based on their relevance to the query. This setting might be very different from the retrieval in MRS where systems are asked to select facts needed to answer a question or verify a statement. We refer the retrieval in MRS as Semantic Retrieval since it emphasizes on semantic understanding. 
<<</Related Work>>> <<<Method>>> In previous works, an MRS system can be complicated with different sub-components processing different retrieval and comprehension sub-tasks at different levels of granularity, and with some sub-components intertwined. For interpretability considerations, we used a unified pipeline setup. The overview of the system is in Fig. FIGREF2. To be specific, we formulate the MRS system as a function that maps an input tuple $(q, \mathbf {K})$ to an output tuple $(\hat{y}, \mathbf {S})$ where $q$ indicates the input query, $\mathbf {K}$ is the textual KB, $\hat{y}$ is the output prediction, and $\mathbf {S}$ is selected supporting sentences from Wikipedia. Let $\mathbf {E}$ denotes a set of necessary evidences or facts selected from $\mathbf {K}$ for the prediction. For a QA task, $q$ is the input question and $\hat{y}$ is the predicted answer. For a verification task, $q$ is the input claim and $\hat{y}$ is the predicted truthfulness of the input claim. For all tasks, $\mathbf {K}$ is Wikipedia. The system procedure is listed below: (1) Term-Based Retrieval: To begin with, we used a combination of the TF-IDF method and a rule-based keyword matching method to narrow the scope from whole Wikipedia down to a set of related paragraphs; this is a standard procedure in MRS BIBREF20, BIBREF10, BIBREF12. The focus of this step is to efficiently select a candidate set $\mathbf {P_I}$ that can cover the information as much as possible ($\mathbf {P_I} \subset \mathbf {K}$) while keeping the size of the set acceptable enough for downstream processing. (2) Paragraph-Level Neural Retrieval: After obtaining the initial set, we compare each paragraph in $\mathbf {P_I}$ with the input query $q$ using a neural model (which will be explained later in Sec SECREF4). The outputs of the neural model are treated as the relatedness score between the input query and the paragraphs. The scores will be used to sort all the upstream paragraphs. Then, $\mathbf {P_I}$ will be narrowed to a new set $\mathbf {P_N}$ ($\mathbf {P_N} \subset \mathbf {P_I}$) by selecting top $k_p$ paragraphs having relatedness score higher than some threshold value $h_p$ (going out from the P-Level grey box in Fig. FIGREF2). $k_p$ and $h_p$ would be chosen by keeping a good balance between the recall and precision of the paragraph retrieval. (3) Sentence-Level Neural Retrieval: Next, we select the evidence at the sentence-level by decomposing all the paragraphs in $\mathbf {P_N}$ into sentences. Similarly, each sentence is compared with the query using a neural model (see details in Sec SECREF4) and obtain a set of sentences $\mathbf {S} \subset \mathbf {P_N}$ for the downstream task by choosing top $k_s$ sentences with output scores higher than some threshold $h_s$ (S-Level grey box in Fig. FIGREF2). During evaluation, $\mathbf {S}$ is often evaluated against some ground truth sentence set denoted as $\mathbf {E}$. (4) Downstream Modeling: At the final step, we simply applied task-specific neural models (e.g., QA and NLI) on the concatenation of all the sentences in $\mathbf {S}$ and the query, obtaining the final output $\hat{y}$. In some experiments, we modified the setup for certain analysis or ablation purposes which will be explained individually in Sec SECREF6. <<<Modeling and Training>>> Throughout all our experiments, we used BERT-Base BIBREF2 to provide the state-of-the-art contextualized modeling of the input text. 
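Before turning to the individual modules, the four-stage procedure above can be summarized as a single composition. The sketch below is illustrative only: the retriever/scorer/downstream callables, the `sentences` attribute on paragraph objects, and the default $k_p$, $h_p$, $k_s$, $h_s$ values are stand-ins we assume for the example, not the released implementation.

```python
def mrs_pipeline(query, wiki_kb, term_retriever, para_scorer, sent_scorer,
                 downstream_model, k_p=2, h_p=0.0, k_s=5, h_s=0.5):
    """Hierarchical MRS pipeline sketch: term-based retrieval -> paragraph-level
    neural retrieval -> sentence-level neural retrieval -> downstream QA/NLI.
    All callables are stand-ins for the modules described above."""
    # (1) Term-based retrieval narrows Wikipedia down to an initial candidate set P_I.
    p_i = term_retriever(query, wiki_kb)

    # (2) Paragraph-level neural retrieval: keep the top-k_p paragraphs whose
    # relatedness score exceeds the threshold h_p.
    scored_p = sorted(((p, para_scorer(query, p)) for p in p_i),
                      key=lambda x: x[1], reverse=True)
    p_n = [p for p, score in scored_p[:k_p] if score > h_p]

    # (3) Sentence-level neural retrieval over sentences of the kept paragraphs
    # (assumes each paragraph object exposes a `sentences` list).
    sentences = [s for p in p_n for s in p.sentences]
    scored_s = sorted(((s, sent_scorer(query, s)) for s in sentences),
                      key=lambda x: x[1], reverse=True)
    evidence = [s for s, score in scored_s[:k_s] if score > h_s]

    # (4) Downstream comprehension on the concatenated evidence and the query.
    prediction = downstream_model(query, " ".join(evidence))
    return prediction, evidence
```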
Semantic Retrieval: We treated the neural semantic retrieval at both the paragraph and sentence level as binary classification problems with models' parameters updated by minimizing binary cross entropy loss. To be specific, we fed the query and context into BERT as: We applied an affine layer and sigmoid activation on the last layer output of the [$\mathit {CLS}$] token which is a scalar value. The parameters were updated with the objective function: where $\hat{p}_i$ is the output of the model, $\mathbf {T}^{p/s}_{pos}$ is the positive set and $\mathbf {T}^{p/s}_{neg}$ is the negative set. As shown in Fig. FIGREF2, at sentence level, ground-truth sentences were served as positive examples while other sentences from upstream retrieved set were served as negative examples. Similarly at the paragraph-level, paragraphs having any ground-truth sentence were used as positive examples and other paragraphs from the upstream term-based retrieval processes were used as negative examples. QA: We followed devlin2018bert for QA span prediction modeling. To correctly handle yes-or-no questions in HotpotQA, we fed the two additional “$\mathit {yes}$" and “$\mathit {no}$" tokens between [$\mathit {CLS}$] and the $Query$ as: where the supervision was given to the second or the third token when the answer is “yes" or “no", such that they can compete with all other predicted spans. The parameters of the neural QA model were trained to maximize the log probabilities of the true start and end indexes as: where $\hat{y}^s_i$ and $\hat{y}^e_i$ are the predicted probability on the ground-truth start and end position for the $i$th example, respectively. It is worth noting that we used ground truth supporting sentences plus some other sentences sampled from upstream retrieved set as the context for training the QA module such that it will adapt to the upstream data distribution during inference. Fact Verification: Following Thorne18Fever, we formulate downstream fact verification as the 3-way natural language inference (NLI) classification problem BIBREF21, BIBREF22 and train the model with 3-way cross entropy loss. The input format is the same as that of semantic retrieval and the objective is $\mathcal {J}_{ver} = -\sum _{i} \mathbf {y}_i \cdot \log (\hat{\mathbf {y}}_i)$, where $\hat{\mathbf {y}}_i \in \mathbf {R^3}$ denotes the model's output for the three verification labels, and $\mathbf {y}_i$ is a one-hot embedding for the ground-truth label. For verifiable queries, we used ground truth evidential sentences plus some other sentences sampled from upstream retrieved set as new evidential context for NLI. For non-verifiable queries, we only used sentences sampled from upstream retrieved set as context because those queries are not associated with ground truth evidential sentences. This detail is important for the model to identify non-verifiable queries and will be explained more in Sec SECREF6. Additional training details and hyper-parameter selections are in the Appendix (Sec. SECREF8; Table TABREF27). It is worth noting that each sub-module in the system relies on its preceding sub-module to provide data both for training and inference. This means that there will be upstream data distribution misalignment if we trained the sub-module in isolation without considering the properties of its precedent upstream module. The problem is similar to the concept of internal covariate shift BIBREF23, where the distribution of each layer's inputs changes inside a neural network. 
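A rough sketch of the QA input layout and span objective described above follows. It assumes a HuggingFace-style tokenizer, prepends plain "yes no" text so that the two tokens land at positions 1 and 2 after [CLS], and uses cross entropy over start/end logits; the exact tokenization and bookkeeping in the authors' system may differ.

```python
import torch.nn.functional as F


def build_qa_input(tokenizer, query, context):
    """Sketch of the HotpotQA input layout described above:
    [CLS] yes no <query> [SEP] <context> [SEP], so that "yes"/"no" occupy
    token positions 1 and 2 and can compete with ordinary answer spans."""
    return tokenizer(["yes no " + query], [context],
                     truncation=True, max_length=512, return_tensors="pt")


def span_loss(start_logits, end_logits, start_positions, end_positions):
    """Negative log-likelihood of the gold start/end indexes; for a gold "yes"
    ("no") answer, the gold start == end == 1 (respectively 2)."""
    loss_start = F.cross_entropy(start_logits, start_positions)
    loss_end = F.cross_entropy(end_logits, end_positions)
    return loss_start + loss_end
```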
Therefore, it makes sense to study this issue in a joint MRS setting rather than a typical supervised learning setting where training and test data tend to be fixed and modules being isolated. We release our code and the organized data both for reproducibility and providing an off-the-shelf testbed to facilitate future research on MRS. <<</Modeling and Training>>> <<</Method>>> <<<Experimental Setup>>> MRS requires a system not only to retrieve relevant content from textual KBs but also to poccess enough understanding ability to solve the downstream task. To understand the impact or importance of semantic retrieval on the downstream comprehension, we established a unified experimental setup that involves two different downstream tasks, i.e., multi-hop QA and fact verification. <<<Tasks and Datasets>>> HotpotQA: This dataset is a recent large-scale QA dataset that brings in new features: (1) the questions require finding and reasoning over multiple documents; (2) the questions are diverse and not limited to pre-existing KBs; (3) it offers a new comparison question type BIBREF4. We experimented our system on HotpotQA in the fullwiki setting, where a system must find the answer to a question in the scope of the entire Wikipedia, an ideal MRS setup. The sizes of the train, dev and test split are 90,564, 7,405, and 7,405. More importantly, HotpotQA also provides human-annotated sentence-level supporting facts that are needed to answer each question. Those intermediate annotations enable evaluation on models' joint ability on both fact retrieval and answer span prediction, facilitating our direct analysis on the explainable predictions and its relations with the upstream retrieval. FEVER: The Fact Extraction and VERification dataset BIBREF3 is a recent dataset collected to facilitate the automatic fact checking. The work also proposes a benchmark task in which given an arbitrary input claim, candidate systems are asked to select evidential sentences from Wikipedia and label the claim as either Support, Refute, or Not Enough Info, if the claim can be verified to be true, false, or non-verifiable, respectively, based on the evidence. The sizes of the train, dev and test split are 145,449, 19,998, and 9,998. Similar to HotpotQA, the dataset provides annotated sentence-level facts needed for the verification. These intermediate annotations could provide an accurate evaluation on the results of semantic retrieval and thus suits well for the analysis on the effects of retrieval module on downstream verification. As in chen2017drqa, we use Wikipedia as our unique knowledge base because it is a comprehensive and self-evolving information source often used to facilitate intelligent systems. Moreover, as Wikipedia is the source for both HotpotQA and FEVER, it helps standardize any further analysis of the effects of semantic retrieval on the two different downstream tasks. <<</Tasks and Datasets>>> <<<Metrics>>> Following Thorne18Fever, yang2018hotpotqa, we used annotated sentence-level facts to calculate the F1, Precision and Recall scores for evaluating sentence-level retrieval. Similarly, we labeled all the paragraphs that contain any ground truth fact as ground truth paragraphs and used the same three metrics for paragraph-level retrieval evaluation. For HotpotQA, following yang2018hotpotqa, we used exact match (EM) and F1 metrics for QA span prediction evaluation, and used the joint EM and F1 to evaluate models' joint performance on both retrieval and QA. 
The joint EM and F1 are calculated as: $P_j = P_a \cdot P_s; R_j = R_a \cdot R_s; F_j = \frac{2P_j \cdot R_j}{P_j + R_j}; \text{EM}_j = \text{EM}_a \cdot \text{EM}_s$, where $P$, $R$, and $\text{EM}$ denote precision, recall and EM; the subscript $a$ and $s$ indicate that the scores are for answer span and supporting facts. For the FEVER task, following Thorne18Fever, we used the Label Accuracy for evaluating downstream verification and the Fever Score for joint performance. Fever score will award one point for each example with the correct predicted label only if all ground truth facts were contained in the predicted facts set with at most 5 elements. We also used Oracle Score for the two retrieval modules. The scores were proposed in nie2019combining and indicate the upperbound of final FEVER Score at one intermediate layer assuming all downstream modules are perfect. All scores are averaged over examples in the whole evaluation set. <<</Metrics>>> <<</Experimental Setup>>> <<<Results on Benchmarks>>> We chose the best system based on the dev set, and used that for submitting private test predictions on both FEVER and HotpotQA . As can be seen in Table TABREF8, with the proposed hierarchical system design, the whole pipeline system achieves new start-of-the-art on HotpotQA with large-margin improvements on all the metrics. More specifically, the biggest improvement comes from the EM for the supporting fact which in turn leads to doubling of the joint EM on previous best results. The scores for answer predictions are also higher than all previous best results with $\sim $8 absolute points increase on EM and $\sim $9 absolute points on F1. All the improvements are consistent between test and dev set evaluation. Similarly for FEVER, we showed F1 for evidence, the Label Accuracy, and the FEVER Score (same as benchmark evaluation) for models in Table TABREF9. Our system obtained substantially higher scores than all previously published results with a $\sim $4 and $\sim $3 points absolute improvement on Label Accuracy and FEVER Score. In particular, the system gains 74.62 on the evidence F1, 22 points greater that of the second system, demonstrating its ability on semantic retrieval. Previous systems BIBREF24, BIBREF4 on HotpotQA treat supporting fact retrieval (sentence-level retrieval) just as an auxiliary task for providing extra model explainability. In nie2019combining, although they used a similar three-stage system for FEVER, they only applied one neural retrieval module at sentence-level which potentially weaken its retrieval ability. Both of these previous best systems are different from our fully hierarchical pipeline approach. These observations lead to the assumption that the performance gain comes mainly from the hierarchical retrieval and its positive effects on downstream. Therefore, to validate the system design decisions in Sec SECREF3 and reveal the importance of semantic retrieval towards downstream, we conducted a series of ablation and analysis experiments on all the modules. We started by examining the necessity of both paragraph and sentence retrieval and give insights on why both of them matters. <<</Results on Benchmarks>>> <<<Analysis and Ablations>>> Intuitively, both the paragraph-level and sentence-level retrieval sub-module help speeding up the downstream processing. 
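The strict FEVER Score described above can be sketched as follows. The flat input schema (one list of gold evidence sentences per example) is a simplification we assume for illustration; the official scorer additionally handles multiple alternative gold evidence groups, which this toy version ignores.

```python
def fever_score(predictions):
    """Strict FEVER Score sketch: an example scores a point only if the predicted
    label is correct AND all gold evidence sentences appear among the (at most 5)
    predicted evidence sentences. `predictions` is a list of dicts with keys
    pred_label, gold_label, pred_evidence, gold_evidence (assumed format)."""
    points = 0
    for ex in predictions:
        pred_ev = ex["pred_evidence"][:5]
        label_ok = ex["pred_label"] == ex["gold_label"]
        # Not Enough Info examples carry no gold evidence, so the evidence check
        # is vacuously satisfied for them under this simplification.
        evidence_ok = all(g in pred_ev for g in ex["gold_evidence"])
        points += int(label_ok and evidence_ok)
    return points / max(1, len(predictions))
```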
More importantly, since downstream modules were trained by sampled data from upstream modules, both of neural retrieval sub-modules also play an implicit but important role in controlling the immediate retrieval distribution i.e. the distribution of set $\mathbf {P_N}$ and set $\mathbf {S}$ (as shown in Fig. FIGREF2), and providing better inference data and training data for downstream modules. <<<Ablation Studies>>> <<<Setups:>>> To reveal the importance of neural retrieval modules at both paragraph and sentence level for maintaining the performance of the overall system, we removed either of them and examine the consequences. Because the removal of a module in the pipeline might change the distribution of the input of the downstream modules, we re-trained all the downstream modules accordingly. To be specific, in the system without the paragraph-level neural retrieval module, we re-trained the sentence-level retrieval module with negative sentences directly sampled from the term-based retrieval set and then also re-trained the downstream QA or verification module. In the system without the sentence-level neural retrieval module, we re-train the downstream QA or verification module by sampling data from both ground truth set and retrieved set directly from the paragraph-level module. We tested the simplified systems on both FEVER and HotpotQA. <<</Setups:>>> <<<Results:>>> Table TABREF13 and TABREF14 shows the ablation results for the two neural retrieval modules at both paragraph and sentence level on HotpotQA and FEVER. To begin with, we can see that removing paragraph-level retrieval module significantly reduces the precision for sentence-level retrieval and the corresponding F1 on both tasks. More importantly, this loss of retrieval precision also led to substantial decreases for all the downstream scores on both QA and verification task in spite of their higher upper-bound and recall scores. This indicates that the negative effects on downstream module induced by the omission of paragraph-level retrieval can not be amended by the sentence-level retrieval module, and focusing semantic retrieval merely on improving the recall or the upper-bound of final score will risk jeopardizing the performance of the overall system. Next, the removal of sentence-level retrieval module induces a $\sim $2 point drop on EM and F1 score in the QA task, and a $\sim $15 point drop on FEVER Score in the verification task. This suggests that rather than just enhance explainability for QA, the sentence-level retrieval module can also help pinpoint relevant information and reduce the noise in the evidence that might otherwise distract the downstream comprehension module. Another interesting finding is that without sentence-level retrieval module, the QA module suffered much less than the verification module; conversely, the removal of paragraph-level retrieval neural induces a 11 point drop on answer EM comparing to a $\sim $9 point drop on Label Accuracy in the verification task. This seems to indicate that the downstream QA module relies more on the upstream paragraph-level retrieval whereas the verification module relies more on the upstream sentence-level retrieval. Finally, we also evaluate the F1 score on FEVER for each classification label and we observe a significant drop of F1 on Not Enough Info category without retrieval module, meaning that semantic retrieval is vital for the downstream verification module's discriminative ability on Not Enough Info label. 
<<</Results:>>> <<</Ablation Studies>>> <<<Sub-Module Change Analysis>>> To further study the effects of upstream semantic retrieval towards downstream tasks, we change training or inference data between intermediate layers and then examine how this modification will affect the downstream performance. <<<Effects of Paragraph-level Retrieval>>> We fixed $h_p=0$ (the value achieving the best performance) and re-trained all the downstream parameters and track their performance as $k_p$ (the number of selected paragraph) being changed from 1 to 12. The increasing of $k_p$ means a potential higher coverage of the answer but more noise in the retrieved facts. Fig. FIGREF17 shows the results. As can be seen that the EM scores for supporting fact retrieval, answer prediction, and joint performance increase sharply when $k_p$ is changed from 1 to 2. This is consistent with the fact that at least two paragraphs are required to ask each question in HotpotQA. Then, after the peak, every score decrease as $k_p$ becomes larger except the recall of supporting fact which peaks when $k_p=4$. This indicates that even though the neural sentence-level retrieval module poccesses a certain level of ability to select correct facts from noisier upstream information, the final QA module is more sensitive to upstream data and fails to maintain the overall system performance. Moreover, the reduction on answer EM and joint EM suggests that it might be risky to give too much information for downstream modules with a unit of a paragraph. <<</Effects of Paragraph-level Retrieval>>> <<<Effects of Sentence-level Retrieval>>> Similarly, to study the effects of neural sentence-level retrieval module towards downstream QA and verification modules, we fixed $k_s$ to be 5 and set $h_s$ ranging from 0.1 to 0.9 with a 0.1 interval. Then, we re-trained the downstream QA and verification modules with different $h_s$ value and experimented on both HotpotQA and FEVER. Question Answering: Fig. FIGREF18 shows the trend of performance. Intuitively, the precision increase while the recall decrease as the system becomes more strict about the retrieved sentences. The EM score for supporting fact retrieval and joint performance reaches their highest value when $h_s=0.5$, a natural balancing point between precision and recall. More interestingly, the EM score for answer prediction peaks when $h_s=0.2$ and where the recall is higher than the precision. This misalignment between answer prediction performance and retrieval performance indicates that unlike the observation at paragraph-level, the downstream QA module is able to stand a certain amount of noise at sentence-level and benefit from a higher recall. Fact Verification: Fig. FIGREF19 shows the trends for Label Accuracy, FEVER Score, and Evidence F1 by modifying upstream sentence-level threshold $h_s$. We observed that the general trend is similar to that of QA task where both the label accuracy and FEVER score peak at $h_s=0.2$ whereas the retrieval F1 peaks at $h_s=0.5$. Note that, although the downstream verification could take advantage of a higher recall, the module is more sensitive to sentence-level retrieval comparing to the QA module in HotpotQA. More detailed results are in the Appendix. <<</Effects of Sentence-level Retrieval>>> <<</Sub-Module Change Analysis>>> <<<Answer Breakdown>>> We further sample 200 examples from HotpotQA and manually tag them according to several common answer types BIBREF4. The proportion of different answer types is shown in Figure FIGREF24. 
The performance of the system on each answer type is shown in Table TABREF23. The most frequent answer type is 'Person' (24%) and the least frequent answer type is 'Event' (2%). It is also interesting to note that the model performs the best in Yes/No questions as shown in Table TABREF23, reaching an accuracy of 70.6%. <<</Answer Breakdown>>> <<<Examples>>> Fig. FIGREF26 shows an example that is correctly handled by the full pipeline system but not by the system without paragraph-level retrieval module. We can see that it is very difficult to filter the distracting sentence after sentence-level either by the sentence retrieval module or the QA module. Above findings in both FEVER and HotpotQA bring us some important guidelines for MRS: (1) A paragraph-level retrieval module is imperative; (2) Downstream task module is able to undertake a certain amount of noise from sentence-level retrieval; (3) Cascade effects on downstream task might be caused by modification at paragraph-level retrieval. <<</Examples>>> <<</Analysis and Ablations>>> <<<Conclusion>>> We proposed a simple yet effective hierarchical pipeline system that achieves state-of-the-art results on two MRS tasks. Ablation studies demonstrate the importance of semantic retrieval at both paragraph and sentence levels in the MRS system. The work can give general guidelines on MRS modeling and inspire future research on the relationship between semantic retrieval and downstream comprehension in a joint setting. <<</Conclusion>>> <<</Title>>>
{ "references": [ "BERT-Base BIBREF2 to provide the state-of-the-art contextualized modeling" ], "type": "extractive" }
1909.08041
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: Retrieval at what level performs better, sentence level or paragraph level? Context: <<<Title>>> Revealing the Importance of Semantic Retrieval for Machine Reading at Scale <<<Abstract>>> Machine Reading at Scale (MRS) is a challenging task in which a system is given an input query and is asked to produce a precise output by "reading" information from a large knowledge base. The task has gained popularity with its natural combination of information retrieval (IR) and machine comprehension (MC). Advancements in representation learning have led to separated progress in both IR and MC; however, very few studies have examined the relationship and combined design of retrieval and comprehension at different levels of granularity, for development of MRS systems. In this work, we give general guidelines on system design for MRS by proposing a simple yet effective pipeline system with special consideration on hierarchical semantic retrieval at both paragraph and sentence level, and their potential effects on the downstream task. The system is evaluated on both fact verification and open-domain multihop QA, achieving state-of-the-art results on the leaderboard test sets of both FEVER and HOTPOTQA. To further demonstrate the importance of semantic retrieval, we present ablation and analysis studies to quantify the contribution of neural retrieval modules at both paragraph-level and sentence-level, and illustrate that intermediate semantic retrieval modules are vital for not only effectively filtering upstream information and thus saving downstream computation, but also for shaping upstream data distribution and providing better data for downstream modeling. Code/data made publicly available at: this https URL <<</Abstract>>> <<<Introduction>>> Extracting external textual knowledge for machine comprehensive systems has long been an important yet challenging problem. Success requires not only precise retrieval of the relevant information sparsely restored in a large knowledge source but also a deep understanding of both the selected knowledge and the input query to give the corresponding output. Initiated by chen2017drqa, the task was termed as Machine Reading at Scale (MRS), seeking to provide a challenging situation where machines are required to do both semantic retrieval and comprehension at different levels of granularity for the final downstream task. Progress on MRS has been made by improving individual IR or comprehension sub-modules with recent advancements on representative learning BIBREF0, BIBREF1, BIBREF2. However, partially due to the lack of annotated data for intermediate retrieval in an MRS setting, the evaluations were done mainly on the final downstream task and with much less consideration on the intermediate retrieval performance. This led to the convention that upstream retrieval modules mostly focus on getting better coverage of the downstream information such that the upper-bound of the downstream score can be improved, rather than finding more exact information. This convention is misaligned with the nature of MRS where equal effort should be put in emphasizing the models' joint performance and optimizing the relationship between the semantic retrieval and the downstream comprehension sub-tasks. 
Hence, to shed light on the importance of semantic retrieval for downstream comprehension tasks, we start by establishing a simple yet effective hierarchical pipeline system for MRS using Wikipedia as the external knowledge source. The system is composed of a term-based retrieval module, two neural modules for both paragraph-level retrieval and sentence-level retrieval, and a neural downstream task module. We evaluated the system on two recent large-scale open domain benchmarks for fact verification and multi-hop QA, namely FEVER BIBREF3 and HotpotQA BIBREF4, in which retrieval performance can also be evaluated accurately since intermediate annotations on evidences are provided. Our system achieves the start-of-the-art results with 45.32% for answer EM and 25.14% joint EM on HotpotQA (8% absolute improvement on answer EM and doubling the joint EM over the previous best results) and with 67.26% on FEVER score (3% absolute improvement over previously published systems). We then provide empirical studies to validate design decisions. Specifically, we prove the necessity of both paragraph-level retrieval and sentence-level retrieval for maintaining good performance, and further illustrate that a better semantic retrieval module not only is beneficial to achieving high recall and keeping high upper bound for downstream task, but also plays an important role in shaping the downstream data distribution and providing more relevant and high-quality data for downstream sub-module training and inference. These mechanisms are vital for a good MRS system on both QA and fact verification. <<</Introduction>>> <<<Related Work>>> Machine Reading at Scale First proposed and formalized in chen2017drqa, MRS has gained popularity with increasing amount of work on both dataset collection BIBREF5, BIBREF6 and MRS model developments BIBREF7, BIBREF8, BIBREF9. In some previous work BIBREF10, paragraph-level retrieval modules were mainly for improving the recall of required information, while in some other works BIBREF4, sentence-level retrieval modules were merely for solving the auxiliary sentence selection task. In our work, we focus on revealing the relationship between semantic retrieval at different granularity levels and the downstream comprehension task. To the best of our knowledge, we are the first to apply and optimize neural semantic retrieval at both paragraph and sentence levels for MRS. Automatic Fact Checking: Recent work BIBREF11 formalized the task of automatic fact checking from the viewpoint of machine learning and NLP. The release of FEVER BIBREF3 stimulates many recent developments BIBREF12, BIBREF13, BIBREF14 on data-driven neural networks for automatic fact checking. We consider the task also as MRS because they share almost the same setup except that the downstream task is verification or natural language inference (NLI) rather than QA. Information Retrieval Success in deep neural networks inspires their application to information retrieval (IR) tasks BIBREF15, BIBREF16, BIBREF17, BIBREF18. In typical IR settings, systems are required to retrieve and rank BIBREF19 elements from a collection of documents based on their relevance to the query. This setting might be very different from the retrieval in MRS where systems are asked to select facts needed to answer a question or verify a statement. We refer the retrieval in MRS as Semantic Retrieval since it emphasizes on semantic understanding. 
<<</Related Work>>> <<<Method>>> In previous works, an MRS system can be complicated with different sub-components processing different retrieval and comprehension sub-tasks at different levels of granularity, and with some sub-components intertwined. For interpretability considerations, we used a unified pipeline setup. The overview of the system is in Fig. FIGREF2. To be specific, we formulate the MRS system as a function that maps an input tuple $(q, \mathbf {K})$ to an output tuple $(\hat{y}, \mathbf {S})$ where $q$ indicates the input query, $\mathbf {K}$ is the textual KB, $\hat{y}$ is the output prediction, and $\mathbf {S}$ is selected supporting sentences from Wikipedia. Let $\mathbf {E}$ denotes a set of necessary evidences or facts selected from $\mathbf {K}$ for the prediction. For a QA task, $q$ is the input question and $\hat{y}$ is the predicted answer. For a verification task, $q$ is the input claim and $\hat{y}$ is the predicted truthfulness of the input claim. For all tasks, $\mathbf {K}$ is Wikipedia. The system procedure is listed below: (1) Term-Based Retrieval: To begin with, we used a combination of the TF-IDF method and a rule-based keyword matching method to narrow the scope from whole Wikipedia down to a set of related paragraphs; this is a standard procedure in MRS BIBREF20, BIBREF10, BIBREF12. The focus of this step is to efficiently select a candidate set $\mathbf {P_I}$ that can cover the information as much as possible ($\mathbf {P_I} \subset \mathbf {K}$) while keeping the size of the set acceptable enough for downstream processing. (2) Paragraph-Level Neural Retrieval: After obtaining the initial set, we compare each paragraph in $\mathbf {P_I}$ with the input query $q$ using a neural model (which will be explained later in Sec SECREF4). The outputs of the neural model are treated as the relatedness score between the input query and the paragraphs. The scores will be used to sort all the upstream paragraphs. Then, $\mathbf {P_I}$ will be narrowed to a new set $\mathbf {P_N}$ ($\mathbf {P_N} \subset \mathbf {P_I}$) by selecting top $k_p$ paragraphs having relatedness score higher than some threshold value $h_p$ (going out from the P-Level grey box in Fig. FIGREF2). $k_p$ and $h_p$ would be chosen by keeping a good balance between the recall and precision of the paragraph retrieval. (3) Sentence-Level Neural Retrieval: Next, we select the evidence at the sentence-level by decomposing all the paragraphs in $\mathbf {P_N}$ into sentences. Similarly, each sentence is compared with the query using a neural model (see details in Sec SECREF4) and obtain a set of sentences $\mathbf {S} \subset \mathbf {P_N}$ for the downstream task by choosing top $k_s$ sentences with output scores higher than some threshold $h_s$ (S-Level grey box in Fig. FIGREF2). During evaluation, $\mathbf {S}$ is often evaluated against some ground truth sentence set denoted as $\mathbf {E}$. (4) Downstream Modeling: At the final step, we simply applied task-specific neural models (e.g., QA and NLI) on the concatenation of all the sentences in $\mathbf {S}$ and the query, obtaining the final output $\hat{y}$. In some experiments, we modified the setup for certain analysis or ablation purposes which will be explained individually in Sec SECREF6. <<<Modeling and Training>>> Throughout all our experiments, we used BERT-Base BIBREF2 to provide the state-of-the-art contextualized modeling of the input text. 
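For step (1), a toy TF-IDF ranker over candidate paragraphs might look like the sketch below (using scikit-learn). The actual system combines TF-IDF with rule-based keyword matching over the full Wikipedia dump, which this simplified version does not reproduce; the function name and `top_n` parameter are ours.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def term_based_candidates(query, paragraphs, top_n=50):
    """Illustrative TF-IDF candidate retrieval: rank paragraphs by cosine
    similarity between TF-IDF vectors of the query and each paragraph."""
    vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
    doc_matrix = vectorizer.fit_transform(paragraphs)
    query_vec = vectorizer.transform([query])
    sims = cosine_similarity(query_vec, doc_matrix).ravel()
    ranked = sims.argsort()[::-1][:top_n]
    return [paragraphs[i] for i in ranked]
```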
Semantic Retrieval: We treated the neural semantic retrieval at both the paragraph and sentence level as binary classification problems with models' parameters updated by minimizing binary cross entropy loss. To be specific, we fed the query and context into BERT as: We applied an affine layer and sigmoid activation on the last layer output of the [$\mathit {CLS}$] token which is a scalar value. The parameters were updated with the objective function: where $\hat{p}_i$ is the output of the model, $\mathbf {T}^{p/s}_{pos}$ is the positive set and $\mathbf {T}^{p/s}_{neg}$ is the negative set. As shown in Fig. FIGREF2, at sentence level, ground-truth sentences were served as positive examples while other sentences from upstream retrieved set were served as negative examples. Similarly at the paragraph-level, paragraphs having any ground-truth sentence were used as positive examples and other paragraphs from the upstream term-based retrieval processes were used as negative examples. QA: We followed devlin2018bert for QA span prediction modeling. To correctly handle yes-or-no questions in HotpotQA, we fed the two additional “$\mathit {yes}$" and “$\mathit {no}$" tokens between [$\mathit {CLS}$] and the $Query$ as: where the supervision was given to the second or the third token when the answer is “yes" or “no", such that they can compete with all other predicted spans. The parameters of the neural QA model were trained to maximize the log probabilities of the true start and end indexes as: where $\hat{y}^s_i$ and $\hat{y}^e_i$ are the predicted probability on the ground-truth start and end position for the $i$th example, respectively. It is worth noting that we used ground truth supporting sentences plus some other sentences sampled from upstream retrieved set as the context for training the QA module such that it will adapt to the upstream data distribution during inference. Fact Verification: Following Thorne18Fever, we formulate downstream fact verification as the 3-way natural language inference (NLI) classification problem BIBREF21, BIBREF22 and train the model with 3-way cross entropy loss. The input format is the same as that of semantic retrieval and the objective is $\mathcal {J}_{ver} = -\sum _{i} \mathbf {y}_i \cdot \log (\hat{\mathbf {y}}_i)$, where $\hat{\mathbf {y}}_i \in \mathbf {R^3}$ denotes the model's output for the three verification labels, and $\mathbf {y}_i$ is a one-hot embedding for the ground-truth label. For verifiable queries, we used ground truth evidential sentences plus some other sentences sampled from upstream retrieved set as new evidential context for NLI. For non-verifiable queries, we only used sentences sampled from upstream retrieved set as context because those queries are not associated with ground truth evidential sentences. This detail is important for the model to identify non-verifiable queries and will be explained more in Sec SECREF6. Additional training details and hyper-parameter selections are in the Appendix (Sec. SECREF8; Table TABREF27). It is worth noting that each sub-module in the system relies on its preceding sub-module to provide data both for training and inference. This means that there will be upstream data distribution misalignment if we trained the sub-module in isolation without considering the properties of its precedent upstream module. The problem is similar to the concept of internal covariate shift BIBREF23, where the distribution of each layer's inputs changes inside a neural network. 
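A minimal sketch of the 3-way verification model described above: a BERT encoder with a linear classification layer over {Support, Refute, Not Enough Info}, trained with cross entropy (which matches the $\mathcal {J}_{ver}$ objective when the label is one-hot). Whether the released system uses exactly a single linear layer on the [CLS] output is our assumption.

```python
import torch.nn as nn
from transformers import BertModel


class VerificationHead(nn.Module):
    """BERT [CLS] representation -> 3-way classifier for claim verification."""

    def __init__(self, model_name="bert-base-uncased", num_labels=3):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask, token_type_ids, labels=None):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask,
                        token_type_ids=token_type_ids)
        logits = self.classifier(out.last_hidden_state[:, 0])  # [CLS] output
        if labels is None:
            return logits
        # 3-way cross entropy over {Support, Refute, Not Enough Info}.
        return logits, nn.functional.cross_entropy(logits, labels)
```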
Therefore, it makes sense to study this issue in a joint MRS setting rather than a typical supervised learning setting where training and test data tend to be fixed and modules being isolated. We release our code and the organized data both for reproducibility and providing an off-the-shelf testbed to facilitate future research on MRS. <<</Modeling and Training>>> <<</Method>>> <<<Experimental Setup>>> MRS requires a system not only to retrieve relevant content from textual KBs but also to poccess enough understanding ability to solve the downstream task. To understand the impact or importance of semantic retrieval on the downstream comprehension, we established a unified experimental setup that involves two different downstream tasks, i.e., multi-hop QA and fact verification. <<<Tasks and Datasets>>> HotpotQA: This dataset is a recent large-scale QA dataset that brings in new features: (1) the questions require finding and reasoning over multiple documents; (2) the questions are diverse and not limited to pre-existing KBs; (3) it offers a new comparison question type BIBREF4. We experimented our system on HotpotQA in the fullwiki setting, where a system must find the answer to a question in the scope of the entire Wikipedia, an ideal MRS setup. The sizes of the train, dev and test split are 90,564, 7,405, and 7,405. More importantly, HotpotQA also provides human-annotated sentence-level supporting facts that are needed to answer each question. Those intermediate annotations enable evaluation on models' joint ability on both fact retrieval and answer span prediction, facilitating our direct analysis on the explainable predictions and its relations with the upstream retrieval. FEVER: The Fact Extraction and VERification dataset BIBREF3 is a recent dataset collected to facilitate the automatic fact checking. The work also proposes a benchmark task in which given an arbitrary input claim, candidate systems are asked to select evidential sentences from Wikipedia and label the claim as either Support, Refute, or Not Enough Info, if the claim can be verified to be true, false, or non-verifiable, respectively, based on the evidence. The sizes of the train, dev and test split are 145,449, 19,998, and 9,998. Similar to HotpotQA, the dataset provides annotated sentence-level facts needed for the verification. These intermediate annotations could provide an accurate evaluation on the results of semantic retrieval and thus suits well for the analysis on the effects of retrieval module on downstream verification. As in chen2017drqa, we use Wikipedia as our unique knowledge base because it is a comprehensive and self-evolving information source often used to facilitate intelligent systems. Moreover, as Wikipedia is the source for both HotpotQA and FEVER, it helps standardize any further analysis of the effects of semantic retrieval on the two different downstream tasks. <<</Tasks and Datasets>>> <<<Metrics>>> Following Thorne18Fever, yang2018hotpotqa, we used annotated sentence-level facts to calculate the F1, Precision and Recall scores for evaluating sentence-level retrieval. Similarly, we labeled all the paragraphs that contain any ground truth fact as ground truth paragraphs and used the same three metrics for paragraph-level retrieval evaluation. For HotpotQA, following yang2018hotpotqa, we used exact match (EM) and F1 metrics for QA span prediction evaluation, and used the joint EM and F1 to evaluate models' joint performance on both retrieval and QA. 
The joint EM and F1 are calculated as: $P_j = P_a \cdot P_s; R_j = R_a \cdot R_s; F_j = \frac{2P_j \cdot R_j}{P_j + R_j}; \text{EM}_j = \text{EM}_a \cdot \text{EM}_s$, where $P$, $R$, and $\text{EM}$ denote precision, recall and EM; the subscripts $a$ and $s$ indicate that the scores are for the answer span and the supporting facts. For the FEVER task, following Thorne18Fever, we used the Label Accuracy for evaluating downstream verification and the FEVER Score for joint performance. The FEVER Score awards one point for each example with the correct predicted label only if all ground-truth facts are contained in the predicted fact set, which may have at most 5 elements. We also used the Oracle Score for the two retrieval modules. These scores were proposed in nie2019combining and indicate the upper bound of the final FEVER Score at an intermediate layer, assuming all downstream modules are perfect. All scores are averaged over examples in the whole evaluation set. <<</Metrics>>> <<</Experimental Setup>>> <<<Results on Benchmarks>>> We chose the best system based on the dev set, and used it for submitting private test predictions on both FEVER and HotpotQA. As can be seen in Table TABREF8, with the proposed hierarchical system design, the whole pipeline system achieves a new state-of-the-art on HotpotQA with large-margin improvements on all the metrics. More specifically, the biggest improvement comes from the EM for the supporting facts, which in turn leads to a doubling of the joint EM over previous best results. The scores for answer prediction are also higher than all previous best results, with a $\sim$8-point absolute increase on EM and $\sim$9 absolute points on F1. All the improvements are consistent between test and dev set evaluation. Similarly for FEVER, we show the F1 for evidence, the Label Accuracy, and the FEVER Score (same as the benchmark evaluation) for models in Table TABREF9. Our system obtained substantially higher scores than all previously published results, with $\sim$4 and $\sim$3 points of absolute improvement on Label Accuracy and FEVER Score. In particular, the system gains 74.62 on the evidence F1, 22 points greater than that of the second-best system, demonstrating its ability at semantic retrieval. Previous systems BIBREF24, BIBREF4 on HotpotQA treat supporting fact retrieval (sentence-level retrieval) just as an auxiliary task for providing extra model explainability. In nie2019combining, although a similar three-stage system was used for FEVER, only one neural retrieval module was applied at the sentence level, which potentially weakens its retrieval ability. Both of these previous best systems are different from our fully hierarchical pipeline approach. These observations lead to the assumption that the performance gain comes mainly from the hierarchical retrieval and its positive effects on the downstream modules. Therefore, to validate the system design decisions in Sec SECREF3 and reveal the importance of semantic retrieval for the downstream tasks, we conducted a series of ablation and analysis experiments on all the modules. We started by examining the necessity of both paragraph and sentence retrieval and give insights into why both of them matter. <<</Results on Benchmarks>>> <<<Analysis and Ablations>>> Intuitively, both the paragraph-level and the sentence-level retrieval sub-modules help speed up the downstream processing.
More importantly, since downstream modules were trained on data sampled from upstream modules, both neural retrieval sub-modules also play an implicit but important role in controlling the immediate retrieval distribution, i.e., the distribution of set $\mathbf {P_N}$ and set $\mathbf {S}$ (as shown in Fig. FIGREF2), and thus in providing better inference data and training data for downstream modules. <<<Ablation Studies>>> <<<Setups:>>> To reveal the importance of the neural retrieval modules at both the paragraph and sentence level for maintaining the performance of the overall system, we removed each of them in turn and examined the consequences. Because the removal of a module in the pipeline might change the distribution of the input to the downstream modules, we re-trained all the downstream modules accordingly. To be specific, in the system without the paragraph-level neural retrieval module, we re-trained the sentence-level retrieval module with negative sentences directly sampled from the term-based retrieval set and then also re-trained the downstream QA or verification module. In the system without the sentence-level neural retrieval module, we re-trained the downstream QA or verification module by sampling data from both the ground-truth set and the set retrieved directly from the paragraph-level module. We tested the simplified systems on both FEVER and HotpotQA. <<</Setups:>>> <<<Results:>>> Tables TABREF13 and TABREF14 show the ablation results for the two neural retrieval modules at both the paragraph and sentence level on HotpotQA and FEVER. To begin with, we can see that removing the paragraph-level retrieval module significantly reduces the precision for sentence-level retrieval and the corresponding F1 on both tasks. More importantly, this loss of retrieval precision also led to substantial decreases in all the downstream scores on both the QA and verification tasks, in spite of their higher upper-bound and recall scores. This indicates that the negative effects on the downstream modules induced by the omission of paragraph-level retrieval cannot be remedied by the sentence-level retrieval module, and that focusing semantic retrieval merely on improving the recall or the upper bound of the final score risks jeopardizing the performance of the overall system. Next, the removal of the sentence-level retrieval module induces a $\sim$2-point drop in EM and F1 in the QA task, and a $\sim$15-point drop in FEVER Score in the verification task. This suggests that rather than just enhancing explainability for QA, the sentence-level retrieval module can also help pinpoint relevant information and reduce the noise in the evidence that might otherwise distract the downstream comprehension module. Another interesting finding is that without the sentence-level retrieval module, the QA module suffered much less than the verification module; conversely, the removal of the paragraph-level neural retrieval module induces an 11-point drop in answer EM compared to a $\sim$9-point drop in Label Accuracy in the verification task. This seems to indicate that the downstream QA module relies more on the upstream paragraph-level retrieval whereas the verification module relies more on the upstream sentence-level retrieval. Finally, we also evaluated the F1 score on FEVER for each classification label and observed a significant drop of F1 on the Not Enough Info category without the retrieval module, meaning that semantic retrieval is vital for the downstream verification module's ability to discriminate the Not Enough Info label.
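For reference, the joint metrics reported in these ablations can be computed per example as in the short sketch below, which directly follows the formulas given in the Metrics subsection (the function name and averaging snippet are ours):

```python
def joint_scores(p_a, r_a, em_a, p_s, r_s, em_s):
    """Combine answer (a) and supporting-fact (s) scores into joint metrics:
    P_j = P_a * P_s, R_j = R_a * R_s, F_j = 2 * P_j * R_j / (P_j + R_j),
    EM_j = EM_a * EM_s."""
    p_j = p_a * p_s
    r_j = r_a * r_s
    f_j = 2 * p_j * r_j / (p_j + r_j) if (p_j + r_j) > 0 else 0.0
    em_j = em_a * em_s
    return p_j, r_j, f_j, em_j

# Per-example joint scores are then averaged over the whole evaluation set:
# averages = [sum(col) / len(col)
#             for col in zip(*(joint_scores(*ex) for ex in examples))]
```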
<<</Results:>>> <<</Ablation Studies>>> <<<Sub-Module Change Analysis>>> To further study the effects of upstream semantic retrieval on the downstream tasks, we changed the training or inference data between intermediate layers and then examined how this modification affects downstream performance. <<<Effects of Paragraph-level Retrieval>>> We fixed $h_p=0$ (the value achieving the best performance), re-trained all the downstream parameters, and tracked their performance as $k_p$ (the number of selected paragraphs) was changed from 1 to 12. Increasing $k_p$ means potentially higher coverage of the answer but more noise in the retrieved facts. Fig. FIGREF17 shows the results. As can be seen, the EM scores for supporting fact retrieval, answer prediction, and joint performance increase sharply when $k_p$ is changed from 1 to 2. This is consistent with the fact that at least two paragraphs are required to answer each question in HotpotQA. Then, after the peak, every score decreases as $k_p$ becomes larger, except the recall of supporting facts, which peaks when $k_p=4$. This indicates that even though the neural sentence-level retrieval module possesses a certain level of ability to select correct facts from noisier upstream information, the final QA module is more sensitive to upstream data and fails to maintain the overall system performance. Moreover, the reduction in answer EM and joint EM suggests that it might be risky to feed downstream modules too much information in units of paragraphs. <<</Effects of Paragraph-level Retrieval>>> <<<Effects of Sentence-level Retrieval>>> Similarly, to study the effects of the neural sentence-level retrieval module on the downstream QA and verification modules, we fixed $k_s$ to be 5 and set $h_s$ to range from 0.1 to 0.9 with an interval of 0.1. Then, we re-trained the downstream QA and verification modules with different $h_s$ values and experimented on both HotpotQA and FEVER. Question Answering: Fig. FIGREF18 shows the trend of performance. Intuitively, the precision increases while the recall decreases as the system becomes stricter about the retrieved sentences. The EM scores for supporting fact retrieval and joint performance reach their highest values at $h_s=0.5$, a natural balancing point between precision and recall. More interestingly, the EM score for answer prediction peaks at $h_s=0.2$, where the recall is higher than the precision. This misalignment between answer prediction performance and retrieval performance indicates that, unlike the observation at the paragraph level, the downstream QA module is able to withstand a certain amount of noise at the sentence level and benefit from a higher recall. Fact Verification: Fig. FIGREF19 shows the trends for Label Accuracy, FEVER Score, and Evidence F1 as the upstream sentence-level threshold $h_s$ is modified. We observed that the general trend is similar to that of the QA task, where both the Label Accuracy and FEVER Score peak at $h_s=0.2$ whereas the retrieval F1 peaks at $h_s=0.5$. Note that, although the downstream verification could take advantage of a higher recall, the module is more sensitive to sentence-level retrieval compared to the QA module in HotpotQA. More detailed results are in the Appendix. <<</Effects of Sentence-level Retrieval>>> <<</Sub-Module Change Analysis>>> <<<Answer Breakdown>>> We further sampled 200 examples from HotpotQA and manually tagged them according to several common answer types BIBREF4. The proportion of different answer types is shown in Figure FIGREF24.
The performance of the system on each answer type is shown in Table TABREF23. The most frequent answer type is 'Person' (24%) and the least frequent answer type is 'Event' (2%). It is also interesting to note that the model performs best on Yes/No questions, as shown in Table TABREF23, reaching an accuracy of 70.6%. <<</Answer Breakdown>>> <<<Examples>>> Fig. FIGREF26 shows an example that is correctly handled by the full pipeline system but not by the system without the paragraph-level retrieval module. We can see that it is very difficult to filter out the distracting sentence afterwards at the sentence level, whether by the sentence retrieval module or by the QA module. The above findings on both FEVER and HotpotQA give us some important guidelines for MRS: (1) a paragraph-level retrieval module is imperative; (2) the downstream task module is able to tolerate a certain amount of noise from sentence-level retrieval; (3) modifications at the paragraph-level retrieval stage can cause cascading effects on the downstream task. <<</Examples>>> <<</Analysis and Ablations>>> <<<Conclusion>>> We proposed a simple yet effective hierarchical pipeline system that achieves state-of-the-art results on two MRS tasks. Ablation studies demonstrate the importance of semantic retrieval at both the paragraph and sentence levels in the MRS system. The work provides general guidelines for MRS modeling and can inspire future research on the relationship between semantic retrieval and downstream comprehension in a joint setting. <<</Conclusion>>> <<</Title>>>
{ "references": [ "This seems to indicate that the downstream QA module relies more on the upstream paragraph-level retrieval whereas the verification module relies more on the upstream sentence-level retrieval." ], "type": "extractive" }
1909.09270
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: Which languages are evaluated? Context: <<<Title>>> Named Entity Recognition with Partially Annotated Training Data <<<Abstract>>> Supervised machine learning assumes the availability of fully-labeled data, but in many cases, such as low-resource languages, the only data available is partially annotated. We study the problem of Named Entity Recognition (NER) with partially annotated training data in which a fraction of the named entities are labeled, and all other tokens, entities or otherwise, are labeled as non-entity by default. In order to train on this noisy dataset, we need to distinguish between the true and false negatives. To this end, we introduce a constraint-driven iterative algorithm that learns to detect false negatives in the noisy set and downweigh them, resulting in a weighted training set. With this set, we train a weighted NER model. We evaluate our algorithm with weighted variants of neural and non-neural NER models on data in 8 languages from several language and script families, showing strong ability to learn from partial data. Finally, to show real-world efficacy, we evaluate on a Bengali NER corpus annotated by non-speakers, outperforming the prior state-of-the-art by over 5 points F1. <<</Abstract>>> <<<Introduction>>> Most modern approaches to NLP tasks rely on supervised learning algorithms to learn and generalize from labeled training data. While this has proven successful in high-resource scenarios, this is not realistic in many cases, such as low-resource languages, as the required amount of training data just doesn't exist. However, partial annotations are often easy to gather. We study the problem of using partial annotations to train a Named Entity Recognition (NER) system. In this setting, all (or most) identified entities are correct, but not all entities have been identified, and crucially, there are no reliable examples of the negative class. The sentence shown in Figure FIGREF2 shows examples of both a gold and a partially annotated sentence. Such partially annotated data is relatively easy to obtain: for example, a human annotator who does not speak the target language may recognize common entities, but not uncommon ones. With no reliable examples of the negative class, the problem becomes one of estimating which unlabeled instances are true negatives and which are false negatives. To address the above-mentioned challenge, we present Constrained Binary Learning (CBL) – a novel self-training based algorithm that focuses on iteratively identifying true negatives for the NER task while improving its learning. Towards this end, CBL uses constraints that incorporate background knowledge required for the entity recognition task. We evaluate the proposed methods in 8 languages, showing a significant ability to learn from partial data. We additionally experiment with initializing CBL with domain-specific instance-weighting schemes, showing mixed results. In the process, we use weighted variants of popular NER models, showing strong performance in both non-neural and neural settings. Finally, we show experiments in a real-world setting, by employing non-speakers to manually annotate romanized Bengali text. We show that a small amount of non-speaker annotation combined with our method can outperform previous methods. 
<<</Introduction>>> <<<Related Work>>> The supervision paradigm in this paper, partial supervision, falls broadly under the category of semi-supervision BIBREF0, and is closely related to weak supervision BIBREF1 and incidental supervision BIBREF2, in the sense that data is constructed through some noisy process. However, all of the most related work shares a key difference from ours: reliance on a small amount of fully annotated data in addition to the noisy data. FernandesBr11 introduces a transductive version of structured perceptron for partially annotated sequences. However, their definition of partial annotation is labels removed at random, so examples from all classes are still available if not contiguous. Fidelity Weighted Learning BIBREF3 uses a teacher/student model, in which the teacher has access to (a small amount) of high quality data, and uses this to guide the student, which has access to (a large amount) of weak data. HedderichKl18, following GoldbergerBe17, add a noise adaptation layer on top of an LSTM, which learns how to correct noisy labels, given a small amount of training data. We compare against this model in our experiments. In the world of weak supervision, Snorkel BIBREF4, BIBREF5, is a system that combines automatic labeling functions with data integration and noise reduction methods to rapidly build large datasets. They rely on high recall and consequent redundancy of the labeling functions. We argue that in certain realistic cases, high-recall candidate identification is unavailable. We draw inspiration from the Positive-Unlabeled (PU) learning framework BIBREF6, BIBREF7, BIBREF8, BIBREF9. Originally introduced for document classification, PU learning addresses problems where examples of a single class (for example, sports) are easy to obtain, but a full labeling of all other classes is prohibitively expensive. Named entity classification as an instance of PU learning was introduced in Grave14, which uses constrained optimization with constraints similar to ours. However, they only address the problem of named entity classification, in which mentions are given, and the goal is to assign a type to a named-entity (like `location', `person', etc.) as opposed to our goal of identifying and typing named entities. Although the task is slightly different, there has been work on building `silver standard' data from Wikipedia BIBREF10, BIBREF11, BIBREF12, using hyperlink annotations as the seed set and propagating throughout the document. Partial annotation in various forms has also been studied in the contexts of POS-tagging BIBREF13, word sense disambiguation BIBREF14, temporal relation extraction BIBREF15, dependency parsing BIBREF16, and named entity recognition BIBREF17. In particular, BIBREF17 study a similar problem with a few key differences: since they remove entity surfaces randomly, the dataset is too easy; and they do not use constraints on their output. We compare against their results in our experiments. Our proposed method is most closely aligned with the Constraint Driven Learning (CoDL) framework BIBREF18, in which an iterative algorithm reminiscent of self-training is guided by constraints that are applied at each iteration. <<</Related Work>>> <<<Constrained Binary Learning>>> Our method assigns instance weights to all negative elements (tokens tagged as O), so that false negatives have low weights, and all other instances have high weights. 
We calculate weights according to the confidence predictions of a classifier trained iteratively over the partially annotated data. We refer to our method as Constrained Binary Learning (CBL). We will first describe the motivation for this approach before moving on to the mechanics. We start with partially annotated data (which we call set $T$) in which some, but not all, positives are annotated (set $P$), and no negative is labeled. By default, we assume that any instance not labeled as positive is labeled as negative as opposed to unlabeled. This data (set $N$) is noisy in the sense that many true positives are labeled as negative (these are false negatives). Clearly, training on $T$ as-is will result in a noisy classifier. Two possible approaches are: 1) find the false negatives and label them correctly, or 2) find the false negatives and remove them. The former method affords more training data, but runs the risk of adding noise, which could be worse than the original partial annotations. The latter is more forgiving because of an asymmetry in the penalties: it is important to remove all false negatives in $N$, but inadvertently removing true negatives from $N$ is typically not a problem, especially in NER, where negative examples dominate. Further, a binary model (only two labels) is sufficient in this case, as we need only detect entities, not type them. We choose the latter method, but instead of removing false negatives, we adopt an instance-weighting approach, in which each instance is assigned a weight $v_i \ge 0$ according to confidence in the labeling of that instance. A weight of 0 means that the loss this instance incurs during training will not update the model. With this in mind, CBL takes two phases: first, it learns a binary classifier $\lambda $ using a constrained iterative process modeled after the CODL framework BIBREF18, and depicted in Figure FIGREF5. The core of the algorithm is the train-predict-infer loop. The training process (line 4) is weighted, using weights $V$. At the start, these can be all 1 (Raw), or can be initialized with prior knowledge. The learned model is then used to predict on all of $T$ (line 5). In the inference step (line 6), we take the predictions from the prior round and the constraints $C$ and produce a new labeling on $T$, and a new set of weights $V$. The details of this inference step are presented later in this section. Although our ultimate strategy is simply to assign weights (not change labels), in this inner loop, we update the labels on $N$ according to classifier predictions. In the second phase of CBL, we use the $\lambda $ trained in the previous phase to assign weights to instances as follows: Where $P_{\lambda }(y_i=\text{O} \mid x_i)$ is understood as the classifier's confidence that instance $x_i$ takes the negative label. In practice it is sufficient to use any confidence score from the classifier, not necessarily a probability. If the classifier has accurately learned to detect entities, then for all the false negatives in $N$, $P_{\lambda }(y_i=\text{O}|x_i)$ is small, which is the goal. Ultimately, we send the original multiclass partially annotated dataset along with final weights $V$ to a standard weighted NER classifier to learn a model. No weights are needed at test time. <<<NER with CBL>>> So far, we have given a high-level view of the algorithm. In this section, we will give more low-level details, especially as they relate to the specific problem of NER. 
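As a minimal illustration of the second-phase weight assignment described above, the sketch below assumes the binary classifier exposes a probability for the O (non-entity) label; the functional form is paraphrased from the surrounding description, and the function and argument names are ours.

```python
def assign_weights(tokens, positive_indices, prob_O):
    """Assign an instance weight v_i to every token in a sentence.

    Manually labeled entity tokens (set P) are fully trusted (v_i = 1).
    Default-negative tokens (set N) are weighted by the classifier's
    confidence that they really are non-entities, so likely false
    negatives get low weights and barely update the weighted NER model.
    """
    weights = []
    for i, tok in enumerate(tokens):
        if i in positive_indices:        # x_i in P: trusted annotation
            weights.append(1.0)
        else:                            # x_i in N: confidence in the O label
            weights.append(prob_O(tok))  # approx. P_lambda(y_i = O | x_i)
    return weights
```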
One contribution of this work is the inference step (line 6), which we address using a constrained Integer Linear Program (ILP) and describe in this section. However, the constraints are based on a value we call the entity ratio. First, we describe the entity ratio, then we describe the constraints and stopping condition of the algorithm. <<<Entity ratio and Balancing>>> We have observed that NER datasets tend to hold a relatively stable ratio of entity tokens to total tokens. We refer to this ratio as $b$, and define it with respect to some labeled dataset as: where $N$ is the set of negative examples. Previous work has shown that in fully-annotated datasets the entity ratio tends to be about $0.09 \pm 0.05$, depending on the dataset and genre BIBREF19. Intuitively, knowledge of the gold entity ratio can help us estimate when we have found all the false negatives. In our main experiments, we assume that the entity ratio with respect to the gold labeling is known for each training dataset. A similar assumption was made in ElkanNo08 when determining the $c$ value, and in Grave14 in the constraint determining the percentage of other examples. However, we also show in Section that knowledge of this ratio is not strictly necessary, and a flat value across all datasets produces similar performance. With a weighted training set, it is also useful to define the weighted entity ratio. When training an NER model on weighted data, one can change the weighted entity ratio to achieve different effects. To make balanced predictions on test, the entity ratio in the training data should roughly match that of the test data BIBREF20. To bias a model towards predicting positives or predicting negatives, the weighted entity ratio can be set higher or lower respectively. This effect is pronounced when using linear methods for NER, but not as clear in neural methods. To change the entity ratio, we scale the weights in $N$ by a scaling constant $\gamma $. Targeting a particular $b^*$, we may write: We can solve for $\gamma $: To obtain weights, $v^*_i$, that attain the desired entity ratio, $b^*$, we scale all weights in $N$ by $\gamma $. In the train-predict-infer loop, we balance the weights to a value near the gold ratio before training. <<</Entity ratio and Balancing>>> <<<Constraints and Stopping Condition>>> We encode our constraints with an Integer Linear Program (ILP), shown in Figure FIGREF17. Intuitively, the job of the inference step is to take predictions ($\hat{T}$) and use knowledge of the task to `fix' them. In the objective function (Eqn. DISPLAY_FORM18), token $i$ is represented by two indicator variables $y_{0i}$ and $y_{1i}$, representing negative and positive labels, respectively. Associated prediction scores $C_0$ and $C_1$ are from the classifier $\lambda $ in the last round of predictions. The first constraint (Eqn. ) encodes the fact that an instance cannot be both an entity and a non-entity. The second constraint (Eqn. ) enforces the ratio of positive to total tokens in the corpus to match a required entity ratio. $|T|$ is the total number of tokens in the corpus. $b$ is the required entity ratio, which increases at each iteration. $\delta $ allows some flexibility, but is small. Constraint encodes that instances in $P$ should be labeled positive since they were manually labeled and are by definition trustworthy. We set $\xi \ge 0.99$. This framework is flexible in that more complex language- or task-specific constraints could be added. 
For example, in English and many other languages with Latin script, it may help to add a capitalization constraint. In languages with rich morphology, certain suffixes may indicate or contraindicate a named entity. For simplicity, and because of the number of languages in our experiments, we use only a few constraints. After the ILP has selected predictions, we assign weights to each instance in preparation for training the next round. The decision process for an instance is: This is similar to Equation (DISPLAY_FORM6), except that the set of tokens that the ILP labeled as positive is larger than $P$. With new labels and weights, we start the next iteration. The stopping condition for the algorithm is related to the entity ratio. One important constraint (Eqn. ) governs how many positives are labeled at each round. This number starts at $|P|$ and is increased by a small value at each iteration, thereby improving recall. Positive instances are chosen in two ways. First, all instances in $P$ are constrained to be labeled positive (Eqn. ). Second, the objective function ensures that high-confidence positives will be chosen. The stopping condition is met when the number of required positive instances (computed using gold unweighted entity ratio) equals the number of predicted positive instances. <<</Constraints and Stopping Condition>>> <<</NER with CBL>>> <<</Constrained Binary Learning>>> <<<Experiments>>> We measure the performance of our method on 8 different languages using artificially perturbed labels to simulate the partial annotation setting. <<<Data>>> We experiment on 8 languages. Four languages – English, German, Spanish, Dutch – come from the CoNLL 2002/2003 shared tasks BIBREF21, BIBREF22. These are taken from newswire text, and have labelset of Person, Organization, Location, Miscellaneous. The remaining four languages come from the LORELEI project BIBREF23. These languages are: Amharic (amh: LDC2016E87), Arabic (ara: LDC2016E89), Hindi (hin: LDC2017E62), and Somali (som: LDC2016E91). These come from a variety of sources including discussion forums, newswire, and social media. The labelset is Person, Organization, Location, Geo-political entity. We define train/development/test splits, taking care to keep a similar distribution of genres in each split. Data statistics for all languages are shown in Table TABREF25. <<</Data>>> <<<Artificial Perturbation>>> We create partial annotations by perturbing gold annotated data in two ways: lowering recall (to simulate missing entities), and lowering precision (to simulate noisy annotations). To lower recall, we replace gold named entity tags with $O$ tags (for non-name). We do this by grouping named entity surface forms, and replacing tags on all occurrences of a randomly selected surface form until the desired amount remains. For example, if the token `Bangor' is chosen to be untagged, then every occurrence of `Bangor' will be untagged. We chose this slightly complicated method because the simplest idea (remove mentions randomly) leaves an artificially large diversity of surface forms, which makes the problem of discovering noisy entities easier. To lower precision, we tag a random span (of a random start position, and a random length between 1 and 3) with a random named entity tag. We continue this process until we reach the desired precision. When both precision and recall are to be perturbed, the recall adjustment is made first, and then the number of random spans to be added is calculated by the entities that are left. 
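To illustrate the perturbation procedure just described, here is a rough sketch of the two steps, operating on token lists with BIO tags; the token-level treatment of surface forms and the default label set are simplifications of ours, not the exact implementation.

```python
import random

def lower_recall(sentences, tags, target_recall):
    """Untag all occurrences of randomly chosen entity surface forms until
    roughly `target_recall` of the originally tagged tokens remain tagged."""
    surfaces = sorted({tok for sent, ts in zip(sentences, tags)
                       for tok, t in zip(sent, ts) if t != "O"})
    random.shuffle(surfaces)
    total = sum(t != "O" for ts in tags for t in ts)
    kept = total
    for surface in surfaces:
        if total == 0 or kept / total <= target_recall:
            break
        for sent, ts in zip(sentences, tags):
            for i, tok in enumerate(sent):
                if tok == surface and ts[i] != "O":
                    ts[i] = "O"          # replace the entity tag with O
                    kept -= 1
    return tags

def lower_precision(sentences, tags, n_noise_spans,
                    labels=("PER", "ORG", "LOC", "MISC")):
    """Tag random spans (random start, length 1-3) with random entity labels."""
    for _ in range(n_noise_spans):
        idx = random.randrange(len(sentences))
        sent, ts = sentences[idx], tags[idx]
        start = random.randrange(len(sent))
        length = random.randint(1, 3)
        label = random.choice(labels)
        for j in range(start, min(start + length, len(sent))):
            ts[j] = ("B-" if j == start else "I-") + label
    return tags
```

As described in the text, the recall adjustment would be applied first, with `n_noise_spans` then computed from the remaining entities so that the desired precision is reached.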
<<</Artificial Perturbation>>> <<<NER Models>>> In principle, CBL can use any NER method that can be trained with instance weights. We experiment with both non-neural and neural models. <<<Non-neural Model>>> For our non-neural system, we use a version of Cogcomp NER BIBREF24, BIBREF25 modified to use Weighted Averaged Perceptron. This operates on a weighted training set $D_w = \lbrace (x_i, y_i, v_i) \rbrace _{i=1}^N $, where $N$ is the number of training examples, and $v_i \ge 0$ is the weight on the $i$th training example. In this non-neural system, a training example is a word with context encoded in the features. We change only the update rule, where the learning rate $\alpha $ is multiplied by the weight: We use a standard set of features, as documented in BIBREF24. In order to keep the language-specific resources to a minimum, we did not use any gazetteers for any language. One of the most important features is Brown clusters, trained for 100, 500, and 1000 clusters for the CoNLL languages, and 2000 clusters for the remaining languages. We trained these clusters on Wikipedia text for the four CoNLL languages, and on the same monolingual text used to train the word vectors (described in Section SECREF26). <<</Non-neural Model>>> <<<Neural Model>>> A common neural model for NER is the BiLSTM-CRF model BIBREF26. However, because the Conditional Random Field (CRF) layer calculates loss at the sentence level, we need a different method to incorporate token weights. We use a variant of the CRF that allows partial annotations by marginalizing over all possible sequences BIBREF27. When using a standard BiLSTM-CRF model, the loss of a dataset ($D$) composed of sentences ($s$) is calculated as: Where $P_\theta (\mathbf {y}^{(s)} | \textbf {x}^{(s)})$ is calculated by the CRF over outputs from the BiLSTM. In the marginal CRF framework, it is assumed that $\mathbf {y}^{(s)}$ is necessarily partial, denoted as $\mathbf {y}^{(s)}_p$. To incorporate partial annotations, the loss is calculated by marginalizing over all possible sequences consistent with the partial annotations, denoted as $C(\mathbf {y}_p^s)$. However, this formulation assumes that all possible sequences are equally likely. To address this, BIBREF17 introduced a way to weigh sequences. It's easy to see that this formulation is a generalization of the standard CRF if $q(.)=1$ for the gold sequence $\mathbf {y}$, and 0 for all others. The product inside the summation depends on tag transition probabilities and tag emission probabilities, as well as token-level “weights" over the tagset. These weights can be seen as defining a soft gold labeling for each token, corresponding to confidence in each label. For clarity, define the soft gold labeling over each token $x_i$ as $\mathbf {G}_i \in [0,1]^{L}$, where $L$ is the size of the labelset. Now, we may define $q(.)$ as: Where $G_i^{y_i}$ is understood as the weight in $\mathbf {G}_i$ that corresponds to the label $y_i$. We incorporate our instance weights in this model with the following intuitions. Recall that if an instance weight $v_i=0$, this indicates low confidence in the label on token $x_i$, and therefore the labeling should not update the model at training time. Conversely, if $v_i=1$, then this label is to be trusted entirely. If $v_i=0$, we set the soft labeling weights over $x_i$ to be uniform, which is as good as no information. Since $v_i$ is defined as confidence in the O label, the soft labeling weight for O increases proportionally to $v_i$. 
Any remaining probability mass is distributed evenly among the other labels. To be precise, for tokens in $N$, we calculate values for $\mathbf {G}_i$ as follows: For example, consider phase 1 of Constrained Binary Learning, in which the labelset is collapsed to two labels ($L=2$). Assuming that the O label has index 0, then if $v_i=0$, then $\mathbf {G}_i = [0.5, 0.5]$. If $v_i=0.6$, then $\mathbf {G}_i = [0.6, 0.4]$. For tokens in $P$ (which have some entity label with high confidence), we always set $\mathbf {G}_i$ with 1 in the given label index, and 0 elsewhere. We use pretrained GloVe BIBREF28 word vectors for English, and the same pretrained vectors used in BIBREF29 for Dutch, German, and Spanish. The other languages are distributed with monolingual text BIBREF23, which we used to train our own skip-n-gram vectors. <<</Neural Model>>> <<</NER Models>>> <<<Baselines>>> We compare against several baselines, including two from prior work. <<<Raw annotations>>> The simplest baseline is to do nothing to the partially annotated data and train on it as is. <<</Raw annotations>>> <<<Instance Weights>>> Although CBL works with no initialization (that is, all tokens with weight 1), we found that a good weighting scheme can boost performance for certain models. We design weighting schemes that give instances in $N$ weights corresponding to an estimate of the label confidence. For example, non-name tokens such as respectfully should have weight 1, but possible names, such as Russell, should have a low weight, or 0. We propose two weighting schemes: frequency-based and window-based. For the frequency-based weighting scheme, we observed that names have relatively low frequency (for example, Kennebunkport, Dushanbe) and common words are rarely names (for example the, and, so). We weigh each instance in $N$ according to its frequency. where $freq(x_i)$ is the frequency of the $i^{th}$ token in $N$ divided by the count of the most frequent token. In our experiments, we computed frequencies over $P+N$, but these could be estimated on any sufficiently large corpus. We found that the neural model performed poorly when the weights followed a Zipfian distribution (e.g. most weights very small), so for those experiments, we took the log of the token count before normalizing. For the window-based weighting scheme, noting that names rarely appear immediately adjacent to each other in English text, we set weights for tokens within a window of size 1 of a name (identified in $P$) to be $1.0$, and for tokens farther away to be 0. where $d_i$ is the distance of the $i^{th}$ token to the nearest named entity in $P$. Finally, we combine the two weighting schemes as: <<</Instance Weights>>> <<<Self-training with Marginal CRF>>> BIBREF17 propose a model based on marginal CRF BIBREF27 (described in Section SECREF26). They follow a self-training framework with cross-validation, using the trained model over all but one fold to update gold labeling distributions in the final fold. This process continues until convergence. They use a partial-CRF framework similar to ours, but taking predictions at face value, without constraints. <<</Self-training with Marginal CRF>>> <<<Neural Network with Noise Adaptation>>> Following BIBREF30, we used a neural network with a noise adaptation layer. This extra layer attempts to correct noisy examples given a probabilistic confusion matrix of label noise. 
Since this method needs a small amount of labeled data, we selected 500 random tokens to be the gold training set, in addition to the partial annotations. As with our BiLSTM experiments, we use pretrained GloVe word vectors for English, and the same pretrained vectors used in BIBREF29 for Dutch, German, and Spanish. We omit results from the remaining languages because the scores were substantially worse even than training on raw annotations. <<</Neural Network with Noise Adaptation>>> <<</Baselines>>> <<<Experimental Setup and Results>>> We show results from our experiments in Table TABREF30. In all experiments, the training data is perturbed at 90% precision and 50% recall. These parameters are similar to the scores obtained by human annotators in a foreign language (see Section SECREF5). We evaluate each experiment with both non-neural and neural methods. First, to get an idea of the difficulty of NER in each language, we report scores from models trained on gold data without perturbation (Gold). Then we report results from an Oracle Weighting scheme (Oracle Weighting) that takes partially annotated data and assigns weights with knowledge of the true labels. Specifically, mislabeled entities in set $N$ are given weight 0, and all other tokens are given weight 1.0. This scheme is free from labeling noise, but should still get lower scores than Gold because of the smaller number of entities. Since our method estimates these weights, we do not expect CBL to outperform the Oracle method. Next, we show results from all baselines. The bottom two sections are our results, first with no initialization (Raw), and CBL over that, then with Combined Weighting initialization, and CBL over that. <<</Experimental Setup and Results>>> <<<Analysis>>> Regardless of initialization or model, CBL improves over the baselines. Our best model, CBL-Raw BiLSTM-CRF, improves over the Raw Annotations BiLSTM-CRF baseline by 11.2 points F1, and the Self-training prior work by 2.6 points F1, showing that it is an effective way to address the problem of partial annotation. Further, the best CBL version for each model is within 3 points of the corresponding Oracle ceiling, suggesting that this weighting framework is nearly saturated. The Combined weighting scheme is surprisingly effective for the non-neural model, which suggests that the intuition about frequency as distinction between names and non-names holds true. It gives modest improvement in the neural model. The Self-training method is effective, but is outperformed by our best CBL method, a difference we discuss in more detail in Section SECREF43. The Noise Adaptation method outperforms the Raw annotations Cogcomp baseline in most cases, but does not reach the performance of the Self-training method, despite using some fully labeled data. It is instructive to compare the neural and non-neural versions of each setup. The neural method is better overall, but is less able to learn from the knowledge-based initialization weights. In the non-neural method, the difference between Raw and Combined is nearly 20 points, but the difference in the neural model is less than 3 points. Combined versions of the non-neural method outperform the neural method on 3 languages: Dutch, Arabic, and Hindi. Further, in the neural method, CBL-Raw is always worse than CBL-Combined. This may be due to the way that weights are used in each model. In the non-neural model, a low enough weight completely cancels the token, whereas in the neural model it is still used in training. 
Since the neural model performs well in the Oracle setting, we know that it can learn from hard weights, but it may have trouble with the subtle differences encoded in frequencies. We leave it to future work to discover improved ways of incorporating instance weights in a BiLSTM-CRF. In seeking to understand the details of the other results, we need to consider the precision/recall tradeoff. First, all scores in the Gold row had higher precision than recall. Then, training on raw partially annotated data biases a classifier strongly towards predicting few entities. All results from the Raw annotations row have precision more than double the recall (e.g. Dutch Precision, Recall, F1 were: 91.5, 32.4, 47.9). In this context, the problem this paper explores is how to improve the recall of these datasets without harming the precision. <<</Analysis>>> <<<Difference from Prior Work>>> While our method has several superficial similarities with prior work, most notably BIBREF17, there are some crucial differences. Our methods are similar in that they both use a model trained at each step to assign a soft gold-labeling to each token. Each algorithm iteratively trains models using weights from the previous steps. One difference is that BIBREF17 use cross-validation to train, while we follow BIBREF18 and retrain with the entire training set at each round. However, the main difference has to do with the focus of each algorithm. Recall the discussion in Section SECREF3 regarding the two possible approaches of 1) find the false negatives and label them correctly, and 2) find the false negatives and remove them. Conceptually, the former was the approach taken by BIBREF17, the latter was our approach. Another way to look at this is as focusing on predicting correct tag labels (BIBREF17) or focus on predicting O tags with high confidence (ours). Even though they use soft labeling (which they show to be consistently better than hard labeling), it is possible that the predicted tag distribution is incorrect. Our approach allows us to avoid much of the inevitable noise that comes from labelling with a weak model. <<</Difference from Prior Work>>> <<</Experiments>>> <<<Bengali Case Study>>> So far our experiments have shown effectiveness on artificially perturbed labels, but one might argue that these systematic perturbations don't accurately simulate real-world noise. In this section, we show how our methods work in a real-world scenario, using Bengali data partially labeled by non-speakers. <<<Non-speaker Annotations>>> In order to compare with prior work, we used the train/test split from ZPWVJKM16. We removed all gold labels from the train split, romanized it BIBREF31, and presented it to two non-Bengali speaking annotators using the TALEN interface BIBREF32. The instructions were to move quickly and annotate names only when there is high confidence (e.g. when you can also identify the English version of the name). They spent about 5 total hours annotating, without using Google Translate. This sort of non-speaker annotation is possible because the text contains many `easy' entities – foreign names – which are noticeably distinct from native Bengali words. For example, consider the following: Romanized Bengali: ebisi'ra giliyyaana phinnddale aaja pyaalestaaina adhiinastha gaajaa theke aaja raate ekhabara jaaniyyechhena . Translation: ABC's Gillian Fondley has reported today from Gaza under Palestine today. The entities are Gillian Findlay, ABC, Palestine, and Gaza. 
While a fast-moving annotator may not catch most of these, `pyaalestaaina' could be considered an `easy' entity, because of its visual and aural similarity to `Palestine.' A clever annotator may also infer that if Palestine is mentioned, then Gaza may be present. Annotators are moving fast and being intentionally non-thorough, so the recall will be low. Since they do not speak Bengali, there are likely to be some mistakes, so the precision may drop slightly also. This is exactly the noisy partial annotation scenario addressed in this paper. The statistics of this data can be seen in Table TABREF49, including annotation scores computed with respect to the gold training data for each annotator, as well as the combined score. We show results in Table TABREF50, using the BiLSTM-CRF model. We compare against other low-resource approaches published on this dataset, including two based on Wikipedia BIBREF33, BIBREF12, another based on lexicon translation from a high-resource language BIBREF34. These prior methods operate under somewhat different paradigms than this work, but have the same goal: maximizing performance in the absence of gold training data. Raw annotations is defined as before, and gives similar high-precision low-recall results. The Combined Weighting scheme improves over Raw annotations by 10 points, achieving a score comparable to the prior state of the art. Beyond that, CBL-Raw outperforms the prior best by nearly 6 points F1, although CBL-Combined again underwhelms. To the best of our knowledge, this is the first result showing a method for non-speaker annotations to produce high-quality NER scores. The simplicity of this method and the small time investment for these results gives us confidence that this method can be effective for many low-resource languages. <<</Non-speaker Annotations>>> <<</Bengali Case Study>>> <<<Conclusions>>> We explore an understudied data scenario, and introduce a new constrained iterative algorithm to solve it. This algorithm performs well in experimental trials in several languages, on both artificially perturbed data, and in a truly low-resource situation. <<</Conclusions>>> <<</Title>>>
{ "references": [ "Bengali,English, German, Spanish, Dutch,Amharic,Arabic,Hindi,Somali " ], "type": "extractive" }
2003.09586
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: How much is decoding speed increased by increasing encoder and decreasing decoder depth? Context: <<<Title>>> Analyzing Word Translation of Transformer Layers <<<Abstract>>> The Transformer translation model is popular for its effective parallelization and performance. Though a wide range of analyses of the Transformer have been conducted recently, the role of each Transformer layer in translation has not been studied to our knowledge. In this paper, we propose approaches to analyze the translation performed in encoder / decoder layers of the Transformer. Our approaches in general project the representations of an analyzed layer to the pre-trained classifier and measure the word translation accuracy. For the analysis of encoder layers, our approach additionally learns a weight vector to merge multiple attention matrices into one and transforms the source encoding to the target side with the merged alignment matrix, aligning source tokens with target translations while bridging different input-output lengths. While analyzing decoder layers, we additionally study the effects of the source context and the decoding history on word prediction by bypassing the corresponding self-attention or cross-attention sub-layers. Our analysis reveals that the translation starts at the very beginning of the "encoding" (specifically at the source word embedding layer), and shows how translation evolves during the forward computation of layers. Based on observations gained in our analysis, we propose that increasing encoder depth while removing the same number of decoder layers can simply but significantly boost the decoding speed. Furthermore, simply inserting a linear projection layer before the decoder classifier, which shares the weight matrix with the embedding layer, can effectively provide small but consistent and significant improvements in our experiments on the WMT 14 English-German, English-French and WMT 15 Czech-English translation tasks (+0.42, +0.37 and +0.47 respectively). <<</Abstract>>> <<<Introduction>>> Neural Machine Translation (NMT) has achieved great success in the last few years BIBREF0, BIBREF1, BIBREF2. The popular Transformer BIBREF2 model, which outperforms previous RNN/CNN based translation models BIBREF0, BIBREF1, is based on multi-layer self-attention networks and can be parallelized effectively. Recently, a wide range of analyses BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10 related to the Transformer have been conducted. For example, bisazza2018lazy perform a fine-grained analysis of how various source-side morphological features are captured at different levels of the NMT encoder; they find no correlation between the accuracy of source morphology encoding and translation quality, and that morphological features are captured only in context and only to the extent directly transferable to the target words. voita2019bottom study how information flows across Transformer layers and find that representations differ significantly depending on the objectives (MT, LM and MLM). tang2019encoders find that encoder hidden states outperform word embeddings significantly in word sense disambiguation.
However, how the Transformer translation model transforms individual source tokens into corresponding target tokens (word translations, as shown in Figure FIGREF1), and specifically, what the role of each Transformer layer in translation is and at which layer a target word is translated, has not been studied to our knowledge. To detect the roles of Transformer layers in translation, in this paper, we follow previous probing approaches BIBREF11, BIBREF12, BIBREF13, and propose to measure the word translation accuracy of the output representations of individual Transformer layers by probing the corresponding target translation tokens in these representations. In addition to analyzing the role of each encoder / decoder layer, we also analyze the contribution of the source context and the decoding history to translation by testing the effects of the self-attention sub-layer and the cross-attention sub-layer in decoder layers. Our analysis reveals that the translation already starts at the source embedding layer, which offers an explanation for bisazza2018lazy. It also demonstrates how the word translation evolves across encoder / decoder layers and the effects of the source "encoding" and the decoding history on the translation of target tokens. Based on the observations from our analysis, we find that: 1) the proper use of more encoder layers with fewer decoder layers can significantly boost decoding speed without harming quality; 2) inserting a linear projection layer before the decoder classifier can provide small but significant and consistent improvements in our experiments on the WMT 14 English-German, English-French and WMT 15 Czech-English news translation tasks ($+0.42$, $+0.37$ and $+0.47$ BLEU respectively). <<</Introduction>>> <<<Word Translation Accuracy Analysis>>> To analyze the word translation accuracy of the Transformer, we first freeze a trained Transformer model so that its behavior during our analysis is consistent with how it performs in translation; then we compute the forward pass and extract the output representations of the analyzed layer. Finally, we apply a linear projection layer to extract and enhance features related to translation and feed the projected representations to the frozen decoder classifier of the converged Transformer. The linear projection layer is the only module trained and updated on the training set with the original Transformer being frozen; thus it only transforms between vector spaces without generating new features for word translation. An illustration of our analysis approach for encoder / decoder layers is shown in Figure FIGREF2. <<<Analysis of Encoder Layers>>> Analyzing the word translation accuracy of encoder layers requires us to align source tokens with their corresponding target tokens. We use the alignment matrices computed by cross-attention sub-layers in decoder layers to align source tokens with target tokens. As there are multiple matrices produced by each sub-layer (due to the multi-head attention mechanism) and multiple decoder layers, we have to ensemble them into one matrix of high alignment accuracy using weights. Assume there are $d$ decoder layers with $h$ attention heads in each multi-head attention sub-layer, which results in $d * h$ alignment matrices $A_1, ... A_{d * h}$. We use a $d * h$-dimensional weight vector $w$ to combine all these attention matrices. The weight vector is first normalized by softmax into a probability distribution $p$: $p_i = \frac{\exp (w_i)}{\sum _{j=1}^{d * h} \exp (w_j)}$, where $i$ indicates the $i$th element in $w$.
Then we use $p$ as the weights of the corresponding attention matrices and merge them into one alignment matrix $A = \sum _{i=1}^{d * h} p_i A_i$. $w$ can be trained during backpropagation together with the linear projection layer. After we obtain the alignment matrix $A$, instead of selecting the target token with the highest alignment weight as the translation of a source token, we perform matrix multiplication between the encoded source representations $E$ (size: source sentence length $*$ input dimension) and the alignment matrix $A$ (size: source sentence length $*$ target sentence length) to transform / re-order the source representations to the target side: $T_E = A^T \times E$, where $A^T$ and $\times $ indicate the transpose of $A$ and matrix multiplication. Thus $T_E$ has the same length as the gold translation sequence, and the target sequence can be used directly as the translations represented by $T_E$. Though the source representations are transformed to the target side, we suggest this does not involve any target-side information, as the pre-trained Transformer is frozen and the transformation does not introduce any representation from the decoder side. We do not retrieve the target tokens with the highest alignment scores as the word translations of the corresponding source tokens, because translation may involve alignments from one/none/multiple source token(s) to one/none/multiple target token(s), and we suggest that using a soft alignment (attention weights) may lead to more reliable gradients than a hard alignment. <<</Analysis of Encoder Layers>>> <<<Analysis of Decoder Layers>>> The analysis of the prediction accuracy of the decoder is simpler than that of the encoder, as we can directly use the shifted target sequence without having to bridge the different sequence lengths of the source sentence and the target as in analyzing the encoder. We can simply use the output representations of the analyzed layer and evaluate their prediction accuracy after projection. However, as studied by li2019word, the decoder involves two kinds of "translation": one (performed by the self-attention sub-layer) translates the history token sequence into the next token, and another (performed by the cross-attention sub-layer) translates by attending to source tokens. We additionally analyze the effects of these two kinds of translation on prediction accuracy by dropping the corresponding sub-layer of the analyzed decoder layer (i.e., we compute only the other sub-layer and the feed-forward sub-layer, keeping only the residual connection in place of the computation of the skipped sub-layer).
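To summarize the probing procedure for encoder layers in code form, the sketch below merges the $d*h$ cross-attention matrices with a learned softmax-normalized weight vector, re-orders the source states to the target side, projects them, and scores them with the frozen classifier; module names and tensor shapes are our own assumptions (the decoder-layer probe is simpler, feeding projected layer outputs to the same frozen classifier).

```python
import torch
import torch.nn as nn

class EncoderLayerProbe(nn.Module):
    """Probe of one encoder layer: only align_weights and proj are trainable."""
    def __init__(self, d_model, num_align, frozen_classifier):
        super().__init__()
        self.align_weights = nn.Parameter(torch.zeros(num_align))  # w, length d*h
        self.proj = nn.Linear(d_model, d_model)                    # linear projection layer
        self.classifier = frozen_classifier                        # frozen decoder classifier
        for p in self.classifier.parameters():
            p.requires_grad = False

    def forward(self, enc_states, attn_matrices):
        # enc_states: (src_len, d_model); attn_matrices: (d*h, src_len, tgt_len)
        p = torch.softmax(self.align_weights, dim=0)        # p = softmax(w)
        A = torch.einsum("k,kst->st", p, attn_matrices)     # merged alignment (src_len, tgt_len)
        T_E = A.transpose(0, 1) @ enc_states                # T_E = A^T x E, (tgt_len, d_model)
        return self.classifier(self.proj(T_E))              # word-translation logits per target position
```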
The concatenation of newstest 2012 and newstest 2013 was used for validation and newstest 2014 as the test set. The number of warm-up steps was set to $8k$ . Each training batch contained at least $25k$ target tokens, and the model was trained for $100k$ training steps. The large batch size is achieved by gradient accumulation. We used a dropout of $0.1$ and employed a label smoothing BIBREF16 value of $0.1$. We used the Adam optimizer BIBREF17 with $0.9$, $0.98$ and $10^{-9}$ as $\beta _{1}$, $\beta _{2}$ and $\epsilon $. Parameters were uniformly initialized under the Lipschitz constraint BIBREF18. We averaged the last 5 checkpoints saved with an interval of $1,500$ training steps. For decoding, we used a beam size of 4, and evaluated tokenized case-sensitive BLEU . The averaged model achieved a BLEU score of $27.96$ on the test set. The linear projection layer and the weight vector $w$ of 48 elements for alignment during the analysis of encoder layers were trained on the training set. We monitored the accuracy on the development set during their training, and reported results on the test set. <<</Settings>>> <<<Analysis>>> The analysis results of the trained Transformer are shown in Table TABREF8. Layer 0 stands for the embedding layer. “Acc” indicates the prediction accuracy. “-Self attention” and “-Cross attention” in the decoder layer analysis mean bypassing the computation of the self-attention sub-layer and the cross-attention sub-layer respectively of the analyzed decoder layer. In layer analysis of the encoder and decoder, “$\Delta $” indicates improvements in word translation accuracy of the analyzed layer over the previous layer. While analyzing the self-attention and cross-attention sub-layers, “$\Delta $” is the accuracy loss when we remove the computation of the corresponding sub-layer. The results of encoder layers in Table TABREF8 shows that: 1) surprisingly but reasonably the translation already starts at the embedding layer, and an amazingly sound word translation accuracy is obtained at the source embedding layer! This indicates that the translation already begins at the very beginning of “encoding” (specifically, the source embedding layer) instead of at the decoder. 2) With the stacking of encoder layers, the word translation accuracy improves (i.e. encoder layers gradually fix word translations of the source embedding layer), and improvements brought by different layers are relatively similar. While analyzing decoder layers, Table TABREF8 shows that: 1) shallow decoder layers (0, 1, 2 and 3) perform significantly worse compared to corresponding encoder layers (until reaching the 4th decoder layer, where a word translation accuracy which surpasses the embedding layer of the encoder is achieved); 2) The improvements brought by different decoder layers are quite different. Specifically, layer 4 and 5 bring more improvements than the others. While analyzing the effects of the source context (the self-attention sub-layer is responsible for the target language re-ordering, and “-Self attention” prevents using the decoding history in the analyzed decoder layer) and the decoding history (“-Cross attention” prevents copying translation from the source “encoding”), Table TABREF8 shows that in shallow decoder layers (layer 1-3), the decoding history plays a similarly important role like the source “encoding”, while in deep layers, the source “encoding” plays a more vital role than the decoding history. 
Thus, we suggest our comparison sheds light on the importance of the translation performed by the encoder. <<</Analysis>>> <<<Translation from Encoder Layers>>> Since our approach extracts features for translation from the output representations of encoder layers while analyzing them, is it possible to perform word translation with only these features from encoder layers, without using the decoder? To achieve this goal, we feed the output representations of an encoder layer to the corresponding linear projection layer, pass the output of the linear projection layer directly to the decoder classifier, and retrieve the tokens with the highest probabilities as “translations” (a short sketch of this procedure is given below). Even though such “translations” from encoder layers have the same length and the same word order as the source sentences, individual source tokens are translated into the target language to some extent. We evaluated BPEized case-insensitive BLEU and BLEU 1 (1-gram BLEU, which indicates word translation quality), and the results are shown in Table TABREF13. “FULL” is the performance of the whole Transformer model (decoding with a beam size of 4). “$\Delta $” means the improvement obtained by the introduced layer (or the decoder for “FULL”) over the previous layer. Table TABREF13 shows that though there is a significant gap in BLEU scores between encoder layers and the full Transformer, the gap in BLEU 1 is relatively smaller than in BLEU. It is reasonable that encoder layers achieve a comparably high BLEU 1 score but a low BLEU score, as they perform word translation in the same order as the source sentence, without any word re-ordering into the target language. We find the BLEU 1 score achieved by the source embedding layer alone (i.e., translating with only embeddings) surprising and worth noting. <<</Translation from Encoder Layers>>> <<</Analysis Experiments>>> <<<Findings Based on Observations>>> <<<Trade Decoder Layers for Encoder Layers>>> From our analysis of the 6-layer Transformer base model (Table TABREF8), we find that, in contrast to the improvements in word translation accuracy with increasing depth on the encoder side, some decoder layers contribute significantly smaller improvements than the others (i.e., layers 4 and 5 bring larger word translation accuracy improvements than layers 1, 2, 3 and 6 in Table TABREF8). We suggest there might be more “lazy” layers in the decoder than in the encoder, which means that it might be easier to compress the decoder than the encoder, and further conjecture that simply removing some decoder layers while adding the same number of encoder layers may improve the performance of the Transformer. The other motivations for doing so are as follows: each decoder layer has one more cross-attention sub-layer than an encoder layer, so increasing the number of encoder layers while decreasing the same number of decoder layers reduces the number of parameters and the computational cost; and the decoder has to compute a forward pass for every decoding step (the decoding of each target token), so the acceleration from reducing decoder layers is more significant in decoding, which is valuable in production. <<</Trade Decoder Layers for Encoder Layers>>> <<<Linear Projection Layer before Classifier>>> We compare the word translation accuracy achieved by the last decoder layer (with the linear projection layer) during analysis and that of the pre-trained standard Transformer (without the projection layer before the decoder classifier); the results are shown in Table TABREF20.
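The sketch referenced above: greedy per-position “translation” from a single encoder layer, under our own assumptions (PyTorch, the classifier being a plain matrix product with the tied embedding matrix, and softmax omitted since it does not change the argmax); variable names are illustrative.

```python
import torch

def translate_from_encoder_layer(enc_states, proj, embedding_weight, id2tok):
    """Greedy per-position word 'translation' from one encoder layer.

    enc_states:       (src_len, d_model) output of the analyzed encoder layer
    proj:             the trained linear projection for this layer (an nn.Linear)
    embedding_weight: (vocab, d_model) embedding matrix tied with the classifier
    """
    logits = proj(enc_states) @ embedding_weight.t()   # (src_len, vocab)
    token_ids = logits.argmax(dim=-1)                  # most probable target token per source position
    return [id2tok[int(i)] for i in token_ids]
```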
Table TABREF20 shows that feeding the representations from the last decoder layer to the decoder classifier after the linear projection leads to slightly higher word prediction accuracy than feeding them to the classifier directly. We conjecture two potential reasons. First, following vaswani2017attention, we tie the weight matrix of the classifier to the embedding matrix; processing the inserted linear projection layer followed by the classifier is equivalent to using only a classifier, but with a new weight matrix (the product of the linear projection layer's weight matrix and the embedding matrix), which indirectly decouples the classifier weight matrix from the embedding matrix. Second, as described in our analysis approach, the linear projection layer is expected to enhance the parts of its input representations that are relevant to the classification while fading out the parts irrelevant to word prediction, which may benefit performance. Thus, we suggest that inserting a linear projection layer, which simply performs a matrix multiplication between the input representations and a weight matrix, before the decoder classifier may help improve word translation accuracy and further lead to improved translation quality. <<</Linear Projection Layer before Classifier>>> <<<Results and Analysis>>> <<<Effects of Encoder/Decoder Depth>>> We examine the effects of reducing the decoder depth while adding corresponding numbers of encoder layers; the results are shown in Table TABREF24. The decoding speed is measured on the test set, which contains $3,003$ sentences, with a beam size of 4. “Speed up” stands for the decoding acceleration compared to the 6-layer Transformer. Table TABREF24 shows that while the acceleration from trading decoder layers for encoder layers is small in training, it is significant in decoding. Specifically, the Transformer with 10 encoder layers and 2 decoder layers is $2.32$ times as fast as the 6-layer Transformer while achieving a slightly higher BLEU. Though the Transformer with 11 encoder layers and only 1 decoder layer fails to achieve performance comparable to the 6-layer Transformer, our results still suggest that using more encoder layers with fewer but sufficient decoder layers can significantly boost the decoding speed, which is simple but effective and valuable for production applications. We present the word accuracy analysis results of the Transformer with 10 encoder layers and 2 decoder layers in Table TABREF27. Comparing Table TABREF27 with Table TABREF8, we find that: 1) the differences in improvements ($1.17$ vs. $0.11$) brought by individual layers of the 10-layer encoder are larger than those of the 6-layer encoder ($1.90$ vs. $0.87$), indicating that there might be some “lazy” layers in the 10-layer encoder; 2) decreasing the depth of the decoder removes those “lazy” decoder layers present in the 6-layer decoder and makes the decoder layers rely more on the source “encoding” (by comparing the effects of skipping the self-attention sub-layer and the cross-attention sub-layer on performance). <<</Effects of Encoder/Decoder Depth>>> <<<Effects of the Projection Layer>>> To study the effects of the linear projection layer on performance, we conducted experiments on the WMT 14 English-French and WMT 15 Czech-English news translation tasks in addition to the WMT 14 English-German task. We also conducted significance tests BIBREF19. Results are tested on newstest 2014 and 2015 respectively and shown in Table TABREF28.
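The equivalence used in the first conjecture above (a projection followed by a tied classifier computes the same logits as a single classifier whose weight matrix is the product of the projection matrix and the transposed embedding matrix) is just associativity of matrix multiplication; a tiny NumPy check with arbitrary toy sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab = 8, 20                          # toy sizes, for illustration only
h = rng.normal(size=(1, d))               # a decoder output vector
W = rng.normal(size=(d, d))               # linear projection inserted before the classifier
E = rng.normal(size=(vocab, d))           # embedding matrix, tied with the classifier weights

logits_with_projection = (h @ W) @ E.T    # projection, then the tied classifier
logits_single_classifier = h @ (W @ E.T)  # one classifier whose weight matrix is W @ E^T
assert np.allclose(logits_with_projection, logits_single_classifier)
```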
Table TABREF28 shows that the linear projection layer is able to provide small but consistent and significant improvements in all 3 tasks. <<</Effects of the Projection Layer>>> <<</Results and Analysis>>> <<</Findings Based on Observations>>> <<<Related Work>>> <<<Analysis of NMT Models.>>> li2019word analyze the word alignment quality in NMT with prediction difference, and further analyze the effect of alignment errors on translation errors, which demonstrates that NMT captures good word alignment for those words mostly contributed from source, while their word alignment is much worse for those words mostly contributed from target. voita2019analyzing evaluate the contribution of individual attention heads to the overall performance of the model and analyze the roles played by them in the encoder. yang2019assessing propose a word reordering detection task to quantify how well the word order information is learned by Self-Attention Networks (SAN) and RNN, and reveal that although recurrence structure makes the model more universally-effective on learning word order, learning objectives matter more in the downstream tasks such as machine translation. tsai2019transformer regard attention as applying a kernel smoother over the inputs with the kernel scores being the similarities between inputs, and analyze individual components of the Transformer’s attention with the new formulation via the lens of the kernel. tang2019encoders find that encoder hidden states outperform word embeddings significantly in word sense disambiguation. he2019towards measure the word importance by attributing the NMT output to every input word and reveal that words of certain syntactic categories have higher importance while the categories vary across language pairs. voita2019bottom use canonical correlation analysis and mutual information estimators to study how information flows across Transformer layers and find that representations differ significantly depending on the objectives (MT, LM and MLM). An early work BIBREF3 performs a fine-grained analysis of how various source-side morphological features are captured at different levels of the NMT encoder. While they are unable to find any correlation between the accuracy of source morphology encoding and translation quality, they discover that morphological features are only captured in context and only to the extent that they are directly transferable to the target words, thus they suggest encoder layers are “lazy”, while our analysis offers an explanation for their results as the translation already starts at the source embedding layer, and possibly source embeddings already represent linguistic features of their translations more than those of themselves. <<</Analysis of NMT Models.>>> <<<Analysis of BERT.>>> BERT BIBREF20 uses the Transformer encoder, and analysis of BERT may provide valuable references for analyzing the Transformer. jawahar2019bert provide novel support that BERT networks capture structural information, and perform a series of experiments to unpack the elements of English language structure learned by BERT. tenney2019bert employ the edge probing task suite to explore how the different layers of the BERT network can resolve syntactic and semantic structure within a sentence, and find that the model represents the steps of the traditional NLP pipeline in an interpretable and localizable way, and that the regions responsible for each step appear in the expected sequence: POS tagging, parsing, NER, semantic roles, then coreference. 
pires2019multilingual present a large number of probing experiments, and show that Multilingual-BERT’s robust ability to generalize cross-lingually is underpinned by a multilingual representation. <<</Analysis of BERT.>>> <<<Accelerating Decoding.>>> zhang2018accelerating propose average attention as an alternative to the self-attention network in the Transformer decoder to accelerate its decoding. wu2018pay introduce lightweight convolution and dynamic convolutions which are simpler and more efficient than self-attention. The number of operations required by their approach scales linearly in the input length, whereas self-attention is quadratic. zhang2018speeding apply cube pruning into neural machine translation to speed up the translation. zhang2018exploring propose to adapt an n-gram suffix based equivalence function into beam search decoding, which obtains similar translation quality with a smaller beam size, making NMT decoding more efficient. Non-Autoregressive Translation (NAT) BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27 enables parallelized decoding, while there is still a significant quality drop compared to traditional autoregressive beam search, our findings on using more encoder layers might also be adapted to the NAT. <<</Accelerating Decoding.>>> <<</Related Work>>> <<<Conclusion>>> We propose approaches for the analysis of word translation accuracy of Transformer layers to investigate how translation is performed. To measure word translation accuracy, our approaches train a linear projection layer which bridges representations from the analyzing layer and the pre-trained classifier. While analyzing encoder layers, our approach additionally learns a weight vector to merge multiple attention matrices into one, and transforms the source “encoding” to the target shape by multiplying the merged alignment matrix. For the analysis of decoder layers, we additionally analyze the effects of the source context and the decoding history in word prediction through bypassing the corresponding sub-layers. Two main findings of our analysis are: 1) the translation starts at the very beginning of “encoding” (specifically at the source word embedding layer), and evolves further with the forward computation of layers; 2) translation performed by the encoder is very important for the evolution of word translation of decoder layers, especially for Transformers with few decoder layers. Based on our analysis, we propose to increase encoder depth while removing the same number of decoder layers to boost the decoding speed. We further show that simply inserting a linear projection layer before the decoder classifier which shares the weight matrix with the embedding layer can effectively provide small but consistent and significant improvements. <<</Conclusion>>> <<</Title>>>
{ "references": [ "the Transformer with 10 encoder layers and 2 decoder layers is $2.32$ times as fast as the 6-layer Transformer" ], "type": "extractive" }
1911.03270
Please answer the following question with yes or no based on the given text. You only need to output 'Yes' or 'No' without any additional explanation. Question: Does the paper report the performance on the task of a Neural Machine Translation model? Context: <<<Title>>> Char-RNN and Active Learning for Hashtag Segmentation <<<Abstract>>> We explore the abilities of a character recurrent neural network (char-RNN) for hashtag segmentation. Our approach to the task is the following: we generate a synthetic training dataset according to frequent n-grams that satisfy predefined morpho-syntactic patterns, to avoid any manual annotation. The active learning strategy limits the training dataset and selects an informative training subset. The approach does not require any language-specific settings and is compared for two languages, which differ in inflection degree. <<</Abstract>>> <<<Introduction>>> A hashtag is a form of metadata labeling used in various social networks to help users navigate through the content. For example, one of the most popular hashtags on Instagram is "#photooftheday" [photo of the day]. Hashtags are written without any delimiters, although some users use an underscore or camel-casing to separate words. Hashtags themselves may be a great source of features for subsequent opinion mining and social network analysis. Basically, hashtags serve as keyphrases for a post in social media. By segmenting the hashtags into separate words, we may use regular techniques to process them. The problem of hashtag segmentation resembles another problem, namely word segmentation. The problem of word segmentation is widely studied in languages like Chinese, which lacks whitespace to separate words, or in German, where compound words need to be split. In languages like English or Russian, where compounds are not as frequent as in German and where whitespace delimiters are regularly used, the problem of word segmentation arises mainly when working with hashtags. Formally, the problem is stated as follows: given a string of $n$ characters $s = s_1 \ldots s_n$, we need to define the boundaries of the substrings $s_{i:j}, i < j$, so that each substring is meaningful (i.e., a regular word, named entity, abbreviation, number, etc.). The main challenge of this problem is that the segmentation might be ambiguous. For example, a string “somethingsunclear” might be segmented as “something sun clear” or “somethings unclear”. To deal with the ambiguity, more processing is required, such as POS-tagging, estimation of the frequencies of all hashtag constituents, or of their co-occurrence frequencies. The frequencies can be estimated on a large corpus, such as BNC, COCA or Wikipedia. However, when working with noisy user-generated data, such as texts or hashtags from social networks, the problem of unknown words (or out-of-vocabulary words) arises. In language modeling this problem is solved by using smoothing, such as Laplacian smoothing or Kneser-Ney smoothing. Otherwise, additional heuristics can be used to extend the dictionary with word-like sequences of characters. Unlike in language modelling, in hashtag segmentation frequency estimation is not the only source of evidence for defining word boundaries. Alternatively, candidate substrings can be evaluated according to their length BIBREF0.
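To make the frequency-based view of segmentation sketched above concrete, here is a minimal dynamic-programming segmenter that scores candidate substrings with unigram frequencies and a crude length penalty for unknown substrings (in the spirit of the length heuristic of BIBREF0). The toy dictionary, counts, and penalty are purely illustrative and are not the baseline used later in the paper.

```python
import math

def unigram_logp(word, freq, total):
    """Unigram log-probability; unknown substrings are penalized by length."""
    if word in freq:
        return math.log(freq[word] / total)
    return math.log(1.0 / (total * 10 ** len(word)))  # simple length penalty for OOV candidates

def segment(hashtag, freq, total, max_word_len=20):
    """Dynamic programming over split points, maximizing the sum of unigram log-probabilities."""
    n = len(hashtag)
    best = [-math.inf] * (n + 1)
    back = [0] * (n + 1)
    best[0] = 0.0
    for j in range(1, n + 1):
        for i in range(max(0, j - max_word_len), j):
            score = best[i] + unigram_logp(hashtag[i:j], freq, total)
            if score > best[j]:
                best[j], back[j] = score, i
    words, j = [], n
    while j > 0:
        words.append(hashtag[back[j]:j])
        j = back[j]
    return list(reversed(words))

toy_freq = {"photo": 50, "of": 500, "the": 900, "day": 80}
print(segment("photooftheday", toy_freq, total=10_000))  # ['photo', 'of', 'the', 'day']
```

The ambiguity discussed above shows up directly here: with different toy counts, “somethingsunclear” could come out as either of its two readings.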
Several research groups have shown that introducing character-level information into models helps to deal with unknown words in various NLP tasks, such as text classification BIBREF1, named entity recognition BIBREF2, POS-tagging BIBREF3, dependency parsing BIBREF4, word tokenization and sentence segmentation BIBREF5 or machine translation BIBREF6, BIBREF7. A character-level model either treats the text as a sequence of characters without any tokenization or incorporates character-level information into word-level information. Character-level models are able to capture morphological patterns, such as prefixes and suffixes, so that the model is able to define the POS tag or NE class of an unknown word. Following this intuition, we use a character-level model for hashtag segmentation. Our main motivation is the following: if a character-level model is able to capture word ending patterns, it should also be able to capture word boundary patterns. We apply a character-level model, specifically a recurrent neural network, referred to further as char-RNN, to the task of hashtag segmentation. The char-RNN is trained and tested on synthetic data, which was generated from texts collected from social networks in English and Russian independently. We generate synthetic data for training by extracting frequent $N$-grams and removing whitespaces. The test data is annotated manually. Since the problem statement is very basic, we use additional techniques, such as active learning, character embeddings and RNN hidden state visualization, to interpret the weights learned by the char-RNN. We address the following research questions and claim our respective contributions. We show that our char-RNN model outperforms the traditional unigram or bigram language models with extensive use of external sources BIBREF8, BIBREF0. What is the impact of high inflection in languages such as Russian on the performance of character-level modelling, as opposed to languages with little inflection such as English? We claim that character-level models offer benefits for processing highly inflected languages by capturing the rich variety of word boundary patterns. As getting a sufficient amount of annotated training data is labor-intensive and error-prone, a natural question would be: can we avoid annotating real-world data altogether and still obtain high-quality hashtag segmentations? We approach this problem by using morpho-syntactic patterns to generate synthetic hashtags. The potentially unlimited volume of our synthetic training dataset raises yet another question: can an informative training subset be selected? To this end, we apply an active learning-based strategy to subset selection and identify a small portion of the original synthetic training dataset that is necessary to obtain high performance. <<</Introduction>>> <<<Neural Model for Hashtag Segmentation>>> <<<Sequence Labeling Approach>>> We treat hashtag segmentation as a sequence labeling task. Each character is labeled with one of the labels $\mathcal {L} = \lbrace 0, 1\rbrace $: (1) for the end of a word, and (0) otherwise (Table TABREF9 and TABREF9; see also the worked example below). Given a string $s = {s_1, \ldots , s_n}$ of characters, the task is to find the labels $Y^* = {y_1^*, \ldots , y_n^*}$, such that $ Y^* = \arg \max _{Y \in \mathcal {L} ^n} p(Y | s).$ The neural model for hashtag segmentation consists of three layers. The embedding layer is used to compute the distributed representation of input characters.
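A small worked example of the labeling scheme defined above; the helper name and the example segmentation are our own, not taken from the paper:

```python
def labels_from_segmentation(words):
    """Character labels for the sequence-labeling formulation:
    1 marks the last character of a word, 0 marks every other character."""
    labels = []
    for word in words:
        labels.extend([0] * (len(word) - 1) + [1])
    return "".join(words), labels

hashtag, y = labels_from_segmentation(["photo", "of", "the", "day"])
print(hashtag)  # photooftheday
print(y)        # [0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1]
```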
Each character $c_i$ is represented with an embedding vector $e_i \in \mathbb {R}^{d_e}$, where $d_e$ is the size of the character embedding. $E$ is the look-up table of size $|V| \times d_e$, where $V$ is the vocabulary, i.e., $|V|$ is the number of unique characters. The feature layer is used to process the input. We use a bi-directional recurrent layer with LSTM units to process the input in the forward and backward directions. The LSTM units we use are the default Keras LSTM units, as introduced by Hochreiter. The inference layer is used to predict the labels of each character. We use a single dense layer for inference and $softmax$ to predict the probabilities of the labels $\mathcal {L} = \lbrace 0, 1\rbrace $. Each character is assigned the most probable label. The parameters of the char-RNN are the following: embedding layer = 50 input dimensions; feature layer = 64 bidirectional LSTM units; inference layer = 2 output neurons with a softmax activation function mapped to each of the 64 outputs (a sketch of this architecture in Keras is given below). <<</Sequence Labeling Approach>>> <<</Neural Model for Hashtag Segmentation>>> <<<Dataset>>> In this section we describe the datasets we used for hashtag segmentation. We experimented with Russian and English datasets to compare the performance of the char-RNN. <<<Russian dataset>>> To our knowledge there is no available dataset for hashtag segmentation in Russian, so we faced the need to create our own dataset. Our approach to the dataset creation was twofold: the training data was created from social network texts by selecting frequent $n$-grams and generating hashtags following some hashtag patterns, while the test dataset consists of real hashtags collected from vk.com (a Russian social network), which were segmented manually. We followed the same strategy to create an English language dataset. <<<Training Dataset Generation>>> We scraped texts from several vk.com pages about civil services. Next, we extracted frequent $n$-grams that do not contain stopwords and consist of words and digits in various combinations (such as word + 4 digits + word or word + word + 8 digits). We used several rules to merge these $n$-grams so that they resemble real hashtags, for example: (1) remove all whitespace: wordwordworddigits (examples: ЁлкаВЗазеркалье, нескольколетназад); (2) replace all whitespace with an underscore: word_word_digits (examples: увд_юга_столицы); (3) remove some whitespace and replace the other spaces with an underscore: word_worddigits (examples: ищусвоегогероя_уфпс). A word here might be a word in lower case, upper case or capitalized, or an abbreviation. There might be up to four digits. In general, we introduced 11 types of hashtags, which contain simply constructed hashtags as well as complex ones. Here are a couple of examples: (1) the hashtag consists of two parts, a word/abbreviation in the first part and a number or word in the second, with an underscore as a delimiter (examples: word_2017, NASA_2017, word_word); (2) two or three words separated by an underscore (examples: Word_Word, word_word_word). <<</Training Dataset Generation>>> <<<Test Dataset Annotation>>> We manually segmented the 2K most frequent hashtags extracted from the same collection of scraped texts. The resulting size of the Russian dataset is 15k hashtags for training and 2k hashtags for testing. <<</Test Dataset Annotation>>> <<</Russian dataset>>> <<<English dataset>>> We used the dataset released by BIBREF0.
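A minimal Keras sketch of the char-RNN architecture listed above. This is our reading of the reported layer sizes, not the authors' code: whether “64 bidirectional LSTM units” means 64 per direction or 64 in total is not fully specified (we use 64 per direction), index 0 is assumed to be reserved for padding, and the optimizer and loss are generic defaults.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_char_rnn(vocab_size, max_len, emb_dim=50, lstm_units=64):
    """Bi-LSTM character tagger with the layer sizes reported above."""
    inputs = keras.Input(shape=(max_len,), dtype="int32")
    x = layers.Embedding(vocab_size, emb_dim, mask_zero=True)(inputs)             # character embeddings
    x = layers.Bidirectional(layers.LSTM(lstm_units, return_sequences=True))(x)   # feature layer
    outputs = layers.Dense(2, activation="softmax")(x)                            # per-character labels {0, 1}
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model
```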
This dataset consists of: a collection of tweets, which we used to generate the synthetic training hashtags according to the same rules as for Russian; and a collection of annotated and separated hashtags, which we used as a testing set. From this test set we excluded ambiguous hashtags annotated with several possible segmentations. The resulting size of the English dataset is 15k hashtags for training and 1k hashtags for testing. <<</English dataset>>> <<</Dataset>>> <<<Active Learning>>> We followed the strategy for active learning, as in BIBREF9. The training procedure consists of multiple rounds of training and testing of the model. We start by training the model on 1k hashtags, which were randomly selected from the training dataset. Next, we test the model on the remainder of the training dataset and select 1k hashtags according to the current model’s uncertainty in its prediction of the segmentation. These hashtags are not manually relabelled, since a) they belong to the synthetically generated training dataset and b) the correct labeling for these hashtags is already known. In BIBREF9 three uncertainty measures are presented, from which we selected the maximum normalized log-probability (MNLP) assigned by the model to the most likely sequence of tags. The model is then retrained on the hashtags it is uncertain about. Note that here we do not check if the predictions of the model are correct. We are more interested in training the model on hard examples than in evaluating the quality of intermediate results. We refer the reader to BIBREF9 for more technical details. <<</Active Learning>>> <<<Experiments>>> <<<Baseline>>> As the baseline algorithm, we consider the BIBREF0 system architecture, a state-of-the-art approach. Unfortunately, their approach is not straightforwardly applicable to our synthetic Russian dataset, because it requires twofold input: a hashtag and a corresponding tweet or text from other social media, which is absent in our task setting due to the synthetic nature of the training dataset. For this reason, as a baseline for the English dataset we refer to the results from BIBREF0, and for the Russian dataset we use the probabilistic language model described in BIBREF8. The probability of a sequence of words is the product of the probabilities of each word given the word’s context (the preceding word): $P(w_1, \ldots , w_n) = \prod _{i} P(w_i \mid w_{i-1})$. In case there is no such pair of words $(w_{i-1}, w_i)$ in the set of bigrams, the probability of word $w_i$ is obtained as if from a unigram model with additive smoothing, where $V$ is the vocabulary, $f(w_{i})$ is the frequency of word $w_{i}$, and $\alpha = 1$ is the smoothing parameter. In Table TABREF30 we present three baseline results: the LM BIBREF8 for the Russian and English datasets, and the context-based LM BIBREF0 for the English dataset only. We treat a segmentation as correct if the predicted and target sequences are the same. <<</Baseline>>> <<<Neural Model>>> In our experiments we used 5 epochs to train the char-RNN with LSTM units. For each language we experimented with three training sets containing different numbers of hashtags. In the case of Russian, the more data we use for training, the higher the accuracy. As for English, the highest accuracy score was achieved on a set of 10k hashtags (Table TABREF32): due to English's lower morphological diversity and complexity, the model starts to overfit on larger training sets. Training showed that the model mostly makes wrong segmentation predictions on hashtags of complex types, such as “wordword_worddigits”.
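A sketch of one selection round of the active learning strategy described above, under our own assumptions: per-character predictions are treated as independent, so the normalized score of the most likely tag sequence is the mean of the per-position maxima, and `score_fn` is a hypothetical callable returning (sequence length, number of labels) log-probabilities for one hashtag.

```python
import numpy as np

def mnlp(log_probs):
    """Maximum normalized log-probability of the most likely label sequence.
    Lower values mean the model is less certain about its best labeling."""
    return log_probs.max(axis=-1).mean()

def selection_round(score_fn, pool, batch_size=1000):
    """Keep the batch_size hashtags the current model is least certain about."""
    scores = np.array([mnlp(score_fn(x)) for x in pool])
    order = np.argsort(scores)                 # most uncertain (lowest MNLP) first
    selected = [pool[i] for i in order[:batch_size]]
    remaining = [pool[i] for i in order[batch_size:]]
    return selected, remaining
```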
Our results outperform all chosen baselines for both the Russian and English datasets. Note that we have two baselines for the English dataset: one is purely frequency-based, the other is cited from BIBREF0, where external resources are heavily used. We show that, using a significantly smaller amount of training data, we achieve a boost in quality by switching from statistical word language models to the char-RNN. As expected, the results on the Russian dataset are higher than on the English dataset, due to the higher degree of inflection in Russian as opposed to English. <<</Neural Model>>> <<<Visualization>>> In order to see whether embeddings of characters that are similar in terms of string segmentation appear near each other in the resulting 50-dimensional embedding space, we applied SVD as a dimensionality reduction technique to the character embeddings and plotted them in 2D space. For both languages, meaningful and interpretable clusters can be extracted: capital letters, letters in lower case, digits and the underscore, as shown below. <<</Visualization>>> <<</Experiments>>> <<<Related Work>>> The problem of word segmentation has received much attention in Chinese and German NLP for word segmentation and compound splitting BIBREF10, respectively. The major techniques for word segmentation exploit string matching algorithms BIBREF11, language models BIBREF12, BIBREF0 and sequence labeling methods BIBREF10. The recent trend of deep learning as a major approach to NLP tasks in general and sequence labeling in particular has resulted in various RNN-based and CNN-based models for Chinese word segmentation BIBREF10, BIBREF13, BIBREF14. Since BIBREF10, Chinese word segmentation has been addressed as a character labeling task: each character of the input sequence is labeled with one of the four labels $\mathcal {L} = \lbrace B, M, E, S\rbrace $, which stand for a character at the beginning, in the middle, or at the end of a word, or a single-character word. BIBREF10 uses a maximum entropy tagger to tag each character independently. This approach was extended in BIBREF15 to a sequence modeling task, where linear-chain conditional random fields were used and achieved state-of-the-art results. Neural approaches to Chinese segmentation mainly use various architectures of character-level recurrent neural networks BIBREF16, BIBREF17, BIBREF18 and very deep convolutional networks BIBREF19. The same architectures are used for dialectal Arabic segmentation BIBREF20. The evolution of German compound splitters is more or less similar to that of Chinese word segmentation systems. The studies of German compound splitting started with corpus- and frequency-based approaches BIBREF13, BIBREF14 and are now dominated by neural distributional semantic models. However, German compound splitting is rarely seen as a sequence modeling task. The problem of hashtag segmentation, analysis and usage in English has been approached by several research groups. As shown by BIBREF12, hashtag segmentation for the TREC 2011 microblog track BIBREF21 improves the quality of information retrieval, while BIBREF0 shows that hashtag segmentation improves the linking of entities extracted from tweets to a knowledge base. Both BIBREF12 and BIBREF0 use a Viterbi-like algorithm for hashtag segmentation: all possible segmentations of a hashtag are scored with a scoring function based on $P_{Unigram}$, probabilities computed according to a unigram model estimated on a large enough corpus or obtained from an N-gram service.
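A minimal version of the embedding visualization step above (the paper states only that SVD was applied to the 50-dimensional character embeddings; mean-centering and keeping the top two components are our own, conventional choices):

```python
import numpy as np

def embeddings_to_2d(E):
    """Project a (num_chars, 50) character-embedding matrix to 2D with SVD."""
    E_centered = E - E.mean(axis=0, keepdims=True)
    U, S, Vt = np.linalg.svd(E_centered, full_matrices=False)
    return U[:, :2] * S[:2]   # 2D coordinates, one row per character
```

The two resulting columns can be fed to any scatter-plot routine, with one point per character.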
Following the idea of scoring segmentation candidates, BIBREF11 introduces other scoring functions, which include a bigram model (2GM) and a Maximum Unknown Matching (MUM), which is adjustable to unseen words. BIBREF22 attempt to split camel-cased hashtags using a rule-based approach and POS-tagging for further semantic classification. WordSegment has been used for sentiment analysis BIBREF23, BIBREF24 and other applications. To our knowledge, there has been little work done on word or hashtag segmentation in Russian. <<<Active Learning in NLP>>> Active learning is a machine learning technique which allows efficient use of the available training data. It presumes that an initial model is first trained on a very small amount of data and then applied to a large unlabeled set. Next, the model chooses a few of the most difficult examples and asks an external knowledge source for the desired labels. Upon receiving these labels, the model is updated and retrained on the new training set. There might be a few rounds of label querying and model updating. To use an active learning strategy, we need a definition of what a difficult example is and how to score its difficulty. One of the most common scoring approaches is entropy-based uncertainty sampling, which selects the examples with the lowest prediction probability. Active learning is widely used in NLP applications when there is little annotated data while the amount of unlabeled data is abundant. Mostly used for text classification with traditional machine learning classifiers BIBREF25, BIBREF26, active learning is less commonly applied to deep learning sequence classifiers. Recent works report on scoring word embeddings that are likely to be updated with the greatest magnitude BIBREF27 and on using the maximum normalized log-probability (MNLP) assigned by the model to the most likely sequence of tags BIBREF9. <<</Active Learning in NLP>>> <<<Training on synthetic data>>> The lack of training data is an issue for many NLP applications. There have been attempts to generate and use synthetic data for training question answering systems BIBREF28 and SQL2text systems BIBREF29. In BIBREF0 synthetic hashtags are generated by removing whitespace characters from frequent n-grams, while in BIBREF30 German compounds are synthesized for further machine translation. <<</Training on synthetic data>>> <<</Related Work>>> <<<Conclusions>>> In this paper we approach the problem of hashtag segmentation by using char-RNNs. We treat the problem of hashtag segmentation as a sequence labeling task, so that each symbol of a given string is labeled with 1 (there should be a whitespace after this symbol) or 0 (otherwise). We use two datasets to test this approach, in English and in Russian, without any language-specific settings. We compare the char-RNN to traditional probabilistic algorithms. To interpret the results, we use a few visualization techniques and an active learning strategy to evaluate the complexity of the training data, since we use synthetically generated hashtags for training. The results show the following. When approached at the character level, the hashtag segmentation problem can be solved using a relatively small and simple recurrent neural network model, without the use of any external corpora and vocabularies. Such a char-RNN not only significantly outperforms traditional frequency-based language models, but can also be trained on synthetic data generated according to morpho-syntactic patterns, without any manual annotation and preprocessing.
In languages with high inflection (such as Russian), the char-RNN achieves higher results than in languages with little inflection (such as English), due to the ability of the char-RNN to capture and memorize word boundary patterns, especially word ending patterns (e.g., adjective endings “ый”, “ая”, “ое” or verbal endings “ать”, “еть” in Russian). The amount of generated synthetic training data can be limited by using active learning techniques, which allow selecting a sufficient training subset without any loss of quality. <<</Conclusions>>> <<</Title>>>
{ "references": [ "No" ], "type": "boolean" }
1911.03270
Please answer the following question with yes or no based on the given text. You only need to output 'Yes' or 'No' without any additional explanation. Question: Is the RNN model evaluated against any baseline? Context: <<<Title>>> Char-RNN and Active Learning for Hashtag Segmentation <<<Abstract>>> We explore the abilities of character recurrent neural network (char-RNN) for hashtag segmentation. Our approach to the task is the following: we generate synthetic training dataset according to frequent n-grams that satisfy predefined morpho-syntactic patterns to avoid any manual annotation. The active learning strategy limits the training dataset and selects informative training subset. The approach does not require any language-specific settings and is compared for two languages, which differ in inflection degree. <<</Abstract>>> <<<Introduction>>> A hashtag is a form of metadata labeling used in various social networks to help the users to navigate through the content. For example, one of the most popular hashtags on Instagram is "#photooftheday" [photo of the day]. Hashtags are written without any delimiters, although some users use an underscore or camel-casing to separate words. Hashtags themselves may be a great source for features for following opinion mining and social network analysis. Basically hashtags serve as keyphrases for a post in social media. By segmenting the hashtags into separate words we may use regular techniques to process them. The problem of hashtag segmentation resembles of another problem, namely word segmentation. The problem of word segmentation is widely studied in languages like Chinese, since it lacks whitespaces to separate words, or in German to split compound words. In languages like English or Russian, where compounds are not that frequent as in German and where whitespace delimiters are regularly used, the problem of word segmentation arises mainly when working with hashtags. Formally the problem is stated as follows: given a string of $n$ character $s = s_1 \ldots s_n$ we need to define the boundaries of the substrings $s_{i:j}, i < j$, so that each substring is meaningful (i.e. is a regular word, named entity, abbreviation, number, etc). The main challenge of this problem is that the segmentation might be ambiguous. For example, a string “somethingsunclear” might be segmented as “something sun clear” or “somethings unclear”. To deal with the ambiguity more processing is required, such as POS-tagging, estimation of frequencies of all hashtag constituencies or their co-occurence frequency. The frequencies can be estimated on a large corpus, such as BNC , COCA , Wikipedia. However when working with noisy user generated data, such as texts or hashtags from social networks, the problem of unknown words (or out of vocabulary words) arises. In language modeling this problem is solved by using smoothing, such as Laplacian smoothing or Knesser-Ney smoothing. Otherwise additional heuristics can be used to extend the dictionary with word-like sequences of characters. Unlike language modelling, in hashtag segmentation frequency estimation is not only source for defining word boundaries. Otherwise candidate substrings can be evaluated according to length BIBREF0. 
Several research groups have shown that introducing character level into models help to deal with unknown words in various NLP tasks, such as text classification BIBREF1, named entity recognition BIBREF2, POS-tagging BIBREF3, dependency parsing BIBREF4, word tokenization and sentence segmentation BIBREF5 or machine translation BIBREF6, BIBREF7. The character level model is a model which either treats the text as a sequence of characters without any tokenization or incorporates character level information into word level information. Character level models are able to capture morphological patterns, such as prefixes and suffixes, so that the model is able to define the POS tag or NE class of an unknown word. Following this intuition, we use a character level model for hashtag segmentation. Our main motivation is the following: if the character level model is able to capture word ending patterns, it should also be able to capture the word boundary patterns. We apply a character level model, specifically, a recurrent neural network, referred further as char-RNN, to the task of hashtag segmentation. The char-RNN is trained and tested on the synthetic data, which was generated from texts, collected from social networks in English and Russian, independently. We generate synthetic data for training by extracting frequent $N$-grams and removing whitespaces. The test data is annotated manually . Since the problem statement is very basic, we use additional techniques, such as active learning, character embeddings and RNN hidden state visualization, to interpret the weights, learned by char-RNN. We address the following research questions and claim our respective contributions: We show that our char-RNN model outperforms the traditional unigram or bigram language models with extensive use of external sources BIBREF8, BIBREF0. What is the impact of high inflection in languages such as Russian on the performance of character-level modelling as opposed to languages with little inflection such as English? We claim that character-level models offer benefits for processing highly inflected languages by capturing the rich variety of word boundary patterns. As getting sufficient amount of annotated training collection is labor-intensive and error-prone, a natural question would be: can we avoid annotating real-world data altogether and still obtain high quality hashtag segmentations? We approach this problem by using morpho-syntactic patterns to generate synthetic hashtags. A potentially unlimited volume of our synthetic training dataset raises yet another question of whether an informative training subset could be selected. To this extent, we apply an active learning-based strategy to subset selection and identify a small portion of the original synthetic training dataset, necessary to obtain a high performance. <<</Introduction>>> <<<Neural Model for Hashtag Segmentation>>> <<<Sequence Labeling Approach>>> We treat hashtag segmentation as a sequence labeling task. Each character is labeled with one of the labels $\mathcal {L} = \lbrace 0, 1\rbrace $, (1) for the end of a word, and (0) otherwise (Table TABREF9 and TABREF9). Given a string $s = {s_1, \ldots , s_n}$ of characters, the task is to find the labels $Y^* = {y_1^*. \ldots , y_n^*}$, such that $ Y^* = \arg \max _{Y \in \mathcal {L} ^n} p(Y | s).$ The neural model for hashtag segmentation consists of three layers. The embedding layer is used to compute the distributed representation of input characters. 
Each character $c_i$ is represented with an embedding vector $e_i \in \mathbb {R}^{d_e}$, where $d_e$ is the size of the character embedding. $E$ is the look up table of size $|V| \times d_e$, where $V$ is the vocabulary, i.e. the number of unique characters. The feature layer is used to process the input. We use a bi-directional recurrent layer with LSTM units to process the input in forward and backward directions. The LSTM units we use are default keras LSTM units as introduced by Hochreiter. The inference layer is used to predict the labels of each character. We use a single dense layer as f or inference and $softmax$ to predict the probabilities of the labels $\mathcal {L} = \lbrace 0, 1\rbrace $. Each character is assigned with the most probable label. The parameters of the char-RNN are the following: Embedding layer = 50 input dimensions; Feature layer = 64 bidirectional LSTM units; Inference layer = 2 output neurons with softmax activation function mapped to each of 64 outputs. <<</Sequence Labeling Approach>>> <<</Neural Model for Hashtag Segmentation>>> <<<Dataset>>> In this section we describe the datasets we used for hashtag segmentation. We experimented with Russian and English datasets to compare the performance of the char-RNN. <<<Russian dataset>>> To our knowledge there is no available dataset for hashtag segmentation in Russian, so we faced the need to create our own dataset. Our approach to the dataset creation was twofold: the training data was created from social network texts by selecting frequent $n$-grams and generating hashtags following some hashtag patterns. The test dataset consists of real hashtags collected from vk.com (a Russian social network) and were segmented manually. We followed the same strategy to create an English language dataset. <<<Training Dataset Generation>>> We scraped texts from several pages about civil services from vk.com. Next we extracted frequent $n$-grams that do not contain stopwords and consist of words and digits in various combinations (such as word + 4 digits + word or word + word + 8 digits). We used several rules to merge these $n$-grams so that they resemble real hashtags, for example: remove all whitespace: wordwordworddigits Examples: ЁлкаВЗазеркалье, нескольколетназад replace all whitespace with an underscore: word_word_digits Examples: увд_юга_столицы remove some whitespace and replace other spaces with an underscore: word_worddigits. Examples: ищусвоегогероя_уфпс A word here might be a word in lower case, upper case or capitalized or an abbreviation. There might be up to four digits. In general, we introduced 11 types of hashtags, which contain simply constructed hashtags as well as the complex ones. Here are a couple of examples: The hashtag consists of two parts: the word/abbreviation in the first part and the number or word in the second. The underscore is a delimiter. Examples: word_2017, NASA_2017, word_word Two or three words, which are separated by an underscore. Examples: Word_Word, word_word_word <<</Training Dataset Generation>>> <<<Test Dataset Annotation>>> We segmented manually 2K the most frequent hashtags, extracted from the same collection of the scraped texts. The resulting size of the Russian dataset is 15k hashtags for training and 2k hashtags for testing. <<</Test Dataset Annotation>>> <<</Russian dataset>>> <<<English dataset>>> We used the dataset, released by BIBREF0. 
This dataset consists of: a collection of tweets, which we used to generate the synthetic training hashtags according to the same rules as for Russian; a collection of annotated and separated hashtags, which we used as a testing set. From this test set we excluded ambiguous hashtags, annotated with several possible segmentations. The resulting size of the English dataset is 15k hashtags for training and 1k hashtags for testing. <<</English dataset>>> <<</Dataset>>> <<<Active Learning>>> We followed the strategy for active learning, as in BIBREF9. The training procedure consists of multiple rounds of training and testing of the model. We start by training the model on 1k hashtags, which were randomly selected from the training dataset. Next we test the model on the reminder of the training dataset and select 1k hashtags according to the current model’s uncertainty in its prediction of the segmentation. These hashtags are not manually relabelled, since a) they belong to the synthetically generated training dataset and b) the correct labeling for these hashtag is already known. In BIBREF9 three uncertainty measure are presented, from which we selected the maximum normalized log-probability (MNLP) assigned by the model to the most likely sequence of tags. The model is then retrained on the hashtags it is uncertain about. Note, that here we do not check if the predictions of the model are correct. We are more interested in training the model on hard examples than in evaluating the quality of intermediate results. We refer the reader to BIBREF9 for more technical details. <<</Active Learning>>> <<<Experiments>>> <<<Baseline>>> As for baseline algorithm, we consider the BIBREF0 system architecture as a state-of-the-art algorithm. Unfortunately, their approach is not straightforwardly applicable to our synthetic Russian dataset, because it requires twofold input: a hashtag and a corresponding tweet or a text from any other social media, which is absent in our task setting due to synthetic nature of the training dataset. For this reason as a baseline algorithm for English dataset we refer to results from BIBREF0, and as for Russian dataset, we used the probabilistic language model, described in BIBREF8. The probability of a sequence of words is the product of the probabilities of each word, given the word’s context: the preceding word. As in the following equation: where In case there is no such a pair of words $(w_{i-1}, w_i)$ in the set of bigrams, the probability of word $w_i$ is obtained as if it was only an unigram model: where $V$ – vocabulary, $f(w_{i})$ – frequency of word $w_{i}$, and $\alpha $ = 1. In Table TABREF30 we present three baseline results: LM BIBREF8 for Russian and English datasets; context-based LM BIBREF0 for English dataset only. We treat a segmentation as correct if prediction and target sequences are the same. <<</Baseline>>> <<<Neural Model>>> In our experiments we used 5 epochs to train the char-RNN with LSTM units. For each language we observed three datasets with different number of hashtags. In case of Russian language, the more data we use while training, the higher the accuracy. As for English, the highest accuracy score was achieved on a set of 10k hashtags (Table TABREF32). Due to it's lower morphological diversity and complexity the model starts to overfit on training sets with large sizes. The training showed that mostly the model makes wrong predictions of segmentation on hashtags of complex types, such as “wordword_worddigits”. 
Our results outperform all choosen baseline both for Russian and English datasets. Note, that we have two baselines for the English dataset: one is purely frequency-based, another is cited from BIBREF0, where external resources are heavily used. We show that using significantly less amount of training data, we achieve a boost in quality by switching from statistical word language models to char-RNN. As expected, the results on Russian dataset are higher than for the English dataset due to higher inflection degree in Russian as opposed to English. <<</Neural Model>>> <<<Visualization>>> In order to see if embeddings of similar characters, in terms of string segmentation, appear near each-other in their resulting 50-dimensional embedding space, we have applied one technique for dimensionality reduction: SVD to character embeddings to plot them on 2D space. For both languages meaningful and interpretable clusters can be extracted: capital letters, letters in lower case, digits and underscore, as shown below. <<</Visualization>>> <<</Experiments>>> <<<Related Work>>> The problem of word segmentation has received much attention in Chinese and German NLP for word segmentation and compound splitting BIBREF10, respectively. The major techniques for word segmentation exploit string matching algorithms BIBREF11, language models BIBREF12, BIBREF0 and sequence labeling methods BIBREF10. Recent trend of deep learning as a major approach for any NLP task in general and sequence labeling in particular resulted in using various RNN-based models and CNN-based model for Chinese word segmentation BIBREF10, BIBREF13, BIBREF14. Since BIBREF10 Chinese word segmentation is addressed as a character labeling task: each character of the input sequence is labeled with one of the four labels $\mathcal {L} = \lbrace B, M, E, S\rbrace $, which stand for character in Begin, Middle or End of the word or Single character word. BIBREF10 uses a maximum entropy tagger to tag each character independently. This approach was extended in BIBREF15 to the sequence modeling task, and linear conditional random fields were used to attempt it and receive state of the art results. A neural approach to Chinese segmentation mainly uses various architectures of character level recurrent neural networks BIBREF16, BIBREF17, BIBREF18 and very deep constitutional networks BIBREF19. Same architectures are used for dialectal Arabic segmentation BIBREF20. The evolution of German compound splitters is more or less similar to Chinese word segmentation systems. The studies of German compound splitting started with corpus- and frequency-based approaches BIBREF13, BIBREF14 and are now dominated with neural-based distributional semantic models. However, German compound splitting is rarely seen as sequence modeling task. The problem of hashtag segmentation, analysis and usage in English has been approached by several research groups. As it was shown by BIBREF12 hashtag segmentation for TREC microblog track 2011 BIBREF21 improves the quality of information retrieval, while BIBREF0 shows that hashtag segmentation improves linking of entities extracted from tweets to a knowledge base. Both BIBREF12, BIBREF0 use Viterbi-like algorithm for hashtag segmentation: all possible segmentations of hashtag are scored using a scoring function: where $P_{Unigram}$ are probabilities, computed according to the unigram model based on a large enough corpus or any N-gram service. 
Following the idea of scoring segmentation candidates, BIBREF11 introduces other scoring functions, which include a bigram model (2GM) and a Maximum Unknown Matching (MUM), which is adjustable to unseen words. BIBREF22 attempt to split camel-cased hashtags using rule-based approach and POS-tagging for further semantic classification. WordSegment has been used for sentiment analysis BIBREF23, BIBREF24 and other applications. To our knowledge there has been little work done for word or hashtag segmentation in Russian. <<<Active Learning in NLP>>> Active learning is machine learning technique which allows efficient use of the available training data. It presumes that, first an initial model is trained on a very little amount of data and next tested on large unlabeled set. Next the model is able to choose a few most difficult examples and ask an external knowledge source about the desired labels. Upon receiving these labels, the model is updated and retrained on the new train set. There might be a few rounds of label querying and model updating. To use active learning strategy, we need a definition of what a difficult example is and how to score its difficulty. One of the most common scoring approaches is entropy-based uncertainty sampling, which selects the examples with the lowest prediction probability. Active learning is widely used in NLP applications, when there is little annotated data while the amount of unlabeled data is abundant. Being ultimately used for text classification using traditional machine learning classifiers BIBREF25, BIBREF26, active learning is less known to be used with deep learning sequence classifiers. Recent works report on scoring word embeddings that are likely to be updated with the greatest magnitude BIBREF27 and on using maximum normalized log-probability (MNLP) assigned by the model to the most likely sequence of tags BIBREF9: <<</Active Learning in NLP>>> <<<Training on synthetic data>>> The lack of training data is an issue for many NLP applications. There have been attempts to generate and use synthetic data for training question answering systems BIBREF28 and SQL2text systems BIBREF29. In BIBREF0 synthetic hashtags are generated by removing whitespace characters from frequent n-grams, while in BIBREF30 German compounds are synthesized for further machine translation. <<</Training on synthetic data>>> <<</Related Work>>> <<<Conclusions>>> In this paper we approach the problem of hashtag segmentation by using char-RNNs. We treat the problem of hashtag segmentation as a sequence labeling task, so that each symbol of a given string is labeled with 1 (there should be a whitespace after this symbol) or 0 (otherwise). We use two datasets to test this approach in English and in Russian without any language-specific settings. We compare char-RNN to traditional probabilistic algorithms. To interpret the results we use a few visualization techniques and the strategy of active learning to evaluate the complexity of training data, since we use synthetically generated hashtags for training. The results show that: When approached on character level, hashtag segmentation problem can be solved using relatively small and simple recurrent neural network model without usage of any external corpora and vocabularies. Such char-RNN not only outperforms significantly traditional frequency-based language models, but also can be trained on synthetic data generated according to morpho-syntactic patterns, without any manual annotation and preprocessing. 
In languages with high inflection (such as Russian), the char-RNN achieves higher results than in languages with little inflection (such as English), due to the ability of the char-RNN to capture and memorize word boundary patterns, especially word ending patterns (e.g., adjective endings “ый”, “ая”, “ое” or verbal endings “ать”, “еть” in Russian). The amount of generated synthetic training data can be limited by using active learning techniques, which allow selecting a sufficient training subset without any loss of quality. <<</Conclusions>>> <<</Title>>>
{ "references": [ "Yes" ], "type": "boolean" }
1911.03270
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: Which languages are used in the paper? Context: <<<Title>>> Char-RNN and Active Learning for Hashtag Segmentation <<<Abstract>>> We explore the abilities of character recurrent neural network (char-RNN) for hashtag segmentation. Our approach to the task is the following: we generate synthetic training dataset according to frequent n-grams that satisfy predefined morpho-syntactic patterns to avoid any manual annotation. The active learning strategy limits the training dataset and selects informative training subset. The approach does not require any language-specific settings and is compared for two languages, which differ in inflection degree. <<</Abstract>>> <<<Introduction>>> A hashtag is a form of metadata labeling used in various social networks to help the users to navigate through the content. For example, one of the most popular hashtags on Instagram is "#photooftheday" [photo of the day]. Hashtags are written without any delimiters, although some users use an underscore or camel-casing to separate words. Hashtags themselves may be a great source for features for following opinion mining and social network analysis. Basically hashtags serve as keyphrases for a post in social media. By segmenting the hashtags into separate words we may use regular techniques to process them. The problem of hashtag segmentation resembles of another problem, namely word segmentation. The problem of word segmentation is widely studied in languages like Chinese, since it lacks whitespaces to separate words, or in German to split compound words. In languages like English or Russian, where compounds are not that frequent as in German and where whitespace delimiters are regularly used, the problem of word segmentation arises mainly when working with hashtags. Formally the problem is stated as follows: given a string of $n$ character $s = s_1 \ldots s_n$ we need to define the boundaries of the substrings $s_{i:j}, i < j$, so that each substring is meaningful (i.e. is a regular word, named entity, abbreviation, number, etc). The main challenge of this problem is that the segmentation might be ambiguous. For example, a string “somethingsunclear” might be segmented as “something sun clear” or “somethings unclear”. To deal with the ambiguity more processing is required, such as POS-tagging, estimation of frequencies of all hashtag constituencies or their co-occurence frequency. The frequencies can be estimated on a large corpus, such as BNC , COCA , Wikipedia. However when working with noisy user generated data, such as texts or hashtags from social networks, the problem of unknown words (or out of vocabulary words) arises. In language modeling this problem is solved by using smoothing, such as Laplacian smoothing or Knesser-Ney smoothing. Otherwise additional heuristics can be used to extend the dictionary with word-like sequences of characters. Unlike language modelling, in hashtag segmentation frequency estimation is not only source for defining word boundaries. Otherwise candidate substrings can be evaluated according to length BIBREF0. Several research groups have shown that introducing character level into models help to deal with unknown words in various NLP tasks, such as text classification BIBREF1, named entity recognition BIBREF2, POS-tagging BIBREF3, dependency parsing BIBREF4, word tokenization and sentence segmentation BIBREF5 or machine translation BIBREF6, BIBREF7. 
The character level model is a model which either treats the text as a sequence of characters without any tokenization or incorporates character level information into word level information. Character level models are able to capture morphological patterns, such as prefixes and suffixes, so that the model is able to define the POS tag or NE class of an unknown word. Following this intuition, we use a character level model for hashtag segmentation. Our main motivation is the following: if the character level model is able to capture word ending patterns, it should also be able to capture the word boundary patterns. We apply a character level model, specifically, a recurrent neural network, referred further as char-RNN, to the task of hashtag segmentation. The char-RNN is trained and tested on the synthetic data, which was generated from texts, collected from social networks in English and Russian, independently. We generate synthetic data for training by extracting frequent $N$-grams and removing whitespaces. The test data is annotated manually . Since the problem statement is very basic, we use additional techniques, such as active learning, character embeddings and RNN hidden state visualization, to interpret the weights, learned by char-RNN. We address the following research questions and claim our respective contributions: We show that our char-RNN model outperforms the traditional unigram or bigram language models with extensive use of external sources BIBREF8, BIBREF0. What is the impact of high inflection in languages such as Russian on the performance of character-level modelling as opposed to languages with little inflection such as English? We claim that character-level models offer benefits for processing highly inflected languages by capturing the rich variety of word boundary patterns. As getting sufficient amount of annotated training collection is labor-intensive and error-prone, a natural question would be: can we avoid annotating real-world data altogether and still obtain high quality hashtag segmentations? We approach this problem by using morpho-syntactic patterns to generate synthetic hashtags. A potentially unlimited volume of our synthetic training dataset raises yet another question of whether an informative training subset could be selected. To this extent, we apply an active learning-based strategy to subset selection and identify a small portion of the original synthetic training dataset, necessary to obtain a high performance. <<</Introduction>>> <<<Neural Model for Hashtag Segmentation>>> <<<Sequence Labeling Approach>>> We treat hashtag segmentation as a sequence labeling task. Each character is labeled with one of the labels $\mathcal {L} = \lbrace 0, 1\rbrace $, (1) for the end of a word, and (0) otherwise (Table TABREF9 and TABREF9). Given a string $s = {s_1, \ldots , s_n}$ of characters, the task is to find the labels $Y^* = {y_1^*. \ldots , y_n^*}$, such that $ Y^* = \arg \max _{Y \in \mathcal {L} ^n} p(Y | s).$ The neural model for hashtag segmentation consists of three layers. The embedding layer is used to compute the distributed representation of input characters. Each character $c_i$ is represented with an embedding vector $e_i \in \mathbb {R}^{d_e}$, where $d_e$ is the size of the character embedding. $E$ is the look up table of size $|V| \times d_e$, where $V$ is the vocabulary, i.e. the number of unique characters. The feature layer is used to process the input. 
We use a bi-directional recurrent layer with LSTM units to process the input in forward and backward directions. The LSTM units we use are default keras LSTM units as introduced by Hochreiter. The inference layer is used to predict the labels of each character. We use a single dense layer as f or inference and $softmax$ to predict the probabilities of the labels $\mathcal {L} = \lbrace 0, 1\rbrace $. Each character is assigned with the most probable label. The parameters of the char-RNN are the following: Embedding layer = 50 input dimensions; Feature layer = 64 bidirectional LSTM units; Inference layer = 2 output neurons with softmax activation function mapped to each of 64 outputs. <<</Sequence Labeling Approach>>> <<</Neural Model for Hashtag Segmentation>>> <<<Dataset>>> In this section we describe the datasets we used for hashtag segmentation. We experimented with Russian and English datasets to compare the performance of the char-RNN. <<<Russian dataset>>> To our knowledge there is no available dataset for hashtag segmentation in Russian, so we faced the need to create our own dataset. Our approach to the dataset creation was twofold: the training data was created from social network texts by selecting frequent $n$-grams and generating hashtags following some hashtag patterns. The test dataset consists of real hashtags collected from vk.com (a Russian social network) and were segmented manually. We followed the same strategy to create an English language dataset. <<<Training Dataset Generation>>> We scraped texts from several pages about civil services from vk.com. Next we extracted frequent $n$-grams that do not contain stopwords and consist of words and digits in various combinations (such as word + 4 digits + word or word + word + 8 digits). We used several rules to merge these $n$-grams so that they resemble real hashtags, for example: remove all whitespace: wordwordworddigits Examples: ЁлкаВЗазеркалье, нескольколетназад replace all whitespace with an underscore: word_word_digits Examples: увд_юга_столицы remove some whitespace and replace other spaces with an underscore: word_worddigits. Examples: ищусвоегогероя_уфпс A word here might be a word in lower case, upper case or capitalized or an abbreviation. There might be up to four digits. In general, we introduced 11 types of hashtags, which contain simply constructed hashtags as well as the complex ones. Here are a couple of examples: The hashtag consists of two parts: the word/abbreviation in the first part and the number or word in the second. The underscore is a delimiter. Examples: word_2017, NASA_2017, word_word Two or three words, which are separated by an underscore. Examples: Word_Word, word_word_word <<</Training Dataset Generation>>> <<<Test Dataset Annotation>>> We segmented manually 2K the most frequent hashtags, extracted from the same collection of the scraped texts. The resulting size of the Russian dataset is 15k hashtags for training and 2k hashtags for testing. <<</Test Dataset Annotation>>> <<</Russian dataset>>> <<<English dataset>>> We used the dataset, released by BIBREF0. This dataset consists of: a collection of tweets, which we used to generate the synthetic training hashtags according to the same rules as for Russian; a collection of annotated and separated hashtags, which we used as a testing set. From this test set we excluded ambiguous hashtags, annotated with several possible segmentations. The resulting size of the English dataset is 15k hashtags for training and 1k hashtags for testing. 
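For illustration, a minimal tf.keras sketch of the char-RNN tagger described in the Sequence Labeling Approach section is given below (Keras is mentioned in the text as the source of the LSTM units). The vocabulary size, the fixed padded length, the masking, and the choice of optimizer are assumptions added here, not details reported by the authors.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 128   # assumption: number of distinct characters after indexing
MAX_LEN = 40       # assumption: hashtags are padded/truncated to a fixed length

def make_example(words):
    """Turn a list of words into (characters, labels): label 1 marks the last
    character of each word, 0 otherwise, as in the sequence labeling scheme."""
    chars, labels = [], []
    for w in words:
        chars.extend(w)
        labels.extend([0] * (len(w) - 1) + [1])
    return chars, labels

model = tf.keras.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 50, mask_zero=True),               # embedding layer, d_e = 50
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),   # feature layer, 64 BiLSTM units
    layers.TimeDistributed(layers.Dense(2, activation="softmax")),  # inference layer over labels {0, 1}
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(X_train, y_train, epochs=5)   # 5 epochs as in the Experiments section
```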
<<</English dataset>>> <<</Dataset>>> <<<Active Learning>>> We followed the strategy for active learning, as in BIBREF9. The training procedure consists of multiple rounds of training and testing of the model. We start by training the model on 1k hashtags, which were randomly selected from the training dataset. Next we test the model on the reminder of the training dataset and select 1k hashtags according to the current model’s uncertainty in its prediction of the segmentation. These hashtags are not manually relabelled, since a) they belong to the synthetically generated training dataset and b) the correct labeling for these hashtag is already known. In BIBREF9 three uncertainty measure are presented, from which we selected the maximum normalized log-probability (MNLP) assigned by the model to the most likely sequence of tags. The model is then retrained on the hashtags it is uncertain about. Note, that here we do not check if the predictions of the model are correct. We are more interested in training the model on hard examples than in evaluating the quality of intermediate results. We refer the reader to BIBREF9 for more technical details. <<</Active Learning>>> <<<Experiments>>> <<<Baseline>>> As for baseline algorithm, we consider the BIBREF0 system architecture as a state-of-the-art algorithm. Unfortunately, their approach is not straightforwardly applicable to our synthetic Russian dataset, because it requires twofold input: a hashtag and a corresponding tweet or a text from any other social media, which is absent in our task setting due to synthetic nature of the training dataset. For this reason as a baseline algorithm for English dataset we refer to results from BIBREF0, and as for Russian dataset, we used the probabilistic language model, described in BIBREF8. The probability of a sequence of words is the product of the probabilities of each word, given the word’s context: the preceding word. As in the following equation: where In case there is no such a pair of words $(w_{i-1}, w_i)$ in the set of bigrams, the probability of word $w_i$ is obtained as if it was only an unigram model: where $V$ – vocabulary, $f(w_{i})$ – frequency of word $w_{i}$, and $\alpha $ = 1. In Table TABREF30 we present three baseline results: LM BIBREF8 for Russian and English datasets; context-based LM BIBREF0 for English dataset only. We treat a segmentation as correct if prediction and target sequences are the same. <<</Baseline>>> <<<Neural Model>>> In our experiments we used 5 epochs to train the char-RNN with LSTM units. For each language we observed three datasets with different number of hashtags. In case of Russian language, the more data we use while training, the higher the accuracy. As for English, the highest accuracy score was achieved on a set of 10k hashtags (Table TABREF32). Due to it's lower morphological diversity and complexity the model starts to overfit on training sets with large sizes. The training showed that mostly the model makes wrong predictions of segmentation on hashtags of complex types, such as “wordword_worddigits”. Our results outperform all choosen baseline both for Russian and English datasets. Note, that we have two baselines for the English dataset: one is purely frequency-based, another is cited from BIBREF0, where external resources are heavily used. We show that using significantly less amount of training data, we achieve a boost in quality by switching from statistical word language models to char-RNN. 
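The MNLP selection criterion used in the Active Learning section can be sketched as follows. The code assumes per-character softmax outputs from the char-RNN and a pool of synthetic hashtags; it illustrates the selection round only and is not the authors' implementation.

```python
import numpy as np

def mnlp(prob_seq):
    """Normalized log-probability of the most likely tag sequence.

    prob_seq has shape (seq_len, 2) with per-character softmax outputs; since the
    tagger scores each position independently, the most likely sequence is the
    per-position argmax and its log-probability is the sum of per-position maxima,
    normalized by the sequence length."""
    return np.log(prob_seq.max(axis=1)).sum() / len(prob_seq)

def select_uncertain(pool_probs, k=1000):
    """Return indices of the k pool items with the lowest MNLP (most uncertain)."""
    scores = np.array([mnlp(p) for p in pool_probs])
    return np.argsort(scores)[:k]

# One round (sketch): score the remainder of the synthetic pool, move the k hardest
# hashtags into the training set (their labels are already known), and retrain.
pool_probs = [np.random.default_rng(i).dirichlet([1, 1], size=20) for i in range(5000)]
hard_ids = select_uncertain(pool_probs, k=1000)
```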
As expected, the results on Russian dataset are higher than for the English dataset due to higher inflection degree in Russian as opposed to English. <<</Neural Model>>> <<<Visualization>>> In order to see if embeddings of similar characters, in terms of string segmentation, appear near each-other in their resulting 50-dimensional embedding space, we have applied one technique for dimensionality reduction: SVD to character embeddings to plot them on 2D space. For both languages meaningful and interpretable clusters can be extracted: capital letters, letters in lower case, digits and underscore, as shown below. <<</Visualization>>> <<</Experiments>>> <<<Related Work>>> The problem of word segmentation has received much attention in Chinese and German NLP for word segmentation and compound splitting BIBREF10, respectively. The major techniques for word segmentation exploit string matching algorithms BIBREF11, language models BIBREF12, BIBREF0 and sequence labeling methods BIBREF10. Recent trend of deep learning as a major approach for any NLP task in general and sequence labeling in particular resulted in using various RNN-based models and CNN-based model for Chinese word segmentation BIBREF10, BIBREF13, BIBREF14. Since BIBREF10 Chinese word segmentation is addressed as a character labeling task: each character of the input sequence is labeled with one of the four labels $\mathcal {L} = \lbrace B, M, E, S\rbrace $, which stand for character in Begin, Middle or End of the word or Single character word. BIBREF10 uses a maximum entropy tagger to tag each character independently. This approach was extended in BIBREF15 to the sequence modeling task, and linear conditional random fields were used to attempt it and receive state of the art results. A neural approach to Chinese segmentation mainly uses various architectures of character level recurrent neural networks BIBREF16, BIBREF17, BIBREF18 and very deep constitutional networks BIBREF19. Same architectures are used for dialectal Arabic segmentation BIBREF20. The evolution of German compound splitters is more or less similar to Chinese word segmentation systems. The studies of German compound splitting started with corpus- and frequency-based approaches BIBREF13, BIBREF14 and are now dominated with neural-based distributional semantic models. However, German compound splitting is rarely seen as sequence modeling task. The problem of hashtag segmentation, analysis and usage in English has been approached by several research groups. As it was shown by BIBREF12 hashtag segmentation for TREC microblog track 2011 BIBREF21 improves the quality of information retrieval, while BIBREF0 shows that hashtag segmentation improves linking of entities extracted from tweets to a knowledge base. Both BIBREF12, BIBREF0 use Viterbi-like algorithm for hashtag segmentation: all possible segmentations of hashtag are scored using a scoring function: where $P_{Unigram}$ are probabilities, computed according to the unigram model based on a large enough corpus or any N-gram service. Following the idea of scoring segmentation candidates, BIBREF11 introduces other scoring functions, which include a bigram model (2GM) and a Maximum Unknown Matching (MUM), which is adjustable to unseen words. BIBREF22 attempt to split camel-cased hashtags using rule-based approach and POS-tagging for further semantic classification. WordSegment has been used for sentiment analysis BIBREF23, BIBREF24 and other applications. 
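The Viterbi-style scoring of candidate segmentations discussed above can be sketched with a memoized search over split points. The unigram counts, the assumed corpus size, and the add-one smoothing are illustrative stand-ins for the large corpora and N-gram services mentioned in the text.

```python
import math
from functools import lru_cache

# Hypothetical unigram counts and corpus size standing in for a large background corpus.
unigram_counts = {"photo": 2_000_000, "of": 30_000_000, "the": 60_000_000, "day": 5_000_000}
CORPUS_SIZE = 10**9

def log_p(word):
    """Unigram log-probability with add-one smoothing for unseen words (illustrative)."""
    return math.log((unigram_counts.get(word, 0) + 1.0) / CORPUS_SIZE)

def best_segmentation(s, max_word_len=20):
    @lru_cache(maxsize=None)
    def best(i):
        """Best (score, words) for the suffix s[i:]."""
        if i == len(s):
            return 0.0, ()
        options = []
        for j in range(i + 1, min(len(s), i + max_word_len) + 1):
            tail_score, tail_words = best(j)
            options.append((log_p(s[i:j]) + tail_score, (s[i:j],) + tail_words))
        return max(options)
    return list(best(0)[1])

print(best_segmentation("photooftheday"))   # ['photo', 'of', 'the', 'day']
```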
To our knowledge there has been little work done for word or hashtag segmentation in Russian. <<<Active Learning in NLP>>> Active learning is machine learning technique which allows efficient use of the available training data. It presumes that, first an initial model is trained on a very little amount of data and next tested on large unlabeled set. Next the model is able to choose a few most difficult examples and ask an external knowledge source about the desired labels. Upon receiving these labels, the model is updated and retrained on the new train set. There might be a few rounds of label querying and model updating. To use active learning strategy, we need a definition of what a difficult example is and how to score its difficulty. One of the most common scoring approaches is entropy-based uncertainty sampling, which selects the examples with the lowest prediction probability. Active learning is widely used in NLP applications, when there is little annotated data while the amount of unlabeled data is abundant. Being ultimately used for text classification using traditional machine learning classifiers BIBREF25, BIBREF26, active learning is less known to be used with deep learning sequence classifiers. Recent works report on scoring word embeddings that are likely to be updated with the greatest magnitude BIBREF27 and on using maximum normalized log-probability (MNLP) assigned by the model to the most likely sequence of tags BIBREF9: <<</Active Learning in NLP>>> <<<Training on synthetic data>>> The lack of training data is an issue for many NLP applications. There have been attempts to generate and use synthetic data for training question answering systems BIBREF28 and SQL2text systems BIBREF29. In BIBREF0 synthetic hashtags are generated by removing whitespace characters from frequent n-grams, while in BIBREF30 German compounds are synthesized for further machine translation. <<</Training on synthetic data>>> <<</Related Work>>> <<<Conclusions>>> In this paper we approach the problem of hashtag segmentation by using char-RNNs. We treat the problem of hashtag segmentation as a sequence labeling task, so that each symbol of a given string is labeled with 1 (there should be a whitespace after this symbol) or 0 (otherwise). We use two datasets to test this approach in English and in Russian without any language-specific settings. We compare char-RNN to traditional probabilistic algorithms. To interpret the results we use a few visualization techniques and the strategy of active learning to evaluate the complexity of training data, since we use synthetically generated hashtags for training. The results show that: When approached on character level, hashtag segmentation problem can be solved using relatively small and simple recurrent neural network model without usage of any external corpora and vocabularies. Such char-RNN not only outperforms significantly traditional frequency-based language models, but also can be trained on synthetic data generated according to morpho-syntactic patterns, without any manual annotation and preprocessing. In languages with high inflection (such as Russian) the char-RNN achieves higher results than in languages with little inflections (such as English) due to the ability of the char-RNN to capture and memorize word boundary patterns, especially word ending patterns (i.e. adjective endings “ый”,“ая”,“ое” or verbal endings “ать”,“еть” in Russian). 
The amount of generated synthetic training data can be limited by using active learning techniques, which allow selecting a sufficient training subset without any loss of quality. <<</Conclusions>>> <<</Title>>>
{ "references": [ "English,Russian" ], "type": "extractive" }
2004.03762
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What metrics are used for evaluation? Context: <<<Title>>> Generating Narrative Text in a Switching Dynamical System <<<Abstract>>> Early work on narrative modeling used explicit plans and goals to generate stories, but the language generation itself was restricted and inflexible. Modern methods use language models for more robust generation, but often lack an explicit representation of the scaffolding and dynamics that guide a coherent narrative. This paper introduces a new model that integrates explicit narrative structure with neural language models, formalizing narrative modeling as a Switching Linear Dynamical System (SLDS). A SLDS is a dynamical system in which the latent dynamics of the system (i.e. how the state vector transforms over time) is controlled by top-level discrete switching variables. The switching variables represent narrative structure (e.g., sentiment or discourse states), while the latent state vector encodes information on the current state of the narrative. This probabilistic formulation allows us to control generation, and can be learned in a semi-supervised fashion using both labeled and unlabeled data. Additionally, we derive a Gibbs sampler for our model that can fill in arbitrary parts of the narrative, guided by the switching variables. Our filled-in (English language) narratives outperform several baselines on both automatic and human evaluations. <<</Abstract>>> <<<A Switching Dynamical System for Narrative Generation>>> In this section, we give a brief overview of Switching Dynamical systems and how they can be used to capture both a scaffold of the narrative as well as the narrative dynamics. We then describe in detail the components of our model and its relation to existing models. <<<Narrative Dynamics in a Dynamical System>>> The specifics of the narrative (characters, setting, etc.), will differ between stories, but as BIBREF0 notes, the way they transition to the next point in the narrative (what we refer to as “narrative dynamics") is often shared. Let's say that, as done often, we represent the `narrative specifics' at time step $i$ with a latent vector $Z_i$. A natural way to explicitly model how this state evolves over time that fits with the above observation is as a Linear Dynamical System: Where $A$ is a matrix, shared across all narratives, and $\Sigma $ is a noise term that takes into consideration idiosyncrasies different narratives will have. The fact that the shared transition matrix $A$ is linear means that narratives will have linearly analogous trajectories through time, despite having different details (comparable to stories with different settings but matching structures such as Ran/King Lear, Ulysses/Odyssey, etc). Of course, the fatal flaw of the model is that it assumes there exists only one transition matrix, and thus only one possible way to transition through a narrative! <<</Narrative Dynamics in a Dynamical System>>> <<<Narrative Scaffolds as Switching Variables>>> A more fitting model would thus be a Switching Linear Dynamical System BIBREF1, BIBREF2, BIBREF3. In an SLDS, we assume there exists a set of $K$ different sets of dynamics, $\lbrace (A_1, \Sigma _1),...(A_K,\Sigma _K)\rbrace $. At time step $i+1$, one of these sets of dynamics is used. 
The one used depends on the value of a discrete variable at time step $i+1$ called the switching variable, $S_{i+1} \in \lbrace 1,...K\rbrace $: There is a switching variable $S_i$ associated with each time step. The switching variable value itself evolves over time by a prior Markov process, $P(S_{i+1} | S_{i})$. This top level chain of switching variables thus forms our narrative scaffold, indicating what transitions we must go through in the narrative, with the dynamics matrices indicating how they transition. <<</Narrative Scaffolds as Switching Variables>>> <<<Narrative Scaffold - Emotional Trajectory>>> What the switching variables actually represent can be chosen by the user. Straightforward narrative scaffolds include event sequences BIBREF6, keywords BIBREF7, or latent template ids BIBREF8. More complex but potentially more informative scaffolds may be created using concepts such as story grammar non-terminals BIBREF9, BIBREF10, or character action taken throughout a story BIBREF11. In our work, we use the sentiment trajectory of the narrative as the scaffold. That is, each $S_i$ for a sentence indicates the overall coarse sentiment of the sentence (Positive, Negative, or Neutral). Though simple, the overall sentiment trajectory of a narrative is important in defining the high level `shape' of a narrative often shared among different narratives BIBREF12, BIBREF13. Furthermore, sentiment trajectory has been shown to be fairly useful in story understanding tasks BIBREF14, BIBREF15. We discuss in the conclusion future directions for using different types of scaffolds. <<</Narrative Scaffold - Emotional Trajectory>>> <<<The Full Model>>> The final component of the model is a conditional language model that generates sentence $i$ conditioned on the current $Z_i$, and all previous sentences, $X_{:i}$. Generation continues until an <eos> is reached. This conditional language model may be parameterized as desired, but in this work, we parameterize it as an RNN neural network language model. The graphical model for our SLDS is pictured in Figure FIGREF8. The model consists of three sets of variables: (1) Switching variables $S_1,...,S_N$, (2) Latent state variables $Z_1,...,Z_N$ capturing the details of the narrative at sentence $i$, (3) The sentences themselves $X_1,...X_N$, where each sentence $X_i$ has $n_i$ words, $x^i_1,...x^i_{n_i}$. The joint over all variables factorizes as below into the following components ($X_{:i}$ stands for all sentence before $X_i$): ❶ Narrative Scaffold Planner: The factor $P(S_i | S_{i-1})$ is a transition matrix, which we calculate via count based statistics from training. It is fed in as prior knowledge and fixed. ❷ Narrative Dynamics Network: The factor $P(Z_i | Z_{i-1}, S_i)$ is determined like a switching linear dynamical system: which is equivalent to drawing $Z_i$ from a Normal distribution with mean $A_{S_i}Z_{i-1}$ and variance $B_{S_i}B_{S_i}^T$. ❸ Conditional Language model: The factor $P(X_i | Z_i, X_{:i})$ is parameterized by an RNN language model conditioned on the latent $Z_i$. <<</The Full Model>>> <<</A Switching Dynamical System for Narrative Generation>>> <<<Learning and Posterior Inference>>> Due to the conditionals parameterized by neural networks we use amortized variational inference in a manner similar to Variational AutoEncoders BIBREF16, both to learn an approximate posterior $q(S, Z | X)$ and to learn the generative model parameters by maximizing a lower bound on the data likelihood (ELBO). 
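Before turning to how the posterior is factorized, the generative story just described (scaffold → dynamics → sentence) can be summarized with a short ancestral-sampling sketch. The latent dimensionality, the transition table values, the initial state, and the `conditional_lm` stub are assumptions; the paper parameterizes that last factor as an RNN language model.

```python
import numpy as np

K, D = 3, 16                          # 3 sentiment switches; latent size is an assumption
rng = np.random.default_rng(0)

P_switch = np.array([[0.6, 0.2, 0.2],     # P(S_i | S_{i-1}); count-based in the paper,
                     [0.3, 0.5, 0.2],     # the values here are made up
                     [0.3, 0.2, 0.5]])
A = [np.eye(D) + rng.normal(scale=0.1, size=(D, D)) for _ in range(K)]   # dynamics A_k
B = [rng.normal(scale=0.1, size=(D, D)) for _ in range(K)]               # noise factors B_k

def conditional_lm(z, history):
    """Stub for P(X_i | Z_i, X_{:i}); the paper uses an RNN LM conditioned on z."""
    return f"<sentence decoded from a latent with norm {np.linalg.norm(z):.2f}>"

def generate(num_sentences=5):
    s, z, story = rng.integers(K), rng.normal(size=D), []
    for _ in range(num_sentences):
        s = rng.choice(K, p=P_switch[s])                       # S_i ~ P(S_i | S_{i-1})
        z = rng.multivariate_normal(A[s] @ z, B[s] @ B[s].T)   # Z_i ~ N(A_s Z_{i-1}, B_s B_s^T)
        story.append(conditional_lm(z, story))                 # X_i ~ P(X_i | Z_i, X_{:i})
    return story

print("\n".join(generate()))
```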
We assume that the approximate posterior factorizes as follows: Like in VAEs, computing these individual factors is done through a parameterized function called the inference or recognition network whose parameters are trained jointly with the generative model. In our case there are two forms for the factors in our posterior: (1) The first form, $q(S_i | \textbf {X}) = q_{S_i}$ is parameterized by a classifier that takes in the set of sentences $\mathbf {X}$ and outputs a categorical distribution over the switching variables. (2) The second form, $q(Z_i| Z_{i-1}, S_i, X_{:i}, X_{i}) = q_{Z_i}$ is realized by functions $f_{\mu }(Z_{i-1}, S_i, X_{:i}, X_{i})$ and $f_\sigma (Z_{i-1}, S_i, X_{:i}, X_{i})$ that output the mean and variance, respectively, of a Gaussian over $Z_i$. Borrowing terminology from VAEs, the approximate posterior (the factors given above) act as an `encoder', while the generative model from the previous section can be seen as the `decoder'. This type of training has been previously used in BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. <<<Lower bound formula & exact training algorithm>>> As mentioned previously, we optimize all parameters (including the variational factor functions) by optimizing a lower bound on the data likelihood. The model may be trained either with supervision labels for the switching states (in our case, sentiment labels) or without supervised labels. If one is training without the sentiment labels, then the lower bound on the marginal likelihood (and thus our optimization objective) may be written as follows: The derivation for this objective is identical to that found in BIBREF18, BIBREF19, and simply relies on using properties of iterated expectations. All expectations are estimated with Monte Carlo samples. If training with the sentiment labels $S_1,...,S_N$, then the objective is similar (but without the sampling of the switching states), and is augmented with an additional supervision objective as done in BIBREF22: Final training procedure for a single narrative is: For each sentence (starting from the first), sample the switching state $S_i$ from $q(S_i | \textbf {X})$. For each sentence (starting from the first), sample the latent $Z_i$ from $q(Z_i | S_i, Z_{i-1}, X)$. Evaluate the data likelihood and KL term(s) with these samples. Take the gradients of the objective function w.r.t. all parameters, using the reparameterization trick for $q_{Z_i}$ BIBREF16 or the Gumbel-Softmax trick for $q_{S_i}$ BIBREF23, and optimize. <<</Lower bound formula & exact training algorithm>>> <<</Learning and Posterior Inference>>> <<<Interpolations via Gibbs Sampling>>> One of the benefits of probabilistic formulation is the possibility (if an inference procedure can be found) of generating narratives with specific constraints, where the constraints may be specified as clamped variables in the model. In this section, we show how narratives may be generated conditioned on arbitrary bits and pieces of the narrative already filled in, using approximate Gibbs sampling. This allows one to, for example, interpolate a narrative given the first and the last sentence (similar to how earlier story generation systems were able to generate with a given end goal in mind). Some examples of these interpolations generated by our system can be found in Table TABREF37. We give the equations and summarize the algorithm in the next sections. 
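Before moving on to the Gibbs conditionals, the two sampling tricks in the training procedure above can be made concrete with a small runnable PyTorch fragment. The tensor shapes, the random stand-ins for the encoder outputs, and the unit prior scale are assumptions, and the reconstruction term from the conditional language model is omitted.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence

torch.manual_seed(0)
K, D = 3, 16                                   # 3 sentiment switches; latent size is an assumption

# Step 1: sample S_i from q(S_i | X) with the Gumbel-Softmax trick.
s_logits = torch.randn(1, K)                   # stand-in for the q(S_i | X) classifier output
s_relaxed = F.gumbel_softmax(s_logits, tau=1.0, hard=False)    # soft, differentiable one-hot

# Step 2: sample Z_i from q(Z_i | Z_{i-1}, S_i, X) with the reparameterization trick.
mu = torch.zeros(1, D, requires_grad=True)     # stand-ins for f_mu(...) and f_sigma(...)
log_sigma = torch.zeros(1, D, requires_grad=True)
q_z = Normal(mu, log_sigma.exp())
z_i = q_z.rsample()                            # mu + sigma * eps keeps gradients flowing

# Step 3: KL between the posterior and the switching prior N(A_{S_i} Z_{i-1}, B B^T).
z_prev = torch.zeros(1, D)
A = torch.eye(D) + 0.1 * torch.randn(K, D, D)  # per-switch dynamics matrices (illustrative)
prior_mean = torch.einsum("bk,kij,bj->bi", s_relaxed, A, z_prev)   # soft mixture under the relaxation
p_z = Normal(prior_mean, torch.ones(1, D))     # unit prior scale is an assumption
loss = kl_divergence(q_z, p_z).sum()           # the full ELBO adds -log P(X_i | Z_i, X_{:i})
loss.backward()
```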
<<<Conditionals for Gibbs Sampling>>> For our Gibbs sampling algorithm we give the narrative scaffold (switching variables), $S_1,...,S_T \in \mathbf {S}$ and a set of observed sentences, $\mathbf {X^+}$. This may be any set of sentences (the first and last, just the second sentence, etc) as inputs to the system. We wish to find values for the unobserved sentences in set $\mathbf {X^-}$ by sampling from the distribution $P(\mathbf {X^-}, Z_1,...,Z_T | \mathbf {S},\mathbf {X^+})$. We perform this sampling via Gibbs sampling. Two different forms of conditionals need to be derived to do Gibbs sampling. One over some $Z_i$ conditioned on everything else, and one over some $X_i$ conditioned on everything else. By using the d-separation properties of the graph, and substituting the true posterior over $Z_{i}$ with our approximate posterior $q$, we can show the first distribution is approximately proportional to The last line is the product between a Gaussian density over $Z_{i+1}$ and $Z_{i}$, respectively. With some algebraic manipulations, one can show the last line is proportional to a single Gaussian PDF over $Z_i$: To find the second conditional, one can use the d-separation properties of the graph to find that it is proportional to: These two distributions are simply factors of our conditional language model, and both terms can thus be evaluated easily. In theory, one could use this fact to sample the original conditional via Metropolis-Hastings . Unfortunately, we found this approach to be much too slow for practical purposes. We observed that the simple heuristic of deterministically assigning $X_i$ to be the greedy decoded output of the conditional language model $P(X_{i} | X_{:i}, Z_{i})$ works well, as evidenced by the empirical results. We leave it for future work to research different conditional language model parameterizations that allow easy sampling from this conditional <<</Conditionals for Gibbs Sampling>>> <<<Gibbs Sampling Interpolation Overview>>> The variables in the Gibbs sampler are first initialized using some heuristics (see Supplemental Materials for details). After initialization, performing the interpolations with Gibbs sampling follows the below two step process: For each $Z_i$, sample a value $Z^\prime $ from equation $(1)$ and set $Z_i$ to $Z^\prime $. For each $X_i$ in $\mathbf {X}^-$, find a new value for $X_i$ by running greedy decoding using the conditional language model. <<</Gibbs Sampling Interpolation Overview>>> <<</Interpolations via Gibbs Sampling>>> <<<Training Details>>> <<<Dataset and Preprocessing>>> We use the ROCStories corpora introduced in BIBREF27. It contains 98,159 short commonsense stories in English as training, and 1,570 stories for validation and test each. Each story in the dataset has five-sentences and captures causal and temporal commonsense relations. We limit our vocabulary size to 16,983 based on a per-word frequency cutoff set to 5. For sentiment tags, we automatically tag the entirety of the corpus with the rule based sentiment tagger, Vader BIBREF28, and bucket the polarity scores of Vader into three tags: neutral, negative, and positive. These tags form the label set of the $S$ variables in our SLDS model. We tokenize the stories with Spacy tokenizer. Each sentences in the input narrative has an <eos> tag except for the S2S model discussed below. <<</Dataset and Preprocessing>>> <<<Switching Linear Dynamical System (SLDS)>>> SLDS has RNN encoder and decoder networks with single layer GRU cells of hidden size 1024. 
Model uses an embedding size of 300. We train the model using Adam optimizer with the defaults used by PyTorch. We stop training the models when the validation loss does not decrease for 3 consecutive epochs. Training details remain same as above unless otherwise mentioned. <<</Switching Linear Dynamical System (SLDS)>>> <<<Baselines>>> Language Model (LM): We train a two layer recurrent neural language model with GRU cells of hidden size 512. Sequence-to-Sequence Attention Model (S2S): We train a two layer neural sequence to sequence model equipped with bi-linear attention function with GRU cells of hidden size 512. Sentiments tags for a narrative (1 for each sentence) are given as input to the model and the corresponding sentences are concatenated together as the output with only one <eos> tag at the end. This model is trained with a 0.1 dropout. This model is comparable to the static model of BIBREF7, and other recent works employing a notion of scaffolding into neural generation (albeit adapted for our setting). Linear Dynamical System (LDS): We also train a linear dynamical system as discussed in Section SECREF1 as one of our baselines for fair comparisons. Apart from having just a single transition matrix this model has the same architectural details as SLDS. Semi-Supervised SLDS (SLDS-X%): To gauge the usability of semi-supervision, we also train semi-supervised SLDS models with varying amount of labelled sentiment tags unlike the original model which uses 100% tagged data. We refer to these as SLDS-X%, where X is the % labelled data used for training: 1%, 10%, 25%, and 50%. <<</Baselines>>> <<</Training Details>>> <<<Evaluations>>> As described above, our model is able to perform narrative interpolations via an approximate Gibbs sampling procedure. At the core of our evaluations is thus a fill-in-the-sentences task. We provide 1 or 2 sentences, and require the model to generate the rest of the narrative . We evaluate this via automatic evaluations as well as with crowd-sourced human evaluations. We also report perplexity to evaluate the models' ability to fit the data. Lastly, we look at whether the transitions learned by the SLDS models capture what they are intended to capture: does using the transition matrix associated with a sentiment tag (positive/negative/neutral) lead to a generated sentence with that sentiment? <<<Generating the Interpolations>>> For the SLDS models, the interpolations are generated via the Gibbs sampling algorithm described earlier. In all experiments for the SLDS models we draw 50 samples (including burn in samples) and output the interpolation that maximizes the probability of the given sentence(s). Since the baselines do not have the means for doing interpolations, we simulate `interpolations' for the baselines; we draw 1000 samples using top k (with k=15) truncated sampling (conditioned on the given initial sentences, if available). We then output the sample that maximizes the probability of the clamped sentences around which we are interpolating the others. We allow the S2S access to the gold sentiment tags. To give a lower bound on the performance of the SLDS model, we do not provide it with gold tags. We instead provide the SLDS model with the semi-noisy tags that are output from $q(S_i | X)$. 
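A structural sketch of the interpolation procedure just described (50 Gibbs sweeps, keeping the sample that best explains the clamped sentences) is given below. The three model calls are schematic stand-ins labelled as such; only the control flow follows the text.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 16, 5                                   # latent size (assumption) and story length

# --- schematic stand-ins for the model components (assumptions, not the authors' code) ---
def resample_z(i, z, x, scaffold):
    """Sample Z_i from its approximately Gaussian full conditional."""
    return rng.normal(loc=0.8 * z[i - 1] if i > 0 else np.zeros(D), scale=0.5)

def greedy_decode(i, z, x):
    """Greedy output of the conditional LM P(X_i | X_{:i}, Z_i)."""
    return f"<sentence {i} decoded from z_{i}>"

def log_prob_clamped(x, observed):
    """log-probability of the clamped sentences under the model (used to rank samples)."""
    return float(rng.normal())

def interpolate(observed, scaffold, num_samples=50):
    """observed: {position: sentence} of clamped sentences; scaffold: sentiment tags S_1..S_N."""
    z = [rng.normal(size=D) for _ in range(N)]                 # heuristic initialization
    x = [observed.get(i, f"<init {i}>") for i in range(N)]
    best, best_score = list(x), -np.inf
    for _ in range(num_samples):
        for i in range(N):                                     # step 1: resample every latent Z_i
            z[i] = resample_z(i, z, x, scaffold)
        for i in range(N):                                     # step 2: re-decode only missing X_i
            if i not in observed:
                x[i] = greedy_decode(i, z, x)
        score = log_prob_clamped(x, observed)
        if score > best_score:                                 # keep the sample that best explains
            best, best_score = list(x), score                  # the given sentences
    return best

print(interpolate({0: "First sentence.", 4: "Last sentence."}, scaffold=["neutral"] * N))
```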
<<</Generating the Interpolations>>> <<<Automatic Evaluation of Interpolations>>> We automatically evaluate on four different types of interpolations (where different combinations of sentences are removed and the model is forced to regenerate them), We evaluate the generations with the ROUGE BIBREF29 and METEOR BIBREF30 metrics using the true sentences as targets. Table TABREF33 shows the automatic evaluation results from interpolations using our proposed models and baselines. The #Sent(s) column indicates which sentence(s) were removed, and then regenerated by the model. We gave the baselines a slight edge over SLDS because they pick the best out of 1000 samples while SLDS is only out of 50. The SLDS models see their largest gain over the baseline models when at least the first sentence is given as an input. The baseline models do better when the first and second sentence need to be imputed. This is likely due to the fact that having access to the earlier sentences allows a better initialization for the Gibbs sampler. Surprisingly, the semi-supervised variants of the SLDS models achieve higher scores. The reasons for this is discussed below in the Perplexity section. <<</Automatic Evaluation of Interpolations>>> <<<Human Evaluation of Interpolations>>> <<<Annotation Scheme>>> As automatic evaluation metrics are not sufficient to assess the quality of any creative task such as narrative generation, we measure the quality of the generations through human evaluation of 200 stories on the Amazon Mechanical Turk platform. We provided Turkers with two generated narratives from two different models, each with five sentences. The first and last sentences were fed to each model as input, and the middle three sentences were generated. Each pair of narratives is graded by 3 users each with two tasks: (1) to rank on a scale of 0-3 each of the sentences except the first one on the basis of its coherency with the previous sentence(s) and (2) compare and rank the two narratives based on their overall coherency, ie how well the story connects the starting/ending sentences. <<</Annotation Scheme>>> <<<Human Evaluation Results>>> Table TABREF41 reports the result of human evaluations of SLDS and baseline generations. We can observe that people preferred narratives generated by SLDS over the ones generated by baseline models (LM and S2S) as they found the former model more coherent, which is an important criteria for narrative generation. 51.3% of the time SLDS generates better narratives than the LM model while LM in turn does it only 35.0% of the times. 13.7% of the generations end up in tie. The mean sentence level coherence score for SLDS is around 12.5% larger than that of the LM, with a slightly lower standard deviation. We see similar results when compared against the S2S model. <<</Human Evaluation Results>>> <<</Human Evaluation of Interpolations>>> <<<Language Modeling Perplexity Score>>> As our models are essentially language models, we evaluated their per-sentence negative log-likelihood and per-word perplexity scores, which can be viewed as an indirect measure of how well a system works as a generative model of narrative text. For the SLDS and LDS models these scores are approximations, an upper bound (the negative of the ELBO) to the actual values. For the other two models the scores are exact. A good model should assign low perplexity scores to its test set. In Table TABREF44 SLDS achieves the lowest scores, implying that it is able to model the data distribution well. 
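For reference, the per-word perplexity reported in this section is assumed to be computed as below, with the negative ELBO substituted for the exact negative log-likelihood in the SLDS and LDS rows; the numbers in the usage line are made up.

```python
import math

def per_word_perplexity(sentence_nlls, sentence_lengths):
    """exp of total negative log-likelihood (in nats) divided by the total word count.
    For SLDS/LDS the exact NLL is intractable, so the negative ELBO is used instead,
    which makes the reported perplexity an upper bound."""
    return math.exp(sum(sentence_nlls) / sum(sentence_lengths))

print(per_word_perplexity([40.2, 35.8], [12, 10]))   # ≈ 31.6
```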
In Table TABREF45 we also calculate the perplexity scores for the semi-supervised SLDS models to assess the effectiveness of semi-supervised training. Surprisingly, the models with less supervision scored better in terms of perplexity. One possibility for this might be the use of the soft Gumbel-Softmax in the semi-supervised models. The soft Gumbel-Softmax variant does not commit to using a single transition matrix at each time step (instead linearly combining them, weighted by the Softmax weights). This fact may permit the model greater flexibility in fitting the training data. While this leads to better scores in metrics such as perplexity or BLEU, it does leads to transitions that are worse in capturing the properties they should be capturing, as we shall see in the next section. <<</Language Modeling Perplexity Score>>> <<<Evaluation of Transition Dynamics>>> One matter of interest is whether or not the transitions are capturing what they are supposed to capture, appropriate sentiment. Since we used the sentiment tagger Vader for training tags, we again utilize it to evaluate whether using transitions of a certain sentiment actually leads the model to produce outputs with the given sentiment. To perform this evaluation, we give as input to our models (and the S2S baseline) the sentiment tags for a sentence and allow it to generate a sentence conditioned on these sentiment tags. We then tag the generated sentences with Vader and see if the sentiment tags match the originals. We calculate the F1 score across all sentiment tags and report the macro average. In Table TABREF47 we see that having labels is incredibly important for meaningful transitions. There is a large drop in F1 as the amount of labels given to the model is decreased. The SLDS model that is trained with 100% of the labels performs a little better than even S2S, despite not having direct access to the sentiment labels (SLDS only uses the sentiment labels to decide which transition to use while the S2S model uses attention directly on the sentiment labels). <<</Evaluation of Transition Dynamics>>> <<</Evaluations>>> <<<Related Work>>> Story/narrative generation has a rich history in the field of AI. Many early systems were based on structured formalisms for describing common narrative structures BIBREF9, BIBREF10, BIBREF31, many being inspired by the initial work of BIBREF0. There has been a swath of recent work that has looked to add some semblance of a `narrative scaffold' back into generation methods BIBREF32, BIBREF6, BIBREF7, BIBREF33. Many of these methods work as conditional LMs (conditioned directly on the scaffold). This line of work may be combined with our formalization as well, by conditioning the generation on the switching state as well, as done in the model of BIBREF4. Recent work by BIBREF34 has similar goals to ours in permitting more controlability in generation systems, developing a RL-based system that allows users to specify an end goal for a story (by specifying the event class that is desired to appear at the end). Their work differs from ours in that it does not deal with text directly, modeling only the sequences of events in the narrative. It may be possible to utilize this model as the scaffolding component in our model (utilizing their RL policy for the scaffold planner, rather than the simple Markovian distribution used here). <<</Related Work>>> <<<Conclusion and Future Work>>> In this paper, we formulated the problem of narrative generation as a switching dynamical system. 
We showed how this formulation captures notions important in narrative generation, such as narrative dynamics and scaffolds. We developed an approximate Gibbs sampling algorithm for the model that permits the system to generate interpolations conditioned on arbitrary parts of the narrative, and evaluated these interpolations using both human and automatic evaluations. Though in this work we used sentiment tags for our scaffolds/switching variables, future work may look at utilizing different kinds of information to guide the generation of narratives. Utilizing the main predicate of a sentence as a scaffold would be a logical next step, and may prove more informative than the sentiment trajectory. A scaffold such as this can take on many more possible values than a sentiment tag, and as such, it may prove difficult to assign a set of dynamics to each value. Another avenue for future work could address this problem. One potential solution could be to associate each switching variable value with a (learned) vector in a probability simplex, and use this vector to combine a small set of “primitive" dynamics matrices in order to get that value's associated set of dynamics. <<</Conclusion and Future Work>>> <<</Title>>>
{ "references": [ "ROUGE BIBREF29 and METEOR BIBREF30" ], "type": "extractive" }
2004.03762
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: What baselines are used? Context: <<<Title>>> Generating Narrative Text in a Switching Dynamical System <<<Abstract>>> Early work on narrative modeling used explicit plans and goals to generate stories, but the language generation itself was restricted and inflexible. Modern methods use language models for more robust generation, but often lack an explicit representation of the scaffolding and dynamics that guide a coherent narrative. This paper introduces a new model that integrates explicit narrative structure with neural language models, formalizing narrative modeling as a Switching Linear Dynamical System (SLDS). A SLDS is a dynamical system in which the latent dynamics of the system (i.e. how the state vector transforms over time) is controlled by top-level discrete switching variables. The switching variables represent narrative structure (e.g., sentiment or discourse states), while the latent state vector encodes information on the current state of the narrative. This probabilistic formulation allows us to control generation, and can be learned in a semi-supervised fashion using both labeled and unlabeled data. Additionally, we derive a Gibbs sampler for our model that can fill in arbitrary parts of the narrative, guided by the switching variables. Our filled-in (English language) narratives outperform several baselines on both automatic and human evaluations. <<</Abstract>>> <<<A Switching Dynamical System for Narrative Generation>>> In this section, we give a brief overview of Switching Dynamical systems and how they can be used to capture both a scaffold of the narrative as well as the narrative dynamics. We then describe in detail the components of our model and its relation to existing models. <<<Narrative Dynamics in a Dynamical System>>> The specifics of the narrative (characters, setting, etc.), will differ between stories, but as BIBREF0 notes, the way they transition to the next point in the narrative (what we refer to as “narrative dynamics") is often shared. Let's say that, as done often, we represent the `narrative specifics' at time step $i$ with a latent vector $Z_i$. A natural way to explicitly model how this state evolves over time that fits with the above observation is as a Linear Dynamical System: Where $A$ is a matrix, shared across all narratives, and $\Sigma $ is a noise term that takes into consideration idiosyncrasies different narratives will have. The fact that the shared transition matrix $A$ is linear means that narratives will have linearly analogous trajectories through time, despite having different details (comparable to stories with different settings but matching structures such as Ran/King Lear, Ulysses/Odyssey, etc). Of course, the fatal flaw of the model is that it assumes there exists only one transition matrix, and thus only one possible way to transition through a narrative! <<</Narrative Dynamics in a Dynamical System>>> <<<Narrative Scaffolds as Switching Variables>>> A more fitting model would thus be a Switching Linear Dynamical System BIBREF1, BIBREF2, BIBREF3. In an SLDS, we assume there exists a set of $K$ different sets of dynamics, $\lbrace (A_1, \Sigma _1),...(A_K,\Sigma _K)\rbrace $. At time step $i+1$, one of these sets of dynamics is used. 
The one used depends on the value of a discrete variable at time step $i+1$ called the switching variable, $S_{i+1} \in \lbrace 1,...K\rbrace $: There is a switching variable $S_i$ associated with each time step. The switching variable value itself evolves over time by a prior Markov process, $P(S_{i+1} | S_{i})$. This top level chain of switching variables thus forms our narrative scaffold, indicating what transitions we must go through in the narrative, with the dynamics matrices indicating how they transition. <<</Narrative Scaffolds as Switching Variables>>> <<<Narrative Scaffold - Emotional Trajectory>>> What the switching variables actually represent can be chosen by the user. Straightforward narrative scaffolds include event sequences BIBREF6, keywords BIBREF7, or latent template ids BIBREF8. More complex but potentially more informative scaffolds may be created using concepts such as story grammar non-terminals BIBREF9, BIBREF10, or character action taken throughout a story BIBREF11. In our work, we use the sentiment trajectory of the narrative as the scaffold. That is, each $S_i$ for a sentence indicates the overall coarse sentiment of the sentence (Positive, Negative, or Neutral). Though simple, the overall sentiment trajectory of a narrative is important in defining the high level `shape' of a narrative often shared among different narratives BIBREF12, BIBREF13. Furthermore, sentiment trajectory has been shown to be fairly useful in story understanding tasks BIBREF14, BIBREF15. We discuss in the conclusion future directions for using different types of scaffolds. <<</Narrative Scaffold - Emotional Trajectory>>> <<<The Full Model>>> The final component of the model is a conditional language model that generates sentence $i$ conditioned on the current $Z_i$, and all previous sentences, $X_{:i}$. Generation continues until an <eos> is reached. This conditional language model may be parameterized as desired, but in this work, we parameterize it as an RNN neural network language model. The graphical model for our SLDS is pictured in Figure FIGREF8. The model consists of three sets of variables: (1) Switching variables $S_1,...,S_N$, (2) Latent state variables $Z_1,...,Z_N$ capturing the details of the narrative at sentence $i$, (3) The sentences themselves $X_1,...X_N$, where each sentence $X_i$ has $n_i$ words, $x^i_1,...x^i_{n_i}$. The joint over all variables factorizes as below into the following components ($X_{:i}$ stands for all sentence before $X_i$): ❶ Narrative Scaffold Planner: The factor $P(S_i | S_{i-1})$ is a transition matrix, which we calculate via count based statistics from training. It is fed in as prior knowledge and fixed. ❷ Narrative Dynamics Network: The factor $P(Z_i | Z_{i-1}, S_i)$ is determined like a switching linear dynamical system: which is equivalent to drawing $Z_i$ from a Normal distribution with mean $A_{S_i}Z_{i-1}$ and variance $B_{S_i}B_{S_i}^T$. ❸ Conditional Language model: The factor $P(X_i | Z_i, X_{:i})$ is parameterized by an RNN language model conditioned on the latent $Z_i$. <<</The Full Model>>> <<</A Switching Dynamical System for Narrative Generation>>> <<<Learning and Posterior Inference>>> Due to the conditionals parameterized by neural networks we use amortized variational inference in a manner similar to Variational AutoEncoders BIBREF16, both to learn an approximate posterior $q(S, Z | X)$ and to learn the generative model parameters by maximizing a lower bound on the data likelihood (ELBO). 
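As an aside before the posterior factorization, the count-based scaffold planner factor ❶ above admits a one-screen sketch; the smoothing constant and the example tag sequences are made up, with tags following the three sentiment buckets used elsewhere in the paper.

```python
import numpy as np

TAGS = ["negative", "neutral", "positive"]
IDX = {t: i for i, t in enumerate(TAGS)}

def estimate_transitions(tagged_stories, smoothing=1.0):
    """Count-based estimate of P(S_i | S_{i-1}) over per-sentence sentiment tags."""
    counts = np.full((len(TAGS), len(TAGS)), smoothing)     # additive smoothing (assumption)
    for story in tagged_stories:
        for prev_tag, next_tag in zip(story, story[1:]):
            counts[IDX[prev_tag], IDX[next_tag]] += 1
    return counts / counts.sum(axis=1, keepdims=True)       # each row sums to 1

# Made-up 5-sentence stories tagged with the three sentiment buckets:
stories = [["neutral", "neutral", "negative", "negative", "positive"],
           ["positive", "neutral", "neutral", "positive", "positive"]]
print(estimate_transitions(stories))
```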
We assume that the approximate posterior factorizes as follows: Like in VAEs, computing these individual factors is done through a parameterized function called the inference or recognition network whose parameters are trained jointly with the generative model. In our case there are two forms for the factors in our posterior: (1) The first form, $q(S_i | \textbf {X}) = q_{S_i}$ is parameterized by a classifier that takes in the set of sentences $\mathbf {X}$ and outputs a categorical distribution over the switching variables. (2) The second form, $q(Z_i| Z_{i-1}, S_i, X_{:i}, X_{i}) = q_{Z_i}$ is realized by functions $f_{\mu }(Z_{i-1}, S_i, X_{:i}, X_{i})$ and $f_\sigma (Z_{i-1}, S_i, X_{:i}, X_{i})$ that output the mean and variance, respectively, of a Gaussian over $Z_i$. Borrowing terminology from VAEs, the approximate posterior (the factors given above) act as an `encoder', while the generative model from the previous section can be seen as the `decoder'. This type of training has been previously used in BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. <<<Lower bound formula & exact training algorithm>>> As mentioned previously, we optimize all parameters (including the variational factor functions) by optimizing a lower bound on the data likelihood. The model may be trained either with supervision labels for the switching states (in our case, sentiment labels) or without supervised labels. If one is training without the sentiment labels, then the lower bound on the marginal likelihood (and thus our optimization objective) may be written as follows: The derivation for this objective is identical to that found in BIBREF18, BIBREF19, and simply relies on using properties of iterated expectations. All expectations are estimated with Monte Carlo samples. If training with the sentiment labels $S_1,...,S_N$, then the objective is similar (but without the sampling of the switching states), and is augmented with an additional supervision objective as done in BIBREF22: Final training procedure for a single narrative is: For each sentence (starting from the first), sample the switching state $S_i$ from $q(S_i | \textbf {X})$. For each sentence (starting from the first), sample the latent $Z_i$ from $q(Z_i | S_i, Z_{i-1}, X)$. Evaluate the data likelihood and KL term(s) with these samples. Take the gradients of the objective function w.r.t. all parameters, using the reparameterization trick for $q_{Z_i}$ BIBREF16 or the Gumbel-Softmax trick for $q_{S_i}$ BIBREF23, and optimize. <<</Lower bound formula & exact training algorithm>>> <<</Learning and Posterior Inference>>> <<<Interpolations via Gibbs Sampling>>> One of the benefits of probabilistic formulation is the possibility (if an inference procedure can be found) of generating narratives with specific constraints, where the constraints may be specified as clamped variables in the model. In this section, we show how narratives may be generated conditioned on arbitrary bits and pieces of the narrative already filled in, using approximate Gibbs sampling. This allows one to, for example, interpolate a narrative given the first and the last sentence (similar to how earlier story generation systems were able to generate with a given end goal in mind). Some examples of these interpolations generated by our system can be found in Table TABREF37. We give the equations and summarize the algorithm in the next sections. 
<<<Conditionals for Gibbs Sampling>>> For our Gibbs sampling algorithm we give the narrative scaffold (switching variables), $S_1,...,S_T \in \mathbf {S}$ and a set of observed sentences, $\mathbf {X^+}$. This may be any set of sentences (the first and last, just the second sentence, etc) as inputs to the system. We wish to find values for the unobserved sentences in set $\mathbf {X^-}$ by sampling from the distribution $P(\mathbf {X^-}, Z_1,...,Z_T | \mathbf {S},\mathbf {X^+})$. We perform this sampling via Gibbs sampling. Two different forms of conditionals need to be derived to do Gibbs sampling. One over some $Z_i$ conditioned on everything else, and one over some $X_i$ conditioned on everything else. By using the d-separation properties of the graph, and substituting the true posterior over $Z_{i}$ with our approximate posterior $q$, we can show the first distribution is approximately proportional to The last line is the product between a Gaussian density over $Z_{i+1}$ and $Z_{i}$, respectively. With some algebraic manipulations, one can show the last line is proportional to a single Gaussian PDF over $Z_i$: To find the second conditional, one can use the d-separation properties of the graph to find that it is proportional to: These two distributions are simply factors of our conditional language model, and both terms can thus be evaluated easily. In theory, one could use this fact to sample the original conditional via Metropolis-Hastings . Unfortunately, we found this approach to be much too slow for practical purposes. We observed that the simple heuristic of deterministically assigning $X_i$ to be the greedy decoded output of the conditional language model $P(X_{i} | X_{:i}, Z_{i})$ works well, as evidenced by the empirical results. We leave it for future work to research different conditional language model parameterizations that allow easy sampling from this conditional <<</Conditionals for Gibbs Sampling>>> <<<Gibbs Sampling Interpolation Overview>>> The variables in the Gibbs sampler are first initialized using some heuristics (see Supplemental Materials for details). After initialization, performing the interpolations with Gibbs sampling follows the below two step process: For each $Z_i$, sample a value $Z^\prime $ from equation $(1)$ and set $Z_i$ to $Z^\prime $. For each $X_i$ in $\mathbf {X}^-$, find a new value for $X_i$ by running greedy decoding using the conditional language model. <<</Gibbs Sampling Interpolation Overview>>> <<</Interpolations via Gibbs Sampling>>> <<<Training Details>>> <<<Dataset and Preprocessing>>> We use the ROCStories corpora introduced in BIBREF27. It contains 98,159 short commonsense stories in English as training, and 1,570 stories for validation and test each. Each story in the dataset has five-sentences and captures causal and temporal commonsense relations. We limit our vocabulary size to 16,983 based on a per-word frequency cutoff set to 5. For sentiment tags, we automatically tag the entirety of the corpus with the rule based sentiment tagger, Vader BIBREF28, and bucket the polarity scores of Vader into three tags: neutral, negative, and positive. These tags form the label set of the $S$ variables in our SLDS model. We tokenize the stories with Spacy tokenizer. Each sentences in the input narrative has an <eos> tag except for the S2S model discussed below. <<</Dataset and Preprocessing>>> <<<Switching Linear Dynamical System (SLDS)>>> SLDS has RNN encoder and decoder networks with single layer GRU cells of hidden size 1024. 
The model uses an embedding size of 300. We train the model using the Adam optimizer with PyTorch's default settings. We stop training the models when the validation loss does not decrease for 3 consecutive epochs. Training details remain the same as above unless otherwise mentioned. <<</Switching Linear Dynamical System (SLDS)>>> <<<Baselines>>> Language Model (LM): We train a two-layer recurrent neural language model with GRU cells of hidden size 512. Sequence-to-Sequence Attention Model (S2S): We train a two-layer neural sequence-to-sequence model equipped with a bi-linear attention function with GRU cells of hidden size 512. Sentiment tags for a narrative (1 for each sentence) are given as input to the model and the corresponding sentences are concatenated together as the output with only one <eos> tag at the end. This model is trained with a 0.1 dropout. This model is comparable to the static model of BIBREF7, and other recent works employing a notion of scaffolding into neural generation (albeit adapted for our setting). Linear Dynamical System (LDS): We also train a linear dynamical system as discussed in Section SECREF1 as one of our baselines for fair comparisons. Apart from having just a single transition matrix, this model has the same architectural details as SLDS. Semi-Supervised SLDS (SLDS-X%): To gauge the usability of semi-supervision, we also train semi-supervised SLDS models with varying amounts of labelled sentiment tags, unlike the original model which uses 100% tagged data. We refer to these as SLDS-X%, where X is the percentage of labelled data used for training: 1%, 10%, 25%, and 50%. <<</Baselines>>> <<</Training Details>>> <<<Evaluations>>> As described above, our model is able to perform narrative interpolations via an approximate Gibbs sampling procedure. At the core of our evaluations is thus a fill-in-the-sentences task. We provide 1 or 2 sentences, and require the model to generate the rest of the narrative. We evaluate this via automatic evaluations as well as with crowd-sourced human evaluations. We also report perplexity to evaluate the models' ability to fit the data. Lastly, we look at whether the transitions learned by the SLDS models capture what they are intended to capture: does using the transition matrix associated with a sentiment tag (positive/negative/neutral) lead to a generated sentence with that sentiment? <<<Generating the Interpolations>>> For the SLDS models, the interpolations are generated via the Gibbs sampling algorithm described earlier. In all experiments for the SLDS models we draw 50 samples (including burn-in samples) and output the interpolation that maximizes the probability of the given sentence(s). Since the baselines do not have the means for doing interpolations, we simulate `interpolations' for the baselines; we draw 1000 samples using top-k (with k=15) truncated sampling (conditioned on the given initial sentences, if available). We then output the sample that maximizes the probability of the clamped sentences around which we are interpolating the others. We allow the S2S access to the gold sentiment tags. To give a lower bound on the performance of the SLDS model, we do not provide it with gold tags. We instead provide the SLDS model with the semi-noisy tags that are output from $q(S_i | X)$.
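As a small illustration of the selection heuristic just described (draw many candidate narratives, then keep the one that assigns the highest probability to the clamped sentences), consider the sketch below. The helpers draw_candidate and score_clamped are hypothetical placeholders standing in for, respectively, one Gibbs sweep (SLDS) or one top-k truncated sample (baselines), and the model's log-probability of the given sentences; they are assumptions for illustration, not part of the authors' code.

import math

def best_interpolation(draw_candidate, score_clamped, n_samples):
    """draw_candidate() returns one full narrative (a list of sentences);
    score_clamped(narrative) returns the log-probability of the clamped/given
    sentences under the model.  Both are assumed to exist."""
    best, best_score = None, -math.inf
    for _ in range(n_samples):
        cand = draw_candidate()       # Gibbs sweep (SLDS) or top-k sample (baselines)
        s = score_clamped(cand)
        if s > best_score:
            best, best_score = cand, s
    return best

# Hypothetical usage: 50 Gibbs draws for SLDS, 1000 truncated samples for a baseline LM.
# narrative = best_interpolation(gibbs_draw, clamped_logprob, n_samples=50)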
<<</Generating the Interpolations>>> <<<Automatic Evaluation of Interpolations>>> We automatically evaluate on four different types of interpolations (where different combinations of sentences are removed and the model is forced to regenerate them). We evaluate the generations with the ROUGE BIBREF29 and METEOR BIBREF30 metrics using the true sentences as targets. Table TABREF33 shows the automatic evaluation results from interpolations using our proposed models and baselines. The #Sent(s) column indicates which sentence(s) were removed, and then regenerated by the model. We gave the baselines a slight edge over SLDS because they pick the best out of 1000 samples while SLDS picks the best out of only 50. The SLDS models see their largest gain over the baseline models when at least the first sentence is given as an input. The baseline models do better when the first and second sentence need to be imputed. This is likely due to the fact that having access to the earlier sentences allows a better initialization for the Gibbs sampler. Surprisingly, the semi-supervised variants of the SLDS models achieve higher scores. The reasons for this are discussed below in the Perplexity section. <<</Automatic Evaluation of Interpolations>>> <<<Human Evaluation of Interpolations>>> <<<Annotation Scheme>>> As automatic evaluation metrics are not sufficient to assess the quality of any creative task such as narrative generation, we measure the quality of the generations through human evaluation of 200 stories on the Amazon Mechanical Turk platform. We provided Turkers with two generated narratives from two different models, each with five sentences. The first and last sentences were fed to each model as input, and the middle three sentences were generated. Each pair of narratives is graded by 3 users, each with two tasks: (1) to rank on a scale of 0-3 each of the sentences except the first one on the basis of its coherency with the previous sentence(s), and (2) to compare and rank the two narratives based on their overall coherency, i.e., how well the story connects the starting/ending sentences. <<</Annotation Scheme>>> <<<Human Evaluation Results>>> Table TABREF41 reports the results of human evaluations of SLDS and baseline generations. We can observe that people preferred narratives generated by SLDS over the ones generated by baseline models (LM and S2S) as they found the former model more coherent, which is an important criterion for narrative generation. 51.3% of the time SLDS generates better narratives than the LM model, while LM in turn does so only 35.0% of the time. 13.7% of the generations end up in a tie. The mean sentence-level coherence score for SLDS is around 12.5% larger than that of the LM, with a slightly lower standard deviation. We see similar results when compared against the S2S model. <<</Human Evaluation Results>>> <<</Human Evaluation of Interpolations>>> <<<Language Modeling Perplexity Score>>> As our models are essentially language models, we evaluated their per-sentence negative log-likelihood and per-word perplexity scores, which can be viewed as an indirect measure of how well a system works as a generative model of narrative text. For the SLDS and LDS models these scores are approximations, an upper bound (the negative of the ELBO) on the actual values. For the other two models the scores are exact. A good model should assign low perplexity scores to its test set. In Table TABREF44 SLDS achieves the lowest scores, implying that it is able to model the data distribution well.
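For reference, per-word perplexity here is the exponentiated average negative log-likelihood per token; for SLDS and LDS the negative ELBO is used in place of the exact NLL, so the resulting number is an upper bound. A minimal sketch follows, with made-up illustrative totals rather than numbers from the paper.

import math

def per_word_perplexity(total_nll_nats, total_words):
    """total_nll_nats: summed NLL over the test set, in nats; for SLDS/LDS this is
    the negative ELBO, so the returned perplexity is an upper bound."""
    return math.exp(total_nll_nats / total_words)

# e.g. per_word_perplexity(51_000.0, 12_000) -> exp(4.25), roughly 70.1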
In Table TABREF45 we also calculate the perplexity scores for the semi-supervised SLDS models to assess the effectiveness of semi-supervised training. Surprisingly, the models with less supervision scored better in terms of perplexity. One possibility for this might be the use of the soft Gumbel-Softmax in the semi-supervised models. The soft Gumbel-Softmax variant does not commit to using a single transition matrix at each time step (instead linearly combining them, weighted by the Softmax weights). This fact may permit the model greater flexibility in fitting the training data. While this leads to better scores in metrics such as perplexity or BLEU, it does lead to transitions that are worse in capturing the properties they should be capturing, as we shall see in the next section. <<</Language Modeling Perplexity Score>>> <<<Evaluation of Transition Dynamics>>> One matter of interest is whether or not the transitions are capturing what they are supposed to capture, namely appropriate sentiment. Since we used the sentiment tagger Vader for training tags, we again utilize it to evaluate whether using transitions of a certain sentiment actually leads the model to produce outputs with the given sentiment. To perform this evaluation, we give as input to our models (and the S2S baseline) the sentiment tags for a sentence and allow each model to generate a sentence conditioned on these sentiment tags. We then tag the generated sentences with Vader and see if the sentiment tags match the originals. We calculate the F1 score across all sentiment tags and report the macro average. In Table TABREF47 we see that having labels is incredibly important for meaningful transitions. There is a large drop in F1 as the number of labels given to the model is decreased. The SLDS model that is trained with 100% of the labels performs a little better than even S2S, despite not having direct access to the sentiment labels (SLDS only uses the sentiment labels to decide which transition to use while the S2S model uses attention directly on the sentiment labels). <<</Evaluation of Transition Dynamics>>> <<</Evaluations>>> <<<Related Work>>> Story/narrative generation has a rich history in the field of AI. Many early systems were based on structured formalisms for describing common narrative structures BIBREF9, BIBREF10, BIBREF31, many being inspired by the initial work of BIBREF0. There has been a swath of recent work that has looked to add some semblance of a `narrative scaffold' back into generation methods BIBREF32, BIBREF6, BIBREF7, BIBREF33. Many of these methods work as conditional LMs (conditioned directly on the scaffold). This line of work may be combined with our formalization by conditioning the generation on the switching state as well, as done in the model of BIBREF4. Recent work by BIBREF34 has similar goals to ours in permitting more controllability in generation systems, developing an RL-based system that allows users to specify an end goal for a story (by specifying the event class that is desired to appear at the end). Their work differs from ours in that it does not deal with text directly, modeling only the sequences of events in the narrative. It may be possible to utilize this model as the scaffolding component in our model (utilizing their RL policy for the scaffold planner, rather than the simple Markovian distribution used here). <<</Related Work>>> <<<Conclusion and Future Work>>> In this paper, we formulated the problem of narrative generation as a switching dynamical system.
We showed how this formulation captures notions important in narrative generation, such as narrative dynamics and scaffolds. We developed an approximate Gibbs sampling algorithm for the model that permits the system to generate interpolations conditioned on arbitrary parts of the narrative, and evaluated these interpolations using both human and automatic evaluations. Though in this work we used sentiment tags for our scaffolds/switching variables, future work may look at utilizing different kinds of information to guide the generation of narratives. Utilizing the main predicate of a sentence as a scaffold would be a logical next step, and may prove more informative than the sentiment trajectory. A scaffold such as this can take on many more possible values than a sentiment tag, and as such, it may prove difficult to assign a set of dynamics to each value. Another avenue for future work would be to address this potential problem. One potential solution could be to associate each switching variable value with a (learned) vector in a probability simplex, and use this vector to combine a small set of “primitive” dynamics matrices in order to get that value's associated set of dynamics. <<</Conclusion and Future Work>>> <<</Title>>>
{ "references": [ "a two layer recurrent neural language model with GRU cells of hidden size 512,a two layer neural sequence to sequence model equipped with bi-linear attention function with GRU cells of hidden size 512,a linear dynamical system,semi-supervised SLDS models with varying amount of labelled sentiment tags" ], "type": "extractive" }
1909.07593
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: Which model is used to capture the implicit structure? Context: <<<Title>>> Learning Explicit and Implicit Structures for Targeted Sentiment Analysis <<<Abstract>>> Targeted sentiment analysis is the task of jointly predicting target entities and their associated sentiment information. Existing research efforts mostly regard this joint task as a sequence labeling problem, building models that can capture explicit structures in the output space. However, the importance of capturing implicit global structural information that resides in the input space is largely unexplored. In this work, we argue that both types of information (implicit and explicit structural information) are crucial for building a successful targeted sentiment analysis model. Our experimental results show that properly capturing both information is able to lead to better performance than competitive existing approaches. We also conduct extensive experiments to investigate our model's effectiveness and robustness. <<</Abstract>>> <<<Introduction>>> Accepted as a long paper in EMNLP 2019 (Conference on Empirical Methods in Natural Language Processing). Targeted sentiment analysis (TSA) is an important task useful for public opinion mining BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. The task focuses on predicting the sentiment information towards a specific target phrase, which is usually a named entity, in a given input sentence. Currently, TSA in the literature may refer to either of the two possible tasks under two different setups: 1) predicting the sentiment polarity for a given specific target phrase BIBREF5, BIBREF6, BIBREF7, BIBREF8; 2) jointly predicting the targets together with the sentiment polarity assigned to each target BIBREF9, BIBREF10, BIBREF11, BIBREF12. In this paper, we focus on the latter setup which was originally proposed by BIBREF9. Figure FIGREF2 presents an example sentence containing three targets. Each target is associated with a sentiment, where we use $+$ for denoting positive polarity, 0 for neutral and $-$ for negative. Existing research efforts mostly regard this task as a sequence labeling problem by assigning a tag to each word token, where the tags are typically designed in a way that capture both the target boundary as well as the targeted sentiment polarity information together. Existing approaches BIBREF9, BIBREF10, BIBREF12 build models based on conditional random fields (CRF) BIBREF13 or structural support vector machines (SSVM) BIBREF14, BIBREF15 to explicitly model the sentiment information with structured outputs, where each targeted sentiment prediction corresponds to exactly one fixed output. While effective, such models suffer from their inability in capturing certain long-distance dependencies between sentiment keywords and their targets. To remedy this issue, BIBREF11 proposed their “sentiment scope’’ model to learn flexible output representations. For example, three text spans with their corresponding targets in bold are presented in Figure FIGREF2, where each target’s sentiment is characterized by the words appearing in the corresponding text span. They learn from data for each target a latent text span used for attributing its sentiment, resulting in flexible output structures. However, we note there are two major limitations with the approach of BIBREF11. First, their model requires a large number of hand-crafted discrete features. 
Second, the model relies on a strong assumption that the latent sentiment spans do not overlap with one another. For example, in Figure FIGREF2, their model will not be able to capture the interaction between the target word “OZ” in the first sentiment span and the keyword “amazing” due to the assumptions made on the explicit structures in the output space. One idea to resolve this issue is to design an alternative mechanism to capture such useful structural information that resides in the input space. On the other hand, recent literature shows that feature learning mechanisms such as self-attention have been successful for the task of sentiment prediction when targets are given BIBREF16, BIBREF17, BIBREF18 (i.e., under the first setup mentioned above). Such approaches essentially attempt to learn rich implicit structural information in the input space that captures the interactions between a given target and all other word tokens within the sentence. Such implicit structures are then used to generate sentiment summary representation towards the given target, leading to the performance boost. However, to date capturing rich implicit structures in the joint prediction task that we focus on (i.e., the second setup) remains largely unexplored. Unlike the first setup, in our setup the targets are not given, we need to handle exponentially many possible combinations of targets in the joint task. This makes the design of an algorithm for capturing both implicit structural information from the input space and the explicit structural information from the output space challenging. Motivated by the limitations and challenges, we present a novel approach that is able to efficiently and effectively capture the explicit and implicit structural information for TSA. We make the following key contributions in this work: We propose a model that is able to properly integrate both explicit and implicit structural information, called EI. The model is able to learn flexible explicit structural information in the output space while being able to efficiently learn rich implicit structures by LSTM and self-attention for exponentially many possible combinations of targets in a given sentence. We conducted extensive experiments to validate our claim that both explicit and implicit structures are indispensable in such a task, and demonstrate the effectiveness and robustness of our model. <<</Introduction>>> <<<Approach>>> Our objective is to design a model to extract targets as well as their associated targeted sentiments for a given sentence in a joint manner. As we mentioned before, we believe that both explicit and implicit structures are crucial for building a successful model for TSA. Specifically, we first present an approach to learn flexible explicit structures based on latent CRF, and next present an approach to efficiently learn the rich implicit structures for exponentially many possible combinations of targets. <<<Explicit Structure>>> Motivated by BIBREF11, we design an approach based on latent CRF to model flexible sentiment spans to capture better explicit structures in the output space. To do so, we firstly integrate target and targeted sentiment information into a label sequence by using 3 types of tags in our EI model: $\mathbf {B}_p$, $\mathbf {A}_p$, and $\mathbf {E}_{\epsilon ,p}$, where $p \in \lbrace +, -, 0\rbrace $ indicates the sentiment polarity and $\epsilon \in \lbrace \textit {B,M,E,S}\rbrace $ denotes the BMES tagging scheme. We explain the meaning of each type of tags as follows. 
$\mathbf {B}_p$ is used to denote that the current word is part of a sentiment span with polarity $p$, but appears before the target word or exactly as the first word of the target. $\mathbf {A}_p$ is used to denote that the current word is part of a sentiment span with polarity $p$, but appears after the target word or exactly as the last word of the target. $\mathbf {E}_{\epsilon ,p}$ is used to denote the current word is part of a sentiment span with polarity $p$, and is also a part of the target. The BMES sub-tag $\epsilon $ denotes the position information within the target phrase. For example, $\mathbf {E}_{B,+}$ represents that the current word appears as the first word of a target with the positive polarity. We illustrate how to construct the label sequence for a specific combination of sentiment spans of the given example sentence in Figure FIGREF5, where three non-overlapping sentiment spans in yellow are presented. Each such sentiment span encodes the sentiment polarity in blue for a target in bold in pink square. At each position, we allow multiple tags in a sequence to appear such that the edge $\mathbf {A}_p\mathbf {B}_{p^{\prime }}$ in red consistently indicates the boundary between two adjacent sentiment spans. The first sentiment span with positive ($+$) polarity contains only one word which is also the target. Such a single word target is also the beginning and the end of the target. We use three tags $\mathbf {B}_+$, $\mathbf {E}_{S,+}$ and $\mathbf {A}_+$ to encode such information above. The second sentiment span with positive ($+$) polarity contains a two-word target “Shin Lim”. The word “and” appearing before such target takes a tag $\mathbf {B}_+$. The words “perform amazing magic” appearing after such target take a tag $\mathbf {A}_+$ at each position. As for the target, the word “Shin” at the beginning of the target takes tags $\mathbf {B}_+$ and $\mathbf {E}_{B,+}$, while the word “Lim” at the end of the target takes tags $\mathbf {E}_{E,+}$ and $\mathbf {A}_+$. The third sentiment span with neutral (0) polarity contains a single-word target “AGT”. Similarly, we use three tags $\mathbf {B}_0$, $\mathbf {E}_{S,0}$ and $\mathbf {A}_0$ to represent such single word target. The word “on” appearing before such target takes a tag $\mathbf {B}_0$. The word “2018” appearing afterwards takes a tag $\mathbf {A}_0$. Note that if there exists a target with length larger than 2, the tag $\mathbf {E}_{M,p}$ will be used. For example in Figure FIGREF5, if the target phrase “Shin Lim” is replaced by “Shin Bob Lim”, we will keep the tags at “Shin” and “Lim” unchanged. We assign a tag $\mathbf {E}_{M,+}$ at the word “Bob” to indicate that “Bob” appears in the middle of the target by following the BMES tagging scheme. Finally, we represent the label sequence by connecting adjacent tags sequentially with edges. Notice that for a given input sentence and the output targets as well as the associated targeted sentiment, there exist exponentially many possible label sequences, each specifying a different possible combinations of sentiment spans. Figure FIGREF11 shows a label sequence for an alternative combination of the sentiment spans. Those label sequences representing the same input and output construct a latent variable in our model, capturing the flexible explicit structures in the output space. We use a log-linear formulation to parameterize our model. 
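Before giving the log-linear parameterization, the sketch below illustrates the tag construction just described for one fixed choice of sentiment spans. It is an illustrative reimplementation, not the authors' code; the flat tag strings (e.g. 'E_B+') are an informal rendering of $\mathbf {B}_p$, $\mathbf {E}_{\epsilon ,p}$ and $\mathbf {A}_p$, and the span tuple format is an assumption made for this example.

def build_tags(n_tokens, spans):
    """spans: list of (span_start, span_end, target_start, target_end, polarity),
    token indices inclusive; polarity is one of '+', '0', '-'."""
    tags = [[] for _ in range(n_tokens)]
    for ss, se, ts, te, p in spans:
        for k in range(ss, ts):                 # context before the target
            tags[k].append(f"B{p}")
        if ts == te:                            # single-word target: B, E_S and A
            tags[ts] += [f"B{p}", f"E_S{p}", f"A{p}"]
        else:
            tags[ts] += [f"B{p}", f"E_B{p}"]    # first target word
            for k in range(ts + 1, te):
                tags[k].append(f"E_M{p}")       # middle target words
            tags[te] += [f"E_E{p}", f"A{p}"]    # last target word
        for k in range(te + 1, se + 1):         # context after the target
            tags[k].append(f"A{p}")
    return tags

# Toy usage: a 4-token sentence with a two-word target (tokens 1-2) of positive polarity
# whose sentiment span covers the whole sentence.
print(build_tags(4, [(0, 3, 1, 2, '+')]))
# [['B+'], ['B+', 'E_B+'], ['E_E+', 'A+'], ['A+']]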
Specifically, the probability of predicting a possible output $\mathbf {y}$, which is a list of targets and their associated sentiment information, given an input sentence $\mathbf {x}$, is defined as: where $s(\mathbf {x},\mathbf {y},\mathbf {h})$ is a score function defined over the sentence $\mathbf {x}$ and the output structure $\mathbf {y}$, together with the latent variable $\mathbf {h}$ that provides all the possible combinations of sentiment spans for the $(\mathbf {x,y})$ tuple. We define $E(\mathbf {x},\mathbf {y},\mathbf {h})$ as a set of all the edges appearing in all the label sequences for such combinations of sentiment spans. To compute $s(\mathbf {x},\mathbf {y},\mathbf {h})$, we sum up the scores of each edge in $E(\mathbf {x},\mathbf {y},\mathbf {h})$: where $\phi _{\mathbf {x}}(e)$ is a score function defined over an edge $e$ for the input $\mathbf {x}$. The overall model is analogous to that of a neural CRF BIBREF19, BIBREF20; hence the inference and decoding follow standard marginal and MAP inference procedures. For example, the prediction of $\mathbf {y}$ follows the Viterbi-like MAP inference procedure. <<</Explicit Structure>>> <<<Implicit Structure>>> We propose a design for EI to efficiently learn rich implicit structures for exponentially many combinations of targets to predict. To do so, we explain the process to assign scores to each edge $e$ from our neural architecture. The three yellow boxes in Figure FIGREF14 compute scores for rich implicit structures from the neural architecture consisting of LSTM and self-attention. Given an input token sequence $\mathbf {x}=\lbrace x_1,x_2,\cdots ,x_{n}\rbrace $ of length $n$, we first compute the concatenated embedding $\mathbf {e}_k=[\mathbf {w}_k;\mathbf {c}_k]$ based on word embedding $\mathbf {w}_k$ and character embedding $\mathbf {c}_k$ at position $k$. As illustrated on the left part in Figure FIGREF14, we then use a Bi-directional LSTM to encode context features and obtain hidden states $\mathbf {h}_k=\mathrm {BiLSTM}(\mathbf {e_1},\mathbf {e_2}, \cdots , \mathbf {e_n})$. We use two different linear layers $f_t$ and $f_s$ to compute scores for target and sentiment respectively. The linear layer $f_t$ returns a vector of length 4, with each value in the vector indicating the score of the corresponding tag under the BMES tagging scheme. The linear layer $f_s$ returns a vector of length 3, with each value representing the score of a certain polarity of $+,0,-$. We assign such scores to each type of edge as follows: Note that the subscript $p$ and $\epsilon $ at the right hand side of above equations denote the corresponding index of the vector that $f_t$ or $f_s$ returns. We apply $f_{t}$ on edges $\mathbf {E}^{k}_{\epsilon ,p}\mathbf {E}^{k+1}_{\epsilon ^{\prime },p}$ and $\mathbf {E}^{k}_{\epsilon ,p}\mathbf {A}^{k}_{p}$, since words at these edges are parts of the target phrase in a sentiment span. Similarly, we apply $f_{s}$ on edges $\mathbf {B}^{k}_{p}\mathbf {B}^{k+1}_{p}$,$\mathbf {A}^{k}_{p}\mathbf {A}^{k+1}_{p}$ and $\mathbf {A}^{k}_{p}\mathbf {B}^{k+1}_{p^{\prime }}$, since words at these edges contribute the sentiment information for the target in the sentiment span. As illustrated in Figure FIGREF14, we calculate $\mathbf {a}_k$, the output of self-attention at position $k$: where $\alpha _{k,j}$ is the normalized weight score for $\mathbf {\beta }_{k,j}$, and $\mathbf {\beta }_{k,j}$ is the weight score calculated by target representation at position $k$ and contextual representation at position $j$. 
In addition, $W$ and $b$ as well as the attention matrix $U$ are the weights to be learned. Such a vector $\mathbf {a}_k$ encodes the implicit structures between the word $x_k$ and each word in the remaining sentence. Motivated by the character embeddings BIBREF21 which are generated based on hidden states at two ends of a subsequence, we encode such implicit structures for a target similarly. For any target starting at the position $k_1$ and ending at the position $k_2$, we could use $\mathbf {a}_{k_1}$ and $\mathbf {a}_{k_2}$ at two ends to represent the implicit structures of such a target. We encode such information on the edges $\mathbf {B}^{k_1}_{p}\mathbf {E}^{k_1}_{\epsilon ,p}$ and $\mathbf {E}^{k_2}_{\epsilon ,p}\mathbf {A}^{k_2}_{p}$ which appear at the beginning and the end of a target phrase respectively with sentiment polarity $p$. To do so, we assign the scores calculated from the self-attention to such two edges: where $g_{s}$ returns a vector of length 3 with scores of three polarities. Note that $\mathbf {h}_k$ and $\mathbf {a}_k$ could be pre-computed at every position $k$ and assigned to the corresponding edges. Such an approach allows us to maintain the inference time complexity $O(Tn)$, where $T$ is the maximum number of tags at each position which is 9 in this work and $n$ is the number of words in the input sentence. This approach enables EI to efficiently learn rich implicit structures from LSTM and self-attention for exponentially many combinations of targets. <<</Implicit Structure>>> <<</Approach>>> <<<Experimental Setup>>> <<<Data>>> We mainly conduct our experiments on the datasets released by BIBREF9. They contain 2,350 English tweets and 7,105 Spanish tweets, with target and targeted sentiment annotated. See Table TABREF15 for corpus statistics. <<</Data>>> <<<Evaluation Metrics>>> Following the previous works, we report the precision ($P.$), recall ($R.$) and $F_1$ scores for target recognition and targeted sentiment. Note that a correct target prediction requires the boundary of the target to be correct, and a correct targeted sentiment prediction requires both target boundary and sentiment polarity to be correct. <<</Evaluation Metrics>>> <<<Hyperparameters>>> We adopt pretrained embeddings from BIBREF22 and BIBREF23 for English data and Spanish data respectively. We use a 2-layer LSTM (for both directions) with a hidden dimension of 500 and 600 for English data and Spanish data respectively. The dimension of the attention weight $U$ is 300. As for optimization, we use the Adam BIBREF24 optimizer to optimize the model with batch size 1 and dropout rate $0.5$. All the neural weights are initialized by Xavier BIBREF25. <<</Hyperparameters>>> <<<Training and Implementation>>> We train our model for a maximal of 6 epochs. We select the best model parameters based on the best $F_1$ score on the development data after each epoch. Note that we split $10\%$ of data from the training data as the development data. The selected model is then applied to the test data for evaluation. During testing, we map words not appearing in the training data to the UNK token. Following the previous works, we perform 10-fold cross validation and report the average results. Our models and variants are implemented using PyTorch BIBREF26. <<</Training and Implementation>>> <<<Baselines>>> We consider the following baselines: Pipeline BIBREF10 and Collapse BIBREF10 both are linear-chain CRF models using discrete features and embeddings. 
The former predicts targets first and calculate targeted sentiment for each predicted target. The latter outputs a tag at each position by collapsing the target tag and sentiment tag together. Joint BIBREF10 is a linear-chain SSVM model using both discrete features and embeddings. Such a model jointly produces target tags and sentiment tags. Bi-GRU BIBREF12 and MBi-GRU BIBREF12 are both linear-chain CRF models using word embeddings. The former uses bi-directional GRU and the latter uses multi-layer bi-directional GRU. HBi-GRU BIBREF12 and HMBi-GRU BIBREF12 are both linear-chain CRF models using word embeddings and character embedding. The former uses bi-directional GRU and the latter uses multi-layer bi-directional GRU. SS BIBREF11 and SS + emb BIBREF11 are both based on a latent CRF model to learn flexible explicit structures. The former uses discrete features and the latter uses both discrete features and word embeddings. SA-CRF is a linear-chain CRF model with self-attention. Such a model concatenates the hidden state from LSTM and a vector constructed by self-attention at each position, and feeds them into CRF as features. The model attempts to capture rich implicit structures in the input space, but it does not put effort on explicit structures in the output space. E-I is a weaker version of EI. Such a model removes the BMES sub-tags in the E tag, causing the model to learn less explicit structural information in the output space. EI- is a weaker version of EI. Such a model removes the self-attention from EI, causing the model to learn less expressive implicit structures in the input space. <<</Baselines>>> <<</Experimental Setup>>> <<<Results and Discussion>>> <<<Main Results>>> The main results are presented in Table TABREF16, where explicit structures as well as implicit structures are indicated for each model for clear comparisons. In general, our model EI outperforms all the baselines. Specifically, it outperforms the strongest baseline EI- significantly with $p < 0.01$ on the English and Spanish datasets in terms of $F_1$ scores. Note that EI- which models flexible explicit structures and less implicit structural information, achieves better performance than most of the baselines, indicating flexible explicit structures contribute a lot to the performance boost. Now let us take a closer look at the differences based on detailed comparisons. First of all, we compare our model EI with the work proposed by BIBREF10. The Pipeline model (based on CRF) as well as Joint and Collapse models (based on SSVM) in their work capture fixed explicit structures. Such two models rely on multi-layer perceptron (MLP) to obtain the local context features for implicit structures. These two models do not put much effort to capture better explicit structures and implicit structures. Our model EI (and even EI-) outperforms these two models significantly. We also compare our work with models in BIBREF12, which also capture fixed explicit structures. Such models leverage different GRUs (single-layer or multi-layer) and different input features (word embeddings and character representations) to learn better contextual features. Their best result by HMBi-GRU is obtained with multi-layer GRU with word embeddings and character embeddings. As we can see, our model EI outperforms HMBi-GRU under all evaluation metrics. On the English data, EI obtains $6.50$ higher $F_1$ score and $2.50$ higher $F_1$ score on target recognition and targeted sentiment respectively. 
On Spanish, EI obtains $5.16$ higher $F_1$ score and $0.50$ higher $F_1$ score on target recognition and targeted sentiment respectively. Notably, compared with HMBi-GRU, even EI- capturing the flexible explicit structures achieves better performance on most of metrics and obtains the comparable results in terms of precision and $F_1$ score on Spanish. Since both EI and EI- models attempt to capture the flexible explicit structures, the comparisons above imply the importance of modeling such flexible explicit structures in the output space. We also compare EI with E-I. The difference between these two models is that E-I removes the BMES sub-tags. Such a model captures less explicit structural information in the output space. We can see that EI outperforms E-I. Such results show that adopting BMES sub-tags in the output space to capture explicit structural information is beneficial. Now we compare EI with SA-CRF which is a linear-chain CRF model with self-attention. Such a model attempts to capture rich implicit structures, and fixed explicit structures. The difference between EI and SA-CRF is that our model EI captures flexible explicit structures in the output space which model output representations as latent variables. We can see that EI outperforms SA-CRF on all the metrics. Such a comparison also implies the importance of capturing flexible explicit structures in the output space. Next, we focus on the comparisons with SS BIBREF11 and SS + emb BIBREF11. Such two models as well as our models all capture the flexible explicit structures. As for the difference, both two SS models rely on hand-crafted discrete features to capture implicit structures, while our model EI and EI- learn better implicit structures by LSTM and self-attention. Furthermore, our models only require word embeddings and character embeddings as the input to our neural architecture to model rich implicit structures, leading to a comparatively simpler and more straightforward design. The comparison here suggests that LSTM and self-attention neural networks are able to capture better implicit structures than hand-crafted features. Finally, we compare EI with EI-. We can see that the $F_1$ scores of targeted sentiment for both English and Spanish produced by EI are $0.95$ and $0.97$ points higher than EI-. The main difference here is that EI makes use of self-attention to capture richer implicit structures between each target phrase and all words in the complete sentence. The comparisons here indicate the importance of capturing rich implicit structures using self-attention on this task. <<<Robustness>>> Overall, all these comparisons above based on empirical results show the importance of capturing both flexible explicit structures in the output space and rich implicit structures by LSTM and self-attention in the input space. We analyze the model robustness by assessing the performance on the targeted sentiment for targets of different lengths. For both English and Spanish, we group targets into 4 categories respectively, namely length of 1, 2, 3 and $\ge 4$. Figure FIGREF32 reports the $F_1$ scores of targeted sentiment for such 4 groups on Spanish. See the English results in the supplementary material. As we can see EI outperforms all the baselines on all groups. Furthermore, following the comparisons in BIBREF10, we also measure the precision, recall and $F_1$ of subjectivity and non-neutral polarities on the Spanish dataset. Results are reported in Table TABREF29. 
Subjectivity measures whether a target phrase expresses an opinion or not according to BIBREF1. Compared with the best-performing system's results reported in BIBREF10 and BIBREF11, our model EI can achieve higher $F_1$ scores on subjectivity and non-neutral polarities. <<</Robustness>>> <<<Error Analysis>>> We conducted error analysis for our main model EI. We calculate $F_1$ scores based on the partial match instead of exact match. The $F_1$ scores for target partial match are $76.04$ and $83.82$ for English and Spanish respectively. We compare these two numbers against $63.48$ and $71.17$ which are the $F_1$ scores based on exact match. This comparison indicates that boundaries of many predicted targets do not match exactly with those of the correct targets. Furthermore, we investigate the errors caused by incorrect sentiment polarities. We found that the major type of error is incorrectly predicting positive targets as neutral targets. Such errors contribute $64\%$ and $36\%$ of the total errors for English and Spanish respectively. We believe they are mainly caused by challenging expressions in the tweet input text. Challenging expressions such as “below expectations” are very sparse in the data, which makes effective learning for such phrases difficult. <<</Error Analysis>>> <<</Main Results>>> <<<Effect of Implicit Structures>>> In order to understand whether the implicit structures are truly making contributions in terms of the overall performance, we compare the performance among four models: EI and EI- as well as two variants EI (i:MLP) and EI (i:Identity) (where i indicates the implicit structure). Such two variants replace the implicit structure by other components: EI (i:MLP) replaces self-attention by a multi-layer perceptron (MLP) for implicit structures. Such a variant attempts to capture implicit structures for a target phrase towards words restricted by a window of size 3 centered at the two ends of the target phrase. EI (i:Identity) replaces self-attention by an identity layer as implicit structure. Such a variant attempts to capture implicit structures for a target phrase towards words at the two ends of the target phrase exactly. Overall, those variants perform worse than EI on all the metrics. When the self-attention is replaced by MLP or the identity layer for implicit structures, the performance drops a lot on both target and targeted sentiment. Such two variants EI (i:MLP) and EI (i:Identity) consider the words within a small window centered at the two ends of the target phrase, which might not be capable of capturing the desired implicit structures. The EI- model capturing less implicit structural information achieves worse results than EI, but obtains better results than the two variants discussed above. This comparison implies that properly capturing implicit structures as the complement of explicit structural information is essential. <<</Effect of Implicit Structures>>> <<<Qualitative Analysis>>> We present an example sentence in the test data in Figure FIGREF38, where the gold targets are in bold, the predicted targets are in the pink boxes, the gold sentiment is in blue and predicted sentiment is in red. EI makes all correct predictions for three targets. EI- predicts correct boundaries for three targets and the targeted sentiment predictions are highlighted in Figure FIGREF38. As we can see, EI- incorrectly predicts the targeted sentiment on the first target as neutral (0).
The first target here is far from the sentiment expression “sound good” which is not in the first sentiment span, making EI- not capable of capturing such a sentiment expression. This qualitative analysis helps us to better understand the importance to capture implicit structures using both LSTM and self-attention. <<</Qualitative Analysis>>> <<<Additional Experiments>>> We also conducted experiments on multi-lingual Restaurant datasets from SemEval 2016 Task 5 BIBREF28, where aspect target phrases and aspect sentiments are provided. We regard each aspect target phrase as a target and assign such a target with the corresponding aspect sentiment polarity in the data. Note that we remove all the instances which contain no targets in the training data. Following the main experiment, we split $10\%$ of training data as development set for the selection of the best model during training. We report the $F_1$ scores of target and targeted sentiment for English, Dutch and Russian respectively in Table TABREF43. The results show that EI achieves the best performance. The performance of SS BIBREF11 is much worse on Russian due to the inability of discrete features in SS to capture the complex morphology in Russian. <<</Additional Experiments>>> <<</Results and Discussion>>> <<<Related Work>>> We briefly survey the research efforts on two types of TSA tasks mentioned in the introduction. Note that TSA is related to aspect sentiment analysis which is to determine the sentiment polarity given a target and an aspect describing a property of related topics. <<<Predicting sentiment for a given target>>> Such a task is typically solved by leveraging sentence structural information, such as syntactic trees BIBREF5, dependency trees BIBREF6 as well as surrounding context based on LSTM BIBREF29, GRU BIBREF7 or CNN BIBREF8. Another line of works leverage self-attention BIBREF30 or memory networks BIBREF31 to encode rich global context information. BIBREF16 adopted the segmental attention BIBREF32 to model the important text segments to compute the targeted sentiment. BIBREF33 studied the issue that the different combinations of target and aspect may result in different sentiment polarity. They proposed a model to distinguish such different combinations based on memory networks to produce the representation for aspect sentiment classification. <<</Predicting sentiment for a given target>>> <<<Jointly predicting targets and their associated sentiment>>> Such a joint task is usually regarded as sequence labeling problem. BIBREF9 introduced the task of open domain targeted sentiment analysis. They proposed several models based on CRF such as the pipeline model, the collapsed model as well as the joint model to predict both targets and targeted sentiment information. Their experiments showed that the collapsed model and the joint model could achieve better results, implying the benefit of the joint learning on this task. BIBREF10 proposed an approach based on structured SVM BIBREF14, BIBREF15 integrating both discrete features and neural features for this joint task. BIBREF11 proposed the sentiment scope model motivated from a linguistic phenomenon to represent the structure information for both the targets and their associated sentiment polarities. They modelled the latent sentiment scope based on CRF with latent variables, and achieved the best performance among all the existing works. 
However, they did not explore much on the implicit structural information and their work mostly relied on hand-crafted discrete features. BIBREF12 adopted a multi-layer GRU to learn targets and sentiments jointly by producing the target tag and the sentiment tag at each position. They introduced a constraint forcing the sentiment tag at each position to be consistent with the target tag. However, they did not explore the explicit structural information in the output space as we do in this work. <<</Jointly predicting targets and their associated sentiment>>> <<</Related Work>>> <<<Conclusion and Future Work>>> In this work, we argue that properly modeling both explicit structures in the output space and the implicit structures in the input space are crucial for building a successful targeted sentiment analysis system. Specifically, we propose a new model that captures explicit structures with latent CRF, and uses LSTM and self-attention to capture rich implicit structures in the input space efficiently. Through extensive experiments, we show that our model is able to outperform competitive baseline models significantly, thanks to its ability to properly capture both explicit and implicit structural information. Future work includes exploring approaches to capture explicit and implicit structural information to other sentiment analysis tasks and other structured prediction problems. <<</Conclusion and Future Work>>> <<</Title>>>
{ "references": [ "Bi-directional LSTM,self-attention " ], "type": "extractive" }
1909.07593
Please extract a concise answer without any additional explanation for the following question based on the given text. Question: How is the robustness of the model evaluated? Context: <<<Title>>> Learning Explicit and Implicit Structures for Targeted Sentiment Analysis <<<Abstract>>> Targeted sentiment analysis is the task of jointly predicting target entities and their associated sentiment information. Existing research efforts mostly regard this joint task as a sequence labeling problem, building models that can capture explicit structures in the output space. However, the importance of capturing implicit global structural information that resides in the input space is largely unexplored. In this work, we argue that both types of information (implicit and explicit structural information) are crucial for building a successful targeted sentiment analysis model. Our experimental results show that properly capturing both information is able to lead to better performance than competitive existing approaches. We also conduct extensive experiments to investigate our model's effectiveness and robustness. <<</Abstract>>> <<<Introduction>>> Accepted as a long paper in EMNLP 2019 (Conference on Empirical Methods in Natural Language Processing). Targeted sentiment analysis (TSA) is an important task useful for public opinion mining BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. The task focuses on predicting the sentiment information towards a specific target phrase, which is usually a named entity, in a given input sentence. Currently, TSA in the literature may refer to either of the two possible tasks under two different setups: 1) predicting the sentiment polarity for a given specific target phrase BIBREF5, BIBREF6, BIBREF7, BIBREF8; 2) jointly predicting the targets together with the sentiment polarity assigned to each target BIBREF9, BIBREF10, BIBREF11, BIBREF12. In this paper, we focus on the latter setup which was originally proposed by BIBREF9. Figure FIGREF2 presents an example sentence containing three targets. Each target is associated with a sentiment, where we use $+$ for denoting positive polarity, 0 for neutral and $-$ for negative. Existing research efforts mostly regard this task as a sequence labeling problem by assigning a tag to each word token, where the tags are typically designed in a way that capture both the target boundary as well as the targeted sentiment polarity information together. Existing approaches BIBREF9, BIBREF10, BIBREF12 build models based on conditional random fields (CRF) BIBREF13 or structural support vector machines (SSVM) BIBREF14, BIBREF15 to explicitly model the sentiment information with structured outputs, where each targeted sentiment prediction corresponds to exactly one fixed output. While effective, such models suffer from their inability in capturing certain long-distance dependencies between sentiment keywords and their targets. To remedy this issue, BIBREF11 proposed their “sentiment scope’’ model to learn flexible output representations. For example, three text spans with their corresponding targets in bold are presented in Figure FIGREF2, where each target’s sentiment is characterized by the words appearing in the corresponding text span. They learn from data for each target a latent text span used for attributing its sentiment, resulting in flexible output structures. However, we note there are two major limitations with the approach of BIBREF11. First, their model requires a large number of hand-crafted discrete features. 
Second, the model relies on a strong assumption that the latent sentiment spans do not overlap with one another. For example, in Figure FIGREF2, their model will not be able to capture the interaction between the target word “OZ” in the first sentiment span and the keyword “amazing” due to the assumptions made on the explicit structures in the output space. One idea to resolve this issue is to design an alternative mechanism to capture such useful structural information that resides in the input space. On the other hand, recent literature shows that feature learning mechanisms such as self-attention have been successful for the task of sentiment prediction when targets are given BIBREF16, BIBREF17, BIBREF18 (i.e., under the first setup mentioned above). Such approaches essentially attempt to learn rich implicit structural information in the input space that captures the interactions between a given target and all other word tokens within the sentence. Such implicit structures are then used to generate sentiment summary representation towards the given target, leading to the performance boost. However, to date capturing rich implicit structures in the joint prediction task that we focus on (i.e., the second setup) remains largely unexplored. Unlike the first setup, in our setup the targets are not given, we need to handle exponentially many possible combinations of targets in the joint task. This makes the design of an algorithm for capturing both implicit structural information from the input space and the explicit structural information from the output space challenging. Motivated by the limitations and challenges, we present a novel approach that is able to efficiently and effectively capture the explicit and implicit structural information for TSA. We make the following key contributions in this work: We propose a model that is able to properly integrate both explicit and implicit structural information, called EI. The model is able to learn flexible explicit structural information in the output space while being able to efficiently learn rich implicit structures by LSTM and self-attention for exponentially many possible combinations of targets in a given sentence. We conducted extensive experiments to validate our claim that both explicit and implicit structures are indispensable in such a task, and demonstrate the effectiveness and robustness of our model. <<</Introduction>>> <<<Approach>>> Our objective is to design a model to extract targets as well as their associated targeted sentiments for a given sentence in a joint manner. As we mentioned before, we believe that both explicit and implicit structures are crucial for building a successful model for TSA. Specifically, we first present an approach to learn flexible explicit structures based on latent CRF, and next present an approach to efficiently learn the rich implicit structures for exponentially many possible combinations of targets. <<<Explicit Structure>>> Motivated by BIBREF11, we design an approach based on latent CRF to model flexible sentiment spans to capture better explicit structures in the output space. To do so, we firstly integrate target and targeted sentiment information into a label sequence by using 3 types of tags in our EI model: $\mathbf {B}_p$, $\mathbf {A}_p$, and $\mathbf {E}_{\epsilon ,p}$, where $p \in \lbrace +, -, 0\rbrace $ indicates the sentiment polarity and $\epsilon \in \lbrace \textit {B,M,E,S}\rbrace $ denotes the BMES tagging scheme. We explain the meaning of each type of tags as follows. 
$\mathbf {B}_p$ is used to denote that the current word is part of a sentiment span with polarity $p$, but appears before the target word or exactly as the first word of the target. $\mathbf {A}_p$ is used to denote that the current word is part of a sentiment span with polarity $p$, but appears after the target word or exactly as the last word of the target. $\mathbf {E}_{\epsilon ,p}$ is used to denote the current word is part of a sentiment span with polarity $p$, and is also a part of the target. The BMES sub-tag $\epsilon $ denotes the position information within the target phrase. For example, $\mathbf {E}_{B,+}$ represents that the current word appears as the first word of a target with the positive polarity. We illustrate how to construct the label sequence for a specific combination of sentiment spans of the given example sentence in Figure FIGREF5, where three non-overlapping sentiment spans in yellow are presented. Each such sentiment span encodes the sentiment polarity in blue for a target in bold in pink square. At each position, we allow multiple tags in a sequence to appear such that the edge $\mathbf {A}_p\mathbf {B}_{p^{\prime }}$ in red consistently indicates the boundary between two adjacent sentiment spans. The first sentiment span with positive ($+$) polarity contains only one word which is also the target. Such a single word target is also the beginning and the end of the target. We use three tags $\mathbf {B}_+$, $\mathbf {E}_{S,+}$ and $\mathbf {A}_+$ to encode such information above. The second sentiment span with positive ($+$) polarity contains a two-word target “Shin Lim”. The word “and” appearing before such target takes a tag $\mathbf {B}_+$. The words “perform amazing magic” appearing after such target take a tag $\mathbf {A}_+$ at each position. As for the target, the word “Shin” at the beginning of the target takes tags $\mathbf {B}_+$ and $\mathbf {E}_{B,+}$, while the word “Lim” at the end of the target takes tags $\mathbf {E}_{E,+}$ and $\mathbf {A}_+$. The third sentiment span with neutral (0) polarity contains a single-word target “AGT”. Similarly, we use three tags $\mathbf {B}_0$, $\mathbf {E}_{S,0}$ and $\mathbf {A}_0$ to represent such single word target. The word “on” appearing before such target takes a tag $\mathbf {B}_0$. The word “2018” appearing afterwards takes a tag $\mathbf {A}_0$. Note that if there exists a target with length larger than 2, the tag $\mathbf {E}_{M,p}$ will be used. For example in Figure FIGREF5, if the target phrase “Shin Lim” is replaced by “Shin Bob Lim”, we will keep the tags at “Shin” and “Lim” unchanged. We assign a tag $\mathbf {E}_{M,+}$ at the word “Bob” to indicate that “Bob” appears in the middle of the target by following the BMES tagging scheme. Finally, we represent the label sequence by connecting adjacent tags sequentially with edges. Notice that for a given input sentence and the output targets as well as the associated targeted sentiment, there exist exponentially many possible label sequences, each specifying a different possible combinations of sentiment spans. Figure FIGREF11 shows a label sequence for an alternative combination of the sentiment spans. Those label sequences representing the same input and output construct a latent variable in our model, capturing the flexible explicit structures in the output space. We use a log-linear formulation to parameterize our model. 
Specifically, the probability of predicting a possible output $\mathbf {y}$, which is a list of targets and their associated sentiment information, given an input sentence $\mathbf {x}$, is defined as: where $s(\mathbf {x},\mathbf {y},\mathbf {h})$ is a score function defined over the sentence $\mathbf {x}$ and the output structure $\mathbf {y}$, together with the latent variable $\mathbf {h}$ that provides all the possible combinations of sentiment spans for the $(\mathbf {x,y})$ tuple. We define $E(\mathbf {x},\mathbf {y},\mathbf {h})$ as a set of all the edges appearing in all the label sequences for such combinations of sentiment spans. To compute $s(\mathbf {x},\mathbf {y},\mathbf {h})$, we sum up the scores of each edge in $E(\mathbf {x},\mathbf {y},\mathbf {h})$: where $\phi _{\mathbf {x}}(e)$ is a score function defined over an edge $e$ for the input $\mathbf {x}$. The overall model is analogous to that of a neural CRF BIBREF19, BIBREF20; hence the inference and decoding follow standard marginal and MAP inference procedures. For example, the prediction of $\mathbf {y}$ follows the Viterbi-like MAP inference procedure. <<</Explicit Structure>>> <<<Implicit Structure>>> We propose a design for EI to efficiently learn rich implicit structures for exponentially many combinations of targets to predict. To do so, we explain the process to assign scores to each edge $e$ from our neural architecture. The three yellow boxes in Figure FIGREF14 compute scores for rich implicit structures from the neural architecture consisting of LSTM and self-attention. Given an input token sequence $\mathbf {x}=\lbrace x_1,x_2,\cdots ,x_{n}\rbrace $ of length $n$, we first compute the concatenated embedding $\mathbf {e}_k=[\mathbf {w}_k;\mathbf {c}_k]$ based on word embedding $\mathbf {w}_k$ and character embedding $\mathbf {c}_k$ at position $k$. As illustrated on the left part in Figure FIGREF14, we then use a Bi-directional LSTM to encode context features and obtain hidden states $\mathbf {h}_k=\mathrm {BiLSTM}(\mathbf {e_1},\mathbf {e_2}, \cdots , \mathbf {e_n})$. We use two different linear layers $f_t$ and $f_s$ to compute scores for target and sentiment respectively. The linear layer $f_t$ returns a vector of length 4, with each value in the vector indicating the score of the corresponding tag under the BMES tagging scheme. The linear layer $f_s$ returns a vector of length 3, with each value representing the score of a certain polarity of $+,0,-$. We assign such scores to each type of edge as follows: Note that the subscript $p$ and $\epsilon $ at the right hand side of above equations denote the corresponding index of the vector that $f_t$ or $f_s$ returns. We apply $f_{t}$ on edges $\mathbf {E}^{k}_{\epsilon ,p}\mathbf {E}^{k+1}_{\epsilon ^{\prime },p}$ and $\mathbf {E}^{k}_{\epsilon ,p}\mathbf {A}^{k}_{p}$, since words at these edges are parts of the target phrase in a sentiment span. Similarly, we apply $f_{s}$ on edges $\mathbf {B}^{k}_{p}\mathbf {B}^{k+1}_{p}$,$\mathbf {A}^{k}_{p}\mathbf {A}^{k+1}_{p}$ and $\mathbf {A}^{k}_{p}\mathbf {B}^{k+1}_{p^{\prime }}$, since words at these edges contribute the sentiment information for the target in the sentiment span. As illustrated in Figure FIGREF14, we calculate $\mathbf {a}_k$, the output of self-attention at position $k$: where $\alpha _{k,j}$ is the normalized weight score for $\mathbf {\beta }_{k,j}$, and $\mathbf {\beta }_{k,j}$ is the weight score calculated by target representation at position $k$ and contextual representation at position $j$. 
Such a vector $\mathbf{a}_k$ encodes the implicit structures between the word $x_k$ and each word in the remaining sentence. Motivated by the character embeddings BIBREF21, which are generated based on the hidden states at the two ends of a subsequence, we encode such implicit structures for a target in a similar way. For any target starting at position $k_1$ and ending at position $k_2$, we use $\mathbf{a}_{k_1}$ and $\mathbf{a}_{k_2}$ at the two ends to represent the implicit structures of such a target. We encode this information on the edges $\mathbf{B}^{k_1}_{p}\mathbf{E}^{k_1}_{\epsilon,p}$ and $\mathbf{E}^{k_2}_{\epsilon,p}\mathbf{A}^{k_2}_{p}$, which appear at the beginning and the end of a target phrase respectively with sentiment polarity $p$. To do so, we assign the scores calculated by $g_{s}$ from the self-attention outputs $\mathbf{a}_{k_1}$ and $\mathbf{a}_{k_2}$ to these two edges, where $g_{s}$ returns a vector of length 3 with the scores of the three polarities. Note that $\mathbf{h}_k$ and $\mathbf{a}_k$ can be pre-computed at every position $k$ and assigned to the corresponding edges. Such an approach allows us to maintain the inference time complexity of $O(Tn)$, where $T$ is the maximum number of tags at each position (9 in this work) and $n$ is the number of words in the input sentence. This approach enables EI to efficiently learn rich implicit structures from the LSTM and self-attention for exponentially many combinations of targets. <<</Implicit Structure>>> <<</Approach>>> <<<Experimental Setup>>> <<<Data>>> We mainly conduct our experiments on the datasets released by BIBREF9. They contain 2,350 English tweets and 7,105 Spanish tweets, with targets and targeted sentiment annotated. See Table TABREF15 for corpus statistics. <<</Data>>> <<<Evaluation Metrics>>> Following previous work, we report the precision ($P.$), recall ($R.$) and $F_1$ scores for target recognition and targeted sentiment. Note that a correct target prediction requires the boundary of the target to be correct, and a correct targeted sentiment prediction requires both the target boundary and the sentiment polarity to be correct. <<</Evaluation Metrics>>> <<<Hyperparameters>>> We adopt pretrained embeddings from BIBREF22 and BIBREF23 for the English and Spanish data respectively. We use a 2-layer bi-directional LSTM with hidden dimensions of 500 and 600 for the English and Spanish data respectively. The dimension of the attention weight $U$ is 300. For optimization, we use the Adam BIBREF24 optimizer with a batch size of 1 and a dropout rate of $0.5$. All neural weights are initialized with Xavier initialization BIBREF25. <<</Hyperparameters>>> <<<Training and Implementation>>> We train our model for a maximum of 6 epochs and select the best model parameters based on the best $F_1$ score on the development data after each epoch. Note that we split $10\%$ of the training data as the development data. The selected model is then applied to the test data for evaluation. During testing, we map words not appearing in the training data to the UNK token. Following previous work, we perform 10-fold cross validation and report the average results. Our models and variants are implemented using PyTorch BIBREF26. <<</Training and Implementation>>>
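For concreteness, here is a hedged sketch of the exact-match scoring described under Evaluation Metrics above: a predicted target counts as correct if its boundary matches a gold target, and a targeted sentiment prediction counts as correct only if both boundary and polarity match. The tuple representation and function names are illustrative assumptions, not the actual evaluation script used in the experiments.

```python
from typing import List, Set, Tuple

Labeled = Tuple[int, int, str]  # (start, end, polarity), polarity in {"+", "0", "-"}

def prf(n_correct: int, n_pred: int, n_gold: int) -> Tuple[float, float, float]:
    p = n_correct / n_pred if n_pred else 0.0
    r = n_correct / n_gold if n_gold else 0.0
    return p, r, (2 * p * r / (p + r) if p + r else 0.0)

def evaluate(gold: List[Set[Labeled]], pred: List[Set[Labeled]]):
    """Returns (P, R, F1) for target recognition and for targeted sentiment (exact match)."""
    t_correct = s_correct = n_pred = n_gold = 0
    for g, p in zip(gold, pred):
        n_gold += len(g)
        n_pred += len(p)
        gold_spans = {(s, e) for s, e, _ in g}
        t_correct += sum(1 for s, e, _ in p if (s, e) in gold_spans)  # boundary must match
        s_correct += len(g & p)                                       # boundary and polarity must match
    return prf(t_correct, n_pred, n_gold), prf(s_correct, n_pred, n_gold)
```

Under the 10-fold cross validation protocol above, such scores would simply be averaged over the folds.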
<<<Baselines>>> We consider the following baselines: Pipeline BIBREF10 and Collapse BIBREF10 are both linear-chain CRF models using discrete features and embeddings. The former predicts targets first and then calculates the targeted sentiment for each predicted target. The latter outputs a tag at each position by collapsing the target tag and the sentiment tag together. Joint BIBREF10 is a linear-chain SSVM model using both discrete features and embeddings. Such a model jointly produces target tags and sentiment tags. Bi-GRU BIBREF12 and MBi-GRU BIBREF12 are both linear-chain CRF models using word embeddings. The former uses a bi-directional GRU and the latter uses a multi-layer bi-directional GRU. HBi-GRU BIBREF12 and HMBi-GRU BIBREF12 are both linear-chain CRF models using word embeddings and character embeddings. The former uses a bi-directional GRU and the latter uses a multi-layer bi-directional GRU. SS BIBREF11 and SS + emb BIBREF11 are both based on a latent CRF model to learn flexible explicit structures. The former uses discrete features and the latter uses both discrete features and word embeddings. SA-CRF is a linear-chain CRF model with self-attention. Such a model concatenates the hidden state from the LSTM and a vector constructed by self-attention at each position, and feeds them into the CRF as features. The model attempts to capture rich implicit structures in the input space, but puts no effort into modeling flexible explicit structures in the output space. E-I is a weaker version of EI. Such a model removes the BMES sub-tags in the E tag, causing the model to learn less explicit structural information in the output space. EI- is a weaker version of EI. Such a model removes the self-attention from EI, causing the model to learn less expressive implicit structures in the input space. <<</Baselines>>> <<</Experimental Setup>>> <<<Results and Discussion>>> <<<Main Results>>> The main results are presented in Table TABREF16, where the explicit and implicit structures used by each model are indicated for clear comparison. In general, our model EI outperforms all the baselines. Specifically, it outperforms the strongest baseline EI- significantly, with $p < 0.01$ on the English and Spanish datasets in terms of $F_1$ scores. Note that EI-, which models flexible explicit structures but less implicit structural information, achieves better performance than most of the baselines, indicating that flexible explicit structures contribute substantially to the performance boost. Now let us take a closer look at the differences based on detailed comparisons. First of all, we compare our model EI with the work proposed by BIBREF10. The Pipeline model (based on CRF) as well as the Joint and Collapse models (based on SSVM) in their work capture fixed explicit structures. These models rely on a multi-layer perceptron (MLP) to obtain local context features for implicit structures, and do not put much effort into capturing better explicit or implicit structures. Our model EI (and even EI-) outperforms these models significantly. We also compare our work with the models in BIBREF12, which also capture fixed explicit structures. Such models leverage different GRUs (single-layer or multi-layer) and different input features (word embeddings and character representations) to learn better contextual features. Their best result, obtained by HMBi-GRU, uses a multi-layer GRU with word embeddings and character embeddings. As we can see, our model EI outperforms HMBi-GRU under all evaluation metrics. On the English data, EI obtains $F_1$ scores that are $6.50$ and $2.50$ points higher on target recognition and targeted sentiment respectively.
On the Spanish data, EI obtains $F_1$ scores that are $5.16$ and $0.50$ points higher on target recognition and targeted sentiment respectively. Notably, compared with HMBi-GRU, even EI-, which captures flexible explicit structures, achieves better performance on most metrics and obtains comparable results in terms of precision and $F_1$ score on Spanish. Since both EI and EI- capture flexible explicit structures, the comparisons above imply the importance of modeling such flexible explicit structures in the output space. We also compare EI with E-I. The difference between these two models is that E-I removes the BMES sub-tags, so it captures less explicit structural information in the output space. We can see that EI outperforms E-I. Such results show that adopting BMES sub-tags in the output space to capture explicit structural information is beneficial. Now we compare EI with SA-CRF, a linear-chain CRF model with self-attention, which attempts to capture rich implicit structures but only fixed explicit structures. The difference between EI and SA-CRF is that EI captures flexible explicit structures in the output space by modeling output representations as latent variables. We can see that EI outperforms SA-CRF on all the metrics. This comparison also implies the importance of capturing flexible explicit structures in the output space. Next, we focus on the comparisons with SS BIBREF11 and SS + emb BIBREF11. These two models, like ours, capture flexible explicit structures. The difference is that both SS models rely on hand-crafted discrete features to capture implicit structures, while our models EI and EI- learn better implicit structures with LSTM and self-attention. Furthermore, our models only require word embeddings and character embeddings as the input to the neural architecture to model rich implicit structures, leading to a comparatively simpler and more straightforward design. This comparison suggests that LSTM and self-attention neural networks are able to capture better implicit structures than hand-crafted features. Finally, we compare EI with EI-. The targeted sentiment $F_1$ scores of EI on English and Spanish are $0.95$ and $0.97$ points higher than those of EI-. The main difference here is that EI makes use of self-attention to capture richer implicit structures between each target phrase and all words in the complete sentence. This comparison indicates the importance of capturing rich implicit structures using self-attention on this task. <<<Robustness>>> Overall, the empirical comparisons above show the importance of capturing both flexible explicit structures in the output space and rich implicit structures, via LSTM and self-attention, in the input space. We analyze model robustness by assessing the performance on targeted sentiment for targets of different lengths. For both English and Spanish, we group targets into 4 categories by length, namely 1, 2, 3 and $\ge 4$. Figure FIGREF32 reports the targeted sentiment $F_1$ scores for these 4 groups on Spanish; the English results are in the supplementary material. As we can see, EI outperforms all the baselines on all groups.
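To illustrate this per-length robustness analysis, the sketch below buckets targets by length (1, 2, 3, $\ge 4$) and computes the exact-match targeted sentiment $F_1$ within each bucket. The bucketing convention (each gold and predicted target is assigned to the bucket of its own span length) is an assumption for illustration, since the section does not spell out this detail.

```python
from collections import defaultdict
from typing import Dict, List, Set, Tuple

Labeled = Tuple[int, int, str]  # (start, end, polarity)

def bucket(start: int, end: int) -> str:
    length = end - start + 1
    return ">=4" if length >= 4 else str(length)

def targeted_sentiment_f1_by_length(gold: List[Set[Labeled]], pred: List[Set[Labeled]]) -> Dict[str, float]:
    """Exact-match targeted sentiment F1 per target-length bucket (assumed protocol)."""
    correct, n_pred, n_gold = defaultdict(int), defaultdict(int), defaultdict(int)
    for g, p in zip(gold, pred):
        for s, e, _ in g:
            n_gold[bucket(s, e)] += 1
        for s, e, pol in p:
            b = bucket(s, e)
            n_pred[b] += 1
            correct[b] += (s, e, pol) in g  # boundary and polarity must both match
    scores = {}
    for b in ("1", "2", "3", ">=4"):
        prec = correct[b] / n_pred[b] if n_pred[b] else 0.0
        rec = correct[b] / n_gold[b] if n_gold[b] else 0.0
        scores[b] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores
```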
Furthermore, following the comparisons in BIBREF10, we also measure the precision, recall and $F_1$ of subjectivity and non-neutral polarities on the Spanish dataset. Results are reported in Table TABREF29. Subjectivity measures whether a target phrase expresses an opinion or not, following BIBREF1. Compared with the best-performing systems reported in BIBREF10 and BIBREF11, our model EI achieves higher $F_1$ scores on subjectivity and non-neutral polarities. <<</Robustness>>> <<<Error Analysis>>> We conducted an error analysis for our main model EI. We calculate $F_1$ scores based on partial match instead of exact match. The $F_1$ scores for target partial match are $76.04$ and $83.82$ for English and Spanish respectively. We compare these two numbers against $63.48$ and $71.17$, the $F_1$ scores based on exact match. This comparison indicates that the boundaries of many predicted targets do not match those of the correct targets exactly. Furthermore, we investigate the errors caused by incorrect sentiment polarities. We found that the major error type is incorrectly predicting positive targets as neutral. Such errors account for $64\%$ and $36\%$ of the total errors for English and Spanish respectively. We believe they are mainly caused by challenging expressions in the tweet text: expressions such as “below expectations” are very sparse in the data, which makes learning them effectively difficult. <<</Error Analysis>>> <<</Main Results>>> <<<Effect of Implicit Structures>>> To understand whether the implicit structures truly contribute to the overall performance, we compare four models: EI and EI- as well as two variants, EI (i:MLP) and EI (i:Identity) (where i indicates the implicit structure). The two variants replace the implicit structure with other components: EI (i:MLP) replaces self-attention with a multi-layer perceptron (MLP). This variant attempts to capture implicit structures for a target phrase with respect to words within a window of size 3 centered at the two ends of the target phrase. EI (i:Identity) replaces self-attention with an identity layer. This variant captures implicit structures for a target phrase using only the words at the two ends of the target phrase. Overall, these variants perform worse than EI on all the metrics. When self-attention is replaced with the MLP or the identity layer, performance drops substantially on both target and targeted sentiment. Both EI (i:MLP) and EI (i:Identity) only consider words within a small window centered at the two ends of the target phrase, which might not be capable of capturing the desired implicit structures. The EI- model, which captures less implicit structural information, achieves worse results than EI but better results than the two variants discussed above. This comparison implies that properly capturing implicit structures as a complement to explicit structural information is essential. <<</Effect of Implicit Structures>>> <<<Qualitative Analysis>>> We present an example sentence from the test data in Figure FIGREF38, where the gold targets are in bold, the predicted targets are in the pink boxes, the gold sentiment is in blue and the predicted sentiment is in red. EI makes correct predictions for all three targets. EI- predicts correct boundaries for the three targets, and its targeted sentiment predictions are highlighted in Figure FIGREF38. As we can see, EI- incorrectly predicts the targeted sentiment on the first target as neutral (0).
The first target here is far from the sentiment expression “sound good”, which is not in the first sentiment span, making EI- unable to capture this sentiment expression. This qualitative analysis helps us better understand the importance of capturing implicit structures using both LSTM and self-attention. <<</Qualitative Analysis>>> <<<Additional Experiments>>> We also conducted experiments on the multilingual Restaurant datasets from SemEval 2016 Task 5 BIBREF28, where aspect target phrases and aspect sentiments are provided. We regard each aspect target phrase as a target and assign it the corresponding aspect sentiment polarity in the data. Note that we remove all training instances that contain no targets. Following the main experiment, we split $10\%$ of the training data as a development set to select the best model during training. We report the $F_1$ scores of target and targeted sentiment for English, Dutch and Russian in Table TABREF43. The results show that EI achieves the best performance. The performance of SS BIBREF11 is much worse on Russian because its discrete features are unable to capture the complex morphology of Russian. <<</Additional Experiments>>> <<</Results and Discussion>>> <<<Related Work>>> We briefly survey the research efforts on the two types of TSA tasks mentioned in the introduction. Note that TSA is related to aspect sentiment analysis, which determines the sentiment polarity given a target and an aspect describing a property of the related topic. <<<Predicting sentiment for a given target>>> Such a task is typically solved by leveraging sentence structural information, such as syntactic trees BIBREF5 and dependency trees BIBREF6, as well as surrounding context modeled with LSTM BIBREF29, GRU BIBREF7 or CNN BIBREF8. Another line of work leverages self-attention BIBREF30 or memory networks BIBREF31 to encode rich global context information. BIBREF16 adopted segmental attention BIBREF32 to model the important text segments for computing the targeted sentiment. BIBREF33 studied the issue that different combinations of target and aspect may result in different sentiment polarities. They proposed a memory-network-based model to distinguish such combinations and produce representations for aspect sentiment classification. <<</Predicting sentiment for a given target>>> <<<Jointly predicting targets and their associated sentiment>>> Such a joint task is usually regarded as a sequence labeling problem. BIBREF9 introduced the task of open domain targeted sentiment analysis. They proposed several CRF-based models, such as the pipeline model, the collapsed model and the joint model, to predict both targets and targeted sentiment information. Their experiments showed that the collapsed model and the joint model could achieve better results, implying the benefit of joint learning on this task. BIBREF10 proposed an approach based on structured SVM BIBREF14, BIBREF15 that integrates both discrete and neural features for this joint task. BIBREF11 proposed the sentiment scope model, motivated by a linguistic phenomenon, to represent the structural information for both targets and their associated sentiment polarities. They modeled the latent sentiment scope with a CRF with latent variables, and achieved the best performance among all existing works.
However, they did not explore the implicit structural information much, and their work mostly relied on hand-crafted discrete features. BIBREF12 adopted a multi-layer GRU to learn targets and sentiments jointly by producing the target tag and the sentiment tag at each position. They introduced a constraint forcing the sentiment tag at each position to be consistent with the target tag. However, they did not explore the explicit structural information in the output space as we do in this work. <<</Jointly predicting targets and their associated sentiment>>> <<</Related Work>>> <<<Conclusion and Future Work>>> In this work, we argue that properly modeling both explicit structures in the output space and implicit structures in the input space is crucial for building a successful targeted sentiment analysis system. Specifically, we propose a new model that captures explicit structures with a latent CRF, and uses LSTM and self-attention to efficiently capture rich implicit structures in the input space. Through extensive experiments, we show that our model significantly outperforms competitive baseline models, thanks to its ability to properly capture both explicit and implicit structural information. Future work includes exploring approaches that capture explicit and implicit structural information for other sentiment analysis tasks and other structured prediction problems. <<</Conclusion and Future Work>>> <<</Title>>>
{ "references": [ "10-fold cross validation" ], "type": "extractive" }