2023 COGEN: Abductive Commonsense Language Generation

From GM-RKB

Subject Headings: Abductive Reasoning, Counterfactual Reasoning.

Notes

Cited By

Quotes

Abstract

Reasoning is one of the most important elements in achieving Artificial General Intelligence (AGI), specifically when it comes to abductive and counterfactual reasoning. To introduce these reasoning capabilities into Natural Language Processing (NLP) models, there have been recent advances toward training NLP models to perform better on two main tasks: Abductive Natural Language Inference (αNLI) and Abductive Natural Language Generation (αNLG). This paper proposes COGEN, a model for both the αNLI and αNLG tasks that employs a novel approach of combining temporal commonsense reasoning for each observation (before and after a real hypothesis) from pre-trained models with contextual filtering for training. Additionally, we use state-of-the-art semantic entailment to filter out contradictory hypotheses during inference. Our experimental results show that COGEN outperforms current models and sets a new state of the art on the αNLI and αNLG tasks. We make the source code of the COGEN model publicly available for reproducibility and to facilitate relevant future research.

1 Introduction

Different kinds of reasoning can be categorized into three classes (Walton, 2014): deduction, induction, and abduction. In deduction, the truth of the conclusion is already contained in the premises; it is therefore impossible for the premises to be true and the conclusion false. Induction generalizes from the truth of particular premises to a conclusion that is probable but not guaranteed. Finally, abduction is the process of forming the most plausible hypothesis based on incomplete observations. The focus of this paper is on abductive reasoning.

Abductive inference can be viewed as going backward from the conclusion of a valid deductive inference to its premises in order to find plausible causes and effects. In terms of classical logic, this is a fallacy (Andersen, 1973). Abductive reasoning is defeasible (and also non-monotonic), which means its conclusions can be refuted in light of new data. Although abductive reasoning forms one of the core abilities of human cognition, it remains largely unexplored in NLP research.

Recent work on large language models such as GPT-3 (Brown et al., 2020) and GPT-Neo (Gao et al., 2020) has shown impressive results on many NLP tasks but still struggles with Abductive Natural Language Inference (αNLI). These models embed a great deal of world knowledge (Petroni et al., 2019; Wang et al., 2020), but their potential for commonsense reasoning (e.g., abductive reasoning) has not been fully harnessed. The task of abductive commonsense language generation can be defined as generating reasons given incomplete observations.

Abductive commonsense language generation can be formulated as a controlled language generation task. Like other controllable generation problems that maintain the fluency and relevance of the generated text while conditioning it on some property, such as sentiment (Lample et al., 2018), topic (Zandie and Mahoor, 2021), and style (Shen et al., 2017), it conditions generation on the incomplete observations.

In this paper, we introduce COGEN[1], a model for generating and inferring abductive reasons that are compatible with the observations. COGEN combines temporal commonsense reasoning for each observation (before and after the hypothesis) from pretrained models with contextual filtering for training. Contextual filtering refers to refining the temporal commonsense during text generation to produce more coherent and contextually relevant output. We also use state-of-the-art semantic entailment to filter out contradictory hypotheses during inference. Our results show that COGEN outperforms all previous models on the αNLI and αNLG tasks.
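To make the contextual-filtering idea concrete, the following toy sketch keeps only those candidate commonsense statements that are relevant to both observations. The lexical-overlap scorer here is a hypothetical stand-in for the paper's trained Cross-Encoder, and all function names and example sentences are illustrative, not taken from the released code.

```python
def relevance(candidate: str, observation: str) -> float:
    """Toy relevance score: fraction of the candidate's words that also
    appear in the observation (a stand-in for a Cross-Encoder score)."""
    cand = set(candidate.lower().split())
    obs = set(observation.lower().split())
    return len(cand & obs) / len(cand) if cand else 0.0

def contextual_filter(candidates, o1, o2, threshold=0.2):
    """Keep only commonsense candidates relevant to BOTH observations."""
    return [c for c in candidates
            if relevance(c, o1) >= threshold and relevance(c, o2) >= threshold]

# Invented example observations and candidate temporal commonsense:
o1 = "Dotty was being very grumpy"
o2 = "Dotty felt much better afterwards"
candidates = [
    "dotty felt grumpy before feeling better",  # relevant to both
    "the weather was sunny in paris",           # unrelated to the context
]
kept = contextual_filter(candidates, o1, o2)
```

With the example inputs above, only the first candidate survives the filter; the unrelated one is dropped because it shares almost no content with either observation.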

...
Figure 1: COGEN first uses the Temporal Reasoner to produce before and after commonsense; then, with a Cross-Encoder, it filters out temporal commonsense that is unrelated to the context. GPT-2 takes both observations and the contextual knowledge as inputs and generates a set of hypotheses H, which the semantic entailment step (a BERT model) then cleans of contradictions. The bold arrows indicate a set of inputs.

Our main contributions are the following:

  1. Using temporal commonsense reasoning to augment the observations - a crucial step in abductive hypothesis generation, as the task requires understanding temporal relationships such as causes, effects, reasons, and intents.
  2. Using contextual filtering to narrow down the space of generated commonsense reasoning to statements relevant to both observations.
  3. Using semantic entailment filtering to rule out the possibility of generating contradictory hypotheses given both observations.
  4. Releasing the source code of the COGEN model for reproducibility and to assist relevant future research.
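The semantic entailment filtering of contribution 3 can be sketched as follows. The paper uses a trained BERT entailment model; here a deliberately trivial negation heuristic stands in for that classifier so the filtering logic itself is runnable. All names and sentences are illustrative assumptions, not the paper's implementation.

```python
NEGATORS = {"not", "never", "no"}
AUXILIARIES = {"was", "is", "did"}

def nli_label(premise: str, hypothesis: str) -> str:
    """Stub for a BERT-style NLI classifier. Flags a contradiction when
    the two sentences share their content words but differ in polarity;
    a real system would use a trained entailment model instead."""
    p, h = premise.lower().split(), hypothesis.lower().split()
    content = lambda words: set(words) - NEGATORS - AUXILIARIES
    p_neg = any(w in NEGATORS for w in p)
    h_neg = any(w in NEGATORS for w in h)
    if content(p) == content(h) and p_neg != h_neg:
        return "contradiction"
    return "neutral"

def entailment_filter(hypotheses, o1, o2):
    """Drop generated hypotheses that contradict either observation."""
    return [h for h in hypotheses
            if nli_label(o1, h) != "contradiction"
            and nli_label(o2, h) != "contradiction"]

# Invented example: one hypothesis directly contradicts observation o1.
o1 = "ray hung a tire on a rope"
o2 = "his daughter loved the swing"
hyps = ["ray did not hung a tire on a rope",
        "ray made a swing for his daughter"]
kept = entailment_filter(hyps, o1, o2)
```

The contradictory hypothesis is removed and only the consistent one remains, mirroring the role the BERT entailment model plays in Figure 1.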

2 Related Work

Previous research on reasoning in NLP mainly focuses on monotonic reasoning, which is usually about finding the "entailment", "contradiction", or "neutral" relationship between a premise and a hypothesis. For example, SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018) are both datasets that focus on monotonic inference. The COPA dataset (Roemmele et al., 2011) offers a choice-of-plausible-alternatives task designed for causal reasoning.

In (Qin et al., 2019), the authors introduced the TimeTravel dataset, which contains over 28k counterfactual instances. Their results show that current language models lack understanding of the reasoning behind the stories; sometimes even adding more samples does not improve the quality of the generation. (Qin et al., 2020) proposed Delorean, a new unsupervised decoding algorithm based on backpropagation that incorporates observations from the past and future to generate constrained text in between. They used the ART dataset (Bhagavatula et al., 2019), which contains 20k samples.

The most relevant work to COGEN is Abductive Commonsense Reasoning (COMeTEmb+GPT2) (Bhagavatula et al., 2019), which introduced the ART dataset consisting of 20k commonsense narrative contexts with 200k explanations. They also introduced two tasks: abductive NLI (αNLI), a multiple-choice task for choosing the best hypothesis, and abductive NLG (αNLG), which generates an abductive hypothesis given the two (before and after) contextual observations. Results showed that abductive NLG is much more challenging than αNLI and needs further research. They also used GPT-2 and COMET (Bosselut et al., 2019) for commonsense reasoning to generate new abductive hypotheses. Human judgment results show that only 44.56 percent of these generated hypotheses make sense to evaluators. In (Paul and Frank, 2021), the authors consider possible events emerging from the candidate hypothesis and then select the one that is most similar to the observed outcome. Their approach outperforms COMeTEmb+GPT2 on the αNLI task and achieves 72.2 on the test set. (Ji et al., 2020) proposed GRF, which is based on GPT-2 and dynamic multi-hop reasoning over multi-relational paths extracted from ConceptNet for αNLG.
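An αNLI instance as described above pairs two observations with two candidate hypotheses, one of which is labeled more plausible. The sketch below shows such an instance as a plain dictionary (field names approximate the ART release; the story is invented) together with a naive word-overlap chooser, which picks the wrong hypothesis here and so illustrates why the task demands deeper temporal reasoning than surface matching.

```python
# Hypothetical αNLI instance; the story and field names are invented
# for illustration and do not come from the ART dataset itself.
instance = {
    "obs1": "Jenny left her window open",
    "obs2": "Her room was soaked when she returned",
    "hyp1": "Jenny had closed her window",
    "hyp2": "It rained heavily while Jenny was away",
    "label": 2,  # hyp2 is the more plausible explanation
}

def overlap_baseline(inst):
    """Naive baseline: pick the hypothesis sharing more words with the
    two observations (real systems fine-tune a language model)."""
    ctx = set((inst["obs1"] + " " + inst["obs2"]).lower().split())
    s1 = len(set(inst["hyp1"].lower().split()) & ctx)
    s2 = len(set(inst["hyp2"].lower().split()) & ctx)
    return 1 if s1 > s2 else 2

pred = overlap_baseline(instance)
```

On this instance the baseline prefers hyp1 (it repeats "her window" from the observations) even though hyp2 is the correct explanation, a failure mode consistent with the low human-acceptance rates reported for shallow generation approaches.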

REFLECTIVE DECODING (West et al., 2020) is an unsupervised text-generation algorithm for text infilling that uses two pretrained forward and backward language models. It outperforms all unsupervised methods but still falls significantly behind the fine-tuned COMeTEmb+GPT2 model on abductive generation.

...

References


Rohola Zandie, Diwanshu Shekhar, and Mohammad Mahoor (2023). "COGEN: Abductive Commonsense Language Generation."
  1. Code and data are publicly available at: https://github.com/roholazandie/abduction_modeling