ArgMining 2021
The 8th Workshop on Argument Mining, 2021
co-located with EMNLP 2021, in Punta Cana, Dominican Republic

Shared Task

Quantitative Summarization – Key Point Analysis Shared Task

Key Point Analysis (KPA) is a new NLP task, with strong relations to Computational Argumentation, Opinion Analysis, and Summarization (Bar-Haim et al., ACL-2020; Bar-Haim et al., EMNLP-2020). Given an input corpus consisting of a collection of relatively short, opinionated texts focused on a topic of interest, the goal of KPA is to produce a succinct list of the most prominent key points in the input corpus, along with their relative prevalence. The output of KPA is thus a bullet-like summary, with an important quantitative angle and an associated well-defined evaluation framework. Successful solutions to KPA can be used to gain better insight into public opinion as expressed in social media, surveys, and so forth, giving rise to a new channel of communication between decision makers and the people who may be affected by their decisions.

In this first-of-its-kind Key Point Analysis shared task, we invite teams to participate in the two tracks described below. Participating teams will be invited to submit their work for presentation at the ArgMining 2021 workshop at EMNLP 2021, and accepted papers will appear in the workshop proceedings.

Note: All participating teams must take part in Track 1, while Track 2 is optional. Ranking among the top 10 teams on Track 1 is a prerequisite for being evaluated on Track 2.

Track 1 – Key-Point Matching

Given a debatable topic, a set of key points per stance, and a set of crowd arguments supporting or contesting the topic, report, for each argument, its match score against each of the key points under the same stance towards the topic.
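To illustrate the expected output shape, a trivial Track 1 baseline could score each argument against each same-stance key point with a simple lexical-overlap measure. The function names and the Jaccard similarity below are illustrative assumptions, not part of the task definition or the official evaluation:

```python
def tokenize(text):
    # Lowercased word tokens with trailing punctuation stripped;
    # a deliberately simple stand-in for a real tokenizer.
    return {w.strip(".,!?").lower() for w in text.split()}

def match_score(argument, key_point):
    # Jaccard overlap between token sets, in [0, 1]; illustrative only.
    a, k = tokenize(argument), tokenize(key_point)
    return len(a & k) / len(a | k) if a | k else 0.0

def match_arguments(arguments, key_points):
    # Track 1 output: for each argument, a score against every
    # key point of the same stance.
    return {arg: {kp: match_score(arg, kp) for kp in key_points}
            for arg in arguments}
```

Competitive systems would of course replace the overlap measure with a learned semantic-similarity model; the sketch only fixes the input/output contract.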

Track 2 - Key Points Generation and Matching

Given a debatable topic and a set of crowd arguments supporting or contesting the topic, generate a set of key points for each stance of the topic, and report, for each given argument, its match score against each generated key point under the same stance.
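A minimal sketch of the Track 2 pipeline is greedy clustering: an argument becomes a new key point unless it is sufficiently similar to an existing one, and every argument is then scored against the generated key points. The helper names, the Jaccard similarity, and the threshold value are all illustrative assumptions, not part of the task definition:

```python
def _tokens(text):
    # Lowercased word tokens with trailing punctuation stripped (toy tokenizer).
    return {w.strip(".,!?").lower() for w in text.split()}

def _jaccard(a, b):
    # Token-set overlap in [0, 1]; a crude stand-in for semantic similarity.
    ta, tb = _tokens(a), _tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def generate_key_points(arguments, threshold=0.3):
    # Greedy clustering baseline: the first argument of each cluster
    # serves as its key point. The threshold is an illustrative choice.
    key_points = []
    for arg in arguments:
        if all(_jaccard(arg, kp) < threshold for kp in key_points):
            key_points.append(arg)
    return key_points

def score_against_key_points(argument, key_points):
    # Track 2 output also requires a match score for every
    # (argument, key point) pair of the same stance.
    return {kp: _jaccard(argument, kp) for kp in key_points}
```

A real system would generate more abstractive, concise key points rather than reuse raw arguments; the sketch only shows the two-stage generate-then-match structure.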

Key Point Analysis Examples

The following is an example of key point analysis, obtained by human labeling of key points provided by an expert, for the pro-stance arguments on the topic "Homeschooling should be banned" (taken from the ArgKP dataset):

Key point (matched arguments count):

  • Mainstream schools are essential to develop social skills. (61)
  • Parents are not qualified as teachers. (20)
  • Homeschools cannot be regulated/standardized. (15)
  • Mainstream schools are of higher educational quality. (9)

A few examples of concrete key point to argument matches:

Matched key point: Mainstream schools are essential to develop social skills.
  • Children can not learn to interact with their peers when taught at home.
  • Homeschooling a child denies them valuable lifeskills, particularly interaction with their own age group and all experiences stemming from this.
  • To homeschool is in one way giving a child an immersive educational experience, but not giving them the social skills and cooperative skills they need throughout life, so should be banned.

Matched key point: Parents are not qualified as teachers.
  • Parents are usually not qualified to provide a suitable curriculum for their children. additionally, children are not exposed to the real world.

Matched key point: Homeschools cannot be regulated/standardized.
  • It is impossible to ensure that homeschooled children are being taught properly.

Task schedule

  • Apr. 22 - Leaderboard available
  • June 24 - Test data released
  • June 30 - Evaluation ends, submission closed
  • July 8 - Results announced
  • Aug. 5 - Paper submission due
  • Sept. 5 - Notification to authors
  • Sept. 15 - Camera-ready version due
  • Nov. 10-11 - ArgMining 2021 workshop (EMNLP)

Shared Task Organizers

  • Roni Friedman-Melamed - IBM Research AI, Israel
  • Lena Dankin - IBM Research AI, Israel
  • Yufang Hou - IBM Research AI, Ireland
  • Noam Slonim - IBM Research AI, Israel


Terms and conditions

By submitting results to this competition, you consent to the public release of your scores at the ArgMining workshop and in the associated proceedings, at the task organizers' discretion. Scores may include, but are not limited to, automatic and manual quantitative judgments, qualitative judgments, and such other metrics as the task organizers see fit. You accept that the ultimate decision on metric choice and score values rests with the task organizers. You further agree that the task organizers are under no obligation to release scores, and that scores may be withheld if, in the task organizers' judgment, the submission was incomplete, erroneous, deceptive, or violated the letter or spirit of the competition's rules. Inclusion of a submission's scores is not an endorsement of a team's or individual's submission, system, or science. You further agree that your system may be named according to the team name provided at the time of submission, or a suitable shorthand determined by the task organizers. Wherever appropriate, an academic citation for the submitting group will be added (e.g., in a paper summarizing the task).

The competition must comply with the general rules of EMNLP. The organizers are free to penalize or disqualify participants for any violation of the above rules, or for misuse, unethical behaviour, or other behaviour they deem unacceptable in a scientific competition in general and in this one in particular.