---
_id: '34025'
abstract:
- lang: eng
  text: "Controversial topics like abortion or capital punishment inherently lack
    correct answers or a single right way to deal with them. Thus, in order to find
    what is true, what is good, or what should be done, the involved parties need
    to debate. To form an opinion on a controversial topic, someone needs to take
    in many arguments on that topic, which can be a time-consuming process. To increase
    efficiency, an argument search engine can be used to speed up the retrieval of
    relevant arguments. Although such a service reduces the time needed to find
    arguments, a lot of textual data still has to be read. To this end, computational
    summarization approaches for arguments can limit the time needed for information
    review by generating short snippets that capture the main gist of each argument.
    Yet, we suggest that approaches considering one argument at a time show potential
    for further improvement in terms of efficiency during information review. In
    fact, arguments on the same topic, like those retrieved by a search engine for
    a certain query, partially cover the same content; e.g., arguments regarding
    the death penalty probably use deterrence as a point in favor of it. However,
    if the same aspect is central to multiple arguments, their snippets reflect this,
    which leads to redundancy among the snippets. Consequently, someone interested
    in gathering information on a controversial topic does not necessarily find new
    information in each snippet they read.\r\nWe introduce the task of Contrastive
    Argument Summarization (CAS), which addresses the aforementioned problem with
    existing argument summarization. An approach that addresses CAS aims to produce
    contrastive snippets for each argument in a set of topic-related arguments. A
    contrastive snippet should represent the main gist of its argument, it should
    account for the argumentative nature of the text, and it should be dissimilar
    to the other topic-related arguments in order to reduce redundancy among the
    snippets.\r\nWe propose two approaches to CAS, namely an extended version of
    the LexRank derivation by Alshomary et al. (2020) and an advancement of the work
    by Bista et al. (2020). Additionally, we develop two automatic measures to assess
    the extent to which the snippets of one set are opposed. For evaluation, we compile
    a corpus using the args.me search engine (Wachsmuth et al., 2017b) to come close
    to the suggested area of application. Moreover, we conduct a manual annotation
    study to assess the approaches' effectiveness. We find that the graph-based
    approach is superior when it comes to contrastiveness (i.e., snippets being
    dissimilar to topic-related arguments), and that the second approach outperforms
    the first as well as the unmodified version of Alshomary et al. (2020) when it
    comes to representativeness (i.e., snippets capturing the main gist of an argument)."
author:
- first_name: Jonas
  full_name: Rieskamp, Jonas
  id: '77643'
  last_name: Rieskamp
citation:
  ama: Rieskamp J. <i>Contrastive Argument Summarization Using Supervised and Unsupervised
    Machine Learning</i>.; 2022.
  apa: Rieskamp, J. (2022). <i>Contrastive Argument Summarization Using Supervised
    and Unsupervised Machine Learning</i>.
  bibtex: '@book{Rieskamp_2022, title={Contrastive Argument Summarization Using Supervised
    and Unsupervised Machine Learning}, author={Rieskamp, Jonas}, year={2022} }'
  chicago: Rieskamp, Jonas. <i>Contrastive Argument Summarization Using Supervised
    and Unsupervised Machine Learning</i>, 2022.
  ieee: J. Rieskamp, <i>Contrastive Argument Summarization Using Supervised and Unsupervised
    Machine Learning</i>. 2022.
  mla: Rieskamp, Jonas. <i>Contrastive Argument Summarization Using Supervised and
    Unsupervised Machine Learning</i>. 2022.
  short: J. Rieskamp, Contrastive Argument Summarization Using Supervised and Unsupervised
    Machine Learning, 2022.
date_created: 2022-11-07T13:57:08Z
date_updated: 2022-11-07T13:57:37Z
language:
- iso: eng
main_file_link:
- url: https://en.cs.uni-paderborn.de/fileadmin/informatik/fg/css/teaching/theses/thesis_final.pdf
status: public
supervisor:
- first_name: Milad
  full_name: Alshomary, Milad
  id: '73059'
  last_name: Alshomary
- first_name: Henning
  full_name: Wachsmuth, Henning
  id: '3900'
  last_name: Wachsmuth
title: Contrastive Argument Summarization Using Supervised and Unsupervised Machine
  Learning
type: mastersthesis
user_id: '77643'
year: '2022'
...
---
_id: '45790'
author:
- first_name: Juela
  full_name: Palushi, Juela
  last_name: Palushi
citation:
  ama: Palushi J. <i>Domain-Aware Text Professionalization Using Sequence-to-Sequence
    Neural Networks</i>.; 2022.
  apa: Palushi, J. (2022). <i>Domain-aware Text Professionalization using Sequence-to-Sequence
    Neural Networks</i>.
  bibtex: '@book{Palushi_2022, title={Domain-aware Text Professionalization using
    Sequence-to-Sequence Neural Networks}, author={Palushi, Juela}, year={2022} }'
  chicago: Palushi, Juela. <i>Domain-Aware Text Professionalization Using Sequence-to-Sequence
    Neural Networks</i>, 2022.
  ieee: J. Palushi, <i>Domain-aware Text Professionalization using Sequence-to-Sequence
    Neural Networks</i>. 2022.
  mla: Palushi, Juela. <i>Domain-Aware Text Professionalization Using Sequence-to-Sequence
    Neural Networks</i>. 2022.
  short: J. Palushi, Domain-Aware Text Professionalization Using Sequence-to-Sequence
    Neural Networks, 2022.
date_created: 2023-06-27T12:57:57Z
date_updated: 2023-07-05T07:31:17Z
department:
- _id: '600'
language:
- iso: eng
project:
- _id: '9'
  grant_number: '160364472'
  name: 'SFB 901 - B1: SFB 901 - Parametrisierte Servicespezifikation (Subproject
    B1)'
- _id: '1'
  grant_number: '160364472'
  name: 'SFB 901: SFB 901: On-The-Fly Computing - Individualisierte IT-Dienstleistungen
    in dynamischen Märkten '
- _id: '3'
  name: 'SFB 901 - B: SFB 901 - Project Area B'
status: public
supervisor:
- first_name: Henning
  full_name: Wachsmuth, Henning
  id: '3900'
  last_name: Wachsmuth
title: Domain-aware Text Professionalization using Sequence-to-Sequence Neural Networks
type: bachelorsthesis
user_id: '477'
year: '2022'
...
---
_id: '45789'
author:
- first_name: Vinaykumar
  full_name: Budanurmath, Vinaykumar
  last_name: Budanurmath
citation:
  ama: Budanurmath V. <i>Propaganda Technique Detection Using Connotation Frames</i>.;
    2022.
  apa: Budanurmath, V. (2022). <i>Propaganda Technique Detection Using Connotation
    Frames</i>.
  bibtex: '@book{Budanurmath_2022, title={Propaganda Technique Detection Using Connotation
    Frames}, author={Budanurmath, Vinaykumar}, year={2022} }'
  chicago: Budanurmath, Vinaykumar. <i>Propaganda Technique Detection Using Connotation
    Frames</i>, 2022.
  ieee: V. Budanurmath, <i>Propaganda Technique Detection Using Connotation Frames</i>.
    2022.
  mla: Budanurmath, Vinaykumar. <i>Propaganda Technique Detection Using Connotation
    Frames</i>. 2022.
  short: V. Budanurmath, Propaganda Technique Detection Using Connotation Frames,
    2022.
date_created: 2023-06-27T12:56:04Z
date_updated: 2023-07-05T07:33:45Z
department:
- _id: '600'
language:
- iso: eng
project:
- _id: '9'
  grant_number: '160364472'
  name: 'SFB 901 - B1: SFB 901 - Parametrisierte Servicespezifikation (Subproject
    B1)'
- _id: '1'
  grant_number: '160364472'
  name: 'SFB 901: SFB 901: On-The-Fly Computing - Individualisierte IT-Dienstleistungen
    in dynamischen Märkten '
- _id: '3'
  name: 'SFB 901 - B: SFB 901 - Project Area B'
status: public
supervisor:
- first_name: Henning
  full_name: Wachsmuth, Henning
  id: '3900'
  last_name: Wachsmuth
title: Propaganda Technique Detection Using Connotation Frames
type: mastersthesis
user_id: '477'
year: '2022'
...
---
_id: '45788'
author:
- first_name: Jonas
  full_name: Bülling, Jonas
  last_name: Bülling
citation:
  ama: 'Bülling J. <i>Political Speaker Transfer: Learning to Generate Text in the
    Styles of Barack Obama and Donald Trump</i>.; 2021.'
  apa: 'Bülling, J. (2021). <i>Political Speaker Transfer: Learning to Generate Text
    in the Styles of Barack Obama and Donald Trump</i>.'
  bibtex: '@book{Bülling_2021, title={Political Speaker Transfer: Learning to Generate
    Text in the Styles of Barack Obama and Donald Trump}, author={Bülling, Jonas},
    year={2021} }'
  chicago: 'Bülling, Jonas. <i>Political Speaker Transfer: Learning to Generate Text
    in the Styles of Barack Obama and Donald Trump</i>, 2021.'
  ieee: 'J. Bülling, <i>Political Speaker Transfer: Learning to Generate Text in the
    Styles of Barack Obama and Donald Trump</i>. 2021.'
  mla: 'Bülling, Jonas. <i>Political Speaker Transfer: Learning to Generate Text in
    the Styles of Barack Obama and Donald Trump</i>. 2021.'
  short: 'J. Bülling, Political Speaker Transfer: Learning to Generate Text in the
    Styles of Barack Obama and Donald Trump, 2021.'
date_created: 2023-06-27T12:54:30Z
date_updated: 2023-07-05T07:32:18Z
department:
- _id: '600'
language:
- iso: eng
project:
- _id: '9'
  grant_number: '160364472'
  name: 'SFB 901 - B1: SFB 901 - Parametrisierte Servicespezifikation (Subproject
    B1)'
- _id: '1'
  grant_number: '160364472'
  name: 'SFB 901: SFB 901: On-The-Fly Computing - Individualisierte IT-Dienstleistungen
    in dynamischen Märkten '
- _id: '3'
  name: 'SFB 901 - B: SFB 901 - Project Area B'
status: public
supervisor:
- first_name: Henning
  full_name: Wachsmuth, Henning
  id: '3900'
  last_name: Wachsmuth
title: 'Political Speaker Transfer: Learning to Generate Text in the Styles of Barack
  Obama and Donald Trump'
type: mastersthesis
user_id: '477'
year: '2021'
...
---
_id: '45787'
author:
- first_name: Avishek
  full_name: Mishra, Avishek
  last_name: Mishra
citation:
  ama: Mishra A. <i>Computational Text Professionalization Using Neural Sequence-to-Sequence
    Models</i>.; 2021.
  apa: Mishra, A. (2021). <i>Computational Text Professionalization using Neural Sequence-to-Sequence
    Models</i>.
  bibtex: '@book{Mishra_2021, title={Computational Text Professionalization using
    Neural Sequence-to-Sequence Models}, author={Mishra, Avishek}, year={2021} }'
  chicago: Mishra, Avishek. <i>Computational Text Professionalization Using Neural
    Sequence-to-Sequence Models</i>, 2021.
  ieee: A. Mishra, <i>Computational Text Professionalization using Neural Sequence-to-Sequence
    Models</i>. 2021.
  mla: Mishra, Avishek. <i>Computational Text Professionalization Using Neural Sequence-to-Sequence
    Models</i>. 2021.
  short: A. Mishra, Computational Text Professionalization Using Neural Sequence-to-Sequence
    Models, 2021.
date_created: 2023-06-27T12:51:08Z
date_updated: 2023-07-05T07:32:50Z
department:
- _id: '600'
language:
- iso: eng
project:
- _id: '9'
  grant_number: '160364472'
  name: 'SFB 901 - B1: SFB 901 - Parametrisierte Servicespezifikation (Subproject
    B1)'
- _id: '1'
  grant_number: '160364472'
  name: 'SFB 901: SFB 901: On-The-Fly Computing - Individualisierte IT-Dienstleistungen
    in dynamischen Märkten '
- _id: '3'
  name: 'SFB 901 - B: SFB 901 - Project Area B'
status: public
supervisor:
- first_name: Henning
  full_name: Wachsmuth, Henning
  id: '3900'
  last_name: Wachsmuth
title: Computational Text Professionalization using Neural Sequence-to-Sequence Models
type: mastersthesis
user_id: '477'
year: '2021'
...
