---
_id: '61444'
abstract:
- lang: eng
  text: Backchannels and fillers are important linguistic expressions in dialogue,
    but they are often treated as ‘noise’ to be bypassed in modern transformer-based
    language models. Our work studies their representation in language models using
    three fine-tuning strategies. The models are trained on three dialogue corpora
    in English and Japanese, in which backchannels and fillers are preserved and
    annotated, to investigate how fine-tuning can help LMs learn their representations.
    We first apply clustering analysis to the learnt representations of backchannels
    and fillers and find increased silhouette scores in representations from fine-tuned
    models, suggesting that fine-tuning enables LMs to distinguish the nuanced semantic
    variation in different backchannel and filler uses. We also use natural language
    generation (NLG) metrics and qualitative analysis to confirm that the utterances
    generated by fine-tuned language models resemble human-produced utterances more
    closely. Our findings suggest the potential of transforming general LMs into
    conversational LMs that are more capable of producing adequate, human-like language.
author:
- first_name: Yu
  full_name: Wang, Yu
  last_name: Wang
- first_name: Leyi
  full_name: Lao, Leyi
  last_name: Lao
- first_name: Langchu
  full_name: Huang, Langchu
  last_name: Huang
- first_name: Gabriel
  full_name: Skantze, Gabriel
  last_name: Skantze
- first_name: Yang
  full_name: Xu, Yang
  last_name: Xu
- first_name: Hendrik
  full_name: Buschmeier, Hendrik
  id: '76456'
  last_name: Buschmeier
  orcid: 0000-0002-9613-5713
citation:
  ama: Wang Y, Lao L, Huang L, Skantze G, Xu Y, Buschmeier H. Investigating the representation
    of backchannels and fillers in fine-tuned language models.
  apa: Wang, Y., Lao, L., Huang, L., Skantze, G., Xu, Y., &#38; Buschmeier, H. (n.d.).
    <i>Investigating the representation of backchannels and fillers in fine-tuned
    language models</i>. 64th Annual Meeting of the Association for Computational
    Linguistics, San Diego, CA, USA.
  bibtex: '@inproceedings{Wang_Lao_Huang_Skantze_Xu_Buschmeier, title={Investigating
    the representation of backchannels and fillers in fine-tuned language models},
    author={Wang, Yu and Lao, Leyi and Huang, Langchu and Skantze, Gabriel and Xu,
    Yang and Buschmeier, Hendrik} }'
  chicago: Wang, Yu, Leyi Lao, Langchu Huang, Gabriel Skantze, Yang Xu, and Hendrik
    Buschmeier. “Investigating the Representation of Backchannels and Fillers in Fine-Tuned
    Language Models,” n.d.
  ieee: Y. Wang, L. Lao, L. Huang, G. Skantze, Y. Xu, and H. Buschmeier, “Investigating
    the representation of backchannels and fillers in fine-tuned language models,”
    presented at the 64th Annual Meeting of the Association for Computational Linguistics,
    San Diego, CA, USA.
  mla: Wang, Yu, et al. <i>Investigating the Representation of Backchannels and Fillers
    in Fine-Tuned Language Models</i>.
  short: 'Y. Wang, L. Lao, L. Huang, G. Skantze, Y. Xu, H. Buschmeier, in: n.d.'
conference:
  end_date: 2026-07-07
  location: San Diego, CA, USA
  name: 64th Annual Meeting of the Association for Computational Linguistics
  start_date: 2026-07-02
date_created: 2025-09-25T19:00:23Z
date_updated: 2026-04-07T09:56:43Z
department:
- _id: '660'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2509.20237
oa: '1'
project:
- _id: '112'
  name: 'TRR 318; TP A02: Verstehensprozess einer Erklärung beobachten und auswerten'
publication_status: accepted
quality_controlled: '1'
status: public
title: Investigating the representation of backchannels and fillers in fine-tuned
  language models
type: conference
user_id: '76456'
year: '2026'
...
