---
res:
  bibo_abstract:
  - Backchannels and fillers are important linguistic expressions in dialogue, but
    they are often treated as ‘noise’ to be bypassed in modern transformer-based
    language models. Our work studies their representation in language models using
    three fine-tuning strategies. The models are trained on three dialogue corpora
    in English and Japanese, in which backchannels and fillers are preserved and
    annotated, to investigate how fine-tuning can help LMs learn their representations.
    We first apply clustering analysis to the learnt representations of backchannels
    and fillers and find increased silhouette scores in representations from fine-tuned
    models, which suggests that fine-tuning enables LMs to distinguish the nuanced
    semantic variation in different backchannel and filler uses. We also use natural
    language generation (NLG) metrics and qualitative analysis to confirm that the
    utterances generated by fine-tuned language models more closely resemble
    human-produced utterances. Our findings suggest the potential of transforming
    general LMs into conversational LMs that are more capable of producing adequate,
    human-like language.@eng
  bibo_authorlist:
  - foaf_Person:
      foaf_givenName: Yu
      foaf_name: Wang, Yu
      foaf_surname: Wang
  - foaf_Person:
      foaf_givenName: Leyi
      foaf_name: Lao, Leyi
      foaf_surname: Lao
  - foaf_Person:
      foaf_givenName: Langchu
      foaf_name: Huang, Langchu
      foaf_surname: Huang
  - foaf_Person:
      foaf_givenName: Gabriel
      foaf_name: Skantze, Gabriel
      foaf_surname: Skantze
  - foaf_Person:
      foaf_givenName: Yang
      foaf_name: Xu, Yang
      foaf_surname: Xu
  - foaf_Person:
      foaf_givenName: Hendrik
      foaf_name: Buschmeier, Hendrik
      foaf_surname: Buschmeier
      foaf_workInfoHomepage: http://www.librecat.org/personId=76456
    orcid: 0000-0002-9613-5713
  dct_date: 2026^xs_gYear
  dct_language: eng
  dct_title: Investigating the representation of backchannels and fillers in fine-tuned
    language models@eng
...
