Syntactic complexity measures from L2SCA [40].


Bibliographic Details
Main Author: Wenlong Liu (1672576) (author)
Other Authors: Xianming Liu (1602160) (author)
Published: 2025
Description
Summary: This study investigates the syntactic complexity of argumentative essays generated by ChatGPT in comparison to those written by native speakers. By examining cross-rhetorical-stage variation in syntactic complexity, we explore how ChatGPT's writing aligns with or diverges from human argumentative writing. The results reveal that ChatGPT and native speakers exhibit similar patterns in mean length of sentence in the thesis stage, and in mean length of T-unit and complex nominals per T-unit in the conclusion stage. However, ChatGPT showed a preference for coordination structures across all stages, relying more on parallel constructions, whereas native speakers used subordination structures and verb phrases more frequently across all stages. Additionally, ChatGPT's syntactic complexity was characterized by lower variability across multiple measures, indicating a more uniform and formulaic output. These findings underscore the differences between ChatGPT and native speakers in syntactic complexity and rhetorical functions in argumentative essays, thereby contributing to our understanding of ChatGPT's argumentative writing performance and providing valuable insights for integrating ChatGPT into writing instruction.
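
To make the L2SCA-style measures concrete, the following is a minimal Python sketch of rough, plain-text approximations of two kinds of measures discussed above: mean length of sentence and the relative frequency of coordination versus subordination markers. The function name rough_measures and the marker word lists are illustrative assumptions, not part of the study or of L2SCA itself; the actual L2SCA indices (e.g., mean length of T-unit, complex nominals per T-unit) are computed from full constituency parses and exact T-unit segmentation, which this sketch does not perform.

```python
import re

# Assumed, simplified marker lists for illustration only.
COORDINATORS = {"and", "but", "or", "nor", "for", "so", "yet"}
SUBORDINATORS = {"because", "although", "while", "whereas", "since",
                 "unless", "if", "that", "which", "who"}

def rough_measures(text: str) -> dict:
    """Approximate sentence-level complexity measures from plain text."""
    # Naive sentence split on terminal punctuation (no parser involved).
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    coord = sum(w in COORDINATORS for w in words)
    subord = sum(w in SUBORDINATORS for w in words)
    n_sent = max(len(sentences), 1)
    return {
        "mean_length_of_sentence": len(words) / n_sent,
        "coordination_per_sentence": coord / n_sent,
        "subordination_per_sentence": subord / n_sent,
    }

if __name__ == "__main__":
    essay = ("ChatGPT relies on parallel constructions, and it repeats them. "
             "Native writers vary their syntax because they embed clauses more freely.")
    print(rough_measures(essay))
```

A higher coordination-per-sentence value with a lower subordination-per-sentence value would point in the same direction as the pattern reported for ChatGPT, while the reverse pattern would resemble the native-speaker essays; a proper replication would substitute the full L2SCA pipeline for these approximations.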