Research Review: How Does Utah Compose Affect Student Writing Outcomes?

Utah Compose facilitates cycles of practice and feedback, which are essential for developing writing proficiency. However, without effective instructional support, the tool alone “will not transform writing outcomes.” Modest improvements are observed more often across revisions of the same essay than across first drafts of different essays. Within those limits, automated feedback in the form of scores and suggestions promotes growth by motivating students and guiding their next steps. As a result, students revise more and learn to use feedback more effectively. Over the long term, increased revision is associated with improved performance on statewide tests. 


The Key Findings that follow include reference to MI Write. Utah Compose uses the same technology infrastructure as MI Write and includes the same features and functionality. 


Key Findings


Automated feedback helps students engage in the revision process to strengthen skills.


•    Students in classes using MI Write revised more, and completed more drafts, than students in classes using Google Docs; teachers reported having more time to give higher-level feedback. [5]
•    Automated feedback supports growth especially in mechanical aspects such as spelling and grammar, but it is limited in its ability to provide higher-level feedback on writing quality and content. [7; 2]
•    Students who wrote fewer essays but revised them many times showed greater gains in writing proficiency than students who wrote many essays over the school year but revised them less often. [1]


Writing growth across revisions is nonlinear.


•    A study of two years of usage data from Utah Compose (a state-specific version of MI Write) in Grades 4–11 indicates that improvements occur within the first five draft revisions. [2]
•    Data from a statewide assessment of students in Grades 4–8 showed growth over revisions with diminishing returns and a plateau at 11 revisions. [7]
•    Students in Grades 3–5 using MI Write alongside a district writing curriculum did not show improvement in first-draft performance from pretest to posttest, but they used feedback more effectively and improved faster on subsequent prompts. [1; 6] 


Increased revision is positively associated with better performance on state English language arts tests.


•    In a small-scale study of Grade 6 students that compared MI Write with Google Docs as feedback conditions. [8]
•    In a naturalistic study of Utah Compose usage in Grades 4–11 over two years. [2]


Effects differ across groups of students using MI Write.


•    In a district-wide implementation in Grades 3–5, boys produced lower-quality first drafts than girls but improved more quickly when using feedback (in revising a fall essay, but not a spring essay). While the tool did not disadvantage any specific population (no equity issues were identified other than uneven availability of technology across schools), it was not observed to close achievement gaps between students of higher and lower socioeconomic status. [1]
•    Students with disabilities produced lower-quality initial drafts but improved more quickly than typically developing peers. [3]
•    Elementary school teachers observed that automated scores may be less reliable for English learners and students with disabilities. [4]

 

References


1.    Huang, Y., & Wilson, J. (2021). Using automated feedback to develop writing proficiency. Computers and Composition, 62, 102675. https://doi.org/10.1016/j.compcom.2021.102675


2.    Potter, A., & Wilson, J. (2021). Statewide implementation of automated writing evaluation: Analyzing usage and associations with state test performance in grades 4–11. Educational Technology Research and Development, 69(3), 1557–1578. https://doi.org/10.1007/s11423-021-10004-9


3.    Wilson, J. (2017). Associated effects of automated essay evaluation software on growth in writing quality for students with and without disabilities. Reading and Writing, 30, 691–718. https://doi.org/10.1007/s11145-016-9695-z


4.    Wilson, J., Ahrendt, C., Fudge, E. A., Raiche, A., Beard, G., & MacArthur, C. (2021). Elementary teachers’ perceptions of automated feedback and automated scoring: Transforming the teaching and learning of writing using automated writing evaluation. Computers & Education, 168, 104208. https://doi.org/10.1016/j.compedu.2021.104208


5.    Wilson, J., & Czik, A. (2016). Automated essay evaluation software in English language arts classrooms: Effects on teacher feedback, student motivation, and writing quality. Computers & Education, 100, 94–109. https://doi.org/10.1016/j.compedu.2016.05.004


6.    Wilson, J., Huang, Y., Palermo, C., Beard, G., & MacArthur, C. A. (2021). Automated feedback and automated scoring in the elementary grades: Usage, attitudes, and associations with writing outcomes in a districtwide implementation of MI Write. International Journal of Artificial Intelligence in Education, 31, 234–276. https://doi.org/10.1007/s40593-020-00236-w


7.    Wilson, J., Olinghouse, N. G., & Andrada, G. N. (2014). Does automated feedback improve writing quality? Learning Disabilities: A Contemporary Journal, 12, 93–118.


8.    Wilson, J., & Roscoe, R. D. (2020). Automated writing evaluation and feedback: Multiple metrics of efficacy. Journal of Educational Computing Research, 58(1), 87–125. https://doi.org/10.1177/0735633119830764