Princeton University Library Catalog
The Routledge International Handbook of Automated Essay Evaluation.
Author
Shermis, Mark D.
Format
Book
Language
English
Edition
1st ed.
Published/Created
Oxford : Taylor & Francis Group, 2024.
©2024.
Description
1 online resource (647 pages)
Availability
Available Online
Taylor & Francis eBooks Complete
Routledge Handbooks Online Complete
Details
Subject(s)
Artificial intelligence
Educational technology
Related name
Wilson, Joshua
Library of Congress genre(s)
Essays
Series
Routledge International Handbooks Series
Summary note
This handbook is a definitive guide to the intersection of automation, artificial intelligence, and education. The volume captures the ongoing advancement of automated essay evaluation (AEE), reflecting its application in both large-scale and classroom-based assessments to support teaching and learning.
Source of description
Description based on publisher supplied metadata and other sources.
Part of the metadata in this record was created by AI, based on the text of the resource.
Contents
Cover
Half Title
Series Information
Title Page
Copyright Page
Table of Contents
About the Editors
List of Contributors
Foreword
Acknowledgments
Reviewer Acknowledgments
Section 1 Introduction to AEE and Modern AEE Systems
1 Introduction to Automated Essay Evaluation
1.1 Introduction
1.2 The Evolution of Automated Scoring and Automated Feedback On Writing
1.2.1 The 2012 Hewlett Trials and Their Outcomes
1.2.2 The National Assessment of Educational Progress (NAEP) Trials
1.3 Current Use Cases for Automated Essay Evaluation
1.3.1 Evaluating Essays With 150 Words Or More
1.3.2 Short-Form Constructed Responses With Fewer Than 150 Words
1.3.3 Content-Intensive Responses
1.3.4 Content-Superficial Responses
1.3.5 Summatively Scored Essays
1.3.6 Formative Assessment
1.4 Frameworks for Validating AEE
1.5 Lingering and New Concerns Related to AEE
1.6 The Current Handbook: Appraising the State of the Art and Fostering Future Development
References
2 Automated Essay Evaluation at Scale: Hybrid Automated Scoring/Hand Scoring in the Summative Assessment Program
2.1 Introduction
2.2 Progressive Hybrid Scoring Approaches
2.2.1 Overview
2.2.2 Project Essay Grade
2.2.2.1 PEG Architecture
2.2.2.2 PEG Hybrid Scoring Applications
2.2.2.3 Evidence for Use
2.2.3 Requirements
2.2.3.1 Training Data
2.2.3.2 Validation
2.2.4 Training
2.2.5 Hybrid Scoring Process
2.2.5.1 Role of Humans
2.2.5.2 Role of the Engine
2.3 Implications
2.3.1 Future Directions
Notes
3 Exploration of the Stacking Ensemble Learning Algorithm for Automated Scoring of Constructed-Response Items in Reading Assessment
3.1 Introduction
3.2 Methods
3.2.1 Data
3.2.2 Model Building Process
3.2.2.1 Text Preprocessing and Processing
3.2.2.2 Feature Extraction
3.2.2.3 Automated Scoring Classifier Development
3.2.3 Model Evaluation
3.3 Results
3.3.1 Automated Scoring Classifier Development
3.4 Summary and Discussion
4 Scoring Essays Written in Persian Using a Transformer-Based Model: Implications for Multilingual AES
4.1 Introduction
4.1.1 Persian as a Unique Case Study for Multilingual AES
4.1.2 Purpose of the Chapter
4.2 Overview of a Transformer-Based System for AES
4.2.1 Introduction to Transformers
4.2.2 Bidirectional Encoder Representations From Transformers
4.2.3 Multilingual BERT
4.3 Scoring Persian Essays Using MBERT Transformer Model
4.3.1 Data Set
4.3.2 Model Architecture
4.3.2.1 Word Embedding Word2Vec Model
4.3.2.2 Transformer MBERT Model
4.3.2.3 Hyperparameter Tuning
4.3.3 Performance Measures
4.4 Comparing the Performance of the Word Embedding and Transformer Models
4.4.1 Performance of Models Overall
4.4.2 Performance of Models By Score Level
4.5 Conclusions and Implications for Multilingual AES
4.5.1 The Importance of Transformers for Multilingual AES
4.5.2 Using MBERT to Score Essays Written in Persian
4.5.3 Assessment Technology, Equity, and Opportunity
Note
Appendix A
Appendix B
Instruction
Topics
5 SmartWriting-Mandarin: An Automated Essay Scoring System for Chinese Foreign Language Learners
5.1 Introduction
5.2 Related Works
5.2.1 DNN-Based AES Systems
5.2.2 Chinese Automatic Essay Scoring and ACES
5.3 Details of SW-M
5.3.1 Preprocessing Module
5.3.2 Textual Features
5.3.3 Typos
5.3.4 Grammatical Errors
5.3.5 Scoring Model: A Fuzzy-Based Approach
5.4 Performance of SWM
5.5 Future Studies
References
6 NLP Application in the Hebrew Language for Assessment and Learning
6.1 Introduction
6.2 Hebrew Orthography and Morphology
6.2.1 Hebrew Orthography
6.2.2 Hebrew Morphology
6.2.2.1 The Verb System
6.2.2.2 The Noun System
6.2.2.3 Prepositions, Conjunctions, and Determiners
6.2.3 Text Length and Density
6.2.3.1 Hebrew Versus English Lexicon
6.2.3.2 Text Length
6.3 Morphological Lexicon and Corpora
6.3.1 Morphological Lexicon
6.3.2 Hebrew Corpora
6.3.2.1 M1 Corpus and the Annotated Corpus
6.3.2.2 News Corpus
6.3.3 Language Models
6.4 Computational Infrastructure for NLP in Hebrew
6.4.1 Tokenizer
6.4.2 Morphological Analyzer
6.4.3 Morphological Disambiguator
6.4.4 Semantic Disambiguator
6.4.5 Feature Extraction
6.4.5.1 Statistical Or "Surface" Features
6.4.5.2 Lexical Features
6.4.5.3 Morphological Features
6.4.5.4 Syntactic Features
6.4.5.5 Semantic Features
6.4.6 Grouping Text Features Into Linguistic Factors
6.4.7 Text Analysis Pipeline
6.5 Automated Essay Scoring
6.5.1 Score Prediction Algorithms
6.5.2 Grouping Features Into Macro-Features and Factors
6.5.3 Validity of NiteRater
6.5.3.1 Face and Content Validity - Identifying and Scoring Aberrant Essays
6.5.3.2 Predictive Or Criterion-Related Validity - Scoring of Classroom Essays
6.5.3.3 Predictive Or Criterion-Related Validity - Scoring of Tests for Admission to Higher Education
6.5.3.4 "True" Validity - Agreement With True Scores
6.5.3.5 Content Validity - Generalizing the Prediction Equation Across Prompts
6.5.4 Validity of Combined Computer and Human Scores
6.5.5 Quality Assurance of Essay Scoring
6.6 Other Applications of the Hebrew-NLP System
6.6.1 Providing Feedback to Essay Writers
6.6.2 Readability Assessment
6.6.2.1 Application to Textbooks (CET)
6.6.2.2 Simplification of Hebrew Legal Texts
6.6.3 Online Service to the Research Community
6.7 Summary, Open Issues, and Future Directions
Section 2 Expanding Automated Evaluation: Reading, Speech, Mathematics, and Writing Research
7 Automated Scoring for NAEP Short-Form Constructed Responses in Reading
7.1 Introduction
7.1.1 Short-Form Constructed Responses
7.1.2 The Current Study
7.2 Method
7.2.1 Prompt-Specific Competition
7.2.1.1 Participants
7.2.1.2 Instruments
7.2.1.3 Procedure
7.2.1.4 Results
7.2.2 Generic Competition
7.2.2.1 Participants and Instruments
7.2.2.2 Procedure
7.2.2.3 Results
7.3 Discussion
7.3.1 Limitations
8 Automated Scoring and Feedback for Spoken Language
8.1 Introduction
8.2 Automated Scoring of Spoken Vs. Written Language
8.3 From the Rubric to Speech Features
8.4 Automated Speech Scoring System Architecture
8.4.1 Automatic Speech Recognition
8.4.2 Computing Speech Features
8.4.3 Filtering Models
8.4.4 Scoring Models
8.5 Operational Considerations
8.6 Providing Feedback to Language Learners
8.7 Speech Scoring Without Curated Features
8.8 Open Research Issues
8.9 Conclusion
9 Automated Scoring of Math Constructed-Response Items
9.1 Introduction
9.2 Anatomy of a Math Item
9.3 Challenges of Math Automated Scoring
9.3.1 Representation of Mathematics
9.3.2 Equivalence of Expressions
9.3.3 Evaluation of Mathematics
9.3.4 Extracting Mathematics From Prose
9.3.5 Understanding Reasoning
9.4 Injecting Mathematical Reasoning Into NLP Scoring Models
9.4.1 Scoring of Math-Only Responses
9.4.2 Scoring of Responses Containing Prose
9.4.3 Brief Comment On the Validity of Automated Scoring of Math CR Items
9.5 Empirical Study
9.5.1 Ablation Study Results
9.5.2 Large Language Models for Math CR Scoring
9.6 Conclusion
10 We Write Automated Scoring: Using ChatGPT for Scoring in Large-Scale Writing Research Projects
10.1 Introduction
10.1.1 We Write Intervention
10.1.2 Theoretical Framework
10.2 Developing a ChatGPT-Based Scoring Algorithm to Evaluate the Efficacy of the We Write Intervention
10.2.1 Design of Measures
10.2.2 Human Scoring Scheme for Essay Quality
10.2.3 ChatGPT Scoring Model Architecture/Details
10.2.3.1 Refinement of Scoring
10.3 Score Validation: Comparing Human and ChatGPT Scoring
10.4 Discussion and Future Research
10.4.1 Score Tendencies
10.4.2 Agreement Between Scores
10.4.3 Generosity of Scoring
10.4.4 Correlation Across Proficiency Levels
10.4.5 Efficiency
10.5 Limitations
10.6 Conclusion
Section 3 Innovations in Automated Writing Evaluation
11 Exploring the Role of Automated Writing Evaluation as a Formative Assessment Tool Supporting Self-Regulated Learning in Writing
11.1 Introduction
11.1.1 The Present Chapter
11.2 Does AWE Help Students Learn Evaluation Criteria?
11.2.1 Learning Evaluation Criteria: Summary and Future Directions
11.3 Does AWE Help Students Practice Writing Skills and Processes?
11.3.1 Practice Writing Skills and Processes: Summary and Future Directions
11.4 Does AWE Provide Understandable and Actionable Feedback?
11.4.1 Understandable and Actionable Feedback: Summary and Future Directions
11.5 Does AWE-Supported Peer Review Offer Benefits for Reviewers and Writers?
11.5.1 AWE-Supported Peer Review: Summary and Future Directions
11.6 Does AWE Support Students Taking Ownership of Their Learning?
11.6.1 Ownership of Learning: Summary and Future Directions
11.7 Conclusion
(227 additional contents entries not shown)
ISBN
9781040033241
1040033245
9781003397618
1003397611
9781040033340
1040033342
Statement on responsible collection description
Princeton University Library aims to describe library materials in a manner that is respectful to the individuals and communities who create, use, and are represented in the collections we manage.