The 11th Workshop on Innovative Use of NLP for Building Educational Applications

NAACL 2016 Workshops



Conference: NAACL 2016

Organization: Joel Tetreault (Yahoo Labs), Jill Burstein (Educational Testing Service), Claudia Leacock (Consultant), Helen Yannakoudakis (University of Cambridge)

Contact Email: bea.nlp.workshop@gmail.com

Date: June 16, 2016

Venue: San Diego, CA, USA


Sponsors

We are pleased to announce that Educational Testing Service, Grammarly, Turnitin Lightside Labs, Pacific Metrics, Cambridge Assessments, and American Institutes for Research are gold level sponsors of the BEA11 workshop, and that iLexIR and Cognii are our silver level sponsors! If you or your company or institution would be interested in sponsoring the BEA11, please send us an email at bea.nlp.workshop@gmail.com. Sponsorship goes toward subsidizing the dinner for students attending the workshop and free t-shirts with registration.

Gold Level Sponsors

Educational Testing Service, Grammarly, Turnitin Lightside Labs, Pacific Metrics, Cambridge Assessments, American Institutes for Research

Silver Level Sponsors

iLexIR, Cognii

Workshop Description

The BEA Workshop is a leading venue for NLP innovation for educational applications. It is one of the largest one-day workshops in the ACL community. The workshop's continuous growth illustrates an alignment between societal need and technological advances. NLP capabilities now support an array of learning domains, including writing, speaking, reading, science, and mathematics. Within these domains, the community continues to develop and deploy innovative NLP approaches for use in educational settings. In the writing and speech domains, automated writing evaluation (AWE) and speech scoring applications, respectively, are commercially deployed in high-stakes assessment and in instructional contexts, including Massive Open Online Courses (MOOCs) and K-12 settings. Commercially deployed plagiarism detection is also prevalent in K-12 and higher education settings. The current educational and assessment landscape in K-12 and higher education fosters a strong interest in technologies that yield analytics to support proficiency measures for complex constructs across learning domains. For writing, there is a focus on innovation that supports writing tasks requiring source use, argumentative discourse, and factual content accuracy. For speech, there is an interest in advancing automated scoring to include the evaluation of discourse and content features in responses to spoken assessments. General advances in speech technology have promoted a renewed interest in spoken dialog and multimodal systems for instruction and assessment. The explosive growth of mobile, game-based, and simulation applications for instruction and assessment is another area where NLP can play a large role.

The use of NLP in educational applications has gained visibility outside of the NLP community. First, the Hewlett Foundation reached out to the public and private sectors and sponsored two competitions: one for automated essay scoring and the other for scoring of short response items. The motivation driving these competitions was to engage the larger scientific community in this enterprise. MOOCs are now also beginning to incorporate AWE systems to manage the thousands of assignments that may be received during a single course. Learning @ Scale is a relatively new venue for NLP research in education. Another breakthrough for educational applications within the CL community is the presence of a number of shared-task competitions over the last four years, including three shared tasks on grammatical error correction. In 2014 alone, there were four shared tasks in NLP/education-related areas. Most recently, the 2015 ACL-IJCNLP Workshop on Natural Language Processing Techniques for Educational Applications had a shared task on Chinese error diagnosis. All of these competitions have increased the visibility of, and interest in, our field.

The workshop will have oral presentation sessions and a large poster session in order to maximize the amount of original work presented. This year, we are planning an invited industry panel composed of representatives of companies that work in the NLP and education space. We expect that the workshop will continue to expose the NLP community to technologies that identify novel opportunities for the use of NLP in education, in English and in languages other than English. The workshop will solicit both full papers and short papers for either oral or poster presentation. We will solicit papers that incorporate NLP methods, including, but not limited to: automated scoring of open-ended textual and spoken responses; game-based instruction and assessment; intelligent tutoring; peer review; grammatical error detection; learner cognition; spoken dialog; multimodal applications; tools for teachers and test developers; and use of corpora. Research that incorporates NLP methods for use with mobile and game-based platforms will be of special interest.


AESW Shared Task

We are pleased to announce that the first edition of the "Automated Evaluation of Scientific Writing" (AESW) Shared Task on grammatical error detection will be co-located with BEA11 this year. The Shared Task is organized independently of the BEA11 workshop. System description papers accepted to the AESW Shared Task will be presented as posters at the BEA poster session. In addition, the AESW organizers will summarize the results of the Shared Task in an oral presentation during the workshop. For more information on the task, as well as important dates and submission information, please go to: http://textmining.lt/aesw/index.html. Registration closes February 1st.

Submission Information

We will be using the NAACL Submission Guidelines for the BEA11 Workshop this year. Authors are invited to submit full papers of up to 9 pages of content, with up to 2 additional pages for references. We also invite short papers of up to 5 pages of content, with up to 2 additional pages for references. Please note that, unlike in previous years, final, camera-ready versions of accepted papers will not be given an additional page to address reviewer comments.

Authors of papers that describe systems are also invited to give a demo of their system. If you would like to present a demo in addition to presenting the paper, please make sure to select either "full paper + demo" or "short paper + demo" under "Submission Category" on the START submission page.

Previously published papers cannot be accepted. The submissions will be reviewed by the program committee. As reviewing will be blind, please ensure that papers are anonymous. Self-references that reveal the author's identity, e.g., "We previously showed (Smith, 1991) ...", should be avoided. Instead, use citations such as "Smith previously showed (Smith, 1991) ...".

We have also included a conflict-of-interest field in the submission form. You should mark all potential reviewers who have been authors on the paper, are from the same research group or institution, or have seen versions of the paper or discussed it with you.

Please use the NAACL style sheets for composing your paper: http://naacl.org/naacl-pubs/.

We will be using the START conference system to manage submissions: https://www.softconf.com/naacl2016/BEA11/.

Important Dates

Presentation Information

Oral Presentations: Long papers accepted for oral presentation are allotted 20 minutes for the talk and 5 minutes for questions. Short papers accepted for oral presentation are allotted 15 minutes for the talk and 5 minutes for questions.

Poster Presentations: All papers accepted for a poster presentation will be presented in the session after lunch, between 2:00 and 3:30. The posterboards will be self-standing, on top of tables (leaving room for laptops, business cards, handouts, etc.). The posterboards measure 36 inches high by 48 inches wide. Double-sided tape, pushpins, etc. for affixing the posters to the boards will be provided.

Workshop Program


BEA11 Proceedings

8:45 - 9:00 Loading in of Oral Presentations
9:00 - 9:15 Opening Remarks
9:15 - 9:40 The Effect of Multiple Grammatical Errors on Processing Non-Native Writing
Courtney Napoles, Aoife Cahill and Nitin Madnani
9:40 - 10:05 Text Readability Assessment for Second Language Learners
Menglin Xia, Ekaterina Kochmar and Ted Briscoe
10:05 - 10:30 Automatic Generation of Context-Based Fill-in-the-Blank Exercises Using Co-occurrence Likelihoods and Google n-grams
Jennifer Hill and Rahul Simha
10:30 - 11:00 Break
11:00 - 11:25 Automated classification of collaborative problem solving interactions in simulated science tasks
Michael Flor, Su-Youn Yoon, Jiangang Hao, Lei Liu and Alina von Davier
11:25 - 11:50 Computer-assisted stylistic revision with incomplete and noisy feedback. A pilot study
Christian M. Meyer and Johann Frerik Koch
11:50 - 12:15 A Report on the Automatic Evaluation of Scientific Writing Shared Task
Vidas Daudaravicius, Rafael E. Banchs, Elena Volodina and Courtney Napoles
12:25 - 2:00 Lunch
2:00 - 3:30 BEA11 Poster and Demo Session
2:00 - 2:45 BEA11 Poster and Demo Session A
Bundled Gap Filling: A New Paradigm for Unambiguous Cloze Exercises
Michael Wojatzki, Oren Melamud and Torsten Zesch
Unsupervised Modeling of Topical Relevance in L2 Learner Text
Ronan Cummins, Helen Yannakoudakis and Ted Briscoe
Pictogrammar: an AAC device based on a semantic grammar
Fernando Martínez-Santiago, Miguel Ángel García Cumbreras, Arturo Montejo Ráez and Manuel Carlos Díaz Galiano
Detecting Context Dependence in Exercise Item Candidates Selected from Corpora
Ildikó Pilán
Model Combination for Correcting Preposition Selection Errors
Nitin Madnani, Michael Heilman and Aoife Cahill
Automated scoring across different modalities
Anastassia Loukina and Aoife Cahill
Predicting the Spelling Difficulty of Words for Language Learners
Lisa Beinborn, Torsten Zesch and Iryna Gurevych
Topicality-Based Indices for Essay Scoring
Beata Beigman Klebanov, Michael Flor and Binod Gyawali
Shallow Semantic Reasoning from an Incomplete Gold Standard for Learner Language
Levi King and Markus Dickinson
Characterizing Text Difficulty with Word Frequencies
Xiaobin Chen and Detmar Meurers
The NTNU-YZU System in the AESW Shared Task: Automated Evaluation of Scientific Writing Using a Convolutional Neural Network
Lung-Hao Lee, Bo-Lin Lin, Liang-Chih Yu and Yuen-Hsien Tseng
Feature-Rich Error Detection in Scientific Writing Using Logistic Regression
Madeline Remse, Mohsen Mesgar and Michael Strube
UW-Stanford System Description for AESW 2016 Shared Task on Grammatical Error Detection
Dan Flickinger, Michael Goodman and Woodley Packard
2:45 - 3:30 BEA11 Poster and Demo Session B
Automatically Scoring Tests of Proficiency in Music Instruction
Nitin Madnani, Aoife Cahill and Brian Riordan
Automatically Extracting Topical Components for a Response-to-Text Writing Assessment
Zahra Rahimi and Diane Litman
Cost-Effectiveness in Building a Low-Resource Morphological Analyzer for Learner Language
Scott Ledbetter and Markus Dickinson
Spoken Text Difficulty Estimation Using Linguistic Features
Su-Youn Yoon, Yeonsuk Cho and Diane Napolitano
Enhancing STEM Motivation through Personal and Communal Values: NLP for Assessment of Utility Value in Student Writing
Beata Beigman Klebanov, Jill Burstein, Judith Harackiewicz, Stacy Priniski and Matthew Mulholland
Evaluation Dataset (DT-Grade) and Word Weighting Approach towards Constructed Short Answers Assessment in Tutorial Dialogue Context
Rajendra Banjade, Nabin Maharjan, Nobal Bikram Niraula, Dipesh Gautam, Borhan Samei and Vasile Rus
Candidate re-ranking for SMT-based grammatical error correction
Zheng Yuan, Ted Briscoe and Mariano Felice
Augmenting Course Material with Open Access Textbooks
Smitha Milli and Marti A. Hearst
Exploring the Intersection of Short Answer Assessment, Authorship Attribution, and Plagiarism Detection
Björn Rudzewitz
Linguistically Aware Information Retrieval: Providing Input Enrichment for Second Language Learners
Maria Chinkina and Detmar Meurers
Combined Tree Kernel-based classifiers for Assessing Quality of Scientific Text
Liliana Mamani Sanchez and Hector-Hugo Franco-Penya
Combining Off-the-shelf Grammar and Spelling Tools for the Automatic Evaluation of Scientific Writing (AESW) Shared Task 2016
René Witte and Bahar Sateli
Sentence-Level Grammatical Error Identification as Sequence-to-Sequence Correction
Allen Schmaltz, Yoon Kim, Alexander M. Rush and Stuart Shieber
3:30 - 4:00 Break
4:00 - 4:20 Sentence Similarity Measures for Fine-Grained Estimation of Topical Relevance in Learner Essays
Marek Rei and Ronan Cummins
4:20 - 4:45 Insights from Russian second language readability classification: complexity-dependent training requirements, and feature evaluation of multiple categories
Robert Reynolds
4:45 - 5:10 Investigating Active Learning for Short-Answer Scoring
Andrea Horbach and Alexis Palmer
5:10 - 5:25 Closing Remarks
6:30 - Post-workshop dinner
Stone Brewing World Bistro & Gardens (Liberty Station) at 2816 Historic Decatur Road, Ste 116

Organizing Committee

BEA11 Workshop


BEA Organization

Program Committee

Related Links

Other Upcoming Educational/NLP Events