Posted by Bryan Twarek on May 28, 2020
K-12 teacher takeaways from SIGCSE 2020 Portland
SIGCSE provides an important forum for computer science education research, and it is unfortunate that this year’s conference was canceled. In this four-part series, I’m excited to share findings and practical takeaways relevant to K-12 CS teachers, to help ensure practitioners benefit from this great work. This third part focuses on K-12 assessment.
Parts 1 and 2 of the series introduced a variety of research takeaways and instructional strategies. Part 4 on K-12 curricula, tools, and platforms will be published tomorrow.

Assessment in K-12 computer science is a nascent area of research. Fortunately, several significant studies were featured in the SIGCSE program.

Assessing Middle School Student Projects for Program Evaluation

Yvonne Kao, Irene Nolan, and Andrew Rothman presented their scoring process and rubric for assessing creative student projects as a means of program-level evaluation of student learning. Their rubrics cover a holistic review (program correctness, usability, project scale and complexity, and creativity), programming fundamentals, and programming style (use of comments and code readability).
See an example rubric for one concept (variables) and screenshot of a student project used to evaluate use of variables:

CodeMaster Automated Evaluation of Student Projects

Nathalia da Cruz Alves, Igor Solecki, and their colleagues developed the CodeMaster rubric to automate the analysis of App Inventor and Snap! programs created by students during complex, open-ended learning activities. Their CodeMaster tool automates the performance-based analysis of (1) algorithms and programming concepts and (2) user interface design concepts (i.e., usability and aesthetics of the applications). The team evaluated the reliability and validity of the rubric using thousands of projects from the App Inventor Gallery.
See an excerpt from their rubric used to evaluate use of CS concepts:

See the team’s two papers describing their studies evaluating the visual design and programming concepts and their slide deck.
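
To make the idea of automated, rubric-based scoring concrete, here is a minimal sketch in the spirit of CodeMaster, not its actual implementation; the concept names, thresholds, and 0-3 point scale are all assumptions for illustration:

```python
# Illustrative sketch of rubric-based automated scoring (not the actual
# CodeMaster tool). Concept names, thresholds, and the 0-3 scale are
# invented for demonstration only.

# Rubric: for each concept, ascending thresholds on how many times it
# appears in a project; meeting thresholds[n] earns rubric level n.
RUBRIC = {
    "loops":        [0, 1, 2, 4],
    "conditionals": [0, 1, 2, 4],
    "variables":    [0, 1, 2, 3],
    "events":       [0, 1, 3, 5],
}

def score_concept(count: int, thresholds: list[int]) -> int:
    """Return the highest rubric level whose threshold the count meets."""
    level = 0
    for lvl, needed in enumerate(thresholds):
        if count >= needed:
            level = lvl
    return level

def score_project(concept_counts: dict[str, int]) -> dict[str, int]:
    """Score a project given counts of each concept's occurrences."""
    return {
        concept: score_concept(concept_counts.get(concept, 0), thresholds)
        for concept, thresholds in RUBRIC.items()
    }

# Example: a project with 3 loops, 1 conditional, 2 variables, 4 events.
print(score_project({"loops": 3, "conditionals": 1, "variables": 2, "events": 4}))
# {'loops': 2, 'conditionals': 1, 'variables': 2, 'events': 2}
```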

Personalized Assessment Worksheets for Scratch (3-6)

Jean Salac, Diana Franklin, and their colleagues created the Personalized Assessment Worksheets for Scratch (PAWS) tool, a scalable, automated technique for assessing students’ comprehension of their own code. All upper elementary students receive the same assessment items to measure their understanding of core CS concepts; when possible, the generic items are personalized by integrating segments of each student’s code from their own Scratch projects. See sample items:

Sample question on sequence

Sample question on events

The authors found that:
  • When asked multiple-choice questions about their scripts or partial scripts in which the original meaning is retained, students answer similarly to or better than students receiving generic questions.
  • When explaining their code, students are more likely to answer the question, but they often do not describe the individual blocks as thoroughly as students receiving generic questions. 

See their paper, presentation video, slide deck, and assessment tool that aligns with projects found in the Scratch Act 1 and Scratch Encore curricula.
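
The personalization technique is easy to sketch. The following is a hypothetical illustration, not the PAWS implementation: the template text, the project representation, and the helper names are all invented. A generic item template is filled with a script from the student’s own project when one uses the target block, and falls back to a generic script otherwise:

```python
# Hypothetical sketch of item personalization in the spirit of PAWS
# (not the actual tool). The template, scripts, and helpers are invented.

GENERIC_SNIPPET = ["when green flag clicked", "move 10 steps", "say 'Hello!'"]

TEMPLATE = (
    "Look at this script:\n{script}\n"
    "What happens when this script runs?"
)

def find_script_with(project_scripts, target_block):
    """Return the first of the student's scripts that uses the target block."""
    for script in project_scripts:
        if any(target_block in block for block in script):
            return script
    return None

def personalize_item(project_scripts, target_block="say"):
    """Use the student's own script if one covers the concept; else generic."""
    script = find_script_with(project_scripts, target_block) or GENERIC_SNIPPET
    return TEMPLATE.format(script="\n".join("  " + b for b in script))

# A student project with one script that uses a "say" block:
student_project = [
    ["when green flag clicked", "repeat 4", "say 'Hi there!'", "move 20 steps"],
]
print(personalize_item(student_project))
```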

Measuring Introductory Programming Concepts in Middle School

Shuchi Grover presented her principled approach to designing, piloting, and refining an assessment of middle school introductory programming concepts. Each assessment item targets focal knowledge, skills, and abilities (FKSAs), which are granular learning outcomes. See a partial sample item and its associated FKSAs, with a short code illustration after the list:

Pattern recognition question
Associated FKSAs:
  1. Describe the structural components of a pattern.
  2. Identify a pattern from a real-world phenomenon.
  3. Show that a loop involves a repeating pattern that will terminate under a specified condition or after a certain number of repetitions.
  4. Identify the repeating pattern within a loop.
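
For instance, FKSAs 3 and 4 connect loops to repeating patterns. A minimal illustration (my own example, not drawn from the assessment):

```python
# My own illustration (not from the assessment) of FKSAs 3 and 4:
# the repeating pattern becomes the loop body, and the loop terminates
# either after a set number of repetitions or under a condition.

# The pattern "step, step, turn" as a count-controlled loop:
for _ in range(3):           # terminates after 3 repetitions
    print("step, step, turn")

# ...and as a condition-controlled loop:
turns = 0
while turns < 3:             # terminates when the condition fails
    print("step, step, turn")
    turns += 1
```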

Assessing CT Practices in Upper Elementary

Satabdi Basu and her colleagues presented their computational thinking assessment for 4th to 6th grade students in Hong Kong. Their FKSAs and associated assessment items are based on the framework described by Brennan and Resnick (2012): reusing and remixing; algorithmic thinking; abstraction and modularization; and testing and debugging. Here is an example item for algorithmic thinking:
Robot Algorithm Problem
Associated FKSAs:
  • Ability to summarize the behavior of an algorithm when given a specific set of inputs.
  • Ability to compare the trade-offs between multiple approaches/techniques to solving a problem based on certain evaluation criteria.
The first item asks students which of the three methods will take the robot from the Start square to the Finish square. Subsequent items add constraints: the second asks students to select the method that represents the fastest route from Start to Finish, the third asks them to select the method that incurs the least cost, and the fourth combines the time and cost constraints and asks students to select an option that meets all the given criteria.
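
A small sketch clarifies the structure of this item sequence; the three methods and their time and cost values below are invented for illustration and are not the actual item’s values:

```python
# Illustrative sketch of the robot item's structure. The methods, times,
# and costs are invented values, not the actual assessment item.

methods = {
    "Method 1": {"reaches_finish": True,  "time": 9, "cost": 4},
    "Method 2": {"reaches_finish": True,  "time": 6, "cost": 7},
    "Method 3": {"reaches_finish": False, "time": 5, "cost": 3},
}

# Item 1: which methods take the robot from Start to Finish at all?
valid = {name: m for name, m in methods.items() if m["reaches_finish"]}
print(list(valid))                                 # ['Method 1', 'Method 2']

# Item 2: of those, which is fastest?
print(min(valid, key=lambda n: valid[n]["time"]))  # Method 2

# Item 3: of those, which incurs the least cost?
print(min(valid, key=lambda n: valid[n]["cost"]))  # Method 1

# Item 4: which meets combined criteria, e.g., time <= 9 and cost <= 5?
print([n for n, m in valid.items() if m["time"] <= 9 and m["cost"] <= 5])
# ['Method 1']
```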

Measuring Data and Analysis Concepts

Basu and her colleagues also designed a playful, curriculum-neutral assessment aligned with the Data and Analysis concept (grades 6-8) from the K-12 Computer Science Framework.

Computational Thinking Concepts and Skills Test

Markeya Peteranetz, Patrick Morrow, and Leen-Kiat Soh developed and validated the Computational Thinking Concepts and Skills Test (CTCAST) to measure core computational thinking knowledge among undergraduate students. Assessed concepts include problem decomposition, pattern recognition, abstraction, generalization, algorithm design, and evaluation. They compared scores to another multiple-choice assessment they developed, the Nebraska Assessment of Computing Knowledge (NACK), which contains both conceptual and application questions. While the CTCAST has only been administered to undergraduates, the authors believe it might also be useful for high school students. See two example items:
Abstraction
Assume that the U.S. Census Bureau wants to compute the average household income for different household sizes and income categories. Which of the following data are relevant to this computation?
I. Yearly household income for each household 
II. The number of members for each household 
III. Gender of each household member 
IV. Job/employment status of each household member 
V. Age of each household member 
A. I & II only 
B. I, II, & IV only 
C. I, II, & V only 
D. All data
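
The abstraction being tested is selecting only the data the computation needs. As a sketch (with invented records, and grouping by household size only for brevity), the calculation uses just each household’s yearly income and member count; the other fields are ignored:

```python
# Sketch of the computation behind the item, with invented records.
# Only yearly income (I) and household size (II) are used; gender, job
# status, and age are irrelevant to average income by household size.
from collections import defaultdict

households = [
    {"income": 52_000, "size": 2, "ages": [34, 36]},        # extra fields
    {"income": 71_000, "size": 4, "ages": [40, 41, 9, 7]},  # are ignored
    {"income": 48_000, "size": 2, "ages": [66, 64]},
]

totals, counts = defaultdict(int), defaultdict(int)
for h in households:
    totals[h["size"]] += h["income"]
    counts[h["size"]] += 1

averages = {size: totals[size] / counts[size] for size in totals}
print(averages)  # {2: 50000.0, 4: 71000.0}
```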
Problem Decomposition
Consider the following modules and their required inputs, outputs, and time to execute:

Module | Inputs | Outputs | Time Required (s)
A      | p      | q       | 5
B      | s, r   | t       | 3
C      | q, t   | u       | 7
D      | p, u   | v       | 2
Given the inputs p, s, r, and the above four modules, what is the minimal number of seconds needed to generate the output v?
A. 17 seconds 
B. 12 seconds 
C. 14 seconds 
D. 10 seconds
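
To see the reasoning the item demands, here is a small sketch that computes each module’s earliest finish time, assuming a module may run as soon as all of its inputs are available and independent modules may run in parallel (that assumption is mine; the item leaves the execution model implicit):

```python
# Sketch of the scheduling reasoning behind this item, assuming a module
# can start once all its inputs exist and independent modules may run in
# parallel. (That assumption is mine; the item leaves it implicit.)

modules = {            # name: (inputs, output, seconds)
    "A": (("p",),     "q", 5),
    "B": (("s", "r"), "t", 3),
    "C": (("q", "t"), "u", 7),
    "D": (("p", "u"), "v", 2),
}

available = {"p": 0, "s": 0, "r": 0}   # given inputs exist at time 0

# Repeatedly run any module whose inputs are all available; its output
# becomes available at (latest input time) + (module's running time).
progress = True
while progress:
    progress = False
    for inputs, output, seconds in modules.values():
        if output not in available and all(i in available for i in inputs):
            available[output] = max(available[i] for i in inputs) + seconds
            progress = True

print(available["v"])  # 14 under the parallel-execution reading
```

Under this parallel reading, A (5 s) and B (3 s) overlap, C finishes at 12 s, and D at 14 s; under a strictly sequential reading, the total is instead 5 + 3 + 7 + 2 = 17 s.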

Assessing Creativity in K-12 Computing Classrooms

While not presented at SIGCSE, Karen Brennan, Paulina Haduong, and Emily Veno from the Creative Computing Lab just published a new report called Assessing Creativity in Computing Classrooms. They interviewed 80 K-12 computing teachers to answer a guiding question: How do K-12 computing teachers assess creative programming work? Across these interviews, they found five key principles that guide teachers’ assessment:
  • Foster a classroom culture that values assessment.
  • See student process as well as product.
  • Understand what is creative for the student. 
  • Support students by incorporating feedback from multiple perspectives (e.g., peers, families, other authentic audiences). 
  • Scaffold opportunities for students to develop judgment of their own work. 
Their report includes four detailed case studies and a selection of 50 assessments. For example, a high school teacher gathers multiple perspectives on projects, including the students’ own self-reflections, peer assessment, and evaluation from visiting community members:
peer and community feedback forms

This is only a small glimpse of the content prepared for SIGCSE 2020. See Parts 1 and 2 synthesizing a variety of research takeaways and instructional strategies. Part 4 on curricula, tools, and platforms will be published tomorrow. If you want to learn more, view SIGCSE 2020 Online and the entire Proceedings in the ACM Digital Library, which is currently free and open to everyone through June 30, 2020.

Please let us know what you find useful and what we’ve missed by writing to @csteachersorg and @btwarek.