Adrienne Decker
Department of Computer Science & Engineering
University at Buffalo
adrienne@cse.buffalo.edu
How Students Measure Up: Creation of an Assessment Tool for CS1
Computing Curricula 2001 (Curricula, 2001), like the curricula before it, does not provide faculty with instructions for how to implement the suggestions and guidelines it contains. Faculty are therefore left to take their own approaches to the material and to invent assignments, lab exercises, and other teaching aids for the specific courses outlined in the curriculum. Whenever a new curricular device is conceived, the natural next step is to investigate whether the innovation actually helps students' understanding of the material. Investigations into some of these innovations have previously been measured by lab grade, overall course grade, resignation rate, or exam grades (Cooper, Dann, & Pausch, 2003; Decker, 2003; Ventura, 2003).
The problem with using these types of metrics in a study is that they are often not proven reliable or valid. Reliability, the degree of consistency among test scores, and validity, the relevance of the metric to the particular skill it is trying to assess, are both essential whenever the results of these metrics are to be analyzed (Kaplan & Saccuzzo, 2001; Marshall & Hales, 1972; Ravid, 1994).
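To make these definitions concrete: in classical test theory, the framework underlying the testing references above (a standard formalization, not quoted from them), an observed score X decomposes into a true score T plus random measurement error E, and reliability is the proportion of observed-score variance attributable to true scores:

    X = T + E, \qquad \rho_{XX'} = \frac{\operatorname{Var}(T)}{\operatorname{Var}(X)}

A reliability near 1 indicates that repeated administrations would rank students consistently; validity has no comparably simple formula and must be argued from the test's content and intended use.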
With all of the claims of innovation in the CS1 curriculum, we need a way of assessing students' comprehension of the core CS1 material. The goal of this work is to create a reliable and validated assessment tool for CS1. The tool will assess the knowledge of a student who has taken a CS1 course using one of the programming-first approaches described in CC2001. The assessment should be independent of the approach used for CS1 and should not rely on testing a student's syntactic ability with a particular language.
Many have argued about the best ways to teach introductory programming, particularly with regard to language and paradigm. Back in the days of heavy Pascal use, Pattis (1993) argued about the appropriate point in the curriculum to teach subprograms. Moving forward a few years, we see Culwin (1999) arguing for how to appropriately teach object-oriented programming, followed by a strong course outline for an objects-first CS1 advocated by Alphonce and Ventura (2002; Ventura, 2003). For these approaches, as for others, there may be strong anecdotal evidence in their favor, but little empirical evidence has been presented as to their real effect on learning the material appropriate for CS1.
The ultimate goal of this research is to create a validated and reliable metric for assessing a student's level of knowledge at the completion of a programming-first CS1. The test should be language- and paradigm-independent. It will then be available not only to assess student progress, but also to gauge particular pedagogical advances and their true value in the classroom.
The current hypotheses are:
At the time of this writing, a proposal has been prepared and is undergoing revision by my committee. An analysis of CC2001 to determine the appropriate topical coverage for the tool is in progress.
Choosing the target audience for the tool has been a challenge. Looking at the various sanctioned methodologies for CS1 given in CC2001, much care was taken to determine where they overlap. The programming-first approaches all have much in common. No such overlap exists between the non-programming-first approaches and the programming-first approaches, or even among the non-programming-first approaches themselves. Therefore, the decision was made to create an assessment tool for the programming-first approaches only.
The test will be validated using an expert review methodology. After the tool is prepared, a pool of experts in the area will be asked to assess the test's appropriateness for students and the clarity and difficulty of its questions.
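The paper does not fix a scoring scheme for these expert ratings. As one illustration only, Lawshe's content validity ratio (CVR) is a standard way to quantify such judgments; the Python sketch below is hypothetical and not part of the proposed methodology.

    # Hypothetical illustration: Lawshe's content validity ratio (CVR).
    # For each question, experts judge whether it is essential to CS1 content.

    def content_validity_ratio(n_essential, n_experts):
        """CVR = (n_e - N/2) / (N/2); ranges from -1 to +1.
        Positive values mean a majority of experts rated the item essential."""
        return (n_essential - n_experts / 2) / (n_experts / 2)

    # E.g., 8 of 10 experts rate a question essential:
    print(content_validity_ratio(8, 10))  # 0.6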
The exam will be field-tested as a final exam for a CS1 course. After the exam has been administered, reliability will be computed using one of the standard statistical methods, either odd-even or split-halves (Kaplan & Saccuzzo, 2001; Marshall & Hales, 1972; Ravid, 1994). A sketch of that computation appears below.
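The following is a minimal Python sketch of the split-halves computation named above, using an odd-even item split and the Spearman-Brown correction described in the cited testing texts. The score matrix and function names are hypothetical stand-ins for the eventual exam data.

    # Minimal sketch: split-half reliability with an odd-even item split
    # and the Spearman-Brown correction. Assumes scores are not constant
    # across students (otherwise the correlation is undefined).

    def pearson_r(x, y):
        """Pearson correlation coefficient between two score lists."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    def split_half_reliability(scores):
        """scores: one row per student, one point value per exam item.
        Splits items into odd- and even-numbered halves, correlates the
        half-test scores, then applies the Spearman-Brown correction to
        estimate the reliability of the full-length test."""
        odd = [sum(row[0::2]) for row in scores]   # items 1, 3, 5, ...
        even = [sum(row[1::2]) for row in scores]  # items 2, 4, 6, ...
        r_half = pearson_r(odd, even)
        return 2 * r_half / (1 + r_half)  # Spearman-Brown prophecy formula

    # Hypothetical item-level scores for five students on a six-item exam:
    exam = [
        [1, 1, 0, 1, 1, 0],
        [1, 0, 0, 1, 0, 0],
        [1, 1, 1, 1, 1, 1],
        [0, 0, 1, 0, 1, 0],
        [1, 1, 1, 0, 1, 1],
    ]
    print(f"Estimated reliability: {split_half_reliability(exam):.2f}")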
Open issues for this research include
The proposal will be defended in January, and data collection will begin in the spring semester.
I hope to gain input and feedback on my research ideas, along with informed guidance on the approach I am taking and suggestions on how to proceed.
Alphonce, C. G., & Ventura, P. R. (2002). Object orientation in CS1-CS2 by design. Paper presented at the 7th annual conference on Innovation and Technology in Computer Science Education, Aarhus, Denmark.
Cooper, S., Dann, W., & Pausch, R. (2003). Teaching objects-first in introductory computer science. Paper presented at the 34th SIGCSE technical symposium on Computer Science Education, Reno, Nevada.
The Joint Task Force on Computing Curricula. (2001). Computing curricula 2001 computer science. IEEE Computer Society & Association for Computing Machinery. Retrieved October 30, 2003, from http://www.computer.org/education/cc2001/final/index.htm
Decker, A. (2003). A tale of two paradigms. Journal of Computing Sciences in Colleges, 19(2), 238-246.
Evans, G. E., & Simkin, M. G. (1989). What best predicts computer proficiency? Communications of the ACM, 32(11), 1322-1327.
Hagan, D., & Markham, S. (2000). Does it help to have some programming experience before beginning a computing degree program? Paper presented at the 5th annual SIGCSE/SIGCUE conference on Innovation and Technology in Computer Science Education.
Kaplan, R. M., & Saccuzzo, D. P. (2001). Psychological Testing: Principles, Applications and Issues (5th ed.). Belmont, California: Wadsworth/Thomson Learning.
Kurtz, B. L. (1980). Investigating the relationship between the development of abstract reasoning and performance in an introductory programming class. Paper presented at the 11th SIGCSE technical symposium on Computer Science Education, Kansas City, Missouri.
Leeper, R. R., & Silver, J. L. (1982). Predicting success in a first programming course. Paper presented at the 13th SIGCSE technical symposium on Computer Science Education, Indianapolis, Indiana.
Marshall, J. C., & Hales, L. W. (1972). Essentials of Testing. Reading, Massachusetts: Addison-Wesley Publishing Co.
Mazlack, L. J. (1980). Identifying potential to acquire programming skill. Communications of the ACM, 23(1), 14-17.
McCracken, M., Almstrum, V., Diaz, D., Guzdial, M., Hagan, D., Kolikant, Y. B.-D., Laxer, C., Thomas, L., Utting, I., & Wilusz, T. (2001). A multi-national, multi-institutional study of assessment of programming skills of first-year CS students. SIGCSE Bulletin, 33(4), 1-16.
Pattis, R. (1993). The "Procedures Early" approach in CS1: A heresy. Paper presented at the 24th SIGCSE technical symposium on Computer Science Education, Indianapolis, Indiana.
Ravid, R. (1994). Practical Statistics for Educators. Lanham: University Press of America.
Ventura, P. R. (2003). On the origins of programmers: Identifying predictors of success for an objects-first CS1. Unpublished doctoral dissertation, University at Buffalo, SUNY, Buffalo.
Wilson, B. C., & Shrock, S. (2001). Contributing to success in an introductory computer science course: A study of twelve factors. Paper presented at the 32nd SIGCSE technical symposium on Computer Science Education, Charlotte, North Carolina.