Perspectives on the Integration of Technology and Assessment

From Section:
ICT & Teaching
Published:
Nov. 01, 2010

Source: Journal of Research on Technology in Education, Vol. 43, No. 2, p.119–134. (Winter, 2010)
(Reviewed by the Portal Team)

This article examines several emerging cases of technology use in educational assessment, viewed from the perspective of innovation and support for teaching and learning.

The assessment cases were drawn from contexts that include large-scale testing programs as well as classroom-based programs.

Technology-Enabled Assessments in State, National, and International Assessment Programs
An example of an international assessment program is the 2006 Programme for International Student Assessment (PISA) pilot, which fielded a Computer-Based Assessment of Science to measure knowledge and inquiry processes not assessed in the paper-based booklets.

An example at the state level is Minnesota's online science test, which contains tasks engaging students in simulated laboratory experiments or in investigations of phenomena such as weather or the solar system.

Technology-Enabled Assessments for Classroom Instructional Uses
An example of a classroom-based program is the ASSISTment system, a pseudo-tutor for middle school mathematics. Students must eventually reach the correct answer, and scaffolds/hints are limited to avoid giving away the answer (Feng, Heffernan, & Koedinger, 2006, 2009). Teachers receive feedback on student and class progress both on general summative measures (e.g., time to completion, percent correct) and on more specific knowledge components.
The results of this program revealed that more than 60% of students self-report that the ASSISTments help them with the standardized tests, and there is some evidence that scaffolds help students transfer knowledge better than hints, especially on difficult problems.
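The kind of teacher feedback described above — percent correct and time spent, aggregated per student and per knowledge component — can be illustrated with a minimal sketch. The data structures and field names below are hypothetical, chosen for illustration; this is not the actual ASSISTment implementation.

```python
from collections import defaultdict

# Hypothetical log of student responses; field names are illustrative,
# not drawn from the real ASSISTment data model.
responses = [
    # (student, knowledge_component, correct, seconds_spent)
    ("ana", "fractions", True,  45),
    ("ana", "fractions", False, 80),
    ("ana", "ratios",    True,  30),
    ("ben", "fractions", True,  50),
    ("ben", "ratios",    False, 95),
]

def summarize(responses):
    """Aggregate percent correct and total time per (student, component)."""
    stats = defaultdict(lambda: {"attempts": 0, "correct": 0, "seconds": 0})
    for student, component, correct, seconds in responses:
        s = stats[(student, component)]
        s["attempts"] += 1
        s["correct"] += int(correct)
        s["seconds"] += seconds
    # Convert raw counts into the summary measures a teacher report might show.
    return {
        key: {
            "percent_correct": 100 * s["correct"] / s["attempts"],
            "total_seconds": s["seconds"],
        }
        for key, s in stats.items()
    }

report = summarize(responses)
# e.g., report[("ana", "fractions")] gives 50.0% correct over 125 seconds
```

A real system would also track which scaffolds or hints each student used, allowing the report to distinguish unaided mastery from assisted success.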

Assessment programs should be designed to produce results that allow educators and policy makers to address a variety of questions about how a nation, state, district, school, program, group, or individual is performing.

Conclusions and Implications

In numerous areas of the curriculum, information technologies are changing what is taught, when and how it is taught, and what students are expected to be able to do to demonstrate their knowledge and skill. This situation creates opportunities to center curriculum, instruction, and assessment around cognitive principles. With technology, assessment can become richer, timelier, and more seamlessly interwoven with multiple aspects of curriculum and instruction.
Thus, technology removes some of the constraints that previously made high-quality formative assessment of complex performances difficult or impractical for a classroom teacher. The examples described above illustrate how technology can help infuse ongoing formative assessment into the learning process.

Advances in curriculum, instruction, assessment, and technology are likely to continue to move educational practice toward more individualized and mastery-oriented approaches to learning, yet at the same time intertwine networking with resources, experts, and peers in problems requiring more complex forms of reasoning, problem solving, and collaboration.

Technology could offer ways of creating, over time, a complex stream of data about how students think and reason, independently and collaboratively, while engaged in important learning activities.
We could extract information for assessment purposes from this stream and use it to serve both classroom and external assessment needs, including providing customized feedback to students for reflection about their knowledge and skills, learning strategies, and habits.

Extensive technology-based systems that link curriculum, instruction, and assessment at the classroom level might enable a shift from today’s assessment systems to a balanced design that would ensure the three critical features of comprehensiveness, coherence, and continuity.

References
Feng, M., Heffernan, N. T., & Koedinger, K. (2006). Predicting state test scores better with intelligent tutoring systems: Developing metrics to measure assistance required. In M. Ikeda, K. Ashley, & T-W. Chan (Eds.), Proceedings of the Eighth International Conference on Intelligent Tutoring Systems (pp. 31–40). Berlin: Springer-Verlag.
Feng, M., Heffernan, N. T., & Koedinger, K. R. (2009). Addressing the assessment challenge in an online system that tutors as it assesses. User Modeling and User-Adapted Interaction: The Journal of Personalization Research (UMUAI), 19(3), 243–266.


Updated: Jan. 17, 2017
Keywords:
Critical thinking | Educational evaluation | Formative evaluation | Program effectiveness | Students' evaluation | Summative evaluation | Technology integration