
Digital Assessment Final Project

Introduction


During the Digital Assessment course, I learned many valuable concepts, including what an assessment is, different ways to assess, standardized testing laws and the STAAR, educational transformation through digital assessment, standards-based grading, data collection and instruction, and cheating. At some points, I was challenged to try new things or to think about familiar things in new ways. At other times, I felt affirmed that I was doing things right in my classroom.



Assessment is a word that people define in various ways. To some, it is a unit or year-end test; to others, it is a continuous process of evaluating learning. It is both, and each of these ideas has a name: summative assessments are the end-of-unit or standardized tests, while formative assessments are done along the way to check for learning during the process (Stiggins, 2005).

When taking a course on assessment, one must examine the driver that has put assessments and data in such a prominent spot in education. I spent time exploring standardized testing program requirements and implementation at the federal and state levels. While this information did not directly impact my instructional plans, it gave me valuable background. As a result of this course, my aim is to create the type of assessments described in the first half of the following quote from the U.S. Department of Education (2016):

High-quality assessments are essential to effectively educating students, measuring progress, and promoting equity. Done well and thoughtfully, they provide critical information for educators, families, the public, and students themselves and create the basis for improving outcomes for all learners. Done poorly, in excess, or without clear purpose, however, they take valuable time away from teaching and learning, and may drain creative approaches from our classrooms.


Important Topics for My Educational Practice

I used to think formative assessments were quizzes given along the way. Now I know these assessments can take other forms, such as labs, performance assessments, oral questioning, presentations, projects, bell-ringers, and exit tickets. I do many of these things, but I had never classified them as “assessments” because they were not quizzes. Using this variety of techniques allows me to identify student learning, give feedback, and correct misconceptions, and it gives students different ways to demonstrate their learning (Powell, 2012; Koch, 2012). I often get more valuable information from these than from summative tests because of the opportunity to identify and correct misconceptions in the moment.

I discovered several new digital assessment applications to add to my instructional resource bag. These applications provide a variety of ways to do formative assessments, and they provide readily accessible feedback to the student and the teacher. I have already implemented a few of them in my instruction. For example, I have started using Formative as a three-question bell-ringer quiz. These quizzes cover the content from the previous day, and I have been able to use that data to remediate on the spot when needed. So far, it seems to be working well. Hopefully, this targeted approach to daily assessment will pay off when it is time for unit tests, benchmarks, and the STAAR.

As a result of this course, I now know more about how to give better summative assessments and how to prepare students for the STAAR test. Multiple-choice tests may not give a true picture of what students actually know. Tests should include a variety of question types, including forced choice, short answer, and performance tasks, in order to get a more accurate picture (Powell, 2012). In 2023, the STAAR test format will change, and at least 25% of the test will consist of non-multiple-choice questions. As a result, teachers will have to change the format and increase the rigor of their tests, which will force students to think carefully about the questions. This change is something we can prepare for in advance by using some of the test preparation strategies suggested by Powell (2012), including 1) familiarizing students with test content by using released questions and 2) practicing the question formats and online administration ahead of time.

After learning about the different types of assessments and the standardized testing programs, I examined how digital assessment can change education. Several of the course resources detailed ways digital assessments can be used in transformational ways. Based on my own experiences, I can attest that their premise is true. I have seen how digital assessments and assignments can 1) reduce the teaching workload, 2) save time through automated grading, 3) streamline feedback, and 4) easily gather data to identify areas where remediation is needed. Automated grading also reduces the chance of human error, allowing for more consistent grading (JISC, 2010; UNESCO, 2018). Digital assessments have also allowed me to differentiate assignments and tests, provide accommodations and modifications, and use features that deter cheating and ensure test security (Dembitzer et al., 2017).

No study of digital assessment would be complete without considering online cheating. Cheating is not a new phenomenon, but with multiple forms of technology and the internet at their fingertips, students face constant temptation to cheat. A common form occurs when students cut and paste from the internet without a citation. I often see students try to cheat during online tests. To prevent (or try to prevent) this, I use several strategies: 1) having the testing program scramble answer choices or questions, 2) using the monitoring program provided by the school to proctor the test, 3) requiring students to put notes and other technology away in a secure location, and 4) including subjective questions that require original thought. In our readings, Feeney (2017) suggested a few more strategies worth implementing going forward, such as creating a culture of honesty, emphasizing academic integrity, discussing consequences, defining cheating, and giving shorter, more frequent quizzes.

The standardized tests mandated by ESSA are standards-based tests, which are designed to ensure that all students achieve a minimum level of competency (Stiggins, 2005). One idea from the readings that I had never considered before is that students, like educators, are data collectors. According to Stiggins (2005), students collect and assess data from the earliest grades, and they use their data to decide “whether success is within or beyond reach, whether the learning is worth the required effort, and so whether to try or not.” In order for all students to achieve the minimum competency required by the state, we must consider how assessment can be used to help all students believe they can succeed (Stiggins, 2005). Several methods were proposed as ways to assess competency levels, including 1) using summative tests as formative benchmarks, 2) the idea of assessment FOR learning, and 3) using standards-based grades (Stiggins, 2005; Townsley, 2014).

Assessment FOR learning utilizes a variety of ongoing assessment methods and involves students in the process: they track their progress and reflect on it to evaluate their own mastery levels (Stiggins, 2005). Giving students the opportunity to track their own data is something I had not done before, and it is definitely something I plan to incorporate into my classroom going forward.

Standards-based grading is a concept I had only heard about before this course. Standards-based grades are based solely on whether students have attained proficiency on specific standards (Townsley, 2014). When I was a traditional student, I worried only about my GPA. As an adult student, my goal is to learn as much as possible, and grades are the natural outflow of that. I have often wondered how much more I could have learned when I was younger if I had focused on the learning instead of the grade. Standards-based grading intrigues me because it may be the way to get students to turn that corner. However, implementing it would take a massive philosophy shift in my school.


Data Driven Assessment

A major focus of the course that tied many of the previously mentioned topics together was the process outlined by Paul Bambrick-Santoyo in his book Driven by Data. The process described in this book (and briefly summarized in the paragraphs below) can be used for tracking learning and to help students reach desired competency levels through all types of assessments. There are four key components: 1) assessment, 2) analysis, 3) action, and 4) culture.

An initial assessment is required to determine where students’ strengths and weaknesses lie, followed by cumulative-to-date interim assessments every six to eight weeks. Each assessment should be aligned to state and college-readiness standards, and it should be written to the same level of rigor as the end-goal test. Teachers should create or have access to the assessments from the beginning, so as to have a clear view of the goal, and students should know the end goal as well (pp. 11-13, 19, 28-29).

After each assessment, the resulting data is analyzed, and an action plan to address areas needing remediation is developed. Analysis of interim assessment data is important because it identifies student strengths and weaknesses and allows goals to be set. Effective analysis includes four levels: 1) question level, 2) standards level, 3) individual student, and 4) whole class (p. 41). Ideally, the data report should be created within 48 hours and be just one page per class, because longer reports are less likely to be used. The test itself should also be at hand while the analysis is being done (pp. 44, 47, 50). Bambrick-Santoyo recommends a simplified report format (p. 42), and I will use it for my benchmark exam.
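The first two of these levels can be sketched in a few lines of code. This is only an illustration: the student names, scores, TEKS labels, and table layout below are invented for the sketch and are not Bambrick-Santoyo's template or my actual class data.

```python
# Each row: one student's scores per question (1 = correct, 0 = incorrect).
results = {
    "Ana":  [1, 1, 0, 1],
    "Ben":  [1, 0, 0, 1],
    "Cara": [1, 1, 1, 1],
}

# Which standard each question assesses (hypothetical TEKS labels).
question_standards = ["8.6A", "8.6A", "8.6B", "8.6B"]

num_students = len(results)
num_questions = len(question_standards)

# Question-level analysis: percent of the class answering each question correctly.
question_pct = [
    sum(scores[q] for scores in results.values()) / num_students * 100
    for q in range(num_questions)
]

# Standards-level analysis: average mastery across the questions for each standard.
standard_pct = {}
for std in set(question_standards):
    qs = [q for q, s in enumerate(question_standards) if s == std]
    standard_pct[std] = sum(question_pct[q] for q in qs) / len(qs)

for std, pct in sorted(standard_pct.items()):
    print(f"{std}: {pct:.0f}% mastery")
```

The individual-student and whole-class levels would follow the same pattern, aggregating by row and over the whole table instead of by column.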

Action plans should be created during analysis meetings based on what the data shows. They should include new strategies and timelines for re-teaching difficult standards, and both administrators and teachers should do the analysis before the meeting where the data is discussed (pp. 44, 47, 50). The author included several pages of strategies that could be used in any content area; I appreciated this and got some fresh ideas to try in my own class. In agreement with the standards-based grading philosophy, he also recommended including students in the action plans by giving them a template to help them track and assess their own mastery levels (pp. 96, 99). I was intrigued by the template provided in the book, and I plan to create one to use with my students on the upcoming semester exams.

The culture component of the data-driven instructional model is built through a well-trained leadership team, initial and ongoing professional development, a calendar planned from the beginning to include adequate time for all components, and a willingness to borrow from the success of others (pp. 106, 119). As a teacher, I do not have much control over most of these drivers at the school level. However, I can do my own analysis, create my own action plans, and build a data-driven culture in my own classroom.

In the past, I have done many of the things recommended in this book on my own, just not to the degree specified there. This is an area where I could grow.


Data, Data and More Data

After considering data-driven instruction, I reviewed a related concept called decision-driven data collection. Instead of collecting data at the end to drive decisions for the future, it makes more sense to state the goals up front and then generate an action plan to achieve them that includes specific, measurable benchmarks along the way. Data collection would then be limited to what measures those goals and benchmarks. This could solve one of the problems with our data-saturated system, because the amount of data would be manageable and relevant to teachers in real time. If the goals and benchmarks have not been met, it will be easy to see, and remediation can happen immediately (McGraw Hill PreK-12, 2017).

Whether approached from a data-driven or a decision-driven data collection model, the truth is that there is a ton of data on students today. This is a double-edged sword. On the positive side, data can be used to personalize instruction and learning, provide responsive assessments, encourage collaborative learning, promote more engaging pedagogy, and support longitudinal analyses that track student growth over time. On the other hand, privacy concerns around student data are a valid issue that must be considered (Cope & Kalantzis, 2016).


Application

During this course, I had the opportunity to create a digital assessment aligned to selected TEKS. The quiz had varied question types, and my classmates were given the opportunity to take it; one of them did so. Using the Canvas LMS, I was able to score the quiz mostly automatically, reviewing only the subjective questions. Then I analyzed the results. I could not really do a question-level analysis for the class since I only had one student, but I could analyze performance on the selected standards. Four of the five tested standards showed 100% mastery; the remaining standard yielded a 75% mastery score (see table below). There were two questions for this standard, and if this were a real class, I would look more closely at the two-part question that students missed more frequently to see what may have gone wrong. Then I would address the misconception, re-teach the concept, and assess it again in following units where it naturally surfaces. The preparation document for this quiz can be found here.


To get a little more practice, I looked back at the initial assessment I gave my students in August and identified some of the standards that have been taught since then. I compared initial average scores for those standards with the average scores from unit tests (see table below). The tests were given on Google Forms, so the data analysis capabilities are limited: I can easily do question and whole-class analyses, and if I retroactively link a standard to each question, I can do an overall standards-level analysis, but an individual student analysis for each standard and question would take more time than I will ever have available. My district currently uses Schoology but does not pay for the extra assessment features that would make this type of analysis easy; maybe in the future I can convince them of the need. The table below is a very basic summary of the standards analysis I was able to complete. There was obviously great improvement, but most standards are still not up to mastery level for the whole class. Fortunately, the five standards that are not quite there yet all have natural overlaps with future units. I can look more closely at the questions where students did not do well and identify what went wrong before we get to the related unit. Remediation will also be built into future units and into the semester and STAAR test reviews.
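The growth check behind that comparison can be sketched as follows. The standard labels, the averages, and the 80% mastery threshold here are all invented for illustration; they are not my actual class data.

```python
# Hypothetical initial-assessment vs. unit-test class averages per standard.
initial_avg = {"8.5A": 42, "8.5B": 35, "8.6A": 50, "8.6B": 28, "8.6C": 45}
unit_avg = {"8.5A": 85, "8.5B": 74, "8.6A": 78, "8.6B": 70, "8.6C": 88}

MASTERY = 80  # assumed whole-class mastery threshold (percent)

# Show growth per standard and flag those still below mastery for remediation.
needs_remediation = []
for std in sorted(initial_avg):
    growth = unit_avg[std] - initial_avg[std]
    below = unit_avg[std] < MASTERY
    if below:
        needs_remediation.append(std)
    flag = "  <- remediate in overlapping unit" if below else ""
    print(f"{std}: {initial_avg[std]}% -> {unit_avg[std]}% (+{growth}){flag}")
```

The flagged standards are the ones I would carry forward into the overlapping units and the semester and STAAR reviews.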


Conclusion

This has been a very interesting and eye-opening course. I have already been able to implement some of what I learned, with success. I found myself discussing the concepts from this class with colleagues and administrators at my school. I can see that if there were a focused and united effort to improve how we do assessments as a school, it would make a difference in our students’ learning and performance. I work in a rural school with many students of low socio-economic status. Many of them have never experienced much success personally, nor have they seen others do so. Some of the concepts I have learned here could turn that around. This has left me wondering how I can be an agent of change in my district in this area.




Works Cited

Bambrick-Santoyo, P. (2010). Driven by data: a practical guide to improve instruction. San Francisco, CA: Jossey-Bass.


Cope, B., & Kalantzis, M. (2016). Big Data Comes to School: Implications for Learning, Assessment, and Research. AERA Open. https://doi.org/10.1177/2332858416641907


Dembitzer, L., Zelikovitz, S., & Kettler, R. (2017). Designing computer-based assessments: Multidisciplinary findings and student perspectives. International Journal of Educational Technology, 4(3), 20-31.


Feeney, J. (2017). How to prevent cheating during online tests. Retrieved November 8, 2018, from https://www.schoology.com/blog/how-prevent-cheating-during-online-tests


JISC. (2010). Effective assessment in a digital age: a guide to technology-enhanced assessment and feedback. Bristol: HEFCE. Retrieved from http://www.jisc.ac.uk/media/documents/programmes/elearning/digiassass_eada.pdf


Koch, J. (2012). Teach: Introduction to teaching. Belmont, CA: Wadsworth.

McGraw Hill PreK-12. (2017, April 24). Assessment: Why we assess [Video]. YouTube. Retrieved October 25, 2021, from https://www.youtube.com/watch?v=HXpBZmeXrDo


Powell, S. D. (2012). Your introduction to education: Explorations in teaching. Boston, MA: Pearson.


Stiggins, R. J. (2005). From formative assessment to assessment FOR learning: A path to success in standards-based schools. Phi Delta Kappan, 87(4). Retrieved February 1, 2009 from http://www.pdkintl.org/kappan/k_v87/k0512sti.htm


Townsley, M. (2014). What is the difference between standards-based grading (or reporting) and competency-based education? Retrieved November 2, 2018, from https://www.competencyworks.org/analysis/what-is-the-difference-betweenstandards-based-grading/


UNESCO. (2018, March 5). Computer-based assessments: What you need to know - UNESCO-IAEA webinar [Video]. YouTube. Retrieved November 6, 2021, from https://www.youtube.com/watch?v=n0ij0Sq4kbk


U.S. Department of Education. (2016, December 7). Assessments under Title I, Part A & Title I, Part B: Summary of final regulations. Retrieved October 31, 2021, from https://www2.ed.gov/policy/elsec/leg/essa/essaassessmentfactsheet1207.pdf
