Online Instructional Resources from Michigan State University

Feb 12, 2013

I’ve been combing the web for sources of information and inspiration for those of you who feel pretty confident about your learning outcomes and are looking for additional ideas for assessing and documenting them. Here’s an impressive list of online instructional resources collated by Michigan State University’s Office of Faculty and Organizational Development. Be sure that you see the Expand All Topics/Collapse All Topics toggle link midway down the page.

Online Instructional Resources

–Rebecca Johnson

Clickers and Student Engagement

Feb 11, 2013

Derek Bruff, director of the Vanderbilt University Center for Teaching, has a great blog. Here, I scavenge from it to bring you some interesting clicker strategies. Once you get to his blog, roam around. He’s got a lot to say and has found a bunch of great resources.

Clickers in Psychology: Change-Ups, Recaps, and Times for Telling

Asking Students to Write Their Own Clicker Questions

-Rebecca Johnson

The Documenting Outcomes with Tech Tools PowerPoint

Feb 14, 2012

We had some requests to post the PowerPoint we used in our February 13 workshop. In it we give some examples of how one works across the matrix row from outcome to assessment type to plans for the future. Let us know if you have any questions. I will also post the grid, which we based on materials available in the QEP section of the teaching.ua.edu website.

Documenting_Outcomes_with_Tech_Tools,_pt_2

QEP RUBRIC

A crash course in assessment vocabulary!

Feb 14, 2012

 

Formative assessments informally measure a student’s understanding of a concept or concepts and thus carry very low stakes or are not graded at all.  Formative assessments are regularly used in classes to gauge the effectiveness of the teaching being provided and/or as “check-in” (often on-the-spot) activities to ascertain whether students are effectively learning the concepts being taught, and as such they provide immediate qualitative feedback.  The results of formative assessments more often offer information about the environment for learning than a precise evaluation of what is being learned.

Examples of formative assessments:

1. Brainstorming activities

2. Think-pair-share activities

3. Muddiest point

4. Likert Scale clicker questions

 

Summative assessments are those that formally measure student learning in a quantifiable way at a particular time.  In other words, these types of assessment aim to clearly and logically “summarize” learning at a specific point in the semester.  As opposed to formative assessments, which assess the environment for learning, summative assessments measure the rate, quality, etc. of learning itself.

Examples of summative assessments:

1. Tests

2. Quizzes

3. Exams

4. Graded papers

 

Direct Assessment Measures are those types of measure or activity that involve looking at or analyzing actual samples of student work produced to demonstrate that specific learning has taken place.  They are, in other words, a quantitative measure used to chart the progression, quality, rate, etc. of student learning.

Direct assessment measures include the analysis of:

1. The results of final exams

2. The results of capstone projects

3. The results of senior thesis projects

4. The results on exhibitions or performances

 

Indirect Assessment Measures are those types of measure that involve looking at or analyzing students’ own responses to, or opinions of, their learning and the learning experience.  Results such as these often imply that learning has taken place rather than document it specifically; they are a qualitative measure used to understand, for example, student satisfaction with the learning process, a student’s sense of his or her own best learning practices or learning environment, student opinions of instruction, etc.

Indirect assessment measures include the analysis of:

1. The results of a student exit survey (from a course or from a program/degree)

2. Self-assessments

3. Student opinions of instruction

4. Career path or placement after graduation

 

Helpful tip:  Formative and summative indicate TYPES of assessment.  Direct and indirect measures are METHODS for measuring student learning.

 

Related Terms:

Assessment Measure: Any assignment or task directed toward measuring student learning; it may be summative or formative, and its results may be evaluated directly or indirectly.

Quality Enhancement Plan (QEP): Programs put into place by accrediting bodies (e.g., SACS) via institutions (or by institutions on their own) that reflect and affirm a commitment to learning in higher education.  QEPs are usually premised on the assumption that student learning (and the caliber of that learning) is foundational to the mission of all institutions of higher learning.  A QEP usually begins by identifying an area for improvement, followed by the implementation of the means (or measures) by which to make the improvement, and then the assessment of those measures to show that improvement has taken place.

Examples of QEPs

UA’s QEP: http://www.ua.edu/qep/mission/content01.html

UAB’s QEP: http://main.uab.edu/Sites/DOE/QEP/50582/

University of Georgia’s QEP: http://www.qep.uga.edu/index.html

University of North Alabama’s QEP: http://www.una.edu/qep/

Matrix:  The chart used by the University of Alabama to organize and record data related to the QEP.

Student Learning Outcome (SLO):  An idea, concept, method, etc. that the student is expected to learn upon successful completion of a class.

Rubric:  A measurement or grading device used by individual instructors to assess the achievement of a student learning outcome (through a test, essay, project, etc.).

 

 

Turnitin for Teacher Self-Assessment

Nov 7, 2011

I’ve become a big fan of grading within Turnitin, so much so that I sometimes forget its additional purpose as a plagiarism prevention tool. Grading there is convenient, since I don’t have to lug around piles of papers, and fast, because rubrics can be built into the grading tool. These features are also useful from an assessment standpoint.

Because I’ve been using Turnitin for a few semesters now, I have access to all these graded student papers from past classes. At a glance, I can see my grade distribution for a particular assignment, and I can look through the rubrics to quickly determine which skills my students mastered for that particular paper.

In essence, I have a gold mine of information for self-assessment. By skimming through old papers, I can see what skills students needed more instruction in and which skills they mastered during that unit. I can track the progress of individual students throughout the semester, and I can look at how the class as a whole improved.  An analysis of rubric scores can provide numeric data if I’m so inclined.
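If you export those rubric scores to a spreadsheet, a short script can produce the numeric data mentioned above. This is only an illustrative sketch, not part of Turnitin itself; the papers, criteria, and 0–4 scale are hypothetical stand-ins for whatever your own rubric uses:

```python
from statistics import mean

# Hypothetical rubric scores (0-4 scale) for one assignment:
# one dict per graded paper, one key per rubric criterion.
papers = [
    {"thesis": 4, "evidence": 3, "citation": 2},
    {"thesis": 3, "evidence": 3, "citation": 3},
    {"thesis": 4, "evidence": 2, "citation": 2},
]

# Average score per criterion shows which skills the class has
# mastered and which need more instruction.
criteria = papers[0].keys()
averages = {c: round(mean(p[c] for p in papers), 2) for c in criteria}

# Print criteria from weakest to strongest.
for criterion, avg in sorted(averages.items(), key=lambda kv: kv[1]):
    print(f"{criterion}: {avg}")
```

Running the same summary on each assignment in a semester gives a simple way to track whole-class progress over time.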

If you haven’t used Turnitin or its grading features, this three-minute video gives a good introduction to the program: https://turnitin.com/static/training/instructor_grademark_overview.php

Sep 20, 2011

On September 7, The Chronicle of Higher Education published the article “Could Professors’ Dependence on Turnitin Lead to More Plagiarism?” On September 9, Inside Higher Ed published “Plagiarism Betrayal?,” a more in-depth look at the same subject: Turnitin’s role in the fight against plagiarism. At first glance, these are troubling articles. “Plagiarism Betrayal?” is full of bellicose diction; The Chronicle’s piece uses a shoplifting metaphor. Both grapple with whether Turnitin’s parent company is betraying teachers by providing a confidential service that allows students to check their papers for plagiarism.

What’s missing from these articles? An appropriate emphasis on the collaborative process that teaching should be.

Turnitin can help students and teachers check for correct source usage, but it is only one part of the equation. I teach my students what plagiarism is and techniques to help them research and compose using correctly attributed sources. We practice summary, paraphrase, and quotation skills. We talk about the difference between original ideas and cited material. Turnitin becomes part of the class process. Students upload a draft; we talk about their problems with correct citation, quotation, etc. Students upload a final paper; I look at the Turnitin originality report, but I also know that I have to do due diligence if I suspect a problem not being picked up by Turnitin. I don’t use Turnitin solely for the plagiarism prevention, and I don’t expect the program to be my only check on whether or not students are ethically using sources.

Both articles mention the controversy surrounding Writecheck, the confidential student service mentioned above, but I hope that my students wouldn’t feel the need to pay for such a service. I always provide a draft Turnitin assignment for students to use. It is a space without penalty; it is a space for conversation if the student discovers a problem with source usage. I know that not all cases of plagiarism can be prevented, but a process-based approach to classroom writing takes away some of the incentive to engage in unethical writing practices.

The other thing these articles neglect is the functionality of Turnitin outside of plagiarism prevention. My students do online peer review through the program, and I grade online through Turnitin. Instead of focusing on how the program can “catch” undesirable writing behaviors, I focus on how it allows for peer collaboration, student-teacher dialogue, and transparent grading.

Because of a multi-pronged integration of Turnitin into my classroom, the concerns voiced in The Chronicle and Inside Higher Ed aren’t pressing to me. Instead of criticizing Turnitin, both articles seem to be calling for an active approach to teaching. No software will do the work for you. Turnitin is one tool; it won’t build the house by itself.

Sep 19, 2011

Collaborative learning is usually interpreted as edspeak for working in small groups outside of class to accomplish a project of some significance. These types of exercises require that instructors assign groups, determine how to grade the group if members contribute unevenly, and commit significant time to a single project. However, collaborative learning exercises can be efficiently employed in class, even in large lecture sections, with the help of clickers.

One easy way to incorporate collaborative learning is the Think-Pair-Share technique (Barkley, E.F. et al. Collaborative Learning Techniques. San Francisco, CA: Wiley, 2005). With Think-Pair-Share, the instructor poses a conceptual question and asks students to think about possible answers. After a minute or so of thinking, students are asked to discuss with nearby classmates why they selected the answer they did. This technique acts as a catalyst for in-class discussions. Discussing a new concept with others gives students a chance to try out their understanding of it and receive feedback. It also provides an opportunity for a struggling student to hear an explanation of the concept from a peer who may be better able to communicate with the student (a foundation of peer-led instruction). As with any formative assessment technique, the instructor should set up the exercise by explaining what will happen, the purpose of the exercise, and what students are expected to do.

Adding clickers can extend the advantages of a Think-Pair-Share approach. I’ll take you through the process I use in class:

1) Ask a fairly difficult conceptual clicker question, possibly one that exploits a known misconception of the topic. Ideally you would like to have no more than 30-40 percent of students get the right answer. This technique also works with qualitative questions by forcing the students to defend their position. Clickers provide an added benefit because they require that students commit to an answer or position.

2) After students respond to the question, you can choose to either show or hide the initial results. I prefer to hide the results so that students won’t assume that the answer with the highest total must be correct.

3) Ask the students to discuss with the four or five people nearest them which answers they chose and why, in an attempt to convince others that their answer is correct. I usually allow 2-3 minutes for discussion and will travel around the room to check in on various clusters of students. In my experience, the first time you do this the students will need a bit of coaxing to actually talk with each other.

4) Re-poll the question. After polling has finished, the clicker system will let you display a comparison between the answers from both attempts. In theory, students will coalesce around the correct answer. This also gives them feedback on how the class did as a whole. Re-polling also serves to calm the class back down as they await the results of the new poll.

5) You can now ask a similar concept question on a quiz or exam later in the course.

Assessment Analysis: By building this technique into your teaching, you will have completed a formative (in-class, ungraded) assessment and a summative (quiz/exam) assessment, and you’ll have statistics that document initial understanding, improvement from peer-led instruction (always very high and probably not very useful), and actual retention of the concept. Coupled with a pre-assessment of students’ understanding prior to instruction, you will have a better understanding of the effectiveness of your instructional methods as well as plenty of data to report for your T&P assessment matrix.
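The before/after poll comparison in the steps above yields simple numbers worth keeping for that matrix. Here is a minimal sketch of the arithmetic, assuming you can pull the raw response lists out of your clicker software; the responses below are invented, and the normalized-gain measure is my own addition (a standard way to express improvement as a share of the possible improvement), not something the clicker system reports:

```python
def percent_correct(responses, correct):
    """Percentage of clicker responses matching the correct choice."""
    return 100 * sum(r == correct for r in responses) / len(responses)

# Hypothetical responses to the same question, before and after
# the peer discussion; "C" is the correct answer.
before = ["A", "C", "B", "C", "C", "D", "A", "C", "B", "C"]
after  = ["C", "C", "B", "C", "C", "C", "A", "C", "C", "C"]

pre = percent_correct(before, "C")    # initial understanding
post = percent_correct(after, "C")    # after Think-Pair-Share

# Normalized gain: fraction of the available headroom actually gained.
gain = (post - pre) / (100 - pre)

print(f"pre: {pre:.0f}%  post: {post:.0f}%  normalized gain: {gain:.2f}")
```

Adding the later quiz/exam score on the same concept gives the retention number alongside these two.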

If anyone has implemented this technique in the classroom or has any questions, post a comment about your experience.