13 December 2008

Designing Assessment To Support Students' Learning

It has been reported that the single most powerful influence on student achievement is feedback (Hattie, 1987; Black & Wiliam, 1998). But it is not just any plain ol' feedback. Informative, timely, concise feedback from instructors, feedback which students actually read and follow up on, is what really counts. As such, conducting assessment and providing feedback is an art. Assessment can be enormously expensive, and can be perceived as ineffective and a poor representation of student learning. Gibbs and Simpson (2005) propose a list of 10 plausible conditions under which assessment supports learning.
  1. Sufficient assessed tasks are provided for students to capture sufficient study time.
  2. These tasks are engaged with by students, orienting them to allocate appropriate amounts of time and effort to the most important aspects of the course.
  3. Tackling the assessed task engages students in productive learning activity of an appropriate kind.
  4. Sufficient feedback is provided, both often enough and in enough detail.
  5. The feedback focuses on students' performance, on their learning and on actions under the students' control, rather than on the students themselves and on their characteristics.
  6. The feedback is timely in that it is received by students while it still matters to them and in time for them to pay attention to further learning or receive further assistance.
  7. Feedback is appropriate to the purposes of the assignment and to its criteria for success.
  8. Feedback is appropriate, in relation to students' understanding of what they are supposed to be doing.
  9. Feedback is received and attended to.
  10. Feedback is acted upon by the student.
References:

Hattie, J.A. (1987). Identifying the salient facets of a model of student learning: a synthesis of meta-analyses. International Journal of Educational Research, vol. 11, pp. 187-212.

Black, P. & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, vol. 5, no. 1, pp. 7-74.

Gibbs, G. & Simpson, C. (2005). Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education, issue 1, pp. 3-31.

08 December 2008

Class Observations

I recently had the opportunity to visit a number of CS classes, both domestically and internationally. Here are some observations from these visits that I really liked. I will keep adding to this list as I come across other gems!

  • Use analogies to explain difficult concepts ... e.g. passing a hockey puck as an analogy for passing parameters, a recipe as an analogy for an algorithm, etc.
  • Even in a big lecture, try to learn the names of at least some students.  One easy way is to get to know those who ask questions in class.  All of us like to be known!
  • When students ask what-if questions about programming languages (e.g. what if you divide an integer by a real number?), instead of just giving them the answer, do a quick demo on the computer (if you have one set up); see the sketch after this list. This instills a culture of learning via experimentation.
  • Throughout the lecture, repeatedly ask the students if there are any questions, and PAUSE. Students may not ask questions right away, but they know that the instructor is encouraging any questions they may have.
  • Instead of asking “Any questions?”, try “Who is comfortable with the material presented so far?” and take a poll. The poll can be done via clickers or a show of hands.
  • Show Learning Goals at the beginning of class, before each learning unit, for each learning activity, and again at the end of class. These should tie back to the Learning Goals stated on the course outline.
  • Great teachers are usually experts in what they teach. They know the subject well ... very well!
  • Students like a variety of presentation styles ... try video, simulation, debate, or demonstration (such as having the instructor develop a piece of code live, or work through a problem, unsuccessful attempts and all). Students like to see the process of solving a problem rather than just the solution.
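
A quick demo for the division question above might look like the following. This is a minimal sketch in Java (Java is just my assumption here; run the same experiment in whatever language the course uses):

    // In-class experiment: what happens when integer and
    // floating-point operands are mixed in a division?
    public class DivisionDemo {
        public static void main(String[] args) {
            System.out.println(7 / 2);    // 3   : int / int is integer division
            System.out.println(7 / 2.0);  // 3.5 : int / double promotes to double
            System.out.println(7.0 / 2);  // 3.5 : double / int promotes to double
            System.out.println(7 % 2);    // 1   : the remainder integer division drops
        }
    }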

07 December 2008

Clicker Use in Computer Science Education

There have been a number of reports on the effective use of clickers (also known as classroom response systems, student response systems, classroom communication systems, and many other names) across different disciplines. The findings have consistently been:

  • increased attendance
  • greater involvement (especially when good questions are asked)
  • more interactions with instructor
  • more interactions among students (especially when peer discussions are used before or after each clicker question)
  • anonymity increases participation rates
  • students like immediate feedback on their learning

While there are a number of websites related to clicker questions in physics, mathematics, biology, etc., there does not seem to be one for computer science. In any case, I found this website at Vanderbilt particularly useful. The Bibliography section contains links to the use of clickers in a number of disciplines, including Computer Science.

28 November 2008

Getting students to ask good questions

On the rare occasions I bothered reading the textbook when I was a student, "reading" meant looking at all the assigned pages. As faculty, I've finally realized that textbooks are invaluable as a jumping-off point for my own thoughts, ideas, questions, and problems. In CPSC 111, we experimented with "weekly reading questions" (questions inspired by the assigned readings and marked for completeness) to help students transition from passive reading habits to this type of "interrogation" of the textbook.

Marbach-Ad and Sokolove probe the issue of improving students' reading questions in depth in their 2000 paper (cited below). Their most successful method involves several parts: ask students regularly for their "best question" after a reading; give students a clearly defined rubric, with real examples, of what good questions are; create many opportunities for students to practice asking, evaluating, and answering questions; and give student questions pride of place in the classroom, including using wireless mics so that the whole class can hear each question asked.

The paper is somewhat interesting but not especially strong from an experimental standpoint. (Of the four techniques I mention above, they provide some quantitative evidence for the combined value of the last two.) However, the ideas may be worth trying out in our own classrooms.

I've appended their rubric for questions below. It's not directly adoptable for CS, but it's an interesting starting point.

Their most interesting mechanism for student practice with questions is to have stable student teams submit their questions as a stack. Before submitting, the students have a few minutes to decide which questions are the best and put those on top. This seems like a simple way to ensure students practice discussing and assessing questions.

Unfortunately, the paper does not address what to do with the questions the instructor received. In CPSC 111, we chose 10 questions at random to answer every week and sometimes answered additional questions that were common or interesting but didn't show up on the random list. This was somewhat satisfying to students. A complementary currency system (where students can purchase answers or invest in questions?) or a voting system (like ActiveClass's) might be more successful.
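
For what it's worth, the weekly random draw is trivial to automate. Here is a minimal Java sketch (the class name, method, and sample questions are hypothetical, purely for illustration):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class QuestionLottery {
        // Draw up to 'count' submitted questions uniformly at random.
        static List<String> draw(List<String> submitted, int count) {
            List<String> pool = new ArrayList<>(submitted); // copy; keep the original order intact
            Collections.shuffle(pool);                      // uniform random permutation
            return pool.subList(0, Math.min(count, pool.size()));
        }

        public static void main(String[] args) {
            List<String> submitted = List.of(
                "Why does Java have both == and .equals?",
                "What happens if a constructor calls itself?",
                "Is a String a primitive or an object?");
            System.out.println(draw(submitted, 2)); // pick 2 questions to answer in class
        }
    }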

Marbach-Ad, G. & Sokolove, P. (2000). Can undergraduate biology students learn to ask higher level questions? Journal of Research in Science Teaching, vol. 37, no. 8, pp. 854-870.




Marbach-Ad and Sokolove's taxonomy for student questions in Intro Biology (developed from sample student questions):
Category 0: Questions that do not make logical or grammatical sense, or are based on a basic misunderstanding or misconception, or do not fit in any other category. (This is a "catch all" category that instructors can readily subdivide for teaching purposes--for example, when grading written questions. In this case we chose not to subdivide the category in order to focus on the characteristics of desirable questions.)

Category 1a: Questions about a simple definition, concept, or fact that could be looked up in the textbook (i.e., "what is meant by the polarity of the membrane?").

Category 1b: Questions about a more complex definition, concept, or fact explained fully in the textbook (i.e., "what does it mean when it says air moves through a bird's lungs?").

Category 2: Ethical, moral, philosophical, or sociopolitical questions (i.e., "carbon monoxide is a very deadly gas binding to hemoglobin much faster than oxygen. If it is so deadly, why are there no carbon monoxide detectors throughout the dorm halls?").

Category 3: Questions for which the answer is a functional or evolutionary explanation. (In this case students begin by asking a question that relates to function and could, in principle, be answered in functional terms--"Why do people have an appendix?"--however, the deeper answer is more often related to evolution than to function (the human appendix is a vestigial organ)).

Category 4: Questions in which the student seeks more information than is available in the textbook (i.e., "what causes the 'rumbling' in your stomach when you are hungry?").

Category 5: Questions resulting from extended thought and synthesis of prior knowledge and information, often preceded by a summary, a paradox, or something puzzling (i.e., "In chapter 35 it says that caffeine, if taken excessively, can disrupt motor coordination and mental coherence which can cause depression. I know that Coca-Cola has some amount of caffeine in it. Does this mean that excessive consumption of it could lead to depression . . . ?").

Category 6: Questions that contain within them the kernel of a research hypothesis (i.e., "I have heard that some people snore so badly that they stop breathing during their sleep. What correlation is there, if any, between 'heavy snorers' and a higher incidence of apnea during REM sleep? Can the attention their nervous system is devoting to a dream interfere with the regulation of respiration?").

09 November 2008

Collaborative Groups Useful for Individual Students' Problem-Solving Abilities?

Do you wish that your students had better problem-solving strategies and abilities to tackle those tricky questions that you give in assignments or exams, or could think “outside the box”? Well, apparently this can be a reality, at least according to a research project conducted in the Chemistry department at Clemson University. Students who were given the opportunity to work collaboratively in small groups were found to have better problem-solving skills on their own afterwards: the effect extended beyond the group work to problems they were later given to solve individually.

In computer science education, group work is quite common for programming assignments and projects. However, one key ingredient in improving student problem-solving skills is not just dividing the tasks among members (i.e., simple project management), but having each member discuss, analyze, debate, and articulate how to solve the problem. Especially when there is a mix of students with different problem-solving abilities, the improvement in individual problem-solving abilities can be significant.

What are your experiences of collaborative work in computer science education? Have you noticed similar improvement in individual problem solving abilities after a team works on a problem together? What kind of collaborative projects have been most useful in computer science education?

Reference:

Cooper, M., Cox, C., Nammouz, M., Case, E. & Stevens, R. (2008). An Assessment of the Effect of Collaborative Groups on Students' Problem-Solving Strategies and Abilities. Journal of Chemical Education, vol. 85, no. 6, pp. 866-872.

05 November 2008

How-To Advice on Think-Alouds to Explore Students' Problem Solving

The problem: From treating CS1 assignment statements as mathematical equations to naive views of probability in AI, students' misconceptions can lead them astray in CS problem-solving. Identifying and addressing those misconceptions is an important step in helping them achieve expertise in the discipline.

Unfortunately, getting inside a student's head to understand how they perceive and approach a problem can be tremendously difficult. Just seeing a student's solution to a problem gives scant hints about their thought process.

A solution: Think-aloud protocols (common in HCI) can help us to explore students' thought processes as they solve a problem.

The basic idea of a think-aloud is for you to quietly observe a student as the student solves a problem. The student, in turn, vocalizes (but does NOT explain) their thoughts as they work. To make this effective: have the student practice on a simple problem first; be sure they don't try to clarify or interpret their thoughts for you; prompt them with a simple "Please keep talking" if they fall silent; and sit out of the student's line of sight during the process (to reduce the feeling that they're talking to you). Ericsson and Simon suggest mental multiplication (e.g., "24 x 36") as a practice task, which should produce verbalizations like "'carry the 2,' 'fourteen,' 'one forty four,' 'let's see,' and 'seven twenty'" rather than explanations like "I'm going to start working on the problem now. I know that my algorithm for multiplication is...". Between the work you see the student performing and these verbalizations, you will hopefully learn a bit more about what's going on inside the student's head.
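
As an aside, those sample verbalizations correspond to the partial products of 24 x 36. Here is that decomposition as a tiny Java sketch (my own annotation of the transcript, not anything from Ericsson and Simon):

    public class MentalMultiplication {
        public static void main(String[] args) {
            // 24 x 36, decomposed the way the verbalizations suggest:
            int ones = 24 * 6;   // 144 -> "one forty four" (4*6 = 24, "carry the 2";
                                 //        then 2*6 + 2 = "fourteen")
            int tens = 24 * 30;  // 720 -> "seven twenty"
            System.out.println(ones + tens); // 864
        }
    }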

This is a time-intensive process, so you'll want to use this technique only for critical questions. You may also want to work with your friendly neighbourhood STLF (or HCI specialist!) either to help plan your think-alouds or to help execute them.

Read more about think-alouds for exploring student thinking in:

Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data (Rev. ed.). Cambridge, MA: Bradford Books/ MIT Press.

Ericsson, K. A., & Simon, H. A. (1998). How to study thinking in everyday life: Contrasting think-aloud protocols with descriptions and explanations of thinking. Mind, Culture, and Activity, 5(3), 178-186.
http://www.informaworld.com/smpp/content~content=a785309769~db=all

Payne, J. W. (1994). Thinking aloud: Insights into information processing. Psychological Science, 5, 241, 245-248.

Welcome to the CSSEI blog

We'll be using this blog to post material relevant to CSSEI, including brief best practice reports about various teaching & learning techniques.