28 November 2008

Getting students to ask good questions

On the rare occasions I bothered reading the textbook as a student, "reading" meant passing my eyes over all the assigned pages. As faculty, I've finally realized that textbooks are invaluable as a jumping-off point for my own thoughts, ideas, questions, and problems. In CPSC 111, we experimented with "weekly reading questions" (questions inspired by the assigned readings and marked on completeness) to help students shift from passive reading habits to this kind of "interrogation" of the textbook.

Marbach-Ad and Sokolove probe deeply into improving students' reading questions in their 2000 paper (cited below). Their most successful method has four parts: (1) regularly ask students for their "best question" after a reading; (2) give students a clearly defined rubric, with real examples, for what makes a good question; (3) create many opportunities for students to practice asking, evaluating, and answering questions; and (4) give student questions pride of place in the classroom, including using wireless mics so the rest of the class can hear each question asked.

The paper is somewhat interesting but not especially strong from an experimental standpoint. (Of the four techniques above, the authors provide some quantitative evidence only for the combined value of the last two.) However, the ideas may be worth trying out in our own classrooms.

I've appended their rubric for questions below. It's not directly adoptable for CS, but it's an interesting starting point.

Their most interesting mechanism for giving students practice with questions is to have stable student teams submit their questions as a stack: before submitting, the team has a few minutes to decide which questions are best and put those on top. This seems like a simple way to ensure students practice discussing and assessing questions.

Unfortunately, the paper does not address what the instructor should do with the questions received. In CPSC 111, we chose 10 questions at random to answer each week, and sometimes answered additional questions that were common or interesting but didn't show up in the random draw. Students found this somewhat satisfying. A complementary currency system (where students can purchase answers or invest in questions?) or a voting system (like ActiveClass's) might be more successful.
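
Mechanically, the weekly draw is easy to automate. Here's a minimal sketch in Python, assuming the week's questions sit in a text file with one question per line; the file name and the choice of 10 are just placeholders:

    import random

    def draw_questions(path, k=10):
        """Pick k questions at random from a file with one question per line."""
        with open(path) as f:
            # Skip blank lines so formatting quirks don't skew the draw.
            questions = [line.strip() for line in f if line.strip()]
        # sample() draws without replacement, so no question is picked twice.
        return random.sample(questions, min(k, len(questions)))

    for q in draw_questions("week05_questions.txt"):
        print("-", q)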

Marbach-Ad, G., & Sokolove, P. (2000). Can undergraduate biology students learn to ask higher level questions? Journal of Research in Science Teaching, 37(8), 854-870.




Marbach-Ad and Sokolove's taxonomy for student questions in Intro Biology (developed from sample student questions):
Category 0: Questions that do not make logical or grammatical sense, or are based on a basic misunderstanding or misconception, or do not fit in any other category. (This is a "catch all" category that instructors can readily subdivide for teaching purposes--for example, when grading written questions. In this case we chose not to subdivide the category in order to focus on the characteristics of desirable questions.)

Category 1a: Questions about a simple definition, concept, or fact that could be looked up in the textbook (e.g., "what is meant by the polarity of the membrane?").

Category 1b: Questions about a more complex definition, concept, or fact explained fully in the textbook (e.g., "what does it mean when it says air moves through a bird's lungs?").

Category 2: Ethical, moral, philosophical, or sociopolitical questions (e.g., "carbon monoxide is a very deadly gas, binding to hemoglobin much faster than oxygen. If it is so deadly, why are there no carbon monoxide detectors throughout the dorm halls?").

Category 3: Questions for which the answer is a functional or evolutionary explanation. (In this case students begin by asking a question that relates to function and could, in principle, be answered in functional terms--"Why do people have an appendix?"--but the deeper answer is more often related to evolution than to function: the human appendix is a vestigial organ.)

Category 4: Questions in which the student seeks more information than is available in the textbook (e.g., "what causes the 'rumbling' in your stomach when you are hungry?").

Category 5: Questions resulting from extended thought and synthesis of prior knowledge and information, often preceded by a summary, a paradox, or something puzzling (e.g., "In chapter 35 it says that caffeine, if taken excessively, can disrupt motor coordination and mental coherence, which can cause depression. I know that Coca-Cola has some amount of caffeine in it. Does this mean that excessive consumption of it could lead to depression . . . ?").

Category 6: Questions that contain within them the kernel of a research hypothesis (e.g., "I have heard that some people snore so badly that they stop breathing during their sleep. What correlation is there, if any, between 'heavy snorers' and a higher incidence of apnea during REM sleep? Can the attention their nervous system is devoting to a dream interfere with the regulation of respiration?").
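
If you adapt a rubric like this one for CS, it may be worth tallying where student questions fall over the term, since the paper's claim is that question level rises with practice. A minimal sketch in Python; the category labels follow the rubric above, but the graded data here is entirely hypothetical:

    from collections import Counter

    # Rubric categories, from lowest to highest level.
    ORDER = ["0", "1a", "1b", "2", "3", "4", "5", "6"]

    # Hypothetical labels recorded while grading two weeks of questions.
    week1 = ["0", "1a", "1a", "1b", "1a", "2", "4"]
    week9 = ["1b", "3", "4", "5", "5", "6", "1b"]

    for name, labels in [("Week 1", week1), ("Week 9", week9)]:
        counts = Counter(labels)
        # Print the distribution in rubric order so a shift toward
        # higher-level questions is easy to spot.
        print(name, [(c, counts.get(c, 0)) for c in ORDER])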

09 November 2008

Are Collaborative Groups Useful for Individual Students' Problem-Solving Abilities?

Do you wish your students had better problem-solving strategies and abilities, could tackle those tricky questions you give on assignments or exams, or could think "outside the box"? Well, apparently this can be a reality, at least according to a research project conducted in the Chemistry department at Clemson University. Students who were given the opportunity to work collaboratively in small groups were found to have better problem-solving skills afterwards, even on problems they later solved entirely on their own.

In computer science education, group work is quite common for programming assignments and projects. However, a key ingredient in improving students' problem-solving skills is not just dividing the tasks among team members (i.e., simple project management) but having each member discuss, analyze, debate, and articulate how to solve the problem. Especially when the team mixes students of different problem-solving abilities, the gains in individual problem-solving ability can be significant.

What are your experiences with collaborative work in computer science education? Have you noticed similar improvements in individual problem-solving abilities after a team works on a problem together? What kinds of collaborative projects have been most useful?

Reference:

Cooper, M., Cox, C., Nammouz, M., Case, E., & Stevens, R. (2008). An assessment of the effect of collaborative groups on students' problem-solving strategies and abilities. Journal of Chemical Education, 85(6), 866-872.

05 November 2008

How-To Advice on Think-Alouds to Explore Students' Problem Solving

The problem: From treating assignment in CS1 as mathematical equality to naive views of probability in AI, students' misconceptions can lead them astray in CS problem-solving. Identifying and addressing those misconceptions is an important step in helping students achieve expertise in the discipline.

Unfortunately, getting inside a student's head to understand how they perceive and attack a problem can be tremendously difficult. A student's finished solution gives scant hints about their thought process.

A solution: Think-aloud protocols (common in HCI) can help us to explore students' thought processes as they solve a problem.

The basic idea of a think-aloud is for you to quietly observe a student as the student solves a problem. The student, in turn, vocalizes (but does NOT explain) their thoughts as they work. To make this effective: have the student practice on a simple problem first; be sure they don't try to clarify or interpret their thoughts for you; prompt them with a simple "Please keep talking" if they fall silent; and sit out of the student's line of sight during the process (to reduce the feeling that they're talking to you). Ericsson and Simon suggest mental multiplication (e.g., "24 x 36") as a practice task, which should produce verbalizations like "'carry the 2,' 'fourteen,' 'one forty four,' 'let's see,' and 'seven twenty'" (the raw working for the partial products 24 x 6 = 144 and 24 x 30 = 720, which sum to 864) rather than explanations like "I'm going to start working on the problem now. I know that my algorithm for multiplication is...". Between the work you see the student performing and these verbalizations, you will hopefully learn a bit more about what's going on inside the student's head.
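
On the logistics side, the observer's bookkeeping can be as simple as timestamped notes. Here's a minimal sketch of a hypothetical helper (not part of Ericsson and Simon's protocol) that stamps each note you type with the time elapsed since the session began:

    import time

    def take_notes():
        """Collect observer notes, each stamped with elapsed session time.

        Type a note and press Enter; an empty line ends the session.
        """
        start = time.time()
        notes = []
        while True:
            note = input()
            if not note:
                break
            elapsed = int(time.time() - start)
            # Stamp as minutes:seconds since the session started.
            notes.append((f"{elapsed // 60:02d}:{elapsed % 60:02d}", note))
        return notes

    for stamp, note in take_notes():
        print(stamp, note)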

This is a time-intensive process, so you'll want to use the technique only for critical questions. You may also want to work with your friendly neighbourhood STLF (or HCI specialist!) either to plan your think-alouds or to help execute them.

Read more about think-alouds for exploring student thinking in:

Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data (Rev. ed.). Cambridge, MA: Bradford Books/MIT Press.

Ericsson, K. A., & Simon, H. A. (1998). How to study thinking in everyday life: Contrasting think-aloud protocols with descriptions and explanations of thinking. Mind, Culture, and Activity, 5(3), 178-186.
http://www.informaworld.com/smpp/content~content=a785309769~db=all

Payne, J. W. (1994). Thinking aloud: Insights into information processing. Psychological Science, 5, 241, 245-248.

Welcome to the CSSEI blog

We'll be using this blog to post material relevant to CSSEI, including brief best-practice reports about various teaching & learning techniques.