19 June 2009

Student Overconfidence

People tend to be overconfident in their answers to a wide variety of general knowledge questions, particularly when the questions are difficult (Plous, 1993). How do researchers study overconfidence? One approach is to ask participants to estimate the probability that their judgment is correct. These estimates are then used to measure the calibration between confidence and accuracy: a person is perfectly calibrated when, at each level of confidence, the proportion of correct judgments equals the stated probability of being correct. Another approach is to ask participants to give "confidence intervals" that have a specific probability (usually .9 or .98) of containing an unknown quantity. In one study, participants were 98% sure that an interval contained the correct answer, but they were right only 68% of the time.
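
To make the calibration idea concrete, here is a minimal sketch; the data and the bucketing function are hypothetical, not taken from any of the studies cited. It groups judgments by their stated confidence level and compares that level with the observed proportion correct.

    from collections import defaultdict

    def calibration_table(responses):
        """Group (stated_confidence, was_correct) pairs by confidence
        level and compute the observed proportion correct at each level."""
        buckets = defaultdict(list)
        for confidence, correct in responses:
            buckets[confidence].append(correct)
        return {conf: sum(outcomes) / len(outcomes)
                for conf, outcomes in sorted(buckets.items())}

    # Hypothetical responses: a perfectly calibrated person would be
    # correct 90% of the time on answers given with 0.9 confidence.
    responses = [(0.9, True), (0.9, False), (0.9, True), (0.9, True),
                 (0.6, True), (0.6, False), (0.6, False), (0.6, True)]
    for conf, accuracy in calibration_table(responses).items():
        print(f"stated {conf:.0%}, observed {accuracy:.0%}, "
              f"gap {conf - accuracy:+.0%}")

A positive gap at a given confidence level indicates overconfidence at that level.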

In one summer session of an introductory CS course, 16 students out of a class of 68 overestimated their final course grade even after they had received feedback from their first midterm, and 3 students underestimated it.

Overconfidence can be unlearned, just like any other learned belief. People who were initially overconfident learned to make better judgments after 200 trials with intensive performance feedback (Lichtenstein and Fischhoff, 1980). Arkes et al. (1987) found that overconfidence could be eliminated by giving participants feedback after five "deceptively difficult problems." Yet another study by Lichtenstein and Fischhoff showed that simply having participants generate opposing reasons was sufficient to reduce overconfidence, but this has not been confirmed in subsequent studies.

References:

Arkes, H.R., Christensen, C., Lai, C., Blumer, C. 1987. Two methods of reducing overconfidence. Organizational Behavior and Human Decision Processes, 39, 133-144.

Lichtenstein, S., Fischhoff, B. 1980. Training for calibration. Organizational Behavior and Human Performance, 26, 149-171.

Lichtenstein, S., Fischhoff, B., Phillips, L. 1982. Calibration of probabilities: The state of the art to 1980. In D. Kahneman, P. Slovic, and A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 306-334). Cambridge, England: Cambridge University Press.

Plous, S. 1993. The psychology of judgment and decision making. New York: McGraw-Hill.

Plous, S. 1995. A comparison of strategies for reducing interval overconfidence in group judgments. Journal of Applied Psychology, 80(4), 443-454.

18 June 2009

Why Don't Students Attend Class?

Friedman, Rodriguez, and McComb studied 350 undergraduate students' reasons for attendance and nonattendance in class. They conclude that "males and females, older and younger students, students who live on and off campus, students who do and do not have jobs, students who have light and heavy course loads, and students who do and do not pay their own way in school attend classes with equal frequency." The only difference is that students with better academic records attend classes more regularly.

As for differences in course characteristics, "students attended faculty taught courses less often than GTA [graduate TA] taught classes, larger classes less often than smaller classes, and natural science classes less often than others." However, courses that penalize absences encourage attendance in any of these course settings.

The primary reasons students attend class are internal: a feeling of responsibility to attend, interest in the subject matter, and a desire to get the material first-hand rather than from other sources. Another study has also shown that better attendance is associated with higher grades (Wyatt 1992).

In another study (Jensen and Moore 2009), the students who attended help sessions were mostly A and B students, with virtually no D and F students. The results also show that students who attended these help sessions earned better grades and attended class more often.

The bottom line is that attendance appears to be correlated with higher grades. The question is: do students really want higher grades, or are they satisfied with a pass? It would be interesting to survey students on what grade they realistically expect to get, given the effort they are willing to put into the course.

References:

Friedman, P., Rodriguez, F., McComb, J. 2001. Why students do and do not attend classes: Myths and realities. College Teaching, 49(4), 124-133.

Jensen, P., Moore, R. 2009. What do help sessions accomplish in introductory science courses? Journal of College Science Teaching, May/June 2009, 60-64.

Wyatt, G. 1992. Skipping class: An analysis of absenteeism among first-year college students. Teaching Sociology, 20, 201-207.

05 June 2009

Invention Activities

Knowledge transfer from one context to another depends on students learning at least two things: 1) the relevant concepts or skills, and 2) the situations to which they apply. Students are more likely to transfer knowledge from one context to another when instructional examples are abstract and relatively free of surface details. Instead of "tell-and-practice," where instructors tell students which formula to use and then have them practice using it, it is much better to let students develop their own "solutions" to a number of contrasting cases before they are taught the formula in a mini-lecture. Contrasting cases force students to see beyond the surface differences and explore the underlying deep structure. These contrasting cases constitute what is called an invention activity, in which students work productively to note the differences and produce a general solution that covers all the cases. Such productive activity helps students let go of old interpretations and develop new ones.

Schwartz particularly advocates using mathematical tools or procedures to solve invention activities, because they encourage solutions that are precise and yet general. They also allow reflection on how the structure of the mathematical tools accomplishes its work in solving the problems. However, this does not have to be the case: invention activities can prime students in areas that do not involve quantitative analysis (Yu and Gilley, 2009).

In Schwartz's case, the combination of visual (problem presentation), numeric (expressing solutions in quantitative mathematical terms), and verbal (students presenting their solutions) modes helps to reinforce learning.

In computer science, when we ask our students to "invent" a solution to a programming assignment, that is an invention activity. The difference is that elsewhere invention activities serve as scaffolding for further learning, whereas here the programming assignment itself is how students learn the material. In other settings, students usually don't "invent" the final solution; in computing, they must reach the final solution themselves. Is that why so many students get frustrated with computer programming? After all, Schwartz did note that students can tire of repeatedly adapting their inventions.

References:

Schwartz, D., Martin, T. 2004. Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.

Yu, B., Gilley, B. 2009. Benefits of Invention Activities Especially for Cross-Cultural Education. Retrieved on October 16, 2009 from http://www.iated.org/concrete2/view_abstract.php?paper_id=8166.

Blooming in Teaching and Learning

It is important to align teaching activities with learning outcomes, and students need to know at what level of cognitive engagement they are expected to perform. If lectures present only facts, but assignments expect students to produce an analysis they were never taught how to do, the assessment will not be an effective measure of students' capabilities. Bloom's Taxonomy provides a common language to coordinate what is taught and what is assessed. The six levels of Bloom's Taxonomy are: knowledge, comprehension, application, analysis, synthesis, and evaluation. The revised taxonomy recasts these as verbs: remember, understand, apply, analyze, evaluate, and create. Here are three ways of using "Blooming" to enhance learning:
  1. The instructor assigns a Bloom level to each item in the grading rubric, and provides additional learning activities to improve the levels where students score low.
  2. Introduce the Bloom levels to students and ask them to "bloom" questions asked in class (i.e., rank the questions according to Bloom's levels). This helps students develop metacognitive skills and reflect on their learning. After a test, students are also shown the class average at each Bloom level and evaluate their own score at each level (a sketch of this per-level bookkeeping follows the list).
  3. Students are taught the Bloom levels and, in small groups, write questions at each level. The groups then exchange questions and rank them to see whether the rankings match the intended levels.
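
As a small illustration of the per-level scoring in item 2, here is a minimal sketch; the question tags, students, and scores are hypothetical, and tagging each question with a single Bloom level is only one way to organize the bookkeeping.

    from collections import defaultdict

    # Hypothetical mapping of exam questions to Bloom levels.
    bloom_level = {"Q1": "remember", "Q2": "understand", "Q3": "apply",
                   "Q4": "analyze", "Q5": "evaluate", "Q6": "create"}

    # Hypothetical per-question scores (0.0 to 1.0) for each student.
    scores = {
        "student_a": {"Q1": 1.0, "Q2": 0.8, "Q3": 0.9,
                      "Q4": 0.5, "Q5": 0.4, "Q6": 0.2},
        "student_b": {"Q1": 0.9, "Q2": 0.7, "Q3": 0.6,
                      "Q4": 0.6, "Q5": 0.3, "Q6": 0.1},
    }

    # Pool every score under its question's Bloom level, then average.
    by_level = defaultdict(list)
    for per_student in scores.values():
        for question, score in per_student.items():
            by_level[bloom_level[question]].append(score)

    for level in ["remember", "understand", "apply",
                  "analyze", "evaluate", "create"]:
        avg = sum(by_level[level]) / len(by_level[level])
        print(f"{level:>10}: class average {avg:.0%}")

A pattern of strong averages at the lower levels and weak ones at analyze through create is exactly the kind of discrepancy discussed below.
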
Most of computer science education likely requires the higher levels of Bloom's taxonomy, yet students may operate at lower levels when learning the material. Recognizing this discrepancy, and using specific activities to reach the higher Bloom levels, may help students become better computer scientists. Instructors will also benefit from articulating the level at which they expect their students to perform, by comparing what is stated in the course outline with what is actually taught and tested.

Reference:

Crowe, A., Dirks, C., Wenderoth, M. 2008. Biology in Bloom: Implementing Bloom's Taxonomy to enhance student learning in biology. CBE - Life Sciences Education, 7, 368-381.

04 June 2009

Item Response Theory

How do we (as instructors) decide whether a test is "hard" or "easy"? Most of us will answer something along the lines of "it all depends." I find this observation, which Hambleton et al. make of the common responses to this question, interesting: "Whether an item [or test] is hard or easy depends on the ability of the examinees being measured, and the ability of the examinees depends on whether the test items are hard or easy!" Not very helpful, is it?

Item Response Theory (IRT) applies mathematical models to students' scores on individual test questions, making it possible to compare the difficulty of the questions and their power to differentiate student abilities. It rests on two basic postulates: 1) the performance of an examinee can be predicted by a set of factors called traits (or abilities), and 2) the relationship between examinees' item performance and those traits can be described by an item characteristic function, or item characteristic curve (ICC), like the one in the graph above. The x-axis is the trait or ability score, and the y-axis is the probability that an examinee with that ability score answers the item correctly. As the ability of an examinee increases, so does the probability of a correct response to the item.

Each item in a test has its own ICC, and the ICC is the basic building block of IRT. The steepness of the curve shows how well the item differentiates examinees of low and high ability: a flat curve discriminates poorly, while a steep curve, like the one shown above, discriminates well. If the ICCs of several items with the same shape are plotted on the same graph, the curves to the left (or top) correspond to easier items than those to the right (or bottom).
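
To make this concrete, here is a minimal sketch using the two-parameter logistic model, one common functional form for an ICC (the discrimination and difficulty values below are made up). The probability of a correct response is P(theta) = 1 / (1 + exp(-a * (theta - b))), where b shifts the curve left or right (difficulty) and a controls its steepness (discrimination).

    import math

    def icc(theta, a, b):
        """Two-parameter logistic ICC: probability that an examinee of
        ability theta answers correctly, given discrimination a and
        difficulty b."""
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    # Hypothetical items with the same discrimination but different
    # difficulty. The easier item (b = -1) lies to the left: at every
    # ability level its curve is above the harder item's (b = +1).
    for theta in [-2, -1, 0, 1, 2]:
        easy = icc(theta, a=1.7, b=-1.0)
        hard = icc(theta, a=1.7, b=1.0)
        print(f"ability {theta:+d}: easy item {easy:.2f}, hard item {hard:.2f}")

Increasing a steepens the curve, which is exactly what makes an item better at separating low-ability from high-ability examinees.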

By analyzing examinees' scores on each item of an exam with IRT software, one can 1) identify which questions are good indicators of student ability, and 2) answer objectively which questions are "easy" or "hard".

Reference:

Baker, F. The Basics of Item Response Theory. Available online.

The graph is taken from http://echo.edres.org:8080/irt/, where one can also find a great deal of information on IRT.

Hambleton, R., Swaminathan, H., Rogers, H. 1991. Fundamentals of Item Response Theory. Newbury Park: Sage Publications.