We tried to dissect the elements of "deliberate practice" [Ericsson, et al., 2006] during today's CWSEI Reading Group meeting. Not all aspects of "practice" are the same. We recognize that some students insist on multitasking while doing homework (e.g., watching TV or listening to an iPod while engaging in practice activities). Perhaps the term "deliberate practice" should be reserved for tasks that do not readily permit TV or other multitasking interference. Ray Lister suggested another paper related to practice quality [Plant, et al., 2005] that may be of interest.
Some of the elements of practice include foundations that students often do not enjoy, but are recognized as skill development techniques. In music, this includes practice on scales, repertoire, technical exercises, etc. [Sloboda, et al., 1996]. In computer science, this may involve practice with "boring" parts of CS, like math skills, analyzing sort routines, fixing badly designed or poorly documented code, or coding non-interactive applications.
The studies of Sloboda et al. showed that, at every skill level, there is a common trend: performers who are better at that skill/age level have spent more hours in deliberate practice. The highest achievers at each level are the individuals who have practiced the most. Performers at the highest level have accumulated considerably more hours than those at the next highest level, and so on. This reinforces the results of other papers (e.g., [Ericsson, 1996; Colvin, 2008]).
There is some debate about whether a long programming assignment is better than a shorter one. Historically, programming assignments meant to convey a CS learning objective tend to be longer than necessary (perhaps because "I had to do it that way when I was an undergrad"). But if a student cannot get the long program to work at all, does this mean the student has failed? What if the student is really close to getting it working but just can't, or simply doesn't understand one small component of it? Might it be better to have many shorter programs/exercises with more manageable, self-contained milestones, thus building the student's confidence?
References:
Colvin, Geoff. Talent is Overrated. Portfolio (Penguin), 2008.
Ericsson, K. A. The influence of experience and deliberate practice on the development of superior expert performance. In K. A. Ericsson, N. Charness, P. Feltovich, and R. R. Hoffman (Eds.), Cambridge handbook of expertise and expert performance (pp. 685-706). Cambridge, UK: Cambridge University Press, 2006.
Plant, E. A., Ericsson, K. A., Hill, L., Asberg, K. (2005). Why Study Time Does Not Predict Grade Point Average across College Students: Implications of Deliberate Practice for Academic Performance. Contemporary Educational Psychology, 30(1), 96-116.
Sloboda, John A.; Davidson, Jane W.; Howe, Michael J.A.; Moore, Derek G. "The role of practice in the development of performing musicians". British Journal of Psychology (1996), 87, pp. 287-309.
16 July 2009
12 July 2009
Deliberate Practice
According to extensive studies on how experts develop their specialized knowledge, one of the primary factors is deliberate practice (Ericsson, Krampe, Tesch-Romer, 1993). That is, prolonged effort to improve performance skills or understanding, whether in chess, sports, music, or science, is what produces expert performance. Such effortful activities (deliberate practice) need to be carefully designed and administered, with the help of coaches, mentors, teachers, and often parents, to optimize improvement. Many expert characteristics that were once believed to reflect innate talent are actually the result of intense practice extended over a minimum of 10 years.
For computer science, most of the computing education I have been exposed to does not include a significant amount of "practice". There may be some reading assignments and a few programming assignments, but the amount of actual practice is not significant. If expert knowledge does require significant time and effort, we should explore 1) how to deconstruct the learning of computer science concepts into sequences of practice activities, and 2) how these activities can be incorporated into lectures / labs, and perhaps other available technologies such as online and mobile learning, to promote deliberate practice beyond class time.
Reference:
Ericsson, K.A., Krampe, R.T., Tesch-Romer, C. (1993). The Role of Deliberate Practice in the Acquisition of Expert Performance. Psychological Review. 100(3), 363 - 406.
19 June 2009
Student Overconfidence
People tend to be overconfident in their answers to a wide variety of general knowledge questions, particularly when the questions are difficult (Plous, 1993). How do researchers study overconfidence? One approach is to ask participants to estimate the probability that their judgment is correct. These estimates are then used to assess the calibration between confidence and accuracy: a person is perfectly calibrated when the proportion of correct judgments at a given level of confidence is identical to the stated probability of being correct. Another approach is to ask participants to give "confidence intervals" that have a specific probability (usually .9 or .98) of containing an unknown quantity. In one study, participants were 98% sure that an interval contained the correct answer, but they were right only 68% of the time.
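The calibration measure described above can be sketched numerically: group judgments by stated confidence and compare each group's stated confidence against its observed accuracy. This is a minimal illustration with made-up data; the function name and data are mine, not from Plous or Lichtenstein and Fischhoff.

```python
# Sketch of confidence calibration: for each stated confidence level,
# compute the proportion of judgments at that level that were correct.
from collections import defaultdict

def calibration_table(judgments):
    """judgments: list of (stated_confidence, was_correct) pairs.
    Returns {stated_confidence: observed_accuracy}."""
    buckets = defaultdict(list)
    for conf, correct in judgments:
        buckets[conf].append(correct)
    return {conf: sum(v) / len(v) for conf, v in sorted(buckets.items())}

# Hypothetical data reproducing the pattern above: answers given with 98%
# confidence that turn out to be right only 68% of the time.
data = ([(0.98, True)] * 68 + [(0.98, False)] * 32 +
        [(0.70, True)] * 7 + [(0.70, False)] * 3)
table = calibration_table(data)
print(table)  # {0.7: 0.7, 0.98: 0.68}
```

A perfectly calibrated participant would produce a table where each observed accuracy equals its stated confidence; the gap at 0.98 is the overconfidence effect.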
In one summer session of an introductory CS course, 16 students out of a class of 68 overestimated their final course grade after receiving feedback from their first midterm, and 3 students underestimated it.
Overconfidence can be unlearned, just like any belief system. People who were initially overconfident learned to make better judgments after 200 trials with intensive performance feedback (Lichtenstein and Fischhoff, 1980). Arkes et al. (1987) found that overconfidence could be eliminated by giving participants feedback after five "deceptively difficult problems". Yet another study by Lichtenstein and Fischhoff showed that simply having participants generate opposing reasons was sufficient to reduce overconfidence, but this has not been confirmed in subsequent studies.
References:
Arkes, H.R., Christensen, C., Lai, C., and Blumer, C. (1987). Two methods of reducing overconfidence. Organizational Behavior and Human Decision Processes. 39, 133-144.
Lichtenstein, S., Fischhoff, B., Phillips, L. 1980. Training for calibration. Organizational Behavior and Human Performance, 26, 149-171.
Lichtenstein, S., Fischhoff, B., Phillips, L. 1982. Calibration of Probabilities: The state of the art to 1980. In D. Kahneman, P. Slovic, and A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp 306-334). Cambridge, England: Cambridge University Press.
Plous, S. 1993. The Psychology of judgment and decision making. New York: McGraw-Hill.
Plous, S. 1995. A Comparison of Strategies for Reducing Interval Overconfidence in Group Judgments. The American Psychological Association Inc. 80:4 p 443-454
18 June 2009
Why Don't Students Attend Class?
According to Friedman, Rodriguez, and McComb, who studied the reasons 350 undergraduate students gave for attendance and nonattendance in class, "males and females, older and younger students, students who live on and off campus, students who do and do not have jobs, students who have light and heavy course loads, and students who do and do not pay their own way in school attend classes with equal frequency." The only difference is that students with better academic records attend classes more regularly.
As to the differences in course characteristics, "students attended faculty taught courses less often than GTA [graduate TA] taught classes, larger classes less often than smaller classes, and natural science classes less often than others." However, courses that penalize absences encourage student attendance in any of the above course settings.
The primary reasons students attend class are internal: a feeling of responsibility to attend, interest in the subject matter, and the desire to get the material first hand rather than from other sources. Another study has also shown that better attendance is associated with higher grades (Wyatt 1992).
In another article (Jensen and Moore 2009), students who attend help sessions are mostly A and B students, with virtually no D and F students. Results also show that students who attend these help sessions get better grades, and that they attend class more often.
The bottom line is that attendance seems to correlate with higher grades. The question is whether students really want higher grades, or are just satisfied with a pass. It would be interesting to survey students on what grades they realistically expect to get given the effort they are willing to put into the course.
References:
Friedman, P., Rodriguez, F., McComb, J. 2001. Why Students Do and Do Not Attend Classes, Myths and Realities. College Teaching. 49:4, p124-133.
Jensen, P., Moore, R. 2009. What Do Help Sessions Accomplish in Introductory Science Courses? Journal of College Science Teaching. May/June 2009. p60-64.
Wyatt, G. 1992. Skipping class: An analysis of absenteeism among first-year college students. Teaching Sociology 20:201-7.
05 June 2009
Invention Activities
Knowledge transfer from one context to another depends on students learning at least two things: 1) the relevant concepts or skills, and 2) the situations to which they apply. Students are more likely to transfer knowledge from one context to another when instructional examples are abstract and relatively free of surface details. Instead of "tell-and-practice", where instructors tell the students which formula to use and then have them practice using it, it is much better to let students develop their own "solutions" to a number of contrasting cases before the formula is presented in a mini lecture. Contrasting cases force students to see beyond surface differences and explore the underlying deep structure. They constitute what is called an invention activity, in which students undertake productive work to note these differences and produce a general solution covering all the cases. Such productive activities help students let go of old interpretations and develop new ones.
Schwartz particularly advocates using mathematical tools or procedures in solving invention activities, to encourage solutions that are precise and yet general. Such tools also allow reflection on how their structure accomplishes its work in solving the problems. However, this does not have to be the case: invention activities can prime students in areas that do not involve quantitative analysis (Yu and Gilley, 2009).
In Schwartz's case, the combination of using visual (problem presentation), numeric (expressing solutions in quantitative mathematical terms), and verbal (student presentation of their solutions) helps to reinforce learning.
In computer science, when we ask our students to "invent" a solution to a programming assignment, this is an invention activity. The difference is that elsewhere invention activities are used as scaffolding for further learning, whereas here the programming assignment itself is how students learn the material. In other disciplines, the students usually don't "invent" the final solution; in computing, the students must get to the final solution themselves. Is that why so many students get frustrated with computer programming? After all, Schwartz did note that students can get tired of repeatedly adapting their inventions.
References:
Schwartz, D., and Martin, T. 2004. Inventing to Prepare for Future Learning: The Hidden Efficiency of Encouraging Original Student Production in Statistics Instruction. Cognition and Instruction. 22(2) 129 - 184.
Yu, B., Gilley, B. 2009. Benefits of Invention Activities Especially for Cross-Cultural Education. Retrieved on October 16, 2009 from http://www.iated.org/concrete2/view_abstract.php?paper_id=8166.
Blooming in Teaching and Learning
It is important to align teaching activities with learning outcomes, and students need to know what level of cognitive engagement they are expected to demonstrate. If only facts are presented in lectures, but students are expected to provide an analysis in their assignment without ever being taught how, the assessment of students' capabilities may not be effective. Bloom's Taxonomy provides a common language for coordinating what is taught with what is assessed. The six levels of Bloom's Taxonomy are: knowledge, comprehension, application, analysis, synthesis, and evaluation. The revised taxonomy recasts these as verbs: remember, understand, apply, analyze, evaluate, and create. Here are three ways of using "Blooming" to enhance learning:
- Instructor assigns a Bloom level to each item in the grading rubric, and provides additional learning activities to improve the levels where students score low.
- Introduce Bloom levels to students and ask them to "bloom" questions asked in class (i.e., rank the questions according to Bloom's levels). This helps students develop meta-cognitive skills and reflect on their learning. After a test, students are also shown the class average at each Bloom level and evaluate their own score at each level.
- Students are taught the Bloom levels and write questions at each level in small groups. The groups exchange the questions and rank them to see whether they correspond to the intended levels.
Reference:
Crowe, A., Dirks, C., Wenderoth, M., Biology in Bloom: Implementing Bloom's Taxonomy to Enhance Student Learning in Biology, CBE - Life Sciences Education, Vol. 7, 368-381, 2009
04 June 2009
Item Response Theory
How do we (as instructors) decide whether a test is "hard" or "easy"? Most of us will answer something along the lines of "it all depends". I find this observation, which Hambleton et al. make of the common responses to this question, interesting: "Whether an item [or test] is hard or easy depends on the ability of the examinees being measured, and the ability of the examinees depends on whether the test items are hard or easy!" Not very helpful, is it?

Item Response Theory (IRT) is a body of theory that applies mathematical models to students' scores on the individual questions of a test, making it possible to compare the difficulty of the questions and their power to differentiate student abilities. It rests on two basic postulates: 1) the performance of an examinee can be predicted by a set of factors called traits (or abilities), and 2) the relationship between examinees' item performance and the set of traits can be described by an item characteristic function, or item characteristic curve (ICC), like the one in the graph above. The x axis is the trait or ability score, and the y axis is the probability that an examinee with that score obtains the correct answer. As the ability of an examinee increases, so does the probability of a correct response to an item.

Each item in a test has its own ICC, and the ICC is the basic building block of IRT. The steepness of the curve shows how well the item differentiates examinees with low and high abilities: a flat curve is a poor discriminator, while a steep curve, like the one shown above, is a good one. If several ICCs of the same shape are plotted in the same graph, the curves on the left (or top) correspond to easier items than those on the right (or bottom).
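The ICC is commonly modeled with a logistic function; in the two-parameter logistic (2PL) model covered in texts like Baker's, the probability of a correct response depends on the examinee's ability, the item's difficulty, and its discrimination. A minimal sketch (the parameter values below are made up for illustration):

```python
import math

def icc(theta, a, b):
    """Two-parameter logistic ICC: probability that an examinee with
    ability theta answers correctly an item with discrimination a
    and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A steep curve (large a) discriminates low- from high-ability examinees well;
# a flat curve (small a) does not. A smaller b shifts the curve left: easier item.
easy_item = [icc(t, a=2.0, b=-1.0) for t in (-2, 0, 2)]
hard_item = [icc(t, a=2.0, b=1.0) for t in (-2, 0, 2)]

# At every ability level, the easier item gives a higher probability of success.
assert all(e > h for e, h in zip(easy_item, hard_item))
```

Note that at theta equal to the difficulty b, the probability is exactly 0.5; this is why curves further left correspond to easier items.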
By analyzing examinees' scores on each item of an exam using IRT software, one can 1) see which questions are or are not good indicators of student ability, and 2) respond objectively to the question of which items are "easy" or "hard".
Reference:
Baker, F. The Basics of Item Response Theory. Available online.
Graph is taken from http://echo.edres.org:8080/irt/ where one can also find a great deal of information on IRT.
Hambleton, R., Swaminathan, H., Rogers, H. Fundamentals of Item Response Theory. Newbury Park: Sage Publications. 1991.