The thinking that occurs covertly during many typical everyday tasks and activities can be studied objectively through think-aloud or talk-aloud protocols (also called protocol analysis) (Ericsson and Simon, 1980). One of the biggest obstacles in this type of study is finding nonreactive settings that reproduce the thinking process without altering how the subjects would normally think. The single most important precondition for successful direct expression of thinking is that participants are allowed to maintain undisrupted focus on completing the presented tasks. They should not describe or explain their thoughts to anyone during the process, and interviewers should limit their interactions with the subjects as much as possible during the sessions. Subjects can also be given a series of simple warm-up exercises (such as mentally multiplying two numbers) to practice directing their full attention to the presented task while verbalizing their thoughts.
When participants are instead asked to describe or explain their thinking, such verbalizations present "a genuine educational opportunity to make students' reasoning more coherent and reflective" (Ericsson and Simon, 1998). These subjects are more successful in mastering the material, generate more self-explanations, and monitor their learning better. Writing is found to be the most effective (as well as most demanding) activity for improving and developing students' thinking.
References:
Ericsson, K.A. and Simon, H.A. (1980). Verbal Reports as Data. Psychological Review. 87(3), pp 215 - 251.
Ericsson, K.A. and Simon, H.A. (1998). How to Study Thinking in Everyday Life: Contrasting Think-Aloud Protocols with Descriptions and Explanations of Thinking. Mind, Culture, and Activity. 5(3), pp 178 - 186.
09 December 2009
03 December 2009
EnGauging Students
EnGauging students is the process of simultaneously engaging students in learning and gauging what they are learning. Engaged students are more motivated to learn, and gauging them in the process provides feedback so they know what to change in their study habits. Some tools to enGauge students include:
- Brainstorming - list as many answers as possible to a question
- Case studies - solve a problem or situation in a real-world context
- "Clicker" questions - answer questions electronically in class
- Decision making - work together to recommend solutions to a problem
- Group exams - work together to discuss exam questions but write answers individually
- One-minute papers - write a short answer about a topic or question
- Pre / Post questions - answer questions before and after a topic is taught
- Strip sequence - arrange a series of steps or events into the correct order (e.g. Parsons puzzles; see the sketch after this list)
- Think-pair-share - think about possible answers to a question individually, and discuss with partners to come to a consensus
- Reading assessment - enlisting groups of students to design the activities and teach each other
- 99 words / seconds - summarize a topic / lecture in 99 words or in 99 seconds (see example here)
- KWL - have students answer three questions, individually or in a group, each class: "what we Know", "what we Want to know", and "what we Learned".
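As a concrete illustration of the strip-sequence idea in a programming context, here is a minimal Parsons-style sketch in Python; the target program and the checking logic are invented for illustration.

```python
import random

# The correct program is stored as an ordered list of lines; students
# receive a shuffled copy and submit the order they believe is correct.
SOLUTION = [
    "total = 0",
    "for n in [3, 1, 4, 1, 5]:",
    "    total += n",
    "print(total)",
]

def make_puzzle(lines):
    # Return a shuffled copy of the solution lines.
    puzzle = lines[:]
    random.shuffle(puzzle)
    return puzzle

def check_answer(submitted, solution=SOLUTION):
    # True when the student's ordering matches the solution exactly.
    return submitted == solution

strips = make_puzzle(SOLUTION)
for strip in strips:
    print(strip)
print("Correct order?", check_answer(strips))
```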
Handelsman, J., Miller, S., Pfund, C. (2007) Scientific teaching. W.H. Freeman & Company.
Desirable Difficulties
Given that the fundamental goal of education is to make changes in the learner's long-term memory, Bjork et al. have shown that learning conditions that introduce difficulties for learners are potent in enhancing long-term retention and transfer. Humans do not simply "store" information in long-term memory; rather, we relate new information to what is already known. Our long-term memory is not a playback device; it is primarily semantic in nature. "Desirable difficulties" have been shown to be effective in making changes in long-term memory. These include: spacing rather than massing study sessions; interleaving rather than blocking practice on separate topics or tasks; varying how instructional materials are presented or illustrated; reducing feedback; and using tests rather than presentations as learning events. See Bjork's Seven Study Tips and his slide presentation.
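To make the blocked-versus-interleaved contrast concrete, here is a minimal Python sketch that builds both kinds of practice schedule; the topic names and session counts are invented.

```python
from itertools import zip_longest

# Hypothetical topics and how many practice sessions each one gets.
topics = {"loops": 3, "recursion": 3, "sorting": 3}

def blocked(topics):
    # All practice items for one topic before moving to the next.
    return [t for t, n in topics.items() for _ in range(n)]

def interleaved(topics):
    # Alternate items across topics (a "desirable difficulty").
    rounds = zip_longest(*([t] * n for t, n in topics.items()))
    return [t for rnd in rounds for t in rnd if t is not None]

print(blocked(topics))      # ['loops', 'loops', 'loops', 'recursion', ...]
print(interleaved(topics))  # ['loops', 'recursion', 'sorting', 'loops', ...]
```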
Reference:
Bjork, R. and Linn, M. (ND). Introducing Desirable Difficulties for Educational Applications in Science (IDDEAS). Retrieved on December 3, 2009 from http://iddeas.psych.ucla.edu/IDDEASproposal.pdf.
Cognitive Load Theory (CLT)
Cognitive Load Theory is all about efficiency, where efficiency is defined in terms of learner performance and learner mental effort. CLT suggests that we have only a limited amount of cognitive capacity for solving problems in our short-term working memory (as opposed to long-term memory for information storage). The higher the learner performance and the lower the learner mental effort (which is expended in short-term working memory), the better! According to CLT, there are three main types of cognitive load when one tries to learn something: intrinsic load (due to the complexity of the content to be learned), germane load (due to the instructional activities), and extraneous load (due to mental resources wasted on irrelevant material). Thus, in a first-year computer programming course, learning to program in Java imposes the intrinsic load, providing worked examples on a variety of programming tasks contributes to the germane load, and requiring students to work within a complex integrated development environment (IDE) imposes extraneous load. Efficient instruction maximizes germane load and minimizes extraneous load.
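A common operationalization of this performance-versus-effort notion of efficiency in the CLT literature (Paas and Van Merriënboer's measure) standardizes both quantities and computes E = (zP - zE) / sqrt(2); higher E means more learning for less effort. Below is a minimal Python sketch of that calculation, with made-up scores and effort ratings.

```python
from math import sqrt
from statistics import mean, stdev

def z_scores(xs):
    # Standardize raw scores: subtract the mean, divide by the SD.
    m, s = mean(xs), stdev(xs)
    return [(x - m) / s for x in xs]

def efficiency(performance, effort):
    # Instructional efficiency E = (zP - zE) / sqrt(2): high performance
    # achieved with low reported mental effort counts as efficient.
    return [(zp - ze) / sqrt(2)
            for zp, ze in zip(z_scores(performance), z_scores(effort))]

# Made-up exam scores and 9-point mental-effort ratings for four learners.
print(efficiency([55, 70, 85, 90], [8, 7, 4, 3]))
```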
Cognitive load depends on the interaction of three components: the learning goal and its associated content, learner's prior knowledge, and the instructional environment.
Reference:
Clark, R.C., Nguyen, F., and Sweller, J. (2006). Efficiency in Learning. San Francisco: Pfeiffer.
25 November 2009
Worked Examples
An important finding of Cognitive Load Theory (CLT) (Sweller, 1988) is that studying partially worked examples provides better learning results for novices in computing than working through problems from scratch or studying completely worked examples. Gray et al. (2007) suggested the use of fading worked examples as an effective strategy for lowering cognitive load in the novice phase of skill acquisition in programming education.
A fading worked example (FWE) is a sequence of partially worked examples in which each problem contains one fewer worked step than its predecessor, so that, in the end, the learner is given a problem to solve with no worked steps provided. Thus in systems programming, instructors may start with a fully worked example (Clark et al., 2006) running from a problem statement through analysis, design, coding and testing. The next example may involve all steps except coding, the one after that may also remove design, and so on, until students are required to solve a problem given just a problem statement.
The key to creating FWEs is decomposing each learning goal into smaller steps. As an example of using FWEs for learning programming, each aspect of a programming language is identified: variables, expressions, assignment, iteration, subroutine calls, etc. Next, the use of each of these aspects in a program is related to the dimensions of problem solving, namely design, implementation and semantics.
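To make the fading concrete, here is a hypothetical backward-fading sequence in Python for one small task (computing an average); the task, steps, and function names are all invented. The middle stage, where the learner supplies only the final step, is essentially the completion example discussed below.

```python
# Stage 1: fully worked example, every step is shown and explained.
def average_v1(numbers):
    total = sum(numbers)   # worked step: accumulate the values
    count = len(numbers)   # worked step: count the values
    return total / count   # worked step: divide to get the mean

# Stage 2: the last step is faded; the learner supplies the return.
def average_v2(numbers):
    total = sum(numbers)
    count = len(numbers)
    ...  # learner completes this step

# Stage 3: only the problem statement remains.
def average_v3(numbers):
    """Compute and return the average of `numbers`."""
    ...  # learner writes the entire body
```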
How does studying worked examples compare to actual practice? Actively solving practice problems imposes much more mental work than reviewing worked examples, but skipping the study of worked examples may impose too much cognitive load on learners when they jump into practice assignments right away. (See Guzdial's blog entry.) Studies have shown that students who learned by doing took twice as much time to learn as students who learned from worked examples (Mayer, 2008, chapter 9). Students also benefit more from worked examples if they generate explanations as they study them (meta-cognitive skill development).
A compromise between worked examples and actual practice is a completion example where some of the steps are demonstrated in a worked example and the other steps are completed by the learner as in a practice problem.
It should be noted that as learners gain expertise, worked examples become redundant and actually detrimental, and learners are better off working all the problems themselves. This is where FWEs are most useful.
Reference:
Clark, R.C., Nguyen, F., and Sweller, J. (2006). Efficiency in Learning. San Francisco: Pfeiffer. (Chapter 8).
Gray, S., Clair, C., James, R., Mead, J. (2007). Suggestions for Graduated Exposure to Programming Concepts Using Fading Worked Examples. International Computing Education Research Workshop, Proceedings of the third international workshop on Computing education research. pp 99-110.
Mayer, R. E. (2008). Learning and Instruction (2nd ed). Upper Saddle River, NJ: Merrill Prentice-Hall.
Sweller, J. (1988). Cognitive Load During Problem Solving: Effects on Learning. Cognitive Science. 12(2), pp 257 - 285.
Labels: CLT, Cognitive Load Theory, Worked Examples
24 November 2009
Video Lectures
Internet-delivered video lectures have been found to prepare students for exams as effectively as live in-class lectures in a biology course (Lents and Cifuentes, 2009), although students were not initially enthused with the concept. Another experiment with video podcasts for a Java CS1 course resulted in lower than expected participation (Murphy and Wolff, 2009). In yet another study, however, students in a first-semester calculus-based mechanics course using multimedia modules not only learned more than students using a traditional textbook presentation, but also retained the information better (Stelzer et al. 2009).
Much effort has gone into research on the design of multimedia materials to improve learning. This includes designing materials to help students stay focused on the learning goals, using different input channels (visual and auditory) to help students build meaning and understanding, offloading (presenting words as narration rather than on-screen text), weeding (eliminating interesting but extraneous material), signaling (adding arrows or highlighting for emphasis), and aligning words and pictures (Mayer, 2001; Mayer and Moreno, 2003).
References:
Lents, N., and Cifuentes, O. (November / December 2009). Web-Based Learning Enhancements: Video Lectures Through Voice-Over PowerPoint in a Majors-Level Biology Course. Journal of College Science Teaching. 39(2), pp 38 - 46.
Mayer, R.E. (2001). Multimedia Learning. Cambridge: Cambridge University Press.
Mayer, R.E. and Moreno, R. (2003). Nine Ways to Reduce Cognitive Load in Multimedia Learning. Educational Psychologist. 38(1), pp 43 - 52.
Murphy, L. and Wolff, D. (2009). Creating Video Podcasts for CS1: Lessons Learned. NorthWest Academic Computing Consortium (NWACC), Journal of Computing Sciences in Colleges, 25(1). pp 152 - 158.
Stelzer, T., Gladding, G., Mestre, J., Brookes, D. (February 2009). Comparing the Efficacy of Multimedia Modules with Traditional Textbooks for Learning Introductory Physics Content. American Association of Physics Teachers. 77(2), pp 184 - 190.
20 November 2009
Good Problems and Effective Structures for Groups
Context-rich group problems help students focus on the concepts and principles needed to solve them. They have the following general characteristics:
- Problem statement does not always specify the unknown to be computed.
- More information may be available than is needed to solve the problem.
- Some of the information needed to solve the problem may be missing from the question. Students need to determine what the missing information is and how to come up with it.
- Reasonable assumptions may need to be made to simplify the problem and allow for a meaningful solution.
Homogeneous gender groups and mixed gender groups of two females and one male performed better than groups with two males and one female.
Groups of mixed ability performed as well as groups consisting of only high-ability students (who tend to make problems more complicated than necessary or overlook the obvious), and better than groups of only low- or medium-ability students. Low-ability students contribute by keeping the group on track, pointing out the obvious and simple ideas, and requesting clarification of the concepts and procedures needed to solve the problem; in justifying their solutions to them, higher-ability students sometimes catch their own wrong assumptions and mistakes.
To avoid any one student dominating a group, or a group jumping at the first possible solution in order to avoid conflict, two strategies can be used:
- have students take on special roles. In a three-member group, the roles of Manager (who designs plans for action and suggests solutions), Skeptic (who questions premises and plans), and Checker / Recorder (who organizes and keeps track of the discussions) can be assigned.
- have the students reflect on how well their groups have worked and suggest ways of improvement at the end of each activity.
Heller, P. and Hollabaugh, M. (July 1992). Teaching Problem Solving Through Cooperative Grouping. Part 2: Designing Problems and Structuring Groups. American Journal of Physics. 60(7), pp 637 - 644.
Is Collaborative Group Learning Useful?
A study on the effectiveness of group problem solving was conducted by Heller et al. (1992). The instructional approach was as follows:
- students were taught general problem-solving strategies
- a set of context-rich practice and test problems were given to help students focus their attention on the need to use conceptual knowledge to analyze a problem
- students worked in carefully managed groups to practice solving context-rich problems
Problem solutions were evaluated on criteria such as:
- evidence of conceptual understanding
- usefulness of information identified to solve the problems
- match of equations with information identified
- reasonable plan
- logical progression
- appropriate mathematics
Reference:
Heller, P., Keith, R., Anderson, S. (July 1992). Teaching Problem Solving Through Cooperative Grouping. Part 1: Group versus Individual Problem Solving. American Journal of Physics. 60(7), pp 627 - 636.
15 November 2009
Student Cheating in CMS
Do students tend to cheat more when they write exams or quizzes using online course management systems (CMS) like WebCT or Blackboard? Not according to Charlesworth et al. (2006). They surveyed 178 students, first asking for their definitions of cheating and their main reasons to cheat in a typical classroom. Most students define cheating as copying or taking answers from others, and their major reasons for cheating include, in order of importance: 1) laziness, 2) grades, 3) pressure to do well and not fail, 4) lack of knowledge, and lastly, 5) opportunity. Given that students are not the best at assessing themselves, I am not sure this list is accurately ordered. Here is how I would re-interpret it. As noted in the paper, "[m]any students report lengthy study sessions yet realize incomplete understanding due to factors such as poor study skills and lack of knowledge. As a result, students may feel unprepared for quizzes and examinations, and seek alternative methods to ensure success." If students do not grasp the material (i.e. lack of knowledge (4)), they feel pressured to succeed (3) to obtain good grades (2); but since hard work is difficult, some may give up, blame it on their laziness (1), and, given the right opportunity (5) to cheat, do so.
It is also interesting to note from the paper that students whose GPA is between 2.4 and 3.0 are more likely to cheat on written assignments. However, the study does not show that a web-enhanced course automatically increases the amount of cheating.
Reference:
Charlesworth, P., Charlesworth, D., Vician, C. (September 2006). Students' Perspectives of the Influence of Web-Enhanced Coursework on Incidences of Cheating. Journal of Chemical Education. 83(9), pp 1368 - 1375.
14 November 2009
Problem Based Learning
Problem-based learning (PBL) is not simply throwing a problem at the students and letting them figure out the solution all by themselves; there are significant support elements to guide the learning. It is actually a well-defined, structured instructional method, with appropriate scaffolding, learning resources, instructional support, tutor support, group discussions, etc., that students work through in seven steps:
- students clarify any terms and concepts in the problem text
- students generate a definition of the problem (what is really to be solved)
- students brainstorm ideas, hypothesize, and raise questions about the problem
- students systematize and scrutinize the ideas
- students produce a list of issues for individual learning (the learning goals / content behind the problem)
- the learning issues guide student study activities as students work through the available resources
- students share findings, review and discuss literature, solve other problems, and synthesize what is learned.
References:
Hmelo-Silver, C., Duncan, R.G., Chinn, C.A. (2007). Scaffolding and Achievement in Problem-Based and Inquiry Learning: A Response to Kirschner, Sweller, and Clark (2006). Educational Psychologist. 42(2), pp 99 - 107.
Schmidt, H.G., Loyens, S.M., van Gog, T., Paas, F. (2007). Problem-Based Learning is Compatible with Human Cognitive Architecture: Commentary on Kirschner, Sweller, and Clark (2006). Educational Psychologist, 42(2), pp 91 - 97.
Minimal Guided Learning
Are there really any benefits to minimally guided learning, as practiced in a number of classroom activities in the form of inquiry learning, problem-based learning, invention activities, etc.? According to Kirschner et al. (2006), not much. Their argument is that problem solving takes place in working memory, which is severely limited in capacity when dealing with novel information. Since learning ultimately means altering long-term memory, they conclude that 1) changes in short-term memory will likely not produce changes in long-term memory, because unrehearsed information is lost within 30 seconds, and 2) the heavy cognitive load is detrimental to learning.
Instead, worked examples with strongly guided instruction, and process worksheets that describe how to solve problems with specific hints and rules of thumb, are more effective for student learning. Kyllonen and Lajoie (2003) found that highly structured instructional presentations benefit less able learners, while unstructured presentations benefit more able learners. Clark (1982) also noted that less able learners tend to choose less guided approaches to learning, and they learn less. Higher-aptitude students tend to choose more guided approaches, but they could have learned even more had they chosen less guided instruction.
Is it possible, then, that CS education creates such a heavy cognitive load on our students, especially first-year students, that it results in such high attrition rates? Would providing students with detailed worked programming examples, strategies for solving programming problems, and worksheets that let them engage in deliberate practice be a better approach to helping them become skilled programmers?
References:
Clark, R.E. (1982). Antagonism between Achievement and Enjoyment in ATI Studies. Educational Psychologist, 17, pp 92 - 101.
Kyllonen, P.C., and Lajoie, S.P. (2003). Reassessing aptitude: Introduction to a Special Issue in honor of Richard E. Snow. Educational Psychologist, 38, pp 79 - 83.
Kirschner, P.A., Sweller, J., Clark, R.E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 41(2), pp 75 - 86.
Sweller, J., Kirschner, P., Clark, R.E. (2007). Why Minimally Guided Teaching Techniques Do Not Work: A Reply to Commentaries. Educational Psychologist. 42(2), pp 115 - 121.
06 November 2009
Tutorials and the Significant Role of TAs
It is unfortunate that many TAs are so busy with their research and course work that they often have minimal time to devote to tutorial preparation, whether in the material to be covered or in teaching methods. A study by Koenig et al. (2007) found that students' performance gains drastically improved in tutorials where they worked in groups with a TA interacting through Socratic dialogue, compared with tutorials where they worked alone, learned in a traditional lecture setting, or even worked in groups by themselves without TA input. Student satisfaction with tutorials is clearly linked to the teaching performance of the TAs.
Interestingly though, when students were asked which style of tutorial they preferred, more indicated a traditional lecture style than Socratic group discussion with a TA, even though the former is less effective. Perhaps the Socratic style moves students out of their comfort zone a tad more than what they are used to, and seems to demand more work from them? In any case, it appears to be more successful in moving students away from their initial misconceptions.
To implement such a teaching / learning style in tutorials, TAs will require weekly training sessions: they have to work through the material, and they need guidance on how to use Socratic dialogue with each tutorial topic.
Reference:
Koenig, K., Endorf, R., Braun, G. (15 May 2007). Effectiveness of Different Tutorial Recitation Teaching Methods and Its Implications for TA Training. The American Physical Society. Physical Review Special Topics - Physics Education Research. 3, 010104-1 to 010104-9.
05 November 2009
Multiple Choice Questions
Here are some suggestions and results from studies of multiple-choice questions (Haladyna and Downing, 1989).
- Three-option questions are optimal for most examinees: they provide the most information at the mid range of the score scale, while two-option questions provide the most information for high-scoring examinees, and four- and five-option questions provide the most information for low-scoring examinees.
- Use question format rather than sentence completion format.
- Use as many functional distractors as are feasible. Eliminate dysfunctional distractors.
- Type K questions (i.e. where each option is a combination of answers, such as A) 1, 2, and 3, B) 2 or 3, etc.) are inefficient to construct and laborious to read, and make a heavier cognitive demand on students, though they can be used to measure complex, higher-level thinking skills.
- Place the keys (correct answers) roughly equally across the option positions throughout the exam (see the sketch after this list).
- Avoid incorrect grammar that may clue the examinees to the correct option.
- Humor in the options lowers test anxiety.
- Word the question positively and avoid negative phrasing.
- Common student errors can be used to make up distractors.
- "All of the above" option makes the questions more difficult and less discriminating.
- Avoid, or use sparingly, the option "None of the above". As with "All of the above", such questions are more difficult and less discriminating, and test scores are less reliable.
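As an illustration of the key-balancing advice above, here is a small hypothetical Python helper that uses each answer position an equal number of times and then shuffles them so the placement stays unpredictable.

```python
import random

def balanced_key_positions(n_questions, n_options=3):
    # Use each option position (0 = A, 1 = B, ...) an equal number of
    # times, then shuffle so the placement is unpredictable.
    positions = [i % n_options for i in range(n_questions)]
    random.shuffle(positions)
    return positions

print(balanced_key_positions(12))  # e.g. [2, 0, 1, 0, 2, 1, ...]
```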
Haladyna, T. and Downing, S. (1989). Validity of a Taxonomy of Multiple-Choice Item-Writing Rules. Applied Measurement in Education. 2(1), pp 51 - 78.
Haladyna, T. and Downing, S. (1989). A Taxonomy of Multiple-Choice Item-Writing Rules. Applied Measurement in Education. 2(1), pp 37 - 50.
30 October 2009
Promising Practices in Undergraduate STEM Education
In STEM education transformation, it is important to evaluate changes in light of implementation and student performance standards. Froyd (2008) puts together eight promising practices in STEM transformation, along with ways these practices can be evaluated:
- Use of learning outcomes
- Organize students in small groups
- Organize students in learning communities to promote integrated and interdisciplinary learning
- Organize content based on problem or scenario
- Provide students feedback through systematic formative assessment
- Design in-class activities to actively engage students
- Provide students with the opportunities to engage in undergraduate research
- Have faculty initiate student – faculty interactions
Evaluation of each practice can consider:
- whether the practice is relevant for the course
- whether sufficient resources are available
- the amount of effort required for the implementation
- whether students show any performance gain with the new practice compared to other students
- comparison of different approaches to the implementation of the same practice, different class settings, students, etc.
Froyd, J. (2008). White Paper on Promising Practices in Undergraduate STEM Education. Retrieved on October 30, 2009 from here.
23 October 2009
Innovative Ways of Teaching Computer Science (Part 2)
Please add to this list if you have any good ideas on innovative ways of teaching Computer Science:
- Team teaching
- Turning lectures into labs (given that many students bring their laptops to lectures, why not form groups with at least one laptop per group for some hands-on activities?)
- Treating programming assignments like math homework problems (why do we give only big assignments most of the time?)
- Play games (use games to engage students, see Thiagi web site)
- Invention activities
- Use humor, group activities, field trips
- Get rid of textbooks or let students learn as much as they can on their own and share (these are two ends of the spectrum)
- Let students decide what practical problems they are interested in solving with guidance from faculty (e.g. program iPhone, web app, robotics, etc.) and structure the course around them.
22 October 2009
Problem Solving
Definition: Problem solving is cognitive processing directed at achieving a goal when no solution method is obvious to the problem solver. (Mayer, 1992)
Proposition #1: Problem solving abilities do not transfer between disciplines. (Maloney, 1993)
Proposition #2: A student's strengths and weaknesses in problem solving are the same regardless of the environment. As an example, a student's strengths and weaknesses in solving a complicated trip planning problem are the same in solving a physics problem or performing in a work place. (Adams and Wieman, 2007)
Implications: if #1 is true, the argument that math and logic help students in Computer Science is no longer valid?
If #2 is true, all Computer Science students should play a lot more video games?
References:
Adams, W. and Wieman, C. (2007). Problem Solving Skill Evaluation Instrument - Validation Studies. Retrieved on October 22, 2009 from here.
Maloney, D.P. (1993). Research on Problem Solving: Physics, in Handbook of Research on Science Teaching and Learning edited by D.L. Gabel. Toronto: Macmillan. pp 327 - 354.
Mayer, R.E. (1992). Thinking, problem solving, cognition (2nd ed). New York: Freeman.
Innovative Approaches to Teaching Computer Science (Part 1)
What we teach in Computer Science depends a lot on how we think of Computer Science as a discipline. According to Lewis and Smith (2005), the segregationists think that it is mainly problem solving, algorithmic analysis, theory building, and not an art. The integrationists think that it should be driven by what is needed in other computing fields and majors, such as applied computing in bioinformatics, engineering, and only partly by industries. The synergists think it should transcend any discipline where computing concepts can be applied in much broader terms in non-computing specific areas. As an example, the computing concept of pattern matching may be applied to DNA sequence matching initially (synergistic model), but now it is core to DNA analysis in bioinformatics (integration model), and complexity theories may come out of special algorithms in this area (segregation model).
How we teach Computer Science can also be influenced by these three models. One synergistic way of teaching Computer Science is to consider the approaches to teaching in the fine arts and how they can be applied to our discipline. Computer Science is traditionally taught in a format that is instructor-centered (the instructor is the expert, students are the novices), where the subject matter is abstracted from its practical use (toy programs vs. real-life applications) and taught in individualized, non-collaborative (to avoid cheating) environments. In contrast, the fine arts approach to teaching has much more student - student collaboration, student - instructor engagement, etc. (Barker et al. 2005). This is starting to change as we see more Just-in-Time teaching, clicker questions during lectures, pair programming, peer instruction, peer evaluations, in-class activities, group projects, two-stage exams, media programming, etc. to increase enrollment and reduce attrition, especially among female students in Computer Science. The paper by Barker et al. also has a good background summary on the attrition of women in Computer Science and on how a fine arts approach may help in Computer Science teaching.
An integrative approach to teaching Computer Science can be seen in the paper by Cushing et al. (2009), which reports on an entry-level Computer Science course integrated with Computational Linguistics that included case studies, a term project, lecture series and seminars. Of the 70 students who completed the course, 24 went on to the next quarter of Computer Science, several of whom had not originally intended to. It is not clear how this compares to other years.
References:
Lewis, T. and Smith, W. (June 2005). The Computer Science Debate: It's a Matter of Perspective. The SIGCSE Bulletin. 37(2), pp 80 - 84.
Barker, L. Garvin-Doxas, K., Roberts, E. (February, 2005). What Can Computer Science Learn From a Fine Arts Approach to Teaching? SIGCSE 2005, pp 421 - 425.
Cushing, J., Hastings, R., Walter, B. (2009). CS0++ Broadening Computer Science At The Entry Level: Linguistics, Computer Science, And The Semantic Web. The Journal of Computing Sciences in Colleges, Papers of the Sixteenth Annual CCSC Midwestern Conference, October 9 - 10, 2009. pp 135 - 142.
17 October 2009
Curriculum Change
What or who drives curriculum change? Some claim it should be the academic faculty; others, industry, employers, or best practices; still others, the students. Gruba et al.'s (2004) extensive survey finds that computer education curriculum changes are driven by individuals, politics, and fashion (what is attractive to students) more than by academic merit and external curricula. So how can curriculum changes be made more objectively?
Peter Wolf, Associate Director of Teaching Support Services at the University of Guelph, co-edited the New Directions for Teaching and Learning publication "Curriculum Development in Higher Education: Faculty-Driven Processes and Practices". He is also the first author of the Handbook for Curriculum Assessment. In the handbook, he suggests a curriculum development process that incorporates Donald Kirkpatrick's four-level training assessment model; the process is evidence-based, with assessment informing and guiding development throughout. Here is a synopsis of the individual processes:
Curriculum Development
Peter Wolf's model of the curriculum development process is a top-down model which starts with the learning goals and outcomes expected of an ideal graduate, and then refines these into how the goals can be implemented within a program / course structure and specific learning activities.
Training / Learning Assessment
Donald Kirkpatrick (1994) proposed a four-level model to assess the effectiveness of training:
- Reaction - Did the learners like the program? Was the material relevant to their work? This type of evaluation is often called a “smilesheet.” According to Kirkpatrick, every program should at least be evaluated at this level to provide for the improvement of a training program.
- Learning - Did the learners learn anything? Have the students advanced in skills, knowledge, or attitude? Pre-tests and post-tests are often administered to assess student learning.
- Transfer - Are the newly acquired skills, knowledge, or attitude being used in the everyday environment of the learner? For many trainers this level represents the truest assessment of a program's effectiveness. It is also most difficult to test at this stage.
- Results - Is there any increased production, improved quality, decreased costs, reduced frequency of accidents, increased sales, and even higher profits or return on investment from the training?
By combining Wolf's and Kirkpatrick's models, each stage of Wolf's development process can be assessed at the various levels of Kirkpatrick's assessment model, so that each informs the other.
References:
Gruba, P., Moffat, A., Søndergaard, H., and Zobel, J. 2004. What drives curriculum change?. In Proceedings of the Sixth Conference on Australasian Computing Education - Volume 30 (Dunedin, New Zealand). R. Lister and A. Young, Eds. ACM International Conference Proceeding Series, vol. 57. Australian Computer Society, Darlinghurst, Australia, 109-117.
Kirkpatrick, D.L. (1994). Evaluating Training Programs: The Four Levels. San Francisco, CA: Berrett-Koehler.
Wolf, P., A. Hill, and F. Evers, The Handbook for Curriculum Assessment, 2006, Guelph University, obtained February 2007 from here.
12 October 2009
Student Sharing (legitimately)
While students are repeatedly warned against plagiarism, are there any advantages to having them share their work with each other after submission? One possible benefit is that students get to see how their peers completed their assignments. This is particularly useful if the assignment is open-ended, where students are free to choose the problems they would like to solve, the essays they would like to write, the projects they would like to work on, or any areas of interest related to the course subject they may want to pursue. This in turn creates a multitude of learning contexts that promote knowledge transfer. According to Bransford et al. (2000), knowledge transfer is influenced by a number of factors, some of which are:
- degree of mastery of original subject (without a good understanding of the original material, transfer cannot be expected)
- degree of understanding rather than just memorizing facts
- amount of time to learn, and more specifically the time on task (or deliberate practice)
- motivation (whether students are motivated by performance or learning)
- exposure to different contexts
- problem representations and relationships between what is learned and what is tested
- student metacognition: whether learners actively choose and evaluate strategies, consider resources, and receive feedback (active transfer), or depend on external prompting (passive transfer)
References:
Head, C and Wolfman, S. (2008). Poogle and the Unknown-Answer Assignment: Open-Ended, Sharable CS1 Assignments. SIGCSE 2008. pp 133 - 137.
Cho, K., Schunn C., Kwon, K. (2007). Learning Writing by Reviewing. Retrieved on October 13, 2009 from here.
Bransford, J., Brown, A., Cocking, R. (eds). (2000). How People Learn. Washington: National Academy Press.
09 October 2009
Case-Based Teaching and Data Analysis
Case-Based Teaching and Learning Gains
- Case-based teaching that emphasizes problem solving and discussion improves student performance significantly on exams throughout the semester. It also enhances students' abilities to correctly answer application- and analysis-type questions.
- While case-based teaching improves student exam performance overall, lecture-based teaching produces more top-performing students (90% or higher exam score) than case-based teaching. I wonder whether the "top" students we traditionally think of are so well trained in learning under the didactic teaching method that, when they are exposed to other learning styles, they just become lost!
Here are different data analyses that can be done to determine the effects of changes made in a course (a small sketch of the second analysis follows the list):
- Use prerequisite-course final exam scores or entrance exam scores to account for variation in student academic ability when comparing students from different terms.
- Compare the first test score with the final test score in a course to see how students improve at different levels of learning (which can follow Bloom's categories, or simply two levels: knowledge-comprehension / application-analysis).
- Compare the total exam points earned by students in different grade bands (90% or higher, 80% - 90%, 70% - 80%, etc.).
- Classify course material / homework / etc. by Bloom level and correlate with the test scores from the second analysis above.
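Here is a minimal Python sketch of the second analysis above (comparing first and final test scores at two levels of learning); the student records and numbers are made up.

```python
from statistics import mean

# Made-up records: each student has first and final test scores split
# into the two levels mentioned above.
students = [
    {"first": {"knowledge": 60, "application": 40},
     "final": {"knowledge": 75, "application": 65}},
    {"first": {"knowledge": 70, "application": 55},
     "final": {"knowledge": 80, "application": 72}},
]

def gain_by_level(students, level):
    # Average improvement from first to final test at one level.
    return mean(s["final"][level] - s["first"][level] for s in students)

for level in ("knowledge", "application"):
    print(level, gain_by_level(students, level))
```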
Chaplin, Susan. (September / October 2009). Assessment of the Impact of Case Studies on Student Learning Gains in an Introductory Biology Course. Journal of College Science Teaching. pp 72 - 79.
Case Studies Resources:
National Center for Case Study Teaching in Science:
http://ublib.buffalo.edu/libraries/projects/cases/case.html
The case page (with cases for many different science areas):
http://ublib.buffalo.edu/libraries/projects/cases/ubcase.htm
04 October 2009
Student Cheating
In a recent student survey conducted in one of the Computer Science courses at UBC, we asked the following question, with the preamble: "Just like all your other responses in this survey, no instructor will have access to your identity. In particular, your responses to the following two questions will not be used in any way as evidence of violation of academic misconduct."
Do you believe you may have ever violated the academic conduct guidelines of a UBC course and, if so, what activities were you engaged in?
Out of 81 responses we received, no student admitted to having violated the academic conduct guidelines. Of course, it is quite possible that UBC students are highly ethical in nature, or the question may not be clear enough about what constitutes "academic conduct guidelines". In any case, even with the preamble, students may not have felt comfortable revealing the truth, because the survey did ask for their student number at the beginning! In a study of student cheating by Sheard et al. (2002), students' self-reported cheating ranges from around 10% to 47%. In general, there are internal and external factors that cause students to cheat, but the three most common reasons are time pressure, possible failure of the course, and difficulty of the work.
One of the more publicized cases of student cheating in Computer Science is reported by Zobel (2004). In that case, students cheated by purchasing assignments and even by having others write exams for them. As faculty tried to investigate the case, they were met with violent threats and even office break-ins. It all sounded like a soap opera, but it is understandable that many faculty members and administrators do not want to deal with cheating cases. After all, they are costly for everyone involved.
Greening et al. (2004) and Joyce (2007) examine ways of integrating ethical content into computing curricula. A student survey built around a number of cheating scenarios seems effective at challenging students' thinking on critical ethical issues. It is also important that faculty have a good grounding in philosophical frameworks to guide the students, including the utilitarian, deontological, virtue-based, and relativist frameworks.
The prevalence of cheating, especially on assignments, works against student learning, because properly designed assignments are an effective way to help students construct their knowledge. If instructors believe that students mostly cheat on the assignments, they tend to place less emphasis (and hence fewer marks) on them, and students become even less motivated to do the assignments. Why is it so difficult to make up Computer Science assignments that are fun and built from small, incremental tasks that engage the students?
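On that last question, one possible shape for such an assignment is sketched below: a hypothetical word-frequency exercise decomposed into small, independently testable functions, each with its own doctests so students get feedback at every incremental step. The task and names are illustrative, not from any particular course.

```python
# A hypothetical incremental assignment: students complete one small,
# independently testable function at a time.

def tokenize(text):
    """Milestone 1: split text into lowercase words.

    >>> tokenize("The cat saw the dog")
    ['the', 'cat', 'saw', 'the', 'dog']
    """
    return text.lower().split()

def count_words(words):
    """Milestone 2: count occurrences of each word.

    >>> count_words(['the', 'cat', 'saw', 'the', 'dog'])['the']
    2
    """
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return counts

def most_common(counts, n):
    """Milestone 3: return the n most frequent words.

    >>> most_common({'the': 2, 'cat': 1}, 1)
    ['the']
    """
    return sorted(counts, key=counts.get, reverse=True)[:n]

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # each milestone gives immediate pass/fail feedback
```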
References:
Zobel, Justin. (2004). Uni Cheats Racket: A Case Study in Plagiarism Investigation. Retrieved on October 4, 2009 from http://crpit.com/confpapers/CRPITV30Zobel.pdf.
Greening, T., Kay, J., and Kummerfeld, B. (2004). Integrating Ethical Content Into Computing Curricula. Sixth Australasian Computing Education Conference, Dunedin, NZ. Retrieved on October 4, 2009 from http://crpit.com/confpapers/CRPITV30Greening.pdf.
Sheard, J., Carbone, A., and Dick, M. (2002). Determination of Factors which Impact on IT Students' Propensity to Cheat. Australasian Computing Education Conference (ACE2003), Adelaide, Australia. Retrieved on October 4, 2009 from http://crpit.com/confpapers/CRPITV20Sheard.pdf.
Joyce, D. (2007). Academic Integrity and Plagiarism: Australasian perspectives. Computer Science Education. 17(3), pp 187 - 200.
Asking Questions
When we pose questions to our students, they go sequentially and iteratively through four stages: comprehension, memory retrieval, judgment, and mapping (Conrad and Blair, 1996; Tourangeau, 1984; Oksenberg and Cannell, 1977). At any one of these stages, students may find it difficult to answer a question because of the choice of words and the way the question is asked. This may not be because of their misconceptions about the subject matter; it may instead indicate that the question needs to be revised. Ding et al. (2009) summarize their results from validating clicker questions through student interviews.
In the comprehension stage, we want to make sure the students understand the problem accurately. In a think-aloud session, we may be able to see whether students have misinterpreted a question. Without such a check, a misinterpretation can easily be dismissed as a misconception the students hold.
In the memory retrieval stage, we want to make sure the students are accessing the relevant information to solve the problem. If any part of the question cues the students onto the wrong path, the question can be seen as a "trick" question that is not testing student learning.
In judgment, students need to perform the appropriate task to solve the problem, given a correct retrieval of the relevant information. If the question is not clear about the context and conditions, the students may not be able to reach a definite conclusion. In those cases, the question needs to be clarified.
In mapping, students need to map the right answer to the right choice. Here, the choices provided must be clear so that the students can make a definite selection.
Validating questions takes time, and student interviews seem to be an effective way of helping instructors refine their questions. Teachers can also learn something about student responses, and see whether a majority are getting a question wrong, by examining the exam scores and their correlation with other data. Such forensic study may reveal how students interpret and think through the questions.
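As a rough sketch of such forensic study, one can compute each question's difficulty (the fraction of students answering correctly) and its correlation with the total exam score; a low or negative correlation flags a question worth interviewing students about. The score matrix below is invented data.

```python
# A minimal sketch of exam-question forensics: per-item difficulty and
# item-total correlation. Requires Python 3.10+ for statistics.correlation.
import statistics

# rows = students, columns = questions; 1 = correct, 0 = wrong (invented data)
scores = [
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
]

totals = [sum(row) for row in scores]

for q in range(len(scores[0])):
    item = [row[q] for row in scores]
    difficulty = sum(item) / len(item)  # fraction who got it right
    # correlation between getting this item right and the overall score;
    # a low or negative value flags a question worth re-examining
    discrimination = statistics.correlation(item, totals)
    print(f"Q{q + 1}: difficulty={difficulty:.2f}, "
          f"item-total r={discrimination:.2f}")
```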
References:
Ding, L., Reay, N.W., Lee, A., and Bao, L. (2009). Are We Asking the Right Questions? Validating Clicker Question Sequences by Student Interviews. American Journal of Physics. 77(7), pp 643 - 650.
Conrad, F. and Blair, J. (1996). From Impressions to Data: Increasing the Objectivity of Cognitive Interviews. Proceedings of the Section on Survey Research Methods, American Statistical Association. (ASA, Alexandria, VA). p 1.
Tourangeau, R. (1984). Cognitive Science and Survey Methods. Cognitive Aspects of Survey Design: Building a Bridge Between Disciplines. Edited by T. Jabine, M. Straf, J. Tanur, and R. Tourangeau. (National Academies Press, Washington, DC). p 73.
Oksenberg, L. and Cannell, C. (1977). Some Factors Underlying the Validity of Response in Self-Report. Bulletin de l'Institut International de Statistique. 48, pp 325 - 346.
Labels: asking, clicker, interviews, questions, validating
01 October 2009
7 Techniques of Teaching / Learning
deWinstanley summarizes Bjork's seven studying techniques in the reference below. These seven learning techniques have corresponding implications for teachers. Here is the list for teachers:
- Allocate your attention efficiently. Anything that does not help your students bridge what you want them to learn with what you tell or show them is a distraction. If you tell a story, make sure there is a connection with what you want them to learn. Use questions to help your students focus.
- Interpret and elaborate on what you are trying to teach. Students need context to apply what they learn so they can have better recall and retention.
- Make your teaching variable (e.g. location, interpretation, example). Use a variety of contexts to illustrate what you want to teach (see points 1 and 2). Try contrasting cases.
- Space your teaching of a topic or area and repeat your teaching several times. Instead of blocking or massing what you want to teach on XXX in one big chunk of time, try to space it out over a number of sessions.
- Organize and structure the information you are trying to teach. Provide a skeleton outline rather than a full outline so students pay more attention. Provide, or have the students produce (see point 7), a concept map that captures the concepts and their relationships with one another.
- Help students to visualize the information. Reinstate the context during a test. Use mnemonics, graphs, props, etc., but make sure they are helpful for the student to build bridges to the learning content (see point 1).
- Generate Generate Generate ... Retrieve Retrieve Retrieve. Give students lots of tests and opportunities to construct their knowledge. Feedback is good, but even if they don't get immediate feedback, have them generate their knowledge over and over again. (A minimal spaced-retrieval sketch follows this list.)
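Here is a minimal sketch of how spacing (point 4) and repeated retrieval (point 7) can be combined in a Leitner-style review scheduler; the box intervals and card contents are arbitrary choices of mine, not from deWinstanley or Bjork.

```python
# A minimal Leitner-style sketch combining spacing with repeated
# retrieval. Intervals and card contents are arbitrary illustrations.
import random

BOX_INTERVALS = {1: 1, 2: 3, 3: 7}  # box number -> days between reviews

cards = [
    {"prompt": "What is a base case?", "box": 1, "due_in": 0},
    {"prompt": "Define worst-case complexity.", "box": 1, "due_in": 0},
]

def review(card, answered_correctly):
    """Move the card up a box on success, back to box 1 on failure."""
    if answered_correctly:
        card["box"] = min(card["box"] + 1, 3)
    else:
        card["box"] = 1
    card["due_in"] = BOX_INTERVALS[card["box"]]

# Simulate one review session over the cards that are due today.
for card in cards:
    if card["due_in"] == 0:
        correct = random.random() < 0.7  # stand-in for a student's answer
        review(card, correct)
        print(card["prompt"], "-> box", card["box"],
              "next review in", card["due_in"], "day(s)")
```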
deWinstanley, Patricia. (1999). The Science of Studying Effectively. Bjork's Seven Studying Techniques. Retrieved on September 4, 2009 from http://www.oberlin.edu/psych/studytech/.
24 September 2009
Misconceptions
Constructivist learning theory claims that "all learning involves the interpretation of phenomena, situations, and events, including classroom instruction, through the perspective of the learner's existing knowledge" (Smith et al., 1993). As such, given the prior knowledge students bring into the classroom, learning involves confronting and replacing the misconceptions that students hold. (Otherwise, they would have no need for formal training.) But how do students recognize these misconceptions, and how can they correct them, given that misconceptions are hard to change? Traditional strategies include lectures, assignments, exams, etc. Well-constructed clicker questions can be particularly effective in exposing misconceptions. We can also learn a lot from video games. Good video games (Gee, 2005) can expose players' misconceptions of the game by slowly guiding the players to gain proficiency in game play, whether it is the motor skills required to use the controls, the mental skills to solve the problems, or the awareness of hidden story lines. Rewards have been used effectively to grab the player's attention. How we can turn our classroom experience into a well-constructed video game remains a mystery and a challenge for all instructors!
According to Smith, diSessa, and Roschelle, instruction is supposed to replace misconceptions by confronting students with them. Instead of replacement, perhaps learners are integrating what they know and trying to resolve the conflicts they encounter when new information is presented. There may be some knowledge replacement, but I suspect that what is going on is more integration and resolution of these conflicts than replacement.
McCartney et al.'s paper shows some of the misconceptions in how CS students determine algorithm efficiency. Given two algorithms, the students were asked which one solves a certain problem more efficiently. The goal was to see whether they consider how the algorithms behave in the worst case - which most experts would do. Although the majority of the students picked the right algorithm, some focused on one part of an algorithm to determine the "worst" case, while others focused on another part of the same algorithm. One misconception, then, is that students do not really know what the worst case is. They also do not seem to think that tracing through an algorithm on concrete data is important in the analysis.
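One way to make the worst case concrete is to have students trace an algorithm on deliberately chosen inputs. A minimal sketch, using an instrumented linear search (the comparison counter is my illustration, not from the paper):

```python
# Tracing an algorithm on concrete data to expose worst-case behavior:
# count comparisons in a linear search on best- and worst-case inputs.

def linear_search(items, target):
    comparisons = 0
    for i, item in enumerate(items):
        comparisons += 1
        if item == target:
            return i, comparisons
    return -1, comparisons

data = list(range(1000))
print("best case:", linear_search(data, 0)[1])    # 1 comparison
print("worst case:", linear_search(data, 999)[1]) # 1000 comparisons
print("absent key:", linear_search(data, -5)[1])  # also 1000
```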
What are other CS misconceptions? The computer will do what I mean. The command line is not as powerful or efficient as a graphical interface. The scenes in a computer game are stored in the program rather than dynamically generated. Playing with, debugging, and changing code while writing a program are not expert behaviors. Doing rough work is not cool when solving problems. Web design is programming. A spreadsheet is a database. Design is useless. Testing is not valuable. Any others? There must be a whole lot more ...
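The first of these misconceptions is easy to demonstrate in code: the machine does what we say, not what we mean. One classic Python illustration is the mutable default argument:

```python
# "The computer will do what I mean": a counterexample. The default
# list is created once, so every call shares the same object.

def add_grade(grade, grades=[]):   # bug: mutable default argument
    grades.append(grade)
    return grades

print(add_grade(90))  # [90]        -- as intended
print(add_grade(75))  # [90, 75]    -- surprise: the old grade is back

# What was meant:
def add_grade_fixed(grade, grades=None):
    if grades is None:
        grades = []
    grades.append(grade)
    return grades

print(add_grade_fixed(75))  # [75]
```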
References:
Smith, J., diSessa, A., Roschelle, J., (1993), Misconceptions Reconceived: A Constructivist Analysis of Knowledge in Transition, retrieved on September 24, 2009 from http://ctl.sri.com/publications/downloads/MisconceptionsReconceived.pdf.
McCartney et al., (2009), "Commonsense computing (episode 5): Algorithm Efficiency and Balloon Testing", retrieved on September 24, 2009 from http://portal.acm.org/citation.cfm?id=1584322.1584330.
Gee, J. (2005). "Learning by Design: good video games as learning machines." E-Learning, 2(1). pp 5 - 16.
17 September 2009
Lectures
Presenting information in lectures requires careful planning so that precious class time is not wasted. Since learning is an interpretative process, new information needs to be integrated with what is already known. deWinstanley and Bjork suggest five processes that strongly affect how students learn:
- Attention - divided attention is most detrimental during the encoding of new information. Worse, "divided attention during a lecture may leave students with a subsequent sense of familiarity ... without the concomitant ability to recall or recognize the material on a direct test of memory".
- Interpretation and Elaboration - learning requires accurate interpretation and thorough elaboration. Students need to know the "story" behind the new information. Simply presenting a graph or a formula does not help students learn why and how the new information can be used.
- Generation and Retrieval Practice - students learn better if they generate information rather than just passively absorb it. If students are asked to retrieve information, it is more likely they will recall it later. Students can create concept maps, write reflective blogs, or contribute to discussion forums as ways of generating the information they have learned.
Other techniques that can promote long-term retention of lecture information include: spacing - distributing rather than massing the presentation of information (an example of spacing is the spiral curriculum: start with an introduction, drill down into the topics in the next iteration, and then focus on more details in further iterations); presenting material from more than one standpoint; providing an outline (but not too much detail); having students generate their own outlines; using visual images and other mnemonic devices; analogies; humor; having students make predictions; and elaborative interrogation.
To keep students' attention, one can also use appropriate games, toys, simulators, play, etc. Interactivity is important for engaging students. Pollard and Duvall also suggested using prizes, games, good-natured competition, creating artwork and media, acting out algorithms, and even rewarding students with stickers and smileys on their papers.
Getting students to generate and reproduce information is a powerful tool. Invention activities are one way to get students to attempt a solution and then apply the concept to another area.
Hitchens and Lister noted that students expect teachers to go beyond what is written in the lecture notes, and to assess student learning during the lectures and adjust the teaching accordingly. Reading straight from the lecture slides, making students feel bad or lazy, and going over material too fast or too slow are absolute no-nos!
References:
deWinstanley, Patricia Ann and Bjork, Robert, A. (2002). Successful Lecturing: Presenting Information in Ways That Engage Effective Processing. New Directions for Teaching and Learning. No. 89, pp 19- 31.
Hitchens, Michael and Lister, Raymond. (January 2009). A Focus Group Study of Student Attitudes to Lectures. Eleventh Australian Computing Education Conference.
Pollard, Shannon and Duvall, Robert. (2006). Everything I Needed to Know About Teaching I Learned in Kindergarten: Bringing Elementary Education Techniques to Undergraduate Computer Science Classes. SIGCSE 2006. Pp 224 - 228.
03 September 2009
Knowledge Transfer
Most educators hope that their students will be able to apply what they have learned in different settings, "from one problem to another within a course, from one course to another, from one school year to the next, and from their years in school to their years in the workplace" (Bransford and Schwartz, 1999). However, researchers have found that what people learn tends to be very specific (Thorndike and Woodworth, 1901). Further studies have shown that sufficient initial learning is critical for effective transfer, and that concrete examples can enhance initial learning because students see the relevance of the new information. But overly contextualized information can impede transfer because the information becomes too tied to its context.
People also forget information easily ("replicative knowing"), and people have difficulty applying their knowledge to solve new problems ("applicative knowing") (Broudy, 1977). That is, people have difficulty knowing "that" (replicative) and knowing "how" (applicative). What seems to help is knowing "with" other concepts and experiences. This is related to Piaget's learning theory of assimilation and accommodation.
Contrasting cases are especially useful for helping people "learn with" their experiences. The differences among the contrasting cases help people notice the patterns that persist across the cases. After students have had a chance to work through some contrasting cases, a lecture that follows results in much greater retention than working through the contrasting cases alone, or having students summarize what they learned after a lecture.
In order for students to transfer their knowledge from one area to another, they need to "let go" of previously held ideas and behaviors; transfer is not the same as repeating the same idea or behavior in a new situation. Land, inventor of the Polaroid Land camera, highlighted the importance of "letting go" of previous assumptions and strategies rather than simply repeating them (Land, 1982). For Land, insight is "the sudden cessation of stupidity". It is not enough to try to adapt old ideas to new situations. Thus, effective learners revise and actively control their learning when things do not work.
Knowledge transfer also benefits from actively seeking others' ideas and perspectives. Other essential ingredients include tolerance for ambiguity, courage, persistence in the face of difficulty, willingness to learn from others, and sensitivity to the expectations of others. All of these help people become lifelong learners.
How can students learn to develop these characteristics? Bransford and Schwartz suggest lived experiences (spending time in a different country), learning to play a musical instrument, learning to perform on stage, and learning to participate in organized sports. Learners need to self-evaluate in areas such as their commitment to excellence, their need to be in the limelight, their respect for others, and their own fears and strategies that may be hampering their progress. Such meta-cognitive reflection is part of "knowing with" new information.
Having students evaluate their own confidence level, and then realize whether their confidence matches their competence, helps them recognize whether they are ready to move on to more challenging or new problems, or whether they should seek help first. Some learners need to know the dangers of confidence where there is little competence. All of this prepares learners to transfer their knowledge to new situations and domains.
Reference:
Bransford, J. and Schwartz, D. (1999). Rethinking Transfer: A Simple Proposal with Multiple Implications. Review of Research in Education, Vol 24, pp 61-100.
Broudy, H.S. (1977). Types of knowledge and purposes of education. In R.C. Anderson, R.J. Spiro and W.E. Montague (Eds.), Schooling and the acquisition of knowledge (pp. 1-17), Hillsdale, NJ: Erlbaum.
Land, E.H. (1982). Creativity and the ideal framework. In G.I. Nierenberg (Ed.), The Art of Creative Thinking. New York: Simon & Schuster.
Thorndike, E. L., and Woodworth, R.S. (1901). The influence of improvement in one mental function upon the efficacy of other functions. Psychological Review, 8, 247-261.
27 August 2009
Expert Tutors
One of the most distinguishing characteristics of expert tutors is the considerable attention they give to motivating students, alongside providing cognitive information. They seem to maintain a working model of each tutee: when the tutee needs more emotional support, and when the tutee needs to be challenged in their state of knowledge construction. Lepper and Woolverton propose the INSPIRE model, which highlights seven critical characteristics of expert tutors:
I - intelligent. Expert tutors know their subject well and are able to guide their tutees in knowledge construction.
N - nurturant. Expert tutors are highly supportive and nurturing of their students.
S - Socratic. Expert tutors engage their students using questions rather than directions or assertions; they provide hints, not answers.
P - progressive. Expert tutors carefully plan tutoring sessions of increasing difficulty and complexity, but they are also flexible in adjusting a session in response to the student's learning.
I - indirect. Expert tutors deliberately avoid overt criticism of their students' mistakes; instead, they often pose questions that imply the existence of the errors.
R - reflective. Expert tutors ask their students to reflect aloud on what they have just done immediately after a successful attempt in their problem solving.
E - encouraging. Expert tutors keep their students' interest / attention / involvement high by instilling confidence and a sense of curiosity.
Reference:
Lepper, M. and Woolverton, M. (2002). "The Wisdom of Practice: Lessons Learned from the Study of Highly Effective Tutors" in Improving Academic Achievement. Elsevier Science, pp 135 - 158.
22 August 2009
Framing
Solving problems usually involves a variety of concepts and skills. Some problems can be approached from a number of angles, but usually, when one goes down a "wrong track", it may take some time to recover unless one is aware of the backtracking points and able to try alternate paths. The perception and judgment used in problem solving is called epistemological framing. It refers to the class of tools and skills that one brings to a particular situation or context for problem solving. As a simple example, some students may rely on memorized facts to solve a problem, while others may rely on logical reasoning.
Bing and Redish (2009) identify four framing clusters that students commonly use when applying mathematics to physics problems: calculation, physical mapping, invoking authority, and math consistency. Calculation refers to the algorithmic use of established computational steps to derive a solution, e.g. calculus rules, geometry rules, and algebraic rules. Physical mapping refers to mapping the mathematics onto the student's intuition about the physical or geometrical situation at hand to support their arguments and reasoning. Invoking authority points to a resource / book / journal / quote / person / etc. to support a claim. Math consistency appeals to other math ideas and concepts that are demonstrably consistent to validate an argument.
As I reflect on these four framing clusters, I wonder how they can be detrimental for beginning computer science students. Take calculation, for example: the meaning of "=" in math, equating two entities as in x = y, is quite different from its use in computer science for assigning a value to a variable. Similarly, there is hardly any connection between how computer science models physical objects, like a tree or a student, and how we intuitively understand and interact with them. Students are also often surprised at what they can and cannot do with a programming language; they lack the source(s) of "authority" to guide them in their learning. One comment I often hear from students learning a new programming language is "I didn't know you could do that!". Finally, although computer science students know that computers are consistent and logical, the subtleties of programming language syntax, and the precision of logic that also depends heavily on the sequence of execution in a program, can be frustrating and overwhelming to them. Identifying some of the framing clusters that students bring into the classroom may help in their learning process.
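The "=" point is easy to make concrete. In an imperative language, assignment is an action performed at a moment in time, not a timeless equation; a small Python illustration:

```python
# In math, "x = x + 1" has no solution; in an imperative language it is
# an instruction executed at a point in time.
x = 5
x = x + 1      # assignment: store the value 6 in x
print(x == 6)  # True; "==" is the comparison that resembles math's "="

# Execution order matters, unlike in a system of equations:
a = 1
b = a          # b gets a's *current* value, 1
a = 2
print(b)       # still 1, which often surprises beginners
```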
Reference:
Bing, T. J. and E. F. Redish (2009, Jul). Analyzing problem solving using math in physics: Epistemological framing via warrants. Available at http://arxiv.org/pdf/0908.0028.
06 August 2009
What can we learn from Video Games?
How do we motivate people to learn? Well, Gee (2005) notes that "[u]nder the right conditions, learning, like sex, is biologically motivating and pleasurable for humans (and other primates)." It is the same hook that game designers use to attract gamers (see link), so we can learn a great deal about learning from video games. Gee organizes these attributes of video games into three categories, each with a number of principles:
Empowered Learners - gamers / learners need to have some sense of control
They feel that they are co-designers of the game or the learning, they can customize their game play or learning experience, they can take on a new identity (for learners, adopting the culture and role of a biologist, computer scientist, etc.), and they can manipulate and draw on knowledge distributed in the game's virtual world or in the real world.
Problem Solving - gamers / learners need to be exposed to appropriate information and problems
They need to be exposed to well-organized problems that are neither too complex nor too trivial; problems should be pleasantly frustrating, with a payoff. There should be cycles of practice to help them develop their expertise; information is given 'on demand' and 'just in time' so they don't feel overwhelmed; they are exposed to fish tanks and sandboxes (simplified versions of the game or learning content) so they can first understand a simple system or try things out without risk; and they come to see their practice of skills as strategies for accomplishing their goals.
Understanding - gamers / learners make sense of their world
They want to see the big picture and be able to think about the system at large, and they can attach meanings to their past experiences.
Reference:
Gee, J. (2005). "Learning by Design: good video games as learning machines." E-Learning, 2(1). pp 5 - 16.
Situated Learning
Much learning is done within context. An average 17-year-old will have learned vocabulary at a rate of about 5,000 words per year for over 16 years through listening, talking, reading, and interaction. In contrast, if vocabulary is taught simply through abstract definitions and sentences taken out of context, students can hardly learn even 100 to 200 words per year.
Students should be exposed to, and then adopt, the culture in which the tools they are taught to use are embedded. This requires the support of a community, and learning is a process of enculturation. The activities that students are exposed to should be authentic (i.e. the ordinary practices of the culture, not just classroom or toy problems), and these activities are framed by that culture.
While we want our students to have practical knowledge of how to use the tools and to develop practical skills, we also want them to develop deep thinking and cognitive skills. Within the context of situated learning, this is called cognitive apprenticeship. It begins with problems and practice in situ, and it moves students beyond traditional practices by emphasizing that practices are not absolute; students are encouraged to generate their own solutions with other members of the culture, which we sometimes call a community of practice.
Reference:
Brown, J., Collins, A., Duguid, P. (1989). "Situated Cognition and the Culture of Learning". Educational Researcher. 18(1), pp 32 - 42.
05 August 2009
Interactive Engagement vs. Traditional Methods
A study of 6,542 students who took introductory physics courses in high schools, colleges, and universities (Hake, 1998) compared the effectiveness of interactive engagement in the classroom with traditional lecture-style presentations. Not surprisingly, the average gain (measured with the Halloun-Hestenes Mechanics Diagnostic test, the Force Concept Inventory, and the Mechanics Baseline test) under interactive engagement is significantly higher than in traditional courses.
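The gain Hake uses is the average normalized gain, <g> = (post% - pre%) / (100 - pre%), i.e. the fraction of the possible improvement actually achieved. A minimal sketch of the calculation, with invented class averages (Hake's survey put traditional courses at roughly <g> = 0.23 and interactive-engagement courses at roughly 0.48):

```python
# A minimal sketch of Hake's (1998) average normalized gain:
# <g> = (<post%> - <pre%>) / (100 - <pre%>). Class averages are invented.

def normalized_gain(pre_avg, post_avg):
    """Fraction of the possible improvement actually achieved."""
    return (post_avg - pre_avg) / (100.0 - pre_avg)

print(normalized_gain(45.0, 58.0))  # ~0.24, typical of a traditional course
print(normalized_gain(45.0, 72.0))  # ~0.49, typical of interactive engagement
```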
In another paper, Hake (1997) lists several interactive engagement methods that have been used successfully for teaching physics. These include collaborative Peer Instruction, microcomputer-based labs, concept tests, modeling, active learning problem sets or overview case studies, physics-education-research based texts (or no text), and Socratic dialogue inducing labs.
It should be noted that interactive engagement is "necessary but not sufficient for marked improvement over traditional methods" (Hake, 1997), since a number of colleges saw only marginal gains even when interactive engagement activities were used.
I like the Epilogue that Hake included in his 1997 article:
I am deeply convinced that a statistically significant improvement would occur if more of us learned to listen to our students....By listening to what they say in answer to carefully phrased, leading questions, we can begin to understand what does and does not happen in their minds, anticipate the hurdles they encounter, and provide the kind of help needed to master a concept or line of reasoning without simply "telling them the answer."....Nothing is more ineffectually arrogant than the widely found teacher attitude that 'all you have to do is say it my way, and no one within hearing can fail to understand it.'....Were more of us willing to relearn our physics by the dialog and listening process I have described, we would see a discontinuous upward shift in the quality of physics teaching. I am satisfied that this is fully within the competence of our colleagues; the question is one of humility and desire.
Arnold Arons, Am. J. Phys. 42, 157 (1974)
I often wonder whether this applies to Computer Science. After all, don't we know pretty well how our students think? ... or do we?
Reference:
Hake, Richard. (1997). "Interactive engagement methods in introductory mechanics courses". Retrieved on August 6, 2009 from http://www.physics.indiana.edu/~sdi/IEM-2b.pdf.
Hake, Richard. (1998). "Interactive-engagement versus Traditional Methods: A six-thousand-student survey of mechanics test data for introductory physics courses". American Journal of Physics. 66(1), pp 64 - 74.
16 July 2009
Deliberate Practice, Part 2
We tried to dissect the elements of "deliberate practice" [Ericsson, et al., 2006] during today's CWSEI Reading Group meeting. Not all aspects of "practice" are the same. We recognize that some students insist on multitasking while doing homework (e.g., watching TV or listening to an iPod while engaging in practice activities). Perhaps the term "deliberate practice" should be reserved for tasks that do not readily permit TV or other multitasking interference. Ray Lister suggested another paper related to practice quality [Plant, et al., 2005] that may be of interest.
Some of the elements of practice are foundations that students often do not enjoy but that are recognized as skill-development techniques. In music, this includes practice on scales, repertoire, technical exercises, etc. [Sloboda, et al., 1996]. In computer science, this may involve practice with the "boring" parts of CS, like math skills, analyzing sort routines, fixing badly designed or poorly documented code, or coding non-interactive applications.
The studies of Sloboda, et al., showed that no matter what the skill level, there is a common trend: performers who are better at that skill/age level have spent more hours in deliberate practice. The highest achievers at each level are the individuals who have practiced the most. Performers at the highest level have accumulated considerably more hours than those at the next highest level, and so on. This reinforces the results of other papers (e.g., [Ericsson, 1996; Colvin, 2008]).
There is some debate about whether a long programming assignment is better than a shorter one. Historically, programming assignments meant to convey a CS learning objective tend to be longer than necessary (perhaps because "I had to do it that way when I was an undergrad"). But if a student cannot get the long program to work at all, does this mean the student has failed? What if the student is really close to getting it working but just can't, or simply doesn't understand one small component of it? Might it be better to have many shorter programs/exercises with more manageable, self-contained milestones, thus building the student's confidence?
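One possible milestone structure is sketched below, with hypothetical function names: each milestone is a small, self-contained check, so a student who completes only part of the assignment still sees concrete, partial progress rather than an all-or-nothing failure.

```python
# A hypothetical milestone-based grading sketch: each milestone is a
# small, self-contained check, so partial progress still earns credit.

def parse_line(line):
    """Milestone 1: split 'name,score' into (name, int score)."""
    name, score = line.strip().split(",")
    return name, int(score)

def average(scores):
    """Milestone 2: average of a non-empty list of numbers."""
    return sum(scores) / len(scores)

MILESTONES = [
    ("parse", lambda: parse_line("ada,90") == ("ada", 90)),
    ("average", lambda: average([80, 90, 100]) == 90),
]

passed = sum(1 for name, check in MILESTONES if check())
print(f"{passed}/{len(MILESTONES)} milestones passed")
```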
References:
Colvin, Geoff. Talent is Overrated. Portfolio (Penguin), 2008.
Ericsson, K. A. The influence of experience and deliberate practice on the development of superior expert performance. In K. A. Ericsson, N. Charness, P. Feltovich, and R. R. Hoffman (Eds.), Cambridge handbook of expertise and expert performance (pp. 685-706). Cambridge, UK: Cambridge University Press, 2006.
Plant, E. Ashby; Ericsson, K. Anders; Hill, Len; Asberg, Kia (2005). "Why Study Time Does Not Predict Grade Point Average across College Students: Implications of Deliberate Practice for Academic Performance". Contemporary Educational Psychology, v30 n1 p96-116 Jan 2005.
Sloboda, John A.; Davidson, Jane W.; Howe, Michael J.A.; Moore, Derek G. "The role of practice in the development of performing musicians". British Journal of Psychology (1996), 87, pp. 287-309.
12 July 2009
Deliberate Practice
According to extensive studies of how experts develop their specialized knowledge, one of the primary factors is deliberate practice (Ericsson, Krampe, and Tesch-Romer, 1993). It is through prolonged efforts to improve performance skills or understanding, whether in chess, sports, music, science, etc., that expert performance results. Such effortful activities (deliberate practice) need to be carefully designed and administered, with the help of coaches, mentors, teachers, and often parents, to optimize improvement. Many expert characteristics once believed to reflect innate talent are actually the result of intense practice extended over a minimum of 10 years.
For computer science, most of the computing education I have been exposed to does not include a significant amount of "practice". There may be some reading assignments and a few programming assignments, but the amount of actual practice is not significant. If expert knowledge does require a significant investment of time and effort, we should explore 1) how to deconstruct the learning of computer science concepts into sequences of practice activities, and 2) how these activities can be incorporated into lectures and labs, and perhaps into other available technologies such as online and mobile learning, to promote deliberate practice beyond class time.
Reference:
Ericsson, K.A., Krampe, R.T., Tesch-Romer, C. (1993). The Role of Deliberate Practice in the Acquisition of Expert Performance. Psychological Review. 100(3), 363 - 406.
19 June 2009
Student Overconfidence
People tend to be overconfident in their answers to a wide variety of general knowledge questions, particularly when the questions are difficult (Plous, 1993). How do researchers study overconfidence? One approach is to ask participants to estimate the probability that their judgment is correct. These estimates are then used to measure the calibration between confidence and accuracy: a person is perfectly calibrated when, at each level of confidence, the proportion of correct judgments equals the stated probability of being correct. Another approach is to ask participants to give "confidence intervals" that have a specific probability (usually .9 or .98) of containing an unknown quantity. In one study, participants were 98% sure that an interval contained the correct answer, but they were right only 68% of the time.
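Calibration is straightforward to compute from (confidence, correctness) pairs; a minimal sketch with invented data:

```python
# A minimal calibration sketch: group judgments by stated confidence
# and compare stated confidence to the actual fraction correct.
# The (confidence, correct) pairs below are invented data.
from collections import defaultdict

judgments = [
    (0.5, True), (0.5, False), (0.7, True), (0.7, True), (0.7, False),
    (0.9, True), (0.9, False), (0.9, False), (0.9, True), (0.9, True),
]

by_confidence = defaultdict(list)
for confidence, correct in judgments:
    by_confidence[confidence].append(correct)

for confidence in sorted(by_confidence):
    outcomes = by_confidence[confidence]
    accuracy = sum(outcomes) / len(outcomes)
    # perfectly calibrated when accuracy equals stated confidence;
    # accuracy below confidence indicates overconfidence
    print(f"stated {confidence:.0%}, actual {accuracy:.0%}")
```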
In one summer session of an introductory CS course, 16 students out of a class of 68 overestimated their final course grade after receiving feedback from their first midterm, and 3 students underestimated their final course grade.
Overconfidence can be unlearned, just like any belief. People who were initially overconfident could learn to make better judgments after 200 trials with intensive performance feedback (Lichtenstein and Fischhoff, 1980). Arkes et al. (1987) found that overconfidence could be eliminated by giving participants feedback after five "deceptively difficult problems". Yet another study by Lichtenstein and Fischhoff showed that having participants generate opposing reasons was by itself sufficient to reduce overconfidence in accuracy, but this has not been confirmed in subsequent studies.
References:
Arkes, H.R., Christensen, C., Lai, C., and Blumer, C. (1987). Two methods of reducing overconfidence. Organizational Behavior and Human Decision Processes, 39, 133-144.
Lichtenstein, S. and Fischhoff, B. (1980). Training for calibration. Organizational Behavior and Human Performance, 26, 149-171.
Lichtenstein, S., Fischhoff, B., and Phillips, L. (1982). Calibration of Probabilities: The State of the Art to 1980. In D. Kahneman, P. Slovic, and A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 306-334). Cambridge, England: Cambridge University Press.
Plous, S. (1993). The Psychology of Judgment and Decision Making. New York: McGraw-Hill.
Plous, S. (1995). A Comparison of Strategies for Reducing Interval Overconfidence in Group Judgments. Journal of Applied Psychology, 80(4), 443-454.
18 June 2009
Why Don't Students Attend Class?
Friedman, Rodriguez, and McComb, who studied 350 undergraduate students' reasons for attendance and nonattendance in class, conclude that "males and females, older and younger students, students who live on and off campus, students who do and do not have jobs, students who have light and heavy course loads, and students who do and do not pay their own way in school attend classes with equal frequency." The only difference is that students with better academic records attend classes more regularly.
As to the differences in course characteristics, "students attended faculty taught courses less often than GTA [graduate TA] taught classes, larger classes less often than smaller classes, and natural science classes less often than others." However, courses that penalize absences encourage student attendance in any of the above course settings.
The primary reasons students attend class are internal: a sense of responsibility to attend, interest in the subject matter, and getting the material first hand rather than from other sources. Another study has also shown that better attendance is associated with higher grades (Wyatt 1992).
In another article (Jensen and Moore 2009), the students who attended help sessions were mostly A and B students, with virtually no D and F students. Results also show that students who attend these help sessions get better grades and attend class more often.
The bottom line is that attendance seems to correlate with higher grades. The question is whether students really want higher grades or are just satisfied with a pass. It would be interesting to survey students on what grade they realistically expect to get, given the effort they are willing to put into the course.
References:
Friedman, P., Rodriguez, F., and McComb, J. (2001). Why Students Do and Do Not Attend Classes: Myths and Realities. College Teaching, 49(4), 124-133.
Jensen, P. and Moore, R. (2009). What Do Help Sessions Accomplish in Introductory Science Courses? Journal of College Science Teaching, May/June 2009, 60-64.
Wyatt, G. (1992). Skipping class: An analysis of absenteeism among first-year college students. Teaching Sociology, 20, 201-207.
05 June 2009
Invention Activities
Knowledge transfer from one context to another depends on students learning at least two things: 1) the relevant concepts or skills, and 2) the situations to which they apply. Students are more likely to transfer knowledge when instructional examples are abstract and relatively free of surface details. Instead of "tell-and-practice", where instructors tell students which formula to use and then have them practice using it, it is much better to let students develop their own "solutions" to a number of contrasting cases before they are told the formula in a mini-lecture. Contrasting cases force students to see beyond surface differences and explore the underlying deep structure. They constitute what is called an invention activity, in which students undertake productive work to note these differences and produce a general solution covering all the cases. Such productive activities help students let go of old interpretations and develop new ones.
Schwartz particularly advocates using mathematical tools or procedures in invention activities because they encourage solutions that are precise and yet general. They also allow reflection on how the structure of the mathematical tools accomplishes the work in solving the problems. However, this does not have to be the case: invention activities can also prime students in areas that do not involve quantitative analysis (Yu and Gilley, 2009).
In Schwartz's case, the combination of visual (problem presentation), numeric (expressing solutions in quantitative mathematical terms), and verbal (students presenting their solutions) representations helps reinforce learning.
In computer science, when we ask students to "invent" a solution to a programming assignment, that is an invention activity of sorts. The difference is that invention activities are normally used as scaffolding for further learning, and students usually are not expected to "invent" the final solution; here, the programming assignment itself is how students learn the material, and they must get to the final solution themselves. Is that why so many students get frustrated with computer programming? After all, Schwartz did note that students can tire of repeatedly adapting their inventions.
References:
Schwartz, D., and Martin, T. 2004. Inventing to Prepare for Future Learning: The Hidden Efficiency of Encouraging Original Student Production in Statistics Instruction. Cognition and Instruction. 22(2) 129 - 184.
Yu, B., Gilley, B. 2009. Benefits of Invention Activities Especially for Cross-Cultural Education. Retrieved on October 16, 2009 from http://www.iated.org/concrete2/view_abstract.php?paper_id=8166.
Blooming in Teaching and Learning
It is important to align teaching activities with learning outcomes, and students need to know at what level of cognitive engagement they are expected to perform. If lectures present only facts, but students are expected to provide an analysis in their assignments without ever being taught how, the assessment of students' capabilities will not be effective. Bloom's Taxonomy provides a common language for coordinating what is taught with what is assessed. Its six levels are: knowledge, comprehension, application, analysis, synthesis, and evaluation. The revised taxonomy recasts these as verbs: remember, understand, apply, analyze, evaluate, and create. Here are three ways of using "Blooming" to enhance learning:
- The instructor assigns a Bloom level to each item in the grading rubric and provides additional learning activities to improve the levels where students score low.
- Introduce Bloom levels to students and ask them to "bloom" the questions asked in class (i.e., rank the questions according to Bloom's levels). This helps students develop metacognitive skills and reflect on their learning. After a test, students are shown the class average at each Bloom level and evaluate their own score at each level (a toy sketch of this kind of tally follows the list).
- Students are taught the Bloom levels and, in small groups, write questions at each level. The groups exchange questions and rank them to see whether they correspond to the intended levels.
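As a minimal sketch of the per-level tally mentioned above: tag each test item with a Bloom level and report the class average per level. The items, levels, and scores here are made up for illustration; they are not from Crowe et al.

from collections import defaultdict

# (item, Bloom level, student scores out of 10) -- all invented.
items = [
    ("define a stack",        "remember",   [9, 8, 10, 7]),
    ("trace this loop",       "understand", [7, 6, 8, 5]),
    ("write a sort function", "apply",      [5, 4, 7, 3]),
    ("compare two designs",   "evaluate",   [4, 3, 5, 2]),
]

totals = defaultdict(list)
for _, level, scores in items:
    totals[level].extend(scores)

# Low averages at a level suggest where extra learning activities belong.
for level, scores in totals.items():
    print(f"{level:>10}: class average {sum(scores) / len(scores):.1f} / 10")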
Reference:
Crowe, A., Dirks, C., Wenderoth, M., Biology in Bloom: Implementing Bloom's Taxonomy to Enhance Student Learning in Biology, CBE - Life Sciences Education, Vol. 7, 368-381, 2009
04 June 2009
Item Response Theory
How do we (as instructors) decide whether a test is "hard" or "easy"? Most of us will answer something along the lines of "it all depends". I find Hambleton et al.'s observation about the common responses to this question interesting: "Whether an item [or test] is hard or easy depends on the ability of the examinees being measured, and the ability of the examinees depends on whether the test items are hard or easy!" Not very helpful, is it? Item Response Theory (IRT) is a body of theory that applies mathematical models to student scores on individual test questions, making it possible to compare the difficulty of the questions and their ability to differentiate student abilities. It rests on two basic postulates: 1) the performance of an examinee can be predicted by a set of factors called traits (or abilities), and 2) the relationship between an examinee's item performance and those traits can be described by an item characteristic function, or item characteristic curve (ICC), like the one in the graph above. The x axis is the trait or ability score, and the y axis is the probability that an examinee with that ability score answers the item correctly. As the ability of an examinee increases, so does the probability of a correct response to the item.
Each item in a test has its own ICC, and the ICC is the basic building block of IRT. The steepness of the curve shows how well the item differentiates examinees of low and high ability: a flat curve is a poor discriminator, while a steep curve, like the one shown above, is a good one. If several ICCs of the same shape are plotted in one graph, the curves to the left (or on top) correspond to easier items than those to the right (or on the bottom).
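To make the ICC concrete: one common model is the two-parameter logistic curve P(theta) = 1 / (1 + e^(-a(theta - b))), where b is the item's difficulty (it shifts the curve left or right) and a is its discrimination (the steepness). The sketch below evaluates a few items at several ability levels; the parameter values are mine, chosen for illustration, not fitted to real data.

import math

def icc(theta, a, b):
    """Two-parameter logistic item characteristic curve:
    probability that an examinee of ability theta answers correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

easy_item = dict(a=1.5, b=-1.0)  # same shape, shifted left: easier
hard_item = dict(a=1.5, b=1.0)   # shifted right: harder
flat_item = dict(a=0.3, b=0.0)   # low a: a poor discriminator

for theta in [-2, -1, 0, 1, 2]:
    print(theta,
          round(icc(theta, **easy_item), 2),
          round(icc(theta, **hard_item), 2),
          round(icc(theta, **flat_item), 2))

At every ability level the easy item's curve sits above the hard item's, while the flat item barely separates weak from strong examinees.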
By analyzing examinees' scores on each item of an exam with IRT software, one can see 1) which questions are good indicators of student ability and which are not, and 2) objectively, which questions are "easy" or "hard".
Reference:
Baker, F. The Basics of Item Response Theory. Available online here.
Graph taken from http://echo.edres.org:8080/irt/, where one can also find a great deal of information on IRT.
Hambleton, R., Swaminathan, H., and Rogers, H. (1991). Fundamentals of Item Response Theory. Newbury Park: Sage Publications.