27 January 2009

Moving Students from Rule Based to Creative Problem Solving Skills

Lubben et al. wrote an article on how students' perception of "preciseness" changes across contexts. In laboratory or pharmacy settings, measurements are expected to be precise and deviations are unacceptable, whereas in a kitchen setting deviations are fine. The interesting thing is that students based their judgement mostly on the perceived effects of a deviation on the result, rather than on the instructions given or the process to be used. That is, whether deviations are acceptable (or what precision really means) depends largely on whether they are seen to affect the outcome. In the kitchen setting, deviations are fine because the measurements are perceived to have no significant impact on the results, whereas such is not the case in laboratory and pharmacy settings. From this, the authors conclude that context makes a difference in students' choice of a point-paradigm (drawing conclusions from individual data points) in the laboratory / pharmacy settings as opposed to the set-paradigm (drawing conclusions from the ensemble of all data) used in the kitchen. One of the goals of teaching is to move students from a point-paradigm to a set-paradigm.
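As a rough illustration (my own sketch, not from the paper, and with made-up readings), the two paradigms amount to trusting one data point versus summarizing the ensemble:

```python
import statistics

# Hypothetical repeated measurements of the same quantity.
readings = [9.8, 10.1, 9.9, 10.2, 10.0]

# Point-paradigm: treat a single reading as "the" answer.
point_value = readings[0]

# Set-paradigm: draw conclusions from the ensemble of all readings.
set_mean = statistics.mean(readings)
set_spread = statistics.stdev(readings)

print(f"point-paradigm value: {point_value}")
print(f"set-paradigm value:   {set_mean:.2f} +/- {set_spread:.2f}")
```

The set-paradigm answer carries its own uncertainty estimate, which is exactly the habit of mind the teaching goal above is after.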

In computing, context does not play such a significant role in students' perception of preciseness. Whether students are writing a program for data analysis in a laboratory or for a game, precision and accuracy are needed. But a similar transformation of students' perception of what is essential in programming needs to take place for computer science students. Novice programmers stick to a "formulaic" strategy in solving problems: to them, there is one solution they need to come up with for each problem, whereas seasoned programmers feel free to explore different ways of thinking about, modeling, and solving problems. Students eventually learn that the result is what really matters, and that they are free to be creative and innovative in constructing their programs.

I started programming with BASIC, and was it fun to create programs with GOTOs! I could create the most convoluted programs, and few people would have understood them, but it was fun. Those programs would probably have failed under many conditions and any half-decent test plan, but it was fun. I wonder whether our computer science education may be prescribing too many rules in programming, robbing students of the fun and creativity in computer science.

Reference:

Lubben, F., Campbell, B., Buffler, A., & Allie, S. (2004). The Influence of Context on Judgements of the Quality of Experimental Measurements. Proceedings of the 12th Annual Conference of the Southern African Association for Research in Mathematics, Science and Technology Education, pp. 569-577.

18 January 2009

STROBE

STROBE is a classroom observation tool used by trained observers on learners without interfering with their activities. It yields quantitative data from brief observations of individual learners around the classroom. Observation occurs in 5-minute "STROBE cycles", typically repeated 8 to 10 times depending on the length of the class session. Each STROBE cycle proceeds as follows:

First, the observer writes down the following:
  • the start time of the cycle,
  • the subjects to be observed, whether it be "entire class", "subgroups", or any specific group,
  • the major activity, which can be "instructional", "procedural", or other,
  • the estimated portion of learners on task, which can be "all", "almost all", "half or less", etc.
Next, the observer selects a learner from the class and observes the selected learner for 10 to 20 seconds, marking the type of engagement the learner exhibits ("talking", "listening", "reading", "writing", etc.) and the object toward whom the engagement is directed ("other learners", "instructor", "self", etc.). This is repeated four times.

The observer also observes the instructor and marks the instructor's type and object of engagement. Finally, the observer also notes the number of questions students ask in the cycle.
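The bookkeeping for one cycle is simple enough to sketch; here is a minimal illustration (my own, with invented observations, not part of the STROBE instrument itself) of tallying the engagement records a cycle produces:

```python
from collections import Counter

# Hypothetical records from one 5-minute STROBE cycle: four sampled
# learners plus the instructor, each recorded as
# (who, engagement type, object of engagement).
observations = [
    ("learner", "writing", "self"),
    ("learner", "listening", "instructor"),
    ("learner", "talking", "other learners"),
    ("learner", "listening", "instructor"),
    ("instructor", "talking", "entire class"),
]

# Tally learner engagement by (type, object) pairs.
engagement = Counter(
    (kind, obj) for who, kind, obj in observations if who == "learner"
)
questions_asked = 2  # also noted once per cycle

for (kind, obj), n in engagement.most_common():
    print(f"{kind} -> {obj}: {n}")
print(f"questions asked this cycle: {questions_asked}")
```

Repeating this over 8 to 10 cycles is what gives STROBE its quantitative picture of the session.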

What STROBE provides is a simple and effective way of gauging students' level of engagement in the classroom, not necessarily their learning. It can also be skewed by untrained observers like me, who did it for the first time in one of the CS classes recently. As a newbie, I found myself targeting the students who stood out from the norm: those working on their computers, those talking to other people, and so on. I had to keep reminding myself to pick students at random and not just the ones that caught my attention!

Reference:

Kelly, P., Haidet, P., Schneider, V., Searle, N., Seidel, C., & Richards, B. (2005). A Comparison of In-Class Learner Engagement Across Lecture, Problem-Based Learning, and Team Learning Using the STROBE Classroom Observation Tool. Teaching and Learning in Medicine, 17(2), pp. 112-118.

12 January 2009

Outliers

Having heard of the latest book by Malcolm Gladwell, Outliers, and read some of the rave reviews about it, I was naturally drawn to it while my family roamed the malls during the Christmas holidays. Little did I anticipate that once I started the first page, I would not leave the store until three chapters later, copy in hand. The condensed message behind the book is simple: outliers are not born, they are made. They are shaped by culture, tradition, and community; they do get breaks that they seize and take advantage of; but most of all, they work hard. This reminds me of one of my professors in my undergraduate years who told me that if one wants to pursue a PhD, all one needs is patience, persistence, and money! According to Malcolm, there is a magic number of 10,000 hours of practice and hard work that outliers typically put in to get where they are. In a culture where many believe that success comes only to a select few with special genetic makeup, or by pure luck, the book contains a good deal of evidence to dispel these perceptions. Malcolm also suggests that our culture and tradition may either make or break us: he traces the cause of a number of plane crashes to cultural influences on the pilots, and the difference in mathematical aptitude between Asian and Western children to their different cultural upbringings.

What does this have to do with computer science education? I have heard so many students claim that they "are just not made to program", or that they "just don't have the aptitude" for computer programming. What Malcolm has shown, albeit mostly via anecdotal accounts, is that success depends largely on repetitive practice and hard work. In computing, it has also been demonstrated that highly intensive training programs can turn students with no programming background into proficient software developers. The problem that faces every computer science educator is how to make this repetitive practice and seemingly hard work, which requires long hours of engagement, be perceived as challenging and rewarding while also giving students a sense of autonomy in their learning: the three essential ingredients that, according to Malcolm, make any work satisfying.

13 December 2008

Designing Assessment To Support Students' Learning

It has been reported that the single most powerful influence on student achievement is feedback (Hattie, 1987; Black & Wiliam, 1998). But it is not just any plain ol' feedback. Informative, timely, concise feedback from the instructors, which the students actually read and follow up on, is what really counts. As such, conducting assessment and providing feedback is an art. Assessment can be enormously expensive, and can be perceived as ineffective and a poor representation of student learning. Gibbs and Simpson (2005) came up with a list of 10 plausible conditions under which assessment supports learning.
  1. Sufficient assessed tasks are provided for students to capture sufficient study time.
  2. These tasks are engaged with by students, orienting them to allocate appropriate amounts of time and effort to the most important aspects of the course.
  3. Tackling the assessed task engages students in productive learning activity of an appropriate kind.
  4. Sufficient feedback is provided, both often enough and in enough detail.
  5. The feedback focuses on students' performance, on their learning and on actions under the students' control, rather than on the students themselves and on their characteristics.
  6. The feedback is timely in that it is received by students while it still matters to them and in time for them to pay attention to further learning or receive further assistance.
  7. Feedback is appropriate to the purposes of the assignment and to its criteria for success.
  8. Feedback is appropriate, in relation to students' understanding of what they are supposed to be doing.
  9. Feedback is received and attended to.
  10. Feedback is acted upon by the student.
Reference:

Hattie, J.A. (1987). Identifying the salient facets of a model of student learning: a synthesis of meta-analyses. International Journal of Educational Research, vol. 11, pp. 187-212.

Black, P. & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, vol. 5, no. 1, pp. 7-74.

Gibbs, G. & Simpson, C. (2005). Conditions Under Which Assessment Supports Students' Learning. Learning and Teaching in Higher Education, Issue 1, pp. 3-31.

08 December 2008

Class Observations

I recently had the opportunity to visit a number of CS classes, both domestically and internationally. Here are some observations from these visits that I really liked. I will keep adding to this list as I come across other gems!

  • Use analogies to explain difficult concepts ... e.g. passing of a hockey puck as an analogy to passing parameters, recipe as an analogy to algorithm, etc. 
  • Even in a big lecture, try to learn the names of at least some students.  One easy way is to get to know those who ask questions in class.  All of us like to be known!
  • When students ask what-if questions about programming languages (e.g., what if you divide an integer by a real number?), instead of just giving them the answer, do a quick demo on the computer (if you have one set up).  This instills a culture of learning via experimentation.
  • Repeatedly ask the students if there are any questions throughout the entire lecture, and PAUSE. Students may not ask questions right away, but they know that the instructor is encouraging any questions they may have.
  • Instead of asking “Any questions?”, try “Who is comfortable with the material presented so far?” and take a poll. The poll can be done via either clickers or a show of hands.
  • Show Learning Goals at the beginning of class, before each learning unit, for each learning activity, and at the end of the class. These should tie back to the Learning Goals as stated on the course outline.
  • Great teachers are usually experts in what they teach. They know the subject well ... very well!
  • Students like a variety of presentation styles ... try video, simulation, debate, or demonstration (such as having the instructor develop a piece of code live, or work through a problem after a number of unsuccessful attempts). Students like to see the process of solving a problem rather than just the solution.
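The "demo instead of answer" idea is easy to act on. For instance, the integer-divided-by-a-real what-if takes seconds to try live (shown here in Python; the same experiment works in whatever language the class uses):

```python
# What happens when an integer is divided by a real number?
# Trying it live beats just stating the rule.
a = 7
b = 2.0

print(a / b)        # true division gives 3.5, and the result is a float
print(type(a / b))  # <class 'float'>
print(a // b)       # floor division with a float operand still yields a float: 3.0
```

Letting students predict the output before running it turns the demo into a tiny experiment rather than a lecture aside.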

07 December 2008

Clicker Use in Computer Science Education

There have been a number of reports on the effective use of Clickers (or Classroom Response System, or Student Response System, or Classroom Communication System, or many other names) across different disciplines. The general comments have consistently been:

  • increased attendance
  • greater involvement (especially when good questions are asked)
  • more interactions with instructor
  • more interactions among students (especially when peer discussions are used before or after each clicker question)
  • anonymity increases participation rate
  • students like immediate feedback on their learning

While there are a number of websites related to clicker questions in physics, mathematics, biology, etc., there does not seem to be any for computer science. In any case, I found this website at Vanderbilt particularly useful. The Bibliography section contains a number of links to the use of clickers in various disciplines, including Computer Science.

28 November 2008

Getting students to ask good questions

On the rare occasions I bothered reading the textbook when I was a student, "reading" meant looking at all the assigned pages. As faculty, I've finally realized that textbooks are invaluable as a jumping-off point for my own thoughts, ideas, questions, and problems. In CPSC 111, we experimented with "weekly reading questions" (questions marked on completeness, inspired by students' assigned readings) to help students transition from passive reading habits to this type of "interrogation" of the textbook.

Marbach-Ad and Sokolove probe the issue of improving students' reading questions deeply in their 2000 paper (cited below). Their most successful method involves several parts: Ask students regularly for their "best question" after a reading. Give students a clearly defined rubric with real examples of what good questions are. Make many opportunities for students to practice asking questions, evaluating questions, and answering questions. Give student questions pride of place in the classroom, including using wireless mics so that other students can hear the questions being asked.

The paper is somewhat interesting but not especially strong from an experimental standpoint. (Of the four techniques I mention above, they provide some quantitative evidence for the combined value of the last two.) However, the ideas may be worth trying out in our own classrooms.

I've appended their rubric for questions below. It's not directly adoptable for CS, but it's an interesting starting point.

Their most interesting mechanism for student practice with questions is to have stable student teams submit their questions as a stack. Before submitting, the students have a few minutes to decide which questions are the best and put those on top. This seems like a simple way to enforce student practice discussing and assessing questions.

Unfortunately, the paper does not address what to do with the questions the instructor received. In CPSC 111, we chose 10 questions at random to answer every week and sometimes answered additional questions that were common or interesting but didn't show up on the random list. This was somewhat satisfying to students. A complementary currency system (where students can purchase answers or invest in questions?) or a voting system (like ActiveClass's) might be more successful.
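Our random draw was nothing fancy; a minimal sketch (with an invented pool of question IDs, and a fixed seed only so the sketch is reproducible) looks like this:

```python
import random

# Hypothetical pool of submitted reading questions (IDs only).
submitted = [f"Q{i}" for i in range(1, 41)]

# Each week, draw 10 distinct questions at random to answer in class.
rng = random.Random(2009)
weekly_picks = rng.sample(submitted, k=10)

print(weekly_picks)
```

The random draw keeps the workload bounded and gives every question a fair chance of being answered, which is part of why students found it at least somewhat satisfying.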

Marbach-Ad, Gili and Sokolove, Philip (2000). Can Undergraduate Biology Students Learn to Ask Higher Level Questions? Journal of Research in Science Teaching 37(8): 854-870.

Marbach-Ad and Sokolove's taxonomy for student questions in Intro Biology (developed from sample student questions):
Category 0: Questions that do not make logical or grammatical sense, or are based on a basic misunderstanding or misconception, or do not fit in any other category. (This is a "catch all" category that instructors can readily subdivide for teaching purposes--for example, when grading written questions. In this case we chose not to subdivide the category in order to focus on the characteristics of desirable questions.)

Category 1a: Questions about a simple definition, concept, or fact that could be looked up in the textbook (i.e., "what is meant by the polarity of the membrane?").

Category 1b: Questions about a more complex definition, concept, or fact explained fully in the textbook (i.e., "what does it mean when it is says air moves through a bird's lungs?").

Category 2: Ethical, moral, philosophical, or sociopolitical questions (i.e., "carbon monoxide is a very deadly gas binding to hemoglobin much faster than oxygen. If it is so deadly, why are there no carbon monoxide detectors throughout the dorm halls?").

Category 3: Questions for which the answer is a functional or evolutionary explanation. (In this case students begin by asking a question that relates to function and could, in principle, be answered in functional terms--"Why do people have an appendix?"--however, the deeper answer is more often related to evolution than to function (the human appendix is a vestigial organ)).

Category 4: Questions in which the student seeks more information than is available in the textbook (i.e., "what causes the 'rumbling' in your stomach when you are hungry?").

Category 5: Questions resulting from extended thought and synthesis of prior knowledge and information, often preceded by a summary, a paradox, or something puzzling. (i.e., "In chapter 35 it says that caffeine, if taken excessively, can disrupt motor coordination and mental coherence which can cause depression. I know that Coca-Cola has some amount of caffeine in it. Does this mean that excessive consumption of it could lead to depression . . . ?")

Category 6: Questions that contain within them the kernel of a research hypothesis (i.e., "I have heard that some people snore so badly that they stop breathing during their sleep. What correlation is there, if any, between 'heavy snorers' and a higher instance of apnea during REM sleep. Can the attention their nervous system is devoting to a dream, interfere the regulation of respiration?").