30 October 2009

Promising Practices in Undergraduate STEM Education

In STEM education transformation, it is important to evaluate changes in light of implementation standards and student performance standards. Froyd (2008) puts together eight promising practices in STEM transformation and discusses how these practices can be evaluated.
  1. Use of learning outcomes
  2. Organize students in small groups
  3. Organize students in learning communities to promote integrated and interdisciplinary learning
  4. Organize content based on problem or scenario
  5. Provide students feedback through systematic formative assessment
  6. Design in-class activities to actively engage students
  7. Provide students with the opportunities to engage in undergraduate research
  8. Have faculty initiate student – faculty interactions
Implementation standards and student performance standards can be used to evaluate each of these practices. Implementation standards include:
  1. whether the practice is relevant for the course
  2. whether sufficient resources are available
  3. the amount of effort required for the implementation
Student performance standards include:
  1. whether students show a performance gain under the new practice compared with students who did not experience it
  2. comparisons across different implementations of the same practice, different class settings, different student populations, etc. (a simple checklist sketch of both sets of standards follows this list)
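
As a rough illustration only (the field names, weighting, and decision rule below are my own assumptions, not anything prescribed by Froyd), the two sets of standards could be captured in a small checklist that an instructor fills in when weighing a practice:

    # A minimal sketch of an evaluation checklist for one promising practice.
    # The criteria mirror the implementation and performance standards above;
    # the dataclass fields and the simple decision rule are assumptions, not
    # anything prescribed by Froyd (2008).
    from dataclasses import dataclass

    @dataclass
    class PracticeEvaluation:
        practice: str
        relevant_to_course: bool          # implementation standard 1
        sufficient_resources: bool        # implementation standard 2
        implementation_effort: int        # implementation standard 3 (1 = low, 5 = high)
        performance_gain_observed: bool   # performance standard 1
        compared_across_settings: bool    # performance standard 2

        def worth_adopting(self, max_effort: int = 3) -> bool:
            """Crude decision rule: adopt if feasible and there is evidence of gain."""
            feasible = (self.relevant_to_course and self.sufficient_resources
                        and self.implementation_effort <= max_effort)
            return feasible and self.performance_gain_observed

    # Example: evaluating "organize students in small groups" for a CS1 course.
    small_groups = PracticeEvaluation(
        practice="small groups",
        relevant_to_course=True,
        sufficient_resources=True,
        implementation_effort=2,
        performance_gain_observed=True,
        compared_across_settings=False,
    )
    print(small_groups.worth_adopting())  # True
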
Reference:

Froyd, J. (2008). White Paper on Promising Practices in Undergraduate STEM Education. Retrieved on October 30, 2009 from here.

23 October 2009

Innovative Ways of Teaching Computer Science (Part 2)

Please add to this list if you have any good ideas on innovative ways of teaching Computer Science:
  • Team teaching
  • Turning lectures into labs (given that many students bring their laptops to lectures, why not form groups of students with at least one laptop per group for some hands-on activities?)
  • Treating programming assignments like math homework problems (why do we give only big assignments most of the time? a sketch of one such small exercise follows this list)
  • Play games (use games to engage students, see Thiagi web site)
  • Invention activities
  • Use humor, group activities, field trips
  • Get rid of textbooks or let students learn as much as they can on their own and share (these are two ends of the spectrum)
  • Let students decide what practical problems they are interested in solving, with guidance from faculty (e.g. iPhone programming, web apps, robotics), and structure the course around them.
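
To make the "math homework" idea above concrete, here is a hypothetical sketch of what one small, drill-style exercise might look like in Python; the exercise and function name are made up for illustration, not drawn from any particular course:

    # Hypothetical drill-style exercise: one small, self-contained problem,
    # analogous to a single math homework question, rather than a large project.
    #
    # Exercise 3 of 10: write a function that returns the average of the
    # positive numbers in a list (and 0.0 if there are none).

    def average_of_positives(numbers):
        positives = [x for x in numbers if x > 0]
        return sum(positives) / len(positives) if positives else 0.0

    # A couple of quick checks the student can run immediately for feedback.
    assert average_of_positives([1, -2, 3]) == 2.0
    assert average_of_positives([-1, -5]) == 0.0
    print("Exercise 3 passed")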

22 October 2009

Problem Solving

Definition: Problem solving is cognitive processing directed at achieving a goal when no solution method is obvious to the problem solver. (Mayer, 1992)

Proposition #1: Problem solving abilities do not transfer between disciplines. (Maloney, 1993)

Proposition #2: A student's strengths and weaknesses in problem solving are the same regardless of the environment. For example, a student's strengths and weaknesses in solving a complicated trip-planning problem are the same when solving a physics problem or performing in the workplace. (Adams and Wieman, 2007)

Implications: if #1 is true, is the argument that math and logic help students in Computer Science no longer valid?

If #2 is true, should all Computer Science students play a lot more video games?

References:

Adams, W. and Wieman, C. (2007). Problem Solving Skill Evaluation Instrument - Validation Studies. Retrieved on October 22, 2009 from here.

Maloney, D.P. (1993). Research on Problem Solving: Physics, in Handbook of Research on Science Teaching and Learning edited by D.L. Gabel. Toronto: Macmillan. pp 327 - 354.

Mayer, R.E. (1992). Thinking, Problem Solving, Cognition (2nd ed.). New York: Freeman.

Innovative Approaches to Teaching Computer Science (Part 1)

What we teach in Computer Science depends a lot on how we think of Computer Science as a discipline. According to Lewis and Smith (2005), the segregationists think that it is mainly problem solving, algorithmic analysis, and theory building, and not an art. The integrationists think that it should be driven by what is needed in other computing fields and majors, such as applied computing in bioinformatics and engineering, and only partly by industry. The synergists think it should transcend any single discipline, with computing concepts applied in much broader terms to areas outside computing. As an example, the computing concept of pattern matching may initially be applied to DNA sequence matching (synergistic model), but it is now core to DNA analysis in bioinformatics (integration model), and complexity theories may grow out of specialized algorithms in this area (segregation model).
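
To make the pattern-matching example concrete, here is a minimal sketch (my own illustration, not taken from Lewis and Smith) of the classic computing idea of naive substring search applied to finding a motif in a DNA sequence:

    # Naive pattern matching applied to a DNA sequence: report every position
    # where the motif occurs. This is only an illustrative sketch; real
    # bioinformatics tools use far more sophisticated algorithms.

    def find_motif(dna: str, motif: str) -> list:
        positions = []
        for i in range(len(dna) - len(motif) + 1):
            if dna[i:i + len(motif)] == motif:
                positions.append(i)
        return positions

    sequence = "ACGTACGTGACGGT"
    print(find_motif(sequence, "ACG"))  # [0, 4, 9]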

How we teach Computer Science can also be influenced by these three models. One of the synergistic ways of teaching Computer Science is to consider the approaches to teaching in fine arts and how these can be applied to our discipline. Computer Science is traditionally taught in a format that is instructor centered (the instructor is the expert, students are the novices), where the subject matter is abstracted from its practical use (toy programs vs. real-life applications) and taught in an individualized, non-collaborative (to avoid cheating) environment. In contrast, the fine arts approach to teaching involves far more student - student collaboration, student - instructor engagement, etc. (Barker et al. 2005). This is starting to change as we see more Just-in-Time Teaching, use of clicker questions during lectures, pair programming, peer instruction, peer evaluations, in-class activities, group projects, two-stage exams, media programming, etc. to increase enrollment and reduce attrition, especially among female students in Computer Science. The paper by Barker et al. also has a good background summary on the attrition of women in Computer Science and on how a fine arts approach to teaching may help Computer Science teaching.

An integrative approach to teaching Computer Science can be seen in the paper by Cushing et al. (2009), which reports an entry-level Computer Science course integrated with computational linguistics that included case studies, a term project, a lecture series, and seminars. Of the 70 students who completed the course, 24 went on to the next quarter of Computer Science, several of whom had not originally intended to. It is not clear how this compares to other years.

References:

Lewis, T. and Smith, W. (June 2005). The Computer Science Debate: It's a Matter of Perspective. The SIGCSE Bulletin. 37(2), pp 80 - 84.

Barker, L., Garvin-Doxas, K., and Roberts, E. (February 2005). What Can Computer Science Learn From a Fine Arts Approach to Teaching? SIGCSE 2005, pp 421 - 425.

Cushing, J., Hastings, R., Walter, B. (2009). CS0++ Broadening Computer Science At The Entry Level: Linguistics, Computer Science, And The Semantic Web. The Journal of Computing Sciences in Colleges, Papers of the Sixteenth Annual CCSC Midwestern Conference, October 9 - 10, 2009. pp 135 - 142.

17 October 2009

Curriculum Change

What or who drives curriculum change? Some claim it should be the academic faculty; others claim industry, employers, or best practices; still others claim the students. Gruba et al.'s (2004) extensive survey finds that computing education curriculum changes are driven more by individuals, politics, and fashion (what is attractive to students) than by academic merit and external curricula. So how can curriculum changes be made more objectively?

Peter Wolf, Associate Director of Teaching Support Services at the University of Guelph, co-edited the New Directions for Teaching and Learning publication, “Curriculum Development in Higher Education: Faculty-Driven Processes and Practices”. He is also the first author of the Handbook for Curriculum Assessment. In the handbook, he suggests a curriculum development process that incorporates Donald Kirkpatrick's four-level training assessment model; the evidence gathered through assessment informs and guides the entire process. Here is a synopsis of the individual processes:

Curriculum Development

Peter Wolf's curriculum development process is a top-down model: it starts with the learning goals and expected outcomes that an ideal graduate should acquire, and then refines these into a program / course structure and specific learning activities through which the goals can be implemented.

Training / Learning Assessment

Donald Kirkpatrick (1994) proposed a four level model to assess effectiveness of training:
  1. Reaction - Did the learners like the program? Was the material relevant to their work? This type of evaluation is often called a “smilesheet.” According to Kirkpatrick, every program should at least be evaluated at this level to provide for the improvement of a training program.
  2. Learning - Did the learners learn anything? Have the students advanced in skills, knowledge, or attitude? Pre-tests and post-tests are often administered to assess student learning (a small pre/post gain calculation is sketched after this list).
  3. Transfer - Are the newly acquired skills, knowledge, or attitudes being used in the everyday environment of the learner? For many trainers this level represents the truest assessment of a program's effectiveness. It is also the most difficult level to assess.
  4. Results - Is there any increased production, improved quality, decreased cost, reduced frequency of accidents, increased sales, or even higher profit or return on investment from the training?
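
At the Learning level, pre-test and post-test scores are often summarized as a normalized gain, g = (post - pre) / (100 - pre). The sketch below is my own illustration with made-up scores, not part of Kirkpatrick's model:

    # A minimal sketch (an assumption, not part of Kirkpatrick's model) of a
    # Level 2 "Learning" check: compute each student's normalized gain
    # g = (post - pre) / (100 - pre) from percentage scores, then average.

    def normalized_gain(pre: float, post: float) -> float:
        if pre >= 100:           # already at ceiling; no room to gain
            return 0.0
        return (post - pre) / (100.0 - pre)

    pre_scores = [40, 55, 70, 30]
    post_scores = [70, 75, 85, 60]

    gains = [normalized_gain(pre, post) for pre, post in zip(pre_scores, post_scores)]
    print(round(sum(gains) / len(gains), 2))  # average normalized gain across the class
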
Integrated Development and Assessment Model

By combining Wolf's and Kirkpatrick's models, each stage of Wolf's development process can be assessed by the appropriate levels of Kirkpatrick's assessment model, so that each is informed by the other.
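
One way to picture the combined model is a simple lookup from each stage of Wolf's process to the Kirkpatrick levels that could supply evidence for it. The stage names and pairings below are my own reading for illustration, not a table taken from the handbook:

    # A rough sketch of the integrated model: each curriculum development stage
    # is paired with the Kirkpatrick levels that could supply evidence for it.
    # The stage names and pairings are assumptions for illustration only.
    integrated_model = {
        "ideal graduate outcomes":      ["Results"],
        "program / course structure":   ["Transfer", "Results"],
        "specific learning activities": ["Reaction", "Learning"],
    }

    for stage, levels in integrated_model.items():
        print(f"{stage}: informed by {', '.join(levels)}")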


References:

Gruba, P., Moffat, A., Søndergaard, H., and Zobel, J. 2004. What drives curriculum change?. In Proceedings of the Sixth Conference on Australasian Computing Education - Volume 30 (Dunedin, New Zealand). R. Lister and A. Young, Eds. ACM International Conference Proceeding Series, vol. 57. Australian Computer Society, Darlinghurst, Australia, 109-117.

Kirkpatrick, D.L. (1994). Evaluating Training Programs: The Four Levels. San Francisco, CA: Berrett-Koehler.

Wolf, P., Hill, A., and Evers, F. (2006). The Handbook for Curriculum Assessment. University of Guelph. Obtained February 2007 from here.

12 October 2009

Student Sharing (legitimately)

While students are warned repeatedly against plagiarism, are there any advantages to having them share their work with each other after submission? One possible benefit is that students get to see how their peers have completed their assignments. This is particularly useful when the assignment is open-ended and students are free to choose the problems they would like to solve, the essays they would like to write, the projects they would like to work on, or any areas of interest related to the course subject they may want to pursue. This in turn creates a multitude of learning contexts, which promotes knowledge transfer. According to Bransford et al. (2000), knowledge transfer is influenced by a number of factors. Some of these are:
  • degree of mastery of original subject (without a good understanding of the original material, transfer cannot be expected)
  • degree of understanding rather than just memorizing facts
  • amount of time to learn, and more specifically the time on task (or deliberate practice)
  • motivation (whether students are motivated by performance or learning)
  • exposure to different contexts
  • problem representations and relationships between what is learned and what is tested
  • student metacognition: whether learners actively choose and evaluate strategies, consider resources, and receive feedback (active transfer), or depend on external prompting (passive transfer)
Open-ended assignments, in which students are encouraged to pursue problems they are interested in, to share their work with one another, and even to critique each other's work, touch upon many of these factors. Poogle (Head and Wolfman, 2008) is a framework for students to submit, share, and assess open-ended, interactive "unknown-answer" computer science assignments. The SWoRD system (Cho et al., 2007) allows students to review each other's writing, and studies have shown that peer reviewing can support learning to write from many angles. Both have been successful in promoting student learning through the process of student sharing.
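
Systems like Poogle and SWoRD must decide which submissions each student reviews. The sketch below shows one simple round-robin rotation; it is an illustrative assumption, not the actual assignment algorithm of either system:

    # A simple round-robin reviewer assignment (an illustrative sketch, not the
    # algorithm used by Poogle or SWoRD): each student reviews the next k
    # submissions in a circular ordering, so nobody reviews their own work.

    def assign_reviews(students, k=2):
        n = len(students)
        assignments = {}
        for i, reviewer in enumerate(students):
            assignments[reviewer] = [students[(i + offset) % n] for offset in range(1, k + 1)]
        return assignments

    students = ["alice", "bob", "carol", "dave"]
    for reviewer, authors in assign_reviews(students, k=2).items():
        print(f"{reviewer} reviews: {authors}")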

References:

Head, C and Wolfman, S. (2008). Poogle and the Unknown-Answer Assignment: Open-Ended, Sharable CS1 Assignments. SIGCSE 2008. pp 133 - 137.

Cho, K., Schunn C., Kwon, K. (2007). Learning Writing by Reviewing. Retrieved on October 13, 2009 from here.

Bransford, J., Brown, A., Cocking, R. (eds). (2000). How People Learn. Washington: National Academy Press.

09 October 2009

Case-Based Teaching and Data Analysis

Case-Based Teaching and Learning Gains
  1. Case-based teaching that emphasizes problem solving and discussion improves student performance significantly on exams throughout the semester. It also enhances students' abilities to correctly answer application- and analysis-type questions.
  2. While case-based teaching improves student exam performance overall, lecture-based teaching results in more top-performing students (90% or higher exam score) than case-based teaching. I wonder whether the "top" students we traditionally think of are so well trained in learning under the didactic teaching method that, when they are exposed to other learning styles, they simply become lost!
Data Analysis on Changes in Course Delivery

Here are different data analyses that can be done to determine the effects of changes made in a course (a small sketch of analyses 1 and 2 follows the list):
  1. Use prerequisite course final exam scores or entrance exam scores to determine variation of student academic ability when comparing students from different terms.
  2. Compare first test score with the final test score in a course to see how students improve in their different levels of learning (which can either follow Bloom's categories, or simply two levels: knowledge-comprehension / application-analysis).
  3. Compare total exam points earned by students in different grade bands (90% or higher, 80% - 90%, 70% - 80%, etc.)
  4. Classify course material, homework, etc. by Bloom level and correlate with the test scores in item 2.
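
As a sketch of analyses 1 and 2 (the score lists and variable names below are hypothetical), one could compare final exam scores between a term taught the old way and a term taught the new way with a two-sample t-test, and summarize first-test to final-test improvement within a term:

    # Hypothetical sketch of analyses 1 and 2 above; the score lists are made up.
    from statistics import mean
    from scipy import stats  # SciPy's independent two-sample t-test

    # 1. Compare final exam scores between a lecture-based term and a case-based term.
    lecture_term = [62, 71, 55, 80, 67, 74, 59, 68]
    case_based_term = [70, 76, 64, 83, 72, 78, 66, 74]
    t, p = stats.ttest_ind(case_based_term, lecture_term)
    print(f"t = {t:.2f}, p = {p:.3f}")

    # 2. Within one term, look at improvement from the first test to the final test.
    first_test = [55, 60, 72, 48]
    final_test = [68, 70, 81, 62]
    improvements = [final - first for first, final in zip(first_test, final_test)]
    print("mean improvement:", mean(improvements))
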
Reference:

Chaplin, Susan. (September / October 2009). Assessment of the Impact of Case Studies on Student Learning Gains in an Introductory Biology Course. Journal of College Science Teaching. pp 72 - 79.

Case Studies Resources:

National Center for Case Study Teaching in Science:
http://ublib.buffalo.edu/libraries/projects/cases/case.html

The case page (with cases for many different science areas):
http://ublib.buffalo.edu/libraries/projects/cases/ubcase.htm

04 October 2009

Student Cheating

In a recent student survey conducted in one of the Computer Science courses at UBC, we asked the following question with the preamble: Just like all your other responses in this survey, no instructor will have access to your identity. In particular, your responses to the following two questions will not be used in any way as evidence of violation of academic misconduct.
Do you believe you may have ever violated the academic conduct guidelines of a UBC course and, if so, what activities were you engaged in?
Out of the 81 responses we received, no student admitted to having violated the academic conduct guidelines. Of course, it is quite possible that UBC students are highly ethical in nature, or the question may not have been clear enough on what constitutes "academic conduct guidelines". In any case, even with the preamble, students may not have felt comfortable revealing the truth because the survey did ask for their student number at the beginning! In a study of student cheating by Sheard et al. (2002), students' self-reports of their cheating activities range from around 10% to 47%. In general, there are internal and external factors that cause students to cheat, but the three most common reasons are time pressure, possible failure of the course, and difficulty of the work.

One of the more publicized cases of student cheating in Computer Science is reported by Zobel (2004). In that case, students cheated by purchasing assignments and even having someone write the exams for them. As the faculty tried to investigate the case, they were met with violent threats and even office break-ins. It all sounds like a soap opera, but it is understandable that many faculty members and administrators do not want to deal with cheating cases. After all, it is costly for everyone involved.

Greening et al. (2004) and Joyce (2007) examine ways of integrating ethical content into computing curricula. A student survey built around a number of cheating scenarios seems to challenge students' thinking on critical ethical issues. It is also important that faculty have a good grounding in philosophical frameworks to guide the students, including utilitarian, deontological, virtue, and relativist frameworks.

The prevalence of cheating, especially on assignments, works against student learning, because properly designed assignments are effective ways to help students construct their knowledge. If instructors believe that students mostly cheat on assignments, they tend to place less emphasis (and hence fewer marks) on assignments, and students become even less motivated to do them. Why is it so difficult to create Computer Science assignments that are fun and built from small incremental tasks that engage the students?

References:

Zobel, Justin. (2004). Uni Cheats Racket: A Case Study in Plagiarism Investigation. Retrieved on October 4, 2009 from http://crpit.com/confpapers/CRPITV30Zobel.pdf.

Greening, T., Kay, J., and Kummerfeld, B. (2004). Integrating Ethical Content Into Computing Curricula. Sixth Australasian Computing Education Conference, Dunedin, NZ. Retrieved on October 4, 2009 from http://crpit.com/confpapers/CRPITV30Greening.pdf.

Sheard, J., Carbone, A., and Dick, M. (2002). Determination of Factors which Impact on IT Students' Propensity to Cheat. Australasian Computing Education Conference (ACE2003), Adelaide, Australia. Retrieved on October 4, 2009 from http://crpit.com/confpapers/CRPITV20Sheard.pdf.

Joyce, D. (2007). Academic Integrity and Plagiarism: Australasian perspectives. Computer Science Education. 17(3), pp 187 - 200.

Asking Questions

When we pose questions to our students, they sequentially and iteratively go through four stages: comprehension, memory retrieval, judgment, and mapping (Conrad and Blair, 1996; Tourangeau, 1984; Oksenberg and Cannell, 1977). At any one of these stages, students may find it difficult to answer a question because of the choice of words and the way the question is asked. This may not be because of misconceptions about the subject matter, but may instead indicate that the question needs to be revised. Ding et al. (2009) summarize their results from validating clicker questions using student interviews.

In the comprehension stage, we want to make sure the students understand the problem accurately. In a think-aloud session, we may be able to see whether the students have misinterpreted the question. Otherwise, a misinterpretation can easily be dismissed as a misconception that the students have.

In the memory retrieval stage, we want to make sure the students are accessing the relevant information to solve the problem. If any part of the question triggers associations that lead students down the wrong path, the question acts as a "trick" question and is not testing student learning.

In judgment, students need to perform the appropriate task to solve the problem, given a correct retrieval of relevant information. If a question is not clear about the context or conditions, students may not be able to reach a definite conclusion. In those cases, the question needs to be clarified.

In mapping, students need to correctly map the right answer to the right choice. Here, the choices provided must be clear so that students can make a definite selection.

Validating questions takes time, and student interviews seem to be an effective way of helping instructors refine their questions. Teachers can also learn something about student responses, for example whether a majority of students get a question wrong, by examining exam scores and their correlation with other data. Such a forensic study may reveal how students interpret and think through the questions.
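
One common forensic check is classical item analysis: compute each question's difficulty (fraction of students answering correctly) and its correlation with total score (a point-biserial correlation). A question that most students miss, or that strong and weak students miss equally often, is a candidate for revision. The response data below are made up for illustration and are not from Ding et al.:

    # Item analysis on a small, made-up response matrix: rows are students,
    # columns are questions, 1 = correct, 0 = incorrect.
    import numpy as np

    responses = np.array([
        [1, 1, 0, 1],
        [1, 0, 0, 1],
        [1, 1, 1, 1],
        [0, 1, 0, 0],
        [1, 1, 0, 1],
    ])

    totals = responses.sum(axis=1)        # each student's total score
    difficulty = responses.mean(axis=0)   # fraction of students answering correctly

    for q in range(responses.shape[1]):
        # Point-biserial: correlation between getting this item right and total score.
        r = np.corrcoef(responses[:, q], totals)[0, 1]
        print(f"Q{q + 1}: difficulty = {difficulty[q]:.2f}, point-biserial r = {r:.2f}")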

References:

Ding, L, Reay, N.W., Lee, A., Bao, L. (2009). Are We Asking the Right Questions? Validating Clicker Question Sequences by Student Interviews. American Journal of Physics. 77(7), pp 643 - 650.

Conrad F. and Blair, J. (1996). From Impressions to Data: Increasing the Objectivity of Cognitive Interviews. Proceedings of the Section on Survey Research Methods, American Statistical Association. (ASA, Alexandria, VA). p 1.

Tourangeau, R. (1984). Cognitive Science and Survey Methods. Cognitive Aspects of Survey Design: Building a Bridge Between Disciplines. Edited by T. Jabine, M. Straf, J. Tanur, and R. Tourangeau. (National Academies Press, Washington, DC). p 73.

Oksenberg, L. and Cannell, C. (1977). Some Factors Underlying the Validity of Response in Self-Report. Bull. l'Institut Int. Stat. 48, pp 325 - 346.

01 October 2009

7 Techniques of Teaching / Learning

deWinstanley summarizes Bjork's seven studying techniques in the reference below. These seven learning techniques have corresponding implications for teachers. Here is the list for teachers:
  1. Allocate your attention efficiently. Anything that does not help your students bridge what you want them to learn with what you want to tell / show them is a distraction. If you tell a story, make sure there is a connection with what you want them to learn. Use questions to help your students to focus.
  2. Interpret and elaborate on what you are trying to teach. Students need context to apply what they learn so they can have better recall and retention.
  3. Make your teaching variable (e.g. location, interpretation, example). Use a variety of contexts to illustrate what you want to teach (see points 1 and 2). Try contrasting cases.
  4. Space your teaching of a topic or area and repeat it several times. Instead of blocking or massing what you want to teach on a given topic into one big chunk of time, try to space it out over a number of sessions (a toy spacing schedule is sketched after this list).
  5. Organize and structure the information you are trying to teach. Provide a skeleton outline rather than a full outline so students pay more attention. Provide, or have the students produce (see point 7), a concept map that captures the concepts and their relationships with one another.
  6. Help students to visualize the information. Reinstate the context during a test. Use mnemonics, graphs, props, etc., but make sure they are helpful for the student to build bridges to the learning content (see point 1).
  7. Generate Generate Generate ... Retrieve Retrieve Retrieve. Give students lots of tests and opportunities to construct their knowledge. Feedback is good but even if they don't get immediate feedback, have them generate their knowledge over and over again.
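
As a toy illustration of point 4 (the topics and session counts are made up), spacing can be as simple as interleaving each topic's repetitions across the available class sessions instead of teaching each topic in one block:

    # A toy spacing schedule (illustrative only): instead of massing each topic
    # into one block, interleave topics round-robin across the sessions so that
    # repetitions of the same topic are spread out in time.
    from itertools import cycle

    def spaced_schedule(topics, repetitions, sessions):
        """Assign `repetitions` visits per topic to `sessions` slots, round-robin."""
        slots = [[] for _ in range(sessions)]
        slot_cycle = cycle(range(sessions))
        for _ in range(repetitions):
            for topic in topics:
                slots[next(slot_cycle)].append(topic)
        return slots

    schedule = spaced_schedule(["recursion", "loops", "lists"], repetitions=2, sessions=6)
    for day, topics in enumerate(schedule, start=1):
        print(f"Session {day}: {topics}")
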
References:

deWinstanley, Patricia. (1999). The Science of Studying Effectively. Bjork's Seven Studying Techniques. Retrieved on September 4, 2009 from http://www.oberlin.edu/psych/studytech/.