
August 2022 - Year 24 - Issue 4

ISSN 1755-9715

Designing an Effective Assessment Strategy for “What’s so Funny?” Module

Alim Asanov has worked as a Lecturer at Westminster International University in Tashkent (WIUT) since 2011. He holds an MA in Theory and Practice of Translation and Interpretation obtained in 2008. Currently, he is pursuing his second Master’s in Learning and Teaching (MALT TESOL) at WIUT. His research interests include teacher education, assessment in HE and classroom management. Email: aasanov@wiut.uz


Introduction

Assessment has undergone a profound shift in its development and is today viewed as an inextricable part of teaching and learning. Yet despite the wealth of available strategies, not every teacher is able to align them in a way that feeds learning. In this assignment, for the first time in over 15 years of teaching, I intend to make a change in my own practice by developing and justifying an assessment strategy for the “What’s so Funny?” module. The syllabus is aimed at first-year students enrolled on a Communication and Media course in Higher Education. The 12-week course has 24 sessions in total, each lasting 120 minutes (48 contact hours). The estimated number of student-managed learning hours is 152.

The course is underpinned by four measurable Learning Outcomes (LOs), which have been followed to ensure conducive student learning:

  1. Ability to appreciate and distinguish humor and satire.
  2. Ability to discuss and analyze instances of purported humor.
  3. Ability to analyse how humor may reflect or shape cultural norms, values and attitudes.
  4. Ability to use the complexities and ambiguities of humor and satire as a means of communication.

Since the academic semester encompasses 12 weeks of study, it has been decided to build the assessment around four different assignments, each falling at even intervals of three teaching weeks. The students will thus be successively engaged in cooperative, pair and individual modes of work, widening their learning experience and ensuring gradual adaptation to and familiarization with the course.

I have tried to ensure the internal consistency of the course structure by underpinning it with constructive alignment. In this design, the learning outcomes, assessments and instructional strategies are carefully balanced so that the desired level of mastery of the course content can be attained. Since assessments tend to enhance students’ learning, learners should be informed beforehand about the learning goals being targeted, the assessments set for them, and the assessment criteria against which their level of knowledge is to be evaluated. Davis recommends that teachers design an assignment as a multi-faceted element in which, in addition to scoring procedures, layout and format, measurable and task-relevant learning outcomes take the focal point (1993, p240). All four assessment tasks are summative in nature (with formative elements disseminated throughout) and aim to measure both the result and the process of students’ academic performance:

  1. Conduct an interview (LO 1; Group work; Summative; Weighting 20%; Process assessment)
  2. Write an interview article (LO 2; Group work; Summative; Weighting 30%; Product assessment)
  3. Simulate a role-play (LO 4; Pair work; Summative; Weighting 20%; Process/Performance assessment)
  4. Write a narrative essay on filmed role-play simulation (LO 3; Individual work; Summative; 30%; Product assessment)
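Assuming marks are recorded on a common 0-100 scale (an assumption made here purely for illustration; the task names are shorthand for the four assignments above), the overall module mark can be sketched as a weighted sum of the four tasks:

```python
# Sketch of the overall module mark as a weighted sum of the four
# summative tasks. The 0-100 marking scale and the example marks are
# hypothetical; the weights come from the assessment scheme above.
WEIGHTS = {
    "interview": 0.20,        # Task 1: conduct an interview
    "article": 0.30,          # Task 2: interview article
    "role_play": 0.20,        # Task 3: role-play simulation
    "narrative_essay": 0.30,  # Task 4: narrative essay
}

def module_mark(task_marks):
    """Combine task marks (each 0-100) into a weighted overall mark."""
    return sum(WEIGHTS[task] * mark for task, mark in task_marks.items())

# Example with a hypothetical student's marks:
marks = {"interview": 70, "article": 65, "role_play": 80, "narrative_essay": 60}
# module_mark(marks) is 0.2*70 + 0.3*65 + 0.2*80 + 0.3*60 = 67.5
```

Because the four weights sum to 1.0, the result stays on the same 0-100 scale as the individual task marks.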

In the analysis of the four assessment strategies below, each of these methods will be discussed and evaluated to establish how optimal and balanced they are in relation to the predetermined learning outcomes.

The four summative assessment tasks form two blocks, each comprising two strategies, with the initial task in each block serving as the prerequisite for the written assignment. Therefore, the principles of assessment (following the discussion of assessment strategy, method and procedures) will be viewed cumulatively for these two sets from the perspective of validity, reliability, transparency, feedback, manageability, inclusivity and motivation, deep approach to learning and student-centeredness.

 

Alignment

Teachers are often tempted to create an assessment that is the least challenging and most easily measurable. Deviating from this well-trodden path and avoiding an assessment in which the teacher is the central figure, I searched rigorously for ways to link, namely to align, module learning outcomes with the methods of assessment. The principles of constructive alignment, expounded by Biggs in 2003, helped me create a simple framework with which I could think through assessment tasks. Two questions guided me throughout: ‘What is it that I want my students to have learned by the end of this course…?’ (normally worded as the learning outcome), and the concomitant ‘How will I know that they have learned it?’ (Campbell and Norton, 2007, p93). The answers to these questions lie in the follow-up discussion and analysis of the main principles of assessment.

 

Summative Assessment #1

Assessment strategy, method and procedures

Interviews are indispensable and highly effective teaching tools, recognized by students for the valuable experience this assessment strategy is bound to bring. They give students an opportunity to generate new ideas and meaningful insights in a flexible, natural setting. Besides offering the chance to work in an unusual setting, organizing and conducting interviews (structured, viz. “protocol”, or semi-structured) fosters students’ accountability and responsibility. This strategy is justified since it develops not only students’ independence and collaboration, but also allows scope for verifying their understanding and stimulating metacognition. Dori, Mevarech and Baker define the latter as “thinking about one’s thoughts” (2018, p17). Hacker et al. (1998) assert that metacognition is the thinking process linked to a person’s inner mental conceptualization of reality, deliberately performed in a planful and goal-oriented manner while being prompted by higher-level thinking (cited in Dori, Mevarech and Baker, p17). For example, learners themselves can decide what sort of questions to put to an interviewee and how, or in what format to organize the process to reap the expected results.

As an assessment method, the qualitative group interview responds to LO 1, which aims at gauging students’ ability to appreciate and distinguish humour and satire. Since the verb “appreciate” is not measurable according to Bloom’s Taxonomy of Measurable Verbs, the synonymous verb “appraise” will be used instead (bearing in mind that this verb implies assessing both students’ ability to analyse and to evaluate). Considering this taxonomy as a six-level hierarchy: remember (1), understand (2), apply (3), analyse (4), evaluate (5) and create (6), the current assessment strategy looks predominantly at Levels 3 and 4. It is essential to emphasize that a variety of assessment types preceding those levels can be implemented to achieve the pre-set LOs. What is more, by applying and analysing, students can develop their higher order thinking skills, known as HOTs. Afifah and Retnawati (2019) believe that, on top of fundamental daily life skills, students’ academic achievement directly depends on their critical, logical and reasoning skills, which are the primary goals of HOTs. Implementing HOTs presupposes preparing students for a non-traditional atmosphere in which decisions need to be made to solve problems. Consequently, it should be understood that HOTs are hardly feasible in teacher-centered classes and cannot be taught directly as a separate skill within lesson objectives; however, by involving students in active learning with adequate scaffolding, this becomes manageable. Scaffolding is a concept often used metaphorically as a “temporary structure”. It was proposed in 1976 by Wood, Bruner and Ross, and implies helping a learner to solve a task beyond their scope of knowledge or ability with the support of someone more adept (Gibbons, 2015, p16).
To scaffold students’ ability to conduct effective interviews, the teacher can implement a number of activities, such as reviewing the basics, teaching how to elicit high-quality responses, writing the right questions, and documenting and practicing the interview (Yeung, 2009).

It should be explained to students that the interview is conducted in order to collect data for the upcoming assignment, which is writing an interview article. Locke and Latham (2002, p713) assert that students’ learning and achievement increase when goals and instructions are accurately delivered to them. This task, like the article that follows it, is carried out in small groups. Because the students will work in close collaboration for several weeks, it is crucial to create small groups of 3-4 after a class or two, once rapport between the teacher and the students has been built. As Davis (2009) emphasizes, smaller group arrangements are a better option for more productive shared work, especially if students have little experience in this mode of study and interaction. However, the experience may become chaotic and frustrating if the students are left to form groups independently. Such grouping will most probably be based on social connectedness, depriving members of adequate task concentration and enthusiasm, or will split the class into clusters of high- and low-achieving students. Cases of free-riding and inefficient progress are typically reported to teachers in these circumstances. A better alternative, as my experience suggests, is forming mixed-ability groups following the teacher’s careful observation of students’ class participation, background knowledge and personality traits. To enhance the spirit of group work, interdependence and sense of responsibility, the students will be required to sign a Contract Proforma (can be adapted from: Curtin Business School http://www.cbs.curtin.edu.au/files/cbsUnitsCourses/Contract%20Administration%20502.doc). Connery (1988) believes that contracts like this are an effective tool for developing teamwork skills and ensuring an equal contribution to a common assignment (cited in Davies, 2009, p574).
To make the students feel an inextricable part of the assessment process, become more observant and enhance student-centeredness, they will be instructed to complete a Peer Evaluation Form in which each member anonymously evaluates the others on a range of aspects they went through together before, during and after the interview. Walvoord (1986) considers this a good method for managing uncooperative members of the group and “…recommends telling the class that after the group task is completed, each student will submit to the instructor an anonymous assessment of the participation of the other group members: who did extra work and who shirked work” (cited in Davis, 1993, p152).

As a group, you should interview (for no longer than 15 minutes) a professional from any field that is frequently ridiculed in stand-up shows, parodies, satirical books/films, etc. There are four conventional theories of modern humour: Superiority Theory, Relief Theory, Incongruity Theory and Benign Violations Theory (Mulder and Nijholt, 2002). Choose one of these theories against which to collect people’s opinions, attitudes and values through a set of questions. The main goal is to establish, for the upcoming task, the differences and similarities that may occur among the representatives. Follow these consecutive steps to organize the procedure:

  1. Sign a Group Contract Proforma (it will contribute to the assessment) and choose a leader.
  2. As a group conduct research on each of the theories and select one that all group members would like to base their further task on.
  3. Think of any occupations you are mostly inquisitive about and using your background knowledge and the Internet explore their most vivid examples associated with humour and satire.
  4. Establish contact with the potential interviewee and agree on the medium (face-to-face or virtual), day(s) to meet up for the interview and location.
  5. Decide whether you want your interview to be conducted in a structured or semi-structured format and prepare interesting interview questions in the order they will appear (mind that they will become the foundation for writing an article); along with the questions, include filters and prompts.
  6. One week prior to the interview, submit the interview questions to your teacher for formative feedback and make improvements where suggested.
  7. Taking turns, try rehearsing/piloting an interview to build confidence and envisage possible mishaps.
  8. Conduct the recorded interview after informing the interviewee about confidentiality matters and ensuring that his/her identity will not be revealed for anything other than research purposes; sign the interview attendance form among the interviewers and interviewee (make sure to include the subject’s contact details and email for further correspondence).
  9. During the interview, avoid giving your personal perspective/bias and ask the questions distributed in advance among the team members as well as ad hoc ones; meanwhile, take back-up notes on anything interesting you observe in the interviewee’s reactions and body language.
  10. Expand the notes into sentences, preferably within 24 hours (Mack et al., 2011, p33).
  11. Transcribe the interview to keep it as proof and reference material for writing the article.
  12. Discuss the obtained information as a team. This procedure, known as “content analysis”, is used to identify and group common as well as distinguishing themes (Mathers, Fox and Hunn, 2000, p24).
  13. By the end of Teaching Week 6, upon submission of the article, evaluate each member of your team anonymously using the Google Forms Peer Evaluation Form sent by your lecturer.

The students should conduct their interviews by the end of Teaching Week 3. They will have two full weeks to prepare properly, given that group formation should be completed by the end of Teaching Week 1.

Feedback

The students will receive written formative feedback from their teacher on the quality of the designed questions, their appropriateness and their relevance to the task. This should take no longer than 3-4 days, to let students make final arrangements for the interview itself.

 

Summative Assessment #2

Assessment strategy, method and procedures

The second summative assessment method encourages students to continue working in the same teams and collaboratively turn the raw data they collected from the interview into a structured written piece that readers will want to read. This task is an interesting and effective way to consolidate knowledge and understanding of a particular discipline-related problem: through scrupulous analysis of the subject’s opinions, thoughts, beliefs and values, students incorporate them into a succinct article substantiated with external experts’ views, comparing, contrasting and revealing the interviewee’s unique personality and attitudes. Chapman (1990) maintains that fostering students’ ability to write remains the most crucial schooling priority, because this authentic assessment tool allows educators to stimulate students’ HOTs, such as their ability to compare and contrast solutions to problems, make logical connections, find appropriate support for their arguments and reach conclusions.

In accordance with LO 2, the assessment method aims to enhance students’ ability to discuss and analyze instances of purported humor. Bloom’s Taxonomy of Action Verbs helps to rationalize this: with the help of article writing, a student can translate, comprehend, summarize an argument or interpret information from preliminary learning experience, as well as classify and relate other people’s hypotheses and assumptions in a structured manner. Undeniably, when students learn how to draw connections among various ideas, they demonstrate their ability to analyse.

To accomplish the second summative assessment task, the students should comply with the following:

By writing an interview article on a narrowed-down critical topic, you will be able to communicate the concept of purported humor. Based on the new in-depth insight you gather from the interviewee, write a 1,200+ word article in an engaging way to keep readers informed, amused and interested. Your primary goal is to captivate the reader with the subject’s personality, which should be placed at the center.

The following instructions should be followed to write an Interview Article:

  1. Carefully study the assessment criteria first.
  2. As a team, determine what format your article will be written in:
     a. Question-and-answer interview
     b. Personal interview
     c. Narrative essay interview
  3. Create a properly structured draft (Introduction, Body and Conclusion) of the article using hooks, external sources and background information about the subject, as well as his/her quotes (the latter should not be altered, so as not to distort the meaning).
  4. By the end of Teaching Week 4, upload your draft for peer review and the teacher’s feedback through the Learning Management System (LMS) in the “Assignments” sidebar task created in advance by your teacher; insert the IDs of all team members in the Microsoft Word document.
  5. Within the next 3 days, using the assessment criteria, provide your team’s comments on another team’s draft assigned randomly by the teacher.
  6. Once the peer and teacher comments have been received (end of Teaching Week 5), modify and complete the final draft and send it to the interviewee by email to check that his/her views have been accurately represented. You should receive a response within 2-3 days; insert corrections if necessary. Take a screenshot of your correspondence and paste it in the Appendix as proof for further assessment.
  7. Revise, review and edit your article.
  8. Upload your Interview Article through Turnitin by the end of Teaching Week 6.


Feedback

During Teaching Week 5, students will provide feedback to their peers based on the assessment criteria attached to the LMS “Assignments”. The teacher manually assigns two groups to assess one another and sets a deadline for the task. The students should work as a team and be ready to provide comments based on the criteria. No other groups will be able to read the comments. Once the deadline passes, the students will no longer be able to provide feedback. At this point the teacher leaves his/her own comments, linking them to the learning outcome. The main goal here is to avoid burdening students with excessive feedback, focusing instead on pinpointing the most essential components and offering the learners specific strategies they can use for revision.

One or two days prior to submission of the interview-based article, the teacher should send each small group an individual Google Form link (i.e. “Peer and Self-Evaluation Group 1”; https://docs.google.com/forms/d/e/1FAIpQLSew5Jm9sDP_5BfAGUOnGQjW75zWVX82q_4m3wfUkpHiOsg_0Q/viewform) and encourage each student to answer all questions anonymously:

  • How would you rate your peer's performance as regards preparation, organization and participation in the interview?
  • How would you rate your peer's performance as regards analysis and discussion of the interview?
  • How would you rate your peer’s ability to inspire and support team members?
  • How would you rate your peer's performance as regards working on teacher's feedback?
  • How would you rate your peer's time management, sense of responsibility and collaboration?
  • How would you rate your own performance and contribution on a scale of 1 to 20?

The marker should obtain all Google Form links and download the Excel spreadsheet where all students’ responses are available. Each student’s aggregate score should be calculated and divided by the number of students who evaluated their peers. For example, if student A received marks of 17, 16 and 18 from his/her peers, the total of 51 should be divided by 3 to obtain the average of 17. Next, the mark a student gives himself/herself must also be considered. Suppose it is 18: the teacher should add it to the total and divide by 4, obtaining 17.25, which rounds to 17. However, if a student’s self-mark is obviously discrepant with those of his/her peers, e.g. 9, 10 and 8 vs 20 (a difference of 5 points or more), then the mark should be capped at what the collaborative assessment suggests.
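The averaging and capping procedure described above can be sketched as follows; the function name and the exact capping behaviour (falling back on the rounded peer average) are illustrative assumptions on top of the worked examples in the text:

```python
def combined_peer_self_mark(peer_marks, self_mark, discrepancy=5):
    """Average peer marks with the self-mark, capping an inflated
    self-mark at the peer average when it deviates by 5+ points."""
    peer_avg = sum(peer_marks) / len(peer_marks)
    if self_mark - peer_avg >= discrepancy:
        # Discrepant self-assessment: fall back on the collaborative mark.
        return round(peer_avg)
    # Otherwise average all marks, peers' and the student's own, together.
    return round((sum(peer_marks) + self_mark) / (len(peer_marks) + 1))

# Worked examples from the text:
# combined_peer_self_mark([17, 16, 18], 18) -> 17 (69 / 4 = 17.25, rounded)
# combined_peer_self_mark([9, 10, 8], 20)  -> 9  (self-mark capped at peer average)
```

In practice the marker could apply this per row of the downloaded spreadsheet; the 5-point threshold is the one the text itself proposes.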

Only one student per group will upload the interview-based article through the LMS to the assessment tool known as Turnitin. The other students’ IDs will be included in consecutive order in the footnote of the uploaded document as well as in the LMS. This student’s ID should be sent to the teacher one week prior to submission so that the teacher can share it with the first marker. Working against the assessment criteria (See Appendix 1), the marker will assign one mark for the article to all members of the group, while the other mark will be based on the Peer and Self-Evaluation results; students will therefore receive different total marks within the 50% weighting of this block (20% for peer and self-evaluation work while progressing through the interview and 30% for the article). Using the “Turnitin Reports and Student Uploads”, the marker will provide written feedback for all group members; meanwhile, through “Do Marking”, the teacher will submit the same comments regarding the article to all representatives of the group and different ones related to the peer and self-evaluation feedback. Consequently, the marks will vary only slightly in groups where collaboration was effective, while in groups with a less effectively established rapport there might be significant deviation.

 

Summative Assessment #3

Assessment strategy, method and procedures

One of the core aims of this course is to teach the difference between laughing at and laughing with an individual or group, as well as the difference between something being funny and being offensive. Simulating a role-play as an assessment method meets this expectation since it is aligned with LO 4, which aims to develop the students’ ability to “use” the complexities and ambiguities of humor and satire as a means of communication, meaning that it targets the third level of Bloom’s hierarchy of cognition (known as LOT – lower order thinking), where learners demonstrate their ability by applying their acting skills in a new situation. However, Schomberg (1986) believes that role play may even be applicable in the “attainment of higher levels of cognitive development” because it involves spontaneous performance in a given setting (pp32-36). Kozma et al. (1978) consider this strategy of assessing students’ learning particularly advantageous because it involves not only the cognitive and affective, but also the psychomotor level of learning (cited in Schomberg, 1986, p32). Role-play simulation encourages thinking and creativity, and for evaluating, reinforcing and consolidating course content this assessment method can certainly benefit the learning process. Besides, video-recording the whole process, peer assessment, self-assessment and teacher feedback on the quality of performance and application of knowledge make the task justifiably effective. For example, Lockhart (2005) indicates that role play combined with peer assessment is bound to produce an effective method of assessment.

Admittedly, proficiency in role-play simulation is not easy to assess when students are given little time for preparation, especially as their performance will be video-taped, which may create additional stress and discomfort. Therefore, at least two weeks are given to collect ample background information and data about a particular role and to make the setting arrangements.

Role-play simulation presented in pairs allows a student to step outside the accustomed role in exchange for the role and patterns of a different personality, showing how humour, with all its complexities and ambiguities, can be communicated to an audience. In this strategy, the emphasis of learning lies on the process. Students are free to choose any role (political leaders, religious authorities, cultural icons, enemies, or themselves) so that through humorous and satirical discourse they can present a 7-10 minute role-play simulation in the form of stand-up comedy, a satirical news broadcast, a parody of a scene from a film or humorous novel, or a reaction to an article from an ironic magazine.

Students should follow the instructions given below to prepare for a recorded role-play simulation:

  1. Pair up with a peer in the beginning of Teaching Week 7 (students should be allowed to do so on their own).
  2. Carefully study the rubric for the task. Alternatively, you can turn it into a checklist.
  3. Think of the 2-role scenario you will use by identifying the situation and distributing the roles.
  4. Use your imagination trying to embody the characters of the people you will be representing and entering the situation.
  5. Work on the text, but do not try to cram it because some improvisation is welcome.
  6. Act out your scenario in front of the camera on a specified day during Teaching Week 9 in the presence of other group members and two teachers (either teaching the module or external) assessing your performance.
  7. Each student will receive oral feedback from the other students and from the teachers observing the process (the latter also provide written feedback together with a mark based on the rubric; the paper is to be kept for further submission).
  8. After you have heard the comments, spend 1 minute each to briefly share what you have learnt from this experience, what your strengths were and what challenged you.
  9. Be sure to save a copy of the video footage for the further written process analysis of the role-play simulation.
  10. Prepare to write a narrative essay within the next three weeks.

 

Feedback

The marking rubric (See Appendix 2) measures five task-relevant skills and abilities (fluency, relatedness, engagement, feedback as audience, and evaluation of one’s own learning), plus an “other comments” box. Each should be assessed on a range of parameters descending from “excellent” (80+) to “very poor” (0-29%). Each student should receive two rubric sheets from the assessors. The markers’ role is therefore immense: they ought to ensure students receive an accurately and neatly completed marking rubric, provide additional comments and state how well students acted as feedback-givers. Before the assessment takes place, the teacher can assign pairs to provide feedback to one another or let any pair from the cohort offer their comments. The latter might be more difficult to manage in bigger classes.

As the task implies, each student should provide feedback at least once during the whole role-play simulation session, so that teachers can evaluate their ability to recall information, their degree of understanding and their ability to articulate their thinking (all the levels that LOTs cover). The two markers should calibrate their comments and decide on the mark after all students leave the classroom. The teacher delivering the module should return the papers during the next class, provide additional feedback and give clarifications should any queries arise from the students’ side. This approach will increase transparency in assigning the mark.

 

Summative Assessment #4

Assessment strategy, method and procedures

The final summative assessment task is an individual assignment which requires students to reflect on their recent experience of performing a pair-mode role-play simulation. Shannon (n.d.) points out that the analysis that comes after the role-play is even more important than the original task itself (cited in Schomberg, 1986, p34). Aubusson and Fogwill (2006) call role-play simulation a “performance-based activity” which, in the opinion of Harrison-Pepper (1999), is a type of kinaesthetic learning that stimulates students’ ability to reflect and facilitates a more active part in the process (cited in Price, 2010, p83). LO 3 is closely aligned with the task, since students’ ability to analyse their role-play simulation experience by drawing connections among various ideas is at the heart of this assessment strategy. The task will stimulate their original and creative thinking. Working individually, students will progress through the assignment in a different mode of instruction, following the cooperative and pair-pattern learning of the previous tasks.

The main advantage of this method of assessment compared to writing regular discursive or argumentative essays is that it is largely pragmatic. The content of the narrative will be constructed from data derived from the preliminary action-based assignment and from the integration of other essential components, such as the video-recorded performance and feedback from teachers. With this, the students can not only analyze the main character, but also juxtapose self-evaluation with the critical remarks of others (teachers and peers). Such direct engagement with an authentic task will encourage students’ motivation, allowing them to turn their experiences, through complex cognitive processes, into a product.

Since the 7-10 minute role-play simulation was video-recorded, the students can carry out a more meticulous analysis. On top of that, the students received written feedback from the teacher-observers and oral feedback from their peers. Both should be considered when analysing the process of the role-play simulation. Students should follow a number of guidelines to write a 700+ word narrative essay. Below is the background for the task and the instructions they should base their writing on:

Westerners see humour as a natural way to gain amusement. Easterners’ attitudes, however, are not as positive. In China, humour used to be devalued, and even today being humorous is not the most desirable personality trait. Lebanese and Belgians, by contrast, link humour with harmony and group cohesion (Jiang, Li, and Hou, 2019). Attitudes to humour differ throughout the world and have a great impact on other areas of human activity and even on personalities. Analyze the role-play simulation you presented from the perspective of the cultural belonging of the main character(s) to infer how their norms, values and attitudes differ from your own. Reflect on the strengths and weaknesses you experienced as you simulated your role and connect your learning with the task-related outcome.

Please, follow the guidelines below:

  1. Read the assessment criteria first.
  2. Re-watch the video of your performance for deeper analysis and study the feedback returned by the teachers.
  3. Write your narrative essay in the first person and in eye-catching language, and adhere to the preferable 5-paragraph structure.
  4. Write about the audience, ideas for selecting role-play character(s) and setting.
  5. Share what you have learnt from the role-play simulation, what you could have done differently to make it better and how you will apply the gained knowledge in the future.
  6. Show how you addressed the feedback given to you and what impression it had on you.
  7. Do not be too formal; be creative: use metaphors, analogies and external literature where necessary.
  8.  Create a Reference List and an Appendix to which the teachers’ feedback should be added.
  9. Proofread the essay and submit it by the end of Teaching Week 12 through Turnitin.

 

Feedback

For the last three weeks of the course, as the learners progress through writing their narrative, they should be encouraged to send their drafts for the teacher’s formative feedback. This should not be obligatory, but it could give students, especially those doubting the quality of their work, an opportunity to see the weak points in their approach to the task. Moreover, as part of classroom activity, the students should be motivated to peer-assess each other’s drafts. A checklist based on the assessment criteria could be beneficial for this purpose. This activity may help to reveal commonly misinterpreted areas and facilitate a discussion for clarification purposes.

When grading the students’ essays, an analytic rather than a holistic approach is the more pertinent, since the task combines various components. The assessment criteria cover the four main areas targeted by the assignment and will be converted into an online rubric (See Appendix 3). As with the previous written assignment, the feedback will be available to students on “Turnitin Reports and Student Uploads”, while through “Do Marking” each student will be given summary feedback composed of selected elements.

 

Principles of Assessment

Validity and Reliability

The proposed assessment tasks fall under all categories of the utility formula. Primarily, their validity stems from their alignment with the LOs: each task measures what it is intended to measure. The first assessment method requires students to adhere to structured or semi-structured interview types, both of which enhance validity: such interviews render high validity because they allow researchers to collect detailed primary data about an interviewee’s perceptions, roles and actions (Ahlin, 2019, p10). Besides, Boldeston (2012) believes that the “face validity of the questions” posed during the interview tends to improve when expert opinions are collected. She also maintains that thoughtful and spontaneous questions that arise during interviews add validity to the data because they involve both the learner’s affective and cognitive domains. Also, Seidman (2006) asserts that the pilot small-group interview testing prescribed in the instructions should enhance the validity of the task because it allows any poorly or awkwardly formulated questions to be modified (cited in Boldeston, 2012, p70). The task also involves a formative element, because the students are required to submit their interview questions for the teacher’s feedback and feedforward one week prior to the actual interview. Divjak, Kadoic and Zugec (2021) emphasize that constructing assessment tasks of a formative nature to endorse summative ones is an important prerequisite of valid assessment (p392). Finally, the follow-up work the students will engage in, together with the individual report they are expected to produce, has a considerable impact on their learning. Van der Vleuten (1996, p58) points out that scrutinizing materials by reviewing them, even in the simplest manner, after they have been implemented strongly enhances task validity and renders beneficial effects on students’ learning.

For the assessment to be deemed consistent and reliable, I attempted to ensure that the assignment-related processes involve foolproof setting and marking. Indeed, when considering whether an assessment task precludes any element that could undermine its reliability, it is crucial to contemplate how consistent the results obtained within a particular term will be across a different range of students over time. Bandiera and Regehr (2004) place great hopes in interviews as a method of data collection and highlight that, when properly designed and strictly adherent to the protocol, they become highly reliable instruments (p31). The assessment module proposed in this work takes this as one of its primary goals. Generally, the marks students obtain for written assignments, even those based on a marking rubric, pose a threat to validity and reliability because of the natural subjectivity of the teachers who assess students’ learning (Rethinasamy, 2021, p402). However, the proposed written assignments enhance validity and reliability by standardizing students’ scoring through several undertakings. The rubrics aim to assess students’ work through writing, not the writing itself. To this end, they should be scrupulously calibrated towards a generic one among the teaching personnel following piloting of the assessment, collecting students’ views and monitoring the mean average of the marks obtained. Another obvious advantage is how both pieces are to be assessed. Rethinasamy (2021) also mentions rater training as a way to increase the efficiency of marking (p402). Indeed, staff who assess students must be competent to undertake their duties and responsibilities; this competence is ensured through predetermined development policies and strategies underpinned by the principles of assessment.
Next, as mentioned earlier, no holistic essay grading approach should be undertaken; therefore, an analytic grading scheme with whose help various components can be evaluated has been developed. Otherwise, when it comes to individual teacher assessment, reliability and validity could seriously suffer (Breland, 1983, cited in Rezaei and Lovorn, 2010, pp19-20). The validity and reliability of the role-play simulation are primarily enhanced by involving an external marker and peer assessors, and by the use of video-recording, which boosts accountability in grading. Besides, Pham and Nguyen (2019) state that, in recording activities as well as in feedback from peers and teachers, digital video recordings help students notice the hesitation markers in their speech and their most common pause fillers (p188). To establish a higher degree of validity, all assessments should prioritize achievement in content coverage. As Davis (1993) warns, validity is hard to determine, but focusing on content is the most practical approach to assessment.

 

Transparency

Regarding the following factor, all assessment tasks are distributed and explicitly explained in the very first seminar, revisited later on with timely clarification upon request, and accompanied by open access to a detailed task description on the Intranet, which makes the assessment fully transparent. The provided assessment criteria should make students aware of how they are going to be assessed and what the teachers expect of them. Other details, such as the purpose of signing a Group Contract Proforma, filling in a Peer Evaluation Form, deadlines, procedures for submitting drafted interview questions, grading and how the final grade is to be calculated, jointly contribute to the assessment task being accessible and transparent. Sharing assessment criteria with HE learners has not always been common practice, and where it was done, it was only for the purpose of accountability, not as a way to communicate expectations to the learners. Few experts doubt the advantages of sharing assessment criteria. For example, Balloo, et al. (n.d.) argue that being transparent with students does not foster “criteria compliance” but instead promotes “self-regulatory capacity” (cited in Torrance, 2007, p287). However, not every student should be expected to have good self-regulation or to develop it spontaneously. In this regard, to further promote transparency and student autonomy, it is vital that the teacher support students in planning, monitoring and evaluating their academic performance on the basis of well-interpreted assessment criteria. I tried to follow the suggestions of Jonsson and Prins (2019) on how to make student learning a more equitable and productive experience by turning the assessment criteria into “indicators of quality” and considering them “in their context of use”.
Feedback and any given samples should be used to “anchor” the criteria to practical experience. With this particular approach, the teacher feeds up, viz. provides his/her students with information on how their progress contributes to the module outcomes.

 

Manageability

There is virtually no accurate data on how much time a student should spend on particular tasks in a given period, owing to the many factors that influence such a calculation, but it is possible to make estimations. Barre (2016) found that students spend an estimated average of 12-15 hours weekly on academic work outside the classroom. Considering that the estimated student-managed learning hours for the “What’s so funny?” module total 152, the weekly average over a 12-week course comes to less than 13 hours. So, since there are four summative assessment tasks distributed at equal intervals of three weeks each, there will be 38 hours of out-of-class time available for preparation between assessments. Torrance et al. (2000), in their study of almost 500 students, revealed that it takes up to 15 hours to produce a 1,500-word academically written piece with several drafts and at least one external source. For example, the total word count of summative assessments #2 and #4 comes to about 2,000 words. Considering that the former is done in groups, the process can be completed even faster. Besides, narratives, from the text-genre perspective, require less critical engagement content-wise and are not as planning-intensive as research- or argument-based texts. The same goes for the role-play simulation, which tends to be non-competitive and less structured (Shannon (n.d.), cited in Schomberg, 1986, p32).
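The workload estimate above can be sketched as a quick back-of-the-envelope check; the figures are those stated in the module description, and the script itself is only an illustrative aid, not part of the assessment design:

```python
# Back-of-the-envelope check of the workload estimates for the module.

SML_HOURS = 152    # student-managed learning hours stated for the module
WEEKS = 12         # length of the teaching semester
ASSESSMENTS = 4    # summative tasks, one every three weeks

# Average out-of-class load per teaching week.
weekly_hours = SML_HOURS / WEEKS

# Out-of-class time available in each three-week assessment interval.
hours_per_interval = SML_HOURS / ASSESSMENTS

print(f"Average weekly load: {weekly_hours:.1f} hours")        # just under 13
print(f"Per assessment interval: {hours_per_interval:.0f} hours")  # 38
```

Dividing the 152 student-managed hours evenly in this way confirms both figures used in the argument: an average weekly load below Barre’s 12-15 hour range, and 38 hours of preparation time per assessment window.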

Next, I tried to make the assessment as simple as possible, to align it with the LOs and to give students a clear understanding of what they are expected to fulfill in order to acquire the intended knowledge. The given instructions are therefore expected to be interpreted correctly. Moreover, the students will receive formative feedback at each stage, which makes all the assessment methods feasible. The teachers are encouraged to facilitate students at all times and to be flexible, especially when students work in groups and in pairs.

 

Inclusivity and motivation

Inclusivity is a crucial principle in any course assessment design; it allows a teacher to embrace diverse student needs and types so that everyone has an equal and impartial opportunity to meet and understand the LOs. I have been determined to substantiate the course with inclusive practices by diversifying its content through different summative and formative assessment formats and by integrating versatile strategies that make use of authentic tasks, settings and modes of instruction, as well as various online platforms for evaluation, presentation and task submission. Such a rich gamut of approaches should keep students interested and motivated throughout the whole course, never experiencing any sort of discrimination and always having a sense of choice, support and facilitation.

 

Deep approach to learning

The current assessment design of the course intentionally excludes the attainment of passive learning goals such as memorization and regurgitation; instead, the greater focus is on a deep approach to learning, because the methods integrated in it develop a number of smaller learning outcomes directly associated with problem-based learning (PBL). Nilson (2010, p190) provides a long list of skills that PBL fosters in developing and meeting LOs, a good many of which are intertwined among the assessment tasks and are bound to improve through the current design of the course: working in teams/pairs/independently, developing problem-solving and research skills, oral and written communication, analytical and critical thinking, holding leadership roles, evaluating processes in the group, reasoning about various concepts and putting the module matter into authentic practice. In the opinion of Ramsden (2004), varied assessment content developing this entire gamut of skills not only encourages a deep approach to learning, but also leads to higher grades, greater quality enhancement of the LOs and more enjoyment of the learning process (cited in Campbell and Norton, 2007, p56).

 

Student-centeredness

In my belief, a student-centered class is one wherein the student is in charge of his/her own learning. Oliver-Hoyo (2001) links it with an inquiry-based environment in which students are actively engaged in learning (p9). Indeed, doing is the most tangible benefit one can gain in the skills and knowledge acquisition process. Student-centeredness in my assessment design is observable through various features. Firstly, it is built on moderate scaffolding on the teacher’s side and on collaboration among students. The latter is present throughout the whole semester and creates a spirit of common accomplishment. The tasks imply that everyone should be involved in active collaborative work organized in small-group settings and pair work. This allows for peer feedback, mutual support and social-emotional exchange. Secondly, each student can adapt the assignment to his/her own interests without deviating from the general requirements. There are no rigorous demands that would constrain students’ creativity and desire to express themselves in the manner they deem appropriate. For instance, they can select a representative of any occupation for their interview or role-play any character(s) they consider interesting; in the meantime, so as not to turn writing into a race for the desired word count or drive students to despair at having to alter or truncate bulks of text that do not fit the framework, I tried not to limit students in terms of word limit and added a plus sign (+) next to the expected number. Finally, technology integration is another hallmark of student-centeredness: students are expected to use it most of the time and will be exposed to various Internet- and Intranet-based tools and to smart and recording devices for data generation, analysis and submission.

 

Conclusion

In conclusion, when designing the assessment for the “What’s so funny?” module, I was determined to be mindful of the prospective audience and tried to implement “backward design”, the main rationale of which is to build the course around the skills and knowledge students are expected to attain from their learning experience, not around traditionally favoured, predetermined activities and tasks. Widely used in western communities, this approach to integrating discipline-based learning into an integrated curriculum was suggested comparatively recently by Grant Wiggins and Jay McTighe and has spread effectively among educationalists (Korotchenko, et al., 2015, p214). Following this design, assessment and evaluation should not be viewed as interchangeable concepts, because the former is predisposed to enhance students’ learning through ongoing practices, while the latter, being more summative in essence, is focused rather on the final product, allowing the level of achievement and understanding to be measured. Thus, irrespective of whether an assessment is carried out at the end of the semester or in its progress, its functions, in most cases, remain the same: to provide feedback, give marks, improve students’ learning strategies and maintain accountability to external stakeholders.

Even though we often hear about interviews, essays and role-plays, in my assessment design each of the tasks is modified to some extent and interconnected with the others. With such an inclusive and student-centered approach embedded within the curriculum, the students are predicted to obtain fair opportunities to learn and to experience a greater repertoire of assessment tasks in a dynamic environment wherein the teacher acts not as a content provider but as a facilitator. I would like to finish my CW with a quote that has deeply etched itself in my mind: “If you want to change student learning then change the methods of assessment” (Brown, et al., 1997).

 

References

Ahlin, E.  (2019). Semi-Structured Interviews With Expert Practitioners: Their Validity and Significant Contribution to Translational Research. Pennsylvania State University. Available from DOI: 10.4135/9781526466037 [Accessed 04 January 2021].

Afifah, I. and Retnawati, H. (2019). Is it difficult to teach higher order thinking skills? Journal of Physics: Conference Series, 1320. Available from doi:10.1088/1742-6596/1320/1/012098 [Accessed 17 December 2021].

Bandiera, G. and Regehr, G. (2004). Reliability of a Structured Interview Scoring Instrument for a Canadian Postgraduate Emergency Medicine Training Program, ACAD EMERG MED, 11(1), 27-32. Available from https://onlinelibrary.wiley.com/doi/pdf/10.1197/j.aem.2003.06.011 [Accessed 04 January 2022].

Barre, E. (2016). How Much Should We Assign? Estimating Out of Class Workload. Rice Center for Teaching Excellence. Available from https://cte.rice.edu/blogarchive/2016/07/11/workload [Accessed 04 January 2022].

Boldeston, A. (2012). Conducting a Research Interview. Journal of Medical Imaging and Radiation Sciences, 43(1), 66–76. Available from DOI: 10.1016/j.jmir.2011.12.002 [Accessed 04 January 2022].

Brown, G., et al. (1997). Assessing Student Learning in Higher Education. London: Routledge.

Campbell, A. and Norton, L. (2007). Learning, Teaching and Assessing in Higher Education: Developing Reflective Practice (Teaching in Higher Education Series). Exeter: Learning Matters Ltd.

Chapman, C. (1990). Practical Assessment, Research, and Evaluation Practical Assessment. PARE, 2(7).  Available from https://doi.org/10.7275/1yf7-vd04 [Accessed 06 January 2022].

Davis, B. G. (1993). Tools for teaching. San Francisco, CA: Jossey Bass.

Davies, M. (2009). Groupwork as a form of assessment: Common problems and recommended solutions. ResearchGate, 58(4), 563-584. Available from doi: 10.1007/s10734-009-9216-y [Accessed 14 December 2021].

Divjak, B., Kadoic, N. and Zugec, B. (2021). The Use of Decision-Making Methods to Ensure Assessment Validity. Conference: 2021 IEEE Technology & Engineering Management Conference.  Europe. June 2021. ResearchGate, 391-396.

Dori, Y., Mevarech, Z. and Baker, D. (2018). Cognition, Metacognition, and Culture in STEM Education. USA Weston, MA: Springer.

Gibbons, P. (2015). Scaffolding language, scaffolding learning: Teaching English Language Learners in the Mainstream Classroom. 2nd ed. Portsmouth, NH: HEINEMANN.

Jiang, T., Li, H. and Hou, Y. (2019). Cultural Differences in Humor Perception, Usage, and Implications. Frontiers in Psychology, 10(123). Available from doi:10.3389/fpsyg.2019.00123 [Accessed 08 January 2022].

Jonsson, A. and Prins, F. (2019) Editorial: Transparency in Assessment—Exploring the Influence of Explicit Assessment Criteria. Frontiers in Education, 3 (119). Available from doi: 10.3389/feduc.2018.00119 [Accessed 04 January 2022].

Korotchenko, T.V. et al. (2015). Backward Design Method in Foreign Language Curriculum Development. Procedia - Social and Behavioral Sciences, 213-215. Available from DOI: 10.1016/j.sbspro.2015.11.624 [Accessed 30 December 2021].

Lambert, et al. (2011). Programmatic assessment: From assessment of learning to assessment for learning. Medical Teacher, 33(6), 478-85. Available from DOI: 10.3109/0142159X.2011.565828 [Accessed 15 December 2021].

Locke, E. A. and Latham, G.P. (2002). Building a practically useful theory of goal setting and task motivation - A 35-year Odyssey. American Psychologist, 57(9), 705-17. Available from doi: 10.1037//0003-066X.57.9.705 [Accessed 03 January 2022].

Lockhart, M. (2005). Peer assessment and role play: A winning alliance. Academic Exchange Quarterly, 9, 40-44.

Mack, N. et al. (2011). Qualitative Research Methods: A Data Collector’s Field Guide. USA: Family Health International.

Mathers, N., Fox, N. J. and Hunn, A. (2000). Using Interviews in a Research Project. In: Wilson, A., Williams, M. and Hancock, B. (eds.) Research Approaches in Primary Care. Sheffield: Radcliffe Medical Press/Trent Focus, 1-57.

Mirbahai, L. and Adie, J. (2020). Applying the utility index to review single best answer questions in medical education assessment. Archives of Epidemiology and Public Health, 1, 1-5. Available from https://dx.doi.org/10.15761/AEPH.1000113 [Accessed 04 January 2021].

Mulder, M. P. and Nijholt, A. (2002). Humour Research: State of the Art. ResearchGate. Available from https://www.researchgate.net/publication/2474262_Humour_Research_State_of_the_Art [Accessed 27 December 2021].

Nilson, L. B. (2010). Teaching at its best: A research-based resource for college instructors (2nd ed.).  San Francisco, CA: Jossey-Bass. 

Oliver-Hoyo, M. (2001). Lessons learned from the implementation and assessment of student-centered methodologies. Journal of Technology and Science Education, 1 (1), 1-11. Available from DOI:10.3926/jotse.2011.6  [Accessed 10 January 2022].

Price, R. M. (2010). Performing Evolution: Role-Play Simulations. Price Evo Edu Outreach, 4, 83–94. Available from DOI 10.1007/s12052-010-0300-7 [Accessed 05 January 2022].

Rethinasamy, S. (2021). The Effects of Different Rater Training Procedures on ESL Essay Raters’ Rating Accuracy. Pertanika Journals, 29(3), 401-419. Available from doi.org/10.47836/pjssh.29.S3.21 [Accessed 01 January 2022].

Rezaei, A. R. and Lovorn, M. (2010). Reliability and validity of rubrics for assessment through writing. Assessing Writing, 15(1), 18–39. Available from doi:10.1016/j.asw.2010.01.003 [Accessed 02 January 2022].

Schomberg, S. F. (1986). Strategies for Active Teaching and Learning in University Classrooms. A Handbook of Teaching Strategies. ERIC. Available from https://www.academia.edu/58207378/Strategies_for_Active_Teaching_and_Learning_in_University_Classrooms_A_Handbook_of_Teaching_Strategies [Accessed 08 January 2022].

Torrance, H. (2007). Assessment as learning? How the use of explicit learning objectives, assessment criteria and feedback in post-secondary education and training can come to dominate learning. Taylor & Francis Online, 14, 281–294. Available from doi: 10.1080/09695940701591867 [Accessed 04 January 2022].

Van der Vleuten, C. (1996). The assessment of professional competence: developments, research and practical implications. Advances in Health Sciences Education 1 (1), 41-67. Available from https://www.ceesvandervleuten.com/publications/assessment-overviews [Accessed 04 January 2022].

Yeung, B. (2009). How to Help Students Develop Interviewing Skills. Edutopia. Available from https://www.edutopia.org/service-learning-center-urban-pedagogy-interviewing [Accessed 23 December 2021].

