
Humanising Language Teaching
Year 2; Issue 5; September 2000

Major Article

"Data and gathering it, or why an interview or survey isn't data"

Donald Freeman


TELLING TEACHING: COLUMN #4
Donald Freeman
Center for Teacher Education, Training, and Research
School for International Training
Brattleboro, VT 05301 USA
email: teacherknowledge@sit.edu
web address: http://www.sit.edu/

In my last column, I wrote about making learning visible as a first-- and critical-- step in telling classroom teaching. Simply put, unless you can 'see' it, how can you begin to gather the information? And if you can't gather it, how can you begin to assemble explanations for what is-- or may be-- going on for students in your lessons? Data is information. In the case of telling teaching, it is information about learners and their learning processes. In the last column, I drew a distinction between performance data, which is data about classroom outcomes and results, and learning data, which is data about the people and their processes. Classrooms, schools, and school systems operate by and large on performance data: the written assignment or the answered question in class; the results of a unit or school-leaving exam; an IELTS or TOEFL test score; and so on. They do so because performance-- in our case, in English-- is taken to measure learning. If you-- the student-- haven't learned it, you won't be able to 'perform' adequately, the reasoning goes.

But basically performance data is not much use in understanding learning as it is happening. Let me give an analogy and an example. The analogy is this: Let's say you are in a restaurant, enjoying an excellent meal, and you want to know how the sauce is made. Unless you are a very experienced, even expert, cook, it's not likely that you can figure it out simply by tasting the sauce itself. You'd probably need to talk to the chef, or go back to the kitchen and watch it being made. The point here is that the final result doesn't usually show the process. The example: Sometimes a child, aged three or so, will proudly demonstrate that she or he can count to 10 or 20, especially when cued to do so by a parent or adult. The audience usually reacts with all sorts of banalities: 'Listen to you count!', 'I can't believe you can count to 10 at your age!', 'Look at how smart you are!'

Then, like any performer who senses praise, the child may continue counting, and things will fall apart. "Nine, ten, eleven, twelve, thirteen, fourteen, forty-teen, sixteen, nineteen, twenty-teen, thirty-teen..." What had seemed, from the initial performance, to be mastery of the numbers up to 12 now, from this new data, seems perhaps to be just a rhythmic chant. Perhaps the child has memorized sounds in order up to a certain point, say 13, and then the rhythm comes apart. There are two points here. First, performances can be misleading as evidence of learning because they are usually bounded, in this example from 1 to 12. If we had stopped at 12, we might have thought the child 'knew how to count.' Similarly, as teachers, we've seen students who 'do' (or perform) quite well on a test or formal assessment, but can't seem to use that same language in other ways or settings.

The second point is that data about learning shows the learner in relation to something that he or she is doing. Look at how the child uses 'teen' as a suffix: "...forty-teen, sixteen, nineteen, twenty-teen...". What does this show? Perhaps that the child understands that after 1 to 12 comes a set of sounds with 'teen', and that the 'teen' gets put at the end of each word. So the child has grasped a key feature of the morphology of numbers in English, although he or she is not yet counting accurately. The child also has control of the suprasegmentals in counting, the rhythmic pattern of listing numbers. It's almost as if this chant provides a scaffold or frame in which the pieces of morphology fit. But, like the pieces of a jigsaw puzzle, they are not yet accurately placed.

What is in relation to what ought to be

Learning data casts a shadow. It shows you, as teacher or observer, what is in relation to what is not. This is precisely where the dilemma can lie. As the teacher, and one who is knowledgeable and proficient in the content, you can be tempted to look at what is missing in what the learner is doing. You can see the mistakes, the problems, because that is what needs to be corrected, put right; that is where the work is. In other words, you focus on what is not there. That focus shows you partial or complete mastery, but it doesn't necessarily show you learning. It is only by looking at the relationship-- at what is there in relation to what you know ought to be there (from your knowledge of the content)-- that you may 'see' learning underway. This space is where you can intervene as a teacher, where you can 'teach.'

Working in this space to gather data about learning involves three things. First, you have to make or maintain the space itself. This involves organizing activities and lessons so that you can 'see' what learners are (or are not) doing, so that their learning becomes more visible. (I have written about this idea in my two previous columns.) Second, you need to capture the information; you need to be able to hold on to it. In the case of the child counting, you might remember what was said, or you might write it down, or, if you're a proud parent, you might be videotaping the event. And third, you need to capture the data in an organized manner; in other words, you need to be disciplined and systematic about it. These latter two aspects are generally where the process of teacher-research and telling teaching falls apart, for reasons of time, technology, and stick-to-it-tive-ness.

Let's take capturing information, or data collection. When I'm working with a group who are doing teacher-research, after we have articulated areas of inquiry and developed some questions, I'll ask them what types of data might respond to their questions. People will reply, 'Interviews', 'Videotapes', 'Surveys... questionnaires'... And I'll say, "But those aren't data. What data do you need?" People are usually baffled. Then the critical distinction will emerge: Student perceptions and opinions are data; the interviews, surveys, or questionnaires are how you might gather that data. Students' actions and talk in the classroom are data; video or audio tapes or field observation notes are means to collect that data. In teacher-research, data can basically come from three sources: the student(s), the teacher, and the setting (which includes the curriculum, school records, classroom lay-out, and so on). If you video your class having a debate, you are collecting data about what the students are saying (in their L1 and in English) and doing, something of the classroom lay-out, and perhaps what you are saying and doing (if you are in the video). If you interview three students about their performance in the debate, you are gathering data about these students' opinions and perceptions of the activity, and perhaps about their L2 use (if you talk to them in English).

Why make this distinction between data and collection strategies? Well, the data is always there; collecting it is usually what presents the challenge. Let's say you had planned to videotape the debate, but the camera isn't working, or another teacher hasn't returned it, or whatever. At this point, you might well abandon the idea of collecting that data because your preferred collection strategy (videotaping) has fallen apart. But in distinguishing the data from the collection strategy, you have the opportunity to identify and undertake other ways of gathering the same information. You might, for example, make written observations as the debate proceeds (what anthropologists call 'field notes'). Or you might grab a tape recorder and audio-tape the debate. Or you might designate two students to keep a running written record of what their teammates say and do in the debate.

Just as the map is not the territory, the collection strategy is not the data. Seeing, knowing, and maintaining this distinction allows you to work more effectively as a teacher-researcher and someone who is telling your teaching. And, my argument goes, because separating the data from the collection strategies allows for more ways to collect the same (or at least commensurate) information, it gives you more flexibility. If you have four ways to collect this debate data-- videotaping, taking observation/field notes, audiotaping, or having students keep a running record-- you simply have more options. You are thus less likely to become stuck or stymied if one way doesn't work out. If you can't manage it one way, you can use another. This flexibility is, perhaps ironically, the key to stick-to-it-tive-ness and maintaining the discipline of your inquiry.

REFERENCES

There are a number of good books which include useful sections on ways to collect data. I'd suggest five (listed alphabetically) that are particularly framed for teachers who are interested in gathering data in their classrooms:

D. Freeman. 1998. Doing teacher-research: From inquiry to understanding. Boston: Heinle/Thomson Learning.

  • In chapter 5, I write about collecting and analyzing data. Appendix C, pp. 201-216, lists and details 12 classroom data collection techniques.

D. Hopkins. 1993. A teacher's guide to classroom research (second edition). Buckingham, UK: Open University Press.

  • Chapter 8 (pp. 115-145) is on various data collection techniques. It includes a clear discussion of the pros and cons of each technique. The book is oriented to teachers in general.

R.S. Hubbard and B.M. Power. 1993. The art of classroom inquiry. Portsmouth, N.H.: Heinemann.

  • The book includes a thorough treatment of data collection, oriented for teachers in general. The book has many excellent examples drawn from K-12 teaching in the United States, as well as a clear discussion of using the various techniques described.

J. McDonough and S. McDonough. 1997. Research methods for English language teachers. London: Arnold.

  • Chapters 7-11 (pp. 121-188) include a discussion of the reasoning behind commonly used data collection techniques. The descriptions are less procedural. The book, which is oriented to language teachers, has a particularly good discussion of quantitative techniques and data analysis.

M.Q. Patton. 1990. Qualitative evaluation and research methods (second edition). Newbury Park, CA: Sage Publications.

  • This is a classic book which provides detailed advice, gleaned from extensive experience, on data collection and analysis. Patton writes in a very readable style; even so, I would recommend it as a source book to consult rather than to read straight through.

