Making Sense of Interview Data: Why I Still Like Talking to People

(From the talk I gave at CPR Ignite: Student/Staff Research Seminar Series on 27 April 2026. The content is mine, polished into this post with Perplexity AI.)

When students come to us for advice on research projects, they often say they want to do surveys or interviews, but they are not always sure which method really fits their purpose. That uncertainty is partly why I wanted this talk to focus less on findings alone and more on process: why I use interviews, what kinds of problems interview data create, and how I work through those problems in practice.

I should say at the outset that this is not a case against experiments, statistics, or quantitative work. I often joke with my students that I teach English because I am so bad at maths, but the real point is methodological rather than personal. The question is not which method is “better” in the abstract, but which method is better suited to the kind of question you want to ask.

Why interviews?

For me, interviews are useful when the research question is about perspectives, experiences, or meaning-making. If you want to know how people understand what they do, how they explain their choices, or why the same situation can look very different to different participants, interviews can give you access to that complexity in a way that numbers alone often cannot.

That does not mean interviews are somehow more authentic or more truthful than other methods. It means they are well-suited to research that starts from the view that reality is not always singular, neat, or easily measurable. In qualitative work, we are often less interested in reducing experience to a scale and more interested in how people make sense of what is happening around them.

The trouble with interview data

Of course, the moment you decide to do interviews, you also decide to give yourself a lot of work. Even a small interview project can generate a great deal of text, and once you have several long transcripts in front of you, the question becomes less “Do I have data?” and more “How on earth am I going to make sense of all this?”

Transcription is easier now than it used to be, and anyone who had to do it manually years ago will know why that feels like progress. But the bigger challenge is not simply turning speech into text. The real challenge is that people say a lot, they move in unexpected directions, and they do not package their ideas neatly for our convenience. Looking across interviews for patterns can feel a bit like standing in front of an ocean and hoping a lighthouse appears.

Is there a lighthouse helping us navigate the ocean of data? (Photo by Stan Versluis on Pexels.com)

That is why interview analysis needs guidance. Research questions matter. Theory matters. Coding matters. Otherwise, it is very easy to drown in your own data.

Ethics before analysis

Before any analysis begins, there is the much more important question of ethics. Interviews involve people, not just data, so privacy, confidentiality, and well-being have to be taken seriously. We need informed consent, institutional approval where appropriate, and a clear sense of what kinds of questions are reasonable to ask.

This is especially important when the topic is sensitive or when there is an uneven power relationship between researcher and participant. In my own work with students (Cheung & Shi, 2024; Shi & Cheung, 2024), for example, timing matters. If we interview students only after the assessment has finished and grades have been released, there is less pressure for them to tell us what they think we want to hear. That does not remove social desirability altogether, but it helps.

Good questions matter

One issue I often see in student projects is that the interview questions do not really match the research questions. The questions may be interesting, but if they do not help answer the central problem of the study, the researcher can end up with pages of material that are difficult to use.

So before writing an interview protocol, I think it is worth asking some very plain questions:

  • What exactly am I trying to find out?
  • Why is an interview the right way to find it out?
  • What kinds of responses do I actually need from participants?
  • To what extent are my interview questions in line with my research questions?

What my studies taught me

The three studies I referred to in the talk all adopted interviews as the main data collection method. They involved different topics, but they shared a concern with how people make meaning through language and experience. One looked at research-intensive English lecturers working across teaching, research, administration, and service. Another, with Wen, focused on service-learning students co-creating stories with refugee children, and on the role of English and generative AI in that process. A third examined how a service-learning subject could foster multicultural and multiliterate competence through experiential learning.

What these studies reminded me of, especially the service-learning ones, is that language learning is rarely just about grades, even though grades obviously matter to students. In these contexts, English was also a means of story-making, reflection, creativity, and intercultural communication.

That is one reason interview data can be so valuable: they help us see not just what students did, but what the experience meant to them.

Coding without losing the people

Let me go back to the process of working with interview data. After transcription comes coding, and this is where many researchers become either very excited or very tired. Usually, we begin with open coding, working through the transcript and identifying themes, patterns, or recurring concerns that connect back to the research questions. In one of my teacher studies, this meant working with a large set of codes related to identity, research activity, coping strategies, challenges, and productivity.

Coding is useful because it helps bring some order to the material, but it can also become too mechanical if we are not careful. Interview data are not just containers of “content” waiting to be extracted. They are traces of how people speak, position themselves, justify decisions, and make sense of their own experiences. So while coding is necessary, interpretation still matters.
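For readers who like to see the bookkeeping side of this, here is a minimal, purely illustrative sketch of how coded excerpts might be tallied across transcripts. The transcript IDs, code labels, and excerpts are invented for the example, not drawn from any of the studies mentioned above, and real projects would more likely use dedicated software than a script like this.

```python
from collections import Counter, defaultdict

# Hypothetical coded excerpts: (transcript_id, code, excerpt)
coded_excerpts = [
    ("T1", "identity", "I see myself as a teacher first."),
    ("T1", "challenges", "There is never enough time for research."),
    ("T2", "identity", "Research is part of who I am."),
    ("T2", "coping", "I block out mornings for writing."),
    ("T3", "challenges", "Admin work eats into everything."),
]

# How often each code appears overall
code_counts = Counter(code for _, code, _ in coded_excerpts)

# Which transcripts each code appears in -- useful for spotting
# concerns that recur across participants, not just within one
code_sources = defaultdict(set)
for tid, code, _ in coded_excerpts:
    code_sources[code].add(tid)

for code, n in code_counts.most_common():
    print(f"{code}: {n} excerpt(s) in {len(code_sources[code])} transcript(s)")
```

A tally like this can show, for instance, that a code appearing five times in one transcript is a different kind of finding from one appearing once in each of five transcripts, which is exactly the sort of judgment that coding alone cannot make for you.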

Trustworthiness, not tidy certainty

In qualitative research, we often talk about trustworthiness rather than certainty. We are not usually trying to prove that there is one final, objective reading of the data. What we need to do instead is show clearly how we moved from transcripts to interpretation.

That means being transparent about coding procedures, theoretical framing, researcher position, and possible bias. It also means checking our interpretations against other data sources where possible, and sometimes going back to participants themselves. Put simply, we need to leave a trail that readers can follow. Or, less politely, we need to show that we are not making things up.

Why this still matters now

All this leads to a larger question that feels harder to ignore now than it did a few years ago. If AI can transcribe speech, summarise interviews, generate codes, and even draft a report, what is left for the researcher to do?

I do not think the answer is “everything” or “nothing”. But I do think research still depends on judgment, interpretation, ethics, and human seriousness about what matters. Tools can help us process language, but they do not remove the need to think carefully about whose voice is being represented, what kind of claim is being made, or why a particular interpretation is worth making in the first place.

That is also why I still like interviews. As a qualitative researcher, I like talking — as you can probably tell. More than that, I still think some of the best ideas come from chats over meals and drinks. That line is partly a joke, but only partly. Good interviews, like good conversations more generally, can bring out the texture of people’s thinking in ways that more controlled forms of data collection often cannot.

A question to end with

So perhaps the question is not whether interviews are messy. Of course they are. The more useful question is whether that messiness is actually part of what makes them worth doing in the first place.
