Feeds:
Posts
Comments

Insight #1: People, not texts

This comes from Heidi McKee’s Skype call, where she recounted her journey from Literature to RhetComp and how she came to be interested in, among other things, qualitative research methodologies. I felt a flash of recognition hearing McKee describe the moment she discovered that she was more interested in researching people than texts. (It also made me feel a little sad, as her realization was productive, whereas mine signaled more of a slow decline into confusion.)

McKee’s realization captures something I’ve been working on/through for a while. After my PhD coursework I became more and more lukewarm to literary scholarship, throwing my energy into the classroom and writing center. The energy I put into avoiding my exams could have produced two dissertations. But when I did work on them, I felt like a Roomba, moving noisily and jerkily over the same small patches over and over, often to end up bumping into a corner or table leg until finally getting stuck under a cabinet.

So in this case, my “insight” is entirely personal, which is fitting because I took this class for personal reasons (is academia still for me? if so, what part of the work can I do?): people, not texts (or perhaps, more accurately, people and their texts). I wish I could say I knew more about what I figured out, but at this point even having my suspicions confirmed feels like a win.

Insight #2: But people make everything harder (AKA ethics and representation)

I’ve blah-gged (my word for blogging without concision) here, here, here, and here about ethics and issues of representation. I appreciate how sensitive (and even overly sensitive, some might say) and careful many of our readings were to issues of representation, especially all the authors in Ethics and Representation in Qualitative Studies of Literacy as well as Boyd, McKee, and Porter. While comp scholars may not be ethnographers (shout out to Rhodes and his underused psychography here!) and thus aren’t grounded/trained in the ethics that govern research on human subjects (unless everyone comes to composition research with a solid social science background), that doesn’t mean that they shouldn’t be held to the same strict standards as ethnographers (or anthropologists, sociologists, etc.). And perhaps the rules governing comp research should be even stricter since, despite its sometimes seeming like social science, it remains part of the humanities tribe. (So I’m on Team McKee when it comes to deciding what counts as public on the internet, at least for composition researchers.)

I found the chapters in Ethics and Representation quite powerful (no coincidence, as writing center theory is also influenced by postcolonialism and feminism), especially their emphasis on sharing knowledge with subjects to support best practices (I don’t care for the term intervention), sharing power, and polyvocality. I know that a lot of anxiety and effort is devoted to plotting the future of RhetComp (as there is, of course, elsewhere, especially in the Humanities), and while I am pretty far outside of that conversation, it seems to me that continued attention to ethics and representation should guide wherever the field goes.

Insight #3: Not generalizable? Not necessarily the problem one might think.

We started the semester with Taber’s thesis, which introduced me to North and his concerns about ethnography in composition research: “Among [North’s] multilayered warnings of ethnographic research, one of the most poignant and specific concerns is about the ‘limits as knowledge’ of ethnography due to the ‘insularity of investigation: the difficulty of somehow extending the findings of an investigation in any one community to any other’” (Taber 4). While I agree that (mis)appropriating research methods from other disciplines can be problematic, I was much less concerned about the problem of “insularity,” which is a concern not just in ethnographies but also in other qualitative studies. Maybe it’s because I spent the last decade reading literature, looking for connections between a handful of texts (rather than a library full), that I don’t feel the pressure to generalize.

In fact, many of our readings have made excellent cases for case studies. For example, in Persons in Process, Herrington and Curtis acknowledge the limits of their approach while at the same time valorizing it:

We have not aimed to generalize that these students are representative types, but we do believe the backgrounds and experiences they bring represent something of the diversity amongst the students sitting in our classrooms and appropriately admitted to higher education in the United States: students of a range of national origins, ethnicities, classes, and religious convictions; females and males of various sexual orientations; students whose home languages are not all English and who possess varying ranges of facility with standard American English; and students with varied personal and family histories. As we have endeavored to show, these aspects of social and personal identity intermix in complex ways in particular students and are implicated in their learning. (355)

Other studies, especially Lee Ann Carroll’s Rehearsing New Roles and David Foster’s Writing with Authority, also defend how useful it can be to study a small group of students even though they may not represent all students. Carroll likewise justifies her method of understanding a complex system like a gen ed program by looking at specific student experiences. She quotes Crowson, who privileges gaining an “understanding of the phenomena … rather than some generalizable knowledge, explanation, prediction, and control” (qtd. in Carroll 43).

Many have raised the issue of what we then do with these individual “stories.” McKee offered a practical suggestion: researchers need to examine what’s already there, creating meta-studies that determine what researchers have found, what is missing, and where we should go from here.

Periodically, Dr. Moxley would try to gauge our attitudes toward ethnography, and I felt that at the heart of his questions was, “Does ethnography make sense as a research method in composition studies? Is it useful?” Yes and yes, I have learned.

In her dissertation, “Taken Out of Context,” Danah Boyd describes her struggle to interact with teens on the internet in a way that did not make them uncomfortable (they and their parents are well versed in stranger danger and internet predators) or invade their privacy (she notes that many teens did not realize quite how “public” their public profiles are). In the end, Boyd chose to “scrub” out all identifying information when describing teens’ Facebook and MySpace pages. When given access to private profiles, Boyd also chose not to save any of that material on her computer or quote any part of it.
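(A side note to make “scrubbing” concrete for myself: below is a minimal sketch of what that kind of anonymization pass might look like. It is purely hypothetical; the names, handle, school, and pseudonym table are invented, and this is not Boyd’s actual procedure, just a toy illustration of replacing identifying strings with neutral placeholders before notes get saved or quoted.)

```python
# Hypothetical illustration of "scrubbing" identifying details from field notes.
# NOT Boyd's actual procedure; all identifiers and placeholders below are invented.
import re

PSEUDONYMS = {
    "Jordan Smith": "Participant A",              # invented participant name
    "jsmith_2009": "participantA",                # invented username/handle
    "Riverside High": "a suburban high school",   # invented school name
}

def scrub(text: str) -> str:
    """Replace known identifying strings with neutral placeholders."""
    for identifier, placeholder in PSEUDONYMS.items():
        text = re.sub(re.escape(identifier), placeholder, text, flags=re.IGNORECASE)
    return text

note = "Jordan Smith (jsmith_2009) posted about Riverside High on MySpace."
print(scrub(note))
# -> "Participant A (participantA) posted about a suburban high school on MySpace."
```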

Initially, Boyd’s choices struck me as overly cautious (and a missed opportunity to interpret data and bolster her results). Because Boyd was studying these teens for academic, not marketing, purposes, I assumed she could have more leeway (since the research benefits public, rather than private, interests). I can see, however, that my attitude is influenced by a decade of working with texts rather than human subjects. When making photocopies of a copyrighted text for class, I never seriously worried about getting myself (or my dept) in trouble if I took too many liberties with Fair Use (though upon review I always toed the line!). Even if I did make an extra photocopy or two, I didn’t give it much thought because I don’t feel a serious ethical obligation to book publishers. However, I’m feeling pretty rotten about a survey I administered to students for a conference paper about gender and clothing size standardization. I asked nicely and I never revealed any identifying information, but I never really asked for their permission, and I even asked some invasive questions. I used conversations I overheard in dressing rooms. Because my framework relied primarily on Freud and his notion of the uncanny, I didn’t give much thought to those I spoke to (or, let’s be honest, spied on).

Unlike the clear-cut scenario I described above (I had not done my due diligence, nor had I respected and ethically represented my subjects–a 45 CFR part 46 fail), digital writing research, as described by McKee and Porter, is actually quite complex. Though as internet users we’ve all been advised to consider everything we put online public, ethical researchers distinguish between public and private online information, even when something isn’t password protected. (They’re not the only ones: as the ACLU points out, the law has not kept up with technology.)

While representing a subject is always fraught with ethical implications, McKee, in “Ethical and Legal Issues for Writing Researchers in an Age of Media Convergence,” notes that “researchers who seek to use video or other audiovisual multimedia in representing research participants should proceed with extra care” because “video representations … are more immediate, visceral representations, carrying more of an impact upon viewing and listening” (110). This warning brings to mind the video of the tutor I mentioned in my previous post (the one who took over the session); I’m still uncomfortable about having watched it, even days later. I’m sure the participant agreed to be taped, and even consented to having the video shown to others at a conference, but I wonder how prepared she would have felt had she known her performance would be met with shaking heads and negative comments (bad news, indeed!). Watching it was instructive, she likely signed an informed consent form, and her name wasn’t given, but was it ethical to show us her video? Her face was clearly visible, and if I wanted to, I could probably look through the community college’s website to see if I could find her (though if she isn’t faculty, she likely wouldn’t have a webpage).

Questions for McKee:

  • In light of media convergence (and especially as online journals become more mainstream), what changes should IRBs undergo in order for participants to be protected?
  • Should conferences require confirmation of informed consent?

Oh say can you (4)C!

There was a lot to take in at 4Cs, and while I did my usual writing center rounds, I also paid more attention to research methodologies. Before taking ENC 6270, I knew METHODOLOGY was important, but my lit background left me with little understanding of it. I noticed an inconsistent level of meta-methodology (do we have a name for the sometimes intensive reflection on methodologies that we’ve seen in some of our readings?). So, in the spirit of Addison and McGee’s discussion of others’ projects, I thought I might do a little discussing of my own.

I couldn’t make it to all the writing center panels, so my observations are not exhaustive, but they could be representative of the WC research that finds its way to 4Cs (WC-only conferences could be different). The research relied primarily on qualitative methods (quantitative work was discussed briefly in a special interest group meeting on WCs, and that discussion was primarily about providing visitor numbers to administrators with little interest in or understanding of WCs). Writing center research favored methods that included

  • observation, in the form of videotaped tutorials;
  • interviews with tutors and students (but, interestingly, mostly tutors);
  • surveys;
  • and, at least in one case, discourse analysis of tutor records of tutorials.

I did not see any presentations that were ethnographies, though one presenter called her research ethnographic because it relied on participant observation. However, while it was qualitative research, it was not ethnographic. She used participant observation as a “supplement” (her word) to confirm her findings from surveys and interviews rather than to drive her entire study. Accordingly, her presentation didn’t provide the kind of “thick description” implied in ethnography. Her presentation was interesting (she focused on student perceptions of tutors who were not native English speakers), and she was, in fact, attentive to issues of methodology. When her results indicated that L2 students were far more likely than L1 students to want an L1 tutor, she noted that her data could be skewed because L1 students may not have wanted to offend her, an L2 grad tutor. Here she is attending to issues of positionality, and I think describing her study as ethnographic may have also been a result of her thinking about her own status as an L2 tutor. She was a participant-observer–her research questions and interpretation were driven by her own experiences–but that didn’t necessarily make the study ethnographic.

There were a lot of presentations concerned with representing tutors’ experiences and tutor voice. Dawn Fels constructed her entire presentation out of 20 quotes from peer tutors, asking members of the audience to read the quotes for her. She justified this kind of un-presentation by explaining that tutor voices are often objectified and, in the case of discussions about standardized writing instruction, often overlooked. In her abstract, Fels writes, “Tutors’ spoken and written narratives reveal the effects of standardized writing instruction and assessment on their work, agency, and identity. That tutors draw upon their experience in the classroom and the writing center differentiates this study from others that may offer conclusions on the effects of standardization on teaching and learning without fully limning the experience of students.” While there wasn’t a lot of time to discuss what we heard, I thought it was really interesting (and a little brave, though a WC audience is overwhelmingly friendly) to present “raw” data and then ask for interpretation. Because Fels went last, audience members still had the gist of the quotes in mind and were excited to discuss how peer tutors can, as one audience member put it, “teach teachers about our practices.”

Fels’ presentation–on a panel about how peer tutoring affects tutors–also highlighted for me how tutor-focused a lot of the panels were. In order to evaluate best practices, it of course makes sense to focus on tutor practices. Also, during the special interest group, several people mentioned how difficult it is to assess how WC tutorials affect student writing, and this could potentially explain why so many presenters focused on tutors (they make much more willing subjects!). One exception was Kathryn Evans’ presentation (“Tutors and Student’s Perceptions of Silence in Writing Center Tutorials“), in that she asked both the tutors and the students who were videotaped to provide informant feedback. This allowed Evans to note that tutors and students viewed moments of silence differently: while tutors saw them primarily as neutral or even negative, students saw them as much more productive (as moments to capture their ideas on paper or to think). Even tutors who understood that silence was an important part of a tutorial still had a hard time thinking of those moments as productive. Evans ultimately focused on tutor voices, though she did so because her focus was on effective training methods. This made me think back to Cook’s dissertation on tutor questions, where I wished she had looked at the effects of questions but couldn’t quite figure out how she could have done that. The answer was simpler than I thought–have tutors and students watch their videotaped tutorials.

Another group of presenters grappled with the issue of “bad news,” Newkirk’s term for the delivery of unflattering findings about one’s subjects. Any WC researcher is going to discover tutors who aren’t engaging in best practices, but there’s an added incentive to address it when the researcher is also the WC administrator. After showing a video of a professional tutor at a community college writing center focusing entirely on proficiency, failing to explain rules, dominating the session, and even (ouch) making fun of a student’s sentence, the presenters indicated they wanted to take the videotaped tutorials back to their respective WC tutors. I asked what they planned on doing with the tutorials, but I couldn’t tell what role they intended for their tutors–whether they were going to ask them to share in the interpretation to create a polyphonic text (as described by Blakeslee, Cole, and Conefrey) or show them the videos so that they could improve their practice.

There were other interesting moments, but at least a hundred pages of tomorrow’s readings still beckon me. 🙂

The three studies we read for this week–Liew, Leon and Pigg, and McKee–all focus on the influence of technology on writing and digital composition and span personal, professional, and academic writing. These projects exemplify “direct, situated research” (Addison 172): McKee and Liew rely primarily on content analysis and interviews to investigate the digital literacy practices of students, while Leon and Pigg use time-use diaries and screen capture to represent parts of the graduate student composition/professionalization process that go unnoticed or unreported.

Similar to my reaction to Brooke’s “Underlife and Writing Instruction,” I was struck by Liew’s “Digital Hidden Transcripts.” Both articles have helped me reconsider my views of moments of student resistance or acting out. Always a rule follower, I get anxious when people complain too much or too loudly or when they don’t “behave” as they should, especially in formalized settings like classes or meetings (at an HOA meeting, I once yelled at a neighbor to shut up because he kept interrupting others and disrupting parliamentary procedure; usually I can’t break the social contract, but he did not have the floor@!~!). If I had read the student’s blog post parodying his teacher on my own, I would have likely dismissed it, overlooking the criticism because of its packaging (I like my subversion polite, apparently). But Liew reads this example of student dissatisfaction as a “digital hidden transcript [that] contribute[s] to education’s democratic mission” (305). Unlike the classroom, the student’s blog was a space where he could circumvent teacher (and institutional) authority. While Liew points out that student resistance/criticism can lead to real institutional change, the parody doesn’t really deliver anything concrete, though it did encourage “dialogic resistance”: students responded to the post, creating a “collaborative critique” that more fully acknowledged the institutional pressures the teacher was under and how those pressures contributed to his teaching (311). So in a sense the content of the original blog post is secondary to the dialogue it sparks.

But schools aren’t just struggling to figure out what to do with student digital underlife–what about teacher digital underlife? Complaints once aired over the water cooler now find their way into the worldwide blogosphere! Somewhat infamous teacher blogger Natalie Munroe (I say “somewhat” because, as public enemies go, she’s fairly innocuous) is not the first example of a blog getting someone into trouble at work, but she’s currently the face of teacher blogging gone wrong. The fallout has mostly sparked debate about whether her posts critical of students were illegal or merely unprofessional, a mean vendetta or the “cruel truth”; there have also been a lot of knee-jerk reactions that Munroe is an awful person (emphasis on jerk, since both Munroe and her critics are pretty snarky). It would be interesting to look at the digital underlife of teachers–not to out them and get them disciplined or fired, but to analyze the content and the responses it begets, whether they come to what Liew refers to as an “ethical consensus” (311) or descend into flaming (resulting, as McKee explains, from misunderstandings). Looking at teacher blogs could provide a means to peer into the unofficial world of teachers as a way to better understand their professional development and their perceptions of students in particular and education more generally. Munroe comes off as pretty crotchety and not very prudent (and, now that the crap has hit the fan, not very reflective–though she expects people to be OK with teacher underlife, she is not OK with student underlife), but she does raise important questions about student engagement and accountability (as well as teacher accountability).

Our fourth study is by Danah Boyd and focuses on the digital cultural practices of teens, specifically social networking sites like MySpace and Facebook. Dr. M has recommended Boyd’s dissertation to us on at least three separate occasions, and now I can see why. Boyd’s impressive methodology shows her wrangling with how she will select and represent her subjects as well as how her own positionality will impact her project. It’s exciting to see an ethnography that attempts to include so many subjects in order to capture their complexity. Now if only I had time to read all 400+ pages …

Reading about researcher voice, bias, and the inevitable influence the researcher brings to a study has pushed me to think more about my own (though still developing) stake in the kind of research I would like to perform. Because my research project is still in utero (or, more accurately, in cerebrum), I am not ready to identify or confront my biases or to figure out how I will “reveal … [my] voice” (McCarthy and Fishman 155), but I am seeing how I can develop a project by moving past the initial frustrations/concerns from which the ideas have come. I am at heart a problem solver, and as such I mostly envision research projects in the context of an issue that needs resolving. That places me in the position of righting/writing a wrong. Of course, the problem is that such a position doesn’t really exist, nor does it allow me to see the broader picture (or win friends and influence people, but that’s another story).

For example, my tentative research proposal comes out of the frustration I’ve been feeling at work: at the kinds of assignments students are being asked to do (1101/1102 is dominated by five-paragraph essays); the kind of instructor feedback they’re receiving (few comments beyond correctness, even outside of ESL and college prep classes); and, to some degree, the directive, assessment-driven tutoring sessions that focus on correctness and encourage proofreading. Over the past few weeks, reading qualitative studies has revealed the complicated context of competing values and perceived needs that inform or determine what and how teachers teach. By the same token, tutors are driven by what they perceive to be the needs of the institution and the needs of the students. My observations may get me somewhere, but they only go so far, and how I see and how I position myself can, without any self-reflexivity, get in the way. So how do I move beyond a position that blinds or limits the researcher and her research?

For one, by fully engaging with the ethics of research and recognizing that the subjects have much at stake as well. Thomas Newkirk recommends more fully informing subjects about the possibility of what he calls, in “Seduction and Betrayal in Qualitative Research,” “bad news” (unflattering information about or portrayal of the subject). Newkirk also suggests giving subjects opportunities to provide “counterinterpretations or … mitigating information” (13), and he stresses the researcher’s responsibility to intervene (he cites Shirley Brice Heath’s ethnography as a successful example). Also attending to the ethics of representing the subject in order to write not only about but also for them, Patricia Sullivan, in “Ethnography and the Problem of the ‘Other,’” highlights the importance of the researcher’s self-reflexivity and of reciprocity “between self and other, between researcher and researched” (108), which allows the researcher to share power and the subject to participate more fully in the research.

Reflexivity also requires researchers to think about their own positionality (“fixed or culturally subscribed attributes” such as race and gender as well as “subjective-contextual factors such as personal life history and experiences” [116]) in the research, as Elizabeth Chiseri-Strater shows in “Turning in upon Ourselves: Positionality, Subjectivity, and Reflexivity in Case Study and Ethnographic Research.” Chiseri-Strater urges researchers to consider how their positionality affects methodology (and the stakes of not revealing those effects in the research) and how positionality can be explored through narrative voice, which the imperative to present an objective, authoritative voice often makes difficult. In addition to including the author’s own voice, Chiseri-Strater also calls for incorporating the voices of the subjects in order to create a “polyphony of informant voices” (128).

Extending Chiseri-Strater’s call to create polyphonic texts even further, Anne M. Blakeslee, Caroline M. Cole, and Theresa Conefrey, in “Constructing Voices in Writing Research: Developing Participatory Approaches to Situated Inquiry,” demonstrate how subjects can be incorporated into the research process to “co-construct text and knowledge with us through negotiations and interactions, rationally and intellectually contributing to the research” (136). The example they use goes beyond triangulation, or simply checking that the “results” are correct. Instead, they describe a truly cooperative research endeavor. While still acknowledging the difficulties and challenges of this kind of cooperation between researcher and researched, Blakeslee, Cole, and Conefrey make a compelling case for granting subjects more of a participatory role: Blakeslee comes to a more complex and nuanced understanding of the idea of collaborative work after one of her subjects takes issue with the way she has characterized him. Similarly, the Stanford Study of Writing is an example of a “cooperative endeavor” in that it not only provides quotes from students but also incorporates students on presentation panels in order to “define the scope, nature, and function of writing today.”

Lucille Parkinson McCarthy and Stephen Fishman, in “A Text for Many Voices,” suggest that academic discourse poses a problem for ethnographers, not just because the objective writing style leaves so much out but also because of its “single-voiced, monologic style” (156). McCarthy and Fishman provide examples of multi-voiced research (I’ve never heard of anything like this before, so my mind was just a little bit blown, in an absolutely good kind of way) that, like the example used by Blakeslee, Cole, and Conefrey, illustrate how compelling collaborative work can be.

All of these essays illustrate not only that qualitative research methods generally and ethnography specifically are useful (and necessary) in composition studies but also that composition scholars must make these (borrowed) research methods their own, somewhat ignoring the pressure to create a seamless and authoritative authorial voice that provides a definitive representation and interpretation. I’m excited at the possibility of not just including selective examples of subjects’ voices but incorporating those voices–by seeking the feedback and input of my fellow tutors as well as the potential of co-producing knowledge by sharing the power of interpretation. Suddenly the project I have in mind feels less like confirmation of “bad news”–somehow community college writing centers are doing it wrong–and more like an opportunity to create real dialogue. Wasn’t that the goal of research all along?

Reading The Community College Writer: Exceeding Expectations by Howard Tinberg and Jean-Paul Nadeau prompted me to revisit “Where Are the Student Voices?“, an opinion article in Inside Higher Ed by Tara Watford, Vicki Park and Mike Rose that calls for making room for student voices as part of increasing community college retention and graduation rates:

While [current] research [on community college students] offers a broad outline of the lives of low-income students, we know surprisingly little about their daily experiences, about the day-to-day reality of being poor and attending college. Likewise, we know little about their perception of the institutional response to their needs. This paucity of on-the-ground knowledge is a prescription for policy disaster, for the history of social policy is littered with reforms that failed because local knowledge was ignored. […] Documenting the experiences and perceptions of students is integral to the development of effective interventions.

I didn’t notice this during my first read-through, but the first comment is by Tinberg, who “worr[ies] that in assembling metrics to gauge ‘student success’ … that we gain little understanding of what students actually learn and what obstacles stand in the way of their learning.”

Watford, Park, and Rose advocate for a more complete understanding of community college students’ (and especially low-income students’) broader experience. Tinberg and Nadeau’s book works toward that larger goal by investigating first-semester cc students’ attitudes toward and experiences with writing in order to understand “To what extent … writing instruction plays a part in acclimating students in that first semester, or indeed, in placing a formidable obstacle before them” (1). Though Tinberg and Nadeau do not focus specifically or intentionally on low-income students, that category is represented along with traditional, non-traditional, prepared, and underprepared students. Like Carroll, Tinberg and Nadeau rely on a framework that privileges developmental psychology, and as such they seek to understand how student writing develops rather than gauge improvement. While their study does not follow as many students for as long as Carroll’s (Tinberg and Nadeau describe in great detail the difficulty of encouraging and retaining busy cc students who often work full-time), it nevertheless provides a great deal of useful information regarding faculty and student attitudes about writing and education, where those attitudes overlap, and where they contradict each other (and, perhaps, why). They do so not only through a series of surveys but also (and perhaps more importantly, at least as far as Watford, Park, and Rose’s goal is concerned) through interviews conducted at times and places convenient to their students, which provided opportunities for making connections and receiving mentoring.

Tinberg and Nadeau also briefly discuss the issue of rhetorical adaptability, whereby students develop writing skills and metacognition that allow them to succeed in new rhetorical situations, a topic addressed by Hassel and Giordano in the article I read for my critique. While Tinberg and Nadeau acknowledge that “students need to receive the tools with which to succeed in writing tasks from somewhere” (128) and that the composition classroom should be one of those places, they do not assign it the primary burden of writing instruction and, like Carroll, suggest that such instruction can and should happen across the disciplines.

Tinberg and Nadeau’s book resonates with me not just because it provides insight into a population I’m currently (and only just recently) working with, but also because it seems to meld education and advocacy. Tinberg and Nadeau don’t just study their students; they also advocate for them. Their book’s focus on revision (and especially the ways that professors sometimes fail to encourage or successfully show students how to engage in that process) has also helped me further refine my own research interests: specifically, how writing center tutors can encourage and support students to revise as well as help them edit and proofread their own work. This goal becomes even more important (and difficult) in a community college setting, where directive tutoring and skills-based teaching are emphasized, to the detriment (I would say) of students’ overall writing development. Now, to turn this into a research project …

Cook, Carrie Lynn (2006). The Questions We Ask: A Study of Tutor Questions and Their Effect on Writing Center Tutorials. Ph.D. dissertation, Indiana University of Pennsylvania. Retrieved February 11, 2011, from Dissertations & Theses: Full Text. (Publication No. AAT 3206641).

AUDIENCE/RESEARCH QUESTION

At the heart of Cook’s project is the desire to make writing tutorials better—to encourage the kind of meaningful talk that takes “writers through the difficult process of changing one’s own ideas and gaining new insight into one’s own processes of writing and making meaning, allowing for writers to leave a tutorial more confident and with an informed plan” (167). Much of that meaningful talk comes from the ability to ask effective questions, as her literature review of educational and writing center theory and cognitive psychology demonstrates.

These sources also point out that teachers and tutors often “lack sophisticated questioning strategies” (21) because they have either received no training in such strategies or encountered few models of them in their own educational experiences. She summarizes four studies of tutorials that focus explicitly on questions (Reigstad; Bell; Blau, Hall, and Strauss; and Munger), all of which acknowledge the importance of asking questions but do not examine the context of the questions or the speaker’s intentions and motives. As such, Cook asks, “what questions do tutors ask during tutorials; and what apparent effect [do] such questions have on the tutorial” (1)?

Her findings confirm for her that there is a need to provide tutors with specialized training in order to “know how to be cognizant of the diverse ways of making meaning, how to question well, and how to effectively model questions for students” (25). As the project’s focus is on the need for more specialized training, it is intended for writing center scholars and administrators. (She explains that it is also for writing center tutors, but tutors would likely prefer more instruction rather than observation.)

METHODS AND RESULTS

Cook observed 20 videotaped tutorials involving 10 tutors and 20 students (each tutor was observed twice, each time working with a different student) at Eastern Kentucky University’s Writing and Reading Center. Only one student was a graduate student, and only one was an ESL student. Cook logged each question, counting both obvious questions and any time a tutor’s voice went up at the end of a sentence, and then looked at the context of each question to determine which general category and specific subcategory it fell into.

1) Interpersonal

“Tutors can use such questions to help manage the flow of tutorials, gain permission, establish rapport, check a writer’s understanding or mood, take part in small talk, distract writers in order to ‘save face’ or shirk responsibility, bring a writer’s attention back to the tutorial, and inform themselves on what information and strategies are needed for a given task” (105).

  • process (to establish rapport)
  • consent
  • rapport
  • gauging (writer understanding or mood)
  • filler (chit chat)
  • distracting
  • refocusing
  • orienting (to inform tutors on how to proceed with the tutorial)

2) Making meaning

“These questions facilitate the shared goal writers and tutors have of making meaning during their collaboration in the tutorial” (106).

  • clarifying (tutor understanding)
  • verifying (that tutor understanding is correct)
  • transferring (expertise to writer)
  • suggesting (suggesting changes or leading writers through discovery)
  • prompting
  • modeling (modeling the thought processes of experienced writers)
  • drawing (draw out information from writers)
  • exploring (challenge and stimulate writers’ ideas and views)

Cook spends two chapters providing examples of the 16 types of questions. Of the 473 questions logged, about 40% were interpersonal and 60% were meaning making. The most common kinds of questions were orienting, clarifying, and verifying. She focuses one chapter on two sessions—one with effective questions and another with mostly ineffective questions—to show the different impact the types of questions have on the tutorial.
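Mostly out of curiosity about the mechanics, here is a minimal sketch of the kind of tallying that produces figures like “473 questions, about 40% interpersonal and 60% meaning making.” The coded records below are invented placeholders rather than Cook’s data; the two-level category/subcategory scheme simply mirrors the taxonomy summarized above.

```python
# Toy tally of coded tutor questions; the records are invented, not Cook's data.
from collections import Counter

# Each logged question gets a (category, subcategory) code, e.g.:
coded_questions = [
    ("interpersonal", "orienting"),
    ("meaning making", "clarifying"),
    ("meaning making", "verifying"),
    ("interpersonal", "rapport"),
    # ... one entry per logged question
]

category_counts = Counter(cat for cat, _ in coded_questions)
subcategory_counts = Counter(sub for _, sub in coded_questions)
total = len(coded_questions)

# Report category proportions and the most frequent subcategories.
for category, count in category_counts.most_common():
    print(f"{category}: {count} ({count / total:.0%})")
print("Most common subcategories:", subcategory_counts.most_common(3))
```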

Her observations affirm the importance of good questions—and the difficulty of asking them—as suggested in her literature review: “By focusing on the question’s phrasing and delivery and on the mission of the writing center, tutors can skillfully use questions in a way that goes far beyond marking a session as directive or collaborative. They can use questions in a manner that serves to meet a writing center’s mission of leading writers toward growth and learning” (182). She then calls for tutor training that familiarizes tutors with these different kinds of questions and the impact they can have on a tutorial.

Because her project includes only one kind of research method–observation–Cook is limited in where she can go with the data. Without other methods employed for triangulation (interviews, writing samples, or surveys), there is little possibility of interpretation or a means to describe a student’s actual experience of the session and its impact on his or her writing.

EPISTEMOLOGY/RHETORICAL STANCE/GENRE

Cook calls her study “descriptive” in that it “examines tutor questions and their effects; it does not examine the mental process behind questioning” (8). Cook’s methods were primarily positivistic, in that she made observations, began to see patterns, and then drew some conclusions based on those patterns. Her approach is primarily typological, in that she focuses a great deal on identifying categories. To some degree, her approach is also phenomenological in that she attempted to determine the effect the questions had on writers and their understanding of what they should do next. She did so primarily by listening to student responses and observing body language.

In the conclusion, Cook identifies other questions about the effects and outcomes of effective questions, especially the impact that questions have on student writing and student understanding of the writing process, which could have been answered by employing mixed methods. Case studies of the writers that included interviews, drafts, and final papers could be used to measure the impact of different kinds of questions. Further study could also find out why tutors ask the questions that they do—what role does training or educational or writing experience play? Or it could measure how a particular kind of training affects how tutors ask questions.