Sunday, April 20, 2014


Response 
4/22
Both Marshall and Rossman (2011) and Creswell (2009) offer refreshing tips on making an argument and anticipating ethical issues in the proposal. To argue for the soundness of qualitative research, Marshall and Rossman introduce the emic perspective (p. 258), which is centered on how local people think (Kottak, 2006, p. 47). Compared with the etic approach, which emphasizes what a researcher thinks, the emic approach allows for immersive experience in participants' communities and validates participants' point of view. The emic focus lays a foundation for long-term observational, participatory, and experimental studies in the qualitative strand. This community-situated perspective also makes sense for small-scale investigations, such as case studies, focus groups, and interviews, in such domains as cultural studies, linguistics, literary studies, political science, and women's studies (Marshall & Rossman, p. 264). As the epistemological assumptions of these domains suggest, what matters in the emic approach is the constructivist conceptualization of knowledge-making, especially recognition of the participant's ability to know. To me, this epistemological difference between the emic and the etic approach might serve as the major validity argument for qualitative research.
An equally important consideration of qualitative research's soundness is its pathos: the research's appeal to the reviewer and the policymaker. As Marshall and Rossman (2011) contend, an understanding of the reviewer, advisor, and policymaker is a sure way to persuade (pp. 259, 263, 274). What strikes me as relevant is the consideration of how much reviewers outside our field might already know, or need to know, about what we do and what our methodology is capable of doing. Discourse analysis, for example, is a method that people from other fields might question. I find it particularly important for discourse analysts to provide a valid answer to the challenge that different readers might interpret a discourse in different ways. This challenge, however, is also a place where discourse researchers might offer a counterargument: it is the multiplicity of interpretation that warrants subjective coding of discourses.
Additionally, researchers can corroborate the validity of their research in the writing. Marshall and Rossman (2011) suggest that qualitative researchers incorporate thick description of the methodology and design to provide an "audit trail" (pp. 253, 261). This is at least one thing the researcher can do to help reviewers assess the validity of a study. On the one hand, including intensive notes in the writing (p. 273) and extensive references in the proposal (p. 261) creates an opportunity for readers to make their own decisions (p. 254). On the other hand, thick description, among other ways to make the research transparent, presents the researcher as passionate about the research and conscious of the audience, which increases the ethos and persuasiveness. The 60-page proposal on page 261 would not have its persuasive power if not for the thick description.
Along with this going-thick strategy of writing, good writing practices suggested by both Marshall and Rossman (2011) and Creswell (2009) also add to the argumentative effect of a proposal. Relevant to the purpose of convincing reviewers are two more important considerations: the pilot study and ethical issues. The pilot study is a chance for researchers to argue for the qualitative approach and its applicability. For one thing, a pilot study may help test out the participant-specific issues that might affect the replicability of the research. As qualitative research is often questioned regarding its generalizability to other groups, the pilot study might help identify issues that are relevant only to a particular time, space, and community. Real-world changes, Marshall and Rossman claim, make replicability irrelevant to qualitative studies (p. 254). For another, pilot studies provide a venue to build trust and respect with participants (Creswell, p. 88) and add to the practicability of the research.
Qualitative researchers might also clarify potential ethical issues as a way to validate their research. Among the various valuable suggestions in both Marshall and Rossman (2011) and Creswell (2009), I find the non-intrusion principle especially useful. Researchers are advised to leave a research site without disrupting its natural order (Marshall & Rossman, p. 267; Creswell, p. 90). Another way researchers can exercise ethical practice is to view participants as co-researchers (Marshall & Rossman, p. 267). Researchers may promote reciprocity by avoiding coercion in recruitment, interviewing, and interpretation of data (Creswell, p. 90), refraining from deception about the purpose (p. 89), using unbiased language in the writing (p. 92), and checking the accuracy of interpretations and sharing the final presentation with participants (p. 91). These practices not only ensure benefits for participants but also increase the validity of the research.

Response 
4/15
The research roundtable this week offered me a great opportunity to work with colleagues and reflect on my own pilot study. To start with, the physical setting of the roundtable was perfect. The casual place, the background noise, the actual round table, and the beer all contributed to an atmosphere conducive to unconstrained conversation between fellow student researchers. I was not afraid of talking about something that was in progress or of asking what might have been stupid questions. While normally I would hesitate to make suggestions, the setting sort of encouraged me to "offer advice." Four people in a group made it cozy and comfortable enough to share something very, very rough in nature. I find this kind of casual setting and event very beneficial for student researchers and research in progress. Everyone in the group was ready to be supportive by listening to concerns and trying to answer questions. I believe I would not have said the same things had the event taken place in a conference room.
I was able to share some questions I had about my pilot study and the research design. I had always wanted someone to audit my process of participant recruitment, which I found to be very hard. In my pilot study, I had planned an experimental study on teachers' commentary on second language writing. I planned to have two composition teachers respond to student writing in a way that highlights cultural, rhetorical, and larger writing issues, and two other composition teachers emphasize language issues, including grammar, vocabulary, sentences, and mechanics. Accordingly, I planned to have two second language students evaluate one type of teacher commentary and two other second language students evaluate the other type. Meant as an experiment, my original design needed only eight participants. But the design involved a couple of issues. First, I would need to recruit colleagues who teach college writing, and I wanted to find out whether such recruitment posed an ethical issue for them. Second, I was not quite sure about the experiment itself: would it represent the actual practices of composition teachers? My colleagues' responses would help me answer that question. Third, I was not certain whether second language students would respond to teachers' comments in similar patterns. My colleagues' experience might offer me insights. While I had talked to my second language friends and found that they thought about teachers' commentary differently, I would benefit from the perspectives of colleagues with teaching experience.
I also needed feedback from fellow student researchers on my revised design and recruitment process, which I had carried out prior to the roundtable. In the revised design, I created an artificial essay with multiple language and writing issues. I wanted only student participants, who were expected to evaluate each of a list of 23 teacher comments and answer some extended questions. The change turned the experimental project into a qualitative study. I received really valuable feedback from colleagues, who pointed out that the artificial sample and the out-of-classroom context might weaken the validity of the design. One of my colleagues also pointed to a similar setting in the Writing Center: writing tutors here at Kent State are trained to give different types of comments to students who have different needs. This case made me realize that even with second language students, I cannot assume that they have the same concerns about their writing and expect the same things from teachers' commentary. My initial findings were consistent with this case.
While I received valuable feedback on my own design, I also benefited from thinking through various issues in my colleagues' research. Our group touched on such important issues as entry into the research site, the tension between confidentiality and the researcher's positionality, and disciplinary concerns in research design. These issues provided an opportunity for me to rethink my design and led me to ask other important questions about my project: How should I connect the research with major disciplinary concerns? How does the project look to intended journals? How should I incorporate actual classroom practices to increase validity? And how can I recruit second language students efficiently and ethically?


Response
4/8
I find pilot studies very helpful in a couple of ways. First, before carrying out projects that involve a large number of participants, researchers, or data, it is valuable to test out what the project looks like in terms of logistics. Oftentimes, we face the question of how many participants to recruit and find no definite criteria for that decision. It is good to know from the pilot study whether we might be compromising the validity of the design because of the time, cost, or labor involved in recruiting intended participants. I am curious whether it is a good choice to opt for a small-scale but different method for the current pilot study. Focus groups, for example, seem like a fitting method. But then, how are they relevant to the large-scale project we plan to conduct later? I would not have come to these questions had I not done a pilot first. Specific to my project, I feel I have to limit the number of participants because of time. While I started out intending to randomize participants, I am now considering recruiting from what I see as stratified subgroups of intended participants. The pilot study offers me a different context in which to consider the same questions for a larger project. So, I wonder if there is an issue of reliability and validity in pilot studies themselves.
It is also interesting to consider other validity issues involved in a pilot study. An important one is whether or not we may include data from pilot studies in the actual, full-length project. There is the reliability issue again. Specific to my project, I find that I will need to adjust my questions to make them more specific and to cover more issues. The changes, though, will render both the participants and the data invalid for a later, actual project. I feel I will come to see the pilot study as part of the rationale for a research design: the problems we encounter and the assumptions we test may have a better place in the rationale for a study.
Pilot studies might often be regarded as irregular and give rise to ethical issues. I was approached by a student researcher asking for participation in a pilot study on speech pathology when I visited Cleveland State University during spring break. The person told me that because the research was a pilot study, no IRB approval was needed. I was not asked to sign an informed consent form, nor was I informed of anything about the purpose and nature of the research. I did not feel respected and was sort of rushed into the interview and the experiment (having my reading of minimal pairs recorded). It is not certain whether or not we need IRB approval for a pilot study; I don't remember if van Teijlingen (2001) said so. But it is certain that participants need the same respect and protection in pilot studies as in actual, full-length studies.
The Chris McCandless case in the NYT article makes a valid point about the need for pilot studies as both bona fide research and arguments for extended research. While many pilot studies involve only small samples or few participants, they are able to generate data that are meaningful. In case studies, focus groups, and other small-scale sampling methods, researchers might be able to create analytical categories and coding schemes that are valuable in themselves. Furthermore, pilot studies might be used to walk reviewers through a methodology, rationale, and design that are qualitative in nature (Marshall & Rossman, 2011, p. 242). For this purpose, the pilot study might be viewed as an argument for the proposal.
One thing that piqued my interest in Marshall and Rossman (2011, chapter 9, p. 236) is the sample budget. Much of the budget is allocated to salaries and payments to consultants and contracts. For one thing, this proportion of the budget reflects the valuation of human labor. For another, the recognition of intellectual work in the American academic tradition is a sure way to ward off possible corruption. Researchers in China have a hard time justifying a research budget because salary is often not a valid expenditure, and the denial of intellectual work leads to restrained financing and widespread embezzlement in research.


Response
3/11
Marshall and Rossman's (2011) chapters 4, 5, 6, and 7 offer valuable suggestions on entry, sampling, cross-cultural settings, and exit, to name a few of the critical points in the research process. The importance of entry might be overlooked if researchers take it for granted that one can always make corrections as one goes. What is problematic with this approach is that an unplanned entry into a site might "contaminate" the site and make it no longer useful for the research. Should participants be asked to take a survey again, their prior exposure to the survey questions might influence their decisions and responses. In many cases, there is only one chance for researchers to enter a site to collect legitimate data. In sending recruitment emails, for example, researchers should be careful to provide sufficient information and consider potential ethical issues. The example provided on p. 102 includes the information potential participants need to decide whether or not they want to participate: an adequate introduction to the types and number of participants to be recruited, the sponsors of the project, and any benefits and harms the project might incur. Marshall and Rossman also advise researchers to phrase the recruitment email in a personal manner, as impersonal emails might be easily ignored and deleted (p. 101). Another issue with recruitment emails is that there is perhaps only one chance that recipients will be interested and agree to participate. The second time a recipient reads a recruitment email, he or she might be either driven further away or coerced into participating. In the latter case, the recruitment process violates the ethical code of voluntary participation.
How participants are sampled is also an ethical issue, as well as one of validity and efficiency. To start with, the site where participants are recruited and where face-to-face interviews and observations are conducted needs careful consideration. Public places, as Marshall and Rossman (2011) suggest, might be associated with various political implications (pp. 108-110). Classrooms or teachers' lounges, for example, might put pressure on a participant because fellow students or teachers might be present. To implement the principle of voluntary participation, researchers should recruit potential participants at a place whose associated perceptions will not affect their decision-making. It is advisable that interaction between the researcher and participants take place in a physical setting both sides agree is comfortable and unconstrained. Moreover, the type of sampling is closely related to the purpose (p. 111) and the participants of the research. Snowball sampling, for example, might have both procedural validity and efficiency: participants who know each other might already belong to the same community in which the research is situated, and the interpersonal connections between potential participants help researchers recruit without unnecessary intrusion into the site. Another type, stratified sampling, might work well in qualitative research because the participants and phenomena of qualitative research are often not as randomly distributed as those in quantitative research. This seemingly controlled sampling method comes closer to meeting the natural-research-setting requirement than randomized sampling. Convenience sampling might be tempting but often incurs problems: recruiting participants from a researcher's own colleagues or students might be convenient but harms voluntary participation and credibility.
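To make the contrast with random sampling concrete, a stratified draw can be sketched in a few lines of Python. The participant pool, stratum labels, and subgroup sizes below are entirely hypothetical, chosen only to illustrate how stratification guarantees that each subgroup is represented:

```python
import random

# Hypothetical participant pool: every entry is tagged with a stratum
# (here, first-language background); all labels are made up for illustration.
pool = [
    {"id": i, "stratum": s}
    for i, s in enumerate(
        ["L1-Chinese"] * 10 + ["L1-Arabic"] * 6 + ["L1-Spanish"] * 4
    )
]

def stratified_sample(pool, per_stratum, seed=0):
    """Draw the same number of participants from each stratum."""
    rng = random.Random(seed)  # fixed seed keeps the draw reproducible
    strata = {}
    for p in pool:
        strata.setdefault(p["stratum"], []).append(p)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, per_stratum))
    return sample

# Two participants per subgroup, regardless of subgroup size in the pool.
sample = stratified_sample(pool, per_stratum=2)
```

A simple random draw of six from this pool could easily miss the smallest subgroup entirely; the stratified draw cannot, which is what makes it attractive when the phenomenon of interest is unevenly distributed.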
Much as negotiating entry and reciprocity is critical to building trust and respect with participants, the exit strategy is also important to a study. Marshall and Rossman (2011) warn researchers not to "grab the data and run" (p. 130). Exit is more than leaving with gratitude; it concerns the type and epistemology of the research. To further involve participants after collecting data and writing the paper, researchers need to view participants not merely as "subjects" but as co-researchers. In the latter view, participants might continue to contribute to the research by working with researchers to check interpretations, representations, implications, and benefits. Exit is also the phase of research when the promised benefits are delivered and when the validity of the research is further tested.
In cross-cultural settings, participants should be viewed as co-researchers in checking the translation of their responses (Marshall & Rossman, 2011, p. 165) and culturally sensitive content in the research. A researcher needs to provide sufficient linguistic and cultural support in translation while refraining from intrusion and misinterpretation.

Saturday, March 29, 2014


4/1/2014
James King (1999) asks an intriguing question: "Can men use feminist theory?" (p. 487). An attempt to answer the question, I am afraid, needs to consider the positionality of the researcher, the values of the inquiry, and the organizational dynamics of the research community. King finds researchers' accommodations and appropriations to be a matter of positionality in that researchers need to define who feminists are and who is crossing the border (p. 487). Current scholarship, according to King, is inadequate to account for the complexity of the relationship between researchers and participants. For this relationship to be legitimate and ethical, as Ellen Barton (2000) would argue, the distance between researchers and participants should be considered necessary in some situations (p. 404). It seems rather naïve for researchers to claim that feminist positionality is essentially collaborative and reflective. Men, according to this default methodological model, will find it impossible to conduct any feminist research because they cannot participate fully in the participant community by "apprenticing" or "envisioning" (p. 487) a feminist identity. They do not have to, if they accept Barton's assumption that empirical methodologies, among other methods endorsed by the field of rhetoric and composition, are not in conflict with feminist research. If it is methodologically legitimate to observe the participant objectively from a distance, researchers may only need to show "an emphatic understanding of the other" with "an attention to assume, for some interpretative time span, the position of the other" (King, pp. 486-87). This objective stance makes sense because participants do not always demand that researchers be part of their community. Deep down in this stance is the possibility of knowing at least something that is not constructed in the interaction between researchers and participants.
We should ask whether all knowledge is socially constructed and move on to differentiate what knowledge is socially and collaboratively constructed and what is not.
The complex positionality of the researcher calls for alertness to the cultural and political values embedded in methodology. King (1999) warns us that the liberatory politics in Marxist criticism is deceptive (p. 482). Intending to represent "the voice of all marginalized," liberatory politics favors a collective voice that subsumes individual voices (p. 482). The liberatory intention is corrupted by the methodological erasure of the individual. Applied to the male feminist question, liberatory politics privileges an ethical claim that is inherently flawed in a methodology that disrespects the individual, that is, the participant from a feminist community. Like other traditions of critical inquiry, Marxist criticism is focused on naming hegemonies (p. 486) and, in so doing, fails to address the individual that is the basis of interpretative democracy. If we accept that the fundamental value of feminist inquiry is democracy, we should be aware of the cultural and political consequences of demanding that male researchers be feminists in order to conduct feminist research. We should applaud King's position that all researchers have a "dialectical relationship with their research" (p. 485). Naturally, the insider/outsider binary is not necessarily a reasonable framing if we take the democratic relationship to mean full respect for participants' choice to be represented as they are, with empathetic, not sympathetic, understanding from the researcher. Researchers would be condescending should they choose to identify with or become participants from feminist communities. Being participatory is not necessarily more ethical than keeping an objective distance from the research.
At the center of the ethical consideration of researchers' positionality toward their research and participants is the issue of turns in the field of composition and any other academic community. Why should any field take a turn to make its knowledge more legitimate? Is an ethical turn (Barton, 2000, p. 400) more productive than the social turn in composition or the linguistic turn in science? While it is apparent that any such turn is largely academic politics, we should be mindful of the methodological implications. Does an ethical turn make research more productive in terms of knowledge-making, or only in terms of politics? I fully agree with Barton that the ethical stance can be detrimental to efficiency and effectiveness in research (pp. 399-400). I'd suggest we look at the fundamental values that define our field, rather than chasing turns of community attention. Barton is right to worry that such turns divert our attention from research that is essentially about how people write and think (p. 407).

Sunday, February 23, 2014


Week 7
2/25/2014
I was struck by Annie Dillard's (2005) comment: "Never, ever, get yourself into a situation where you have nothing to do but write and read. You'll go into a depression. You have to be doing something good for the world, something undeniably useful; you need exercise, too, and people" (p. xiv; as quoted in Broad, 2012, p. 206). When I planned a project for this course, I was concerned with how to answer the call of Powell and Takayoshi (2012) that our work should "extend our 'use value' beyond academe" (p. vii). Similarly, Sullivan and Porter's (1997) critical approach to research as praxis outlines a road map of transformation from the personal to the public (p. 62). Should I launch an activist project that aims at changing the way second language students learn writing in this institution? I am excited at the very thought of delivering actual change to practices within a community. Kemmis, McTaggart, and Nixon's (2014) The Action Research Planner, Grabill's (2012) community-based research, and Blythe's (2012) activist research all suggest that the ultimate goal of research is to "change social practices" (Kemmis, McTaggart, & Nixon, p. 2). If the real value of research is in social practices, dramatic changes are needed in current academic values and organization. The "publish or perish" dictum should be geared more toward the practical uses of research in our field, much as industrial applications and patents are in scientific fields. As we often lament publications that have only a handful of readers, we must change the value orientation of research to give more credit to practical implications.
This ideological change necessitates an epistemological change in our field from an exclusively constructivist perspective to a more inclusive one. Haswell's (2012) call for appreciation of quantitative methods, Broad's (2012) proposal of empirical-qualitative research, and Calfee and Sperling's (2010) mixed methods all ask us to step a bit beyond the social constructivist castle. What matters in this school of thinking is an epistemological awareness that knowledge is not located only in the social and the subjective. It is true that the subjectivity of the researcher may serve as "an interpretive lens" (Powell & Takayoshi, 2012, p. 112), but we might close off alternative perspectives from the contextual (Broad, p. 205) and the public and social (Sullivan & Porter, 1997, pp. 62, 68). That a researchable problem is located in the personal and the subjective (e.g., Powell & Takayoshi, p. 113; Blakeslee & Fleischer, 2007, p. 18) should not be taken as an attempt to separate the subjective from objective, contextual, and social practices. In fact, Powell and Takayoshi (2012) and Sullivan and Porter (1997) contend that research should often be "collaborative and participatory" to have "empowering potential" (Addison, 1997, p. 111, as quoted in Powell & Takayoshi, p. 9). The warning that Sullivan and Porter (1997) issue in their critical approach is that we should not cut off our connection with the communities of participants, researchers, and policy-makers (p. 68). It seems likely that a researchable problem is located in the gap between the personal and subjective, on the one hand, and the contextual and objective, on the other. We create a researchable problem by constantly comparing our perceived problem with the actual, "real" problem "out there."
Along with what is researchable is the issue of what to research. Echoing Sullivan and Porter's (1997) critical approach, which identifies ideological, practice, and method components in methodology (Grabill, 2012, p. 211), Grabill (2012) suggests that we look at such stance issues as identity, purpose, power and ethics, and position (p. 215). In her response to Blythe, Takayoshi articulates the value of such meta-research as improving communities of participants and researchers (Powell & Takayoshi, 2012, p. 285). I do not dispute the value of meta-analysis, or research about research, especially in regard to reflexivity, subjectivity, and ethics, but we still have to answer a core question: what can our work do to benefit the social? If, for confidentiality reasons, Teston's study (in Powell & Takayoshi, 2012) only adds to our understanding of the research process, we are left with the question of the real value of our research.
Can we eventually produce good for the community in the way science does for industry? To me, the question involves our value orientation and our ethics. For the former, we will need to change what we view as valuable in the academic circle; specifically, we will need to break the "publish or perish" binary and give due credit to social practices. For the latter, we must extend ethical considerations to the participants. Rather than promising some intangible and remotely relevant benefits, we will need to bring real changes to the participants' community. We are not to "leav[e] people to decide for themselves" (Barton & Marback, 2012, p. 76) in the case of ethical issues, but to acknowledge our power as researchers and take action to deliver real change.

Wednesday, February 5, 2014


2/4/2014
Calfee and Sperling (2010) argue that a major purpose of mixed methods researchers is "to present a valid 'story of reality' (Luyten, Blatt, & Corveleyn, 2007)" (p. 15). Particular versions of reality, it seems to me, divide researchers into various methodological traditions. In regard to language and literacy, positivists are concerned with the true score of students, while constructivists are focused on the diversity of literacy. Which of the two is the truer story of reality? Granted, statistics that claim to be generalizable to a large population inevitably flatten the representation of what might be the reality. But is there any value in this type of representation? A large, state-mandated test that finds a particular subgroup of the population to be insufficient in literacy is an easy target of constructivist criticism.
I’d suggest caution here for two reasons: 1) statistics or any other type of experimental, positivist research is by definition limited in its generalizability. A correlation study, for example, claims and sets out to find only a partial account of the whole picture. Correlation between teachers’ comments and their effect on students’ writing explains only one category of factors, with other known or unknown factors being controlled, that might be at work. The research has neither an intention to objectify students who are diverse human beings nor an ability to overgeneralize the contextualized. Correlation researchers will be content to find that an identified factor accounts for a certain percentage of a phenomenon.
2) What counts as "a story of reality"? A satisfactory answer to this question should not stop at the overall claim that diversity is the reality. What if a large-scale, standardized test finds a less-privileged subgroup scoring lower than a traditionally privileged subgroup? Should we take the result as the representation of a reality or criticize it for creating that reality? The latter standpoint is ethically unassailable but methodologically problematic. As a technology, testing may be intentionally or inherently discriminatory. If designed, administered, or interpreted intentionally to the disadvantage of a particular group of people, a test is no doubt discriminatory. In some cases, however, tests may represent the differentiated performances and achievements of test takers. If a subgroup falls within the low-achieving section of all test takers, should researchers at least consider whether this is an existing, rather than a created, reality? Before jumping to the conclusion that differentiated test scores are racially, socially, economically, or culturally discriminatory, should researchers at least see whether discrimination has already created differentiated achievements that are only accurately represented in tests and test scores?
It is just too easy to problematize standardized tests by politicizing any such tests and, along with them, any quantitative methods and measurements. Calfee and Sperling's mixed methods should be more than an attempt to bridge qualitative and quantitative methodologies; they should bring changes to our epistemology. Seeing knowledge as socially constructed has the benefit of foregrounding diversity and complexity in literacy but runs the risk of losing sight of patterns and trends. Similarly, a search for patterns and causality will necessarily overlook individual cases. Bringing the qualitative and quantitative communities together entails an ideological and cultural overhaul, if we are not content with mere talk.
More specifically, a major issue at hand is whether researchers see any commonality in their individualist, democratic practices of literacy instruction and research. Let me offer an extreme case to illustrate the issue. Suppose a literacy instructor or researcher is fully aware of the cultural, racial, and social diversity in his or her class and has developed a successful coping strategy. What then? Should he or she promote the strategy to a larger population or constrain it to the specific context of one class? Should he or she choose not to generalize the practice for fear of making a quantitative mistake, is this not exactly what we mean by discrimination? Or should the diversity-driven instructor or researcher move further to erase diversity by promoting the practice in his or her district, state, and the entire nation? Isn't this exactly what a standardized test does? Finally, is diversity a merit to celebrate or a status to change? These questions compel me to believe that the real worth of Calfee and Sperling (2010) lies in their proposal for revolutionizing our epistemology.
Thinking of the qualitative and the quantitative not as a binary but as a dialectical continuum has the benefit of a fuller understanding of accountability. Everyone, instructors, students, and policy-makers alike, should be held accountable for progress, not diversity, in literacy. Please correct me if I am too bold: diversity is as irresponsible and discriminatory as homogeneity.