Month: November 2016

The more things change…

*let’s pretend this is an annotated bibliography for the sake of the assignment

  • Hickey, D. T., McWilliams, J., & Honeyford, M. A. (2011). Reading Moby-Dick in a Participatory Culture: Organizing Assessment for Engagement in a New Media Era. Journal of Educational Computing Research, 45(2), 247–263.

“New media literacies distinguishes between ‘technical stuff’ (new ways of communicating via increased interactivity, multimodality, and accessibility of communication) and ‘ethos stuff’ (the spirit of collaboration, contribution to community, and negotiation of and interaction with community norms)” (Lankshear & Knobel, 2007).

I was introduced to this Hickey et al. study as I was reading through Greenhow’s “Youth, Learning, and Social Media” (2011). Greenhow stated in her conclusion,

“Toward advancing a solution, Dan Hickey, Jenna McWilliams, and Michelle Honeyford, in their article “Reading Moby-Dick in a Participatory Culture: Organizing Assessment for Engagement in a New Media Era,” identify discrepancies between traditional instructional practices that emphasize individual mastery of abstract concepts and skills, and new media literacy practices that rely upon collaborative, social, and context-specific activity.”

I was excited and intrigued. The ideas of collaborative assessment, and of meaningful integration of media practices with traditional literature in a traditional classroom, could literally change everything for 9-12 ELA. And for a classroom to be using a (in my incredibly biased opinion) moldy old classic like Moby-Dick that has (again, in my incredibly biased opinion) zero relevance to high school students is a phenomenon worth reading about. How did they build assessments that involved student collaboration without students and parents railing against the unfairness of group work, group assessment, and subjective grading? How did they involve social media, social practices, and media literacy throughout the Moby-Dick unit in ways that truly engaged students and engendered learning beyond the “let’s play with technology and cool tools” approach? How did they ensure that students were consistently functioning at the application, analysis, and synthesis levels of thought? I couldn’t wait to read about how the researchers remixed the classroom and the teaching strategies and truly connected with the students, the literature, and social media beyond the “cool tools” aspect in powerful and life-changing ways.

Well, as my buildup clearly indicates for those of you who have studied foreshadowing and irony…none of the above actually happened. The school, first of all, was not a traditional K-12 school but rather some sort of charter or alternative remedial high school. The teacher only had 15 students in her class, and the school closed “because of funding issues” after the first year of the study. The use of Moby-Dick was not inspired by an attempt to truly engage students but rather because “given its reputation as a boring and difficult book…it proved ideal for illustrating how new media practices could be used to engage students through classic texts and to extend traditional literary analysis to encompass new media literacy practices as well.” In essence, they purposely used a difficult book to prove a point, rather than using a text that could and would fundamentally impact students in a meaningful and positive way. I can’t even begin to unpack the ethical discomfort I have at this point.

But that aside, this article had literally nothing to do with new media literacy practices. The study involved an entire curriculum handbook for teachers that focused on “four units: Appropriation and Remixing, Motives for Reading, Negotiating Cultural Spaces, and Continuities and Silences.” In this specific article, the Appropriation and Remixing unit was used. The authors claim “the Appropriation and Remixing unit introduced traditional literacy practices such as analyses of genre and audience, ‘hybrid’ practices such as distinguishing between creative expression and plagiarism, and new media practices of appropriation and remixing (sampling and combining media to create new expressions) and transmedia navigation (following the flow of stories across multiple media modalities).” The lesson that was used for the study was an “Annotation and Ornamentation (A&O) lesson [that] taught close reading by inviting students to annotate (provide definitions of words, explanations, and historical facts) and ornament (add illustrations, extensions, and personal connections to the text) printed manuscript-formatted pages of the text.” In layman’s terms, students read a book and annotated it, and then discussed their annotations. The study claimed that this was groundbreaking because it “emphasized literary analysis as a social, not an individual, activity by leveraging and making visible to others the unique expertise each student brought to annotating Moby-Dick.” And the groundbreaking assessment practices? Discussion prompts for formative assessment, short-answer test questions on the unit exam, and cherry-picked questions from the state standardized assessment tests that assessed the standards taught in the unit on the final exam.

In essence, this article, although claiming to be about new media literacies, was really just about teaching classic literature using constructivist methods. Students were building their own knowledge through reflection on the text and on the use of annotation and ornamentation. They were discussing their texts (annotations) and the classic text together, and interacting with each other and with the teacher in order to make meaning. They were using paper-and-pencil methods to annotate and reflect, and face-to-face discussions to make meaning collaboratively and for formative assessment. This article should have been titled “Using Written and Oral Student Reflection to Build Meaning and Understanding of Classic Literature in Small-Group, Face-to-Face Settings.” Although the four units mentioned in the curriculum handbook would be worth perusing for ideas and inspiration, the only thing this specific article inspired was frustration. Repackaging student reflection and constructivist teaching and learning as “participatory culture” and “new media literacy” is ridiculous. Perhaps I should write an article entitled “Socratic Seminars as New Media Literacy in a Participatory Culture.” Stay tuned for groundbreaking educational ideas, circa 5th century BCE.

  • Greenhow, C. (2011). Youth, learning and social media. Journal of Educational Computing Research, 45(2), 139–146.
  • Lankshear, C., & Knobel, M. (2007). Researching new literacies: Web 2.0 practices and insider perspectives. E-Learning, 4(3), 224–240.

Words

 because, words

  • Laspina, J. A. (2001). The “Locus” of Language in Digital Space. Language Arts, 78(3), 245–254.

“…by providing a new medium for textual representation, digital technology may ultimately be reshaping not only our culture but also our mode of cognition.”

Laspina crafts a wandering essay in this 2001 NCTE publication, connecting VR (virtual reality) to textbook design, to the foundations and formations of the cognition of language, to semiotics, and back again. Perhaps cutting edge at the time, the essay now feels both antiquated and prescient. Much like Orwell in 1984, Laspina does not attempt to predict the future, but rather to put his finger on the problems of the present and define them. And, much like 1984, there is a lot of rhetoric to dig through and even an invention of language that highlights many of the issues about which the author is speaking. In a nutshell, Laspina claims:

  • VR was supposed to be the wave of the future, but it still isn’t functioning how everyone said it would. It turns out that it is incredibly difficult to create any sort of VR with fidelity.
  • We aren’t quite ready for VR yet. Our ambitions got ahead of our capabilities. Also, VR tends to make us uncomfortable and to force interactions that do not feel natural.
  • Likewise, textbooks are getting ahead of their capabilities, attempting to create visual and language connections and interactions without understanding the cognitive processes that we use to read and comprehend.
  • Putting textbooks online (or on CD-ROMs) without extensive teaching in how to access and use them, both for teachers and for students, is not educationally sound.
  • There are interactions and cognitive functions involved in processing language, and these functions change when you add visuals and links and make the text interactive.
  • Reading an interactive text is different from reading words on a page. We process the words differently, our behaviors are different, and our interaction with the text is different.
  • When you add other elements to a text (visuals, hyperlinks, symbols, graphics), you fundamentally change both the text and how we interact with it.
  • In essence, we are making moves in education because they are cutting edge, but we may not quite understand the impact of these moves and we might not have the foundational understanding of what these moves mean or how best to implement them.

Laspina keeps referencing “the locus of language” (his italics) throughout his essay. Much like Orwell with Newspeak, Laspina doesn’t clearly define his term or the use of it. I cannot tell if he means the actual position of the words on the page, or if he means the position of language acquisition and comprehension in our brains. Perhaps he is being clever and means both. Regardless, his point about semiotics (although he does not use that term) is spot on, even if his delivery is somewhat convoluted. We do approach and comprehend language differently with the inclusion of semiotics; once we make the text non-linear (with hyperlinks), the cognitive processes are affected even more. In essence, we read texts in digital spaces differently than we read texts that are “old school.” And it’s not enough for educators (and textbook publishers) to simply make texts digital; we need to understand how students read these texts differently in order to better design our resources and teach our content and our students.

I was a bit surprised, as I read through this, at its inclusion in an NCTE publication. It is not particularly well crafted, and I felt like Laspina could have made his points both more concise and more meaningful had he been willing to back away from his VR introduction and examples. Aside from talking about how all of the predictions regarding VR hadn’t actually come to fruition yet (because the technology wasn’t there, because it turns out it’s really hard to do with fidelity, and because it turns out people aren’t super comfortable in that environment), he was using VR as a comparison to our uses of digital text. Perhaps his essay was included in this usually incredibly selective publication because it was groundbreaking at the time? As I think back to 2001 (time of publication), I was using the Internet and technology in my classes, but tech was not the primary mode of delivery for the courses I was designing and teaching. When I did my master’s in 2000, I got most of my resources out of a library and my thesis was written on a 3.5-inch floppy disk. Our integration of technology, and specifically digital text, is much different today. But, when I look at my own classes now, technology is still not the primary mode of delivery. We still read books, the old-fashioned, smells-like-dusty-corners-and-a-bit-like-worms, torn up, printings-don’t-match-each-other books that are dog-eared, doodled in, and full of sticky notes. I often have to defend this practice to those who believe that e-readers will solve all of our funding and comprehension and engagement problems simultaneously. And I continually have to explain that we approach text differently in different spaces, and that we process language differently and interact with language differently based on the fonts, semiotics, sequencing, and social and cultural uses of the platform itself. Perhaps this reality is what Laspina was attempting to put his finger on.

A lesson on writing

in “framing the question”

  • Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., … Mong, C. (2007). Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study. Journal of Computer-Mediated Communication, 12(2), 412–433.

“In addition to the benefits of receiving adequate feedback, students may also benefit from giving peer feedback.”

This “exploratory” study sets out to determine whether requiring peer feedback in an online graduate course will lead to an improvement in the quality of online discussion. Study participants were required to respond to discussion questions online, and then score each other’s postings using a scale based on Bloom’s Taxonomy. According to the authors of the study, “quality was maintained” in the student discussion postings; however, the hoped-for improvement in the quality of the discussion postings did not occur. The premise for the study was based on “Black (2005), [who stated] most online discussions consist of sharing and comparing information, with little evidence of critical analysis or higher order thinking.” The authors set out to explore whether or not peer feedback could “promote these higher levels of thinking.” The feedback students gave, however, was in the form of grading, and not in any sort of discussion-engendering format. Students “provided feedback to each other, specifically related to the quality of their postings” on a Bloom’s Taxonomy scale. The study had “the expectation that this would enable them to grow and learn from each other, and thus, to co-construct knowledge and understanding (Roehler & Cantlon, 1997).” Although there is little evidence in the study that the students grew and learned from each other and co-constructed knowledge and understanding, and although the discussion postings themselves did not improve after the implementation of peer feedback, the student perceptions of the value of giving and receiving feedback are worth discussing and give merit to the overall study.

Although the study design did do what it said it was going to do (require students to post online responses to discussion questions, require students to give feedback on each other’s postings, and evaluate whether the quality of their postings improved over time), the focus of the study seemed to be more on the perceived benefits of giving and receiving feedback than on the quality of the discussion postings. In fact, most of the article’s discussion focused on student perceptions of giving and receiving feedback. In addition, when students did respond about the quality of their own discussion postings, their responses were focused on “us[ing] words to help them [the student scorer] see” and on how it was “useful to know what the person would be looking for.” This doesn’t speak to higher quality discussion postings or higher order thinking at all; the students were not discussing their writing through the lens of higher order thinking but rather through the lens of the grade. This speaks to giving the graders what the graders want to see in order to get a good grade. That is not a higher order thinking skill, but rather a knowledge, comprehension, and application skill (worth a 1 on the rubric).

Unfortunately, the discussion questions themselves weren’t asking for higher order thinking (and the researchers do later acknowledge this), so it seems disingenuous to grade a student, or to ask students to grade each other, on a scale that asks them to do something the questions themselves weren’t asking students to do. Also, the discussion questions did not actually engender discussion, or if they did, this was not discussed in the study. The initial framing of the study was that online discussions are lacking in substantive content…but really, this study wasn’t asking students to discuss anything. They were crafting a detailed answer to an instructor-posted question and framing their answer in a way that a student grader could score it. This is the opposite of “discussion.” The addition of earning a grade for each response on Bloom’s Taxonomy would also hinder open and thoughtful discussion, not promote it. As a student, if I know that any response I give will be scored on a rubric, I am more apt to simply give the required number of responses and then fall silent, rather than risk my grade by posting less grade-worthy responses for the sake of discussion. Grading every response turns the focus onto the grade and takes the focus away from the promotion of thoughtful discussion.

The Bloom’s Taxonomy-based rubric, ironically, was asking the feedback-givers to use higher order thinking skills (evaluation, worth 2 points on the scale), but it would not have engendered higher-scoring discussion postings unless each discussion question specifically asked the students to analyze, synthesize, and evaluate (2-point answers). The majority of the discussion questions were asking for 1-point answers (knowledge, comprehension, and application). Non-substantive comments received zero points on the rubric. But, because there would be no benefit to students in posting non-substantive comments, and because the discussion prompts themselves were asking for knowledge, comprehension, and application, there was really nowhere for the discussion postings to go to score higher; likewise, students would not suddenly start posting non-substantive answers partway through the course, so the scores would not logically drop, either.
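
To make that scoring ceiling concrete, here is a minimal sketch of the kind of three-tier point scale the study describes. The point values (0 for non-substantive comments, 1 for knowledge/comprehension/application, 2 for analysis/synthesis/evaluation) come from the article as summarized above; the category labels and the score_posting helper are purely illustrative assumptions on my part, not the authors’ actual instrument.

```python
# Illustrative sketch only: a Bloom's-based point scale like the one the
# study describes. Tier values (0, 1, 2) are from the article; the labels
# and this helper are my own assumptions, not the researchers' rubric.

RUBRIC_POINTS = {
    "non-substantive": 0,   # off-topic or filler comments
    "knowledge": 1,
    "comprehension": 1,
    "application": 1,
    "analysis": 2,
    "synthesis": 2,
    "evaluation": 2,
}

def score_posting(bloom_level: str) -> int:
    """Return the rubric points for a posting judged at the given Bloom level."""
    return RUBRIC_POINTS[bloom_level.strip().lower()]

# A prompt that only asks for comprehension caps the expected score at 1:
print(score_posting("comprehension"))  # -> 1
print(score_posting("evaluation"))     # -> 2
```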

I struggle with the premise and perceived outcomes of this study. As a K-12 teacher, I often have students score each other’s writing and give each other substantive feedback related to “bump scores” (how to move up on the rubric) and to recognizing what they did well to earn the scores they earned. Not only does this help students see through a reader’s lens, but it also helps them critique their own writing. However, student grading can never be a part of the actual grade; it would be unethical for students to actually grade each other. Instead, it’s the act of giving feedback that earns the grade, and not the initial piece of writing. As a means of formative assessment, I am checking for student comprehension of the elements of writing that are effective and substantive. I am grading the feedback, not the essay. And, as a classroom teacher, formative assessment must be timely, and the entire purpose of it is to check for understanding and inform teaching. In this study, the assessment grades took weeks to get back to the students, and the grades did not influence or inform the teaching whatsoever. Although the authors discussed the importance of feedback as formative assessment, the design of the study did not allow the feedback to function as formative assessment.

In essence, this study should have been titled, “Will requiring students to give feedback enhance the quality of the feedback they give?” By reframing the question, the authors could have focused precisely on the benefits of giving and receiving peer feedback instead of getting lost in peripheral discussions of the quality of online postings, formative assessment, and Bloom’s Taxonomy.


Meaningful engagement

divergence <—> synthesis

  • DeSchryver, M. (2015). Web-Mediated Knowledge Synthesis for Educators. Journal of Adolescent & Adult Literacy, 58(5), 388–396.

“I encouraged students to use the Web as a cognitive partner that could help them think and be creative with the Web, directing them to ‘start using the Web for more synthesis and/or creative thinking’…The more learners practice, and are cognizant of the results of this practice, the better they will get.”

In this article, DeSchryver briefly unpacks his Theory of Web-Mediated Knowledge Synthesis and describes the studies that led him to it. Beginning with a nod to McEneaney’s claim that “digital literacies are evolving in a way that litbots (literacy robots) will eventually make meaning from Web resources for human readers,” DeSchryver believes that, as litbots take on the meaning-making roles for us online (synthesis for meaning), we can free up brainspace and time and focus on the higher-order meaning-creating roles (generative synthesis). Through multiple case studies of “advanced learners using the Web for ill-structured reading-to-learn and reading-to-do tasks,” DeSchryver builds a theory with seven interacting elements: (1) divergent keyword search phrases; (2) synthesis for meaning; (3) in-the-moment insights; (4) repurposing; (5) reinforcement; (6) note-taking; and (7) creative synthesis. He unpacks the seven elements individually, with examples from the case studies showing how he taught and reinforced these elements and how students responded with their “a-ha moments.” The end of the article is a call to action for K-12 teachers to start scaffolding these elements into their practice to engender creative synthesis in K-12 students.

This theory, although complex, makes a lot of sense; however, the opening focus on the evolution of technology to take care of our need to synthesize for meaning was somewhat distracting. I believe that we will always need to synthesize for meaning, regardless of the tools within reach; the key is to understand that students and adults fundamentally need to also engage in creative synthesis, and that our evolving technologies provide us with the ability to do both simultaneously. The Figure 1 illustration does not seem to have all elements interacting equally with each other, which was an assumption I had made; are some of the elements less interactive with others? It is decidedly difficult to illustrate the interactions in a way that is both thorough and coherent; placing creative synthesis at the center makes sense, as that seems to be the “generative synthesis” that DeSchryver is discussing at the beginning of the article. In essence, the six other elements all lead to and are supported by creative synthesis. I also wonder about the interchangeability of the ideas of generative synthesis versus creative synthesis. Although generare and creare are synonyms, I don’t know if they are used the same way in learning theory (a tangent I could research in my free time).

My struggle with the article is not with the theory but with the call to action; using a theory generated by (created by?) case studies with master’s-level adult learners in order to find applications for K-12 has a pedagogy/andragogy disconnect. Elementary students, especially younger ones, are eager to explore and create; unfortunately, because of so many factors related to how we do school and what our society views as the purposes of school, this passion to explore and make meaning is often missing in upper K-12. High school students have become adept hoop-jumpers and grade-getters, and the exploratory nature of generative synthesis is almost a lost art. My students want to find the right answer as quickly as possible and then move on to entertainment. It is a constant struggle to redirect and redirect and redirect and force them to spend the time they need, both in online spaces and in offline contemplation, to truly begin to make meaning beyond the scavenger hunt-and-peck of the Internet webquest that they have become adept at completing as quickly as possible with as little reflection as possible. Don’t get me wrong, I love my students. But K-12 students are rarely self-directed and intrinsically motivated to simply “explore” without some sort of “correct answer carrot” at the end. (And their parents are generally not supportive of education without tangible and gradeable outcomes; and administrators are not keen on teaching and learning that cannot be formatively assessed based on learning targets, success criteria, and performance tasks on a daily basis.)

All that being said, the questions DeSchryver asks and the tasks he has his students do are very similar to my Genius Hour design in my junior ELA classes. What he proposes can be done in K-12. I know that my colleague Janet Neyer in Cadillac and I are both fundamentally trying. But it has to be heavily managed and heavily scaffolded, and although the majority of students will eventually plug in, the level of their thinking and the proficiency of their creative synthesis are often distressingly low. So much of teaching and engendering critical thinking and creative or generative synthesis comes down to teaching habits of mind. In a class where we also have to teach subject-verb agreement and the rules of capitalization and the difference between a colon and a semicolon…teaching students to engage with their devices in a meaningful way and to disengage meaningfully to allow for reflection…and teaching them to search for divergence instead of searching for the fastest correct answer are both exceedingly inspirational and depressingly daunting.