I’ll be presenting on Digital Sound Pedagogy at Hodgepodge Coffee in Atlanta this morning at an Atlanta Connected Learning meetup. The slides I’ll use for my presentation can be viewed below or at this link.
Next Fall, I’ll be teaching a new ENGL 1101 course at Georgia Tech. The course description follows:
Sound and Vision
In this class, we’ll explore what James Joyce’s Stephen Dedalus called the “ineluctable modality of the audible” and the “ineluctable modality of the visible.” Sound and vision have historically been emphasized as the two major sites of perception and have competed with each other for metaphorical primacy in the language of philosophers. In this class, we’ll do our best to move beyond the idea of sound and vision as subordinate vehicles for content that actually matters—rather, we’ll consider sound as sound and image as image. Discussion will be situated in recent debates in sound studies and visual culture studies, and students will work to make projects that reflect on sound, silence, sight, and image. We’ll ask what we find when we really listen and really look. As the late David Bowie asked in the song from which the course draws its name, “Don’t you wonder sometimes / ‘Bout sound and vision?”
Last spring, my colleague Lauren Neefe recorded an interview with me about my work on sound, machine learning, and laughter. Afterward, she edited the interview into the first episode of Flash Readings, a podcast that showcases the research of the Marion L. Brittain Fellows at Georgia Tech. The podcast is now up at TECHStyle, our online forum for digital pedagogy and research.
Each episode of Flash Readings focuses on a particular sound in relation to a Brittain Fellow’s research. The episode featuring my research, titled “Laughter Worth Reading,” focuses on two instances of laughter on recordings of William Carlos Williams’s “This Is Just To Say,” and in the episode I consider the difference such laughter makes in how audiences perceive poems and in how critics should interpret them. It also gestures toward the work I’ve been doing in the wake of the HiPSTAS Institute.
In the last six months, I’ve been working on the Twitter bot Pentametron, which finds tweets incidentally written in iambic pentameter, pairs them into rhymed couplets, and retweets them into followers’ feeds. I’m interested in Pentametron because of the way it finds poetry in the language of digital culture, invents a crowdsourced form of algorithmic authorship, and overlaps with the concerns of conceptual poetry and Flarf.
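The two checks Pentametron performs can be sketched in a few lines. This is a toy illustration, not the bot’s actual code: a real implementation would look words up in the CMU Pronouncing Dictionary, while here a tiny hand-made stress and rhyme lexicon (all entries hypothetical) stands in for it.

```python
# Toy sketch of Pentametron's two checks: (1) does a tweet scan as
# iambic pentameter? (2) do two pentameter tweets end in rhyming words?
# The lexicons below are hand-made stand-ins for the CMU Pronouncing
# Dictionary; "0" marks an unstressed syllable, "1" a stressed one.

STRESSES = {
    "the": "0", "cat": "1", "upon": "01", "mat": "1",
    "will": "0", "sit": "1", "tonight": "01",
    "a": "0", "dog": "1", "beside": "01", "gate": "1",
    "may": "0", "alight": "01", "wait": "1",
}

# Rhyme key: final stressed vowel plus everything after it (hand-coded).
RHYME_KEY = {
    "tonight": "AY1-T", "alight": "AY1-T",
    "gate": "EY1-T", "wait": "EY1-T", "mat": "AE1-T",
}

IAMBIC_PENTAMETER = "0101010101"  # five iambs

def scans_as_pentameter(line):
    """True if the line's concatenated stress pattern is five iambs."""
    pattern = ""
    for word in line.lower().split():
        if word not in STRESSES:
            return False  # unknown word: reject, as the bot does
        pattern += STRESSES[word]
    return pattern == IAMBIC_PENTAMETER

def rhymes(line_a, line_b):
    """Two lines rhyme if their final words share a rhyme key."""
    last_a = line_a.lower().split()[-1]
    last_b = line_b.lower().split()[-1]
    key_a, key_b = RHYME_KEY.get(last_a), RHYME_KEY.get(last_b)
    return key_a is not None and key_a == key_b and last_a != last_b

tweet_a = "the cat upon the mat will sit tonight"
tweet_b = "a dog beside the gate may sit alight"
print(scans_as_pentameter(tweet_a))  # True
print(rhymes(tweet_a, tweet_b))      # True: they form a couplet
```

Because the filter rejects any line containing a word it cannot pronounce, it trades recall for precision, which suits a bot that needs only a steady trickle of couplets from an enormous stream of tweets.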
I presented on Pentametron at the Textual Machines Symposium at the University of Georgia last spring and at the Association for the Study of the Arts of the Present conference in Greenville, SC this fall.
My students are presenting pecha-kuchas this semester in my course on Digital Culture, and I put together this sample pecha-kucha that distills some of my ideas on Pentametron into a 20-slide, 6:40 presentation. The presentation can be viewed below.
I’ll be facilitating a workshop on “Designing Productive Blog Assignments” in the Georgia Tech Writing and Communication Program’s DevLab today. The workshop grows out of my mixed experience with blog assignments, which have been a key component of my courses since Spring 2013.
In the past, I’ve had my students use blogs in a variety of ways: as a place to construct an ad-hoc poetry anthology, as a place to stage writing in progress, as a low-stakes way to respond to readings, and as a place to showcase polished writing and presentations. I like blogs because they make readerships real and prompt students to think about writing as a sustained public activity, not a private task to be done once in a while. Blogs, moreover, promote a different kind of discussion and help constitute the classroom as a community of discussion that takes place both in person and in writing.
Of course, blogs come with their drawbacks. Mark Sample has written about the challenges of faithfully reading—and grading—coursework on blogs, and he has also written about how that exhaustion builds up over time. Now that I’ve taught with blogs four semesters in a row, and especially now that I’m teaching more students at once, I’m having some of the same trouble with maintaining motivation and integrating blogs well into my classes.
Today’s workshop hopes to think through some of the lessons my colleagues and I have learned about blogs in the classroom and to take up Sample’s call to design “A Better Blogging Assignment.” Some of the issues we’ll think about:
- What kinds of blogs work best for what purposes? Should students host their own individual blogs, should they form small-group blog communities, or should the whole class post to a single blog?
- How can blog discussions be better integrated into classroom discussions? Sample uses a model in which various students have various roles each week, and my most successful student-blogging semester involved “student leaders” organizing their thoughts and bringing blog discussion into the classroom.
- What are comments sections for, and how can we use them to promote genuine discussion? This is one of the hardest aspects of blog assignments, in my experience. It’s hard to motivate students to comment productively, and even harder to figure out how to assess comments.
- How do instructors manage the logistics of a course blog? This is, again, extremely difficult. There’s a lot of reading to do and a lot of different moving parts to manage. How do we keep up with course blogs, and how do we, in the end, grade them?
I hope we’ll all leave today’s workshop with a clearer sense of answers to some of these questions and some new ideas about how to make our blog assignments better. After the workshop, I’ll post some of our conclusions here and on TECHStyle, a collaborative blog written by Brittain Fellows at Georgia Tech.
I’ve set up a simple web site that brings together the pecha-kuchas from my American Modernisms course this semester. My hope is that other students will be able to add to the videos posted on the site in semesters to come and that eventually the compendium can become a resource for beginning students of modernism. The collection hosted at the site now covers a few of the concepts, events, and isms that shaped modernism, but I hope that in time its coverage will grow broader. The site is hosted here.
A few more examples of student work for the modernism pecha-kucha compendium project.
Cameron Mankin did his p-k on primitivism.
Fiona McGregor did hers on decadence.
Evelina Dubrovskaya did hers on surrealism.
Ellen Sands did hers on impressionism.
Still to come: presentations on cubism and ragtime.
My-Anh Nguyen, a student in my modernisms course this semester, put together this terrific pecha-kucha on the Armory Show. Other students, who presented on impressionism, primitivism, surrealism, decadence, ragtime, and cubism, will be uploading their videos soon, and I’ll begin to incorporate them into a separate web site. For the time being, though, I’ll post them here.
My upper-level “American Modernisms” course is in the midst of an assignment that tries to use pecha-kucha presentations productively in a literature classroom, an idea I initially thought about last spring. Each student has selected a modernism-related term and will develop a pecha-kucha about that term and its relation to the course content.
Once all the students have presented, they’ll record their presentations and add them to a compendium of pecha-kuchas about modernism-related terms. There are only seven students in my class this Fall, so it’s a small group, but I’m looking forward to their p-ks on impressionism, decadence, cubism, surrealism, primitivism, the Armory Show, and ragtime.
To show them how the assignment would work, I prepared this pecha-kucha on Dada:
I’ve posted a session proposal for this weekend’s THATCamp over at the THATCampVA 2013 blog.
The text of the proposal:
Tools for exploring big sound archives
Brandon Walsh has already proposed a session about tools for curating sound, so what I’m proposing here might well fit into his session, but in case what I’m proposing is too different, I wanted to elaborate.
At THATCamp VA 2012, I proposed and then participated in a discussion about how digital tools could help us not just think about tidily marked-up plain-text files, but also the messier multimedia data of image files, sound files, movie files, etc. We ended up talking at length about commercial tools that search images with other images (for example, Google’s Search By Image) and that search sound with sound (for example, Shazam). A lot of our discussion revolved around the limitations of such tools: yes, we can use them to search images with other images, but, we asked, would a digital tool ever be able to tell that a certain satiric cartoon is meant to represent a certain artwork? For example, would a computer ever be able to tell that this cartoon represents this artwork?
Our conversation was largely speculative (and if anyone wanted to continue it, I’d be happy to have a similar session this time around).
Since then, however, I’ve become involved with a project that takes such thinking beyond speculation. As a participant in the HiPSTAS institute, I’ve been experimenting with ARLO, a tool originally designed to train supercomputers to recognize birdcalls. With it, we can, for example, try to teach the computer to recognize instances of laughter, and have it query all of PennSound, a large archive of poetry recordings, for similar sounds. We might be able, then, to track intentional and unintentional instances when audiences laugh at poetry readings.
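The workflow just described — tag a few example sounds, then ask the system to rank every window of a large archive by similarity to those examples — can be sketched as a nearest-neighbor search over feature vectors. This is only an illustration of the shape of the workflow, not ARLO itself, which extracts rich spectrogram features on supercomputers; all the feature vectors and window names below are made up.

```python
# Hedged sketch of an ARLO-style query: a scholar tags a few example
# regions (here, toy 3-number feature vectors, e.g. energy, pitch
# variance, burst rate), and the system ranks archive windows by
# distance to the nearest tagged example. Smaller distance = more
# laughter-like. All names and numbers are hypothetical.
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rank_windows(archive_windows, tagged_examples):
    """Rank (name, features) windows by distance to nearest tag."""
    scored = []
    for name, features in archive_windows:
        score = min(distance(features, ex) for ex in tagged_examples)
        scored.append((score, name))
    return [name for score, name in sorted(scored)]

# A few tagged "laughter" examples.
laughter_tags = [(0.9, 0.8, 0.7), (0.85, 0.75, 0.8)]

# Windows drawn from a hypothetical PennSound-like archive.
windows = [
    ("reading-042-min-13", (0.2, 0.1, 0.1)),    # steady speech
    ("reading-007-min-02", (0.88, 0.79, 0.74)),  # laughter-like
    ("reading-019-min-21", (0.5, 0.4, 0.3)),     # applause-ish
]

print(rank_windows(windows, laughter_tags))
# The laughter-like window ranks first.
```

The point of the sketch is that the scholar never writes a definition of laughter; the tagged examples *are* the definition, and the machine’s job is to generalize from them across thousands of hours of recordings.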
The project involves both archivists and scholars: the archivists are interested in adding value to their collections (for example, by identifying instances of song in the StoryCorps archive), and the scholars are interested in how this new tool might help us better visualize and explore poetic sound and historical sound recordings.
My sound-related proposal, then, is this: to have a conversation about potential use cases for this and similar tools. Now that we know we can identify certain kinds of sounds in large sound collections, how should we use such a tool? Since Brandon’s already interested in developing sound collections using Audacity, I thought we might also add this big-data/machine-learning tool into the mix of the conversation.