Sound Familiar?

From Ian Bogost’s “This is a Blog Post About the Digital Humanities”

Let’s go over that again. At the MLA and in a new book, digital humanists debated the role of computer media like blogs in the practice of humanism. In the wake of the MLA, a famous and controversial literary theorist notes that the MLA featured debates about the use of media like blogs in scholarship, and raises concern about the nature of media like blogs in scholarship, largely through discussion of a book by an MLA officer about the ways scholarship is changing when done on blogs, which was first a blog and then became a book. Digital humanities advocates respond in blogs and blog comments about blogging, arguing, among other things, that digital humanities are not really postmodernist. Ahem.

When I lived in Los Angeles and worked in the entertainment industry, I remember coming to a realization: a great deal of Hollywood entertainment is about the entertainment industry. Think about it. Fame, Barton Fink, Super 8, Tropic Thunder, Party Down, Adaptation, Full Frontal, Peeping Tom, Ed Wood, The Truman Show, Sunset Blvd., The Barefoot Contessa, Somewhere, Hollywood Ending, Seinfeld. I guess it makes sense. Write what you know, the aphorism goes. At first, that means heartbreak or black heartedness, but eventually, with success, what one knows is what one does. And currently, what one does in the humanities is talk about the humanities. This is particularly true of the digital humanities, some of whose proponents are actually using computers to do new kinds of humanistic scholarly work in breaks between debates about the potential to use computers for new kinds of humanistic scholarly work.

And now I’ve written a blog post about it.

Work as a problem with DH

One of the things I’m hoping has become clear–perhaps painfully so for a few of us–is the extent to which a digital humanities project represents a substantial amount of work. Even a supposedly small element in a project, like metadata tagging, can turn out to be a difficult and time-intensive piece of work, one which generally isn’t much fun but which has to get done. Continuation, expansion and maintenance of an existing site, like EBBA, typically entails a repetition or reiteration of this kind of work. If the EBBA site adds a new search term (searching by place or by type of image), then a new set of metadata must be developed and then entered for every single item in the archive.
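The cost of a new facet is easy to see in miniature. Below is a hypothetical sketch: the record structure and field names are invented for illustration and are not EBBA’s actual schema. Adding the field is trivial; the point is that the values still have to be supplied by hand, item by item.

```python
# A toy archive of records. The structure and field names are invented
# for illustration -- this is not EBBA's actual schema.
ballads = [
    {"id": "EBBA-001", "title": "A Merry New Song", "date": 1624},
    {"id": "EBBA-002", "title": "The Lamentation", "date": 1631},
]

def add_facet(records, facet_name, default=None):
    """Give every record an empty slot for the new facet.

    The slot is the easy part: a human still has to examine each item
    and supply the actual value.
    """
    for record in records:
        record.setdefault(facet_name, default)

add_facet(ballads, "place_of_printing")

# Every item in the archive is now on the to-do list.
todo = [r["id"] for r in ballads if r["place_of_printing"] is None]
```

One line of code creates the facet; the `todo` list is the semester (or years) of labor that follows.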

Projects involving tools require either direct coding or adoption of existing code from other sites. In theory, a tool developed to speed the entry of metadata for the EBBA project could be employed for similar projects; in practice, tools often are either outdated or coded in a way so specific to a single project that they require modification for broader use. Hence, the development and proliferation of project platforms (WordPress and Blogger, for example, are blogging/web-page platforms; Omeka is a platform tailored for more metadata-intensive application). Standards for metadata exist alongside such platforms, like the TEI standard we’ve looked into earlier in the semester.

When a collaborative online project exists in the way that user involvement typically exists (with Facebook or Myspace or YouTube or Wikipedia), everything seems easy enough. You log in, post your edit to the wiki entry, other people vet the entry, and it’s all good. In practice, this kind of open development conceals from the typical user the degree of work involved, from developing and maintaining a platform (like the wiki platform), to administering a site (Wikipedia servers must be purchased, maintained, and replaced, just as one example), not to mention the difficulty of starting up such a project in the first place. And while the Internet bills itself as a kind of instant gratification engine–Google tells you how many tenths of a second it took to generate its results, for instance–the actual time and investment of energy required to develop an online project, not to mention the requirements to put it into practice, looks much closer to scholarly and academic norms and expectations. Most projects I’ve seen which make their histories known took years to get past the starting gate. EBBA, for example, began development in 2003. That means EBBA’s development has gone on longer than, say, the iPhone’s, from initial conception to present implementation.

One of the challenges digital humanities face going forward, within the academic community, is articulating more clearly to those who assess scholarly work the degree of time, planning and organization required to prototype, much less “complete,” a single project.

Response to Week 8 reading

In the article “What’s in a Name?,” Kenneth M. Price is primarily concerned with the labels we choose to describe our work. It seems to me that this tension is inextricably associated with apprehensions about the current state of the humanities and liberal arts in general. Essentially, I mean that there is a growing concern with legitimizing, justifying, or otherwise defining the importance of the humanities in general that seems to be informing this article. For example, Price states:

For many people, electronic work is even more dubious [than traditional editorial work]: what relatively short history it has is marked by distrust, denigration, and dismissal. We all know the charges, however distorted they may be: digital work is ephemeral, unvetted, chaotic, and unreliable. When suspicion of the value of editing combines with suspicion of the new medium, we have a hazardous mix brewing.

This clearly articulates a concern with how much authority or respect digital editing might be given; a concern that is largely based upon how “others” might perceive the value of the work, in this case based upon how we choose to title the genre. Price frets, “In the fraught circumstances of the academy, driven by a prestige economy, humanities scholars are well advised to be highly self-conscious about what we do and how we describe it.” This fear is driven by a sense of legitimacy, place, and belonging that I would argue should not be extant in digital humanities the way that it is in regular humanities. Perhaps this is still an issue of definition, semantics really, in the sense that if we define digital humanities (in this case digital editing) as something other, or capable of more, or at least not limited in the same ways as traditional editorial work, then why be afraid of how the genre will be defined? Let the product of the genre define it. If, on the other hand, we define digital humanities as simply an extension of what has been before, limited by the same constraints of glacial-speed progress, then we should be afraid of what others might impose upon the work, and we should be selfishly and actively gobbling up and claiming the right to digitize every text ever written, staking our claim and marking the territory as “ours!” My rhetoric makes it obvious where I fall on the spectrum, and while there may be some growing pains associated with a new field, I do not believe the hazards must necessarily be as many as Price is suggesting.

As Folsom and Price explained in their grant application to the NEH, “The amount of Whitman’s work is so huge that no two scholars could hope to edit it effectively in a lifetime — fourteen scholars spent the better parts of their careers editing the materials that now make up the Collected Writings. But we do believe that developments in electronic scholarship have made it possible to enhance and supplement the Collected Writings by editing the materials that have not yet been included.” This would be true for any large body of work, or for any canonical author. The answer does not lie in traditional editing. Why limit the technology we have available, and the speed with which these large projects might be successfully assembled, by using traditional methods and traditional criteria to define who can assemble these collections? Crowdsourcing can solve the problem of being reliant upon a previous scholarly edition printed in traditional form. Starting from scratch is not as daunting when you have a thousand scholars (albeit scholars whose expertise spans a wide spectrum) working on the project instead of just two.

In some ways, for me, it seems as though some of the concerns voiced here are similar to the idea of Pac-Man (the traditional humanists) trying desperately to gobble up all of the pellets (digitize traditional works) before the ghosts (librarians, system engineers, or other “less specialized” persons) do.

Twitter Tool Review

In my review I will include some discussion of Twitter’s practical usefulness and usability, but I will not dwell on it because I think the conversation has moved beyond whether or not Twitter is useful. In its first few years of activity, we might have called Twitter a “tool.” Now, we might say that Twitter is how we process cultural moments. Whether or not I think its interface is “intuitive” feels kind of irrelevant.

Website
Launched: July 2006
Cost: Free
Requirements: Any computing device
Rival Tool: Facebook
Pros: Fast-paced, powerful networking and information tool
Cons: Cluttered and potentially overwhelming
Other Reviews: This is a good example of the binary conversation (Waste of time or best thing ever?!) that has been presented in conversations about Twitter for years. This piece on “The Twitter Explosion” is aging but potentially useful.
Reviewed by: Alex Pieschel
Review Date: 27 February 2013

In “Stuff Digital Humanists Like,” Tom Scheinfeldt argues that Twitter is “more open” than Facebook, and it “allows for the collaboration and non-hierarchy that the Internet and digital humanities values.” I agree with this statement to an extent, but I also think we should be skeptical of Scheinfeldt’s use of the term “non-hierarchy.”

Twitter’s About page states, “You don’t have to build a web page to surf the web, and you don’t have to tweet to enjoy Twitter.” I agree and think this is an apt comparison. That said, Twitter, like any other kind of social media, has its own embedded hierarchy. Though one can “follow” almost anyone on Twitter, one cannot expect prominent political figures, artists, and academics to “follow back” and engage with one’s ideas. For some, Twitter can feel less like collaboration and conversation and more like anonymity and irrelevance.

One might call Twitter a minimalist networking tool. It lets you do a lot with what at first appears to be severely limited agency. Twitter allows users to employ eight basic functions:

Tweet: is Twitter’s word for hurling truncated thoughts into the digital void. We might consider Twitter’s 140-character limit its most distinguishing feature. Even if we remove replies, retweets, and hashtags from the equation, the character limit in itself asks for a very specific kind of writing. This limited form of micro-blogging encourages an author of tweets to be concise, nonsensical, or perhaps even poetic.

It’s worth noting that some people prefer to get around the character limit by numbering tweets (1/3, 2/3, 3/3), by tweeting a link and then offering commentary in the following tweet, or by expressing a complete thought one broken piece at a time.

Favorite: is interesting because a tension has developed between its practical use and its social connotations. You might favorite a series of tweets because they contain links you don’t have time to read now but want to read later. Or you might favorite a series of tweets because you want to retweet them periodically throughout the next week. (I’m ashamed to admit, I’ve done this before with @fanfiction_txt). Or you might favorite a tweet simply because you want to inform the user of your approval. On Valentine’s Day, I remember someone tweeting something to the effect of, “Favoriting is the ‘I like you, but I just want to be friends’ of Twitter.” This multiplicity sometimes makes it difficult to discern the motivations of one who frequently favorites. The tweet below seems to play with the ambiguity between social intent and practicality.

Retweet: is more powerful than Favorite because it extends the circulation of an idea, increasing its longevity. A retweet allows you to reaffirm and act in solidarity with another. A tweet that is tweeted once is fleeting, but a tweet that is retweeted a thousand times has potential staying power. TweetDeck makes retweeting more powerful because it enables editing, which allows you not only to share an idea but to explain what you think about it and leave a more personal digital imprint. Adding your critical, curatorial voice to someone else’s ideas gives people a reason to follow you. In the example below, I give my own summary of a blog post in one word: “Poetic.” I then include the quote I found most powerful in said blog post, and the original tweet follows my own framing.

Reply: initiates a conversation that involves performance because followers can observe your “replies,” which appear stacked on top of each other in a conversation ladder.

Hashtag: is an interactive metadata tag denoted by the symbol “#.” The hashtag is not an official function of Twitter, but Twitter is where the hashtag is most popular (but that could change). The first Twitter hashtag was used in 2007 by Chris Messina, who wanted to keep track of conversations at a tech conference.

Twitter doesn’t own the hashtag, which is why it can function outside of Twitter in Instagram or television advertisements. Employing an appropriately specific hashtag is a useful way to control and catalog a conversation, as illustrated in the tweet below, or to document a specific event (like a natural disaster or presidential election) in which information is changing quickly.
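Because the hashtag is, at bottom, inline metadata, its cataloging role is easy to sketch. Here is a minimal, hypothetical example: the sample tweets are invented, and the regex is a simplification of how Twitter actually tokenizes tags.

```python
import re

# Simplified hashtag pattern: "#" followed by word characters.
HASHTAG = re.compile(r"#(\w+)")

def extract_hashtags(text):
    """Return the lowercased hashtags found in a tweet's text."""
    return [tag.lower() for tag in HASHTAG.findall(text)]

# Invented sample tweets.
tweets = [
    "Great panel on digital archives #DH2013",
    "Slides from my talk are up #dh2013 #metadata",
]

# Group tweets under each tag -- the same cataloging work a hashtag
# search performs, done by hand.
index = {}
for tweet in tweets:
    for tag in extract_hashtags(tweet):
        index.setdefault(tag, []).append(tweet)
```

Lowercasing is what lets `#DH2013` and `#dh2013` land in the same bucket, which is exactly the conversation-gathering behavior described above.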

If you’re looking for something specific, hashtags can bring up a lot of tedious junk. They’re better for taking in snapshots than conducting directed research because a feed’s usefulness is unpredictable.

It’s worth noting that some people now consider the hashtag a tacky, overeager form of self-promotion. In addition, hashtags are sometimes used facetiously and self-referentially, “a metajoke about metadata — a bit like setting up an entire hanging file just to store a single Post-it,” as Julia Turner puts it. Most of the people in my feed don’t use hashtags, even though most of them probably know how they work: another useful reminder that most actions on Twitter carry social connotations that can either reinforce or contradict their practical function, depending on how they’re used.

The hashtag is less ambiguous than favoriting, but its mechanisms are more complex. Turner waxes philosophical on this point, illustrating how the hashtag has evolved beyond straight metadata:

But the hashtag, for the dexterous user, is a versatile tool — one that can be deployed in a host of linguistically complex ways. In addition to serving as metadata (#whatthetweetisabout), the hashtag gives the writer the opportunity to comment on his own emotional state, to sarcastically undercut his own tweet, to construct an extra layer of irony, to offer a flash of evocative imagery or to deliver metaphors with striking economy. It’s a device that allows the best writers to operate in multiple registers at once, in a compressed space. It’s the Tuvan throat singing of the Internet.

Follow: allows you to regularly monitor a specific user’s tweets.

List: lets you organize the people you’re following into manageable, logical groups. With this function, you can fashion imagined communities in which users express themselves alongside one another as if they were conversing.

Search: is most useful for pursuing conversations that arise from linked articles or specific events like the Oscars or the season premiere of Mad Men. An appropriately specific, directed search term can be just as useful as a hashtag. With the right search, you can figure out who has circulated a specific link and what they’re saying about it. The search function, like the hashtag, potentially offers as much clutter as it does useful information.

I think Scheinfeldt is on point when he argues that Twitter is “mostly about sharing ideas whereas Facebook is about sharing relationships.” Twitter, because of its lack of visual emphasis, is more conducive to fashioning a digital identity that is entirely based on ideas. Furthermore, Twitter feels more fast-paced and immediate than Facebook. One might not expect or even aspire to keep up with everything in one’s feed, and I would venture to say that many people don’t.

It’s difficult to construct a digital identity in 140 characters or fewer. For this reason, I believe Twitter, when considered in the context of meaningful ideas, is most effective when attached to an outlet that features a longer form of writing. The different kinds of voice complement one another, and embedding tweets in a blog post can prove especially useful, since every interactive element of the tweet is preserved. In this fashion, you can lend longevity to ideas that might otherwise be ephemeral.

Twitter seems to operate on a continuum between social networking tool and information feed; each user must carve out a space on that continuum. Perhaps this is why some find it confusing or overwhelming. One might be compelled to ask, “Why isn’t Twitter more than one thing?” I think it’s possible that Twitter is popular because its tensions and ambiguities produce interesting, unexpected results.

Possible Projects for Collaboration

I stumbled across this while I was doing some research for class today. It’s a list of DH projects that are currently looking for help, and most of the entries include what kind of work they’re looking for and who to get in contact with. It might be useful in looking for end-of-the-semester projects.

http://dhcommons.org/projects

What Makes a Game Political

Screenshot from Cart Life

Since we’re discussing games today (or, more specifically, games that try to teach you something), I thought this New Statesman article might be useful. I think politically charged games, in their attempts to make arguments through the limitations of a designed system, face challenges similar to those faced by the edutainment games we read about in the article assigned for class.

The New Statesman piece raises questions like “What makes a videogame political?” and “What happens when unforeseen consequences undermine a game’s intended argument?” The article highlights some games that have clear, political agendas and others that would probably prefer to be considered on the basis of artistic merit.

McDonald’s Video Game, mentioned in the title of the piece, is funny, dark, and disturbing. In this game, you play as the CEO and try to run the McDonald’s corporation successfully, which necessitates mistreating animals, injecting food with ambiguous drugs, regularly firing and rehiring employees, and strategically implementing devious ad campaigns. Though the game is effective and presents a persuasive case, an observation from Ian Bogost deftly addresses a hole in the game’s argument:

“It’s very anti-corporation, but a lot of students play it and say, ‘wow, I really empathise with the CEOs of multinational companies now – they have such hard jobs!’”

This pattern suggests to me that the game’s capacity to inspire empathy was underestimated by those who wished to harness its persuasive power. Before presenting McDonald’s Video Game as a persuasive tool, one assumes that the player’s response will be,  “I really hate all of these terrible things this corporation does to get ahead!” But instead, the player might think, “Wow it’s really hard to be a successful corporation!”

But perhaps it’s unfair to attribute this presumably unintended response to a flaw in the game’s design. Perhaps we should consider this response as one of the inherent complications of the society in which we live. After all, the game encourages its players to do terrible things by placing them in a situation in which they must do terrible things to succeed. This situation creates some level of empathy, whether it’s intended or not. In order to keep the company afloat, the player must exploit the system. When playing this game, morality isn’t just gray; it becomes invisible. One might argue that the cold, calculating numerical system underneath McDonald’s Video Game doesn’t carry any political connotations; the manner in which the system is dressed up is what makes it political.

After playing McDonald’s Video Game, one might be compelled to ask: Can McDonald’s be “fixed?” Is it possible for something like McDonald’s to exist without most of us looking the other way when it’s convenient? Does focusing exclusively on the corrupt practices of a corporation like McDonald’s allow us to overlook more important, underlying, systemic issues that allow those problems to exist in the first place? Such questions, I think, mark an effective political game. No one likes to be preached at, and if a game is just an excuse to shout at people on the internet, then I think that game must resign itself to irrelevance.

It’s interesting to see McDonald’s Video Game, Sweatshop, and Darfur is Dying discussed alongside more personal works such as Cart Life, Lim, and The Castle Doctrine, which were presumably designed with artistic intent. Usually people consider political games, educational games, and entertaining or artistic games as separate entities, but I think these categories are too easy, and they’re holding back people who are interested in games. This is probably why the designers of both Cart Life and The Castle Doctrine actively resist the labeling of their games as “political.” I think Richard Hofmeier’s reaction is more justified because I believe Cart Life is one of the most important, affecting games ever made. Until now, I hadn’t much considered its embedded political argument because I don’t think it’s as important to Cart Life’s aesthetic.

I first played Cart Life last year, while I was working at Starbucks. For me, playing this game was a poignant, personal experience that spoke to working in a retail environment, repeating the same claustrophobic set of routines every day, and trying to make a living while retaining emotional stability. But now that I think on it, it’s true that political statements are embedded in Cart Life. The game inspires empathy by evoking a feeling of constrictive repetition, which implies a political argument about the systemic problems of capitalism and bureaucracy. It’s not an easy argument that you can put in your pocket and save for your next political debate, but it is one that helped me try to become a better human.

Project Review: Electronic Literature Organization

Project name: Electronic Literature Organization

Web address: eliterature.org

Status: functional, non-profit

Affiliation: MIT

Focus: “Born-digital” literature directory.

I first heard about the Electronic Literature Organization through my Twitter feed. Having opted to follow a fair number of the digital humanities Twitter feeds from Daniel Cohen’s compiled list, I’ve recently been introduced to a plethora of DH projects. I chose ELO because I think it is a fair representation of both the best and worst aspects of current digital humanities, which makes it a fairly interesting project to study.

The most interesting thing that ELO does is to catalog obscure “born-digital literature” that absolutely would have been lost (and perhaps should have been) without their persistence in finding, salvaging, and storing the projects. For example, Anipoems, a set of black-and-white animated GIFs loosely characterized as “poems,” is described and linked from their directory. The page itself is typical of something homemade you might have encountered on the web in 1997, the year it was created. The site offers little in the way of entertainment or enjoyment, but from a historical perspective it is fascinating. This is the crux of the beauty and beastliness of ELO. Predominantly, they have simply created a list of literary “other” projects. Members can find and list the web locations of these projects, along with a synopsis of their function or history. Ultimately, that is a pretty boring idea, the realization of which is no less boring. However, there is some gold beneath the dross that is worth mining.

Though the catalog-style presentation of the project is somewhat unappealing, it functions in two rather successful ways. First, it functions as a free-to-the-public database of information that is searchable in much the same way that our library databases are. While it is debatable how useful this particular database would be for standard scholarly research, it does have the same bells and whistles as Scout or EBSCOhost, but without the associated cost. Users can browse the collection by author name, for example, and be presented with a list of all works by that author. They can also search and sort by title, year of publication, subject, and even by meta tag. Beyond the useful features, the site offers a second function that is perhaps its most viable justification for existence. It has compiled a record of known born-digital works, which historically is unprecedented. The very existence of such a record is a sort of archive of where the digital humanities has come from, and in that way the project represents one of the better aspects of what DH might be. The curator role is not the only way in which this project is a functional contributor to the digital humanities.

ELO has compiled two electronic literature “best of” collections and hosts those digital collections on their servers. They have also produced CD-ROMs of the same collection for offline consumption, a concept that seems close to their objectives but rather tenuous and unsustainable. They also host fairly large conferences to promote academic awareness and acceptance of born-digital works. This production comes at a cost, however, which is why ELO requires paid membership not only to contribute to the directory but also to attend their conferences. This cost is in addition to the steep price of the conference ticket itself, which is $150 (or $100 for lowly grad students). In my opinion, this type of pay-to-play membership excludes a large group of people who might otherwise contribute but may not be able to afford the associated fees. I recognize the logistical necessity of the costs, but I have to wonder how that affects the open-source vibe of typical digital humanists.

The final note about the ELO I’ll mention pertains to its interesting history. The project was initiated in 1999 as a non-profit organization, and found its first home at UCLA in 2001 as an inter-departmental collaboration between faculty and students there. In 2006 the project moved across the country to the Maryland Institute for Technology in the Humanities, before moving to MIT’s main campus in 2011. This is interesting for at least two reasons. First, it shows the project’s affiliation with big-name academic institutions, and speaks to those organizations’ interest in digital humanities projects. Secondly, it demonstrates the necessarily transient nature of such projects. Must DH projects follow grants and funding? If not, how dependent are these projects upon “high-end” schools that can afford to fund them? I would be interested in discovering the exact reasoning behind the project’s nomadic migrations, and I wonder how much of each decision was based on factors other than economic ones.

Projects and Funding

I was surprised at how exclusive some of the funded Digital Humanities projects seemed. For example, “From Chopin to Public Enemy” is a book written by a music professor. I cannot contribute to this kind of project, nor does the project offer any openness to contribution. The book has been published under a single name. A few of the projects seem to be capitalist ventures with a “digital humanities” aspect on the side: “A Saloon, an Auction House, an Undergarment Store” is a project in which old buildings are renovated in order to give visitors a sense of what shopping was like in history. It does say it is a museum, but it does not say it is non-profit. The same goes for “Hidden Treasure” which seems like a renovation project which gained funding by agreeing to host a few humanities presentations from time to time. I could be wrong.

Many projects seem genuinely rooted in the goals of the Digital Humanities. The projects under Education, like “Melding Literature and Medicine,” certainly have all the right intentions (though I’m not sure how useful this specific fusion really is). The projects under Preservation remind me of a discussion we had earlier in class: is preservation Digital Humanities? How much funding is needed to put videos that are already recorded online? (I’m looking at you, “Lightning Talks.”) And what use are these videos when the collection isn’t even comprehensive? “Chronicling America” seems to be a bit more useful; using QR codes allows access to their database from anywhere, arguably a lot more involved than simply digitizing material that already exists.

Response: Practicing the Digital Humanities – Daniel Cohen

The idea of engaging with new media types as they emerge is daunting, to say the least, but with the technological capabilities of modern computing and the internet, it seems entirely possible to accomplish. Cohen’s approach to new media publishing, especially the idea of automated publishing, is something that DH should be striving to harness. I think the emphasis of his argument should actually be on the urgency with which DH scholars should be anticipating and capitalizing on what should be a “humanist”-led endeavor (categorization, preservation, and publication of new human communications). Specifically, the idea of an algorithm that determines which digital publications are provoking the most discussion among scholars within a particular field, and then publishes those articles to some central “trending” web-accessible location, is exceptional. Cohen, in his address to the ACLS, sums up the crux of the issue: “if you don’t do something like this, someone else will.” I think that is the heart of the matter with the humanities (digital or analog)–if we opt to stagnate, the void left by the absence of what we should be doing WILL be filled by some other entity, be it CS or some other science. The blogs and tweets and status updates are the new, streaming source of information, generated straight from the source of a human mind. If DH can organize and develop some standard for channeling this creative energy, then there will be no MORE viable modern program of study. The whole prospect is intriguing, but the impetus to act is urgent–if we don’t embrace the new version of humanities study, it could easily be absorbed by some more eager and active field.
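The kind of trending algorithm Cohen gestures at can be sketched as recency-weighted mention counting. Everything below is invented for illustration: the half-life, the scoring function, and the sample mention data are my assumptions, not Cohen’s actual proposal.

```python
from math import exp, log

def trending_score(mention_ages_in_days, half_life=7.0):
    """Score a publication by its mentions, discounting each mention
    exponentially by how old it is (a mention loses half its weight
    every `half_life` days)."""
    decay = log(2) / half_life
    return sum(exp(-decay * age) for age in mention_ages_in_days)

# Invented sample data: ages (in days) of each mention of a publication.
mentions = {
    "paper-a": [0, 1, 1, 2],      # a fresh burst of discussion
    "paper-b": [30, 40, 45, 60],  # an older, fading conversation
}

# The "trending" list: publications ranked by recency-weighted buzz.
ranked = sorted(mentions, key=lambda p: trending_score(mentions[p]), reverse=True)
```

The design choice worth noting is the decay: a raw mention count would let an old, much-cited piece sit atop the trending list forever, while exponential discounting keeps the list tilted toward whatever scholars are discussing right now.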

Andrew Torget, Monday Feb 11th 3-4 PM

This event qualifies for your Theory review, but I encourage you all to attend regardless.
David

Speaker: Andrew Torget, “The Promise and Perils of Doing History in the Digital Age”
Mon, February 11, 3pm – 4pm
Gorgas Library, Room 205

What will become of the humanities in the Age of Google? Andrew Torget will talk about the unprecedented challenges and opportunities that face historians in the twenty-first century. Tracing the evolution of the digital humanities over the past two decades, Torget will explore how new research methods (such as geospatial analysis and text-mining) are creating a quiet revolution among historians, and what that could mean for how we understand the past.

Andrew J. Torget is a historian of nineteenth-century North America at the University of North Texas, where he directs the Digital History Lab. The founder and director of numerous digital humanities projects — including Mapping Texts, Texas Slavery Project, Voting America, and the History Engine — Andrew served as co-editor of the Valley of the Shadow project, and as the founding director of the Digital Scholarship Lab at the University of Richmond. The co-editor of several books on the American Civil War, Andrew has been a featured speaker on the digital humanities at Harvard, Stanford, Rice, and the National Archives in Washington, D. C. In 2011, he was named the inaugural David J. Weber Research Fellow at the Clements Center for Southwest Studies at Southern Methodist University. He is currently completing a book titled Cotton Empire: Slavery, the Texas Borderlands, and the Origins of the U.S.-Mexico War.