I found the PowerPoint on Intro to Relational Databases and SQL fairly straightforward and informative. However, a large portion of “Why Databases?” was unclear without the author’s explanation. In an attempt to expand on the argument for databases in Digital Humanities, I’d like to propose a possible interpretation of the section so nicely decorated with Simpsons characters. As an introduction to the “One Idea” model, I wonder if the photos of the Table of Contents and Index represent inflexible data sets: an old model for cross-referencing that is static and limited compared to databases. The “One Idea” seems to be something that could be stored as a database or as a document, perhaps a research topic about which a scholar wants to gather and curate essays. The first contributor adds four essays to this document or database. The second contributor adds three essays, which become part of the curated information on the topic. The pattern continues until the “One Idea” includes 18 essays. If the essays and the data about them, including subject tags, are stored in a database rather than only in document files, it becomes possible through querying to pick out which essays meet certain additional criteria, in this case perhaps those that reference Shakespeare’s works. Using the database, this list of essays (generated by querying the subject tag “Shakespeare”) can also be sorted by other data attached to each essay, perhaps in this case the year the essay was written. This allows for much more flexibility and more useful searching in a digital archive than links to documents alone provide. I’d be interested to hear other theories about this section, if anyone else was curious about its contribution to the overall argument.
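To make the querying idea concrete, here is a minimal sketch using Python’s built-in sqlite3 module. The table layout, essay titles, and tags are all invented for illustration (nothing here comes from the PowerPoint itself): essays carry a year, subject tags live in a second table, and one query picks out the Shakespeare essays sorted by year.

```python
import sqlite3

# Hypothetical schema: one table of essays, one table of subject tags.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE essays (id INTEGER PRIMARY KEY, title TEXT, year INTEGER);
CREATE TABLE tags (essay_id INTEGER REFERENCES essays(id), subject TEXT);
""")
conn.executemany("INSERT INTO essays VALUES (?, ?, ?)", [
    (1, "Staging the Tempest", 1998),
    (2, "Digital Archives and Memory", 2005),
    (3, "Hamlet and the Printing Press", 1987),
])
conn.executemany("INSERT INTO tags VALUES (?, ?)", [
    (1, "Shakespeare"), (2, "archives"),
    (3, "Shakespeare"), (3, "print culture"),
])

# Pick out the essays tagged "Shakespeare", sorted by the year written.
rows = conn.execute("""
    SELECT e.title, e.year
    FROM essays e JOIN tags t ON t.essay_id = e.id
    WHERE t.subject = 'Shakespeare'
    ORDER BY e.year
""").fetchall()
print(rows)
```

The same data could answer other questions (essays per contributor, tags per decade) just by changing the query, which is exactly the flexibility a folder of documents lacks.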
I guess I’m officially a nerd, because TEI is exciting. Until this reading, I had never heard of SGML, much less that HTML originated from it. Since TEI is also a form of SGML, it and HTML must be siblings, or at least cousins (although HTML sounds like the black sheep of the SGML family). Fun! I also appreciate the explanation of XML, as I wasn’t really sure what it is. My only complaints about the reading were all the broken links (which made it hard to understand some of the instructions, since they referred to documents that aren’t there anymore), the typos, and the very 1990s frame formatting. I’m guessing “A Very Gentle Introduction” is also very old in computer years.
I was interested in the idea that “preservation is a key problem for an emerging digital culture,” something I hadn’t really considered before this class. Our discussions on bitrot and other issues of digital deterioration have helped make me more aware of the problem, but I’m still a bit stuck in the mindset of “going digital means preserving.” Part of my impetus for my barrage balloon DH project is to preserve original photos and other balloon-related memorabilia in digital scans and to disseminate them online. Sharing the photos I collect is still best accomplished digitally, but could the actual photographs be better preserved than the digital images I make of them, despite the threat of fire, acid, vermin, etc.? To bring my questions more in line with textual documents, what about my very fragile copy of the World War II children’s book Boo-Boo the Barrage Balloon? What can TEI do for Boo-Boo and his compatriots Blossom and Bulgy?
After reading about TEI’s encoding options for various text elements, I can guess that TEI would let me encode the text of Boo-Boo the Barrage Balloon with indicators of quotations and formatting, milestone events (like when Boo-Boo saves London from the Nazis), a bibliography, and a header about the book itself. (Incidentally, I appreciate Mueller’s inclusion of hyperlinks in his TeiXBaby language to update TEI for the Web.) I could then use CSS to format the text of Boo-Boo to make it approximate the text in the book – though without the charming illustrations. Once I get the hang of TEI, I might take a stab at encoding Boo-Boo. I doubt he will be of much interest to scholars, but it will give me some practice!
Speaking of encoding, the reading’s instructions for encoding TEI were a bit confusing to me, although familiarity with HTML helps (especially with containers like <head>, <div>, and <p>, which also appear in HTML, though TEI’s <head> marks a heading within a division rather than document metadata). I expect it will make more sense once I actually start encoding, and I’ll learn the language as I go. I can’t wait to get started!
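As a first stab at what encoding Boo-Boo might look like, here is a much-simplified, hypothetical TEI-style fragment. The element names (teiHeader, fileDesc, titleStmt, div, head, p) are real TEI conventions, but the content is invented, and Python’s standard ElementTree parser is used only to confirm the fragment is well-formed — it does not validate against the TEI schema.

```python
import xml.etree.ElementTree as ET

# A hypothetical, much-simplified TEI-style fragment for
# Boo-Boo the Barrage Balloon: a header about the book,
# then one division of the text with a heading and a paragraph.
tei = """<TEI>
  <teiHeader>
    <fileDesc>
      <titleStmt><title>Boo-Boo the Barrage Balloon</title></titleStmt>
    </fileDesc>
  </teiHeader>
  <text>
    <body>
      <div type="chapter">
        <head>Boo-Boo Saves London</head>
        <p>Boo-Boo rose over the rooftops...</p>
      </div>
    </body>
  </text>
</TEI>"""

root = ET.fromstring(tei)  # raises ParseError if not well-formed
print(root.find(".//title").text)  # -> Boo-Boo the Barrage Balloon
```

A real TEI document would also need the TEI namespace and a fuller header, but even this skeleton shows how the header about the book and the milestone events could be containers in one tree.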
Text encoding is a language. Through a post-colonial lens, it is of interest that this language is construed as “universal” despite the fact that it is designed around English alphanumeric characters. After understanding the basic references, one is then able to implement and categorize various languages; French, for example, was shown in the Mueller introduction. Reading this introduction was similar to reading an instruction manual, although I found much of the information helpful in laying the foundation for what TEI is and what it is capable of creating.
One specific point of interest in relation to this topic is the idea of digital decomposition. The introduction itself seems to have fallen victim to a lack of upkeep, as some of the hyperlinks contained within the document lead readers to error messages.
Despite this, I found the article to be insightful and a good reference for possible future endeavors within the TEI arena. Some of the other links, such as the First World War Poetry Digital Archive, did function, and provided necessary examples of the types of output that can be created by following the various steps listed. Digital upkeep is important and necessary when the only other available form of preserving literary texts is the costly and burdensome literal preservation of books as tangible objects.
The other somewhat revelatory tidbit of information that I gleaned from the text had to do with the somewhat poetic form of TEI. One is able to create elegant code. That is, code that performs its function while simultaneously appearing clean and extremely concise. This type of code is only functional once it successfully passes through the “Validator,” which is ripe for poetic/philosophical inquiry in its own right.
The second page we were led to on the syllabus was also very helpful and, in my opinion, much more beneficial for would-be TEI encoders. Here we have a straightforward tutorial, complete with examples, tests, exercises, et cetera, that helps get those who are interested actually doing the task rather than simply talking about it. As with any language, immersion is the most successful strategy for practicing and retaining what you learn. This is a great way of learning TEI, at least at a basic level.
I am doing this blog entry differently than normal — I am writing my questions as I go along and I will strike them if they are answered after further reading. This discusses the origins of TEI.
1. Ok, so SGML is a markup meta-language: a set of rules for making a markup language. An SGML-conformant language is composed of containers and rules about those containers and their “content models.” A “document type definition (DTD)” spells out those rules and itself follows the conventions specified by SGML.
2. Why would I pair “declarative markup” with a “style sheet” instead of just making the words styled/formatted correctly in the first place? The author says that separating declarative markup from styling feels counterintuitive because the marriage of style and structure is deeply ingrained in our beliefs.
3. The author says style and structure are kept separate in markup, but then he says:
“The strength and weakness of SGML derive from the same fact: you need a document type definition, which means that you have to think ahead. Writing in SGML or any of its variants involves a willingness to shoulder upfront investments for the sake of downstream benefits.”
So are style and structure really separate? We still marry style and structure, because we are thinking ahead about the DTD and consequently design the SGML around it.
XML is the answer to the problems of SGML and HTML. “You can use XML without thinking ahead and make up your elements en route as long as they nest within each other. This is called writing “well-formed” rather than “valid” XML. Purists discourage this but people will do it anyhow.” So I was essentially correct in my question above; they created a spin-off language known as XML to allow us to marry style and structure without losing the computing power of SGML.
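The quotation’s distinction between “well-formed” and “valid” can be demonstrated directly. A plain XML parser checks only well-formedness (that elements nest properly); validity would additionally require checking against a DTD or schema, which Python’s standard-library ElementTree does not do — so this sketch shows only the well-formedness half, with made-up element names:

```python
import xml.etree.ElementTree as ET

def well_formed(doc: str) -> bool:
    """True if the document parses, i.e. every element nests properly."""
    try:
        ET.fromstring(doc)
        return True
    except ET.ParseError:
        return False

# Elements invented "en route" are fine, as long as they nest...
print(well_formed("<poem><line>Shall I compare thee</line></poem>"))  # True

# ...but a tag that closes outside its parent is rejected by the parser.
print(well_formed("<poem><line>Shall I compare thee</poem></line>"))  # False
```

Checking that the same document is also valid TEI (or valid anything) would be a separate step against a declared document type, which is exactly the upfront investment the well-formed style skips.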
Seriously, why not HTML if it is universal? “HTML always lives in sin because it constantly violates the cardinal rule of separating information from the mode of its display.”
But why is it important to separate information from the mode of its display? Why not call it humanities computing if we want to separate the two? “If you want to use the Internet to move stuff in and out of databases, it becomes very useful to have a markup language with clearly defined containers and content models. That is the impetus behind XML, the “Extensible Markup Language,” which will supersede HTML wherever complex and precise information is at a premium.”
I thought the quotation from the playwright Richard Foreman included at the end of Carr’s article was interesting: the idea that if we lose the ability to read and acquire knowledge, we will also lose our “inner repertory of dense cultural inheritance.” Cultural inheritance, I think, applies as much to the ways in which we acquire knowledge as to the ways we produce the art that becomes part of our culture. A lot of the information in the article rang true for me, not because I’m sure search engines and other sites are somehow intercepting my ability to concentrate, but because I am someone who inherently does not like to read. I would rather just go make something.
I listened to a radio program recently that discusses the impact of new forms of distraction on creativity and goal-setting behavior.
The program puts forward the idea that because the Internet becomes a way for us to be easily distracted while reading, it short-circuits our minds’ natural capacity to wander. So someone who might be bored while reading and inclined to daydream, or someone who might be staring out the window on the bus trip home, no longer just “thinks.” Apps on smartphones (Instagram, Facebook, Pinterest, etc.) allow our brains to be constantly engaged with something. I find that if I am writing in a word document, I am inclined to research terms and ideas impulsively to make sure that I am getting my ideas “right,” even in poems. The only time I (honestly) feel like I am working to the full extent of my creative capacity is when I am working on paper, in a medium that involves mostly drawing. I wonder what impact new forms of distraction might have on future generations. I suppose if people are losing the capacity to be creative, then they are already predisposed to the kind of automated, “flattened” thinking Carr is afraid of.
The first question that is often asked when I talk to colleagues about large amounts of encoded text is what such a data set is for, or how to use it. Encoding every word of Shakespeare does not strike most academic or interested parties as particularly useful. Isn’t it just a concordance? But the advantage of a digitally encoded corpus is the ability to collaborate with an expert in writing applications that manipulate the data through metadata. This process requires a good deal of imagination to form questions with which we can ply a corpus of text. The applications and scripts allow interested parties two main options. The first is the novel or unlooked-for discovery within a body of text that was previously inaccessible, or obfuscated by the sheer amount of data. The second is to pose a large-scale question (one that may need revision) that previously could only be approached through speculation or generalization. I am not claiming that big data renders these types of rhetorical and logical moves obsolete, but it can open doors for questions that seem out of reach for individual researchers.
Let me try to be a little more specific. Topic modelling is one way that we are able to dynamically sift through large amounts of information. David M. Blei provides an introduction to the concept here:
The trick is to figure out the ways in which interested parties can, as he puts it, “zoom in” and “zoom out” to relevant information and data. It allows readers the opportunity to find patterns and connections, and to manipulate discourse. A more specific example still (and one that I find fascinating as a fan and an intellectual) is the Philip K. Dick android built by Hanson Robotics and researchers at the Institute for Intelligent Systems, like Andrew Olney at the University of Memphis. The android was built to look like Dick, complete with beard, eyes, and facial expressions, but more than this, it was programmed to speak like him.
The software developers used typical conversational models called bots to generate the grammatical glue of the conversation, but they also used topic models and concept maps of Dick’s corpus of writing to create responses to verbal questions drawn from the writer’s own words. After hearing Olney give a lecture on how he managed to achieve a conversable Philip K. Dick android, I not only got excited about the possibility of electric sheep, but also about new ways in which to manipulate discourse. I am still looking for the right questions to ask of this methodology, but tracking concepts between images, poetry, fiction, non-fiction, journals, etc. can allow us to recreate not just discourse, but echoes of a discussion.
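Real topic models such as Blei’s LDA infer their topics statistically from the corpus itself, but the basic “zooming” move can be illustrated with a deliberately naive sketch. Here the two topic vocabularies and the two toy documents are hand-invented, which is precisely what an actual topic model would not require; the point is only to show how scoring a document against topic vocabularies lets you surface the relevant texts.

```python
from collections import Counter

# Hand-built "topics" as word lists. A real topic model (e.g. LDA)
# would infer these word distributions from the corpus itself.
topics = {
    "androids": {"android", "robot", "electric", "machine"},
    "memory":   {"memory", "recall", "dream", "past"},
}

documents = [
    "the android wondered if the machine could dream",
    "a dream of the past is a kind of recall a kind of memory",
]

def topic_scores(doc: str) -> Counter:
    """Count how many words from each topic's vocabulary appear in the document."""
    words = doc.split()
    return Counter({name: sum(w in vocab for w in words)
                    for name, vocab in topics.items()})

for doc in documents:
    scores = topic_scores(doc)
    # "Zoom in": surface the dominant topic for each document.
    print(scores.most_common(1)[0][0], dict(scores))
```

Note that the first document scores on both topics (“dream” belongs to the memory vocabulary), which hints at the real insight of topic modelling: documents are mixtures of topics, not members of a single category.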
I read Carr’s article “Is Google Making Us Stupid?” last Wednesday, on a day when I couldn’t brain. It was one of those days when I couldn’t articulate my thoughts, when I had to sit and stare at something I’d written before it made sense… and unfortunately, it was a day I taught two composition classes. I felt sorry for my students as I struggled to explain the rhetorical situation in a way that made sense. I posted the above meme on my Facebook, as I do on all days when I have the dumb. (The plush in the meme is Styx, a handmade Drifloon [a Pokémon] plush I bought on eBay. His facial expression sums up how I felt.)
The one thing I was able to process was Carr’s article, and it presented the first argument against DH with which I could agree: “The result is to scatter our attention and diffuse our concentration” (par. 19). I found his claims to be well-supported and his tone far more measured than that of most of the authors we’ve read, both pro- and anti-DH. I also appreciated his balance of personal and anecdotal experience with hard research—and that he offered a counter-argument: “Maybe I’m just a worrywart. Just as there’s a tendency to glorify technological progress, there’s a countertendency to expect the worst of every new tool or machine” (par. 31). (And I loved his references to Hal. Since I was a kid, I’ve been defending Hal’s actions and his humanity when he was ordered to act against his protocol in keeping the true nature of the mission to Jupiter secret from Dave and Frank. But that’s another blog post.)
Still, I don’t think Carr is just a worrywart, although I only have anecdotal evidence: my own experience reading and teaching. I myself don’t have trouble concentrating on and processing long printed texts (not yet, anyway). I can concentrate on books and articles just fine. However, I experience Carr’s description of drifting concentration when reading online. I get distracted by other Firefox tabs, or my eyes jump over the page. I think this reaction is the main reason I prefer to print out articles and read them on paper. I’ve also noticed how television has come to mimic the Internet, which Carr points out in paragraph 20. I had never connected the infuriating text crawls and lists of which segments are coming up next with the Internet, but now it makes perfect sense.
As for teaching, I can’t vouch for how my students read, but I’ve seen the Internet creep into their writing, from texting abbreviations like “u” turning up in academic essays to entire papers which seem to be typed on cell phones judging from odd, “auto-correct” style substitutions for words (not to mention the complete lack of capital letters). Much composition theory has turned to multi-modality, coming up with ways to use texting, Tumblr, Facebook, and other forms of Internet composition to teach. I tout these methods as a way to use students’ extracurricular writing strategies, and others embrace multi-modality as a way to make composition easier for students of different abilities. But Carr makes me wonder if we’re just old media playing by the new media rules (par. 20). Is composition’s real reason for adapting our teaching to the Internet that we want to stay relevant?
Inside the walls of universities, colleges of education are telling future teachers, “It’s okay that students can’t remember what year Christopher Columbus sailed the ocean blue; they can look it up on the internet.” Professors of teacher education say that what is important is that students can use, manipulate, analyze, and criticize information, not that they can remember a particular fact. Professors of teacher preparation embrace technology, or at least accept that there is no reversing technological advances, so why fight it? The author mentioned Socrates and his disdain for the written word; Socrates believed the technology would make us less intelligent because the brain, essentially, would become less sharp from less use. If we look at the brain like a computer, then not having to retain facts leaves more memory and processing power for the important functions like critical analysis and evaluation.
In terms of technology and computational outputs, though, there is usually not room for intersectionality or complicated answers. Computation lends itself more to math and the sciences when it comes to solving problems: find an objective output. Answers about the humanities and social issues, however, are usually less than objective. The humanities, and answers to life’s great questions, require interconnectedness; they sometimes go unanswered, or only get more complicated as more evidence is uncovered.
So the technical aspects of digitization and technological improvement free up energy formerly spent on memorizing facts and information for processes like analysis. But with a loss of touch with the quintessential aspect of the humanities – humanness – objective answers will be promoted and will become the process by which the complicated, non-objective questions about life are answered. So what is the answer to “is Google making us stupid?” The human says it is not as simple as an algorithmic binary of yes or no; it is to be contemplated on a level deeper than Google’s ability to piece together the right process with the right sequence of words.
Nicholas Carr’s prognosis for the human brain is that the omnipresent machine is shrinking our capacity for sustained thought, and there’s nothing we can do about it. The issue of control seems relevant to this discussion; Carr thinks that we can’t control our own brains when faced with the Internet.
I identify with his experience: “what the Net seems to be doing is chipping away my capacity for concentration and contemplation.” If I’m working on my computer, it can be hard not to check that e-mail when it pops up, or to look up or handle something that I suddenly remember. When I’m reading a printed book, I stay focused for much longer, but phone notifications still lure me in to check them sometimes. If I’m writing, and, as usual, the words aren’t flowing easily, it’s especially tempting to give up every so often for a minute or two, distract myself, and then dive back in. I know this hurts my productivity and concentration, and I’ve been trying to stop, with partial success. What strikes me about my own experience with this issue are my feelings of control, or lack thereof. When I give in to the urge to check Facebook, I know I’m making a choice, however compulsive. When I resist, it’s a small victory for my willpower. Sometimes I can almost feel my brain telling me to do what I’ve slowly been programming it to do – distract itself. My actions have led to its “rewiring,” but I think (/hope?) I can reprogram new habits.
Carr comes near the issue of control when he discusses the introduction of clocks: “In deciding when to eat, to work, to sleep, to rise, we stopped listening to our senses and started obeying the clock.” We decide how to schedule our lives. We can choose to listen to our senses, or we can choose to listen to the clock. It takes more self-control to continue the natural action of obeying one’s internal clock when a clock is present. Similarly, it takes more self-control to focus on reading or writing one text for an extended period of time when the Internet stares you in the face, overflowing with e-mails, social media, bullet point posts, and kittens. But these carrots can be refused, if not all the time, at least within working hours.
James Olds, a professor of neuroscience at George Mason University, “says that even the adult mind ‘is very plastic.’ Nerve cells routinely break old connections and form new ones. ‘The brain,’ according to Olds, ‘has the ability to reprogram itself on the fly, altering the way it functions.’” Carr uses this source to show that the Internet has reprogrammed our brains, but it equally indicates that we can reprogram them back the way we want them. Habits are hard for our brains to break, but we can break them. I don’t mean to suggest that we can or should stop using the Internet, but I think that with determined self-control we can choose to regain, to some degree, our attention spans and our focused and analytical method of thinking.
The very first thing I noticed in reading the Carr article, “Is Google Making Us Stupid?” was the fact that the editors missed an end quote in the second sentence. Pity. While I read the article, my mind immediately jumped to categorize Carr as a Luddite—a point he touches on very briefly near the conclusion as a means to dismiss such a category. My largest concern, though, comes from passages such as, “When the Net absorbs a medium, that medium is re-created in the Net’s image. It injects the medium’s content with hyperlinks, blinking ads, and other digital gewgaws…” Well, first off—nice use of the term “gewgaws.” Secondly, and perhaps more importantly, what this statement contains that is subject to critical deconstruction is the implication that the “Net” is at all separate from human thought or action. Carr grants that the “Net” influences the way in which we think, but he fails to recognize that the human hive-mind creates the content and structure that “It” operates within. To speak of the “singularity” is another topic altogether and would slightly change the thesis I might offer.
Going back to the Luddite business—of course there has been skepticism about a vast variety of new technologies as they have been implemented throughout history. One example in particular that Carr mentioned is the transition from oral traditions to the written word. It would be foolish at this point to argue that the Internet has not drastically changed the way in which human beings interact and process information. The question that remains is whether the change is an improvement.
The second instance of Carr’s tendency to present the Internet as a separate entity from those who founded, wrote, and continue to grow the Net is a line like, “Never has a communications system played so many roles in our lives—or exerted such broad influence over our thoughts—as the Internet does today.” Here again, I would suggest that the separation between our thoughts and the Internet is a false dichotomy. While the Internet continues to exert an influence over our mental processes, it is very much a product of those same mental processes.
Of course, striking fear into the common reader is one tactic a writer might employ in order to sell a newspaper. What this article does not offer is any suitable alternative or solution to its cry of wolf. But Carr takes it even further when he vilifies those who are working to create the next development of the human psyche by qualifying their efforts: “Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ.” Ultimately, while I thought that Carr brought up interesting points and was published (online) at a venue I respect, the article fell short of what I would consider a revelatory read.