Category Archives: Uncategorized

Validator!

Text encoding is a language. Through a post-colonial lens, it is interesting that this language is construed as “universal” despite being designed around English alphanumeric characters. After understanding the basic references, one can implement and categorize various languages; French, for example, was shown in the Mueller introduction. Reading this introduction was similar to reading an instruction manual, though I found much of the information helpful in laying a foundation for what TEI is and what it is capable of creating.

One specific point of interest in relation to this topic is the idea of digital decomposition. The introduction itself seems to have fallen victim to a lack of upkeep, as some of the hyperlinks within the document lead readers to error messages, such as the following:
[Screenshot: a broken-link error message, captured 2015-02-16 at 8:53 PM]

Despite this, I found the article to be insightful and a good reference for possible future endeavors within the TEI arena. Some of the other links, such as the First World War Poetry Digital Archive, did function and provided useful examples of the kinds of output that can be created by following the various steps listed. Digital upkeep is important and necessary when the only other available means of preserving literary texts is the costly and burdensome literal preservation of books as tangible objects.

The other revelatory tidbit of information that I gleaned from the text had to do with the somewhat poetic form of TEI. One is able to create elegant code: code that performs its function while simultaneously appearing clean and extremely concise. This type of code is only functional once it successfully passes through the “Validator,” which is ripe for poetic and philosophical inquiry in its own right.

The second page we were led to on the syllabus was also very helpful and, in my opinion, much more beneficial for would-be TEI encoders. Here we have a straightforward tutorial, complete with examples, tests, and exercises, that gets those who are interested actually doing the task rather than simply talking about it. With any language, immersion is the most successful strategy for retention and practice. This is a great way of learning TEI, at least at the basic level.
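To give a flavor of the kind of exercise such a tutorial walks through, here is a tiny, hypothetical TEI-style fragment (the verse lines are the opening of McCrae’s “In Flanders Fields”; the markup is my own sketch, not taken from the tutorial), parsed with Python’s standard library:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical TEI-style document: header, then encoded verse lines.
tei = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc>
      <titleStmt><title>Sample Poem</title></titleStmt>
    </fileDesc>
  </teiHeader>
  <text>
    <body>
      <lg type="stanza">
        <l n="1">In Flanders fields the poppies blow</l>
        <l n="2">Between the crosses, row on row</l>
      </lg>
    </body>
  </text>
</TEI>"""

ns = {"tei": "http://www.tei-c.org/ns/1.0"}
root = ET.fromstring(tei)

# Because the structure is explicit, a script can pull out each numbered line.
for line in root.findall(".//tei:l", ns):
    print(line.get("n"), line.text)
```

The point of the exercise is that once the verse is wrapped in containers, the numbering, stanza grouping, and text all become queryable rather than merely visible.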

TEI

I am doing this blog entry differently than normal — I am writing my questions as I go along, and I will strike them if they are answered by further reading. This post discusses the origins of TEI.

1. OK, so SGML is a markup meta-language: a set of rules for making a markup language. An SGML-conformant language is composed of containers, plus rules about those containers and their “content models.” A “document type definition” (DTD) follows the rules specified by SGML.
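A DTD is essentially a set of content models: a statement of which containers may appear inside which. A toy sketch of that idea (the element names and rules here are invented for illustration, not real SGML):

```python
import xml.etree.ElementTree as ET

# Hypothetical content models: each element maps to the children it may contain.
CONTENT_MODELS = {
    "chapter": {"title", "p"},
    "title": set(),   # text only, no child elements
    "p": set(),       # text only, no child elements
}

def validate(elem):
    """Recursively check every child against its parent's content model."""
    allowed = CONTENT_MODELS.get(elem.tag, set())
    for child in elem:
        if child.tag not in allowed:
            return False
        if not validate(child):
            return False
    return True

doc = ET.fromstring("<chapter><title>One</title><p>Call me Ishmael.</p></chapter>")
bad = ET.fromstring("<title><chapter>backwards</chapter></title>")
print(validate(doc), validate(bad))
```

A real DTD expresses much more (ordering, repetition, attributes), but the kernel is the same: the rules about containers are declared up front, and documents are checked against them.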

2. Why would I put “declarative markup” with a “style sheet” instead of just making the words styled/formatted correctly in the first place? The author says combining a style sheet with declarative markup is counterintuitive because the marriage of style and structure is deeply ingrained in our beliefs.

3. The author says style and structure are disjointed in markup, but then he says:

“The strength and weakness of SGML derive from the same fact: you need a document type definition, which means that you have to think ahead. Writing in SGML or any of its variants involves a willingness to shoulder upfront investments for the sake of downstream benefits.”

So are style and structure really disjointed? We still marry style and structure because we’re thinking ahead about the DTD and consequently design the SGML.

XML is the answer to the problems of SGML and HTML. “You can use XML without thinking ahead and make up your elements en route as long as they nest within each other. This is called writing ‘well-formed’ rather than ‘valid’ XML. Purists discourage this but people will do it anyhow.” So I was essentially correct in my question above: they created a spin-off language, XML, to allow us to marry style and structure without losing the computing power of SGML.
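The well-formed/valid distinction can be demonstrated with Python’s standard-library XML parser, which checks well-formedness only (validating against a DTD would need an external tool):

```python
import xml.etree.ElementTree as ET

# Well-formed: tags nest and close properly, so the parser accepts it,
# even though no DTD anywhere defines what a <whatever> element is.
note = ET.fromstring("<note><whatever>made up en route</whatever></note>")
print("parsed:", note.tag)

# Not well-formed: the tags overlap instead of nesting, so parsing fails.
try:
    ET.fromstring("<note><b>overlap</note></b>")
except ET.ParseError as err:
    print("not well-formed:", err)
```

Well-formedness is the cheap, local check; validity is the “thinking ahead” that SGML demanded.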

4. Seriously, why not HTML if it is universal?  “HTML always lives in sin because it constantly violates the cardinal rule of separating information from the mode of its display.”

5. But why is it important to separate information from the mode of its display? Why not call it humanities computing if we want to separate the two? “If you want to use the Internet to move stuff in and out of databases, it becomes very useful to have a markup language with clearly defined containers and content models. That is the impetus behind XML, the ‘Extensible Markup Language,’ which will supersede HTML wherever complex and precise information is at a premium.”

A link between boredom and creativity?

I thought the quotation at the end of Carr’s article from the playwright Richard Foreman was interesting: the idea that if we lose the ability to read and acquire knowledge, we will also lose our “inner repertory of dense cultural inheritance.” Cultural inheritance, I think, applies as much to the ways we acquire knowledge as to the ways we produce the art that becomes part of our culture. A lot of the information in the article rang true for me, not because I’m sure search engines and other sites are somehow intercepting my ability to concentrate, but because I am someone who inherently does not like to read. I would rather just go make something.

I listened to a radio program recently that discussed the impact of new forms of distraction on creativity and goal-setting behavior.

http://www.npr.org/blogs/alltechconsidered/2015/01/12/376717870/bored-and-brilliant-a-challenge-to-disconnect-from-your-phone

The program puts forward the idea that because the Internet gives us an easy distraction while reading, it short-circuits our minds’ natural capacity to wander. So someone who might be bored while reading and inclined to daydream, or someone staring out the window on the bus trip home, no longer just “thinks.” Apps on smartphones (Instagram, Facebook, Pinterest, etc.) allow our brains to be constantly engaged with something. I find that if I am writing in a word document, I am inclined to research terms and ideas impulsively to make sure that I am getting my ideas “right,” even in poems. The only time I (honestly) feel I am working to the full extent of my creative capacity is when I am working on paper, in a medium that involves mostly drawing. I wonder what impact new forms of distraction might have on future generations. I suppose if people are losing the capacity to be creative, then they are already predisposed to the kind of automated, “flattened” thinking Carr is afraid of.

Do DHers Dream of Electric Questions?

The first question colleagues often ask when I talk about large amounts of encoded text is what such a data set is for, or how to use it. Encoding every word of Shakespeare does not strike most academics or interested parties as particularly useful. Isn’t it just a concordance? But the advantage of a digitally encoded corpus is the ability to collaborate with an expert in writing applications that manipulate the data through its metadata. This process requires a good deal of imagination to form questions with which we can ply a corpus of text. The applications and scripts allow interested parties two main options. The first is novel or unlooked-for discovery within a body of text that was previously inaccessible, or obscured by the sheer amount of data. The second is to design a large-scale question (one that may need revision) that previously required speculation or generalization. I am not claiming that big data renders these types of rhetorical and logical moves obsolete, but it can open doors to questions that seem out of reach for individual researchers.

Let me try to be a little more specific. Topic modelling is one way we are able to dynamically sift through large amounts of information. David M. Blei provides an introduction to the concept here:

http://www.cs.princeton.edu/~blei/papers/Blei2012.pdf
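Real topic modeling of the kind Blei describes (LDA) needs a statistics library, but the raw material it starts from, per-document word counts, can be sketched with the standard library alone (the two “documents” below are invented for illustration):

```python
from collections import Counter

# Toy corpus, invented for illustration only.
docs = {
    "doc1": "the android speaks like the writer speaks",
    "doc2": "the archive preserves the writer and the poem",
}

# Bag-of-words counts: the input representation topic models build on.
counts = {name: Counter(text.split()) for name, text in docs.items()}

# "Zooming in": the most frequent words in a single document.
print(counts["doc1"].most_common(2))

# "Zooming out": vocabulary shared across the corpus hints at corpus-wide themes.
shared = set(counts["doc1"]) & set(counts["doc2"])
print(sorted(shared))
```

A topic model then goes further, inferring clusters of co-occurring words (“topics”) and describing each document as a mixture of them, which is what makes the zooming in and out dynamic rather than manual.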

The trick is to figure out the ways in which interested parties can, as he puts it, “zoom in” and “zoom out” to relevant information and data. It allows readers the opportunity to find patterns and connections, and to manipulate discourse. A more specific example still (and one that I find fascinating as a fan and as an intellectual) is the Philip K. Dick android built by Hanson Robotics and researchers at the Institute for Intelligent Systems, like Andrew Olney at the University of Memphis. The android was built to look like Dick, complete with beard, eyes, and facial expressions, but more than this, it was programmed to speak like him.

http://youtu.be/1bYiXIVyguU

The software developers used typical conversational models called bots to generate the grammatical glue of the conversation, but they also used topic models and concept maps of Dick’s corpus of writing to create responses to verbal questions drawn from the writer’s own words. After hearing Olney lecture on how he managed to build a conversable Philip K. Dick android, I got excited not only about the possibility of electric sheep, but about new ways to manipulate discourse. I am still looking for the right questions to ask of this methodology, but tracking concepts across images, poetry, fiction, non-fiction, journals, and so on can allow us to recreate not just discourse, but echoes of a discussion.

I Can’t Brain Today. I Have the Dumb.

I read Carr’s article “Is Google Making Us Stupid?” last Wednesday, on a day when I couldn’t brain. It was one of those days when I couldn’t articulate my thoughts, when I had to sit and stare at something I’d written before it made sense… and unfortunately, it was a day I taught two composition classes. I felt sorry for my students as I struggled to explain the rhetorical situation in a way that made sense. I posted the above meme on my Facebook, as I do on all days when I have the dumb. (The plush in the meme is Styx, a handmade Drifloon [a Pokémon] plush I bought on eBay. His facial expression sums up how I felt.)

The one thing I was able to process was Carr’s article, and it presented the first argument against DH with which I could agree: “The result is to scatter our attention and diffuse our concentration” (par. 19). I found his claims to be well-supported and his tone far more measured than that of most of the authors we’ve read, both pro- and anti-DH. I also appreciated his balance of personal and anecdotal experience with hard research—and that he offered a counter-argument: “Maybe I’m just a worrywart. Just as there’s a tendency to glorify technological progress, there’s a countertendency to expect the worst of every new tool or machine” (par. 31). (And I loved his references to Hal. Since I was a kid, I’ve been defending Hal’s actions and his humanity when he was ordered to act against his protocol in keeping the true nature of the mission to Jupiter secret from Dave and Frank. But that’s another blog post.)

Still, I don’t think Carr is just a worrywart, although I only have anecdotal evidence: my own experience reading and teaching. I myself don’t have trouble concentrating on and processing long printed texts (not yet, anyway). I can concentrate on books and articles just fine. However, I experience Carr’s description of drifting concentration when reading online. I get distracted by other Firefox tabs, or my eyes jump over the page. I think this reaction is the main reason I prefer to print out articles and read them on paper. I’ve also noticed how television has come to mimic the Internet, which Carr points out in paragraph 20. I had never connected the infuriating text crawls and lists of which segments are coming up next with the Internet, but now it makes perfect sense.

As for teaching, I can’t vouch for how my students read, but I’ve seen the Internet creep into their writing, from texting abbreviations like “u” turning up in academic essays to entire papers which seem to be typed on cell phones judging from odd, “auto-correct” style substitutions for words (not to mention the complete lack of capital letters). Much composition theory has turned to multi-modality, coming up with ways to use texting, Tumblr, Facebook, and other forms of Internet composition to teach. I tout these methods as a way to use students’ extracurricular writing strategies, and others embrace multi-modality as a way to make composition easier for students of different abilities. But Carr makes me wonder if we’re just old media playing by the new media rules (par. 20). Is composition’s real reason for adapting our teaching to the Internet that we want to stay relevant?

Let me Google that for you

Inside the walls of universities, colleges of education are telling future teachers, “It’s okay that students can’t remember what year Christopher Columbus sailed the ocean blue; they can look it up on the internet.” Professors of teacher education say what is important is that students can use, manipulate, analyze, and criticize information, not that they can remember a given fact. Professors of teacher preparation embrace technology, or at least accept that there is no reversing technological advances, so why fight it? The author mentions Socrates and his disdain for the written word; Socrates believed the technology would make us less intelligent because the brain, with less use, would become less sharp. If we look at the brain like a computer, then not having to retain facts leaves more memory and processing power for important functions like critical analysis and evaluation.

However, in terms of technology and computational outputs, there is usually no room for intersectionality or complicated answers. Computation lends itself to math and the sciences when it comes to solving problems — finding an objective output. Answers about the humanities and social issues, by contrast, are usually less than objective. The humanities, and answers to life’s great questions, require interconnectedness; they sometimes go unanswered, or only grow more complicated as more evidence is uncovered.

So the technical aspects of digitization and technological improvement free up energy formerly spent on memorizing facts and information for processes like analysis. But with the loss of the quintessential aspect of the humanities – humanness – objective answers will be promoted and will become the process by which the complicated, non-objective answers to life’s deepest questions are pursued. So what is the answer to “Is Google making us stupid?” The human says it is not as simple as an algorithmic binary of yes or no; it must be contemplated on a level deeper than Google’s ability to piece together the right process with the right sequence of words.

Ctrl + Brain + Del

Nicholas Carr’s prognosis for the human brain is that the omnipresent machine is shrinking our capacity for sustained thought, and there’s nothing we can do about it.  The issue of control seems relevant to this discussion; Carr thinks that we can’t control our own brains when faced with the Internet.

I identify with his experience: “what the Net seems to be doing is chipping away my capacity for concentration and contemplation.” If I’m working on my computer, it can be hard not to check that e-mail when it pops up, or not to look up or handle something that I suddenly remember.  When I’m reading a printed book, I stay focused for much longer, but phone notifications still lure me in to check them sometimes.  If I’m writing, and, as usual, the words aren’t flowing easily, it’s especially tempting to give up every so often for a minute or two, distract myself, and then dive back in.  I know this hurts my productivity and concentration, and I’ve been trying to stop, with partial success.  What strikes me about my own experience with this issue are my feelings of control or lack thereof.  When I give in to the urge to check Facebook, I know I’m making a choice, however compulsive.  When I resist, it’s a small victory for my willpower.  Sometimes I can almost feel my brain telling me to do what I’ve slowly been programming it to do – distract itself.  My actions have led to its “rewiring,” but I think (/hope?) I can reprogram new habits.

Carr comes near the issue of control when he discusses the introduction of clocks: “In deciding when to eat, to work, to sleep, to rise, we stopped listening to our senses and started obeying the clock.”  We decide how to schedule our lives.  We can choose to listen to our senses, or we can choose to listen to the clock.  It takes more self-control to continue the natural action of obeying one’s internal clock when a clock is present.  Similarly, it takes more self-control to focus on reading or writing one text for an extended period of time when the Internet stares you in the face, overflowing with e-mails, social media, bullet point posts, and kittens.  But these carrots can be refused, if not all the time, at least within working hours.

James Olds, a professor of neuroscience at George Mason University, “says that even the adult mind ‘is very plastic.’ Nerve cells routinely break old connections and form new ones. ‘The brain,’ according to Olds, ‘has the ability to reprogram itself on the fly, altering the way it functions.’”  Carr uses this source to show that the Internet has reprogrammed our brains, but it equally indicates that we can reprogram them back the way we want them.  Habits are hard for our brains to break, but we can break them.  I don’t mean to suggest that we can or should stop using the Internet, but I think that with determined self-control we can choose to regain, to some degree, our attention spans and our focused and analytical method of thinking.

Gewgaws: false dichotomies in scare-tactic journalism

The very first thing I noticed in reading the Carr article, “Is Google Making Us Stupid?” was that the editors missed an end quote in the second sentence. Pity. While I read the article, my mind immediately jumped to categorize Carr as a Luddite—a point he touches on briefly near the conclusion in order to dismiss such a category. My largest concern, though, comes from passages such as, “When the Net absorbs a medium, that medium is re-created in the Net’s image. It injects the medium’s content with hyperlinks, blinking ads, and other digital gewgaws…” Well, first off—nice use of the term “gewgaws.” Secondly, and perhaps more importantly, what invites critical deconstruction in this statement is the implication that the “Net” is at all separate from human thought or action. Carr grants that the “Net” influences the way we think, but he fails to recognize that the human hive-mind creates the content and structure that “It” operates within. To speak of the “singularity” is another topic altogether and would slightly change the thesis I might offer.

Going back to the Luddite business—of course there has been skepticism about a vast variety of new technologies as they have been implemented throughout history. One transition in particular that Carr mentions is that from oral traditions to the written word. It would be foolish at this point to argue that the Internet has not drastically changed the way human beings interact and process information. The question that remains is whether the change is an improvement.

The second instance of Carr’s tendency to present the Internet as an entity separate from those who founded, wrote, and continue to grow the Net comes in a line like, “Never has a communications system played so many roles in our lives—or exerted such broad influence over our thoughts—as the Internet does today.” Here again, I would suggest that the separation between our thoughts and the Internet is a false dichotomy. While the Internet continues to exert an influence over our mental processes, it is very much a product of those same processes.

Of course, striking fear into the common reader is one tactic for selling a newspaper. What this article does not offer is any suitable alternative or solution to the wolf cry it raises. But Carr takes it even further when he vilifies those who are working to create the next development of the human psyche, qualifying their efforts: “Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ.” Ultimately, while I thought Carr brought up interesting points and was published (online) at a venue I respect, the article fell short of what I would consider a revelatory read.

The Sky is Falling?: The Inevitable Losses of the Digital Age

I have a lot of sympathy with Carr’s ideas in “Is Google Making Us Stupid?” Even as young as I am (22), I have seen my attention span shorten every year since the beginning of my teenage years. At the age of 15, I could read for nearly three uninterrupted hours. Now, even when I read for pleasure, I have to take a break at least every hour; the time is even less when I find a piece of writing less than riveting. The culprit? Probably my iPhone. Every five-minute wait, every ten-minute break, every short walk to class is occupied with checking email, skimming Facebook, listening to music, or being entertained by my phone in some way. As a result, both my attention span and my powers of concentration have diminished.

This is, without question, an infinite loss, but also probably an inevitable one. Should I give up the internet and my smartphone so that I can keep my ability to concentrate deeply for longer periods of time? Maybe. Am I going to? No. Is anyone going to? No. At this point, it would be more than impractical. But it’s also somewhat impractical to bemoan this loss (too much). Yes, the technological advances of the 21st century do involve some loss, maybe even great loss. But, as Carr points out, so did the spread of writing and the advent of the printing press. Though we can’t predict our own intellectual futures in the face of the vast network of information now available to us, I think we can be fairly confident that it’s at least not the end of human thought and development.

I do share Carr’s legitimate concerns about our diminishing ability to think deeply about all the information that we have access to. But I imagine that those who seek intellectual stimulation and growth will be able to find it through nearly any medium and those who don’t wouldn’t have found it anyway. At least, I hope that those of us who value deep thinking will not lose that ability, easily distracted though we may be.

Jet Skiing and the Benefits of Google Stupidity

Carr’s article, “Is Google Making Us Stupid,” is not so much about the effects of Google upon us; rather, it is about how we assimilate into and subsume technological advances. I do not get the feeling that Carr bemoans Google or the technology industry, but that he recognizes that new ways of collecting, perceiving, and digesting information have massive implications. Carr acknowledges Plato’s fear that writing would condemn memory as an ancient predecessor to our fear of technology taking us over. Every generation has its Luddites. John Ruskin believed that industry meant the demise of the soul, saying: “Let me not be thought to speak wildly or extravagantly. It is verily this degradation of the operative into a machine, which, more than any other evil of the times, is leading the mass of the nations everywhere into vain, incoherent, destructive struggling for a freedom of which they cannot explain the nature to themselves.”

This is essentially what Carr is gesturing towards: the notion that man and machine cannot work together without man fundamentally changing. Perhaps that is true. Perhaps that is good. Carr proposes roughly three possibilities: that we jet ski atop oceans of information rather than diving beneath them, that HAL begins his takeover with the help of Skynet, or that—like Nietzsche—we ‘just’ change. In the section about Nietzsche’s typewriter, I did not notice anything especially deleterious. The story was presumably intended to elicit the notion that tools affect our production. Obviously. But I do not think we must allow that production to be changed for the worse, and I do not think Nietzsche’s shift to an aphoristic, telegram style is necessarily bad.

Daniel Levitin’s article, “Why the modern world is bad for your brain,” underlines multiple ways technology has limited or inhibited us. Much of the article deals with the overabundance of information, such as email, which is similar to Carr’s concern that “Never has a communications system played so many roles in our lives—or exerted such broad influence over our thoughts—as the Internet does today. Yet, for all that’s been written about the Net, there’s been little consideration of how, exactly, it’s reprogramming us.” Carr, quoting the playwright Richard Foreman, also warns that “we risk turning into ‘pancake people’—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.” Email is user-dependent. Turning into pancake people, or dilettantes who only skim the surface, is user-dependent. Plato presumably thought ancient Greeks would turn into pancake people if they relied on writing too much.

Writing diminished our memory but gave us more in return. Maybe our ability to focus, or to read for long sessions, will be diminished, but we may gain intellectual advantages. One remedy, obviously, is to read as much as possible in print, or not answer the phone, or not multi-task. I think the concern here is that we have no control over how technology advances. Plato, Ruskin, Levitin, and Carr seem to leave out our agency to a degree; they picture a helpless human at the mercy of a Skynet or a HAL. But in this age of PoCo digital humanities, one would think we would include machines like the Iron Giant or Johnny 5.