Wednesday, December 2, 2015

Fantasy and Reality

After reading from The Space of Flows, I was thinking about what our imagined spaces—the cities and societies of science fiction and fantasy—might tell us about our beliefs (and our fantasies and fears) about the networked civilization we live in. The new high-speed train station in Madrid was an interesting and direct juxtaposition between our attachment (nostalgia?) for an old style of cultural and personal specificity and, on the other hand, our modern urge for a different kind of connection, with and within a wider, boundary-less world. But there was already a train station on that site before it was renovated, the high-speed train really did connect two very real cities, and the question of whether those cities ought to have been connected at all had specific and real consequences, all because these are the constraints of existing in a 'real,' lived-in reality. A city created in a science fiction novel or a fantasy film, however, is not beholden to these constraints.

But while thinking about the implicit meaning behind, say, the archaic Hogwarts castle in Harry Potter or the automated high-speed highways in Minority Report, I also kept going back to the real world. What is it that makes something "real" anyway? When we talk about fictional spaces, what are the characteristics that make them feel real, even when they're not? The specific evidence of authenticity we expect from our fictions reveals something about our truths. But what is that evidence? And has that required evidence—what we need from these stories to convince us they could/should be believed—changed over time? Does dystopic fiction, from 1984 to The Hunger Games, seem plausible (in its exaggerated way) because these kinds of stories foreground the separation of flow and place—the separation of the power of elites (represented by streamlined and sanitary spaces) from the (messy, chaotic) history invisible to them? Do we want to be reminded of, in Castells' words, the "structural schizophrenia" of contemporary society—of what we're losing when we gain? What do we 'get' out of creating these fantastic, unreal spaces?



Thursday, November 12, 2015

MOOCs and Motivation

Personally, I haven't had much success with MOOCs or other online courses. By and large, my experience has been in line with Margaryan, Bianco, and Littlejohn's results, where the courses are characterized by their lack of authenticity or personal meaning/connection. For me, this translates to a short, unsatisfying experience because the original motivation that brought me to register for the course in the first place is often unsustainable in an inauthentic environment.

I was thinking about this idea of motivation with respect to some of the topics we've discussed earlier in the semester, including identity. If there is a 'different' identity (or identities) expressed when we're online, then does motivation exist differently online? Social capital, as well as the mechanisms for gaining or losing it, is transformed in online spaces, and it seems likely this would change how and why we are motivated to do things. As a very over-simplified example: if we believe that online social capital can be earned through emphasizing certain characteristics or behaviors, like being especially funny or witty, or using more pictures, emoticons, or memes—in short, if we believe our friends will like us more for having brief, visual, humorously interesting things to post to social media—then those same characteristics and behaviors probably ought to receive more focus in online educational spaces. If the nature or impetus of our online motivation is shifting because of social media, then MOOCs, being online, ought to be able to harness that change more organically.

Wednesday, October 28, 2015

Now and Then

The comparison between 'now' and 'then' is often central to discussions of new technology, and is especially evident in the NYTimes piece ('Attached to Technology and Paying a Price'). Richtel is constantly invoking a comparison between the 'way things are' and the 'way things were', either directly (e.g., "For better or worse, the consumption of media [...] has exploded. In 2008, people consumed three times as much information each day as they did in 1960") or by implication.

But I wonder what relevance exists in comparing a digital/networked present to a non-digital (or other-digital) past. Many of the arguments made during the first video we watched last class (the hip hop one, with all the shots of sad people in dark rooms looking at empty, soulless screens) revolved around us being 'less' connected/present/happy now, with our omnipresent technology, versus an alternate, past existence without that same technology. Using a perspective which relies on conceptions of 'less' (or 'more') of anything makes an implicit value judgment, i.e., there is a correct or right amount of time to spend using technology, and we are using it more than that amount.

Which is not to say that such an amount is not a reachable (or even worthwhile) goal; it might be important to reduce our use and be more socially present. But, in its oft-presented form, the argument is based on a dichotomy which does not exist. You cannot go back to the past, 'before' the technology. Even if you do not use the technology, the option to use it still exists in the world, and this is very different from simply not using a technology because you couldn't, because it didn't exist. A baseline or a version of 'normal' conceived according to the rules of the past cannot be relevant to today, because what was once normal by necessity—by a lack of options, and by a lack of critical awareness—is now a choice, regardless of what one chooses. Why should past definitions of normalcy influence current behaviors?




Thursday, October 22, 2015

"Sorry to bombard you with questions."

Hine ends her email query asking people about their "offline contexts" with thanks and an apology to her prospective respondents (pp. 75-76). And I was struck, while reading both Hine's chapter and Thomas' paper, by how important it is in virtual ethnographic studies to approach participants on their own terms. Not just by using their nicknames or their language in your communication, as is particularly noticeable in the way Thomas respects the online identities of the children in her study, but, more simply, in the fact that the communication is itself virtual. If you want to know about children online, you have to actually go online and see what they do there.


After presenting her list of initial questions, Hine offers a compelling rationale for 'restricting' herself to online interactions, in part by suggesting that this limited focus is not a restriction at all; there may not even be a boundary between offline and online identities. She acknowledges that this openness and permeability carries potential hazards—answers and identities may be misrepresented or fabricated—but it is crucial to understanding the context behind those answers and identities.


* * *

Later, I was struck by a particular diptych of quotes, during Hine's reflection on her interaction with Campaign for Justice webmaster Peter: "that even behind web sites which give no clues to the identity of their producers there were individuals with biographies, emotions and commitments" and, in the next paragraph, "this ethnography is about what the Internet made Louise" (p. 80). The key words in these statements, for me, are "behind" and "made." They seem to be contradictory; how is it that an identity could exist 'in spite of' (as the first quote suggests, as if the web site is a mask to hide behind) and 'because of' online interactions? This paradoxical struggle seems central to understanding online identities, where issues of 'real'-ness and authenticity are questions of portrayals and portrayers.

Wednesday, September 30, 2015

Redefining

Something that's come up in our blog posts and conversations, both implicitly and explicitly, and was a big underlying theme in this week's readings, is how we're redefining (repurposing?) words and concepts in the age of the internet. "Identity," "status," "networking," etc. etc. have all developed new connotations (meanings?) when called up by ICTs. And while we're struggling to make sense of what these words mean—or what we mean when we use them, since a new usage usually comes before a new definition—it makes me wonder what our revisions are leaving behind.


I'm thinking about the many symbols that are a common part of our current visual lexicon but that, for all intents and purposes, have no literal referent left anymore (see the link above for a sample). Their meaning is entirely new, especially for people who never knew any other usage, because the thing that inspired the image doesn't exist anymore. But while this one-to-one replacement seems innocuous (especially when we can convince ourselves the new meaning isn't far off from the old), there is a danger in forgetting. The history of the symbol becomes invisible, ignored; the messiness of the change is mopped up. To me, the readings this week represent the messiness—the struggle to make sense of the revision, to process the change as it happens. The clarifying work is the interesting part, not the end result.


Wednesday, September 23, 2015

Maybe Pasek and Karpinski Could Have Used a Little More Humor?

From Baym (2006):
Danet et al. (1997) argued that the computer medium is inherently playful because of its ‘ephemerality, speed, interactivity, and freedom from the tyranny of materials’. 
The most common variety of playful language activity online is probably humour, which seems to be more common online than off. In a large project (see Sudweeks et al., 1998) in which dozens of researchers from several countries and universities conducted a quantitative content analysis of thousands of messages from international Usenet newsgroups, BITNET lists and CompuServe, Rafaeli and Sudweeks (1997) found that more than 20 per cent of the messages contained humour. In my analysis of a Usenet newsgroup that discussed American soap operas (Baym, 1995), I found that 27 per cent of messages addressing a dark and troubling storyline were humorous. The forms of humour included clever nicknames for characters (e.g. Natalie, also called Nat, was dubbed ‘Not’ when a new actress took over the role, and became ‘Splat’ when the character was killed in a car accident), plot parodies, and many others. Surveys revealed that humour made both messages and participants stand out as especially likeable.
It's always seemed like everybody wants to be a comedian on the internet, but I didn't know there was empirical evidence.

The medium is dictating both the style and content of the online message, because humor (especially in text form) requires a simultaneous deployment of style and content—like the "clever nicknames" and plot parodies in the soap opera Usenet group. But the effects of adapting to this mode of communication must also have real-world implications beyond online usage.

So, here are some questions I wonder about (most of which there are probably answers to, if I looked):

  • Is the average person likelier to consider him/herself "funny" than people in previous, non-"plugged-in" generations were?
  • Do heavy internet users have a more inflated sense of self (especially with respect to humor) than light internet users? 
  • Do we generally believe that people with whom we correspond exclusively online are funnier or more light-hearted/playful than those we know only "in real life"? 
  • Are we disappointed when people aren't as humorous in person as we believed them to be online? 
  • Do we feel anxious because we worry we aren't clever enough?
(My guess is that some people develop anxiety from their internet selves, while others derive self-importance—which is a cop-out and answers nothing.)

Wednesday, September 16, 2015

An Information Paradigm

TL;DR version: Something I read for another class (Thomas Kuhn on paradigms) and other stuff happening in my life (applying to doctoral programs) shaped how I understood this week's readings. Living in the "Information Age" requires different modes to even understand what the "Information Age" is.

I began last week thinking about big questions—What's changed, if anything? How would I know?—and I began this week with the same questions.

But the experience has been completely different.

Earlier this week I was jumping back and forth between readings for different classes. I read Preece & Shneiderman's "Reader-to-Leader" framework first, and one of the things I was most struck by while reading was how intuitive their conclusions were. Over my previous year at TC, I have expressed beliefs about technology/design/communication/etc. that echo much of the analysis articulated in Preece & Shneiderman—though certainly not as intelligently or with any kind of supporting evidence. Instead, I felt like much of it I had gradually internalized over time (e.g., of course well-organized and attractive layouts positively influence reading!) or I had specific examples from which to draw experience (e.g., when "designing" an educational fan fiction website last semester, some classmates and I talked about the kinds of user moderation systems we had seen on other sites). So many of Preece & Shneiderman's conclusions, I thought, must have become universally accepted if they had already reached me, a relative layman in the field. And in such a short amount of time (since 2009), too!

Then I jumped to a reading for another class, Thomas Kuhn's seminal book The Structure of Scientific Revolutions. A description, from Wikipedia, if you're unfamiliar:
Kuhn challenged the then prevailing view of progress in "normal science." Normal scientific progress was viewed as "development-by-accumulation" of accepted facts and theories. Kuhn argued for an episodic model in which periods of such conceptual continuity in normal science were interrupted by periods of revolutionary science. The discovery of "anomalies" during revolutions in science leads to new paradigms. New paradigms then ask new questions of old data, move beyond the mere "puzzle-solving" of the previous paradigm, change the rules of the game and the "map" directing new research.
Kuhn's concept of paradigms—and the model for how they are created/destroyed—was illustrative for me in thinking about how I might answer (or at least describe/define) the big questions above. But I was also thinking of Kuhn's paradigms as a way of explaining how I felt while reading Preece & Shneiderman. Much of their framework has become an accepted part of our current paradigm; we no longer feel the need to question its assumptions, rationale, or conclusions because we can take them as a given (a key signifier in identifying the existence of a paradigm). And this wide acceptance must have occurred shockingly fast.

Another part of Kuhn which I couldn't help but connect to this week's readings was how a paradigm is defined not so much by the "puzzle" it is attempting to solve but rather by how it attempts to solve the puzzle. After Kuhn, I went back to Webster's chapter on "The Information Society," which, in my new frame of mind, seemed at its heart to be describing a paradigm. Webster describes the way the world is and how we understand it (the so-called "Information Age"), but also—importantly—how our understanding is affected by how we go about doing the understanding (what research we do, and how, and what data we collect).

This theme of how we ought to collect data and which data ought to take empirical priority is repeated in the other readings as well. It has also, lately, been especially relevant to me for personal reasons. I'm thinking about applying to doctoral programs this fall, and that process has been monumentally instructive in thinking about kinds of research and their relative relevance to communication. Webster's suggestion that we ought to be more inclusive of qualitative data—that the story of the informational data we're currently collecting actually needs to be told with both words and numbers—is an encouraging signpost.