Wednesday, September 30, 2015

Redefining

Something that's come up in our blog posts and conversations, both implicitly and explicitly, and was a big underlying theme in this week's readings, is how we're redefining (repurposing?) words and concepts in the age of the internet. "Identity," "status," "networking," etc. etc. have all developed new connotations (meanings?) when called up by ICTs. And while we're struggling to make sense of what these words mean—or what we mean when we use them, since a new usage usually comes before a new definition—it makes me wonder what our revisions are leaving behind.


I'm thinking about the many symbols in our current visual lexicon that, for all intents and purposes, no longer have a literal referent (see the link above for a sample). Their meaning is entirely new, especially for people who never knew any other usage, because the thing that inspired the image doesn't exist anymore. But while this one-to-one replacement seems innocuous (especially when we can convince ourselves the new meaning isn't far off from the old), there is a danger in forgetting. The history of the symbol becomes invisible, ignored; the messiness of the change is mopped up. To me, the readings this week represent the messiness—the struggle to make sense of the revision, to process the change as it happens. The clarifying work is the interesting part, not the end result.


Wednesday, September 23, 2015

Maybe Pasek and Karpinski Could Have Used a Little More Humor?

From Baym (2006):
Danet et al. (1997) argued that the computer medium is inherently playful because of its ‘ephemerality, speed, interactivity, and freedom from the tyranny of materials’. 
The most common variety of playful language activity online is probably humour, which seems to be more common online than off. In a large project (see Sudweeks et al., 1998) in which dozens of researchers from several countries and universities conducted a quantitative content analysis of thousands of messages from international Usenet newsgroups, BITNET lists and CompuServe, Rafaeli and Sudweeks (1997) found that more than 20 per cent of the messages contained humour. In my analysis of a Usenet newsgroup that discussed American soap operas (Baym, 1995), I found that 27 per cent of messages addressing a dark and troubling storyline were humorous. The forms of humour included clever nicknames for characters (e.g. Natalie, also called Nat, was dubbed ‘Not’ when a new actress took over the role, and became ‘Splat’ when the character was killed in a car accident), plot parodies, and many others. Surveys revealed that humour made both messages and participants stand out as especially likeable.
It's always seemed like everybody wants to be a comedian on the internet, but I didn't know there was empirical evidence.

The medium is dictating both the style and content of the online message, because humor (especially in text form) requires a simultaneous deployment of style and content—like the "clever nicknames" and plot parodies in the soap opera Usenet group. But the effects of adapting to this mode of communication must also have real-world implications that extend beyond online usage.

So, here are some questions I wonder about (most of which probably have answers, if I went looking):

  • Is the average person likelier to consider him/herself "funny" than people in previous, non-"plugged in" generations were?
  • Do heavy internet users have a more inflated sense of self (especially with respect to humor) than light internet users? 
  • Do we generally believe people whom we correspond with exclusively online are funnier or more light-hearted/playful than those we know only "in real life"? 
  • Are we disappointed when people aren't as humorous in person as we believed them to be online? 
  • Do we feel anxious because we worry we aren't clever enough?
(My guess is that some people develop anxiety from their internet selves, while others derive self-importance—which is a cop-out and answers nothing.)

Wednesday, September 16, 2015

An Information Paradigm

TL;DR version: Something I read for another class (Thomas Kuhn on paradigms) and other stuff happening in my life (applying to doctoral programs) shaped how I understood this week's readings. Living in the "Information Age" requires new modes of thinking even to understand what the "Information Age" is.

I began last week thinking about big questions—What's changed, if anything? How would I know?—and I began this week with the same questions.

But the experience has been completely different.

Earlier this week I was jumping back and forth between readings for different classes. I read Preece & Shneiderman's "Reader-to-Leader" framework first, and one of the things that most struck me while reading was how intuitive their conclusions were. Over my previous year at TC, I had expressed beliefs about technology/design/communication/etc. that echo much of the analysis articulated in Preece & Shneiderman—though certainly not as intelligently or with any kind of supportive evidence. Instead, I felt like much of it I had generally internalized over time (e.g., of course well-organized and attractive layouts positively influence reading!) or I had specific examples from which to draw experience (e.g., when "designing" an educational fan fiction website last semester, some classmates and I talked about the kinds of user moderation systems we had seen on other sites). So many of Preece & Shneiderman's conclusions, I thought, must have become universally accepted if they had already reached me, a relative layman in the field. And in such a short amount of time (since 2009), too!

Then I jumped to a reading for another class, Thomas Kuhn's seminal book The Structure of Scientific Revolutions. A description, from Wikipedia, if you're unfamiliar:
Kuhn challenged the then prevailing view of progress in "normal science." Normal scientific progress was viewed as "development-by-accumulation" of accepted facts and theories. Kuhn argued for an episodic model in which periods of such conceptual continuity in normal science were interrupted by periods of revolutionary science. The discovery of "anomalies" during revolutions in science leads to new paradigms. New paradigms then ask new questions of old data, move beyond the mere "puzzle-solving" of the previous paradigm, change the rules of the game and the "map" directing new research.
Kuhn's concept of paradigms—and the model for how they are created/destroyed—was illustrative for me in how I might answer (or at least describe/define) the big questions above. But I was also thinking of Kuhn's paradigms as a way of explaining how I felt while reading Preece & Shneiderman. Much of their framework has become an accepted part of our current paradigm; we no longer feel the need to question its assumptions, rationale, or conclusions because we can take them as a given (a key signifier in identifying the existence of a paradigm). And this wide acceptance must have occurred shockingly fast.

Another part of Kuhn that I couldn't help but connect to this week's readings was how a paradigm is defined not so much by the "puzzle" it is attempting to solve but rather by how it attempts to solve the puzzle. After Kuhn, I went back to Webster's chapter on "The Information Society," which, in my new frame of mind, seemed at its heart to be describing a paradigm. Webster describes the way the world is and how we understand it (the so-called "Information Age"), but also—importantly—that our understanding is affected by how we go about doing the understanding (what research we do, and how, and what data we collect).

This theme of how we ought to collect data and which data ought to take empirical priority is repeated in the other readings as well. It has also, lately, been especially relevant to me for personal reasons. I'm thinking about applying to doctoral programs this fall, and that process has been monumentally instructive in thinking about kinds of research and their relative relevance to communication. Webster's suggestion that we ought to be more inclusive of qualitative data—that the story of the informational data we're currently collecting actually needs to be told with both words and numbers—is an encouraging signpost.

Wednesday, September 9, 2015

Looking At Change

A big question, philosophized and debated over in my Theories of Communication class last semester—debated, but left largely unanswered—is which, if any, of these overarching sociocultural/technological trends have actually changed over human history. Does the system itself change, or just the specifics of the networked nodes? On what scale must you look to see the mechanisms of change change? Or, in other words, in what sense are we doomed to repeat history?

These questions continued to rattle around my brain while I read this week's essays about technological determinism, and I was particularly struck by a set of questions that is, perhaps, actually a subset of the questions above: In our struggle to understand change, how do we represent that change? Each of the readings offered different chronologies in various forms, from Moore's law (which, as described and emphasized by Ceruzzi, has a definite "and then this happens, and then this happens, and then this happens, etc." infinite-spiral feel to it) to Heilbroner's argument that the steam-mill had to happen after the hand-mill. When I finished reading Ceruzzi, I felt compelled to draw on the back page what I can only describe as a timeline, though it marks no specific times. What I was trying to decipher, I think, was akin to asking a question like, "Are we all on the same timeline?" Or, said another way, "Are we all limited to following the same timeline?"

While recent history is marked with battles for technological (or at least commercial) superiority—like iOS vs. Android, or VHS vs. Beta—the competing options seem to exist within the same chronology. Whether I choose Windows or Mac OS (or an open source system like Linux) makes no difference to which technological culture I exist in; they all exist in the same one. The only examples I could think of where someone could reasonably be considered to break with contemporary technocultural norms involve a chronological break. That is, when I choose not to use a computer (thereby existing in a pre-computer culture) or when I choose to live "off the grid," as some people do (though the legitimacy of these anti-technology lives, according to Ceruzzi, is suspect). Ceruzzi's sadness at giving up his slide rule in favor of a calculator illustrated this limited, singular chronology for me, and maybe inspired the original train of thought.

But my point is not so much the train of thought itself, but rather my underlying interest in how technological, scientific, and historical epistemologies should be talked about, illustrated, or otherwise represented. Early on, Marx and Smith use the word "growth" to describe the chronology of human technological history. Ceruzzi uses the word "progression" to describe it. I noticed these two words because both go just beyond pure description. Generally speaking, both carry subtle-but-not-insignificant positive connotations. Are we forced, then, even in our critiques, to see the results of technological determinism—hard or soft—as some kind of success? Is any discussion of technological determinism automatically situated in specific language—verbal, visual, or otherwise? Could we even be having this discussion if we weren't in the cultural system of which technological determinism is also a product? What would that look like? Can we look outside the system? Can we look from outside the system? What would we see?