Archive for June, 2008

TIGSource PGC

A reminder that playing and voting in the Procedural Generation games competition at TIGSource are underway. Over 60 games!

elo Friday

Friday continued in the afternoon with the ReVisioning Electronic Literature: Origins and Influences break out session. I was bummed that I had missed an earlier talk, “Infinite Interfaces and Intimate Expressions: Hand-held Mobile Devices and New Reflective Writing Spaces” by Lissa Holloway-Attaway due to a conflict with the previous talks.

Luckily, this following panel was very strong. It began with Jessica Pressman’s “Digital Modernism“, taking digital modernism as an adaptation of literary modernism and a subset of electronic literature. DM pursues innovation, adapts, appropriates, remediates, and demands close reading. One of the most interesting parts of this talk was Jessica’s story about Bob Brown’s ‘The Readies’, an electronic reading machine conceived in the 1930s. She compared this to a recent work, Dakota (perhaps not work safe…).

Jessica was followed by Mark Marino talking about chatbots, the Turing Test, identify-friend-or-foe (IFF) systems, authentication, and some other things I couldn't quite fit into the overall scheme, but like a lot of these talks I think Mark was presenting highlights of a much longer paper. He returned to the question of what would make a more complex chatbot (or, in his words, conversational agent (CA)), proposing one that is contextualized, constrained, and interpreting rather than parsing. The first two characteristics describe some IF design practice quite well.

The third — well, what is an interpretive CA exactly? Does it need a knowledge model? Would we need one for IF?

I don't really think it's necessary. I guess you could consider a knowledge model for the fiction of the work itself, though since that often includes real-world assumptions you're kind of back to square one. 'Interpretive' in some sense implies a processing of data (talks at the conference sometimes touched on this process/content concept), and a knowledge model capable of interpretation would need a good chunk of data. What I wondered over the weekend, but haven't really followed up on, is how 'dumb' data really is. Is there a philosophical or technical concept of intelligent data, where the data implies process? Many people seem to assume that data is dumb and that process 'uses' data. Do most people assume that (and, of course, is it generally true)?
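To make that question concrete for myself, here's a minimal sketch (entirely my own toy, not anything presented at the conference, and every name in it is invented) of what 'data that implies process' might look like: the knowledge model of a tiny conversational agent is a set of pattern/handler pairs, so the engine that 'uses' the data is nearly empty and the interpretation lives in the data itself.

```python
# A toy sketch, purely hypothetical, of 'data that implies process': the
# knowledge model is not inert facts but pattern/handler pairs, so the
# engine below is nearly empty and the data carries its own interpretation.

import re

# Hypothetical knowledge model: each entry pairs a pattern with a small
# function that builds a reply from the match and a context dict.
KNOWLEDGE = [
    (re.compile(r"where am i", re.I),
     lambda m, ctx: f"You are in the {ctx['location']}."),
    (re.compile(r"take (?P<obj>\w+)", re.I),
     lambda m, ctx: f"You pick up the {m.group('obj')}."
     if m.group('obj') in ctx['objects']
     else f"There is no {m.group('obj')} here."),
]

def respond(utterance, context):
    """Thin engine: the 'process' lives in the data above."""
    for pattern, handler in KNOWLEDGE:
        match = pattern.search(utterance)
        if match:
            return handler(match, context)
    return "I don't understand."  # constrained rather than open-ended

if __name__ == "__main__":
    ctx = {"location": "library", "objects": {"lamp", "key"}}
    print(respond("Where am I?", ctx))   # contextualized reply
    print(respond("take lamp", ctx))     # interpreted against the context
    print(respond("take sword", ctx))    # constrained failure
```

Whether that counts as 'intelligent data' or is just process smuggled into a data structure is, I suppose, exactly the question.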

After Mark came Jeremy Douglass, whose talk I had looked forward to, having read his dissertation. His "Implied Code" followed the dissertation fairly closely.

One thing I noticed at the conference is that the successful talks seemed to reach out and address 'big issues' or present a really cool specific technology or idea. Explaining a specific medium or work can be interesting but it's not as grabby (for me anyway). There's just too much translation you have to do (taking the specific concepts of specialized content and communicating them to a general audience), and while you're doing that you're not really telling the audience a story and keeping them engaged.

Q & A:

Jimmy Maher asked Jeremy: there is a debate over the role of the parser in IF; it seems attractive because of the illusion of possibility. What do you think?

Jeremy: what is good art, and what is bad design? I am an advocate of a limit aesthetics (see his aesthetics of frustration in his dissertation Command Lines)…high failure is the picture plane, i.e., the canvas where the interesting work happens for me.

Jimmy: I think the strength of IF is not in infinite possibility — would some things be satisfying even if they did happen? The strength is in scripting the interactor (massive paraphrase ftw).

Here Jessica responded but I lost that response. Audience mentioned Yeats, implied memory, and mnemotechnics.

Jeremy: using the term ‘geography’ with IF can be misleading — you are dealing with the imaginary world of someone else. How does one relate to a memory palace that is not one’s own?

Mark Marino: scripting the interactor, cueing the interactor, conversational affordances and cues all important.

Q for Jessica: you make a provocative definition of modernism — to play devil’s advocate here, what does Dakota bring to the table — what is the new formalism here?

Jessica: you need to ask what is exciting, what is new — this (Dakota) is in the context of the common medium of Flash.

response: it’s an aesthetic of restraint

Jessica: yes.

response: that’s weird. (laughter)

Jessica: yes, you are retelling an old narrative in a new version.

Q for Jessica: considering the totalitarianism of someone like Pound, is the cultural capital of modernism a double-edged sword (referencing Dakota which references Pound’s Cantos I think)?

Jessica: yes, this is actually remediated in Dakota, there is an allusion to the critic and issues of translation.

Q for Jessica: regarding Bob Brown and The Readies, PARC built a readies machine, you could increase your reading speed dramatically; is there similar software out now?

Jessica: yes, you can read faster, but do you absorb literary content faster? This juxtaposition is what interests me — you ask if close reading is possible here.

Jimmy asks Jessica: Bob Brown was displeased about reading not advancing into the modern era along with visual art, film — couldn’t he say the same about ‘viewing’?

Jessica: true, I’m not sure of Brown’s logic here — you come to ask how critical reading becomes/became an art form.

audience response: we tie reading and writing together, but most attention is paid to reading, not writing (writer-response theory anyone…?)

Mark: it comes down to our reading tools that are available (not sure if I have this right).

Jeremy: there is an advantage here to an open source environment, with the text and its ‘text’, as well as natural language interfaces — the visual presentation, the natural language layer, and the machine code below that.

audience: unfortunately there is a separation of critical reading and writing (?), and similarly with code — if it is black boxed, that is limiting pedagogy. There is a big difference if the reader understands the code. This can be something distinctive about e-lit and it hasn’t been emphasized.

Mark responds: if you’re interested in reading code…

audience response: what we do is ‘writing the reading’ — making a thing that generates the reading, but this doesn’t equalize the practice necessarily (I think this was John Cayley — an interesting voice).

Jeremy: we are writing a reader, by creating affordances.

Mark: and also people create tools for the reading.

Jeremy: it doesn’t just have to be ‘authoritative’.

I’ll have to leave it there and finish off Friday in another post. As you can see there was a lot of content Friday — when I finally left I was pretty much totally full up.

putting your avatar in a funny hat doesn’t make it less problematic

After the first Friday morning session was Break Out 1.2, Games and Narratives.

I came in a little late to Jennifer Smith's "Meta Discourse: An Investigation into Possibilities of Meta-Fictions in the 21st Century". Smith's presentation ran along similar lines to a few others at the conference, referencing older print works as precursors to the forms of electronic literature (or, in the context of something like Espen Aarseth's cybertext, these works all belong to the same class of literature and simply present in different forms). On the face of it this is pretty basic, right? But it makes me wonder if this analysis gives less weight to digital media as a new art form in itself, one that shares much with writing, visual art, and film, but is a separate art form in its own right. Do we lose something by considering much of digital media as electronic literature in the first place — and not as digital media itself?

Jason Farman (faculty at WSU Tri-Cities) followed up with an entertaining talk on Grand Theft Auto and what he termed the alienation effect, citing Brechtian concerns over an invisible interface between art and the person that experiences it. This follows from recent games (a simple example would be fLOw, but Jason indicated he was talking about AAA titles) where the designers subsume the interface into the game experience, desiring as little between the player and the game as possible. Specifically Jason was talking about concerns over the violent content in GTA: San Andreas.

The premise of the alienation effect is that by fostering distance between the spectator/interactor and the work, you (the designer or producer of a work) promote the possibility of social critique and social (and emotional?) change, while by fostering identification and sympathy, you impede change. The question is whether this effect doesn't also promote other affects; I don't know enough about this to say.

This is a very interesting question for IF with its command line interface and the parser, perhaps one of the most mediated interfaces in games, and at the same time an interface with perhaps the most possibility for an ‘invisible’ interface depending on the game (I’m thinking of Jon Ingold’s Fail-Safe in particular). Furthermore players often praise games where the designer ‘thinks of everything’ the player might throw at the game, maintaining mimesis (Lost Pig is a good recent example). What you’re setting yourself up for is a situation where you can violently break the spell you’ve laid on a player. Ideally you would want to do this only for a very good reason — but IF is one of the few genres I can think of where doing this wouldn’t be considered a deal breaker.

The final talk in this track was Jimmy Maher's "Blending the Crossword with the Narrative: An Examination of the Storygame". Jimmy (at the University of Texas), along with editing SPAG and writing histories of interactive fiction (and writing IF as well, from what I hear), has set out to define the storygame — his term for narrative-oriented games. Jimmy's storygame has four characteristics: it is interactive, computational (though not necessarily run on a computer), character-based, and has a clear endpoint. He also talked about his ideas of the 3D game, where the text is a window into a world and otherwise gets out of the way, and the 2D game, where the text is more concerned with qualities of itself (perhaps its style, etc.). Of course, as Jimmy points out, the greatest population of readers is interested in 3D reading (and 3D games, it seems).

I think at one point Jimmy asked (rhetorically?) what a 'literature of storygames' might look like, but I'm not sure if he answered or even actually asked it; it was just in my notes. Since I'm in the camp that thinks literature means not just 'literary' writing but a body of writing that is 'good', a cut above the rest, what would a literature of storygames look like? Must it have 'good writing'? It seems like this can't be as major a criterion as it might be for novels or short stories. Do we say it has 'good interaction' and 'good computation'? As funny as it sounds I think we do need to say that, and Jimmy's four characteristics mentioned above (which don't mention art assets or writing quality, incidentally) seem like good criteria to at least start from.

The room then went to Q&A:

Q for Jimmy: Are certain [authoring] systems better suited for certain types of storygames?

Jimmy: you have to decide on the type of story you want to tell.

Q for Jimmy: about Battlechess (Jimmy referenced this in his talk), does the extra ‘fictional’ layer make this game transcend its non-storygame chess 2Dness?

Jimmy: while the animations are initially cool, I think eventually most players turn them off to just play the core game — chess. It’s an example of a mismatch between form and content.

Q for Jimmy: why doesn’t agon (conflict/competition, referenced in his talk) work for storygames?

Jimmy: a storygame can have conflict, but not two people outdoing one another.

(I’m not sure I have this exchange exactly right, as I’m pretty sure agon would be A-OK in a storygame.)

Q for Jason: Is GTA electronic literature? Is it a storygame? Or do gamers tend to focus only on the gameplay ('Brecht is no obstacle')?

Jason: yes, but even as you are just experiencing the gameplay, your perspective is in the game environment and you're participating (tangentially) in the story. Critique is still possible (via the alienation effect). Yes, GTA is e-lit.

Jimmy: there are different modes of play, experiential, narratological, gaming the system — but good stories will bring more people in.

audience response: Genre novels can have explosions and sex (was he equating this with gameplay?!)

Q for Jason: what about Boal and his Theatre of the Oppressed?

Jason: yes, but in these single-player games you may lack the community (to make this happen).

(I wonder if there really are single-player games anymore — not the first to wonder this obviously).

Here the question and response brought back the topic of serious games, the top-down approach they often dictate, and Jason's preference for doing this (serious stuff) from the bottom up, with a game that is not explicitly intended to be a 'serious game' (as with GTA).

Q for Jason: (on a player choosing the role of clown/satire in GTA, a topic of Jason’s talk) — what player will actually do this? Further, how serious/bad is it that players are immersed in games such as GTA? This can be a safe way to express a particular discourse.

Jason: I think you'd be surprised by the number of people who do it; intentionally or not, the result is a distanciation (Jason's term I think), or hyper-mediation.

response: this could be negative as well, negative satire.

Jason: yes, however part of the process is contrasting the environment with the avatar (creating a deficit of immersion).

Q for Jimmy/Jennifer: if 2D is literary (and 3D is genre?) how does meta-fiction fit — you need to engage the 2D and the 3D, right?

Jimmy: there is a reader preference, but also look at an example like Ulysses. (I think Jimmy was saying here that Ulysses was strictly a 2D work? Though to me Ulysses contains large elements of ‘3D’, i.e. genre-riffic plot)

Jennifer: also literature doesn’t necessarily need to include narrative.

audience response: speaking from theatre experience, Brecht is 'dry'. I like to mix in the 'wet'. Consider the research on mirror neurons (humans mirror experience with action). The Greeks didn't show violence on stage. Serious games are tied to the military-industrial complex. Brecht would be rolling in his grave (at the idea of Brecht in GTA). Most people playing GTA are not satirizing/being alienated. This is game porn giving pleasure and this is problematic. "I don't think putting your avatar in a funny hat makes it less problematic." Do later GTAs try to address this problem (of violent content)?

Jason: violence is not the primary issue…the issue is the melting of the interface, the invisible interface and the immersion it fosters. The response is to make the interface obvious (alienation effect).

response: 'make the strange familiar and the familiar strange'. The scariest stuff isn't shown; if you front-center it you take away from its aesthetic power.

response: 'Brechtian' is known as an excuse for 'bad'. There are still players who will hack the system (e.g. ARGs). Players are self-organizing. There are 'answerers', people who get the answer, and people who hack for answers. There is an ethics of community answering here (I'm majorly condensing this question — I believe it was posed by Jay Bushman).

response (from Mark Marino? My notes say ‘Mark’): regarding violence in games, there often is a subtler thing missing, the question of player assertion of agency. I like ‘critical’ rather than ‘serious’ games. I like the idea of agency. Expected subversions are not really subversive (referencing Jason’s comment that satirizing your avatar in GTA is subversive, but that of course this is designed and therefore somewhat expected).

Jason: I'll definitely think on this, but cf. Debord and the dérive, subverting the structure and reascribing signification.

And on that note, Friday afternoon will have to wait for another day and post.

don’t anthropomorphize the computer — they hate it when you do that

I didn't know what to expect from ELO 2008, my first foray into the world of academic conferences since 1997, and first sally ever into anything with the label of literary attached — but it was a good sign that at almost any point of this conference I had to give up hearing something to hear something else, and in the end I didn't get to half of the other stuff that was going on in Vancouver proper and at Clark College (notwithstanding Portland, friends, and free pizza at the Mac store ten minutes away). While the ELO may call itself a literature organization, either they're defining that term incredibly broadly or the term doesn't really fit (a point addressed at length in John Cayley's "Weapons of the Deconstructive Masses" on Sunday, with thoughtful replies by Scott Rettberg in particular. See also Rettberg's paper at GTxA). However, that doesn't really matter to me. It was good to see such a breadth of theory and practice.

So, I took extensive notes, but I was really planning to have everything on audio as well…let’s just say a small technological ‘glitch’ has axed the audio. Also did I mention my computer completely crashed in Portland, and was fine when I got back to Seattle (for no apparent reason)? Altitude? Homesickness?



Friday morning:

I made it down in time for David Benin’s “Hors-Categorie: An Embodied, Affective Approach to Interactive Fiction”. The work itself (a .z5 story file) is available at David’s page at UCSD. At this talk I had a realization. If every presenter name-dropped as many artists and philosophers as David Benin, I was in for serious trouble when it came time to make any sense of my notes.

To be fair, name-dropping is the wrong term, but it's unfortunately sufficient for the condition I'm talking about here (my cluelessness). Despite this conference setting up my reading list for approximately the next 50 years, I did enjoy David's talk. At the core of it was his goal to bring 'actualization' rather than just 'realization' to the interactor's experience of IF. The basis of this is to promote 'thought'. I think I have that basically right, though I'm hazy on the details (i.e., making sense of my notes is rather difficult — I said extensive notes, I didn't say clear and unambiguous).

(if you're interested: Massumi, Deleuze, Spinoza — OK, I guess that's not too many, but the list expands exponentially further into the weekend).

Next in the same session (Form and Materiality) was Fox Harrell with Kenny Chow on “Generative Visual Renku: Linked Poetry Generation with the GRIOT System”. I had been intrigued by the article on GRIOT in Second Person so I was curious what this was going to be about (here we had Lakoff & Johnson, Hiragu, Peirce, Ward, Calvino, LeWitt, and Queneau — I hereby absolve myself of all responsibility for spelling any of these names correctly from here on out). At first I wasn’t too interested in the result, as it is non-interactive in the conventional sense. However the purpose of GVR is to focus on the generative text, not the interactive text, so I can’t fault a system for not doing something it isn’t trying to do. The goal is to give the author a means to write a generative text that can offer varied but semantically coherent experiences for the reader. On that count you might be able to say GVR expresses a different form of interactivity, one where the reader is not pushing buttons but reorienting the text in their own mind. Perhaps if you embed the possibility of many reorientations within one text you could say that the text approaches the interactivity of something more conventionally interactive.
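Just to get a feel for the 'varied but semantically coherent' idea, here's a toy sketch of my own (emphatically not GRIOT's or GVR's actual mechanics; all the names and phrases below are made up) where each element carries edge annotations and the generator only links elements that share a tag:

```python
# A toy sketch, purely my own, of generation that is varied but coherent:
# each 'icon' carries edge annotations, and the generator only links items
# that share a tag, so runs differ while transitions stay meaningful.

import random

# Hypothetical icon lexicon: phrase plus the semantic tags on its 'edges'.
ICONS = {
    "a heron stands":       {"water", "stillness"},
    "rain pocks the pond":  {"water", "motion"},
    "wind combs the reeds": {"motion", "air"},
    "a kite lets go":       {"air", "ascent"},
    "the moon surfaces":    {"ascent", "stillness"},
}

def generate_renku(length=4, seed=None):
    """Build a linked sequence where consecutive lines share an annotation."""
    rng = random.Random(seed)
    current = rng.choice(list(ICONS))
    poem = [current]
    for _ in range(length - 1):
        candidates = [p for p in ICONS
                      if p not in poem and ICONS[p] & ICONS[current]]
        if not candidates:  # no coherent continuation; stop early
            break
        current = rng.choice(candidates)
        poem.append(current)
    return poem

if __name__ == "__main__":
    for line in generate_renku(length=4, seed=2):
        print(line)
```

Different seeds give different sequences, but every transition is still anchored in a shared annotation, which is roughly the kind of coherence-without-fixity I took the system to be after.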

The final talk in this track was Damon Loren Baker's "Cavewriting: Spatial Hypertext Authoring Systems", and I found it to be quite engaging. Damon's goal is to make the CAVE system accessible to authors who are not necessarily programmers; he is working on a new engine, SHE (Spatial Hypertext Engine), and has also had a hand in Cavewriting (and is responsible for the title of this post).

After Damon’s talk the room went to Q&A (brutal paraphrasing follows):

Q for David Benin: have you tried to capture the response to your piece?

David: no, it’s brand new.

Q for Fox: is your system inspired by the layout of comics?

Fox: yes, but not just comics, also cinematics and graphic design. Kenny: the icons are annotated on the edges (which help form the relationships to other icons). We also drew from the figure/ground concept of visual art.

Q for Damon (I think this was Jimmy Maher): You mentioned making tools for writers who weren't programmers; what about training the writers to be programmers?

Damon: yes, but we want to provide multiple ways into the system (not just programming). But this is important.

Q for Damon: Are you talking about the new writer/director relationship? (I think he means where the author dictates what happens but doesn't necessarily know how it happens)

Damon: Yes, but we want to get away from the implementation split where the writer waits for the programmer to code what they want. Collaboration is good but…we can still make room for the auteur at least.

Q for Fox: how much do the graphics (of the icons in the generative renku system) constrain the input possibilities? (I think they misunderstood the system here…GVR isn't really interactive in this way as I understand it).

Fox: the author creates the system (it is generative). At the core are annotation rules for the graphics. The icons aren’t the only possibility, they are just the presentation.

Kenny: perhaps coherence is a lesser concern when so many combinations/generations are possible (cf. Queneau).

OK, I’ll have to pick this up in the next installment. Meta Discourse and Grand Theft Auto on deck.