SAA Seminar: Form, Complexity, and Computation
Posted on: 05 Apr '15

Joseph Loewenstein and I co-organized a seminar entitled Form, Complexity, and Computation at the recent Shakespeare Association of America annual conference in Vancouver. It was a small seminar, but it elicited exciting papers and led to a very lively discussion that included a large group of auditors. Below is a rather rough draft of our introduction to the seminar, followed by a PDF file of the handout we distributed to seminar auditors (which includes paper abstracts and a bibliography). Michael Witmore has posted part of his paper for the seminar as a blog post.


The nature of the material sets the problem to be solved, and the solution is the ordering of the material.
– Cleanth Brooks, The Well Wrought Urn

In this seminar we will ask how we can think about the complexity of literary texts in computational terms. How can we represent ambiguities of words, nuances of structure, the interaction of motifs, and the intensely intertextual way in which most literary research problems are formulated within the framework of quantification and categorization that computational approaches solicit?

Complex systems have been characterized as producing large-scale, often unpredictable variations as the accumulated effects of minuscule changes in input. In other words, they can be thought of as generating structured yet non-deterministic effects based on concretely identifiable but small changes in their initial states. Examples of complexity abound in nature. From the structure of trees to swarms of living organisms, from the shapes of crystals to the flow of wind and water, we can find examples of such complex systems everywhere. Contemporary scientific disciplines have attempted quantitative models of such phenomena that try to accommodate their ambiguity, flexibility, and fundamental indeterminacy and yet capture a sense of their overall structure.
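
To make that characterization concrete, here is a minimal sketch (our illustration, not part of the seminar materials) using the logistic map, a textbook complex system: two trajectories that begin a hair's breadth apart soon bear no resemblance to one another, even though each is generated by the same simple rule.

```python
# A minimal sketch of sensitivity to initial conditions, using the
# logistic map x -> r * x * (1 - x) in its chaotic regime (r = 4.0).
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # a minuscule change in the initial state

for step in (0, 10, 25, 50):
    print(f"step {step:2d}: divergence = {abs(a[step] - b[step]):.9f}")
```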

In many ways, this notion of complexity has already found its way into our thinking about literary artifacts. We can see its shadow in the effectiveness of Moretti’s “tree” metaphor in tracing the development of the novel as a non-deterministic but structured phenomenon, or in the use of genetic metaphors of propagation and mutation to think about the “evolution” of genres, or in accounts of genre effects as produced by structures operating at the level of the sentence. One might argue that even if they are relatively under-theorized in literary studies, such metaphors for thinking about language modulate the ways in which we conceptualize the ambiguity and indeterminacy of literary language in the domain of computation. As we move beyond the shallow empiricism of verifiable, concrete claims about texts and start to use computational models to think about higher-order literary and cultural phenomena, our ability to accommodate ambiguity and indeterminacy, and the ways in which we can reconcile them with overall accounts of structure, will increasingly determine our relationships to texts (themselves digital surrogates for physical artifacts) and to their quantitative surrogates – frequency counts, distributions, vectors, visualizations, etc.
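
By way of illustration (a sketch of our own, not any particular project's pipeline), here is the most basic of those quantitative surrogates: a passage reduced to word-frequency counts. Everything the seminar worries about, ambiguity, nuance, intertextuality, survives only insofar as these counts happen to preserve it.

```python
# A sketch of the simplest quantitative surrogate: a text reduced to a
# word-frequency vector.
from collections import Counter
import re

passage = ("To be, or not to be, that is the question: "
           "Whether 'tis nobler in the mind to suffer")

tokens = re.findall(r"[a-z']+", passage.lower())
counts = Counter(tokens)

print(counts.most_common(5))
# e.g. [('to', 3), ('be', 2), ('the', 2), ...]
```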

The literature on statistical and computational theory grapples with the inescapable ambiguity of data by adducing concepts like probability, bias, variance, entropy, and information gain; we hope to elicit papers that share the ambition of our colleagues in these fields by demonstrating how ambiguity and complexity are accessible to quantitative representation. Papers might reflect on the processes of quantification, rethink ideas of computational complexity in humanistic terms and express humanistic ambiguity within quantitative models, or demonstrate approaches to texts that seek to move beyond scale and account for literary complexity.
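
To give one of those concepts concrete form: Shannon entropy over a word-frequency distribution takes only a few lines to compute. The inputs below are toy examples of our own devising.

```python
# A sketch of Shannon entropy over a word-frequency distribution: a
# uniform distribution is maximally uncertain, while a skewed one is
# nearly deterministic and carries far less "surprise" per word.
from collections import Counter
from math import log2

def word_entropy(tokens):
    """Entropy, in bits, of the empirical distribution of the tokens."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

uniform = ["thou", "thee", "thy", "thine"]          # four equiprobable words
skewed = ["thou"] * 97 + ["thee", "thy", "thine"]   # one word dominates

print(word_entropy(uniform))  # 2.0 bits
print(word_entropy(skewed))   # roughly 0.24 bits
```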

The central question that motivates this seminar concerns the link between the interpretive and what McGann and others have described, in the context of textual editing, as the procedural. The digital turn, we feel, forces us to confront the mechanics of textual production and meaning-making in newly defamiliarized terms.

We tend to think of literary texts as producing effects that we know and recognize – be it metaphor, genre, archaism, or an authorial “style.” But what are the underlying, relatively more tractable building blocks, and how do they combine to produce these higher-order effects? Even if we do nothing more with computation – if we don’t write code, or optimize algorithms, or build databases – it is still worth asking this question about describing our familiar interpretive categories in unfamiliar procedural terms. How do we “operationalize” certain effects? How do we make them tractable to computation? How do we identify and isolate particular textual mechanisms and think about their transmission and evolution across texts and over time?
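
As a deliberately crude illustration of what “operationalizing” might mean in practice (our sketch, not a method proposed by any of the seminar papers), one could reduce archaism to the rate of archaic markers per token. The marker list and the suffix test below are illustrative assumptions, and both will misfire.

```python
# A deliberately crude operationalization of "archaism" as the rate of
# archaic markers per token. The marker list and the -eth/-est suffix
# test are illustrative assumptions and produce false positives
# ("teeth", "best"): precisely the gap between the effect we recognize
# and the procedure that approximates it.
import re

ARCHAIC_MARKERS = {"thou", "thee", "thy", "thine", "hath", "doth", "ye"}

def archaism_score(text):
    """Fraction of tokens that look archaic under this crude test."""
    tokens = re.findall(r"[a-z]+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens
               if t in ARCHAIC_MARKERS or t.endswith(("eth", "est")))
    return hits / len(tokens)

sample = "Thou art a villain; he hath no mercy, and doth mock thee."
print(f"{archaism_score(sample):.2f}")  # 0.33 (4 of 12 tokens)
```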

I would venture that this moment of strange unfamiliarity, when we anxiously face a new set of tools and techniques, is a particularly good moment to ask these questions – or to revisit them with the kind of critical sharpness that formalism, New Criticism, and structuralism brought to bear on them. In part, this is an urgent call to take control of our discipline – to (re)theorize our own methods before we lose the sense of strangeness, before we get really comfortable with our new tools and the tacit assumptions they might impose on us. On the other hand, there is the sense of venturing into uncharted territory – of keeping alive the sense of wonder at the possibilities rather than mere awe at the challenges – of being able to ask questions that we couldn’t really have explored or answered before and, in many instances, questions that we might not have thought to ask at all.


A PDF of the handout we created for auditors, which includes (along with an extract of the above text) a set of abstracts and a bibliography.