DH2015 -- Digital New Testament Workshop

Table of Contents

Multi-version documents and the digital scholarly edition (Desmond Schmidt)
Phylogenetic analysis of textual variation (Stephen Carlson)
Simulation of textual transmission using statistical techniques (Tim Finney)
Materiality, critical editions and categories of ancient Christian texts (Claire Clivaz)

The 2015 Digital Humanities conference was held in Sydney, Australia, and included the inaugural "Digital New Testament" workshop with presentations by Desmond Schmidt, Stephen Carlson, Tim Finney, and Claire Clivaz. The workshop focussed on the interface between computer science and the humanities. The presenters and workshop participants formed a very interesting group, at home in fields as diverse as computer science, classics, New Testament, Hebrew Bible, and law. Some interesting points were raised during the workshop:

Software library for processing

It would be immensely useful if digital humanists had a library of software tools that could interoperate, perhaps taking inspiration from the pipe and filter architecture enjoyed by *nix users. (An architecture which operates on a graph-based model, where many pipes can flow into and out of each node of the graph, would be a Good Thing.) Many web-based resources published by digital humanists have an opaque back end, perhaps a relational database or proprietary software. Even if the resource is built from standards-based components (perhaps with TEI XML thrown into the mix), the work required to understand and interface with a particular resource's data and tools is a significant dampener.

How to proceed? Standards are one approach, though these often grow out of enterprises which already have working solutions. Perhaps we need to talk to the computer scientists and ask them what the optimal data structures are for humanities data (in its multifaceted forms, with its unique challenges)? What standard interfaces can we invent for the pipelines we need? What processing tools and environments will give the greatest return per unit of effort? Related to these questions, here is a video on a linked data platform for digital epigraphy by Hugh Cayless (Duke University). The Digital Classicist list is a good place to find like-minded people: LISTSERV at JISCMAIL.AC.UK. TEI XML can play a useful role as a method for data interchange.
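The pipe-and-filter idea can be sketched with Python generators, where each filter consumes the previous stage's output one item at a time. The filter names below are hypothetical illustrations, not an existing library:

```python
import re

# A minimal sketch of a pipe-and-filter text-processing pipeline using
# Python generators. The filters here are invented for illustration.

def read_lines(text):
    """Source filter: yield one line at a time."""
    for line in text.splitlines():
        yield line

def strip_markup(lines):
    """Filter: remove (very naive) angle-bracket markup."""
    for line in lines:
        yield re.sub(r"<[^>]*>", "", line)

def tokenize(lines):
    """Filter: split each line into word tokens."""
    for line in lines:
        yield from line.split()

# Compose the pipeline: stages chain like *nix pipes.
text = "<p>In the beginning</p>\n<p>was the Word</p>"
tokens = list(tokenize(strip_markup(read_lines(text))))
# tokens == ["In", "the", "beginning", "was", "the", "Word"]
```

A graph-based generalisation would allow a stage to feed several downstream consumers, but the linear chain above captures the basic composability being argued for.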

Competing hierarchies

This is a familiar problem for those who have attempted to apply hierarchical data structures (e.g. TEI XML) to things that humans have written, edited, corrected, spilt ink upon, torn pieces off, rearranged, and generally made a hash of. Stand-off markup is one approach to using a hierarchical markup language like TEI XML to represent the kind of complexity found in a typical manuscript.
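The stand-off approach can be illustrated in miniature: annotations live apart from the base text as character ranges, so overlapping hierarchies that XML nesting cannot express pose no problem. The labels below are invented examples:

```python
# A minimal sketch of stand-off markup: annotations are stored apart from
# the base text as (start, end, label) character ranges. The overlapping
# "line" and "clause" ranges below could not nest in ordinary XML.

base_text = "In the beginning was the Word"

annotations = [
    (0, 16, "line"),     # a physical line ending mid-clause
    (7, 29, "clause"),   # a linguistic unit overlapping that line
    (25, 29, "word"),
]

def spans(text, annos):
    """Resolve each annotation to the stretch of text it covers."""
    return [(label, text[start:end]) for start, end, label in annos]

resolved = spans(base_text, annotations)
# resolved[0] == ("line", "In the beginning")
```

In TEI XML the same idea is realised with pointers into the base text rather than integer offsets, but the principle is the same: the annotations need not nest.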

Multi-version documents and the digital scholarly edition (Desmond Schmidt)

This session explored the problems faced by those working with textual variation, whether within documents (e.g. auto-corrections) or between documents (e.g. differing renditions of a section of text). Using XML to represent non-hierarchical variation is problematic, and some processing methods discard information relating to within-document variation. Desmond has developed an elegant framework for processing variant texts. It is based on a multi-version document: a set of versions of a single work, structured to facilitate the kinds of things that textual scholars typically want to do.
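The multi-version document idea can be caricatured as an ordered list of text fragments, each tagged with the set of versions that contain it. This is a drastic simplification for illustration only, not Desmond's actual data model or code:

```python
# A simplified sketch of the multi-version document idea: fragments of
# text are stored once, each recording which versions contain it.
# The versions and readings below are invented.

mvd = [
    ("In the beginning was the ", {"A", "B"}),
    ("Word", {"A"}),
    ("word", {"B"}),
]

def extract(mvd, version):
    """Reconstruct the full text of one version."""
    return "".join(text for text, versions in mvd if version in versions)

def variants(mvd):
    """List the fragments on which the versions disagree."""
    all_versions = set().union(*(v for _, v in mvd))
    return [(text, v) for text, v in mvd if v != all_versions]

# extract(mvd, "A") == "In the beginning was the Word"
# variants(mvd) lists only the "Word"/"word" disagreement.
```

Storing shared text once makes comparison, collation, and version extraction natural operations rather than afterthoughts, which is the point of the framework.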

Desmond's presentation is available here.

Links to resources mentioned during Desmond's session:

Phylogenetic analysis of textual variation (Stephen Carlson)

After presenting a succinct and helpful summary of New Testament textual criticism, Stephen demonstrated the use of phylogenetic analysis of textual variation to better understand relationships between witnesses.[1] His presentation slides can be seen here.

Stephen's presentation included use of Trex online to perform phylogenetic analysis of some real data.
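Phylogenetic tools generally expect a character matrix: each variation unit becomes a "character" and each witness's reading becomes a state. The witnesses and readings below are invented for illustration, and the encoding is a generic sketch rather than Stephen's method:

```python
# A sketch of encoding textual variation for phylogenetic analysis:
# each variation unit is a "character", each reading a state, and "?"
# marks a lacuna. The data are invented.

witnesses = {
    "W1": ["a", "b", "a"],
    "W2": ["a", "a", "b"],
    "W3": ["b", "a", "?"],
}

def to_matrix(witnesses):
    """Map readings to integer states per variation unit; '?' stays missing."""
    n_units = len(next(iter(witnesses.values())))
    matrix = {}
    for name, readings in witnesses.items():
        row = []
        for unit in range(n_units):
            states = sorted({w[unit] for w in witnesses.values()} - {"?"})
            r = readings[unit]
            row.append("?" if r == "?" else str(states.index(r)))
        matrix[name] = "".join(row)
    return matrix

matrix = to_matrix(witnesses)
# matrix == {"W1": "010", "W2": "001", "W3": "10?"}
```

A matrix in this shape is what formats such as NEXUS carry to phylogenetic software, which then infers a tree grouping witnesses that share states.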

Simulation of textual transmission using statistical techniques (Tim Finney)

This session revolved around a program to simulate the development of a textual tradition through imperfect hand-copying of a text. Instructions on how to download and install the program are located here.
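The basic idea can be illustrated with a toy model (this is not Tim's program): each generation, every extant witness is copied a fixed number of times, and each word is miscopied with a small probability.

```python
import random

# A toy illustration of simulated textual transmission: each generation
# copies every extant witness, corrupting each word with a small
# probability. A corrupted word is marked with "*" for visibility.

def copy_text(words, error_rate, rng):
    """Copy a word list, corrupting each word with probability error_rate."""
    return [w + "*" if rng.random() < error_rate else w for w in words]

def simulate(original, generations, copies_per_witness, error_rate, seed=0):
    rng = random.Random(seed)
    tradition = [original]
    for _ in range(generations):
        new = []
        for witness in tradition:
            for _ in range(copies_per_witness):
                new.append(copy_text(witness, error_rate, rng))
        tradition.extend(new)
    return tradition

tradition = simulate("in the beginning".split(), generations=2,
                     copies_per_witness=2, error_rate=0.1)
# After two generations: 1 + 2 + 6 = 9 witnesses in the tradition.
```

Even this crude model shows how errors accumulate down branches of the tradition, which is what a realistic simulation studies statistically.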

Materiality, critical editions and categories of ancient Christian texts (Claire Clivaz)

Claire presented a survey of the current state of digital humanities as it relates to the New Testament. Focussing on a number of recent developments in the field, Claire raised the issue of opaque interfaces which make it difficult for researchers to use the data collections that stand behind online resources. One way to improve matters would be to make back-end data available in standard formats (e.g. TEI XML).

[1] A witness is a possibly fragmentary and lacunose instance of the text, whether as a manuscript, lectionary, quotation, amulet, inscription, ostracon, graffito, ...