The NWO-sponsored research project ‘Turning over a new leaf’, represented by Erik Kwakkel, organized the intense and fruitful colloquium Manuscripts and Digital Humanities on Wednesday 22 April. The Academy Building of Leiden University, just beyond the Hortus Botanicus, hosted a diverse group of digital and non-digital humanists interested in handwritten texts, palaeography, codicology, statistics, biology and software development, ranging from 20 to 90 years old.
After the Colloquium … Erik Kwakkel, Manuscripts of the Latin Classics 800-1200, Leiden: Leiden University Press, 2015.
Each of the four talks deserves special attention. I’ll try to summarize their content and collect some of the issues raised in two posts; the second will follow soon! The abstracts of the papers are online here.
New post on the DiXiT Blog!
After two days in Grenoble spent outlining the scientific guidelines of the project and the editorial guidelines of the blog, FONTEGAIA Blog is now open!
~ Fontegaia ~
Italian studies in the digital era.
A digital library, a blog serving as a scholarly journal and, soon, a periodical.
“The Scholarly Digital Edition and the Humanities. Theoretical approaches and alternative tools”
Rome, 3-5 Dec 2014
The workshop ‘The Scholarly Digital Edition and the Humanities. Theoretical approaches and alternative tools’ was held in Rome from 3 to 5 December 2014. The initiative was organized by DigiLab (Centro interdipartimentale di ricerca e servizi of Sapienza University of Rome), with the support of the DiXiT Marie Curie Network, of which DigiLab is one of the partners. The scientific project is directed by Domenico Fiormonte; the workshop was organized by Federico Caria and Isabella Tartaglia. It was widely attended: students, PhD candidates, professors and researchers; representatives of various projects across Europe; philologists, philosophers, conservators and experts in media and communication.
The talks of Domenico Fiormonte, Desmond Schmidt and Paolo Monella focused on the digital scholarly edition and problematized different aspects of it, addressing its creation and usage. Their common argument is that scholars should not simply adhere to certain practices, experiences or traditions, or uncritically adopt a standard, but study them closely, identifying their strengths and weaknesses by tracing their history up to the present. This will enable scholars to make well-informed scientific choices.
At the beginning of July I attended the workshop “Édition analytique” in Lyon, sponsored by Consortium cahier and Labex Aslan, and organized by Alexei Lavrentiev and others from the équipe Lincobato. The workshop focused on TXM, but there were also sessions presenting other tools and projects under way in France (Algone, TEI Critical Edition Toolbox, SynopsX). I will say something about it from my perspective: at the moment I am more interested in editing tools than in analysis tools. But, as Serge Heiden pointedly asked: are editors engaged in producing editions for further analysis?
More info about the workshop can be found here.
TXM is a text/corpus analysis environment in the tradition of lexicometry and textual statistics, based on CQP and R. Its working units are lexical patterns (words and word-class information), internal structures (paragraphs, titles, footnotes) and the text with its metadata. A large corpus of ancient French texts (BFM), encoded in TEI, is available online; furthermore, it is possible to import one’s own corpus (in a wide variety of formats) and to annotate it using an automatic lemmatizer and a morphosyntactic tagger. One can also publish it on the TXM portal: different layouts (diplomatic, normalized, translation, etc.) and images can be displayed in a multi-panel window; audio and video documents can easily be linked to the texts, for instance in the case of a corpus of interviews.
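As a rough illustration (a hypothetical fragment, not taken from the BFM corpus), a TEI-encoded text combining the structural units and the word-level annotation (lemma and part of speech, as a tagger might produce) that such an environment can exploit could look like this:

```xml
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc>
      <titleStmt><title>Sample text</title></titleStmt>
      <publicationStmt><p>Unpublished sample for illustration.</p></publicationStmt>
      <sourceDesc><p>Invented example, not from a real corpus.</p></sourceDesc>
    </fileDesc>
  </teiHeader>
  <text>
    <body>
      <div type="chapter">
        <head>Chapter title</head>
        <!-- word-level annotation: lemma and part-of-speech values
             here are invented, as an automatic tagger might add them -->
        <p>
          <w lemma="dire" pos="VER">dist</w>
          <w lemma="le" pos="DET">li</w>
          <w lemma="roi" pos="NOM">rois</w>
        </p>
      </div>
    </body>
  </text>
</TEI>
```

The structural elements (div, head, p) correspond to the “internal structures” mentioned above, while the attributes on each word carry the linguistic annotation that queries and statistics operate on.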
While creating an edition, analysis may help to develop a deeper comprehension of the text; therefore TXM can be an important tool for editors, even if they don’t publish on this platform.
For TEI compliance have a look here.
Thomas Crombez, Genetic Criticism and the Auto-Saved Document, at DH Benelux.
Focusing on a case study from contemporary theater, Crombez addresses some important questions for textual criticism when dealing with born-digital documents. I will not go into the details of his project, but will just summarize some of the issues an editor faces with this kind of material, drawing on Crombez’s presentation and a related chapter of The Cambridge Companion to Textual Scholarship (2013)1, and including some of my own ideas.
Working on a virtual version of an author’s computer seems to be the most common approach to born-digital texts (beware of sensitive data!). The first complication is technological obsolescence and the fragility of migration paths (the digital dark age), which are common problems in digital preservation projects. Scholars could play a role in ensuring that documents remain legible and accessible in the future, and they cannot stay clear of the technical implications: just as a medievalist has to be able to “decode” a manuscript, textual scholars working on born-digital documents have to be able to identify the main technological issues involved in such projects. One should also pay attention to automatically generated data (for instance, a computer clock can be set incorrectly) and to the organization of files and folders.
One of the most interesting questions is: what is a version? Is it what the author explicitly marks as a version? Or every saved document? And what about Google Docs and other software in which one cannot save explicitly? Should the editor identify a main version, or consider the materials as they accumulate?
p.s.: also have a look at The .txtual Condition: Digital Humanities, Born-Digital Archives, and the Future Literary by Matthew Kirschenbaum and The Materialities of Close Reading: 1942, 1959, 2009 by David Ciccoricco!
1 Matthew G. Kirschenbaum and Doug Reside, ‘Track Changes: Textual Scholarship and the Challenge of the Born Digital’, in The Cambridge Companion to Textual Scholarship, ed. by N. Fraistat and J. Flanders, Cambridge 2013.