Jul 25, 2013

Reading in two languages

Yin Liu 

The oldest surviving English translation of any part of the Bible can be found in this manuscript:

[Image: the opening of Psalm 18 (Caeli enarrant) in the Vespasian Psalter, British Library MS Cotton Vespasian A.1]

The translation consists of the smaller words written above the larger main text. It is not a Bible translation in the sense that we customarily think of – that is, a text that can be read in its own right. Rather, this translation takes the form of an interlinear gloss, intended to help an English speaker read the Latin text.

The image is from a page of the Vespasian Psalter, British Library MS Cotton Vespasian A.1, a copy of the Psalms in Latin. The manuscript was made in the second quarter of the 8th century, probably in Canterbury. The gloss was added just over a century later, in the middle of the 9th century. The image shows the start of the psalm Caeli enarrant, Psalm 18 (Psalm 19 in most modern translations). The script of the main text, in Latin, is English uncial; the gloss is in an insular pointed minuscule, in a Mercian dialect of Old English. The English gloss provides a word-for-word translation of the Latin: thus in omnem terram is glossed in alle eorðan (‘into all the earth’).

This way of presenting a text in two languages probably seems straightforward and self-evident to us now. Think how often students annotate their textbooks in just the same way when learning a new language or reading a text in a language in which they are not fluent; and in modern linguistics, interlinear glosses, laid out in much the same way, are a regular feature, helping readers understand examples of speech or text in many different languages.
"This bit of parchment shows an example of medieval text being encoded, structured, and presented as data"

Nevertheless, the fact that bilingual interlinear glosses are among our earliest surviving examples of English text (and continued to be used through the Middle Ages and into the modern period) should give us pause. Before a gloss is added, the text on the manuscript page can be read as a relatively simple linear transcription of speech. But once the interlinear gloss appears, the reader is challenged to regard the text on the page as a much more complex structure, existing in two dimensions rather than one. No longer is there a single sequence of linguistic units to follow, but two parallel sequences, linked by a one-to-one correspondence between individual elements. Furthermore, the two sequences are not equal in value: the Latin sequence is privileged not only visually (it is written in a larger and more prominent script) but also in that it dictates the order of elements, on which the English gloss depends even when normal Old English word order would differ considerably from the Latin.

We may notice also that word-division in the two sequences does not always correspond. For example, the Latin text frequently joins the conjunction et (‘and’) to what we would consider the next word, with no space between: etopera, etnox, etipse. The English gloss separates out the conjunction and or ond (abbreviated with a symbol that looks like ‘7’) so that it is recognised as an individual linguistic unit: 7 werc, 7 neht, 7 he. This may seem trivial, except that word-separation by use of white space had only just been developed as an encoding convention by scribes such as these in the British Isles, and it had deep and far-ranging repercussions for reading practices throughout Europe and into the present day (Saenger 1997). Among its effects was a shift in the meaning of the word ‘word’. In this text, Latin verbum and English word mean ‘utterance, something said’. But separated script visually fragmented the stream of language into discrete units, which could then be processed and presented in new, non-linear ways.
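The computational stakes of this convention are easy to see. As a minimal sketch of my own (the Latin and Old English strings are quoted from the examples above), splitting a line of text on white space recovers individual words only where the writer has encoded the boundaries:

```python
# Word-separation as an encoding convention: a whitespace tokeniser
# recovers individual words only where the writer has marked the
# boundaries. The fused Latin 'et' stays attached to the next word,
# while the gloss's separated '7' (the scribes' sign for 'and')
# comes out as a token of its own.

latin_fused = "etopera etnox etipse"    # conjunction joined to the next word
gloss_split = "7 werc 7 neht 7 he"      # conjunction written separately

print(latin_fused.split())  # ['etopera', 'etnox', 'etipse']
print(gloss_split.split())  # ['7', 'werc', '7', 'neht', '7', 'he']
```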

This bit of parchment shows an example of medieval text being encoded, structured, and presented as data: tokenised and then arranged so that relationships between the tokens are visually apparent. Medieval English readers, grappling with a text in a foreign language, implemented reading aids such as interlinear glosses that allowed people to receive not only auditory but also visual linguistic information, and so created ways of understanding that depended ever more heavily on technologies of writing and of the book.
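To make the analogy concrete, here is a small sketch (mine, not anything medieval) of an interlinear gloss treated as data: two token sequences linked one-to-one, with column widths computed so that the correspondence is visually apparent. The token pairs come from the in omnem terram / in alle eorðan example above.

```python
# An interlinear gloss modelled as tokenised, aligned data:
# two parallel sequences linked by a one-to-one correspondence.
pairs = [("in", "in"), ("omnem", "alle"), ("terram", "eorðan")]

# Pad each column to the width of its longer member, so that
# corresponding tokens line up, as they do on the manuscript page.
widths = [max(len(latin), len(gloss)) for latin, gloss in pairs]
gloss_line = "  ".join(g.ljust(w) for (_, g), w in zip(pairs, widths))
latin_line = "  ".join(l.ljust(w) for (l, _), w in zip(pairs, widths))

print(gloss_line)  # the smaller gloss, written above the main text
print(latin_line)  # the larger Latin main text, below
```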

Addendum

Some useful qualifications to these remarks, and a much better linguistic analysis, can be found in Alderik H. Blom, Glossing the Psalms: The Emergence of the Written Vernaculars in Western Europe from the Seventh to the Twelfth Centuries (De Gruyter, 2017), 161-173.

Jul 19, 2013

A note on religious engagement with texts


At the Social, Digital, Scholarly Editing conference at the University of Saskatchewan, Paul Eggert began by saying that an edition is a transaction between the editor and the reader.  Wendy Phillips-Rodriguez's talk, “Social, Digital, Codicological Editing?”, discussed how that transaction can sometimes involve the highly specific needs of a group of readers.  The Digital Shikshapatri is an edition of an Indian religious text, accessed at the Bodleian Library most often by people who use the text to aid in worship. To accommodate this type of reader, the digital edition has some unique features: readings of the text by Yogi Charan Das, and audio files to accompany each page.  She also noted that this method of display acknowledges the belief that the spoken word realizes the true power of language in a way that written text does not.

This type of engagement with text is a common context for medieval manuscripts, and some designs show a great amount of care for the reader's experience.  Daniel Connolly discusses the itinerary to Jerusalem in Matthew Paris’s Chronica Majora, a large mapped sequence, as an "Imagined Pilgrimage" for clerical readers whose religious vows prevented them from leaving their monastery.  The Chronica Majora uses sophisticated indexing techniques to take the reader quickly from the written text to illustrations of various cities and scenes in the margins, which often take up most of the page.  One such device is the sign “φ”: when the connected text or image is oriented sideways, the sign is rotated as well, giving the reader a visual cue to the proper orientation.  Elsewhere, a paraph mark is repeated beside an image, indicating which section of text the image corresponds to.  These indexing structures support a quick transition between text and visual content that helps maintain an immersive imagined experience.

When I first came across the Chronica Majora, I couldn’t help thinking about another imagined pilgrimage I had recently encountered, the video game Journey by thatgamecompany. It’s one of the most beautiful games I’ve played, with an amazing soundtrack and not a word of written or spoken text.  In this virtual Hajj, a succession of trials brings the player’s character closer to a distant mountain’s peak.  Though the game is likely not intended primarily for a religious audience, the experience is highly meditative and serene.  I won’t force this comparison, but I would like to suggest that this type of religious engagement and imagined or virtual experience can be an important part of the creation of texts designed to facilitate meaningful interactions with the reader.

Ben Neudorf

Connolly, Daniel K. “Imagined Pilgrimage in the Itinerary Maps of Matthew Paris.” The Art Bulletin 81.4 (1999): 598–622.

Matthew Paris, Chronica majora, ca. 1250, vol. 1, Parker Library, Corpus Christi College, Cambridge, MS 26.

Phillips-Rodriguez, Wendy. "Social, Digital, Codicological Editing?" SDSE Conference. U of Saskatchewan. 11 July 2013. Address.


Jul 6, 2013

Latin and Lock-in

There’s a phenomenon that economists call ‘path dependence’, in which historical conditions and practices constrain future decisions about what technologies to adopt, so that a situation called ‘lock-in’ occurs: institutions and individuals have so much invested — education, funds, credibility — in one technology that competing technologies don’t have a chance. The classic (although controversial) examples cited in the economic literature are the adoption of the QWERTY keyboard rather than the Dvorak keyboard, the victory of the VHS format over Beta for videotapes, and the dominance of Microsoft operating systems for personal computers. With deep apologies to any economists out there, because I am a non-economist oversimplifying and probably misrepresenting a complex idea, let us consider, as a pre-modern case, the adoption of the Roman alphabet as a writing technology for encoding English.

When early medieval English speakers wanted to write, two encoding systems were available to them. The older system, at least as far as English was concerned, was the runic alphabet, the futhorc (as the English version is called; the standard account is Page 1999). Runes were used throughout the early Germanic world and survive mostly in inscriptions on stone, metal, and bone objects. 

Franks Casket, back panel. © Trustees of the British Museum
In some ways, the futhorc was a more ‘natural’ choice for English. It was designed for a Germanic language like English, and so included, for example, a symbol for the interdental fricative (now usually represented by ‘th’ in standard English spelling), as well as symbols for a slightly wider range of vowel sounds than the five that the Roman alphabet encoded. It also had the advantage of familiarity, since people in England had been using it since the 5th century — that is, it is very likely to have been brought to England when the Anglo-Saxons first settled there. Why, then, did the Roman alphabet win out?

It might at first be thought that the reason was technological or media-related: runes were designed to be carved in hard materials, whereas the Roman alphabet was, by the early Middle Ages, adapted to writing on parchment. But this cannot be the only or even the main reason. Both the Roman and the runic alphabets were used for carved inscriptions, and on coins and other such objects. Runes appear in manuscripts, most notably in some of the Exeter Book poems, and there is no compelling reason why the futhorc could not have been adapted to pen and parchment, as the Roman alphabet was. Neither alphabet was intrinsically superior, and it could be argued that the futhorc was better suited to writing English.

The most important reason the Roman alphabet was used to write English was cultural, for the socially dominant model for literacy throughout western Europe in the Middle Ages was Latin literacy. Until King Alfred’s educational reforms in the 9th century (and to some extent even afterwards), English speakers learned to read and write first by learning Latin, which of course was encoded in the Roman alphabet. In the early Middle Ages, the primary institution that offered a training in the technical skills of literacy was the Church, and the Church in western Europe conducted its business in Latin. And Latin literacy connected medieval English people with textual communities all over western Europe, who were similarly invested in the system (economists might call this a ‘network effect’). So the encoding of English with the Roman alphabet seems to be a good example of path dependence.

But a number of issues are worth further investigation. One is the persistence of runes throughout the Anglo-Saxon period, right up to the early 11th century, and usually embedded in (or juxtaposed with) texts otherwise written with the Roman alphabet. The functions of runes in these cases still need further study; Page’s chapter in An Introduction to English Runes on ‘Runic and Roman’ is full of questions that have not yet all been answered satisfactorily.

Early medieval English scribes were not presented with a simple either/or choice. Since Latin, and its encoding system, did not have sounds or symbols for the interdental fricative or for the labiovelar (the ‘w’ sound), Anglo-Saxon scribes adapted two runes to fill the gaps in the Roman alphabet: ‘thorn’ (þ) and ‘wynn’ (ƿ). The latter disappeared after the Norman Conquest (it was replaced by our modern letter ‘w’), but thorn persisted in England right through the Middle Ages. Thus, although the Roman alphabet was originally designed to write a different language, medieval English people found ways to adjust it so that it was a better fit, although still not an ideal one, for their own language. People get by with workarounds.
"although the Roman alphabet was originally designed to write a different language, medieval English people found ways to adjust it"
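Those workarounds have proved durable: the borrowed letters survive as ordinary Unicode code points today. The following sketch simply prints them; alongside thorn and wynn I have included eth and the Tironian sign for ‘and’ (the ‘7’-shaped symbol mentioned in the first post above), both of which appear in the glossed psalter rather than in this post.

```python
# The scribes' adaptations survive in modern character encoding:
# each borrowed or adapted letter has its own Unicode code point.
letters = {
    "thorn":       "\u00FE",  # þ  LATIN SMALL LETTER THORN
    "eth":         "\u00F0",  # ð  LATIN SMALL LETTER ETH
    "wynn":        "\u01BF",  # ƿ  LATIN LETTER WYNN
    "tironian et": "\u204A",  # ⁊  TIRONIAN SIGN ET, the '7'-shaped 'and'
}

for name, char in letters.items():
    print(f"{name:12} {char}  U+{ord(char):04X}")
```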

The relationship of Latin and English throughout the Middle Ages is worth exploring further not only for cultural and linguistic reasons but also for technological ones. When medieval people looked at a text in a manuscript and saw that it was written in English, they most likely formed a different set of expectations, and read it in different ways, than they would have in approaching a Latin text (see O’Keeffe 1990). Much work has been done on the encoding of Latin in medieval manuscripts, and the ways in which scribes writing Latin texts presented, organised, and searched for information. Much less has been done on the adjustments that medieval scribes made for encoding English. Throughout this project, which focusses on English texts, the Latin background of medieval literacy will be crucial.

Did the power and prestige of Latin lock written English into the Roman alphabet? Yes, it did, and that commitment is now so deep that today it is hardly questioned or even noticed. But not all path dependence is negative or even inefficient, as some have argued. The history of English writing hints at the social factors that influence technological decisions, but also at the flexibility, adaptability, and ingenuity of humans who use technology.

Yin Liu