Digital Image Processing and the Beowulf Manuscript
Kevin S. Kiernan
Over the past dozen years forensic and medical applications of technology first developed to record and transmit pictures from outer space have changed the way we see things here on earth, including Old English manuscripts. With their talents combined, an electronic camera designed for use with documents and a digital computer can now frequently enhance the legibility of formerly obscure or even invisible texts. The computer first converts the analogue image, in this case a videotape, to a digital image by dividing it into a microscopic grid and numbering each part by its relative brightness. Specific image processing programs can then radically improve the contrast, for example by stretching the range of brightness throughout the grid from black to white, emphasizing edges, and suppressing random background noise that comes from the equipment rather than the document. Applied to some of the most illegible passages in the Beowulf manuscript, this new technology indeed shows us some things we had not seen before and forces us to reconsider some established readings.
For the most part the Beowulf manuscript is surprisingly well preserved and easy to read. Even the 2,000 or so letters that eventually crumbled from the edges after the Cottonian Library fire in 1731 are usually saved or restored in one way or another by the later eighteenth-century Thorkelin transcripts (Malone 1951; Kiernan 1986). Hundreds of the letters we thought were gone, moreover, are in fact only covered by the nineteenth-century paper frames individually made to prevent further deterioration of the edges of each leaf (Kiernan 1984). It is remarkable that after a thousand years the most illegible section of the Beowulf manuscript, folio 179 in the manuscript foliation, may well have been equally illegible in Anglo-Saxon times. Both recto and verso of folio 179 are severely damaged, with letters and words and whole passages appearing in facsimile as obliterated or faded beyond recognition. In the manuscript itself these sections have likewise remained obscure to the naked eye as well as to the sometimes more penetrating gaze of an ultraviolet lamp. Digital image processing, however, while providing no miraculous sightings, does help us see some things we have and have not been looking for.
Julius Zupitza, who published the first photographic facsimile of Beowulf in 1882, asserted without explanation that 'all that is distinct in the FS. in fol. 179 has been freshened up by a later hand in the MS.' (Zupitza 1882, 102). For nearly a hundred years scholars either accepted or doubted Zupitza's big assertion in desultory fashion before Tilman Westphalen demonstrated by an exhaustive analysis of the script that we are indeed dealing with a slightly different, 'later', handwriting on this folio (Westphalen 1967). Some day digital image processing may help us decide, with pattern-recognition programs, whether the later hand more likely belongs to the original scribe at a later time (Westphalen 1967; Kiernan 1981), to Laurence Nowell in the sixteenth century (Berkhout 1986), or to some other body not yet located. Right now it can help us study some of the illegible passages on this folio by furnishing more revealing black-and-white facsimiles than the ones currently available.
Norman Davis and Kemp Malone used the same photographs, taken in 1957 by the British Museum photographer, in their two facsimile editions (Davis 1959, v; Malone 1963, 120). The photographer took ultraviolet photographs of fourteen faded pages, and both Davis and Malone consequently ended up with ultraviolet photographs of this folio. When I first examined the Beowulf manuscript in 1977 I was surprised to find that their facsimiles did not adequately represent important aspects of these two pages, especially of the recto. They failed to show, in particular, the overall scraped look of the palimpsest, and they totally concealed the clear signs of further erasing in the filmy grey discoloration confined to the illegible passages (recto 5, 8-12, 14, 18-21, verso 1-2). There was, in addition, noticeably more contrast and less pervasive fading on the recto than the facsimiles indicated. When I examined the page under an ultraviolet lamp the reasons for these shortcomings in the facsimile became at least partly apparent. The only fluorescent effects came from the greyish film itself, making the obscure letters even more obscure by weakening the contrast. Davis is right about this folio in particular when he observes that 'the present ultra-violet photographs in general do not add to what Zupitza could read in the manuscript, and in one or two places they even show less ...' (Davis 1959, xiii). For folio 179, however, his observation says more about the inappropriateness of ultraviolet photographs in this context than about the reliability of Zupitza or the condition of the manuscript.
One appropriate technique for folio 179 turned out to be color photography, which reveals the localized grey discoloration as well as other features obscured by the mix of ultraviolet fluorescence with black-and-white photographs. There is an excellent color reproduction available of the recto, achieved without digital image processing, in the frontispiece to my book on the manuscript (Kiernan 1981). It gives scholars without access to the manuscript an accurate record of what the human eye can see on this page with the aid of a stable, high-intensity light source.
Another appropriate technique, digital image processing, suddenly emerged as a result of the space program. Enhancing its own image, the National Aeronautics and Space Administration (NASA) had set up a fund at the California Institute of Technology 'for the application of space technology to terrestrial uses.' In 'Digital Image-Processing Applied to the Photography of Manuscripts', John F. Benton, J. M. Soha, and A. R. Gillespie explained this new technology and convincingly applied it to a formerly intractable manuscript problem. Using 'an electronic camera and the image-processing techniques developed for space photography', they were able to decipher most of an erased ex-libris from a fourteenth-century manuscript (Benton et al. 1978). The image-processing techniques as they described them seemed ideally suited to decipher the illegible text on folio 179 too:
Contrast enhancement or optimization is the key technique; where fading has been a problem, this can include equalization of contrast in different areas of the image. Spatial filtering can be used to remove shading and permit even greater contrast enhancement. Other filtering techniques which 'detect' and enhance edges are useful in emphasizing the strokes of a written text. Additional processing can then be employed to differentiate between actual character strokes and random background noise, erasing much of the latter (Benton et al. 1978, 47).
Their study convinced me at the time that digital image processing was 'very likely to enhance the ink traces and firmly establish some currently disputable readings' on folio 179 (Kiernan 1981, 233).
Digital image processing, no matter what it sounds like, is not related to prestidigitation and does not in other ways depend on sleight-of-hand. A computer's digits are not fingers, but numbers. The 'digital' part begins when the computer systematically converts a black-and-white 'analogue' image (a conventional photograph, for example) into a vast matrix of tiny, discrete points the computer can individually number by brightness. When there are enough of these tiny points, called 'pixels' (a nickname for picture elements), the naked eye can scarcely distinguish between the analogue display of the digitized image and its black-and-white source. A digitized image of anything, even a single letter, thus holds a staggering amount of computable 'information', whereas a conventional black-and-white photograph presents the same information in continuous, but incalculable gradations of tone across a two-dimensional surface of film (Cannon and Hunt 1981, 214). Once digitized, the information is limited by the number of pixels and then by the number of grey levels that can be related to the pixels. The equipment I used provides a grid of 512 x 512 pixels, or well over a quarter of a million pixels, and distinguishes 256 grey levels, far more, at any rate, than the honest human eye can detect. The 'image processing' part comes in when the computer goes on to enhance this original image with programs designed, for example, to achieve the full scale of contrast from black to white where the original murky image provides only a small range of greys.
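For readers curious about the mechanics, the contrast enhancement described here can be sketched in a few lines of modern code. This is a generic illustration, not the software actually used: the tiny sample array and the function name are invented, and NumPy stands in for the array processor.

```python
import numpy as np

# A hypothetical 4x4 patch of a murky digitized image: 8-bit grey
# levels clustered in a narrow middle band, as on the damaged folio.
murky = np.array([[110, 115, 120, 118],
                  [112, 130, 125, 119],
                  [117, 128, 135, 122],
                  [111, 121, 133, 140]], dtype=np.uint8)

def stretch_contrast(img):
    """Linearly rescale the grey levels so that the darkest pixel
    becomes 0 (black) and the brightest becomes 255 (white)."""
    lo, hi = int(img.min()), int(img.max())
    return ((img.astype(float) - lo) * 255.0 / (hi - lo)).astype(np.uint8)

enhanced = stretch_contrast(murky)
```

Note that the relative ordering of the grey levels is preserved; only their spread changes, which is why such an operation can make letter forms visible without inventing detail that is not in the data.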
In addition to help from the space program, manuscript studies have also benefited from police work, especially forensic science (Ellen 1989). As early as 1979 the Conservation Studio at the British Library acquired an electronic camera, the Video Spectral Comparator, specifically designed to detect in documents infra-red and ultraviolet wavelengths outside the visible range of the human eye. The unit consists of a highly sensitive video camera, a powerful zoom lens, and a 9-inch video monitor with over 600 scan lines to provide good resolution. There are appropriate lamps and controls and filters to allow the user to switch easily from ultraviolet to infra-red or to shut off all but the visible range of wavelengths. The results can be permanently stored on disk, or displayed in a photograph or other analogue representation. One of the 'accessories' not included in the British Library model, unfortunately, but listed in the manufacturer's brochure, is a Digital Image Processing System.
Experimenting in 1982 and 1983 with the Video Spectral Comparator without the computer but with Chief Conservation Officer Anthony Parker, I achieved marginal enhancement in contrast on the video monitor by using a blue-green light, with a combination of blue-green lamp filters, a blue-green slide filter, and reversed polarity (a negative image). Although the results were far from spectacular, they presumably would provide productive data for a digital computer. In a 1983 paper for the International Society of Anglo-Saxonists on the Beowulf manuscript, I therefore mentioned in passing that 'continuing technological advances, in particular electronic photography and digital image processing, give us some legitimate hope for future discoveries in this manuscript' (Kiernan 1984, 24). In the meantime I tried to persuade John Benton (now deceased), who was interested in the problem and had sent me a prepublication copy of a paper on 'The Electronic Subtraction of the Superior Writing of a Palimpsest', to work on the Beowulf palimpsest next.
Benton's paper, first delivered at an international symposium on techniques for deciphering effaced texts, was supposed to be published during the 1980s, but has not yet appeared. His conclusions are worth paraphrasing. After nearly nine years of 'evaluating the application of electronic cameras and digital image processing to manuscript problems, attempting to decide which of its benefits could have been obtained by other means and whether the results are worth the cost', Benton and his team reached four conclusions (Benton forthcoming, 101-2): (1) every major centre for the study of manuscripts should invest in an electronic camera like the Video Spectral Comparator, which has a sufficient signal-to-noise ratio to render acceptable service for the purpose; (2) skilled conventional photography, sometimes with the aid of an electronic camera, will in most cases obviate the need for digital image processing, because the high contrast it achieves is rarely decisive in solving manuscript problems; (3) some manuscript problems can be solved only with digital techniques, but the data can come from digitizing a conventional photograph as well as directly from an electronic camera; and (4) digital image processing is not yet commercially viable for manuscript studies.
These conclusions are less enthusiastic about the merits of digital image processing than the team's earlier work. Indeed, as recommendations they discourage manuscript repositories from purchasing digital computers on the grounds that conventional photographs taken at the repositories can later be digitized at research facilities owning the computers. My own research did not undermine Benton's first two conclusions, for the Video Spectral Comparator had produced a better black-and-white image of folio 179 than the published black-and-white facsimiles, and skilled conventional color photography had produced the best image of all. It seemed too that folio 179 contained manuscript problems that only digital techniques might solve, but by the time I asked for Benton's help it was too late. Counseling 'long-term patience', Benton replied that the image-processing expert J. M. Soha 'has left JPL [NASA's Jet Propulsion Laboratory], our team has broken up, the Lab has a new director, and it is moving into other areas. I am sorry, but I can see no way for you to accomplish at JPL the difficult task you propose' (letter of 1 February 1984). Without access to JPL, my days as an incipient digital image processor seemed, well, numbered. Benton's fourth conclusion, that digital computers were not commercially viable for manuscript studies, seemed quite justified.
None the less, the terrestrial uses of this technology were proliferating rapidly. In the opening paragraph of his 1985 book, An Introduction to Digital Image Processing, Wayne Niblack encouragingly cites its use, among other applications, 'to control a sausage slicing machine to get equal slices from the irregularly shaped sausage' (Niblack 1985, 13). In fact its applications in the field of medicine had already made available to me a powerful digital computer well adapted for the study of manuscripts. In 1984 Dr Steven Nissen, Director of the Coronary Care Unit at the University of Kentucky Medical Center, acquired the Mipron-D microcodable array-processor by Kontron Electronics to enhance X-rays in the Cardiology Laboratory. The system converts from analogue to digital and the workstation, which a user controls by keyboard or mouse, includes two monitors for separate displays of data and images. At any time the results can be printed out, filmed, or saved on disk. Although menu-driven for medical applications, the Mipron-D is freely programmable and incorporates in its software nearly all standard image-processing algorithms.
During the summer of 1985, in an effort to provide the most effective input for the Mipron-D, I received the help of Clive Izard, Media Resources Officer in the British Library, who videotaped the illegible passages on folio 179 directly from the Video Spectral Comparator at the British Library. Like a photograph, a videotape is also an analogue representation, but is already partly digitized in horizontal scan lines. The changes in brightness along each scan line, however, are transmitted as a continuous electrical signal, which the computer must divide into discrete points numbered by their relative brightness to set up the same grid of pixels and the same range of grey levels as the photograph (Cannon and Hunt 1981, 214). The theoretical superiority of a videotape is its iterative representation of information frame by frame, which should improve the signal-to-noise ratio of the data (Adams et al. 1984, 315).
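The claimed advantage of frame-by-frame redundancy can be illustrated with a small simulation: averaging many noisy captures of one steady scene suppresses the random noise while leaving the signal intact. The numbers below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical steady grey patch (the 'true' page), captured 64 times;
# each frame is corrupted by independent random sensor noise.
true_patch = np.full((8, 8), 120.0)
frames = [true_patch + rng.normal(0.0, 20.0, true_patch.shape)
          for _ in range(64)]

# Compare the error of a single frame with that of the frame average.
single_error = np.abs(frames[0] - true_patch).mean()
averaged = np.mean(frames, axis=0)
averaged_error = np.abs(averaged - true_patch).mean()
```

Averaging n independent frames reduces the noise level by a factor of roughly the square root of n, which is the sense in which a videotape's repeated frames should improve the signal-to-noise ratio of the digitized data.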
In fact there turned out to be a serious drawback in relying on a videotape. When the original videotape was converted from the British to the American standard to make it compatible with the Mipron-D, the quality of the analogue image was considerably degraded by 'background noise' associated with the video equipment. However, as Fig. 1 illustrates, the image-processing algorithms of the Mipron-D were remarkably successful at enhancing the contrast of the degraded image.
[Fig. 1: Folio 179r5 video before and after processing]
The disappointing results in both the input and output of this image turned out to be instructive and helpful. First, they demonstrated that it was not yet a routine matter to acquire a videotape in one country and digitize it in another. They also showed that, even under adverse conditions, digital image processing yielded better contrast in the illegible passages than the published black-and-white facsimiles. Finally, the results were good enough to persuade the officials at the British Library and the management of Kontron Electronics in Watford, England, to try for better results by hooking the digital computer directly to the Video Spectral Comparator in the British Library.
During the summer of 1987 Kontron's manager in Watford, Leslie Stump, and Specialist Engineer John Richardson set up a complete image-processing station in the Conservation Studio, where we digitized the results from the Video Spectral Comparator and saved them on the system's 10-megabyte backup cartridge. Later, with the guidance of Richardson at Kontron Electronics in Watford, I tested a full range of enhancement functions on the digitized images. The image processing began with the computer calculating and displaying in a histogram the overall grey scale of each image, Fig. 2.
[Fig. 2: Initial histogram of folio 179r5]
The initial histogram is continually recalculated and redisplayed as new algorithms are tried. The results are displayed on the data and image monitors and can at any time be printed or photographed by the system's printer and hardcopy devices.
The most relevant system software includes many image-enhancement functions that rescale the grey-level distribution of the digitized image and enhance its quality using suitable convolution filters (Release 1986, 2). For example, a low-pass filter can reduce the high-frequency components of an image while preserving the low-frequency ones. This function usually enhances a degraded image because the image signal tends to be concentrated in the low frequencies while background noise is spread across all frequencies. Although a low-pass filter gets rid of a lot of noise while losing only a little signal, the part of the signal it loses will include the high frequencies from the details, such as the edges of letters, making the image look blurred (Lim 1984, 11-14). A high-pass filter can sometimes correct this problem, but it will also strengthen the background noise.
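The low-pass filtering just described can be sketched with a standard 3x3 mean filter: smoothing a noisy patch isolates the low-frequency component, and the residue (original minus smoothed) is the corresponding high-pass component. This is a generic textbook filter on invented data, not the Mipron-D's own convolution code.

```python
import numpy as np

def low_pass(img):
    """3x3 mean filter: replace each pixel by the average of its
    neighbourhood, suppressing high-frequency noise (at the cost
    of blurring fine detail such as letter edges)."""
    padded = np.pad(img.astype(float), 1, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += padded[dy:dy + h, dx:dx + w]
    return out / 9.0

rng = np.random.default_rng(1)
flat = np.full((16, 16), 100.0)            # hypothetical uniform grey area
noisy = flat + rng.normal(0.0, 15.0, flat.shape)

smoothed = low_pass(noisy)                 # low-frequency component
residue = noisy - smoothed                 # high-frequency component
```

On a featureless patch the residue is almost pure noise; on real letter strokes it would also contain the edges, which is exactly the blurring trade-off described above.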
In the initial histograms (see Fig. 2 above) the levels of brightness of the pixels were narrowly clustered in the middle of the scale rather than covering the full scale of 256 levels from black to white. In other words, the images lacked contrast. Two enhancement functions proved especially useful in improving the contrast of the illegible passages on folio 179. The first function is said to achieve a 'normalization of the grey level histogram' by stretching it to the full dynamic range of 0 to 255 levels of brightness (Release 1986, 101), Fig. 3.
[Fig. 3: Normalization of grey level histogram]
The second function is said to achieve a 'linearization of the grey level histogram' by generating a uniform distribution of this full range of grey levels (Release 1986, 102), Fig. 4.
[Fig. 4: Linearization of grey level histogram]
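If one assumes, plausibly, that the manual's 'linearization' corresponds to what is now usually called histogram equalization, the operation can be sketched as a remapping of each grey level through the normalized cumulative histogram. The code below is a generic reconstruction under that assumption, with an invented sample patch, not the Mipron-D routine itself.

```python
import numpy as np

def linearize(img):
    """Histogram equalization: remap each grey level through the
    normalized cumulative histogram so that brightness values are
    spread roughly uniformly over the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    cdf_min = cdf[cdf > 0][0]          # count at the first occupied level
    lut = np.round((cdf - cdf_min) * 255.0 / (cdf[-1] - cdf_min))
    return np.clip(lut, 0, 255).astype(np.uint8)[img]

# A hypothetical low-contrast patch, grey levels bunched near mid-scale.
clustered = np.array([[118, 120, 122, 119],
                      [121, 124, 117, 123],
                      [116, 125, 120, 118],
                      [122, 119, 126, 121]], dtype=np.uint8)

equalized = linearize(clustered)
```

Because the lookup table is monotonic, darker areas stay darker than lighter ones; the function redistributes the available contrast rather than inventing new information.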
Together these functions significantly enhanced the contrast in all of the illegible passages on folio 179. In some cases the increased contrast made a difference, in some cases it did not.
I have chosen from the results two passages from folio 179 recto to illustrate the ways in which digital image processing can and cannot help us interpret the illegible passages in this part of the Beowulf manuscript.
The first illustration comes from line 5 of the recto. In the late eighteenth century Thorkelin and his copyist record 'seðe on hea ... hord beweotode', restoring for us the now lost letters, ode, at the end of the line, but leaving a large gap between hea and hord. Since neither copyist even attempted a conjectural reading, we have no reason to think that the passage was any more legible in the eighteenth century than it is today. To be sure, many subsequent scholars over the past 200 years have confidently supplied words to fill this gap, but the cumulative results more eloquently reinforce the need for something like digital image processing to serve as an arbiter.
John Mitchell Kemble owns the first published attempt at a restoration, on hea(pe), 'on the heap', in his 1833 first English edition of Beowulf (Kemble 1833 and 1835). Kemble had studied the manuscript the year before, but his restoration supplies only two letters in a space that easily fits six. In his 1861 Danish edition, N. F. S. Grundtvig, who had studied the manuscript at first hand in 1829 (Grundtvig 1861, xx), points out that Kemble's reading is not supported by the manuscript, gives a more plausible transcription in 'on heaw... h...ðe', and conjecturally restores h...ðe as hæðe, 'heath' (Grundtvig 1861, 76, 166). Grundtvig's restoration, with medial ð for þ, as was his habit, thus anticipates by ten years the result of Eduard Sievers's 1870-1871 collation, on heaum hæþe, 'on the high heath', by far the most complete and accurate restoration that had yet appeared (Sievers 1910, 418).
In 1882 Zupitza put this reading out of focus with his transcription, on hea[ðo]-hlæwe, 'on the war-mound'. According to Zupitza (102), 'what is left of the two letters after hea justifies us in reading them ðo (the stroke under the line must be accidental)'. Defending the rest of his transcription, he says 'very little of hlæwe freshened up; the h indistinct, læwe pretty certain, but the w may be easily mistaken for þ in consequence of the h of hwylc on fol. 179v being visible through the parchment ...' (Zupitza 1882, 102). Two diplomatic editions appearing at about the same time as Zupitza's facsimile and transliteration offer different readings. The year before, in 1881, Alfred Holder had somewhat uncertainly recorded on heáure hæþe, 'on the grey heath' (Holder, 64), the right shade for image processing at any rate. Holder was strangely dubious about the u but not about the re, and like Zupitza thought the þ might be a w. Similarly, in 1883, Richard Wülker saw either hea:re h:þe or hea:pe h:we or some combination of them (Wülker 1883, 104).
It is curious that these nineteenth-century scholars and their predecessors all had difficulty reading the letters um immediately following hea. Perhaps, as with many palimpsests, the underwriting has become clearer in the course of time (Benton et al. 1978, 40-1 and note 3). In any case, apart from Zupitza's [ðo], which is still prevalent because of the facsimile, all modern sources agree that the letters are clearly um. In 1920 Chambers printed on heaum hæþe, but argued for on heaum hope, 'on the high hollow', in his note: 'The word might be hæþe or hope'. Chambers says, 'Sedgefield [1910 and 1913] reads heaum hæþe, "on the high heath." Indeed hæþe was also read by Sievers in 1870-1 ..., so this is probably to be taken as the MS. reading. However to me it looks more like heaum hope, "on the high hollow"' (Chambers 1920, 109). In 1922, without recourse to the manuscript, Ernst Kock suggested on heaum hofe, ' in the high abode', on the model of Genesis 1489, of þam hean hofe (Kock 1922, 176-7). The new reading gained the aura of authority, however, when Sedgefield, for some reason thinking that ultraviolet supported it, printed hofe instead of hæþe in his third edition (Sedgefield 1935).
Only two of these various restorations have currency in modern editions. Klaeber, von Schaubert, and Nickel all endorse on heaum hæþe. In his facsimile edition, Davis observes that 'the reading heaum hæþe, instead of [Zupitza's] hea[ðo]-hlæwe, is certainly possible, even probable' (Davis 1959, xiii). My 1977 transcription with the help of high-intensity lighting was on hea(um hæþe) too (Kiernan 1981, 235). But on heaum hofe, supposedly confirmed by ultraviolet, has much momentum, appearing as it does in the editions of Dobbie, Wrenn, Bolton, Malone, Chickering, and Swanton.
Leaning heavily against this momentum, however, digital image processing shows that heþe, presumably a spelling variant of hæþe, is the most plausible manuscript reading, Fig. 5.
[Fig. 5: Images of folio 179r5 before and after processing]
The top of the photograph displays the image and its grey-level distribution before processing; the bottom shows them after stretching across the full dynamic range from 0 to 255 and with a uniform distribution. By strongly enhancing the shape of the entire letter e, including the middle bar and tongue, digital image processing decisively shows that the first vowel is not an o, eliminating hofe (and hope). It also clearly shows a continuous, if somewhat crooked, ascender for the following consonant, thus identifying it as þ. Since the crooked vertical bar carries through to the lower minim line, its crookedness is not a feature of the ascender alone and should not cast doubt on the identity of þ. Digital image processing cannot, however, adjudicate Zupitza's claim that the ascender is shine-through from an h on the opposite side of the leaf. Here, as always, one must return to the manuscript itself for the last analysis. A close look at the manuscript with high-intensity light reveals two coinciding strokes with different ink from recto and verso.
Line 14 recto presents an interesting problem, with and without digital image processing. The earliest transcripts do not record anything at all between Nealles and weoldu. Having studied the manuscript in 1832, Kemble in the first English edition confidently prints mid and ge, both without his conjectural brackets (Kemble 1833). Benjamin Thorpe, who collated the manuscript in 1830, is not so confident, and mildly demurs by placing mid in square brackets (Thorpe 1855, 149). Grundtvig, who used the manuscript the year before, is even less secure, printing mid in italics, attributing it to Kemble in a footnote, and suggesting instead the emendation 'to ge-wealdanne?' (Grundtvig 1861, 76). In his textual notes at the end, moreover, he represents the manuscript as reading only '..ge-weoldum' (167). Wülker's 'mid kaum zu erkennen' ('mid scarcely discernible') sounds properly dubious too (Wülker 1883, 105).
These doubts notwithstanding, Zupitza like Kemble presents mid as if it were perfectly clear in the manuscript, and every modern editor has quietly followed his lead. No modern editor ever again brackets the word, nor is it included in Birte Kelly's lists of conjectures intended to restore readings lost through damage to the manuscript (Kelly 1983a, 253-6). In a rare and perhaps unique allusion to it in this century, Malone asserts that 'midge is faded but still legible' (Malone 1963, 85).
Despite its faded looks, this conjectural restoration has somehow attained the stature of an inviolable manuscript reading, actually taking precedence over undamaged manuscript readings following it. Rather than emending or rejecting it, editors have for some reason preferred to emend the exceptionally legible manuscript reading wyrmhorda cræft, 'the art or craft of dragon hoards', to wyrmhord a[b]ræ[c], 'he broke into the dragon hoard', to provide a missing verb for the main clause. There must be easier ways to find a verb. This radical emendation, which takes a genitive plural inflection from one word and uses it as a prefix on another, and changes c to b and ft to c, has no manuscript support at all. Birte Kelly's lists indicate that it was first used in Holthausen's second edition (1908) and that all subsequent editions of the poem have now accepted it (1983b, 249).
Digital image processing helps us deconstruct this awful emendation by showing that the word mid is by no means 'legible' in the manuscript. In 1977, using only high-intensity light, I noted the faint outline of a high caroline s following the ink traces Kemble and the others identify as mid (Kiernan 1981, 237). This barely visible s, which closely resembles the caroline s in Nealles at the beginning of the line, shows up only faintly in the color reproduction, but it is now easy for anyone to see with the superior contrast provided by digital image processing, Fig. 6.
[Fig. 6: Images of folio 179r14 before and after processing]
Taking into account the s and looking for a third-person singular verb form instead of a preposition, I have reinterpreted these obscure ink traces as næs, the contracted form of ne wæs, 'was not':
The n is distinct. What Zupitza mistook for the third minim of an m is the a-bow of an ash, which is also distinct, and may be fruitfully compared with the ash in wæs immediately above it in r13. What Zupitza took for an i is the main bar of the digraph: the a-loop to the left is triangular, and the e-loop to the right is large and unruly, like many of the e-heads on the folio. A millimeter to the right of this e-head (which Zupitza mistook for the tail of a d) there is a high caroline s, faintly visible even in the FS. In the MS, some rusty brown ink traces are still intact on the s, especially at the top and at the slight jag at the base of the curve (upper minim line) (Kiernan 1981, 237).
If we accept næs as the missing verb among these obscure ink traces, the principal clause, Nealles [næs] geweoldum wyrmhorda cræft, sylfes willum, means something like 'the craft of dragon hoards was not at all in his power or under his own control'. The man breaks into the dragon's den a few lines later (2225b), removing any imagined need to break wyrmhorda cræft into wyrmhord a[b]ræ[c]. Or we can continue to look for another verb with a high s among the more obscure and ambiguous ink traces that remain for us to ponder.
Ten years ago, electronic photography and digital image processing seemed on the verge of solving lots of problems on folio 179. 'Until these methods have been tried', I argued then, 'it is undesirable to pretend we have an established text for this folio' (Kiernan 1981, 233). Now that these methods have been tested, we still lack an established text for this part of the poem. At this stage, digital image processing cannot subtract the top text of a palimpsest and enhance the bottom text if, as in this case, the same kind of ink is used for both texts. The s in line 14 that destroys the conjectural restoration mid exemplifies this problem and it is not an isolated aberration. There are far more troubling cases to face, for example, in the commingled letters obscuring one another in lines 8-10 on the recto. By providing much better contrast, digital image processing sometimes intensifies these problems for us, forcing us to see them but not necessarily helping us solve them. The value of digital image processing in the specific case of this manuscript is not that it suddenly reveals things not already visible in one way or another to palaeographers, but that it markedly improves the legibility of facsimiles for everyone else. Rather than settling the text, however, digital image processing may ultimately oblige us to admit that we do not and probably cannot have an established text of Beowulf.
Adams, J. R., Driscoll, E. C. Jr., and Reader, C., 'Image Processing Systems', in M. P. Ekstrom, ed., Digital Image Processing Techniques (New York: Academic Press, 1984), 289-360.
Benton, J. F., Gillespie, A. R., and Soha, J. M., 'Digital Image-Processing Applied to the Photography of Manuscripts, with Examples Drawn from the Pincus MS of Arnald of Villanova', Scriptorium 33 (1978), 40-55 and plates 9-13.
Benton, J. F., 'Electronic Subtraction of the Superior Writing of a Palimpsest', forthcoming in J. Irigoin, ed., Techniques de déchiffrement des écritures effacées (Paris: Centre National de la Recherche Scientifique, 199?), 101-6.
Berkhout, C., 'Abstract of Oral Presentation on "Anglo-Saxon Studies in the Age of Shakespeare"', Old English Newsletter 19 (1986), A-28.
Cannon, T. M. and Hunt, B. R., 'Image Processing by Computer', Scientific American 245 (October 1981), 214-25.
Chambers, R. W., ed., revised edition of Beowulf with the Finnsburg Fragment, edited by A. J. Wyatt (Cambridge, England: Cambridge University Press, 1920).
Chickering, H. D., ed., Beowulf: A Dual-Language Edition (New York: W. W. Norton, 1977).
Davis, N., ed., Beowulf: Reproduced in Facsimile from the Unique Manuscript, British Museum MS. Cotton Vitellius A. xv. With a Transliteration and Notes by Julius Zupitza, 2nd edition, Containing a New Reproduction of the Manuscript with an Introductory Note, Early English Text Society 245 (London: Oxford University Press, 1959).
Dobbie, E. van K., ed., Beowulf and Judith, Anglo-Saxon Poetic Records 4 (New York: Columbia University Press, 1953).
Ellen, D., The Scientific Examination of Documents: Methods and Techniques (Chichester, England: Ellis Horwood, 1989).
Grundtvig, N. F. S., ed., Beowulfes Beorh eller Bjovulfs-Drapen det Old-Angelske Heltedigt (Copenhagen: Karl Schönberg, 1861).
Holder, A., ed., Beowulf. I. Abdruck der Handschrift im Britischen Museum (Freiburg: J. C. B. Mohr, ).
Hunt, B. R., 'Image Restoration', in M. P. Ekstrom, ed., Digital Image Processing Techniques (New York: Academic Press, 1984), 53-76.
'The "IPS" Measuring Program'. Operators' Manual, Release 4.4 (Munich: Kontron Bildanalyse, 1986).
Kelly, B., 'The Formative Stages of Beowulf Textual Scholarship: Part I', Anglo-Saxon England 11 (1983a), 247-74.
Kelly, B., 'The Formative Stages of Beowulf Textual Scholarship: Part II', Anglo-Saxon England 12 (1983b), 239-75.
Kemble, J. M., ed., The Anglo-Saxon Poems of Beowulf, The Travellers Song, and the Battle of Finnes-burh (London: William Pickering, 1833).
Kiernan, K. S., Beowulf and the Beowulf Manuscript (New Brunswick, New Jersey: Rutgers University Press, 1981).
Kiernan, K. S., 'The State of the Beowulf Manuscript 1882-1983', Anglo-Saxon England 13 (1984), 23-42.
Kiernan, K. S., The Thorkelin Transcripts of Beowulf, Anglistica 25 (Copenhagen: Rosenkilde and Bagger, 1986).
Klaeber, F., ed., Beowulf and the Fight at Finnsburg, 3rd edition (Boston: D. C. Heath, 1950).
Kock, E., 'Interpretations and Emendations of Early English Texts. X', Anglia 46 (1922), 173-90.
Krapp, G. P., ed., The Junius Manuscript, Anglo-Saxon Poetic Records 1 (New York: Columbia University Press, 1931).
Lim, J. S., 'Image Enhancement', in M. P. Ekstrom, ed., Digital Image Processing Techniques (New York: Academic Press, 1984), 1-51.
Malone, K., ed., The Thorkelin Transcripts of Beowulf, Early English Manuscripts in Facsimile 1 (Copenhagen: Rosenkilde and Bagger, 1951).
Malone, K., ed., The Nowell Codex (British Museum Cotton Vitellius A. xv, Second Manuscript), Early English Manuscripts in Facsimile 12 (Copenhagen: Rosenkilde and Bagger, 1963).
Niblack, W., An Introduction to Digital Image Processing (Englewood Cliffs, New Jersey: Prentice-Hall International, 1985).
Nickel, G., et al., eds., Beowulf und die kleineren Denkmäler der altenglischen Heldensage Waldere und Finnsburg (Heidelberg: Carl Winter, 1976).
Schaubert, E. von, ed., Heyne-Schückings Beowulf (Text), 17th edition (Paderborn: Ferdinand Schöningh, 1963).
Sedgefield, W., ed., Beowulf, 2nd edition (Manchester, England: Manchester University Press, 1913).
Sedgefield, W., ed., Beowulf, 3rd edition (Manchester, England: Manchester University Press, 1935).
Sievers, E., 'Gegenbemerkungen zum Beowulf', Beiträge zur Geschichte der deutschen Sprache und Literatur 36 (Tübingen, 1910), 397-434.
Swanton, M., ed., Beowulf: Edited with an Introduction, Notes and New Prose Translation (Manchester, England: Manchester University Press, 1978).
Thorpe, B., ed., The Anglo-Saxon Poems of Beowulf, the Scôp or Gleeman's Tale, and the Fight at Finnesburg (Oxford: Clarendon Press, 1855).
Westphalen, T., Beowulf 3150-55: Textkritik und Editions-geschichte (Munich: Wilhelm Fink, 1967).
Wrenn, C. L., ed., Beowulf with the Finnesburg Fragment, 3rd edition, revised by Whitney F. Bolton (New York: St Martin's Press, 1973).
Wülker, R. P., ed., Das Beowulfslied nebst den kleineren epischen, lyrischen, didaktischen und geschichtlichen Stücken, in Bibliothek der angelsächsischen Poesie, vol. 1 (Kassel: Georg Wigand, 1883).
Zupitza, J., ed., Beowulf: Autotypes of the Unique Cotton MS Vitellius A. xv in the British Museum, with a Transliteration and Notes, Early English Text Society 77 (London: Oxford University Press, 1882).