Anne R. Kenney is Associate Director of the Department of Preservation and Conservation at Cornell University. She is past president of the Society of American Archivists, has served on the International Council on Archives Imaging Committee and as director or co-director of a series of digital imaging projects for preservation and access to research materials.

Benchmarking Image Quality: From Conversion to Presentation

Anne R. Kenney

There is much to be said for capturing the best possible digital image you can. From a preservation perspective, the advantages are obvious--an "archival" digital master can be created to replace rapidly deteriorating originals and used to produce high quality paper facsimiles or computer output microfilm that meets national standards for quality and permanence. It may also make economic sense, as Michael Lesk has noted, to "turn the pages once" and produce a sufficiently high level image so as to avoid the expense of reconverting at a later date when technological advances require or can effectively utilize a richer digital file. This economic justification is particularly compelling as the costs of scanning and storage continue to decline, narrowing the gap between high quality and low quality digital images. Once captured, the archival master can be used to create derivatives to meet current, but varied user needs: high resolution may be required for printed facsimiles, moderate resolution for OCRing, and lower resolution for on-screen display and browsing. The quality of all these derivatives may be directly affected by the quality of the initial scan.

If there are compelling reasons for creating the best possible image, there is also much to be said for not capturing more than you need. At some point, adding more resolution will not result in greater quality, just higher costs. At Cornell, we've been investigating digital imaging in a preservation context for over 5 years. For the first three years, we concentrated on what was technologically possible--on determining the best image capture we could secure. For the last two years, we've been striving to define the minimal requirements for satisfying informational capture needs. No more, no less.

To help us determine what's minimally acceptable, we've been developing a methodology, called benchmarking, which is designed as a means to define requirements systematically. Benchmarking is an approach, not a prescription, and for many aspects of digital imaging, benchmarking is still uncharted territory. Much work remains to define conversion requirements for certain document types, e.g., photographs or newspapers; for conveying color information; for evaluating the effects of compression; and for providing access on a mass scale to a digital database of material representing a wide range of document types and document characteristics.

We began benchmarking with the conversion of printed text. We anticipate that within 2 years, quality benchmarks for image capture and presentation of the broad range of paper and film based research materials--including manuscripts, graphic art, halftones, and photographs--will be well defined through a number of projects currently underway. In general, these projects are designed to be system independent and are based increasingly on assessing the attributes and functionality characteristic of the source documents themselves, coupled with an understanding of user perceptions and requirements.

Why benchmarking?

Because there are no standards for image quality, and because different document types require different scanning processes, there is no "silver bullet" for conversion. This frustrates many librarians and archivists who are seeking a simple solution to a complex issue. I suppose if there really were the need for a silver bullet, I'd recommend that everything be scanned at a minimum of 600 dpi with 24 bit color, but that would result in tremendously large file sizes, and a hefty conversion cost. One would also be left with the problems of transmitting and displaying those images.

We may have begun benchmarking with conversion, but are now moving on to a consideration of the difficulties associated with presenting information on screen. The variables that govern display are many, and it will come as no surprise that they preclude the establishment of a single best method for presenting digital images. But here too the urge is strong to seek a single solution. If display requirements paralleled conversion requirements--that is, if a 600 dpi, 24 bit image had to be presented on screen, then AT BEST, with the highest resolution monitors commercially available, only documents whose physical dimensions did not exceed 2.7" x 2.13" could be displayed--and they could not be displayed at their native size. Now most of us are interested in converting and displaying items that are larger than postage stamps, so these "simple solutions" are for most purposes impractical, and compromises will have to be made.

What is benchmarking?

Digital benchmarking is a means to forecast a likely outcome. It begins with an assessment of the source documents and an identification of the relevant variables associated with whatever you are trying to project (conversion requirements, storage requirements, display requirements). Some of the variables are objective measurements (such as "dpi") and some are subjective (such as level of quality achieved or identification of significant detail). A formula is used to determine the relationship amongst those variables and to project a likely outcome. The final step in benchmarking is to confirm or modify those projections via implementation. If the benchmarking formula does not consistently predict the outcome, it may not contain the relevant variables or reflect their proper relationship--and it should be revised.

The object of benchmarking is to make informed decisions about those compromises and to understand in advance the consequences of such decisions. The benchmarking approach can be applied across the full continuum of the digitization chain, from conversion to presentation. Our belief is that benchmarking must be approached holistically, that it is essential to understand at the point of selection what the consequences downstream for conversion and presentation will be. This is especially important as institutions consider inaugurating large scale conversion projects. Towards this end, the advantages of benchmarking are several:

1. Benchmarking is first and foremost a management tool, designed to lead to informed decision-making. It offers a starting point and a means for narrowing the range of choices to something that's manageable. Although clearly benchmarking decisions must be judged through actual implementations, the time spent in experimentation can be reduced. Take conversion for example. If you have a document that you want to scan, you can buy a scanner and start experimenting with a variety of resolutions, bit depths, and enhancement capabilities. You will learn a lot in the process, but it may take you a long time to reach an informed decision on how to capture the necessary information, and you may be tempted to overstate your scanning requirements. With the benchmarking approach, you can determine how best to capture that document beforehand--by calculating the document's resolution and tonal reproduction requirements--and then evaluate that choice by scanning the document at those settings.

2. Benchmarking provides a means for interpreting vendor claims. If you have spent any time reading product literature, you may have become convinced as I have that the sole aim of any company is to sell its product. Technical information will be presented in the most favorable light, which is often incomplete and intended to discourage product comparisons. One film scanner for instance may be advertised as having a resolution of 7500 dpi; another may claim 400 dpi. In fact, these two scanners could provide the very same capabilities but it may be difficult to reach that conclusion without additional information. You may end up spending considerable time on the phone, first getting past the marketing reps and then questioning closely those with a technical understanding of the product's capabilities.

3. Benchmarking can assist you in negotiating with vendors for services and products. I've spent many years advocating the use of 600 dpi bitonal scanning for printed text and invariably when I begin a discussion with a representative of an imaging service bureau, he will try to talk me out of that high a resolution, claiming that I do not need it or that it will be exorbitantly expensive. I suspect he is in part motivated to make those claims because he believes them, and in part because his company may not provide that service and he wants my business. If I had not benchmarked my resolution requirements, I might be persuaded by what this salesperson has to say.

4. Benchmarking can also lead to careful management of resources. If you know up front what your requirements are likely to be and the consequences of those requirements, you can develop a budget that reflects the actual costs, identify prerequisites for meeting those needs, and, perhaps most important, avoid costly mistakes. Nothing will doom an imaging project more quickly than buying the wrong equipment or having to manage image files that are not supported by your institution's technical infrastructure.

5. Benchmarking can also allow you to be realistic about what you can deliver under specific conditions. It is important to understand that an imaging project may break at the weakest link in the digitization chain. For instance, if your institution is considering scanning its map collection, you should be realistic about what ultimately can be delivered to the user at her desktop. Benchmarking lets you predict how much of the image and what level of detail contained therein can be presented on-screen for various monitors. Even with the highest quality monitor available today, presenting oversize material with small detail is impractical given the current state of monitor technology.

How Does It Work?

Having spent some time extolling the virtues of digital benchmarking, I'd like to turn next to describing this methodology as it applies to conversion, and then to move to a discussion of on-screen presentation.

Objective Evaluation:

Determining what constitutes informational content becomes the first step in the conversion benchmarking process. This can be done objectively or subjectively. Let's consider an objective approach first. One way to do this would be to peg conversion requirements to the process used to create the original document. Take resolution, for instance. Film resolution can be measured by the size of the silver grains suspended in an emulsion, whose distinct characteristics are appreciated only under microscopic examination. Should we aim for capturing the properties of the chemical process used to create the original? Or should we peg resolution requirements at the recording capability of the camera used?

There are objective scientific tests that can measure the overall information carrying capacity of an imaging system, such as the Modulation Transfer Function, but such tests require expensive equipment and are still beyond the reach of all but industry and research labs. In practical applications, the resolving power of a microfilm camera is measured by means of a technical test chart where the distinct number of black and white lines discerned is multiplied by the reduction ratio used to determine the number of line pairs per millimeter. A system resolution of 120 line pairs per millimeter is considered good; above 120 is considered excellent. To capture digitally all the information present on a 35mm frame of film with a resolution of 120 lppm would take a bitonal film scanner with a pixel array of 12,240. There is no such beast on the market today.

How far down this path should we go? It may be appropriate to require that the digital image accurately depict the gouges of a wood cut or the scoops of a stipple engraving, but what about the exact dot pattern and screen ruling of a halftone? the strokes and acid bite of an etching? the black lace of an aquatint that only becomes visible at a magnification above 25x? Offset publications are printed at 1200 dpi--should we choose that resolution as our starting point for scanning text?

Significant information may well be present at that level in some cases, as may be argued for medical x-rays, but in other cases, attempting to capture all possible information will far exceed the inherent properties of the image as distinct from the medium and process used to create it. Consider for instance a 4 x 5 negative of a badly blurred photograph. The negative is incredibly information dense, but the information it conveys is not significant.

Obviously, any practical application of digital conversion would be overwhelmed by the recording, computing, and storage requirements that would be needed to support capture at the structure or process level. Although offset printing may be produced at 1200 dpi, most individuals would not be able to discern the difference between a 600 dpi and a 1,000 dpi digital image of that page. In choosing the higher resolution you would be adding more bits, increasing the file size, but with little to no appreciable gain. The difference between 300 dpi and 600 dpi, however, can be easily observed, and, in my opinion, is worth the extra time and expense to obtain. The relationship between resolution and image quality is not linear: at some point as resolution increases, the gain in image quality will level off. Benchmarking will help you to determine where the leveling begins.

Subjective Evaluation:

I would argue, then, that determining what constitutes informational content is best done subjectively. It should be based on an assessment of the attributes of the document rather than the process used to create that document. Reformatting via digital--or analog--techniques presumes that the essential meaning of an original can somehow be captured and presented in another format. There is always some loss of information when an object is copied. The key is to determine whether that informational loss is significant or not. Obviously for some items, particularly those of intrinsic value, a copy can only serve as a surrogate, not as a replacement. This determination should be made by those with curatorial responsibility and a good understanding of the nature and significance of the material. Those with a trained eye should consider the attributes of the document itself as well as the potential uses that researchers will make of its informational content.

So we begin benchmarking with a careful assessment of the informational content of the material at hand. To illustrate benchmarking for conversion, let's consider the brittle book, since I happen to know more about printed text than graphic material and because we've had a number of years to confirm our findings. Benchmarking for the conversion of text-based material is covered in a tutorial on image quality that Stephen Chapman and I co-authored for the Commission on Preservation and Access, so I'll just include a brief description here.

For brittle books published during the last century and a half, detail has come to represent the size of the smallest significant character in the text, usually the lower case "e." To capture this information--which consists of black ink on a light background--resolution is the key determinant of image quality.

Determining Scanning Resolution Requirements For Replacement Purposes:

The means for benchmarking resolution requirements in a digital world has its roots in micrographics, where standards for predicting image quality are based on the Quality Index (QI). QI provides a means for relating system resolution and text legibility. It is based on multiplying the height of the smallest significant character "h" by the smallest line pair pattern resolved by a camera on a technical test target, "p." QI=h x p. The resulting number is called the Quality Index, and it is used to forecast levels of image quality--marginal (3.6), medium (5.0) or high (8.0)--that will be achieved on the film. This approach can be used in the digital world, but a number of adjustments must be made to account for the differences in the ways in which microfilm cameras and scanners capture detail. Specifically, it is necessary to:

1. establish levels of image quality for digitally rendered characters that are analogous to those established for microfilming (note differences in quality degradation).

2. rationalize system measurements. Digital resolution is measured in dots per inch; classic resolution is measured in line pairs per millimeter. To calculate QI based on scanning resolution, you must convert from one to the other. One millimeter equals .039 inches, so to determine the number of dots per millimeter, you will need to multiply the DPI by .039.

3. equate line pairs to dots. Again, classic resolution refers to line pairs per millimeter (one black line and one white line), and since a dot occupies the same space as a line, two dots must be used to represent one line pair. This means the dpi must be divided by two to be made equivalent to "p."

With these adjustments, we can modify the QI formula to create a digital equivalent. From QI= p x h, we now have QI = .039dpi x h/2 which can be simplified to .0195dpi x h. For bitonal scanning, we would also want to adjust for possible misregistration due to sampling errors brought about in the thresholding process in which all dots are reduced to either black or white. To be on the conservative side, the authors of AIIM TR26-1993 advise increasing the input scanning resolution by at least 50% to compensate for possible image detector mis-alignment. The formula would then be QI = .039dpi x h/3 which can be simplified to .013dpi x h.
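The adjusted formulas can be expressed as a short calculation (a minimal sketch in Python; the function names are mine, not part of any standard):

```python
INCHES_PER_MM = 0.039  # one millimeter equals .039 inches

def qi_grayscale(dpi, h_mm):
    # QI = .039dpi x h / 2, simplified to .0195dpi x h
    return INCHES_PER_MM * dpi * h_mm / 2

def qi_bitonal(dpi, h_mm):
    # With the 50% resolution cushion advised by AIIM TR26-1993:
    # QI = .039dpi x h / 3, simplified to .013dpi x h
    return INCHES_PER_MM * dpi * h_mm / 3

# A 2 mm character scanned bitonally at 300 dpi:
print(round(qi_bitonal(300, 2), 1))  # 7.8, just under a QI of 8
```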

So how does all this work?

Well, consider a printed page that contains characters that measure 2mm high and above. If you were to scan the page at 300 dpi, what level of quality would you expect to obtain? By plugging in the dpi and the character height, you can solve for QI, and you would discover that you can expect a QI of 8, or excellent rendering.

You can also solve the equation for the other variables. Consider for example if your scanner had a dpi of 400. You can benchmark the size of the smallest character that you could capture with medium quality (a QI of 5), which would be .96mm high. Or you can calculate the input scanning resolution required to achieve excellent rendering of a character that is 3 mm high (200 dpi).
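Rearranging the bitonal formula to solve for the other variables can be sketched as follows (the constant .013 comes from the derivation above; the function names are mine):

```python
QI_FACTOR = 0.013  # bitonal scanning: QI = .013 x dpi x h

def smallest_character_mm(dpi, qi):
    # Solve QI = .013 x dpi x h for h, the character height in mm
    return qi / (QI_FACTOR * dpi)

def required_dpi(h_mm, qi):
    # Solve QI = .013 x dpi x h for the input scanning resolution
    return qi / (QI_FACTOR * h_mm)

print(round(smallest_character_mm(400, 5), 2))  # 0.96 mm at medium quality
print(round(required_dpi(3, 8)))                # 205, i.e. roughly the 200 dpi cited above
```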

With this formula, and an understanding of the nature of your source documents, you can benchmark the scanning resolution needs for printed material. We took this knowledge and applied it to the types of documents we were scanning--brittle books published from 1850-1950. We reviewed printers' type sizes commonly used by publishers during this period, and discovered that virtually none utilized type fonts smaller than 1 mm in height, which, according to our benchmarking formula, could be captured with excellent quality using 600 dpi bitonal scanning. We then tested these benchmarks by conducting an extensive on-screen and in print examination of digital facsimiles for the smallest font-sized Roman and non-Roman type scripts used during this period. This verification process confirmed that an input scanning resolution of 600 dpi was indeed sufficient to capture the monochrome information contained in virtually all books published during the period of paper's greatest brittleness. Although many of those books do not contain text that is as small as 1 mm in height, a sufficient number of them do. To avoid the labor and expense of performing item by item review, we currently scan all books at 600 dpi resolution.

Although we've conducted most of our experiments on printed text, we are beginning to benchmark resolution requirements for non-textual documents as well. In the case of graphic material, we let the "h" in the formulas represent the smallest complete part that is considered essential to an understanding of the entire page. A detail is usually comprised of smaller parts--such as the width of the lines used to render a letter--and its size is measurable. For benchmarking purposes using these formulas, the "h" would be the height (or diameter) in millimeters of the complete detail, not its subunits.

In a relief block, for instance, significant detail can be a small-scaled part or a background pattern; in a photograph, it can be a street sign in the distance or a face in a group shot; in a papyrus manuscript, it can be an "ancient pen-stroke, even small dots of ink." Detail represents the smallest complete part that lends meaning to the document. In addition to the size of the detail, its clarity and complexity should be taken into account. If a detail is slightly fuzzy, an excellent rendering of that detail (QI=8) can be obtained, but it too will exhibit those same "fuzzy" characteristics.

Benchmarking for conversion can be extended beyond resolution: to tonal reproduction (both grayscale and color); to the capture of depth, overlay, and translucency; to assessing the effects of compression techniques and compression levels on image quality; to evaluating the capabilities of a particular scanning methodology, such as the Kodak Photo CD format; to evaluating quality requirements for a particular category of materials, e.g., halftones; or to the relationship between the size of a document and the size of its significant details--a very challenging relationship that affects both the conversion and the presentation of maps, newspapers, architectural drawings, and other oversized, highly detailed source documents.

Benchmarking involves both subjective and objective components. There must be the means to establish levels of quality (through technical targets, samples of acceptable materials), the means to identify and measure significant information present in the document, the means to relate one to another via a formula, and the means to judge results on-screen and in print for a sample group of documents. Armed with this information, benchmarking enables informed decision making--which often leads to a balancing act involving tradeoffs between quality and cost, between quality and completeness, between completeness and size, or quality and speed.


Quality assessments can be extended beyond capture requirements to the presentation and timeliness of delivery options. Let's consider how benchmarking would work in the area of display.

We begin our benchmarking for conversion with the attributes of the source documents. We begin our benchmarking for display with the attributes of the digital images. And while I will come back to those characteristics, I'd like to turn first to some basic assumptions about network access and image display.

I believe that all researchers in their heart of hearts expect three things from displayed digital images: They want the full-size image to be presented on screen; they expect legibility and adequate color rendering; and they want images to be displayed quickly. Of course they want lots of other things, too, such as the means to manipulate, annotate, and compare images, and for text-based material, they want to be able to conduct key word searches across the images. But for the moment, let's just consider those three requirements: full image, full detail and tonal reproduction, quick display.

Unfortunately, for many categories of documents, satisfying all three criteria at once will be a problem, given the limitations of screen design, computing capabilities, and network speeds. Benchmarking screen display must take all these variables into consideration and the attributes of the digital images themselves as user expectations are weighed one against the other. We are just beginning to investigate this interrelationship at Cornell, and I want to acknowledge the work of my colleague, Stephen Chapman, who has spearheaded our efforts in this area. Although our findings are still tentative and not broadly confirmed through experimentation, I'm convinced that display benchmarking will offer the same advantages as conversion benchmarking to research institutions that are beginning to make their materials available electronically.

Now for the good news: it's easy to display the complete image and it's possible to display it quickly. It's easy to ensure screen legibility--in fact intensive scrutiny of highly detailed information is facilitated on screen. Color fidelity is a little more difficult to deliver, but progress is occurring on that front.

Now for the not so good news: given common desktop computer configurations, it may not be possible to deliver full 24-bit color to the screen--the monitor may have the native capability but not enough controller memory. The complete image that is quickly displayed may not be legible. A highly detailed image may take a long time to deliver and only a small percentage of it will be seen at any given time--you may call up a photograph of Yul Brynner only to discover you've landed somewhere on his bald pate.

Benchmarking will allow you to predict in advance the pros and cons of digital image display. Conflicts between legibility and completeness, between timeliness and detail, can be identified and compromises developed. Benchmarking can allow you to predetermine a set process for delivering images of uniform size and content, and to assess how well that process will accommodate other document types. Scaling to 72 dpi and adding 3 bits of gray may be a good choice for technical reports produced at 12 point type and above, but will be totally inadequate for delivering newspapers.

To illustrate benchmarking as it applies to display, let's consider the first two user expectations: complete display and legibility. We expect printed facsimiles produced from digital images to look very similar to the original--they should be the same size, preserve the layout, and convey detail and tonal information that is faithful to the original. Many readers assume that the digital image on screen can also be the same--that if the page were correctly converted, it could be brought up at approximately the same size and with the same level of detail as the original. It is certainly possible to scale the image to be the same size as the original document, but chances are information contained therein will not be legible.

If the scanned image's dpi does not equal the screen dpi, then the image on-screen will appear either larger or smaller than the original document's size. Because scanning dpi most often exceeds the screen dpi, the image will appear larger on the screen--and chances are not all of it will be represented at once. This is because monitors have a limited number of pixels that can be displayed both horizontally and vertically. If the number of pixels in the image exceeds that of the screen and the scanning dpi is higher, the image will be enlarged on the screen and not completely presented.

The problems of presenting completeness, detail, and native size are more pronounced in display than in printing. In the latter, industry is capable of very high printing dpi's, and the total number of dots that can be laid down for a given image is great, enabling the creation of facsimiles that are the same size--and often with the same detail--as the original.

The limited pixel dimensions and dpi of monitors can be both a strength and a weakness. On the plus side, detail can be presented more legibly and without the aid of an eye loupe--which for those conducting extensive textual analysis--such as papyrologists--is a major improvement over reviewing the fragments themselves. On the down side, because the screen dpi is often exceeded by the scanning dpi, and screens have very limited pixel dimensions, many documents can not be fully displayed if legibility must be conveyed. This conflict between overall size and level of detail is most apparent when dealing with oversized material, but it also affects a surprisingly large percentage of normal-sized documents as well.

Let's begin with the physical limitations of commonly available monitors:

Typical monitors offer resolutions from 640 x 480 at the low end to 1600 x 1280 at the high end. The lowest level SVGA monitor offers the possibility of displaying material at 1024 x 768. These numbers, known as the pixel matrix, refer to the number of horizontal by vertical pixels painted on the screen when an image appears.

In product literature, monitor resolutions are often given in dpi, which can range from 60 to 120, depending on the screen width and horizontal pixel dimension. Screen dpi can be a misleading representation of a monitor's quality and performance. For example, when the same pixel matrix is used on a 14", 17", and 21" monitor, the dpi resolution decreases as screen size increases. We might intuitively expect image resolution to increase with the size of the monitor, not decrease. In reality the same amount of an image--and level of detail--would be displayed on all three monitors set to the same pixel dimensions. The only difference would be that the image displayed on the 21 inch monitor would appear enlarged compared to the same image displayed on the 17 and 14 inch monitors.
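The inverse relationship between monitor size and screen dpi can be illustrated with a quick calculation (a sketch only: it assumes 4:3 monitors whose viewable width equals 4/5 of the quoted diagonal; actual viewable areas are somewhat smaller, so the figures are approximate):

```python
def screen_dpi(pixel_width, diagonal_inches):
    # For a 4:3 aspect ratio, width = diagonal * 4/5 (a 3-4-5 triangle)
    width_inches = diagonal_inches * 4 / 5
    return pixel_width / width_inches

# The same 1024-pixel-wide matrix on three monitor sizes:
for size in (14, 17, 21):
    print(size, round(screen_dpi(1024, size)))  # dpi falls from ~91 to ~61 as the monitor grows
```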

The pixel matrix of a monitor limits the number of pixels of a digital image that can be displayed at any one time. And, if there isn't sufficient memory associated with the controller for the monitor, you will also be limited in how much gray or color information can be supported at any pixel dimension. For instance, while the two-year old 14" SVGA monitor on my desk supports a 1024 x 768 display resolution, it came bundled with half a megabyte of video memory. It can not display an 8-bit grayscale image at that resolution, and it can not display a 24 bit color image at all, even if it's set at the lowest resolution of 640 x 480. It is not coincidental that while the most basic SVGA monitors can support a pixel matrix of 1024 x 768, most of them come packaged with the monitor set at a resolution of 800 x 600.
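The video memory a given pixel matrix and bit depth requires is straightforward to estimate (a minimal sketch; real controllers may need somewhat more for overhead):

```python
def frame_buffer_bytes(width, height, bits_per_pixel):
    # One full frame: width x height pixels, bits_per_pixel / 8 bytes apiece
    return width * height * bits_per_pixel // 8

HALF_MEGABYTE = 512 * 1024

# 8-bit grayscale at 1024 x 768 needs 768 KB -- more than the half megabyte on the card:
print(frame_buffer_bytes(1024, 768, 8) > HALF_MEGABYTE)   # True
# 24-bit color even at 640 x 480 needs 900 KB:
print(frame_buffer_bytes(640, 480, 24) > HALF_MEGABYTE)   # True
```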

Let's consider our old friend the brittle book and how best to display it. Recall that it may contain font sizes at 1 mm and above, so we've scanned each page at 600 dpi, bitonal mode. Let's assume that the typical page averages 4" x 6" in size--the pixel matrix of this image will be: 4 x 600 by 6 x 600, or 2400 x 3600-- far above any monitor pixel matrix currently available. Now if I want to display that image at its full scanning resolution on my monitor, set to the default resolution of 800 x 600, it should be obvious to many of you that I will be showing only a small portion of that image--approximately 5% of it will appear on the screen. Let's suppose I went out and purchased an $8,000 monitor that offered a resolution of 1600 x 1280. I'd still only be able to display 24% of that image at any one time.

Obviously for most access purposes, this display would be unacceptable--it requires too much scrolling or zooming to study the image. If it is an absolute requirement that the full image be displayed with all details fully rendered, I'd suggest converting only items whose smallest significant detail represents nothing smaller than one tenth of 1% of the total document surface. This means that if you had a document with a one millimeter high character that was scanned at 600 dpi and you wanted to display the full document at its scanning resolution on a 1024 x 768 monitor, the document's physical dimensions could not exceed 1.7" x 1.3". This works well for items such as papyri which are relatively small, at least as they have survived to the present. It also works well for items that are physically large and contain large sized features, such as posters that are meant to be viewed from a distance. If the smallest detail on the poster measured one inch, the poster could be as large as 42" x 32" and still be fully displayed with all detail intact.
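The maximum document dimensions follow from dividing the monitor's pixel matrix by the scanning dpi (a sketch of the arithmetic above; the 24 dpi figure for the poster is my own estimate of a resolution sufficient, under the QI formula, for excellent rendering of a one-inch detail):

```python
def max_document_inches(monitor_pd, scan_dpi):
    # At full scanning resolution, one monitor pixel shows one image dot,
    # so the displayable document size is the pixel matrix over the dpi.
    return tuple(round(pd / scan_dpi, 1) for pd in monitor_pd)

print(max_document_inches((1024, 768), 600))  # (1.7, 1.3) -- the papyrus case
print(max_document_inches((1024, 768), 24))   # (42.7, 32.0) -- the poster case
```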

The formula for calculating the maximum percentage of a digital image that will be displayed at a given monitor pixel matrix is (pd = pixel dimension):

% displayed = (1st monitor pd x 2nd monitor pd) / (1st image pd x 2nd image pd) x 100
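The formula translates directly into code. A minimal sketch, checked against the brittle-book figures above:

```python
def percent_displayed(monitor_pd, image_pd):
    """Maximum percentage of a digital image visible at one time.
    Both arguments are (width, height) pixel-dimension pairs."""
    mw, mh = monitor_pd
    iw, ih = image_pd
    return (mw * mh) / (iw * ih) * 100

book_page = (2400, 3600)   # 4" x 6" page scanned at 600 dpi

print(round(percent_displayed((800, 600), book_page), 1))    # 5.6
print(round(percent_displayed((1600, 1280), book_page), 1))  # 23.7
```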

Most images will have to be scaled down from their scanning resolutions for on-screen access, and this can occur in a number of ways. Let's first consider full display on the monitor, and then consider legibility. In order to display the full image on a given monitor, the image pixel matrix must be reduced to fit within the monitor's pixel matrix. The image is scaled by setting one of its pixel dimensions to the corresponding pixel dimension of the monitor.

The formula for scaling images for full display is:

If document width is less than or equal to document height, scale the digital image by setting the 2nd image pd to the 2nd monitor pd.

If document width is greater than document height, scale by setting the 1st image pd to the 1st monitor pd.
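The scaling rule above can be sketched as follows; the function name is mine, and the rule assumes the monitor and document orientations roughly agree:

```python
def scale_to_fit(image_pd, doc_w, doc_h, monitor_pd):
    """Scale an image for full display per the rule above: fix the
    pixel dimension chosen by the document's orientation and preserve
    the aspect ratio of the original."""
    iw, ih = image_pd
    mw, mh = monitor_pd
    if doc_w <= doc_h:
        # portrait (or square): set the 2nd (vertical) pd to the monitor's
        return round(iw * mh / ih), mh
    # landscape: set the 1st (horizontal) pd to the monitor's
    return mw, round(ih * mw / iw)

# Brittle book: 2400 x 3600 image, 4" x 6" page, 800 x 600 monitor.
print(scale_to_fit((2400, 3600), 4, 6, (800, 600)))   # (400, 600)

# Fraction of the original pixels discarded in the process:
print(round((1 - 400 * 600 / (2400 * 3600)) * 100))   # 97
```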

Let's consider our brittle book again. The width is smaller than the height, so we would scale the image's vertical dimension to 600, the vertical dimension of our monitor set at 800 x 600. With the vertical dimension set to 600, the horizontal dimension would be 400 to preserve the aspect ratio of the original. By reducing the 2400 x 3600 pixel image to 400 x 600, we will have discarded 97% of the information in the original. The advantages to doing this are several: it facilitates browsing by displaying the full image, and it decreases file size, which in turn decreases transmission time. The downside should also be obvious: there will be a significant decrease in image quality as a significant number of pixels are discarded. In other words, the image can be fully displayed, but the information contained in it may not be legible. To determine whether that information will be useful, we can turn to benchmarking formulas for display:

Digital QI formulas for images scaled for on-screen display:

For these formulas, it is necessary to consider four variables:

pd--the pixel dimension to which the image is scaled
QI--the quality index desired on screen (again, a reference must be established: 8 = excellent, 5 = good, 3.6 = legible, 3 = poor)
h--the height, in mm, of the smallest significant character
doc. dimension--the width or length of the document, in inches

If document width is less than or equal to document length:

QI = (2nd pd x .026 x h) / doc. length

If document width is greater than document length:

QI = (1st pd x .026 x h) / doc. width

These formulas can be used to solve for any one of the four variables: QI, pd, h, and doc. dimension.
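As a sketch, the display QI formula and its rearrangement for pd (the constant .026 is taken from the text; h is in mm, the document dimension in inches; function names are mine):

```python
SCALE = 0.026  # constant from the formulas above

def display_qi(scaled_pd, h_mm, doc_in):
    """QI of an image scaled to scaled_pd pixels along a document
    dimension of doc_in inches; h_mm is the height in mm of the
    smallest significant character."""
    return scaled_pd * SCALE * h_mm / doc_in

def pd_for_qi(qi, h_mm, doc_in):
    """The same formula solved for the scaled pixel dimension."""
    return qi * doc_in / (SCALE * h_mm)

# 1 mm character on a 6" long page at QI 3.6 (legible):
print(round(pd_for_qi(3.6, 1, 6)))     # 831 -- the text rounds to 830
# 1.6 mm body text on the same page:
print(round(pd_for_qi(3.6, 1.6, 6)))   # 519
```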

Let's return to the example of our 4" x 6" brittle page. If we assume we need to be able to read the 1 mm high character, but that it doesn't have to be fully rendered, then we set our QI requirement at 3.6, which should ensure legibility. We can use the benchmarking formula to predict the scaled pixel dimension that will result in a QI of 3.6 for a 1 mm high character. The formula would be:

2nd pd = (QI x doc. length) / (.026 x h)

or 2nd pd = (3.6 x 6) / (.026 x 1), which equals approximately 830 pixels.

This full image could be viewed on top-end monitors, such as those which support a 1600 x 1280 pixel matrix; over 90% of the image could be viewed on a 1024 x 768 monitor; but only a little over 70% of it could be viewed on my monitor set at 800 x 600.

We can also use this formula to determine a preset pixel matrix for a group of documents to be conveyed to a particular clientele. Consider a scenario where your primary users have access to monitors that can effectively support an 800 x 600 resolution. We could decide whether the user population would be satisfied with receiving only 70% of the document if it meant that they could read the smallest type, which may occur only in footnotes. If your users are more interested in quick browsing, you might want to benchmark against the body of the text rather than the smallest typed character. For instance, if the main text were in 12 point type and the smallest "e" measured 1.6 mm in height, then our sample page could be sent to the screen with a QI of 3.6 at a pixel dimension of 519, well within the capabilities of the 800 x 600 monitor.

One can also benchmark the time it will take to deliver this image to the screen. If your clientele are connected via ethernet, this image (with 4 bits of gray added to smooth out rough edges of characters and improve legibility) could be sent to the desktop in under a second--providing readers with full display of the document, legibility of the main text, and timely delivery. If the footnotes must be readable, the full page cannot be displayed at once and the time it will take to retrieve the image will increase. Benchmarking allows you to identify these variables and consider the tradeoffs and compromises associated with optimizing any one of them.
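The delivery-time estimate can be roughed out as follows. The 10 Mbps link speed and the uncompressed transfer are my simplifying assumptions for illustration, not figures from the text:

```python
def delivery_seconds(width_px, height_px, bits_per_pixel, link_mbps=10):
    """Rough transfer time for an uncompressed image over a link of
    link_mbps megabits per second (both are simplifying assumptions)."""
    bits = width_px * height_px * bits_per_pixel
    return bits / (link_mbps * 1_000_000)

# The 519-pixel scaled page (width about 346 px at the 4:6 aspect
# ratio) with 4 bits of gray per pixel:
print(round(delivery_seconds(346, 519, 4), 2))   # 0.07 -- well under a second
```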


Other uses of benchmarking could include:

Displaying photographs, where color fidelity and tonal range requirements can also be benchmarked against the original source documents, and where user requirements and computing capabilities can be evaluated to reach compromises on how best to present the information.

Determining the number of versions of an image that must be derived--to meet browsing needs as well as detailed on-screen inspection of a document's smallest attributes. For source documents containing small detail that must be presented to the viewer in full, the cost will be slower delivery and the inability to view the image in common browsers that won't support such large files.

Conclusions: imaging works well for highly detailed materials that aren't too big. For papyri, for example, "Most papyri are relatively small, at least as they have survived to the present and there is no need to create photographic versions; digital images created with current equipment will meet the group's specifications with no difficulty." Imaging is more limited for oversize color materials, such as maps. And it will work with oversize objects where the significant detail is not too small--posters, for example--so long as the smallest significant feature represents no less than one tenth of 1% of the total document surface.