The Importance of Understanding Manuscript Relationships
The text of the New Testament has undergone a lengthy and detailed transmission process, with copies multiplied across centuries. Even from the earliest days, believers viewed these writings with reverence, regarding the Christian scriptures as “God-breathed.” Yet to preserve them for use in congregations throughout the Mediterranean and beyond, Christians created new copies whenever older ones began to deteriorate, or when fresh demand arose in newly formed communities. Colossians 4:16 shows that the apostle Paul’s letters were to be shared and read in multiple congregations, implying that faithful copyists played a significant role in making the message available. As centuries passed, some scribes made unintentional errors, while others might have smoothed awkward wording or merged parallel texts. Knowledge of how scribes typically copied helps textual critics sift through variants and judge which are likely older or more faithful to the original.
Knowledge of scribal behavior is thus fundamental not only for establishing the best readings but also for writing a continuous narrative of how the text changed. Westcott and Hort insisted that “knowledge of documents should precede final judgment upon readings,” calling upon critics to weigh rather than simply tally manuscripts. But their approach was also shaped by the comparatively small pool of evidence in their era and by their reliance on certain text-types. Scholars since have found that if one is ignorant of how scribes normally worked—whether they expanded texts, omitted lines, or harmonized parallels—one’s textual evaluations may be flawed. This matter is all the more pressing because each scribe or scriptorium might display its own peculiarities, some introducing recurring mistakes, others demonstrating surprisingly careful fidelity. The question remains: how do we systematically determine these scribal tendencies and incorporate them into a robust textual analysis?
Historical Efforts to Classify and Compare Manuscripts
Since the dawn of printing, textual critics have recognized the existence of variants in the New Testament text. John Mill’s work in the early eighteenth century drew attention to thousands of variant readings. Ever since, an overriding aim has been to locate genealogical relationships among manuscripts. Early critics, lacking modern technology or widespread access to the oldest codices, resorted to partial methods, often investigating only selected variant passages or comparing new manuscripts to a standardized text, most often the “Textus Receptus.” In the late nineteenth century, Westcott and Hort propelled textual scholarship forward, focusing on broad text-types—Alexandrian, Western, and so forth—yet they themselves performed limited direct manuscript collation. They deduced textual alignments from smaller sample groups and an overall sense of genealogical branching.
By the twentieth century, critics like Hermann von Soden attempted more extensive collations, though at times with questionable accuracy. Meanwhile, others (e.g., F. C. Burkitt, Adolf von Harnack, and Kirsopp Lake) recognized that manuscripts must be weighed rather than counted, a method requiring a thorough or at least representative collation to detect subtle differences. The basic deficiency was that scholars typically sampled only a handful of passages, or they measured agreements solely by differences from the Textus Receptus. Yet, as was pointed out decades ago, comparing readings to the Textus Receptus says little about genealogical relationships, since two manuscripts may depart from that standard in entirely different ways. If two manuscripts happen to agree with or differ from the Textus Receptus in only half the variant units, how is one to gauge the closeness of their actual texts?
An era of methodological refinement emerged around the mid-twentieth century, aided by the research of E. C. Colwell, Gordon D. Fee, and other specialists. They recognized that sampling could be done effectively if, at each sample point, the entire range of variation was recorded, not just divergences from a single reference text. They also realized that measuring how often two manuscripts agreed at a point of variation was more helpful than measuring how often each diverged from the Textus Receptus. This impetus aimed at a truly genealogical perspective, seeking to show how manuscripts relate to one another, not just how they all line up with or break from a single editorial standard.
Colwell’s Quantitative Method: A Turning Point
E. C. Colwell stands out as an architect of a more precise methodology. In a series of writings, he argued that textual critics must compare each manuscript’s readings against every other manuscript’s readings at all points of variation. The quantity of variants might be formidable, but partial measures (like sampling only a few dozen or a few hundred sites) risk missing crucial evidence. Colwell recognized that truly exhaustive collation was too laborious for manual methods, but he anticipated that technology could lighten the load. The following pillars undergirded his approach:
- a) One must consider every place of variation where at least two witnesses differ from the rest, to avoid ignoring more minor or sporadic differences that might prove genealogically revealing.
- b) One must not rely on a single standard text—like the Textus Receptus or any other editorial product—but rather measure how manuscripts differ or agree in relation to each other.
- c) One must separate genuinely genealogical or significant variants (like transpositions with real semantic effect, additions or omissions of phrases, or conflations) from trivial ones (like mere spelling differences or nonsense errors).
- d) One must produce, for each manuscript, a numeric representation of how many times it agrees with known textual clusters, typically the Alexandrian (Egyptian), Byzantine, and “Western” or other recognized groupings, often including texts like Codex Bezae or Codex Vaticanus as anchor points.
Such a method opens the door to full quantitative analysis. If the data show that a certain minuscule aligns with the Byzantine group in 92 percent of the variants but with the Alexandrian group in only 60 percent, one can confidently place it in the Byzantine tradition. Meanwhile, if another minuscule stands at 72 percent with Alexandrian and 70 percent with Byzantine, more nuance is needed. Perhaps it is a “mixed” text, adopting Alexandrian readings in some chapters and Byzantine readings in others. This scenario is not unusual. It underscores that classification may need to be done by chapters or sections, because a scribe might have used multiple exemplars for different parts of a manuscript.
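To make the arithmetic concrete, the following minimal Python sketch computes such a pairwise agreement percentage. All unit labels, readings, and manuscript data here are hypothetical stand-ins, not actual collation results:

```python
# A minimal sketch of Colwell-style quantitative analysis.
# All collation data below is hypothetical, for illustration only.

def agreement_percentage(ms_a: dict, ms_b: dict) -> float:
    """Percent of variation units at which two witnesses agree,
    counting only units where both witnesses are extant."""
    shared_units = ms_a.keys() & ms_b.keys()
    if not shared_units:
        return 0.0
    agreements = sum(1 for u in shared_units if ms_a[u] == ms_b[u])
    return 100.0 * agreements / len(shared_units)

# Hypothetical collations: variation unit -> reading label.
minuscule = {"Acts 1:5/12": "a", "Acts 2:7/4": "b", "Acts 3:11/2": "a"}
vaticanus = {"Acts 1:5/12": "a", "Acts 2:7/4": "b", "Acts 3:11/2": "c"}

print(f"{agreement_percentage(minuscule, vaticanus):.1f}% agreement")  # 66.7%
```

The essential design choice, following Colwell, is that agreement is measured manuscript against manuscript, over only those variation units where both witnesses are extant, rather than against any printed standard.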
Progress in Grouping Manuscripts: Examples and Limitations
When textual critics apply quantitative analysis to a single book—like Acts, the Gospels, or the Pauline epistles—the result can be an impressive demonstration of how minutely the manuscripts differ or coincide. A well-known example comes from thorough studies of Acts, which reveal certain manuscripts (e.g., 1175) aligning strongly with Codex Vaticanus in that book, while others (e.g., 614 or 105) stand close to the Byzantine uncials or minuscules. By comparing how frequently they concur across the entire set of variant units, one identifies stable textual families or recurring patterns.
This method might highlight block mixture, as with Codex 33 in Acts, which shifts alignment between chapters 1–11 and 12–28. Full collation reveals that it shares an Egyptian alignment early on but changes to a more Byzantine alignment in the latter portion, or vice versa. The ability to spot such shifts depends on subdividing the text, ensuring that a single aggregated statistic for the entire book does not obscure transitional points. Without that approach, one might wrongly conclude that Codex 33 belongs to a single textual family.
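The same calculation can be run block by block. The sketch below (again with hypothetical data and an assumed division at Acts 12) simply partitions the variation units before tallying, so that a swing in alignment between Acts 1–11 and 12–28 shows up instead of being averaged away:

```python
from collections import defaultdict

def blockwise_agreement(ms, anchors, block_of):
    """Percent agreement of `ms` with each anchor witness, tallied per block."""
    agree = defaultdict(lambda: defaultdict(int))
    total = defaultdict(lambda: defaultdict(int))
    for unit, reading in ms.items():
        blk = block_of(unit)
        for name, anchor in anchors.items():
            if unit in anchor:                 # compare only where both are extant
                total[blk][name] += 1
                if anchor[unit] == reading:
                    agree[blk][name] += 1
    return {blk: {name: 100.0 * agree[blk][name] / t for name, t in row.items()}
            for blk, row in total.items()}

def acts_block(unit):
    """Map a unit label like 'Acts 13:4/2' to its block, splitting at chapter 12."""
    chapter = int(unit.split()[1].split(":")[0])
    return "Acts 1-11" if chapter <= 11 else "Acts 12-28"

# Example call (with hypothetical collations named codex_33, vaticanus, byz):
# blockwise_agreement(codex_33, {"Vaticanus": vaticanus, "Byzantine": byz}, acts_block)
```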
However, not all cases can be resolved by a purely numeric breakdown. Some manuscripts remain in a borderline range, say 65–70 percent agreement with two different major traditions. In such cases, the textual critic must examine the types of variants themselves. Are the shared readings mostly trivial, or do they represent distinctive expansions or omissions that rarely occur outside a known group? That question is answered by the next methodological step: “weighing” variants.
Weighing Variants: Beyond Simple Counting
Not all variants equally signify genealogical closeness. For instance, whether a manuscript spells John’s name as Ἰωάννης or Ἰωάνης might reflect a widespread interchange of double-n forms or standardization. Meanwhile, a large phrase missing in one tradition but present in another might convey strong genealogical significance. The missing phrase is not easily explained as a minor scribal slip but might reflect a distinct line of textual transmission. Gordon Fee’s approach, building on Colwell, called for starting with a full quantitative analysis, then “weighing” each variant based on its probable genealogical significance. If a variant is a trifling word-order shift or an easily made slip, it yields little genealogical traction. If it is a large conflation or unique phrase found only in a handful of manuscripts, it reveals a deeper shared ancestry among those manuscripts.
Hence a numeric alignment of 75 percent might not say much if half those agreements are trivial or purely orthographic variants. But if in 20 high-value variants—like expansions in Mark, unusual additions in Luke, or major omissions in John—the same manuscripts concur, their genealogical alignment is far more robust. This step addresses the shortcoming that raw percentages treat expansions and minor spelling changes equally.
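One simple way to encode this weighing step is to let each variation unit contribute a weight reflecting its probable genealogical significance rather than a flat count of one. The weight values in this sketch are illustrative guesses, not figures from Colwell or Fee:

```python
# A sketch of Fee-style "weighing" layered on the quantitative method.
# The weight values are illustrative assumptions, not published figures.

WEIGHTS = {"orthographic": 0.1, "word_order": 0.5, "omission": 2.0, "conflation": 3.0}

def weighted_agreement(ms_a, ms_b, unit_kind):
    """Weighted percent agreement; `unit_kind` maps each variation unit
    to its class of variant (one of the keys of WEIGHTS)."""
    shared = ms_a.keys() & ms_b.keys()
    total = sum(WEIGHTS[unit_kind[u]] for u in shared)
    agreed = sum(WEIGHTS[unit_kind[u]] for u in shared if ms_a[u] == ms_b[u])
    return 100.0 * agreed / total if total else 0.0
```

Two witnesses whose raw agreement rests mostly on orthographic units will score far lower on this measure than two that share the heavily weighted conflations and omissions.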
The Claremont Profile Method and Test-Passages
Because full collation across thousands of manuscripts is daunting, certain partial methods have been introduced. One such approach is the Claremont Profile Method, developed by Frederick Wisse and colleagues. It involves identifying characteristic variations in selected sample chapters—Luke 1, 10, and 20—and constructing profiles for major textual families. Then, an unidentified manuscript is tested at those sample points to see which family’s profile it most closely matches. If it matches none, it might represent a new or mixed text type.
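In programmatic terms, the profile idea reduces to matching a manuscript’s readings at the test passages against stored family profiles. The profiles, unit labels, and readings in the following sketch are invented purely to show the shape of the procedure:

```python
# A sketch of profile matching in the spirit of the Claremont Profile Method.
# Profiles, unit labels, and readings are hypothetical, for illustration only.

FAMILY_PROFILES = {
    "Byzantine":   {"Lk 1:10/3": "a", "Lk 10:4/7": "a", "Lk 20:9/2": "b"},
    "Alexandrian": {"Lk 1:10/3": "b", "Lk 10:4/7": "c", "Lk 20:9/2": "a"},
}

def classify(ms_readings):
    """Return (best-matching family, fraction of test passages matched)."""
    def score(profile):
        tested = [u for u in profile if u in ms_readings]
        if not tested:
            return 0.0
        return sum(ms_readings[u] == profile[u] for u in tested) / len(tested)
    best = max(FAMILY_PROFILES, key=lambda fam: score(FAMILY_PROFILES[fam]))
    return best, score(FAMILY_PROFILES[best])

print(classify({"Lk 1:10/3": "a", "Lk 10:4/7": "a", "Lk 20:9/2": "a"}))
# -> ('Byzantine', 0.666...) on this toy data
```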
This method allows a quick classification, especially valuable in dealing with large clusters of minuscule manuscripts known to be largely Byzantine. However, it can fail to detect block mixture that does not occur in the sample chapters, or it might misclassify a manuscript that only partially belongs to a known family. A further snag arises if the sample chapters do not contain enough genealogically significant variants to yield a robust classification. In some cases, the method lumps together manuscripts that are not truly textually related. The method’s limitations reemphasize that partial approaches, while efficient, must not be the final word for deep genealogical work.
The Aland Approach: Eliminating the Byzantine Cluster
Kurt and Barbara Aland’s major text-critical projects used standardized “test passages” to differentiate manuscripts that adhere to the Byzantine tradition from those that preserve older text forms. They collated about one thousand known points where the Byzantine text differs from the “original text,” as recognized by their own editorial decisions. Then they tested each manuscript at those points. If a manuscript consistently sides with the Byzantine reading, it is swiftly categorized as a Byzantine witness, and thus deemed of less significance for reconstructing the earliest text. If it diverges enough times, the manuscript qualifies as potentially valuable and might then receive closer scrutiny. This approach partially meets the pressing need to streamline the classification of numerous manuscripts. It can rapidly isolate the manuscripts worth deeper collation.
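Computationally, this screening step amounts to a simple filter: tally how often a manuscript sides with the Byzantine reading at the test passages and set aside those above a threshold. The 90 percent cutoff below is an illustrative assumption, not the Alands’ published criterion:

```python
# A sketch of an Aland-style test-passage filter. The threshold and data
# structures are illustrative assumptions only.

def screen(ms, byz_readings, threshold=0.9):
    """`byz_readings` maps each test passage to the Byzantine reading.
    Returns a triage decision for the manuscript `ms`."""
    tested = [u for u in byz_readings if u in ms]
    if not tested:
        return "insufficient data"
    byz_rate = sum(ms[u] == byz_readings[u] for u in tested) / len(tested)
    return "Byzantine: set aside" if byz_rate >= threshold else "flag for full collation"
```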
Nevertheless, the Aland approach does not yield refined genealogical connections among non-Byzantine witnesses or among subgroups within the Byzantine tradition. Nor does it typically address partial mixing. Moreover, using an “original text” from an edition arguably begs the question, since that text itself might need revision based on newly identified manuscripts. Still, the series “Text und Textwert” provides raw collation data in the Catholic Epistles, Pauline Epistles, and Acts, representing a large step forward. Researchers can glean partial genealogical relationships from that data. But it remains an incremental, partial collation approach rather than a comprehensive analysis.
Toward a More Unified and Exhaustive Classification
The most thorough method, reminiscent of Colwell’s original dream, is to collate each manuscript in full. That means comparing each verse’s variants among a chosen set of manuscripts. A complete database of variation emerges, from which one can produce a fully quantitative analysis, identify genealogically significant variants, and weigh them accordingly. Subdivisions of the text (like Acts 1–11, 12–28) can be used to detect block mixture, while a final profile of expansions, omissions, or distinctive variants can reveal deeper sub-groupings. Such an approach has become technologically feasible through computer collations, though still labor-intensive. The International Greek New Testament Project’s endeavors in Luke, for instance, or certain projects in John’s Gospel, illustrate how large the collation task can be.
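At the heart of such a project lies the database of variation itself. A minimal sketch of one record follows; the schema, sigla, and readings are hypothetical, but every analysis sketched above (quantitative, weighted, block-wise) can be derived from a table of such units:

```python
# A sketch of one record in a full-collation database (hypothetical schema).
from dataclasses import dataclass, field

@dataclass
class VariationUnit:
    location: str                    # e.g. "Acts 15:34/2"
    kind: str                        # "omission", "word_order", "conflation", ...
    readings: dict[str, list[str]] = field(default_factory=dict)  # reading -> witnesses

unit = VariationUnit(
    location="Acts 15:34/2",
    kind="omission",
    readings={"longer": ["D", "614"], "shorter": ["P74", "01", "03"]},
)
```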
Critics sometimes question whether full collation passes the point of diminishing returns. But partial or test-passage approaches might inadvertently mask important relationships or mixture, undermining any truly reliable genealogical mapping. A single overlooked block mixture can lead to erroneous classification for entire chapters. Romans 10:2 says of the Jews that they had a zeal for God but not necessarily accurate knowledge. Similarly, a partial approach can yield a zeal for classification without the knowledge needed to carry it out reliably. The impetus for a more exhaustive approach extends not only to the potential for new discoveries but also to the thorough refinement of known data.
Relevance for Determining Original Readings and Textual History
Once a more robust genealogical framework is in place, it becomes easier to approach variants from a position of knowledge rather than guesswork. If a certain minuscule is known to align mostly with Codex Vaticanus but sometimes to adopt Byzantine expansions in blocks, then a variant at Romans 1:7, found in that minuscule but absent from Vaticanus, might be suspected to come from the minuscule’s Byzantine block. By contrast, if the minuscule in a certain block merges with a “Western” text in Paul’s epistles, textual critics can factor that in. This capacity to home in on which genealogical line influences each variant is the practical payoff of meticulous classification.
It also enhances the broader history of the text. As manuscripts are integrated into genealogical trees, one sees how older lines gave birth to sub-branches and how scribes in certain scriptoria or localities introduced characteristic expansions or omissions. The distribution of these lines clarifies how the text spread across the Mediterranean, from Antioch to Alexandria, from Rome to Constantinople. If a certain line of text emerges in multiple manuscripts from the eighth century, not found in earlier centuries, one suspects a localized or newly revised tradition. The big picture is that scribal tendencies—some universal, some local—shaped the text’s growth. The scriptures promise that “the saying of Jehovah endures,” as 1 Peter 1:25 states, yet the route from the original autographs to modern forms is a network of scribal interactions.
Conclusion: Colwell Re-examined and the Ongoing Pursuit
Decades after Colwell advocated a comprehensive comparison of manuscripts at all points of variation, textual critics have recognized the soundness of his vision, yet achieving it across thousands of manuscripts remains an unfinished enterprise. Projects such as the IGNTP or the Münster Institute’s “Text und Textwert” volumes reflect partial steps. The Claremont Profile Method or the Alands’ approach expedite classification but inevitably omit details that might reveal new genealogical twists. Meanwhile, the availability of computer collation tools encourages renewed fervor for thorough coverage. Each newly published dataset invites more refined analyses of scribes’ expansions, parablepsis, and occasional theological motivations.
How do scribal tendencies, as identified through these advanced classification methods, inform the final quest for reconstructing the original text? They help textual critics discriminate probable from improbable changes, clarifying which readings likely resulted from typical scribal patterns. They direct attention to the distribution of expansions or omissions in genealogical lines, thereby illuminating deeper textual relationships. And they mitigate guesswork about whether certain variants could plausibly have come from scribal meddling.
The discipline continues to refine canons like lectio brevior potior or lectio difficilior potior with fresh empirical data. Investigations of papyri consistently show a scribal tilt toward omission, refining the older notion that expansions were more common. Further research is needed into how scribes harmonized parallel texts or introduced expansions for clarity. Over time, the broader textual community might fully integrate these findings into revised canons of internal evidence. Meanwhile, genealogical classification, anchored in the carefully studied scribal behaviors, fosters a textual criticism that sees not only the immediate variants but also how each variant arises within a dynamic historical network of copying and recopying.
John 16:12 says that the apostles could not bear all the truths at once, implying that learning is progressive. The same might be said of the textual critic’s knowledge of scribal tendencies: it unfolds step by step, year by year, as new collations are completed and new analyses performed. While some might find the tasks daunting, the result is a more secure foundation for both textual decisions and the historical narrative of the New Testament’s journey through the centuries. Colwell’s appeal for thoroughness remains as relevant now as ever: treat the manuscripts as living artifacts, collate them thoroughly, compare them systematically, and let the data speak. By continuing that process, the discipline can increasingly refine its portrait of the scribes who ensured that the message of the apostolic writings would endure, even through the variations that continue to challenge us to this day.
About the Author
EDWARD D. ANDREWS (AS in Criminal Justice, BS in Religion, MA in Biblical Studies, and MDiv in Theology) is CEO and President of Christian Publishing House. He has authored more than 220 books. In addition, Andrews is the Chief Translator of the Updated American Standard Version (UASV).