Behind The Name

In 1906, Francis Galton observed a weight-judging competition at a country fair, where attendees competed to guess the weight of an ox. Analyzing the roughly 800 entries, Galton found that the crowd's collective estimate came remarkably close to the ox's actual weight, closer than the guesses of individual entrants, including people with real familiarity with oxen. The experiment, published in 1907, birthed the notion of the wisdom of crowds, but its power depended on the judgments being independent: the more people influence each other's guesses, the less accurate the aggregate becomes, because social influence introduces a shared bias that averaging cannot cancel out.
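A minimal simulation makes the point concrete (the numbers here are illustrative, not Galton's data, though the ox's weight of 1,198 pounds matches his published account): when guesses are independent, individual errors cancel in the average; when a shared bias pulls everyone the same way, no amount of averaging removes it.

```python
import random

random.seed(42)
TRUE_WEIGHT = 1198  # pounds; the ox's weight in Galton's account

def crowd_estimate(n, shared_bias=0.0):
    """Average n guesses. Each guess has individual noise plus an
    optional shared bias modeling social influence on the crowd."""
    guesses = [TRUE_WEIGHT + shared_bias + random.gauss(0, 100) for _ in range(n)]
    return sum(guesses) / n

# Independent judgments: individual errors cancel out in the mean.
print(crowd_estimate(800))                   # close to 1198
# Socially influenced judgments: the shared error does not cancel.
print(crowd_estimate(800, shared_bias=150))  # close to 1348
```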

Galton’s experiment is the inspiration behind Ox, which collects data on individual human judgments to generate insights from collective wisdom. Our mission is to use these insights to optimize human judgment in intelligence analysis, ensuring that analytical frameworks rest on empirically validated approaches. Ultimately, a better understanding of how humans make judgments will help improve them, and will allow AI to empower, rather than replace, the human-led analytical process.

Untested Assumptions Underpin Existing Methods

As the volume and diversity of intelligence data grow in the digital age, intelligence practitioners face the daunting task of analyzing it consistently. In particular, evaluating the reliability of different types of information is ill-suited to scaling, since it requires weighing multiple subjective and objective factors.

How do you decide whether to trust the information a source provides? What variables do you consider? Our research has shown that each analyst, collector, and consumer has their own perspective and process. For some, trustworthiness is the most important variable, albeit a highly subjective one. For others, it matters more to check the information's consistency with other sources, or its plausibility. Some simply go with their gut, believing source evaluation is an art that cannot be taught.

Intelligence analysis involves creativity and subjectivity, and human judgment is essential to this process — machines cannot replicate the power of the human mind to decipher nuance, learn from experience, and manage interpersonal relationships. However, not all human judgment is created equal. How do you separate biased analysis from genuine insight? Assumptions about the qualities of good judgments have long stood unchallenged due to the lack of a feedback mechanism.

Failure to proactively collect important data related to source performance and analytical judgments comes at a price, compromising intelligence collection, analysis, and decision-making in the long run. We need to understand what types of judgments are consistent predictors of reliability, and can only do that by collecting the right kinds of data. National security decision-making is too important to base on untested assumptions.

Ox’s Approach

Our solution challenges these assumptions by turning both quantitative and qualitative judgments into structured data, so insights can be extracted at scale. Ox’s proprietary software platform collects, processes, and analyzes user inputs on the reliability of intelligence sources. In an intuitive interface, users construct analytical frameworks around the essential variables for consideration, their weights, and their scores. Because both quantitative and qualitative inputs are captured as structured data, the platform supports dynamic analysis and instantaneous auditing. A built-in feedback mechanism tracks source performance over time and helps identify the frameworks that produce the best judgments. Beyond encouraging analytic rigor, Ox builds confidence in assessments, identifies essential indicators of reliability, and helps determine collection priorities.
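As a rough illustration of what such a framework can look like (a hypothetical sketch, not Ox's actual data model: the variable names, weights, and 0-10 scoring scale are invented for this example), a reliability judgment can be stored as weighted variables, each with a score and a qualitative rationale, that roll up into a single auditable record:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Variable:
    """One factor in a reliability framework, e.g. trustworthiness."""
    name: str
    weight: float        # relative importance of this factor
    score: float         # analyst's judgment on this factor, 0-10
    rationale: str = ""  # qualitative input, kept as structured data

@dataclass
class ReliabilityJudgment:
    source_id: str
    variables: list[Variable]
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def overall_score(self) -> float:
        """Weighted average of the variable scores."""
        total_weight = sum(v.weight for v in self.variables)
        return sum(v.weight * v.score for v in self.variables) / total_weight

# Example judgment using invented variables and weights.
judgment = ReliabilityJudgment(
    source_id="SOURCE-017",
    variables=[
        Variable("trustworthiness", weight=0.5, score=7, rationale="Long reporting history"),
        Variable("consistency",     weight=0.3, score=8, rationale="Corroborated by two other sources"),
        Variable("plausibility",    weight=0.2, score=6, rationale="Fits known patterns"),
    ],
)
print(round(judgment.overall_score(), 2))  # 7.1
```

Because every judgment preserves its variables, weights, and rationale, later feedback on how a source actually performed can be compared against both the score and the framework that produced it.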

Ox harnesses the wisdom of the crowd: extracting insights about human judgments by collecting and analyzing them at scale. By helping users learn from past judgments to optimize future ones, Ox transforms the art of intelligence analysis by powering it with data.