Multiple Hypotheses

Original publication date: 2025-09-13

Introduction

This page is the first of a series that covers the idea behind a decision-making algorithm using probability as described by E. T. Jaynes and G. L. Bretthorst [1]. These methods are a departure from those of well-known authors such as R. Mises [2], J. M. Keynes [3], and A. N. Kolmogorov [4], and from standard textbooks such as [5]. This page discusses the application of multiple hypotheses in practice. Other pages will discuss the theoretical foundation of the method, and another will demonstrate how to use the process in a computer algorithm.

The following example is discussed in detail in [6].

Evidence-Based Decision Making

Example: Testing an Electrical Component

Background information

I ≡ you’ve ordered some components that have now arrived and need to be tested. The test requires that you apply a voltage across each component’s leads and then measure the resulting current. This is early in the production stage of the product, so some manufacturing flaws are expected. An acceptable batch of components has, at worst, one in six components failing the acceptance test. There is suspicion that some batches have one in three components out of specification, due to poor quality control of a machine.

Data

The data come from a measurable test and are not in any way subjective.

D ≡ the component you are testing does not pass the test: the measured current does not meet the performance specification.

Hypotheses

These are the possible causes that you believe could have produced the data.

H1 ≡ you’ve received a “bad” batch from the poorly controlled machine (one in three components is out of specification)

H2 ≡ you’ve received a “good” batch for which the machine’s quality control is acceptable (one in six components is out of specification)

H3 ≡ something is wrong with your testing equipment, such that 99 out of 100 of the components you test appear to be out of specification

Probability of your hypothesis given the background information

Your belief in each hypothesis before you start measuring

P(H1 | I) = 1/11 ≈ 0.090909

P(H2 | I) = 10/11 ≈ 0.909090

P(H3 | I) = 0.000001
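
Note that these three numbers sum to slightly more than 1 (by exactly 10^-6, since 1/11 + 10/11 is already 1), so before using them the priors are rescaled to make the three hypotheses exhaustive. The Python sketch below shows that bookkeeping for the numbers above; the dictionary layout and variable names are illustrative, not part of the original example.

# Prior probabilities P(H | I) as listed above.
priors = {
    "H1": 1 / 11,   # bad batch from the poorly controlled machine
    "H2": 10 / 11,  # good batch
    "H3": 1e-6,     # something wrong with the testing equipment
}

# 1/11 + 10/11 is already 1, so adding 1e-6 overshoots by exactly 1e-6.
# Renormalize so the three hypotheses are mutually exclusive and exhaustive.
total = sum(priors.values())
priors = {h: p / total for h, p in priors.items()}

for h, p in priors.items():
    print(f"P({h} | I) = {p:.6f}")

The adjustment is on the order of one part in a million, far too small to matter for the conclusions drawn here.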

Probability of data given each hypothesis and background information

How strongly each hypothesis leads you to expect the data

P(D | H1 I) = 1/3 = 0.333333

P(D | H2 I) = 1/6 = 0.166667

P(D | H3 I) = 99/100 = 0.990000
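
With the priors and likelihoods in hand, Bayes’ theorem gives the probability of each hypothesis after the single failed test: P(Hi | D I) = P(D | Hi I) P(Hi | I) / P(D | I), where P(D | I) is the sum of the numerators over the three hypotheses. The Python sketch below carries out that arithmetic for the numbers listed above; the variable names are illustrative only.

# Priors P(H | I) and likelihoods P(D | H I) from the text above.
priors = {"H1": 1 / 11, "H2": 10 / 11, "H3": 1e-6}
likelihoods = {"H1": 1 / 3, "H2": 1 / 6, "H3": 99 / 100}

# Rescale the priors so the three hypotheses are exhaustive (they sum to 1 + 1e-6).
z = sum(priors.values())
priors = {h: p / z for h, p in priors.items()}

# Bayes' theorem: the posterior is proportional to likelihood times prior;
# dividing by P(D | I), the sum of the products, makes the posteriors sum to 1.
joint = {h: likelihoods[h] * priors[h] for h in priors}
p_data = sum(joint.values())
posteriors = {h: j / p_data for h, j in joint.items()}

for h, p in posteriors.items():
    print(f"P({h} | D I) = {p:.6f}")

Running this, the single failure raises the probability of the bad-batch hypothesis H1 from about 0.09 to about 0.17, lowers H2 from about 0.91 to about 0.83, and leaves H3 negligible.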

Figure 1 shows the graph that tells you how to maintain consistent belief about the cause of the data: the more strongly the data support a hypothesis, the more strongly you should believe that hypothesis. It is with this procedure that we shall later demonstrate the power of dynamic technical writing; for now, we shall see a static graph.

Figure 1: Testing Electrical Components
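
The same update can be repeated as each new component is tested, with the posterior from one test serving as the prior for the next, so your belief tracks the accumulating evidence; this running revision is what a dynamic version of Figure 1 would trace out. The sketch below illustrates that loop in Python, under the simplifying assumption that test outcomes are independent given the hypothesis, and with an invented sequence of results rather than data from the original example.

# Sequential updating of belief in each hypothesis as components are tested.
# fail_rates are P(D | H I) from the text; a pass has probability 1 - rate.
fail_rates = {"H1": 1 / 3, "H2": 1 / 6, "H3": 99 / 100}

# Starting belief: the priors P(H | I), rescaled to sum to 1.
belief = {"H1": 1 / 11, "H2": 10 / 11, "H3": 1e-6}
z = sum(belief.values())
belief = {h: b / z for h, b in belief.items()}

# Hypothetical test results: 1 means the component failed, 0 means it passed.
results = [1, 1, 0, 1, 0, 0, 1, 1]

for outcome in results:
    # Multiply each belief by the likelihood of this outcome under that hypothesis,
    # then renormalize so the beliefs again sum to 1.
    belief = {
        h: belief[h] * (fail_rates[h] if outcome else 1 - fail_rates[h])
        for h in belief
    }
    z = sum(belief.values())
    belief = {h: b / z for h, b in belief.items()}

for h, b in belief.items():
    print(f"P({h} | data, I) = {b:.6f}")

Each pass or fail nudges the three beliefs up or down in a way that stays consistent with the product and sum rules, which is the sense in which the procedure “maintains consistent belief” about the cause of the data.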

References

[1] E. T. Jaynes and G. L. Bretthorst, "Probability Theory: The Logic of Science", Cambridge University Press, 2003; end of sec. 2.4, all of sec. 2.5, and Appendix B

[2] R. Mises, "Probability, Statistics and Truth", translated by J. Neyman, D. Scholl, and E. Rabinowitsch, William Hodge and Co., 1939

[3] J. M. Keynes, "A Treatise on Probability", Macmillan and Co., Limited, 1921

[4] A. N. Kolmogorov, "Foundations of the Theory of Probability", Chelsea Publishing Company, 1950

[5] Wackerly, Mendenhall, and Scheaffer, "Mathematical Statistics with Applications", 7th ed., ch. 1, sec. 1.4, p. 13

[6] E. T. Jaynes and G. L. Bretthorst, "Probability Theory: The Logic of Science", Cambridge University Press, 2003; ch. 4, p. 103