It is also appropriate at this point to discuss the role of the selection rules for picking the selected subset, S. So far we have only discussed two such rules.
The first was Blind Pick (BP), which has the advantage of simplicity: nothing needs to be known about the problem. While BP is certainly not very good, it is surprisingly robust and can often serve as a lower bound. The second rule we have discussed is the Horse Race (HR) strategy: we evaluate the N sampled designs using a surrogate or simplified model, however approximate, and keep the observed best. The universal alignment probability curves calculated and shown earlier can then be used effectively to narrow down the search.
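The contrast between the two rules can be sketched in a few lines of code. The setup below is hypothetical (the true performance of design i is simply i, smaller being better, and the surrogate observes it with additive Gaussian noise); it is an illustration of the BP and HR selection mechanics, not of any particular application.

```python
import random

def blind_pick(designs, s):
    """Blind Pick (BP): choose s designs uniformly at random,
    using no problem knowledge at all."""
    return random.sample(designs, s)

def horse_race(designs, s, rough_eval):
    """Horse Race (HR): rank all designs by a cheap, noisy
    surrogate evaluation and keep the observed top s."""
    return sorted(designs, key=rough_eval)[:s]

# Hypothetical example: design i has true performance i
# (smaller is better); the surrogate adds Gaussian noise.
random.seed(0)
designs = list(range(200))
noisy = lambda d: d + random.gauss(0, 20)

s = 12
S_bp = blind_pick(designs, s)
S_hr = horse_race(designs, s, noisy)

good = set(range(s))  # the true top-s designs
print("BP alignment |G & S| =", len(good & set(S_bp)))
print("HR alignment |G & S| =", len(good & set(S_hr)))
```

Counting the overlap between the selected set S and the true good-enough set G, as in the last two lines, is exactly the alignment that the probability curves quantify.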
But many other selection rules are possible. Suppose one wishes to identify the best 16 tennis players in the world. One way to select them is to see who reaches the quarterfinals of the U.S. Open. The quarterfinalists may not always be the top-16 seeded players, but we know a strong correlation exists: historically, the quarterfinalists have always included a good sampling of the top-16 seeds. In principle, we can calculate the alignment probabilities in this case just as we have done for the HR method. We intuitively know, and can prove, that such a tournament selection rule is less computationally intensive than the HR rule but yields a lower alignment probability.
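A tournament selection rule of this kind can be sketched as a single-elimination bracket that stops once s players survive. The skill model below is an assumption made purely for illustration (the lower-numbered player wins each match with probability 0.75), and the field size is taken to be a power-of-two multiple of s so the bracket halves cleanly.

```python
import random

def knockout_select(players, s, win_prob):
    """Single-elimination tournament: pair up players after a random
    draw, advance each match winner, and stop once only s survive.
    Assumes len(players) is s times a power of two.  The cost is one
    match per pairing per round -- far fewer comparisons than ranking
    the whole field."""
    field = players[:]
    random.shuffle(field)  # random draw
    while len(field) > s:
        nxt = []
        for a, b in zip(field[::2], field[1::2]):
            nxt.append(a if random.random() < win_prob(a, b) else b)
        field = nxt
    return field

# Hypothetical skill model: lower index means a better player,
# and the better player wins any match with probability 0.75.
random.seed(1)
players = list(range(128))
p = lambda a, b: 0.75 if a < b else 0.25

survivors = knockout_select(players, 16, p)
top16 = set(range(16))
print("true top-16 among survivors:", len(top16 & set(survivors)))
```

Reducing a 128-player field to 16 costs only 64 + 32 + 16 = 112 matches, which makes the lower computational burden of tournament selection concrete; the price is that upsets can eliminate true top-16 players early, lowering the alignment probability.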
Similarly, the Round Robin (RR) method used in divisional championship play by baseball teams is another selection rule. Intuitively, we know it will have a high alignment probability but will require more computation. The tournament literature discusses many of these issues.
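The RR rule can be sketched the same way, reusing the same hypothetical skill model: every pair plays once, and the s players with the most wins are kept. With n players this costs n(n-1)/2 matches, which makes the intuition above concrete: far more computation than a knockout, but a more reliable ranking.

```python
import random
from itertools import combinations

def round_robin_select(players, s, win_prob):
    """Round Robin (RR): every pair plays exactly once; keep the s
    players with the most wins.  Cost is n(n-1)/2 matches, so RR is
    much more expensive than a knockout but less sensitive to a
    single upset."""
    wins = {p: 0 for p in players}
    for a, b in combinations(players, 2):
        winner = a if random.random() < win_prob(a, b) else b
        wins[winner] += 1
    return sorted(players, key=lambda p: -wins[p])[:s]

# Same hypothetical model as before: lower index is better, and the
# better player wins any match with probability 0.75.
random.seed(2)
players = list(range(64))
p = lambda a, b: 0.75 if a < b else 0.25

S_rr = round_robin_select(players, 16, p)
print("true top-16 among RR picks:", len(set(range(16)) & set(S_rr)))
```

For 64 players this is 64·63/2 = 2016 matches, against only 48 for a knockout down to 16 survivors, illustrating the computation-versus-alignment trade-off between the two rules.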
Finally, other selection rules based on heuristics and other problem knowledge can be used. All are awaiting analysis.