Rather than juries, committees, judges, or recruiters, can we use calculation techniques to select the best candidates? Some proponents of these tools consider human decisions to be so burdened by prejudices, false expectations, and confirmation biases that it is much wiser and fairer to trust a calculation (Kleinberg et al., 2019). On the other hand, an abundant literature shows that others are concerned about the risk that implementing decision automation could lead to systemic discrimination (Pasquale, 2015; O’Neil, 2016; Eubanks, 2017; Noble, 2018). While algorithmic calculations are increasingly used in widely different contexts (music recommendations, targeted advertising, information categorization, etc.), this question takes a very specific turn when calculations are introduced into highly particular spaces in our societies: the devices used to select and rank candidates with a view to obtaining a qualification or a rare resource. Following the terminology of Luc Boltanski and Laurent Thévenot (1991), these selection situations constitute a particular form of reality tests. Backed by institutions or organizations granting them a certain degree of legitimacy, a disparate set of classification tests has become widespread in our societies with the development of procedures of individualization, auditing, and competitive comparison (Espeland, Sauder, 2016; Power, 1997). Some of these situations are extremely formalized and ritualized, whereas others are hidden in flows of requests and applications addressed to the government, companies, or a variety of other organizations. With the bureaucratization of society (Hibou, 2012), we fill out an increasing number of forms and files in order to access rights or resources, and then await a decision which, in many social spaces, is now based on a calculation. The stability of these selection devices remains uncertain. 
The way that choices are made, the principles justifying them, the possibility of being granted a privilege, the relevance of the categories selected to organize files, respect for candidate diversity, or monitoring of the equality of applicants are constantly used to fuel criticism that challenges tests and condemns their lack of fairness. In this reflective text, we draw on a document-based survey of the various selection devices employed by the French government or companies to offer a conceptual interpretation of the impact that machine learning techniques have on selection tests. The hypothesis that we wish to explore is that we are witnessing a displacement of the format of classification tests, made possible by a spectacular expansion of the candidate comparison space and the implementation of machine learning techniques. However, this displacement is more than just a consequence of the introduction of technological innovation contributed by big data and artificial intelligence. Its justification in the eyes of the institutions and organizations that order selection tests is based on the claim that this new test format takes into account the multiple criticisms that our societies have constantly raised against previous generations of tests. This is why we propose to interpret the interest in and development of these automated procedures as a technocratic response to the growing criticism of the categorical representation of society, a criticism fuelled by the individualization and subjectification processes at work throughout our societies (Cardon, 2019). The conceptual framework outlined in this text links four parallel lines of analysis. The first is related to the transformation dynamics of selection tests in our societies. To shine light on it, we propose a primitive formalization of four types of selection test, which we call performance tests, form tests, file tests, and continuous tests. 
The progressive algorithmization of these tests – with continuous tests constituting the horizon promised by artificial intelligence – is based on an expansion of the space of the data used in decision making which, through successive linkages, involves actors from increasingly diverse and distant spatialities and temporalities. To allay the criticism levelled at previous tests, the expansion of the comparison space allows candidates’ files to take a different form, increasing the number of data points in the hope of better conveying their singularities. The justifications provided to contain criticism increase in generality and are formalized through the implementation of a new test that groups these new entities together in the “technical folds” (Latour, 1999) formed by the computation of machine learning. These folds themselves then become standardized and integrated into a new comparison space. However, new criticism can rapidly emerge, once again leading to a dynamic of expansion of candidates’ comparison space. This dynamic process of test displacement under the effect of criticism is applicable to many types of reality test (Boltanski, Chiapello, 1999), but in this article we focus on “selection” tests, during which a device must choose and rank candidates within an initial population. Closely related to the first, the second line of analysis in this article concerns the process of spatial-temporal expansion of the data space used to automate decisions (the comparison space). The development of algorithmic calculation and, more generally, of artificial intelligence drives a continuous expansion of the devices that collect the data necessary for calculation (Crawford, 2021). 
This process has two components: it is first and foremost spatial, spanning a network of sensors that cling ever more continuously to people’s life paths by means of various affordances; secondly, it is temporal, given that this new type of selection test has the particularity of relying on the probability of the occurrence of a future event to organize the contribution of past data. The displacement of selection tests thus constantly expands the comparison space, not only by increasing the number and diversity of variables, but also by reorganizing the temporal structure of the calculation around the optimization of a future objective. The third line of analysis studied here looks at the change in calculation methods and, more specifically, the use of deep learning techniques – now commonly referred to as artificial intelligence – to model an objective based on multidimensional data (Cardon et al., 2018). This displacement within statistical methods, that is, the transition from linear models to non-linear machine learning techniques like deep learning, is the instrument of the test displacement dynamic. By radically shifting calculation towards inductive methods, it transforms the possibility of using initial variables to make sense of and explain test decisions. The fourth and more general line of analysis looks at the way of justifying (through principles) and legitimizing (through the institutional authorities that stabilize them) the principles on which calculations base selection. The dynamic that this text seeks to highlight reveals a displacement in the legitimization method required if tests are to become widely accepted. 
Whereas traditional selection tests draw on government resources and a compromise between meritocracy and egalitarianism, the new tests that orient classifications use an increasing amount of data that is not certified by institutions; rather, this data is collected by private actors, and is therefore the responsibility of individuals and contingent on their behaviour. Hence, the justification of tests is based less on the preservation of the current state of society (maintaining equality in a population, distributing in accordance with categories recognized by all, rewarding merit) than on an undertaking to continuously transform society, which is one of the features of the competition between individuals that neoliberalism fosters.