Published in 2021-10. Conference: Society for Social Studies of Science - 4S annual meeting
Using various methods of natural language processing on a large corpus of press articles covering a period of 5 years, we show that two dominant regimes of criticism of artificial intelligence coexist within the media sphere, involving different technological and human entities, but also distinct temporalities and issues. We observe a shift from articles featuring algorithmic calculation techniques embedded in users' environments to guide, orient, or calculate their behaviour, towards articles characterized by a personification of AI as an embodied and autonomous entity. This topology can be interpreted as a process of progressive independence of AI that constitutes a constant polarity in the history of the relationship between computer technology and society. On the one hand, robots and AI, which refer to autonomous and embodied technical entities, are associated with a prophetic discourse warning about our capacity to control these agents that simulate our physical and cognitive capacities and threaten our physical security or our economic model. This regime of critical enunciation is organized around the feeling of fear regarding the autonomy of robots; it thrives on science fiction and religious concerns and carries a prophetic dimension concerning the future of humanity. On the other hand, the technical entities are algorithms that shape our daily computing environments. They are associated with a short-term discourse, focusing on populations characterised by their age, ethnicity, or political or sexual orientation, and mobilizing a social justice criticism that denounces bias, discrimination, surveillance, censorship, and the amplified diffusion of inappropriate content.

This website presents some of the results of research carried out under Sciences Po Médialab's Algoglitch project. It combines descriptive analyses with data visualisations of a corpus of press articles on the topic of AI and algorithms, spanning a 5-year period in the United States and the United Kingdom. Using natural language processing applied to this press corpus, the research explores critical discourses on AI in the media sphere. We show that the subject of AI has occupied an increasingly large space in the press over the past five years. The media space is structured thematically around different calculation technologies and fields of application, and can be divided into two semantic subsets. A comparative analysis of these two semantic spaces reveals two dominant regimes of criticism involving a variety of technical and human entities, as well as different time scales and issues. The first is structured around the injustices produced by the algorithms that shape our everyday calculation environments, which are associated with criticism of the biases, discrimination, surveillance, and censorship of which specific populations are the victims. The second is structured around fears of the autonomy of AI and robots which, as autonomous and embodied technical entities, are associated with a prophetic discourse drawing attention to our capacity to control these entities capable of simulating or surpassing our physical and cognitive abilities, thereby threatening our physical safety, our economic model, and, ultimately, humanity as a whole.
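
The thematic division described above can be illustrated with a minimal sketch, assuming articles are vectorised with TF-IDF and partitioned with k-means into two semantic subsets; the toy corpus, parameters, and the choice of these particular tools are assumptions for illustration, not the project's actual pipeline.

    # Hypothetical sketch, not the Algoglitch pipeline: vectorise articles and
    # partition them into two broad semantic subsets, then inspect top terms.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    # Placeholder corpus; the real corpus is five years of US and UK press articles.
    articles = [
        "Algorithm accused of bias and discrimination against job applicants",
        "Facial recognition and surveillance of specific populations",
        "Robots and AI could surpass human abilities and threaten jobs",
        "Experts warn about autonomous AI and the future of humanity",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(articles)

    # Two clusters, echoing the two semantic subsets described above.
    km = KMeans(n_clusters=2, random_state=0, n_init=10).fit(X)

    terms = vectorizer.get_feature_names_out()
    for k in range(2):
        top = km.cluster_centers_[k].argsort()[::-1][:5]
        print(f"cluster {k}:", ", ".join(terms[i] for i in top))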

The refusal to position itself on the left-right axis characterises the Gilets jaunes movement, which constantly dismisses political formations as equally at fault rather than siding with any one of them. Yet when the Gilets jaunes appeared in France, they expressed themselves in a public space already charged with pre-existing tensions and ideological structures. As such, their action is necessarily situated: it takes place within this space and inherits some of its properties. It is therefore legitimate to ask what place the movement occupies, particularly in its digital form on Facebook. How do online citation practices reveal not the political colour of the movement, but the political space it draws on and feeds into? This article answers this question by introducing an original methodological framework that extends an ideological embedding of Twitter users to posts published on Facebook. We first use correspondence analysis to reduce the adjacency matrix linking French members of parliament to their followers on Twitter. This first step allows us to identify two latent axes that are decisive in explaining the structure of the network. The first dimension distributes individuals according to their position on the left-right axis of the political space. We interpret the second dimension as a measure of distance to power. These two dimensions define a space in which we successively position hundreds of thousands of Twitter users, the URLs and media outlets cited on that platform and, by extension, the publications of nearly 1,000 of the most active Facebook groups associated with the Gilets jaunes movement. We finally quantify how the publications of these groups move within the latent ideological space, giving both meaning and an answer to the question of the movement's political leaning. The observed dynamics reinforce the interpretation of a movement which, initially positioned far to the right, quickly shifted towards the left while remaining faithful to a protest stance. This description of the Gilets jaunes through their use of media on Facebook perfectly illustrates the idea of a polyvalent populism.
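
The first methodological step described above, a correspondence analysis of the follower-MP adjacency matrix, can be sketched as follows; the matrix is synthetic and the variable names are placeholders, so this is only a minimal illustration of the technique, not the authors' code.

    # Minimal sketch of a correspondence analysis of a binary adjacency matrix
    # linking Twitter followers (rows) to French MPs (columns). Toy data only.
    import numpy as np

    rng = np.random.default_rng(0)
    A = (rng.random((200, 20)) < 0.15).astype(float)  # followers x MPs
    A = A[A.sum(axis=1) > 0]                          # drop followers with no edge

    P = A / A.sum()                      # correspondence matrix
    r = P.sum(axis=1, keepdims=True)     # row masses (followers)
    c = P.sum(axis=0, keepdims=True)     # column masses (MPs)

    # Standardised residuals, then SVD: the leading axes are the latent
    # dimensions (interpreted above as left-right position and distance to power).
    S = (P - r @ c) / np.sqrt(r @ c)
    U, s, Vt = np.linalg.svd(S, full_matrices=False)

    follower_coords = (U[:, :2] * s[:2]) / np.sqrt(r)    # principal coordinates
    mp_coords = (Vt.T[:, :2] * s[:2]) / np.sqrt(c.T)

    print(mp_coords.round(2))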

This study provides a large-scale mapping of the French media space, using digital methods to estimate political polarization and to study information circuits. We collect data about the production and circulation of online news stories in France over the course of one year, adopting a multi-layer perspective on the media ecosystem, and source our data from websites, Twitter, and Facebook. We identify several important structural features. A stochastic block model of the hyperlink structure shows that the counter-informational press is systematically relegated to a separate cluster which hardly receives any attention from the mainstream media. Counter-informational sub-spaces are also peripheral on the consumption side: we measure their respective audiences on Twitter and Facebook and do not observe a large discrepancy between the two social networks, with the counter-information space and far-right and far-left media gathering limited audiences. Finally, we measure the ideological distribution of news stories using Twitter data, which also suggests that the French media landscape is quite balanced. We therefore conclude that the French media ecosystem does not suffer from the same level of polarization as the US media ecosystem. The comparison with the American situation also allows us to consolidate a result from studies on disinformation: the polarization of the journalistic space and the circulation of fake news only become widespread when dominant and influential actors in the political or journalistic space take up topics and dubious content originally circulating on the fringes of the information space.
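
The stochastic block model step can be sketched with graph-tool, assuming a directed hyperlink graph between media sites; the edge list below is a toy placeholder, and this is not the study's actual code or data.

    # Hypothetical sketch: fit a stochastic block model to a small toy hyperlink
    # graph by minimising its description length (graph-tool assumed installed).
    import graph_tool.all as gt

    g = gt.Graph(directed=True)
    # Toy hyperlinks between six sites: two tightly linked groups plus one bridge.
    g.add_edge_list([(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (0, 3)])

    state = gt.minimize_blockmodel_dl(g)   # infer the block partition
    blocks = state.get_blocks()
    for v in g.vertices():
        print(f"site {int(v)} -> block {blocks[v]}")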

Rather than juries, committees, judges, or recruiters, can we use calculation techniques to select the best candidates? Some proponents of these tools consider human decisions to be so burdened by prejudices, false expectations, and confirmation biases that it is much wiser and fairer to trust a calculation (Kleinberg et al., 2019). On the other hand, an abundant literature shows that others are concerned about the risk that implementing decision automation could lead to systemic discrimination (Pasquale, 2015; O’Neil, 2016; Eubanks, 2017; Noble, 2018). While algorithmic calculations are increasingly used in widely different contexts (music recommendations, targeted advertising, information categorization, etc.), this question takes a very specific turn when calculations are introduced into highly particular spaces in our societies: the devices used to select and rank candidates in view of obtaining a qualification or a rare resource. Following the terminology of Luc Boltanski and Laurent Thévenot (1991), these selection situations constitute a particular form of reality tests. Backed by institutions or organizations granting them a certain degree of legitimacy, a disparate set of classification tests has become widespread in our societies with the development of procedures of individualization, auditing, and competitive comparison (Espeland, Sauder, 2016; Power, 1997). Some of these situations are extremely formalized and ritualized, whereas others are hidden in flows of requests and applications addressed to the government, companies, or a variety of other organizations. With the bureaucratization of society (Hibou, 2012), we are filling out an increasing number of forms and files in order to access rights or resources, and subsequently to await a decision which, in many social spaces, is now based on a calculation. The stability of these selection devices is still uncertain. The way that choices are made, the principles justifying them, the possibility of being granted a privilege, the relevance of the categories selected to organize files, respect for candidate diversity, or monitoring of the equality of applicants are constantly used to fuel criticism that challenges tests and condemns their lack of fairness. In this reflective text, we draw on a document-based survey of the various selection devices employed by the French government or companies to offer a conceptual interpretation of the impact that machine learning techniques have on selection tests. The hypothesis that we wish to explore is that we are witnessing a displacement of the format of classification tests made possible by a spectacular expansion in the candidate comparison space and the implementation of machine learning techniques. However, this displacement is more than just a consequence of the introduction of technological innovation contributed by big data and artificial intelligence. Its justification in the eyes of the institutions and organizations that order selection tests is based on the claim that this new test format takes into account the multiple criticisms that our societies have constantly raised against previous generations of tests. This is why we propose to interpret the interest in and development of these automated procedures as a technocratic response to the development of criticism of the categorical representation of society, which is developing as a result of the individualization and subjectification processes throughout our societies (Cardon, 2019).
The conceptual framework outlined in this text links four parallel lines of analysis. The first is related to the transformation dynamics of selection tests in our societies. To shed light on it, we propose a primitive formalization of four types of selection test, which we call performance tests, form tests, file tests, and continuous tests. The progressive algorithmization of these tests – with continuous tests constituting the horizon promised by artificial intelligence – is based on an expansion in the space of the data used in decision making which, through successive linkages, involves actors from increasingly diverse and distant spatialities and temporalities. To allay the criticism levelled at previous tests, the expansion of the comparison space allows candidates’ files to take a different form, by increasing the number of data points in the hope of better conveying their singularities. The justifications provided to contain criticism increase in generality and are formalized through the implementation of a new test that groups these new entities together in the “technical folds” (Latour, 1999) formed by the computation of machine learning. These folds themselves then become standardized and integrated into a new comparison space. However, new criticism can rapidly emerge, once again leading to a dynamic of expansion of candidates’ comparison space. This dynamic process of test displacement under the effect of criticism is applicable to many types of reality test (Boltanski, Chiapello, 1999), but in this article we will pay attention to “selection” tests during which a device must choose and rank candidates within an initial population. Closely related to the first, the second line of analysis in this article concerns the process of spatial-temporal expansion of the data space used to automate decisions (the comparison space). The development of algorithmic calculation and, more generally, artificial intelligence enables a continuous expansion of the devices that collect the data necessary for calculation (Crawford, 2021). This process has two components: it is first and foremost spatial, spanning a network of sensors that cling ever more continuously to people’s life paths through various affordances; secondly, it is temporal, given that the new type of selection test has the particularity of relying on the probability of the occurrence of a future event to organize the contribution of past data. The displacement of selection tests thus constantly expands the comparison space, not only by increasing the number of variables and diversifying them, but also by rearranging the temporal structure of the calculation around the optimization of a future objective. The third line of analysis studied here looks at the change in calculation methods and, more specifically, the use of deep learning techniques – previously called artificial intelligence – to model an objective based on multidimensional data (Cardon et al., 2018). This displacement within statistical methods, that is, the transition from linear models to non-linear machine learning techniques like deep learning, is the instrument of the test displacement dynamic. By radically shifting calculation towards inductive methods, it transforms the possibility of using initial variables to make sense of and explain test decisions.
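
The contrast drawn in this third line of analysis, between linear models whose coefficients can be read as reasons for a decision and non-linear learners that offer no such direct reading, can be illustrated with a small synthetic sketch; the data and model choices below are assumptions for illustration, not the authors' method.

    # Illustrative sketch on synthetic candidate data: a linear model exposes one
    # interpretable weight per variable, a non-linear network does not.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))        # four candidate variables
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    linear = LogisticRegression().fit(X, y)
    print("per-variable coefficients:", linear.coef_.round(2))

    deep = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X, y)
    print("opaque parameters:", sum(w.size for w in deep.coefs_))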
The fourth and more general line of analysis looks at the way of justifying (through principles) and legitimizing (through the institutional authorities that stabilize them) the principles on which calculations base selection. The dynamic that this text seeks to highlight reveals a displacement in the legitimization method required if tests are to become widely accepted. Whereas traditional selection tests draw on government resources and a compromise between meritocracy and egalitarianism, the new tests that orient classifications use an increasing amount of data that is not certified by institutions; rather, this data is collected by private actors, and is therefore the responsibility of individuals and contingent on their behaviour. Hence, the justification of tests is based less on the preservation of the current state of society (maintaining equality in a population, distributing in accordance with categories recognized by all, rewarding merit) than on an undertaking to continuously transform society, which is one of the features of the competition between individuals that neoliberalism fosters.

in The Performance Complex: Competition and Competitions in Social Life. Published in 2020-12
The notion of digital reputation continuously and insistently accompanied the rise of social media during the 2005-2012 period. Supported by a convergent set of discourses, interests, and devices, the ambition that fed the development of the social web was to record the circulation of influence, on the premise that each node of the social network (i.e. the “digital identities” that appear on Facebook, Twitter, or LinkedIn pages) has a different reputation and that these differences can be measured. The digital reputation of Internet users appeared as a new and specific way of assigning value to people and digital information. The aim of this article is to take stock of this promise ten years after its birth. Reputation meters have proved so contextual, relational, and manipulable that it has not been possible to provide interpretive conventions stable enough to coordinate and redefine actors’ activities. E-reputation is local and resistant to any attempt at standardization. In this article, we would like to show that, faced with the multiple uncertainties surrounding these fragile reputation measures, service designers increasingly tend to neglect this mode of valuation in favor of popularity and personalized prediction.

in L'identité. Published in 2020-01

Without claiming to be exhaustive, this map attempts to account for the composition and relational structure of the various AI actors in France on the Web. It reveals a segmentation into several major communities of actors: economic actors (startups, incubators, etc.), artificial intelligence research laboratories and teams, and developer communities gathered around events (meetups) and repositories (GitHub), which together form an interconnected socio-technical network of varied actors (software code, developer, project, team, and company pages).
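
A minimal sketch of the general approach behind such a cartography, assuming a crawled list of hyperlinks between actors' websites and pages; the names and edges below are placeholders, and modularity-based clustering is only a stand-in for whatever method was actually used.

    # Hypothetical sketch: build a graph from crawled hyperlinks between actors
    # (startups, labs, meetups, repositories) and look for communities.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    edges = [
        ("startup-a.example", "incubator-x.example"),
        ("startup-b.example", "incubator-x.example"),
        ("lab-1.example", "lab-2.example"),
        ("lab-2.example", "lab-1.example"),
        ("meetup-ai-paris.example", "github.com/example/project"),
        ("github.com/example/project", "startup-a.example"),
    ]

    G = nx.Graph()
    G.add_edges_from(edges)

    for i, community in enumerate(greedy_modularity_communities(G)):
        print(f"community {i}: {sorted(community)}")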

in Gouverner la ville numérique. Published in 2019-12
Uber, Waze, Airbnb… The algorithms that control these platforms are based on optimising the service provided to the user rather than on any collective, political, or moral norm. The accusations levelled against these algorithms expose the way technical architectures implicitly govern our lives.
