Full Text
  • No (153)
  • Yes (69)
Document Type
  • Conference contribution (56)
  • Article (53)
  • Part or chapter of a book (28)
  • Web site (15)
Research Center
  • Médialab (212)
  • médialab (MEDIALAB) (13)
  • Centre d'histoire de Sciences Po (7)
  • Centre de recherches internationales (6)
Discipline
  • Sociology (102)
  • Library and information sciences (66)
  • Methods and statistics (61)
  • Web (41)
Language
  • English (125)
  • French (100)
  • Spanish (2)
  • Portuguese (2)
Project
  • AIME (10)
Since its foundation in May 2009, the médialab at Sciences Po has worked to foster the use of digital methods and tools in the social sciences. Using existing tools and methods, we have experimented with web mining techniques to extract data on collective phenomena. We have also attended the symposiums organised by the two institutions responsible for web archiving in France, the BnF and INA, where we learnt about the difficulties that web archives pose to social scientists. Our own experience of mining the live web was no easier. Such difficulties, we believe, can be explained by the lack of tools allowing scholars to build for themselves the highly specialized corpora they need out of the wide heterogeneity of the web. The web is not a well-known document space for scholars or librarians. Its hyperlinked and heterogeneous nature calls for new ways of conceiving and building web corpora. This notion of a web corpus is a necessity for both the live and the archived web: if methods are not appropriate for analysing the live web, the problem will not be any easier on an archive, where the time dimension adds complexity.

Publication date 2011-07
DIMINESCU Dana
BOURGEOIS Mehdi
RENAULT Matthieu
4 views · 0 downloads

Publication date 2015-01
PLIQUE Guillaume
GUIDO Daniele
11 views · 0 downloads
Bruno Latour wrote a book of philosophy (An Inquiry into Modes of Existence). He decided that the paper book was no place for the numerous footnotes, documentation or glossary, and instead gave access to all the information surrounding the book through a web application designed as a reading companion. He also invited the community of readers to contribute to his inquiry by writing new documents to be added to the platform. The first version of our web application was built on PHP (Yii) and MySQL on the server side. This soon proved a nightmare to maintain because of the ultra-relational nature of our data. We refactored it completely to use node.js and Neo4j. We went from a tree system with internal links modelled inside a relational database to a graph of paragraphs included in documents, subchapters, etc., all sharing links between them. Along the way, we learned Neo4j thoroughly, from graph data modelling to Cypher tricks, and developed our own graphical Cypher query monitor using sigma.js in order to check the consistency of our data trans-modelling. During this journey, we stumbled upon data model questions: ordered links, the need to group sub-items, data output constraints from Neo4j, and finally the limitations of the Neo4j community edition. In the end we feel much more comfortable as developers in our new system. Reasoning about our data has become much easier and, moreover, our users are also happier since the platform's performance has never been better. Our intention is, therefore, to share our experience with the community:
- our application's data needs
- our shift from a MySQL data model to a Neo4j graph model
- our feedback on using a graph database, and more precisely Neo4j, including our custom admin tool [Agent Smith](https://github.com/Yomguithereal/agent-smith)
- a very quick description of the admin tools we built to let the researchers write or modify contents (a markdown web editor)

The research has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013), ERC Grant 'IDEAS' 2010 n° 269567.

Authors: Guillaume Plique. A graduate of Sciences Po Lille and Waseda University, Guillaume Plique offers the médialab his backend development skills as well as his background in the social sciences. He has been working since June 2013 on several projects, such as the IPCC mapping and AIME, and develops scrapers aimed at social science researchers. https://github.com/Yomguithereal

Paul Girard. Paul Girard is an information technology engineer who specialises in driving collaborations between technology and non-technical domains. He graduated from the cultural industry engineering programme of the Université de Technologie de Compiègne in 2004, where he studied the relationships between digital technologies and society and the mechanisms of collaboration. He worked in the CITU research laboratory federation (Paris 1 and Paris 8 universities) from 2005 to 2009, where he participated in research and creation projects, collaborations between artists and engineers working with interactivity, digital pictures, and virtual and augmented reality. He joined the médialab laboratory at Sciences Po at its foundation in the spring of 2009, as the digital manager of this digital research laboratory dedicated to fostering the use of digital methods and tools in the social sciences. Since then he has overseen the technical direction of its many research projects, conceived as collaborations between social sciences, knowledge engineering and information design. His present research fields are digital methods for social sciences, exploratory data analysis and enhanced publication through digital storytelling. https://github.com/paulgirard

Daniele Guido. Daniele Guido is a visual interaction designer interested in data mining applications, text analysis and network tools. He collaborates with researchers in history and social science, designers and engineers to conceive and develop digital tools for the humanities. He recently joined the Digital Humanities Lab at the CVCE in Luxembourg after several years working in the Sciences Po médialab team in Paris, where he was engaged in the FORCCAST project (forccast.hypotheses.org) and the AIME project (modesofexistence.org). https://github.com/danieleguido
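The "ordered links" question mentioned above can be made concrete with a small sketch. This is not the AIME codebase (which runs on node.js); it is a minimal illustration, assuming a local Neo4j instance, the Python neo4j driver (version 5 or later) and invented Document/Paragraph labels, in which the reading order is stored as a position property on the relationship rather than on the nodes.

```python
# Hypothetical sketch: ordered paragraphs attached to a document in Neo4j.
# Assumptions: local Neo4j at bolt://localhost:7687, neo4j Python driver >= 5,
# invented labels (Document, Paragraph) and properties (slug, uid, position).
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def load_document(tx, slug, paragraphs):
    # The order lives on the CONTAINS relationship, so the same paragraph
    # could be reused by several documents at different positions.
    tx.run(
        """
        MERGE (d:Document {slug: $slug})
        WITH d
        UNWIND $paragraphs AS p
        MERGE (par:Paragraph {uid: p.uid})
        SET par.text = p.text
        MERGE (d)-[r:CONTAINS]->(par)
        SET r.position = p.position
        """,
        slug=slug, paragraphs=paragraphs,
    )

def read_document(tx, slug):
    # Rebuild the reading order from the relationship property.
    result = tx.run(
        """
        MATCH (:Document {slug: $slug})-[r:CONTAINS]->(p:Paragraph)
        RETURN p.text AS text
        ORDER BY r.position
        """,
        slug=slug,
    )
    return [record["text"] for record in result]

with driver.session() as session:
    session.execute_write(load_document, "sample-chapter", [
        {"uid": "p1", "position": 1, "text": "First paragraph."},
        {"uid": "p2", "position": 2, "text": "Second paragraph."},
    ])
    print(session.execute_read(read_document, "sample-chapter"))
driver.close()
```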

Publication date 2010
4 views · 0 downloads
A Python library to exchange the webcorpus format

69 views · 69 downloads
Neither a disciplinary affiliation or training, nor the relation to workspaces and specialised instruments, is enough on its own to define a scientific culture. What researchers from diverse backgrounds work on shapes a singular way of conceiving their activities, practices and relation to the world. Their success is irremediably tied to a subject, to the fortune it meets as an innovation within a social context that both constrains it and is created by it. How is this (re)conversion towards a new theme organised, at a time when the evolution of funding schemes precisely favours this kind of framing? This thesis proposes an inquiry into the notion of the "research domain", which we define a priori as the frame of the interactions between researchers' professional activity and society around a shared theme; it defends its epistemic dimension. In parallel, the manuscript describes the development of bioenergy, one of the main forms of so-called renewable or sustainable energy, derived from biomass, together with its actors and their interplay, in a context of strong incentives to carry out a global energy transition but also of lively social controversies. The two objectives of this thesis converge: describing the thought style inherent to a particular research domain is necessary to grasp, beyond discourses and promises alone, the actual ways an innovation develops (here, the large-scale mobilisation of plants, microorganisms or waste to produce biofuels) and thus, ultimately, to allow anyone to assess its relevance.

Publication date 2017-09
PLIQUE Guillaume
CHARLES Loïc
JACOMY Alexis
TIBLE Grégory
2 views · 0 downloads
TOFLIT18 is a project dedicated to French trade statistics from 1716 to 1821. It combines a historical trade database covering French external trade (more than 500,000 flows at the level of partners and individual products) with a range of tools that allow the exploration of the material world of the Early Modern period. TOFLIT18 is the result of a collaboration between data scientists, economists and historians. It started as a project funded by the Agence Nationale de la Recherche in 2014. http://toflit18.hypotheses.org

7 views · 7 downloads
Voluntary return is one of the pillars of the durable solutions proposed for refugees and internally displaced persons (IDPs) under the international normative framework and human rights instruments. The Fukushima Daiichi nuclear accident, which occurred in March 2011 following the Great East Japan Earthquake and Tsunami, displaced more than 150,000 persons as large amounts of radioactive materials were released into the sea and the atmosphere from the crippled reactors. Four years later, many of these evacuees remain displaced, unable or hesitant to return home, due to the radiological and social consequences of the disaster. This policy brief examines the case of the Fukushima evacuees with a special focus on the question of return, and makes policy recommendations specifically tailored to nuclear displacement. It explores ways in which genuine durable solutions can be found for their case in line with international protection guidelines for IDPs.

11 views · 0 downloads
France started to compile statistics about its trade in 1716. The "Bureau de la Balance du Commerce" (Balance of Trade Office) centralized local reports of imports and exports by commodity produced by the French tax regions. Many of the statistical manuscript volumes produced by this process have been preserved in French archives. This communication relates how and why we used network technologies to create a research instrument based on the transcriptions of those archives in the TOFLIT18 research project. Our corpus is composed of more than 500k yearly trade transactions, each recording the trade of one commodity between a French local tax region and a foreign partner, between 1718 and 1838. We used a graph database to model it as a trade network in which trade flows are edges between trade partners. We will explain why we had to design a classification system to reduce the heterogeneity of the commodity names and how such a system introduces the need for hyperedges. Since our research instrument aims at providing researchers with exploratory data analysis, we will present the web application we built on top of the Neo4j database using JavaScript technologies (Decypher, Express, React, Baobab, SigmaJS). We will finally show how the graph model was not only a convenient way to store and query our data but also a powerful visual object for exploring trade geographical structures and trade products' specialization patterns. Project funded by the French Agence Nationale de la Recherche (TOFLIT18)
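The hyperedge issue can be sketched by reifying each flow as a node that points at once to its source, its destination and its commodity, and, through the commodity, to a classified item. The snippet below is only an illustration in Python against a hypothetical local Neo4j instance: the labels, properties and sample values are invented and do not reproduce the actual TOFLIT18 schema (the project's own application is written in JavaScript).

```python
# Hypothetical sketch: a trade flow reified as a node (a hyperedge of sorts)
# linking two partners and a commodity, itself attached to a classification.
# Assumptions: local Neo4j, neo4j Python driver >= 5, invented schema and values.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

INSERT_FLOW = """
MERGE (src:Partner {name: $source})
MERGE (dst:Partner {name: $destination})
MERGE (c:Commodity {name: $commodity})
MERGE (g:ClassifiedItem {name: $classified})
MERGE (c)-[:CLASSIFIED_AS]->(g)
CREATE (f:Flow {year: $year, value: $value})
CREATE (f)-[:FROM]->(src)
CREATE (f)-[:TO]->(dst)
CREATE (f)-[:OF]->(c)
"""

# Aggregate flows by source region and classified product rather than by raw
# commodity name, which is where the classification pays off.
AGGREGATE = """
MATCH (f:Flow)-[:OF]->(:Commodity)-[:CLASSIFIED_AS]->(g:ClassifiedItem)
MATCH (f)-[:FROM]->(src:Partner)
RETURN src.name AS source, g.name AS product, sum(f.value) AS total
ORDER BY total DESC
"""

with driver.session() as session:
    session.run(INSERT_FLOW, source="Bordeaux", destination="Angleterre",
                commodity="vin rouge de Bordeaux", classified="wine",
                year=1750, value=120000.0)
    for record in session.run(AGGREGATE):
        print(record["source"], record["product"], record["total"])
driver.close()
```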

in Actes des 23èmes Journées francophones d'Ingénierie des Connaissances (IC 2012) Publication date 2012-06-25
DECLERCK Gunnar
AIMÉ Xavier
CHARLET Jean
2 views · 0 downloads
This text discusses the idea that foundational ontologies (FOs) are useful, or even necessary, for the proper functioning of content-processing systems, and in particular for their semantic interoperability. After recalling the main characteristics of ontologies, we propose a list of the major functions currently attributed to, or at least expected from, FOs, and then discuss whether they are well founded. We show (i) that the possibility of building an ontology integrating formal primitives and definitions that are general and generic (universal) enough to describe the semantics of concepts from specialised domains of knowledge is far from established; and (ii) that even if such an ontology proved feasible, it is not certain that it would ensure semantic interoperability between systems, that is, an exchange of data that preserves meaning.

in COGITO, research newsletter Publication date 2018-11
6 views · 0 downloads
In a study of sermons delivered by English preachers in the 17th and 18th centuries, médialab researcher Jean-Philippe Cointet and four sociologists and historians from American and German universities mapped the biblical references used by priests from Anglican and dissenting churches*. This work, presented in a Poetics journal article entitled “The (Protestant) Bible, the (printed) sermon, and the word(s): The semantic structure of the Conformist and Dissenting Bible, 1660–1780”, sheds light on a whole swath of the history of faiths and is notable for deploying new methods of textual analysis based on the quantitative and qualitative processing of empirical data. Awarded a prize by the American Sociological Association, this study is representative of a recent research trend in the humanities and social sciences: computational hermeneutics, which analyses and interprets cultural phenomena by drawing on quantitative methods and empirical data. (First paragraph)

in Scientometrics Publication date 2017-11
RAINHO BRÁS Oriana
DAVID Leonor
ARRISCADO NUNES João
CARDOSO Fátima
JERÓNIMO Carmen
1 view · 0 downloads
This paper analyses the developmental dynamics of oncology research in Portugal during the second half of the twentieth century and the early twenty-first century. Grounding its conclusions in a scientometric analysis of a database of publications covering the period 1976–2015, the paper shows how the expansion of oncology research from the end of the 1990s through the 2000s is closely related to science and technology policy decisions in the country. The main actors of the institutional evolution of the field are public organizations, both hospital- and academia/research-based, frequently working together. Portuguese oncology research focused especially on organ-based cancers, underlining the strong link between the laboratory and the clinic. Accordingly, translational research is a major trend in oncology research, as evidenced by the analysis of publications in major journals and inter-citation maps. Net...

in COGITO, la lettre de la recherche à Sciences Po Publication date 2018-11
8 views · 0 downloads
In a study of sermons delivered by English preachers in the 17th and 18th centuries, Jean-Philippe Cointet, a researcher at the médialab, and four sociologists and historians from American and German universities mapped the biblical references used by priests of the Anglican and dissenting churches*. This work, presented in a Poetics article entitled “The (Protestant) Bible, the (printed) sermon, and the word(s): The semantic structure of the Conformist and Dissenting Bible, 1660–1780”, not only sheds light on a whole swath of the history of beliefs but is also notable for mobilising new methods of textual analysis based on quantitative and qualitative processing of empirical data. Recognised by a prize from the American Sociological Association, this study is representative of a recent research trend in the humanities and social sciences: computational hermeneutics, which analyses and interprets cultural phenomena by drawing on quantitative methods and empirical data. (First paragraph)

40 views · 0 downloads
In a context of division around a constitutional reform that calls its role into question, Parliament opened a commission of inquiry following the Benalla affair. While Emmanuel Macron has often reaffirmed the importance of members of parliament, what about their actual power to act?

in Le Monde Publication date 2012-02-03
OOGHE Benjamin
LAROUSSERIE David
4 views · 0 downloads

A central notion of research in the humanities and social sciences, the corpus is seeing its contours redefined now that the elements composing it are most often contents or data taken from the web. What possibilities does the digital context offer for building and processing corpora, and are data-collection and observation methods changed by it? Are the size and representativeness of a corpus revisited when the accessible data belong to streams and are measured in giga- or terabytes, and which units should be retained when the data are heterogeneous and unstable? What instruments are available to researchers to build, process and analyse these corpora?

Publication date 2018
BRILLI Agata
TASSI Roberta
2 views · 2 downloads
From the logs and information left in online spaces to the data points self-generated by connected devices, digital traces have become more pervasive over the past years, prompting an expansion of Human-Centered Design methods. Along with some big-data approaches, Digital Methods of research – which treat the actual content of users' digital manifestations online (i.e. tweets, Instagram pictures, comments) – offer the opportunity to better understand users through their online activities. This paper investigates how Digital Methods can be repurposed as a full-fledged approach for Human-Centered Design. Building on the NATURPRADI project – a research project aimed at describing the debate raised by the re-vegetation of the city of Paris by analysing Twitter posts – the paper explains how we identified and described a set of personas characterized by different approaches towards the evolution of the urban nature issue. The final objective of the paper is to provide a first methodological tool created at the intersection of Digital Methods and Human-Centered Design, Data-driven Personas, and to discuss its opportunities and limitations.
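As a purely illustrative sketch of the kind of grouping step such a workflow might involve (it is not the NATURPRADI method itself), the snippet below clusters a handful of invented Twitter users by the vocabulary of their posts; the tweets, user names and the choice of TF-IDF plus k-means are all assumptions made for the example.

```python
# Hypothetical sketch: group users with similar vocabulary; each cluster is a
# candidate persona to be described qualitatively afterwards.
# Assumptions: invented tweets, scikit-learn available, TF-IDF + k-means.
from collections import defaultdict

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = [
    {"user": "alice", "text": "More trees in my street please #vegetalisation #paris"},
    {"user": "bob", "text": "Weeds everywhere, who maintains these planters? #proprete"},
    {"user": "carol", "text": "Harvest day on the rooftop farm #agricultureurbaine"},
    {"user": "dave", "text": "Planting day with the neighbours #vegetalisation"},
]

# Aggregate each user's tweets into a single textual profile.
profiles = defaultdict(list)
for tweet in tweets:
    profiles[tweet["user"]].append(tweet["text"])
users = sorted(profiles)
documents = [" ".join(profiles[user]) for user in users]

# Vectorize the profiles and cluster them into two candidate personas.
matrix = TfidfVectorizer().fit_transform(documents)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(matrix)
for user, label in zip(users, labels):
    print(f"persona {label}: {user}")
```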

in Diseña Publication date 2019-01
8 views · 0 downloads
The essay tries to unfold the specificities of some design approaches developed at the Sciences Po médialab. Instead of proposing a generalizable set of methods, this experiential account is a tentative systematization of some techniques that have been tested in the lab. Describing them is like annotating an anthology of thoughts and experiments that revolve around the questions of the ‘public’ and its ‘issues’. The techniques are aimed at exploring social, technical and political issues, collecting their traces, their descriptions and their partial stories, and bringing them into a space where they can be questioned. The different techniques are aligned along two epistemic movements, complementing, supporting and expanding the digital methods traditionally used in the lab. The first movement tries to produce a localized representation of the issue. The second one invites the public to get as close as possible to it.

in Progetto Grafico Publication date 2013
3 views · 3 downloads
A collaborative project directed by Bruno Latour builds a multimedia repertoire for scientific and philosophical research, in the spirit of a new symbiosis between database and narration.

in The Graphic Design Reader Publication date 2019-04
GRAFFIETI Michele
SCAGNETTI Gaia
MASUD Luca
18 views · 0 downloads
This paper is part of a research project on the visualization of complex systems. More specifically, it focuses on the emerging need for a narrative approach in the understanding of complex networks. A listener plays a key role in any narration process. Likewise, in every visual representation, the observer has the same role: narrators evoke whereas observers interpret through their imaginary. Why should the designer use a narrative mode of thought? Why should they offer the audience a good story rather than a sound argument? To answer these questions, we present the Map of the Future that we designed for Wired Italy.

in Revue Thaêtre Publication date 2018
PRÉVOT Géraldine
FRODON Jean-Michel
RIOUAL Quentin
5 views · 0 downloads
Who are you? How, over the course of your career, did you encounter the question of research-creation? I am not a person but a teaching programme, created within Sciences Po by Bruno Latour in 2010. My name is SPEAP (for Sciences Po, École des arts politiques). Although I do not use the term "research-creation", it seems to me that what it designates corresponds closely to what I do.

The RICardo website (http://ricardo.medialab.sciences-po.fr) provides interactive data visualizations to explore 19th-century world international trade. This exploratory data analysis tool aims at letting scholars discover the richness but also the complexity of this dataset by providing: (1) documentation in the form of an interactive data visualization which reveals the heterogeneity of a dataset compiled from archives of different sources across a century; (2) a progressive exploration path from the most aggregated to the most precise view: world total trade, a specific country's bilateral trade, and the discrepancies between the mirror flows of a pair of trade partners; (3) a custom graphic semiology which emphasizes the uncertainty of the dataset. RICardo is meant for studying and discovering the history of trade and trade globalization at three levels of detail, with the possibility to focus on a specific country or area using only a web browser.
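To make the notion of a mirror-flow discrepancy concrete, here is a minimal sketch with made-up numbers; RICardo's own data model and computations are of course richer than this.

```python
# Hypothetical sketch: compare what country A reports exporting to B with what
# B reports importing from A for the same year (values are invented).
def mirror_discrepancy(exports_reported_by_a: float,
                       imports_reported_by_b: float) -> float:
    """Relative gap between the two sides of the same bilateral flow."""
    mean = (exports_reported_by_a + imports_reported_by_b) / 2
    return abs(exports_reported_by_a - imports_reported_by_b) / mean

# France -> United Kingdom, 1860, in a common (made-up) currency unit:
print(f"{mirror_discrepancy(105.0, 92.0):.1%}")  # prints 13.2%
```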

Publication date 2016-07
TIBLE Grégory
DU Mengying
2 views · 0 downloads
RICardo (Research on International Commerce) is a project dedicated to trade between nations over a period running from the beginnings of the Industrial Revolution to the eve of the Second World War. It combines a historical trade database covering every country in the world with a website offering an exploration of the history of international trade through visualizations.

Initiated in 2012-2014, at the crossroads of parliamentary informatics and digital social sciences, La Fabrique de la Loi is a project carried out in partnership between two research laboratories at Sciences Po, the médialab and the CEE, and the volunteer citizen association Regards Citoyens. First put online in 2014 with a reduced sample of laws, the website www.LaFabriqueDeLaLoi.fr now makes it possible to follow every step of the legislative procedure for more than 800 laws enacted since 2008. The tool allows all these laws to be analysed quantitatively and over time, the degree of modification of the text of their articles to be observed through a colour code, and the speeches and amendments relating to a given article or representative to be explored. It thereby offers various kinds of insight into how parliament transforms the text of laws, insights that the classical statistical approach overlooks.
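As a hedged illustration of what a "degree of modification" between two versions of an article might look like (the actual lafabriquedelaloi.fr pipeline is certainly more elaborate), the sketch below uses Python's difflib on two invented versions of a sentence; the resulting ratio is the kind of value that could then be mapped to a colour code.

```python
# Hypothetical sketch: rough degree of modification between two versions of an
# article, computed with difflib (the versions below are invented).
import difflib

def modification_ratio(before: str, after: str) -> float:
    """0.0 means identical text, 1.0 means completely rewritten."""
    return 1.0 - difflib.SequenceMatcher(None, before, after).ratio()

version_depot = "Les collectivités territoriales peuvent créer un comité consultatif."
version_adoptee = "Les collectivités territoriales créent un comité consultatif citoyen."
print(f"{modification_ratio(version_depot, version_adoptee):.0%} of the article changed")
```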

in Revue française de science politique Publication date 2015-10
17 views · 17 downloads
A third generation of social sciences must emerge to take on the specificity of the world of data and traces created by digital networks, rather than merely extending the achievements of the sciences of "society" and of "opinion". These entities were constructed in a specific era, whose genealogy is retraced here in order to be compared with the work of the agencies that exploit digital traces and that can produce all the necessary reflexivity by becoming predictive. It is proposed to think of digital traces as "replications" ("répliques") that the social sciences must follow with suitable methods, for they now constitute a new continent of the social.
