Marc Faddoul is a transdisciplinary technologist and AI researcher, and an expert on recommender systems and algorithmic auditing.
He is the director and co-founder of AI Forensics, a digital-rights non-profit which investigates opaque and influential algorithms.
He regularly advises regulators with technical perspectives on AI ethics and platform accountability,
including through expert committees of the EU Commission, the French Digital Council (CNNum), the French media regulator (ARCOM), and the French États Généraux de l'Information.
He has also testified before the French Senate and the European Parliament.
Marc Faddoul is often called upon to comment on AI and society by the Wall Street Journal, The Guardian, Le Monde and others.
He gained experience in algorithmic auditing in academia (UC Berkeley), big tech (research collaboration with Facebook AI), and start-ups (Bloom, Jalgos).
Education
UC Berkeley School of Information - Master of Science
MIMS: Transdisciplinary program on the societal, legal and ethical impacts of technology.
Télécom Paris, Institut Polytechnique de Paris - Diplôme d’Ingénieur
Télécom is France’s top Computer Science school. The engineering degree (undergrad + MS) covers a wide technical scope on computational systems and networks.
Double-degree MS in Data Science at Eurecom.
Professional
Co-Founder & Director - AI Forensics (2021-): AI Forensics is a non-profit specializing in algorithmic investigations. It was co-founded with Claudio Agosti, expanding the activities of Tracking Exposed.
Co-Director - Tracking Exposed (2021-2022): Led the development, strategy and funding of the organization.
Associate Researcher - UC Berkeley (2019-2020): Development of novel algorithmic approaches to detect disinformation and borderline content, with Professor Hany Farid.
Research Scientist - Facebook AI (2020): Research collaboration with UC Berkeley to improve disinformation classification algorithms.
Algorithm Designer - Bloom (2018): Designed an influence-ranking model for social media posts.
Algorithm Designer - Jalgos (2016): Designed and implemented data-intensive algorithmic solutions for Fortune 500 companies.
Freelance Developer (2013-17): Several projects involving discrete optimisation, resource allocation and web semantics.
Affiliations
- Mozilla Fellow, Mozilla Technology Fund (2022 & 2023)
- Affiliated Scholar, Tech Policy Lab, CITRIS and Banatao Institute (2020-2022)
- Algorithmic Fairness and Opacity research group, School of Information, UC Berkeley (2018-2021)
- Visiting Researcher, Human Rights Center, UC Berkeley (2019-2021)
Selected News Featuring
Center for Humane Technology
Podcast: TikTok’s Transparency Problem with Marc Faddoul and Tristan Harris
The Guardian
Series: TikTok has become a global giant. The US is threatening to rein it in
The Guardian
Series: What TikTok does to your mental health: ‘It’s embarrassing we know so little’
Wall Street Journal
TikTok’s Pullback in Russia Leaves More Space for Pro-Kremlin Propaganda
Le Monde
Facebook, YouTube… les grandes plates-formes d’Internet face au défi du coronavirus (Facebook, YouTube… the major internet platforms face the coronavirus challenge)
FranceTV Info
TikTok fait-il passer ses intérêts commerciaux avant les intérêts politiques ? (Does TikTok put its commercial interests ahead of political ones?)
Brookings Institution
COVID-19 is triggering a massive experiment in algorithmic content moderation, by Marc Faddoul
O'Reilly Media
Toward Algorithmic Humility: an essay by Marc Faddoul on the design of algorithmic sentencing tools and their alarming false-positive rates.
Public Interventions
EDMO Annual Conference - May 24: Panel: AI as a contributor to disinformation
EU-US Technology Trade Council - Apr 24: Expert working group: Data access and technology mediated gender-based violence
European Parliament - Mar 24: Audition (video): Protecting the 2024 Elections: Tackling Disinformation and Polarisation
Nobel Prize Dialogue - Mar 24: Roundtable: Fact & Fiction: The Future of Democracy
EU Commission - Feb 24: Keynote: AI literacy to protect elections
DSA Stakeholder Event - Panel (video): Conducting DSA risk assessments - algorithms in the spotlight!
Grand Continent Summit - Dec 23: Panel: Social and Economic impact of AI
United Nations B-Tech - Oct 23: Workshop: AI, Human Rights, and the Evolving Regulatory Environment
EU Disinfo Lab Conference - Oct 23: Keynote: TikTok. Who’s there?
EU Commission - Jun 23: Plenary: Code of Practice on Disinformation
Nordic Media Days - May 23: Keynote speaker: What to do when algorithms rule the world?
French Senate - Mar 23: Audition (video): Special Commission on TikTok
International Cybersecurity Forum (FIC) - Apr 23: Auditing TikTok’s recommender system
Mozilla Festival - Mar 23: Panel: Navigating the open-source algorithm audit tooling landscape
EU Disinfo Lab Conference - Sep 22: Adversarial Algorithmic Audits
ARCOM - May 22: Amplification and Demotion of election-related content on social media. Presentation of results to the French media regulator.
Mozilla Festival - Apr 22: Gain control back on your YouTube recommendations
Webinar with Members of the European Parliament - Nov 21: Alternative Recommender Systems in the DSA: how to protect free expression, create competition and empower users
Harvard University - Oct 21: Faculty working group on Social Media Recommendation Algorithms.
Università di Milano - Pre-COP - Sept 21: The impact of Recommendation Algorithms on the Climate Crisis, as part of a lecture series on climate and technology.
Université Paris Nanterre - Nov 21: Guest Lecture for Rémy Demichelis’s class Théories de l’information
Université Paris 2 Panthéon-Assas - Dec 2020, Sept 2021: Guest Lecturer in Primavera de Filippi’s class Digital Administrations: Algorithmic Fairness in Digital Administrations
Conseil National du Numérique (French Digital Council) - Jun 2021: Part of the panel of contributing experts to the report Itinéraire des fausses informations en ligne and following debates.
US State Department - July 2020: Recommender Systems and Power Dynamics, a 90-minute lecture for members of the State Department and US Cyber Command, in a series hosted by Clint Watts.
UC Berkeley - GEESE Panel: The Role and Impact of Auditing Algorithms
UC Berkeley - Fall 2019: Guest Lecturer, Steve Weber’s Applied Behavioral Economics
Northeastern University - Spring 2019: Information Ethics Roundtable, paper presentation.
Selected Projects and Investigations
See our work at AI Forensics for more recent projects.
Investigating TikTok’s policy response to the Ukraine war in Russia
We monitored the evolution of TikTok’s policy in Russia as the war in Ukraine unfolded. We uncovered that TikTok blocked access to international content for its Russian users without declaring it. In a follow-up investigation, we exposed how loopholes in TikTok’s upload-ban policy led the platform to be flooded with pro-Kremlin propaganda. Our report led six US Senators to summon TikTok’s CEO for explanations.
YouChoose.ai: Alternative recommender systems for YouTube
YouChoose.ai is a browser plug-in that enables users and content creators to choose and control their recommendations on YouTube.com. The current release (beta) allows content creators to choose the recommendations shown on their own content.
The long-term goal for YouChoose is to become a platform for recommender systems, empowering users to pick an accountable algorithm aligned with their own interests.
An analysis of YouTube’s promotion of conspiratorial content
Lead author of an audit of YouTube’s recommendation engine. The study was published with the New York Times, and was cited by the US Congress in a formal letter to the CEOs of Google and YouTube. Interviews featured in Le Monde and the BBC. The analysis relied on a monitoring infrastructure paired with a machine-learning classifier to detect conspiratorial content.
Uncovering physiognomic filter-bubbles on TikTok
An experiment which showed how race and appearance impact recommendability on TikTok. Featured in BuzzFeed, Vox, Wired UK, Forbes…
AlgoTransparency
AlgoTransparency monitors the channels most promoted by YouTube’s recommendation engine to logged-out users.
Investigating Sniper Ad Targeting
A master’s thesis project investigating whether and how malicious ads can be tailored and sent to a single individual. Advised by Professor Deirdre Mulligan. Introductory video here.
Auditing a Judicial Algorithm
In-depth analysis of a pretrial risk-assessment tool used in the US to determine whether a defendant should be placed in detention before their trial. Research presented at the Information Ethics Roundtable at Northeastern University.
Cybersecurity Consulting
Consulting within the Citizen Clinic for a civil-rights NGO from Central America, helping them defend against cyber-threats and state surveillance.