As more Americans express frustration with the tragedies and overall conduct of the Immigration and Customs Enforcement agency, the climate of social media around the subject has grown more volatile. Platforms like TikTok, Facebook, and Instagram have resorted to "shadow banning," in which algorithms suppress related content by blacklisting it from search suggestions and explore pages. On TikTok, some users have reported their content receiving no traction or triggering a notification that reads "ineligible for recommendation." Much of this content focuses on recent ICE incidents, such as the killing of V.A. nurse Alex Pretti in Minneapolis.
Meanwhile, on Meta platforms, users have noticed that links to websites such as "ICE List," a volunteer-run database containing the information and identities of ICE agents, are taken down or flagged by the algorithm, often on the grounds that the links are spam or violate Meta's community guidelines. In response, content creators have turned to alternate methods of informing their audiences about ICE-related subjects. One recently relevant trend is the re-emergence of clickbait: a creator posts a video or photo slideshow with a misleading thumbnail, and the later slides or the second half of the video contain information on how to deal with ICE. Music influencer and reviewer Anthony Fantano is among the notable creators who have participated in this trend.
This has prompted state governments such as California's to launch investigations into whether tech and social media companies are suppressing anti-ICE content. While tech giants censor their platforms, ICE has expanded its own use of algorithms, monitoring social media for its casework and operations. This includes contracting surveillance teams to scrape social media for leads on deportation targets and employing AI-driven vetting tools such as "Hurricane Scores" to estimate a lead's likelihood of lacking "proper legal credentials" and to gauge the hostility of certain visa applicants who might be current political activists.
The agency has also begun to deploy Palantir technology in its surveillance protocols, such as the Palantir software "Investigative Case Management," which cross-references social media posts with license numbers and commercial records to create easily accessible dossiers on suspected targets. Palantir has steadily acquired large swaths of public social media data to feed into its algorithms. Palantir Technologies also maintains a large international presence as a close business partner of the governments of Israel, France, Germany, and the United Kingdom, selling its technologies and services primarily for espionage and military operations, most recently in the genocide in Gaza. Much of the data collected for its social media surveillance algorithms is processed at its Tel Aviv office.
As the discourse around ICE on social media becomes more divisive, and online censorship draws more public scrutiny, several questions come to mind. What are the ethical limits of monitoring social media for the sake of law enforcement? How far can tech corporations censor their platforms before it violates constitutional rights? And perhaps most importantly, what precedent does this conversation about ICE set for future discussions of controversial subjects?



