The technology they like, no matter the social cost

“I was struck by how many of the wealthiest and most powerful figures in Silicon Valley — including some I knew — were now loudly backing Mr. Trump. ... Mr. Trump appeals to some Silicon Valley elites because they identify with the man. To them, he is a fellow victim of the state, unjustly persecuted for his bold ideas. Practically, he is also the shield they need to escape accountability. Mr. Trump may threaten democratic norms and spread disinformation; he could even set off a recession, but he won’t challenge their ability to build the technology they like, no matter the social cost.”
Why Do People Like Elon Musk Love Donald Trump? It’s Not Just About Money. New York Times Opinion Guest Essay by Chris Hughes, co-founder of Facebook and chair of the Economic Security Project. September 25, 2024.

The moral test of a society

It is time to put a surgeon general's warning on social media platforms, stating that social media is associated with significant mental health harms for adolescents. […]

Last fall, I gathered with students to talk about mental health and loneliness. As often happens in such gatherings, they raised the issue of social media.

After they talked about what they liked about social media — a way to stay in touch with old friends, find communities of shared interests and express themselves creatively — a young woman named Tina raised her hand. “I just don’t feel good when I use social media,” she said softly, a hint of embarrassment in her voice. One by one, they spoke about their experiences with social media: the endless comparison with other people that shredded their self-esteem, the feeling of being addicted and unable to set limits and the difficulty having real conversations on platforms that too often fostered outrage and bullying. There was a sadness in their voices, as if they knew what was happening to them but felt powerless to change it. […]

The moral test of any society is how well it protects its children. Students like Tina and mothers like Lori do not want to be told that change takes time, that the issue is too complicated or that the status quo is too hard to alter.

One of the most important lessons I learned in medical school was that in an emergency you don't have the luxury to wait for perfect information. You assess the available facts, you use your best judgment, and you act quickly.

Surgeon General: Why I'm Calling for a Warning Label on Social Media Platforms, by Vivek H. Murthy, the surgeon general of the United States. New York Times, 17 June 2024

This has been a striking repudiation of the idea that there is an online and an offline world, and that what is said online is in some way kept online. I hope that this eliminates the conception from people’s minds.
Renee DiResta, Stanford Internet Observatory, as quoted in Twitter and Facebook Lock Trump’s Accounts After Violence on Capitol Hill, by Kate Conger, Mike Isaac and Sheera Frenkel, New York Times, 6 January 2021

A referendum on reality itself

There is perhaps no better place to witness what the culture of disinformation has already wrought in America than a Trump campaign rally.

Tony Willnow, a 34-year-old maintenance worker who had an American flag wrapped around his head, observed that Trump had won because he said things no other politician would say. When I asked him if it mattered whether those things were true, he thought for a moment before answering. “He tells you what you want to hear,” Willnow said. “And I don’t know if it’s true or not — but it sounds good, so fuck it.”

The political theorist Hannah Arendt once wrote that the most successful totalitarian leaders of the 20th century instilled in their followers “a mixture of gullibility and cynicism.” When they were lied to, they chose to believe it. When a lie was debunked, they claimed they’d known all along — and would then “admire the leaders for their superior tactical cleverness.” Over time, Arendt wrote, the onslaught of propaganda conditioned people to “believe everything and nothing, think that everything was possible and that nothing was true.”

Leaving the rally, I thought about Arendt, and the swaths of the country that are already gripped by the ethos she described. Should it prevail in 2020, the election’s legacy will be clear — not a choice between parties or candidates or policy platforms, but a referendum on reality itself.

The Billion-Dollar Disinformation Campaign to Reelect the President, by McKay Coppins, The Atlantic, March 2020

“In an ever-changing, incomprehensible world the masses had reached the point where they would, at the same time, believe everything and nothing, think that everything was possible and that nothing was true.” Hannah Arendt, The Origins of Totalitarianism, 1951

Facebook’s Frankenstein Moment

“But there may not be enough guardrails in the world to prevent bad outcomes on Facebook, whose scale is nearly inconceivable. Alex Stamos, Facebook’s security chief, said last month that the company shuts down more than a million user accounts every day for violating Facebook’s community standards. Even if only 1 percent of Facebook’s daily active users misbehaved, it would still mean 13 million rule breakers…”
Is This Facebook’s Frankenstein Moment? by Kevin Roose, New York Times, 21 September 2017
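
A quick back-of-the-envelope check of the arithmetic in that passage; the daily-active-user count below is an assumption (roughly Facebook's reported figure at the time), not a number taken from the article:

```python
# Rough sanity check of the "13 million rule breakers" figure.
# ASSUMPTION: ~1.3 billion daily active users, roughly Facebook's
# reported figure in 2017; the article does not state this number.
daily_active_users = 1_300_000_000
misbehaving_share = 0.01  # "only 1 percent"

rule_breakers = int(daily_active_users * misbehaving_share)
print(f"{rule_breakers:,} rule breakers")  # 13,000,000
```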

Facebook, Ferguson, and the Ice Bucket Challenge

On the evening of August 13 [2014], the police appeared on the streets of Ferguson in armored vehicles and wearing military gear, with snipers poised in position and pointing guns at the protesters. That is when I first noticed the news of Ferguson on Twitter—and was startled at such a massive overuse of police force in a suburban area in the United States.

On Twitter, among the roughly one thousand people around the world whom I follow, the topic became dominant; my feed was still sorted chronologically at the time.

On Facebook's algorithmically controlled news feed, however, it was as if nothing had happened.

As I inquired more broadly, it appeared that Facebook’s algorithm may have decided that the Ferguson stories were lower priority to show to many users than other, more algorithm-friendly ones.

Instead of news of the Ferguson protests, my own Facebook news feed was dominated by the “ice-bucket challenge,” a worthy cause in which people poured buckets of cold water over their heads and, in some cases, donated to an amyotrophic lateral sclerosis (ALS) charity. Many other people were reporting a similar phenomenon.

Facebook's algorithm was not prioritizing “Ice Bucket Challenge” posts over Ferguson posts because of a nefarious plot by Facebook's programmers or marketing department to bury the nascent social movement. The algorithm they designed, and whose priorities they set, combined with the signals they allowed users on the platform to send, created that result.

From Zeynep Tufekci's Twitter and Tear Gas (2017), page 155.
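
Tufekci's point is structural rather than conspiratorial: a ranking tuned for engagement signals can bury a story without anyone deciding to bury it. Here is a minimal sketch of that dynamic; the posts, counts, and weights are invented for illustration and are not Facebook's actual ranking system.

```python
from datetime import datetime

# Hypothetical posts: (title, timestamp, likes, comments).
# Hard news can draw fewer easy "likes" than a feel-good meme,
# even when far more people care about it.
posts = [
    ("Ferguson protest footage",   datetime(2014, 8, 13, 22, 0), 40, 200),
    ("Ice Bucket Challenge video", datetime(2014, 8, 13, 21, 0), 900, 50),
]

def chronological(feed):
    """Newest first, the way Twitter still worked in 2014."""
    return sorted(feed, key=lambda p: p[1], reverse=True)

def engagement_ranked(feed, w_like=1.0, w_comment=0.5):
    """Toy engagement score. Real feed models are vastly more complex,
    but anything optimized for likes and clicks shares this basic shape."""
    return sorted(feed, key=lambda p: w_like * p[2] + w_comment * p[3],
                  reverse=True)

print([p[0] for p in chronological(posts)])      # Ferguson story first
print([p[0] for p in engagement_ranked(posts)])  # Ice Bucket video first
```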

Hurting people at scale

Selected passages and quotes from Ryan Mac and Craig Silverman’s outstanding piece in BuzzFeed News, Hurting People At Scale: Facebook’s Employees Reckon With The Social Network They’ve Built

On July 1, Max Wang, a Boston-based software engineer who was leaving Facebook after more than seven years, shared a video on the company’s internal discussion board that was meant to serve as a warning.

“I think Facebook is hurting people at scale,” he wrote in a note accompanying the video. “If you think so too, maybe give this a watch.”

Employees on their way out of the “Mark Zuckerberg production” typically post photos of their company badges along with farewell notes thanking their colleagues. Wang opted for a clip of himself speaking directly to the camera. What followed was a clear-eyed, 24-minute hammering of Facebook’s leadership and decision-making over the previous year.

What the departing engineer said echoed what civil rights groups such as Color of Change have been saying since at least 2015: Facebook is more concerned with appearing unbiased than making internal adjustments or correcting policies that permit or enable real-world harm.

Yaël Eisenstat, Facebook's former election ads integrity lead, said the employees’ concerns reflect her experience at the company, which she believes is on a dangerous path heading into the election.

“All of these steps are leading up to a situation where, come November, a portion of Facebook users will not trust the outcome of the election because they have been bombarded with messages on Facebook preparing them to not trust it,” she told BuzzFeed News.

She said the company’s policy team in Washington, DC, led by Joel Kaplan, sought to unduly influence decisions made by her team, and the company’s recent failure to take appropriate action on posts from President Trump shows employees are right to be upset and concerned.

“These were very clear examples that didn't just upset me, they upset Facebook’s employees, they upset the entire civil rights community, they upset Facebook’s advertisers. If you still refuse to listen to all those voices, then you're proving that your decision-making is being guided by some other voice,” she said.

“[Zuckerberg] uses ‘diverse perspective’ as essentially a cover for right-wing thinking when the real problem is dangerous ideologies,” Brandi Collins-Dexter, a senior campaign director at Color of Change, told BuzzFeed News after reading excerpts of Zuckerberg’s comments. “If you are conflating conservatives with white nationalists, that seems like a far deeper problem because that’s what we’re talking about. We’re talking about hate groups and really specific dangerous ideologies and behavior.”

“Facebook is getting trapped by the ideology of free expression. It causes us to lose sight of other important premises, like how free expression is supposed to serve human needs.” — Max Wang

Replying to Wang’s video and comments, Facebook’s head of artificial intelligence Yann LeCun wrote,

“American Democracy is threatened and closer to collapse than most people realize. I would submit that a better underlying principle to content policy is the promotion and defense of liberal democracy.”

Other employees, like [engineer Dan Abramov], have seized the moment to argue that Facebook has never been neutral, despite leadership’s repeated attempts to convince employees otherwise, and as such needed to make decisions to limit harm. Facebook has proactively taken down nudity, hate speech, and extremist content, while also encouraging people to participate in elections — an act that favors democracy, he wrote.

“As employees, we can’t entertain this illusion,” he said in his June 26 memo titled “Facebook Is Not Neutral.” “There is nothing neutral about connecting people together. It’s literally the opposite of the status quo.”

Zuckerberg seems to disagree. On June 5, he wrote that Facebook errs on the “side of free expression” and made a series of promises that his company would push for racial justice and fight for voter engagement.

The sentiment, while encouraging, arrived unaccompanied by any concrete plans. On Facebook’s internal discussion board, the replies rolled in.

Brave New Workplace

1980:

The computerized control of work has become so pervasive in Bell Telephone's clerical sector that management now has the capacity to measure how many times a phone rings before it is answered, how long a customer is put on hold, how long it takes a clerk to complete a call. …Each morning, workers receive computer printouts listing their break and lunch times based on the anticipated traffic patterns of the day. …Before computerization, a worker's morning break normally came about two hours after the beginning of the shift; now, it can come as early as fifteen minutes into the working day. Workers cannot go to the bathroom unless they find someone to take their place. If you close your terminal, right away the computer starts clacking away and starts ringing a bell.
From Brave New Workplace by Robert Howard, in Working Papers for a New Society, Cambridge Policy Studies Institute, November-December 1980 (As cited in New Information Technology: For What? by Tom Athanasiou, Processed World, April 1981)

The essay ends with, “In a world where everything and everyone is treated as an object to be bought and sold, the new technologies — and most of the old ones for that matter — will inevitably create hardship and human misery. […] The ease with which computers are used as instruments of social control cannot be allowed to obscure their liberatory potential.”

20 cents off a can of corn

Most large companies doing business in California are required by the state’s new privacy law to disclose what they know about customers and how that information is used.

This resulted in fairly straightforward announcements by many businesses.

Then there’s Ralphs, the supermarket chain owned by Kroger.

…As part of signing up for a rewards card, Ralphs “may collect” information such as “your level of education, type of employment, information about your health and information about insurance coverage you might carry.”

It says Ralphs may pry into “financial and payment information like your bank account, credit and debit card numbers, and your credit history.” […]

Ralphs says it’s gathering “behavioral information” such as “your purchase and transaction histories” and “geolocation data,” which could mean the specific Ralphs aisles you browse or could mean the places you go when not shopping for groceries, thanks to the tracking capability of your smartphone.

Ralphs also reserves the right to go after “information about what you do online” and says it will make “inferences” about your interests “based on analysis of other information we have collected.”

Other information? This can include files from “consumer research firms” — read: professional data brokers — and “public databases,” such as property records and bankruptcy filings.

[The article also notes that Ralphs' parent company Kroger also owns a company 'devoted solely to using customer data as a business resource' by aggregating data about its customers and selling it on the open market.]

“This level of intrusiveness seems like a very unfair bargain in return for, say, 20 cents off a can of corn,” Fordham University law professor Joel Reidenberg said.

Is a supermarket discount coupon worth giving away your privacy?, by David Lazarus, Los Angeles Times, 21 January 2020

As Zuck prattles on

“As Zuck prattles on in revisionist blog posts about how he intended [Facebook] to ‘Give people a voice’, he consistently misses this point: harassment of this sort *silences* voices. It deters counterspeech to terrible ideas by making the reputational, time, and sanity cost too high…”
From a thread by Renee DiResta (@noUpside) of the Stanford Internet Observatory, regarding anti-vaxxers harassing and threatening physicians over vaccination-related content.

A dystopian future or something

“I’ve come to the conclusion that because information constantly increases there’s never going to be privacy. Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”
Clearview investor, founder of Kirenaga Partners, and intellectual heavyweight David Scalzo, who “dismissed concerns about Clearview making the internet searchable by face.” From The Secretive Company That Might End Privacy as We Know It, by Kashmir Hill, New York Times, 18 January 2020. Peter Thiel is also an investor. I might add that you most certainly *can* ban it.

Determining the trustworthiness and compatibility of a person

“Airbnb has a patent for AI that crawls and scrapes everything it can find on you, then judges whether you are conscientious & open or show signs of ‘neuroticism, involvement in crimes, narcissism, Machiavellianism, or psychopathy.’ Good luck challenging these judgments, too!”

Trooly unctuous

“…and now we’re looking at groups of historically marginalized people being denied involvement in mainstream economic, political, cultural and social activities — at scale.”

Trooly (a play on ‘truly’, ugh) crawls social media, news sites, police and court registries, credit bureaus and similar sites and uses AI to determine whether, say, an Airbnb renter is likely to be trustworthy, in their opinion.

It does this on-demand in about 30 seconds, for a cost of about $1.

The quote in full context, below.

Trooly — [now used by] Airbnb — is combining social credit scores with predictive policing. Tools like PredPol use AI that combines data points and historical events, factors like race and location, digital footprints and crime statistics, to predict likelihood of when and where crimes will occur (as well as victims and perpetrators). It’s no secret that predictive policing replicates and perpetuates discrimination.

Combine this with companies like Instagram, Facebook, YouTube, and yes, Airbnb deciding what legal behaviors are acceptable for service, and now we’re looking at groups of historically marginalized people being denied involvement in mainstream economic, political, cultural and social activities — at scale.
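
To make the mechanism concrete, here is a schematic sketch of what an on-demand “trust score” pipeline of this kind could look like; every name, signal, and weight below is hypothetical, not Trooly's actual system.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str    # e.g. "social media", "court registry", "data broker"
    signal: str    # whatever the vendor's model claims it detected
    weight: float  # opaque penalty chosen by the vendor

def trust_score(evidence: list[Evidence]) -> float:
    """Hypothetical scoring: fold opaque signals into a single number.
    The person being scored never sees the inputs, the weights, or the
    model, which is exactly the accountability problem described above."""
    return max(0.0, 1.0 - sum(e.weight for e in evidence))

score = trust_score([
    Evidence("court registry", "name matches a 2009 filing", 0.3),
    Evidence("social media", "model flags 'neuroticism'", 0.2),
])
print(f"trustworthiness: {score:.2f}")  # 0.50, and good luck appealing it
```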

Our ads are always accurate so it’s good that Facebook won’t limit political messages because it encourages more Americans to be involved in the process. This is much better than the approaches from Twitter and Google, which will lead to voter suppression.
— Trump campaign spokesman Tim Murtaugh, as quoted in Facebook Says It Won’t Back Down From Allowing Lies in Political Ads, by Mike Isaac and Cecilia Kang, New York Times, 9 January 2020

Unusable

Whereas a social movement has to persuade people to act, a government or a powerful group defending the status quo only has to create enough confusion to paralyze people into inaction. The internet's relatively chaotic nature, with too much information and weak gatekeepers, can asymmetrically empower governments by allowing them to develop new forms of censorship based not on blocking information, but on making available information unusable.

From Zeynep Tufekci's Twitter and Tear Gas (2017).

Twitter was made for trouble

On Twitter…teens saw the street code in the workings of the site. “Whoever made Twitter,” said Tiana, in September 2010, “designed Twitter for trouble.”

She explained that she could see her friends’ confrontations with people she didn’t follow. Tiana was prepared to “jump into” these conflicts and expected her friends to do the same. In the context of the [street] code, Twitter seemed provocative. It placed users before a stream of other people’s conversations, with the prompt “What’s happening?”

Tiana, from The Digital Street by Jeffrey Lane, 2018, p. 72

Et tu, Instagram?

“Facebook is notorious for allowing anti-vaxxers and other conspiracy theorists to organize and spread their messages to millions—the two most-shared news stories on Facebook in 2019 so far are both false.

“But Facebook, Twitter, and YouTube are not where young people go to socialize. Instagram is.

“[Instagram] is likely where the next great battle against misinformation will be fought…”

Instagram Is the Internet’s New Home for Hate, by Taylor Lorenz, The Atlantic, March 2019