The Real Problems with Big Tech
With big tech companies like Google, Apple, and Amazon banning Parler, we are once again debating whether these companies have too much power. Unfortunately, the discussion is focused on the wrong arguments. People treat it as a First Amendment issue or act as if any form of censorship will inevitably lead to a dystopian, Orwellian state. This obscures important questions about public discourse, information harvesting, and algorithmic bias.
The Bad Arguments
The First Amendment
The First Amendment reads, “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.” Nowhere does it say that private tech companies must allow anyone to say whatever they want on their platforms. Case closed.
Where will the Censorship End?
The oft-used slippery slope argument holds that if any censorship occurs, it will open the door to incredibly strict regulation of speech. This rests on the false assumption that a step in a certain direction always leads to the extreme. Reality doesn’t bear this out.
For example, Germany has had strict hate speech laws since the 1950s. It is illegal there to deliberately incite hatred against national, racial, ethnic, or religious groups, yet the country has not devolved into a totalitarian state. In fact, many countries, such as Canada, Switzerland, and the UK, have anti-hate-speech laws and have likewise avoided this fate. The U.S. is unusual in how little speech it restricts. It is perfectly possible to have and enforce hate speech laws and go no further.
This is essentially what happened to Parler when it violated Amazon’s, Apple’s, and Google’s terms of service. Per Amazon, “…we cannot provide services to a customer that is unable to effectively identify and remove content that encourages or incites violence against others.” Not bound by the First Amendment, tech companies are perfectly capable of setting terms of service, enforcing them, and going no further.
What We Should be Worried About (for a start at least)
It makes far more sense to frame the discussion of big tech and online speech around what best helps the world. For example, one could argue that big tech should not censor speech in any way on the grounds that this best serves society (though with online misinformation fueling the assault on the Capitol, this argument is increasingly difficult to make). Framing the discussion correctly is vital given tech companies’ ability to cause very real harm.
Biased Algorithms
Algorithms are not neutral arbiters; they are imbued with the same prejudices found at every level of society. A study found that an algorithm hospitals use to allocate healthcare for up to 200 million patients annually “was less likely to refer black people than white people who were equally sick to programmes that aim to improve care for patients with complex medical needs.” One of the algorithm’s factors in assessing risk, and thus the need for more care, was the cost of healthcare a person accrued in one year. At face value, this appears unbiased. However, the study found that while the same amount of money was spent on black and white patients, the average black patient was far sicker and should have received a higher risk score. In this way, the systemic racism black people experience was built into the algorithm. It is a clear example of how algorithms perpetuate society’s prejudices.
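The mechanism is worth spelling out. Here is a minimal sketch (with hypothetical numbers and a made-up scoring function, not the study’s actual model) of how a cost-based proxy for medical need can rank two equally sick patients differently when one has had less access to care:

```python
# Toy illustration of proxy bias; numbers and formula are hypothetical,
# not taken from the study discussed above.
def risk_score(prior_cost_dollars: float) -> float:
    # The proxy assumption: higher past spending implies greater medical need.
    return prior_cost_dollars / 1000

# Two patients with the SAME underlying illness burden.
# Unequal access to care means less money was spent on patient B
# despite equal sickness.
patient_a_cost = 8000  # good access: illness shows up as spending
patient_b_cost = 5000  # poor access: equally sick, but less was spent

score_a = risk_score(patient_a_cost)  # 8.0
score_b = risk_score(patient_b_cost)  # 5.0

# The cost proxy scores the equally sick patients differently, so
# patient B is less likely to be referred to an extra-care program.
assert score_a > score_b
```

The point of the sketch is that no variable in the model mentions race; the bias enters entirely through the proxy, because past spending reflects access as well as need.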
Search algorithms are not immune to these problems either. In 2010, if you searched ‘black girls’ on Google, the first page of results was primarily pornography. Through the way people searched and the way Google designed its algorithms, black girls were associated with sexual gratification. The results have since improved to exclude porn and include sites on black girls’ coding, mental health, and exercise. However, the playing field is still far from neutral.
To test the bias further, I Googled ‘white girls’, ‘hispanic girls’, and ‘asian girls’. The first page for ‘white girls’ primarily consisted of a movie called White Girl. ‘Hispanic girls’ largely returned baby name suggestions. ‘Asian girls’ linked to sites boasting of beautiful Asian women and creepy dating content like ‘Asian girls dating white guys.’ These simple searches paint vastly different pictures. Every Google search (and most likely every search engine) skews reality in this way to varying extents.
These biases shape how billions of people see the world. They even permeate this post, given that most of it was researched using Google (though I grew paranoid enough to use Bing for more variety).
Data Harvesting
Like all companies, big tech companies exist to make money. One of the primary ways they do so is by organizing and selling information, and some of those methods could be considered deeply immoral.
Amazon may have banned Parler from its cloud services, but it still hosts the tech company Palantir. Innocently enough, Palantir helps organizations visualize data. Not innocently at all, Palantir has a contract with ICE (Immigration and Customs Enforcement). It assembles information such as immigrants’ cellphone data and employment records to help ICE carry out raids. As a result, Palantir was directly involved, and Amazon indirectly, in the caging of children (a fuller rundown on the raids Palantir assisted in). Microsoft has also worked with ICE. Clearly, big tech tends to go where the money is regardless of the ethical costs.
Scrutinize Big Tech
Profitability is written into corporate DNA, but ethics is not. For this reason, it is important to keep a close eye on tech companies.
Google was building a censored search engine for China called Project Dragonfly. It would blacklist terms such as “human rights,” “student protest,” and “Nobel Prize,” and it would feed information to the Chinese government about who searched for those terms. In response to protests from both inside and outside the company, Google abandoned the project.
Pressuring tech companies over their roles in human rights abuses and in perpetuating prejudice can make a difference, even if it is difficult; Palantir, for one, remains unapologetic. This reality makes pushing back on big tech, and seeking alternatives to it, all the more urgent, so that online information and discourse can become a tool for positive change.