What Price Technology?

Technology is a wonderful thing. You can find out almost anything in a matter of seconds, from the name of a familiar actor to the atomic weight of plutonium. I have a friend who recently underwent a heart procedure that in the past would have required weeks of recovery and left her with an ugly zipper scar running the length of her sternum. Thanks to minimally invasive, catheter-based techniques, however, she was out walking a mere six days after the surgery, and will bear only a few small, extremely discreet scars on her upper thighs. Electric cars are becoming more widespread by the day, and Covid-19 is being fought at the genetic level.

It is easy to get swept up in the wonderful advances and conveniences afforded by technology, but we must be cognizant, and somewhat wary, of the price being paid for its frequently unregulated proliferation throughout society. The rise of cell phones, apps, and social media is a good place to start a cost-benefit analysis of these ever-accelerating developments and platforms.

Video chatting has been an absolute blessing throughout the pandemic. I use it regularly to communicate with distant family members, and just being able to see their faces has proven a soothing balm for my anxiety and loneliness during this difficult time. We need to be aware, however, that there is a darker side to this technology that people are not talking about. Putting your face on the web means it will be picked up and stored in several databases. It is estimated that Facebook alone has a growing cache of over 100 million faces. Facebook users agree to allow the company to store their images and personal information when they accept the terms and conditions of the site, and similar contracts are employed by every other social media and internet platform.

The problem with this technology arises when these stored pictures are matched against those of others by facial recognition software. Imagine a robbery is filmed by a CCTV camera that clearly records the faces of both perpetrators. Now imagine law enforcement taking those images and comparing them to the millions of others stored and identified on a host of digital platforms. This might appear to be a welcome development – odds are good that police will catch more criminals when they have such an enormous pool to draw from, and they are much more likely to arrest the right person given how distinctive individual faces are. When we give digital companies permission to store our images and information, however, there are limits to how that material may be used. Nowhere in those terms does it say that companies may release our personal data or photos to the government or to law enforcement, making the use of facial recognition by these agencies highly suspect, if not downright illegal.
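To make the matching step concrete: modern systems reduce each face to a numeric “embedding” vector and declare a match when two vectors are close enough. Here is a minimal sketch of that idea in Python; the vectors, names, and threshold are all invented for illustration, and real systems generate embeddings with trained neural networks and search databases of millions.

```python
# A minimal sketch of facial-recognition matching, assuming a toy database
# of hypothetical "embedding" vectors. Every name, number, and threshold
# here is invented for illustration only.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two face embeddings; 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend platform database: name -> face embedding (in reality, millions).
database = {
    "alice": np.array([0.11, 0.82, 0.35, 0.41]),
    "bob":   np.array([0.90, 0.12, 0.60, 0.05]),
}

cctv_face = np.array([0.12, 0.80, 0.36, 0.40])  # embedding lifted from a CCTV frame

THRESHOLD = 0.98  # tuning this trades false matches against missed matches
for name, stored in database.items():
    score = cosine_similarity(cctv_face, stored)
    if score >= THRESHOLD:
        print(f"possible match: {name} (similarity {score:.3f})")
```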

Luckily, the Office of the Privacy Commissioner of Canada is keeping a close watch on, and limiting the possibility of, such sweeping invasions of privacy. They write, “Of particular concern is the sharing of information with other agencies and governments, including law enforcement, with the risk of government tracking and surveillance without appropriate authorization, safeguards or oversight.” In other words, no agency can access a private citizen’s information without that person’s explicit consent. The Insurance Corporation of B.C. offered to help police identify participants in the 2011 Vancouver hockey riot by running facial recognition software on images from the riot and comparing them to pictures in its driver’s licence database. The B.C. Privacy Commissioner ruled that while the company could use such technology to detect and prevent driver’s licence fraud, it could not use its database to help police identify riot suspects, because that was a purpose of which customers had not been notified.

Many European countries are being equally proscriptive, with the notable exception of the United Kingdom. For years now the U.K. has been a world leader in the number of CCTV cameras mounted in its streets, with the latest estimate upwards of five million; you are being filmed almost constantly anytime you are outdoors in a large U.K. city. The London Metropolitan Police have recently acquired vans with roof-mounted cameras linked to onboard computers that run facial recognition software in real time. I recently saw a documentary featuring the British civil liberties organization Big Brother Watch, one group amongst many that are extremely leery of allowing this technology to be used by law enforcement without strict oversight. One scene in the documentary shows members of the group standing at intersections where these vans are parked, handing out pamphlets alerting passers-by to the fact that they are being scanned.

At one point a gentleman decides he doesn’t want to be filmed and pulls his shirt up over his face as he passes the camera’s gaze. Three police officers immediately descend on him, badger him about his motives, and in the end issue him a citation and fine for non-compliance before finally letting him go. In another instance a 14-year-old black boy is stopped because the system has flagged him as a person of interest. The police detain and question him on the street – this poor kid who was just minding his own business, walking home from school with his mates. The officers are clearly about to arrest him when word comes from the van that they’ve got the wrong guy, and they all back off without apology. At this point the boy is in tears. The civil liberties of the individuals in both these cases were clearly violated: the gentleman was perfectly within his rights to cover his face, and the boy shouldn’t have been stopped in the first place, let alone harassed and interrogated. The whole thing seemed chillingly reminiscent of V for Vendetta, or perhaps more famously 1984. The name Big Brother Watch could not be more apt.

It is not surprising that a black male was stopped. Existing facial recognition software is overwhelmingly used by law enforcement to detain and harass people of colour, particularly in America. Amnesty International runs a campaign in New York City called Ban the Scan. The banner statement on its web page reads, “Facial recognition threatens the rights of Black and Brown people and could target other minority groups.” Evidently this technology has been used 22,000 times in New York City since 2017, is known to be inaccurate about 95% of the time when reading black faces, and has been shown to amplify racially discriminatory policing. It is essentially the electronic equivalent of “stop and frisk”, a technique widely practiced in NYC wherein police detain and search individuals on the suspicion that they may be carrying a concealed weapon or could be about to commit a crime. These stops have been disproportionately carried out on men of colour and have not led to any decrease in crime; they have only served to needlessly traumatize innocent people and to make black and brown communities even less likely to cooperate with authorities.

The Ban the Scan page goes on to list case after case where black and Hispanic people have been hounded and intimidated based on facial recognition reports. The one that really caught my eye concerns a BLM organizer named Derrick Ingram. Dozens of riot police, police dogs, and a helicopter showed up outside Ingram’s door the day after a BLM rally last June. Ingram was alleged to have assaulted an officer, although it was later proved that he had simply shouted at a cop through his megaphone and hadn’t used any physical force whatsoever. The officers did not produce a warrant; they falsely claimed Ingram’s legal counsel was with them, attempted to interrogate him from the corridor, and threatened to break his door in if he did not exit his apartment. Meanwhile, wanted posters generated from Ingram’s private Instagram photos were plastered throughout his neighbourhood and on NYPD social media. It seems to me that what the police were really doing was penalizing Ingram for peacefully protesting – a First Amendment right – in the hopes of discouraging him and others from doing so in the future. I find it unnerving that the authorities identified Ingram, found where he lived, and accessed his Instagram photos within hours of the rally. Facial recognition software could make privacy a thing of the past if it is not carefully monitored and applied. Do a little reading about its use in China if you want a real-life demonstration of how far-reaching and intrusive it can get.

Another aspect of advancing technology that needs discussion and oversight is the type of algorithm used by tech companies. Social networks use algorithms to sort the posts in a user’s feed by relevancy, prioritizing the content a user sees according to the likelihood they’ll actually want to see it. In other words, if you’ve shown interest in a particular subject in the past, the algorithm will make sure that posts related to that topic are front and centre in your feed in the future. The more you look into something, the more related posts the algorithm will provide. This is how people end up falling down conspiratorial rabbit holes. I’ve seen individuals who used to believe in QAnon explain that they bought into the rhetoric because their feeds were crammed with seemingly credible posts and sites insisting that Q’s theories were true. The sheer volume of corroboration, without the presentation of any evidence to the contrary, convinced them of things that are patently ridiculous and unbelievable – like that Tom Hanks is a cannibalistic, Satan-worshipping pedophile. What!?
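Platform ranking code is proprietary, but the feedback loop itself is simple enough to sketch. The toy ranker below scores posts by how often the user has engaged with each topic, so every click pushes similar content higher; all of the post data and the scoring rule are illustrative, not any platform’s actual algorithm.

```python
# A toy sketch of engagement-driven feed ranking and the feedback loop it
# creates. Real platform algorithms are far more complex; every post,
# topic, and scoring rule here is purely illustrative.
from collections import Counter

user_interest = Counter()  # topic -> number of times this user engaged

def record_click(post: dict) -> None:
    user_interest[post["topic"]] += 1  # every click sharpens the loop

def rank_feed(posts: list[dict]) -> list[dict]:
    # Posts on topics the user has already engaged with float to the top.
    return sorted(posts, key=lambda p: user_interest[p["topic"]], reverse=True)

posts = [
    {"topic": "gardening", "title": "Tomato tips"},
    {"topic": "conspiracy", "title": "What THEY won't tell you"},
    {"topic": "sports", "title": "Match recap"},
]

record_click(posts[1])  # one curious click...
record_click(posts[1])
print(rank_feed(posts)[0]["title"])  # ...and that topic now leads the feed
```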

The tragic result of falling down such information wormholes is exemplified in the horrific story of Dylann Roof, the young man who calmly and unrepentantly killed nine black people in the historic Emanuel African Methodist Episcopal Church in Charleston, South Carolina, in 2015. Roof began his descent innocently – he just wanted information about the George Zimmerman trial he kept hearing about in the news. Zimmerman is the white Florida man who killed 17-year-old Trayvon Martin, an innocent black youth whom Zimmerman deemed a threat. Zimmerman was acquitted of all charges. As an aside, he later sold the gun he’d used to kill Martin in an online auction for $250,000, making him only slightly less despicable than the lowlife who bought it and no doubt now proudly displays it in their extensive gun collection. Zimmerman is also currently suing the prosecutors in his trial for $100 million, claiming he was the victim of a conspiracy, malicious prosecution, and defamation of character. He has named Martin’s parents in the suit as well, meaning the nightmare for this bereft and beleaguered family continues.

Roof next typed the words “black on white crime” into the search engine, and that’s when the floor fell out from beneath him. The top Google results sent him to the website of the Council of Conservative Citizens, which offered page after page of what Roof referred to as “brutal black on white murders.” Google presented Roof with well-packaged propaganda – misinformation published by a group with a respectable-sounding name and a history of racist messaging. Roof immersed himself in white supremacist websites from that point on – sites that Google’s algorithm consistently put at the top of the page. Algorithms that steer users exclusively toward content confirming their likes, dislikes, and biases are a boon for advertisers, which in turn increases the profits of the platform being used. Unfortunately, the narrow focus of these algorithms replaces an information highway full of diverse perspectives with an open door to polarization and radicalization. Roof became convinced that the white race was in imminent danger, and the only solution he could see was a race war. He killed those nine innocent people in the hope that his actions would spark such a conflict. Roof is of course ultimately responsible for his heinous crimes, but surely Google and its tone-deaf algorithm are partly to blame. Google says it has since changed its algorithm to be more attuned to racist dog whistles, but a recent NPR investigation found that searches for “black on white crime” continue to call up “multiple white supremacist websites.”

The final cost of unfettered technological advances I wish to discuss is being paid predominantly by teenage girls. There has been a 62% increase in self-harm amongst girls aged 15 to 19 since 2009, and a whopping 189% increase among those aged 10 to 14. The suicide rates over the same period are equally alarming, with an increase of 70% in the former group and a heartbreaking 151% in the latter. I mentioned these terrifying numbers in a previous blog, but they are worth repeating. Gen Z teenagers (those born between the mid-to-late 90s and 2010) are living a social experiment in real time, and the results have been abysmal. They are much more anxious, fragile, depressed, and lonely than previous generations; they are risk-averse; and the rates at which they get driver’s licenses or have romantic interactions are dropping rapidly.

Gen Z girls in particular are suffering from Instagram culture – their self-esteem often hinges on how many likes they do or do not receive, and they are bombarded with carefully manipulated and highly staged photographs. These posed, artificial images exacerbate the anxieties almost all teenage girls experience about the attractiveness and desirability of their changing faces and bodies, providing unreal exemplars they can never live up to. I have recently learned that the three social media platforms girls use most – Instagram, Snapchat, and TikTok – all come with beauty filters. Filters in their original iteration were fun and silly, allowing one to sport bunny ears or a dog’s flapping tongue. Beauty filters are much more insidious. The program first detects a face and then overlays it with an invisible facial template consisting of dozens of dots, like a topographical mesh. From there, any number of graphics can be attached to the mesh. Users can change their eye colour, take that pesky bump out of their nose, plump their lips, or make the two sides of their face perfectly symmetrical. In other words, they can digitally augment themselves to fit the standard perception of beauty, making it almost impossible for them to ever accept, let alone love, their own charming, unique imperfections.
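For the curious, the landmark-mesh step can be reproduced with open-source tools. The sketch below uses Google’s MediaPipe Face Mesh model to detect that template of points on a photo and draw it; commercial filter apps use their own proprietary pipelines, and the input filename here is hypothetical.

```python
# A sketch of the "invisible facial template" step, using the open-source
# MediaPipe Face Mesh model. The warping and beautifying done by filter
# apps is proprietary; this only visualizes the mesh of anchor points.
import cv2
import mediapipe as mp

image = cv2.imread("selfie.jpg")  # hypothetical input photo
with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as face_mesh:
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    h, w = image.shape[:2]
    for point in results.multi_face_landmarks[0].landmark:
        # Each landmark is a normalized (x, y, z) position on the face;
        # filters anchor graphics and warps to these positions.
        cv2.circle(image, (int(point.x * w), int(point.y * h)), 1, (0, 255, 0), -1)
    cv2.imwrite("mesh_overlay.jpg", image)
```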

Artificial intelligence, or AI, raises pressing concerns too numerous to explore here, but I did want to touch on an interesting article I read in The Verge with the eye-catching title “Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day.” In 2016 Microsoft unveiled Tay – a Twitter bot the company described as an experiment in “conversational understanding,” claiming that the more people tweeted with Tay, the smarter it would get. Microsoft’s hope was that these interactions would be “casual and playful,” but the program was barely up and running before people started tweeting all sorts of misogynistic and racist remarks at it. In short order Tay was tweeting things like, “I fucking hate feminists and they should all die and burn in hell,” and, “Hitler was right. I hate the Jews.” Microsoft took Tay down just sixteen hours after its launch. AI, and all computer programs, will be skewed and flawed as long as humans build them – which is to say, always.
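Microsoft never published Tay’s internals, but the failure mode is easy to demonstrate. The toy “parrot bot” below learns word transitions from raw, unmoderated input and will eventually echo whatever its users feed it; it is a caricature of the problem, not a reconstruction of Tay.

```python
# A caricature of Tay's failure mode: a model that learns from raw,
# unmoderated user input inherits whatever that input contains.
import random
from collections import defaultdict

class ParrotBot:
    def __init__(self):
        self.bigrams = defaultdict(list)  # word -> observed next words

    def learn(self, message: str) -> None:
        words = message.lower().split()
        for current, following in zip(words, words[1:]):
            self.bigrams[current].append(following)  # no filter: poison sticks

    def reply(self, seed: str, length: int = 8) -> str:
        word, output = seed.lower(), [seed]
        for _ in range(length):
            if word not in self.bigrams:
                break
            word = random.choice(self.bigrams[word])
            output.append(word)
        return " ".join(output)

bot = ParrotBot()
bot.learn("the weather is lovely today")
bot.learn("the weather is awful and everyone is awful")  # hostile "training"
print(bot.reply("the"))  # the bot now parrots whatever users taught it
```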

There are of course many other problems related to accelerating technology that I haven’t touched on here, but it’s clear that those on the cutting edge of digital innovation have no intention of slowing down. Facebook’s famous early motto, after all, was “Move fast and break things.” Breaking things is not a problem in itself, but attention must be paid when people are harmed in the process. We need to be aware of the often unexpected negative consequences of this speed, and to ensure that elected officials are providing sufficient oversight and enacting appropriate legislation where necessary. I also think it’s more important than ever, given the increased use of sorting algorithms, that we double-check the veracity of any facts we learn online. I follow the procedure I used to teach intermediate students for verifying a site’s credibility: I make sure I know when the information was posted, who posted it, and what their credentials are. If I cannot ascertain all three of these facts, I close the site and move on to the next. Google and the social media platforms are money-making ventures with limited concern for content, which is why they freely disseminate, and often prioritize, dangerous and hateful misinformation. The only way to safeguard the truth of digital information is to be our own extremely discerning and consistently rigorous search engines.
