Artificial Intelligence (A.I.): the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
It appears that Victor Fresco, creator of the 2009 show Better Off Ted, understood how bias could creep into the tech world well before Silicon Valley did. The show didn’t last very long, but I loved its smart, satirical humor. With great depth of thought, the sitcom explored real-world corporate issues humorously. Episode 4 of season one has always stuck in my mind.
The episode, titled “Racial Sensitivity,” has an ingenious premise: Veridian Dynamics, the company where the sitcom is set, has decided to save money at its home office by installing sensors that switch off the power whenever nobody’s in a room. The problem is a flaw in the system that prevents it from recognizing people with dark skin. When the show’s protagonist Ted Crisp (played by Jay Harrington) suggests that the sensors are racist, his boss Veronica Palmer (Portia de Rossi) explains that the company’s position is that this is, in fact, the opposite of racism, because the system is “not targeting black people… it’s just ignoring them.” The “color-blind” company goes on to install manually operated water fountains just for black employees, and it assigns white coworkers to escort black employees through the building. If you get a chance to watch the episode, I promise you it is hilarious. Nothing that absurd happens in real life, right? Our technology developers couldn’t be so obtuse as to program a racist device. We all know that technology is emotionally neutral, correct? How could it possibly be racist or biased? Technology is just ones and zeros.
Today, companies use Artificial Intelligence (A.I.) to predict everything from creditworthiness to preferred cancer treatments. The technology has blind spots that particularly affect women and minorities. Consider this excerpt from an article written by Chris Ip of Engadget: “In 2017 a crime-predicting algorithm in Florida falsely labeled black people re-offenders at nearly twice the rate of white people. Google Translate converted the gender-neutral Turkish terms for certain professions into “he is a doctor” and “she is a nurse” in English. A Nikon camera asked its Asian user if someone blinked in the photo — no one did.” The technology industry is beginning to understand that it has a problem on its hands far more serious than the lights or water fountains failing in a fictional sitcom corporation. Bias in A.I. can mean life or death in the medical world, and it can mean life in prison for someone unjustly targeted by police. The problem is serious enough that Microsoft Corp. started FATE (Fairness, Accountability, Transparency, and Ethics in A.I.), a program set up to ferret out biases that creep into A.I. data and can skew results.
One universal lesson in working on issues of diversity and inclusion is:
It matters who is in the room.
It is no different with A.I. or any other smart technology. A.I. is only as good as the data from which it learns. Let’s say programmers are building a computer model to identify horse breeds from images. First, they train the algorithms on photos that are each tagged with breed names. Then they put the program through its paces with untagged photos of various horses, letting the algorithms name each breed based on what they learned from the training data. The programmers see what worked and what didn’t and fine-tune from there.
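The train-then-test loop described above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: the number pairs stand in for photos, the made-up breed names and measurements are illustrative only, and a simple nearest-neighbor rule stands in for whatever learning algorithm a real system would use.

```python
# Minimal sketch of the supervised-learning loop described above.
# Each "photo" is a stand-in feature pair (height in hands, stockiness score);
# a real system would learn from pixel data with a trained neural network.

def euclidean(a, b):
    """Distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(training_data, features):
    """Name the breed of an untagged 'photo' by finding the closest
    tagged example in the training set (1-nearest-neighbor rule)."""
    _, breed = min(training_data, key=lambda pair: euclidean(pair[0], features))
    return breed

# Step 1: training data -- photos, each tagged with a breed name.
training_data = [
    ((14.2, 0.9), "Shetland Pony"),
    ((16.0, 0.4), "Thoroughbred"),
    ((15.1, 0.8), "Quarter Horse"),
]

# Step 2: put the program through its paces on untagged photos.
test_photos = [(14.0, 0.85), (16.2, 0.35)]
predictions = [predict(training_data, p) for p in test_photos]

# Step 3: the programmers compare predictions against the true breeds
# and fine-tune from there.
print(predictions)  # → ['Shetland Pony', 'Thoroughbred']
```

Notice what the sketch implies about bias: a breed that never appears in `training_data` can only ever be mislabeled as something else. The same gap is how people underrepresented in training data end up misclassified by real systems.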