Guide to Building Data-Driven Organizations in the Public Sector

Ethics of Algorithms

(Team 7) Julie Moore and Lorna Romero

Topic Overview

This section of Weapons of Math Destruction explores how big data can negatively affect an individual seeking insurance, specifically auto insurance, and comments on how organizations use health-related data. The major themes of the book are revisited in the conclusion, along with an aspirational goal for the proper use of data and the non-proliferation of WMDs in this context.

Chapter Summaries

WMD Chapter 9 NO SAFE ZONE: Getting Insurance

Segregating people into different risk classes began in the late nineteenth century with Frederick Hoffman at Prudential Life Insurance. He used data to argue that “race was a powerful predictor of life expectancy.” Hoffman treated all African Americans as uninsurable, lumping laboring sharecroppers and professionals into a single group. Though “not meaning to harm,” he “published a 330-page report that set back the cause of racial equality in the U.S. and reinforced the status of millions as second-class citizens.”

Centuries earlier, John Graunt and other mathematicians used mortality data to estimate the most probable arc of a person’s life span. Today, however, insurers draw many of their pricing scores from credit reports, meaning “how you manage money can matter more than how you drive a car.” In Florida, for example, a driver with a clean record and a poor credit score pays an average of $1,552 more per year than a convicted drunk driver with a good credit score.

Allstate prices policies partly on whether it predicts a consumer will “shop around” for better pricing; customers judged unlikely to shop are charged more for the same policy. As the former Texas insurance commissioner and CFA director of insurance put it, “Allstate’s insurance pricing has become untethered from the rules of risk-based premiums and from the rule of law.” The industry as a whole charges customers based on a host of proxies rather than actual driving skill. The result is a feedback loop: poorer drivers pay higher premiums, and if those higher premiums push them into bad debt, they may default on other obligations such as auto loans and credit cards, which spirals them “into an even more forlorn microsegment.” It is a never-ending vicious circle.

Consumer Reports got involved and launched a campaign directed at the National Association of Insurance Commissioners (NAIC) titled “Price me how I drive, not by who you think I am!” In other words, premiums should be based on driving records, not on other proxies. The trucking industry offers an example. About seven hundred truckers die per year in America, and the average cost of a fatal crash is $3.5 million according to the Federal Motor Carrier Safety Administration. So leading insurers offer discounted rates to truckers who agree to share their driving data through a telematic unit, which logs speed, braking, and acceleration, and a GPS monitor that tracks the vehicle’s movements. Ideally, this coincides with the Consumer Reports campaign. It is also pointed out that if a driver drives through a risky neighborhood, where car thefts and drunken driving are prevalent, insurers can take this into consideration, which sounds like an improvement.

Yet the chapter gives the example of a barista who drives home late at night along a street lined with bars and strip clubs to save time and toll charges, and who now falls into a higher premium bracket as a result. In the early years drivers had to “opt in” to be tracked; now everyone can be tracked all the time. Big data, applied without human intervention, is categorizing everyone. Insurance companies’ original purpose was to help society balance its risk, but with big data the citizens who can least afford high insurance rates are getting hit the hardest.
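To make the telematics idea concrete, here is a minimal, hypothetical sketch (not from the book) of how an insurer might turn logged trip data into a usage-based risk score. The trip fields, weights, and thresholds are invented for illustration; real insurer models are proprietary and far more elaborate.

    # Hypothetical usage-based ("telematics") scoring sketch.
    # Field names, weights, and thresholds are assumptions for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Trip:
        miles: float
        hard_brakes: int        # sudden decelerations logged by the device
        max_speed_mph: float
        night_driving: bool     # e.g., trip taken between midnight and 4 a.m.

    def risk_score(trips: list[Trip]) -> float:
        """Return a 0-100 score; higher implies a pricier premium."""
        points = 0.0
        total_miles = sum(t.miles for t in trips) or 1.0
        for t in trips:
            points += 5.0 * t.hard_brakes                    # penalize harsh braking
            points += 10.0 if t.max_speed_mph > 80 else 0.0  # penalize speeding
            points += 3.0 if t.night_driving else 0.0        # late-night trips cost more
        return min(100.0, points / total_miles * 10)

    # A safe commuter whose late-night route home (like the barista's) still
    # nudges the score, and therefore the premium, upward.
    trips = [Trip(miles=12, hard_brakes=0, max_speed_mph=62, night_driving=True)] * 20
    print(f"risk score: {risk_score(trips):.1f}")

Even a toy score like this shows how timing and location proxies can creep back in: nothing about the barista’s driving is risky, yet the night-driving flag alone raises her score.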

The next example in Chapter 9 concerns big data in health care. A new employer trend is to establish “wellness programs” (incentivized by the Affordable Care Act), which spring from “good intentions” to offset coverage costs. However, will employers use the data to “sift through job candidates”? Companies such as the Michelin tire company and CVS require employees who do not meet targeted health metrics to pay up to $1,000 more in health premiums. A columnist at Bitch Media accused CVS of “humiliation and fat-shaming.” Using body mass index (BMI) as a target was found to be flawed because bodies come in so many different types, and such programs are said to “mold bodies to the corporation’s ideal,” infringing on freedom. The book states there is “scant evidence that mandatory wellness programs actually make workers healthier.” Even when people lose weight through a wellness program, they tend to gain it back soon afterward, although programs for breaking a cigarette-smoking habit have shown success.
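For reference, BMI is simply weight in kilograms divided by height in meters squared. The tiny sketch below (with purely illustrative numbers) shows why critics call it a crude proxy: two people with very different body compositions can share an identical BMI.

    # BMI = weight (kg) / height (m)^2 -- a ratio that says nothing about
    # body composition. The values below are purely illustrative.
    def bmi(weight_kg: float, height_m: float) -> float:
        return weight_kg / height_m ** 2

    muscular_athlete = bmi(95, 1.85)   # mostly lean mass
    sedentary_person = bmi(95, 1.85)   # same height and weight, different build
    print(round(muscular_athlete, 1), round(sedentary_person, 1))  # both 27.8, "overweight"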

The last point made in Chapter 9 is that “while it is true that people are more likely to suffer from health problems, these tend to come later in life, when they’re off the corporate health plan and on Medicare. In fact, the greatest savings from wellness programs come from the penalties assessed on the workers. In other words, like scheduling algorithms, they provide corporations with yet another tool to raid their employees’ paychecks.” O’Neil also warns that employers are “trying to map our thoughts and friendships, and predict our productivity,” which could lead to a “full-fledged WMD.”

WMD Conclusion

In the conclusion of Weapons of Math Destruction, Cathy O’Neil provides a recap of the major themes of the book and an aspirational goal for the proper use of data and the non-proliferation of WMDs in this context.

According to O’Neil, “being poor in the world of WMDs is getting more and more dangerous and expensive.” For example, poor people with bad credit who live in bad neighborhoods are likely to be served predatory ads for payday loans or for-profit universities. Companies and organizations use data for targeting purposes, creating marketing and advertising silos based on factors such as socioeconomic class. O’Neil contends that this is not something the free market can correct on its own; it requires action and regulation.

One example given: while former Democratic President Bill Clinton signed the Defense of Marriage Act, IBM made the decision to give medical benefits to same-sex partners. O’Neil speculates that IBM made that decision to remain competitive in the growing internet and tech marketplace; fairness was a byproduct of the business decision.

Eliminating WMDs does not always create a financial benefit; some companies’ business models depend entirely on this type of manipulation.

WMDs also target the middle class by preventing job opportunities for qualified candidates or reducing pay for someone who does not meet a model’s health requirements.

Although human decision making is flawed, it can evolve; automated systems cannot. They process the information of the past; they do not invent the future.

O’Neil suggests that, in order to make changes moving forward, data scientists must pledge a Hippocratic Oath of their own.

O’Neil also notes that a regulatory framework for WMDs must account for hidden or non-numerical costs. We must also acknowledge that technology alone cannot do everything. To disarm WMDs, we must measure their impact and conduct audits, which would allow us to track biases in automated systems. Netflix, for example, uses algorithms to optimize the content it features for its subscribers. But audits often face resistance from “web giants” such as Facebook and Google, which wish to keep their information internal. O’Neil argues that these web giants “must be accountable to all of us—which means opening its platform to more data auditors.”
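As one deliberately simplified illustration of what such an audit could look like, the sketch below compares a system’s favorable-outcome rates across two groups using a disparate-impact ratio. The data, group labels, and the 0.8 threshold (borrowed from the common “four-fifths” rule of thumb) are assumptions for illustration, not a method prescribed in the book.

    # A minimal bias-audit sketch: compare how often an automated system
    # grants a favorable outcome to each group. The data here is made up.
    def selection_rate(decisions: list[bool]) -> float:
        return sum(decisions) / len(decisions)

    def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
        """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
        rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
        return min(rate_a, rate_b) / max(rate_a, rate_b)

    # Hypothetical model decisions (True = favorable outcome, e.g., approved).
    group_a = [True] * 70 + [False] * 30   # 70% favorable outcomes
    group_b = [True] * 45 + [False] * 55   # 45% favorable outcomes

    ratio = disparate_impact_ratio(group_a, group_b)
    print(f"disparate impact ratio: {ratio:.2f}")  # 0.64, below the 0.8 rule of thumb
    if ratio < 0.8:
        print("audit flag: outcomes differ substantially between groups")

A real audit would go much further (controlling for legitimate factors, examining error rates, and so on), but even a simple rate comparison makes hidden disparities visible, which is the point O’Neil is making about measurement.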

We need to demand transparency: companies should disclose their input data as well as the results of their targeting, data collection should require the user’s approval (an opt-in approach), and models should be available to the public.

The main task for data scientists is not just to identify problems but to present possible solutions. Some mathematical models can be used for good if they are not abused. Examples include tracking goods to see whether they were produced by forced labor, or predictive models that determine whether a child is likely to be in an abusive home so that proper services can be provided.

Math and data can solve problems, but we need to demand fairness and accountability in order to do good and not cause harm.

Key Take-Aways (for Yellowdig)

Summary #1:

Chapter 9 of WMD reports on how big data can negatively affect an individual seeking insurance, specifically auto insurance, and comments on how organizations use health-related data.

Auto insurance companies use credit reports to determine insurance rates; in other words, “how you manage your money can matter more than how you drive a car.” Allstate also prices policies on whether it predicts a consumer will “shop around” for better pricing; if not, Allstate will charge that individual more for the same policy. This creates a feedback loop in which poorer drivers pay higher premiums.

The industry as a whole charges customers based on a host of proxies rather than actual driving skill. Consumer Reports got involved and launched a campaign directed at the National Association of Insurance Commissioners (NAIC) titled “Price me how I drive, not by who you think I am!” Leading insurers now offer discounts to drivers who agree to share their driving data… although this coincides with the Consumer Reports campaign, drivers can still be “dinged” even when they have valid reasons for driving through “bad” areas, such as streets lined with bars and strip clubs.

Regarding health care, the chapter discusses the new employer trend of establishing wellness programs, which raises concerns about employers “sifting through job candidates.” There is also an interesting discussion of body mass index (BMI) and how it can be misinterpreted.

The last point made in Chapter 9 is that “while it is true that people are more likely to suffer from health problems, these tend to come later in life, when they’re off the corporate health plan and on Medicare. In fact, the greatest savings from wellness programs come from the penalties assessed on the workers. In other words, like scheduling algorithms, they provide corporations with yet another tool to raid their employees’ paychecks.” O’Neil also warns that employers are “trying to map our thoughts and friendships, and predict our productivity,” which could lead to a “full-fledged WMD.”

Summary #2:

In the conclusion of Weapons of Math Destruction, Cathy O’Neil provides a recap of the major themes of the book and an aspirational goal for the proper use of data and the non-proliferation of WMDs in this context.

According to O’Neil, “being poor in the world of WMDs is getting more and more dangerous and expensive.”

Although human decision making is flawed, it can evolve; automated systems cannot. They process the information of the past; they do not invent the future.

O’Neil suggests that, in order to make changes moving forward, data scientists must pledge a Hippocratic Oath.

Math and data can solve problems, but we need to demand fairness and accountability in order to do good and not cause harm.

Discussion Questions

  1. How do you feel about sharing your driving data with auto insurers? The chapter mentions a barista who drives a road lined with bars and strip clubs on her way home from work late at night, to avoid tolls and cut time off her trip…
  2. Do you feel like you are forced to participate in a work wellness program? Chapter 9 suggests these do nothing but “raid your paycheck.”
  3. Does this area need increased regulation, or can the free market correct this abuse?
  4. How much should personal responsibility be a factor when considering future regulation of big data?

References

Bonus Paper (presents a counter-argument to O’Neil, ok to skim to get basic outline of argument):