An insurance biz has retracted boasts of how it uses AI algorithms to study videos of customers for “non-verbal cues” that their claims are fraudulent. The marketing U-turn came after the ethics of this approach were publicly and loudly called into question.
Using machine-learning software to automate decisions on whether to accept or deny customers credit or insurance payouts is particularly sensitive. Last month, America’s consumer watchdog, the FTC, issued a strongly worded statement warning that it is illegal to deploy algorithms that end up discriminating against people on the basis of their race, color, religion, national origin, sex, marital status, or age when making financial decisions.
Alarm bells were set off when Lemonade, a company based in New York, admitted it built software that scanned videos of customers explaining the situations they found themselves in, which were submitted as part of insurance claims, to decide whether those people were essentially lying or committing some other kind of fraud.
Lemonade prides itself on providing an easier and simpler way for people to file pet, home, and life insurance claims. Customers speak to a chatbot, submit their claim, and a decision on how much the company should pay them can be made in a matter of minutes.
“When a user files a claim, they record a video on their phone and explain what happened. Our AI carefully analyzes these videos for signs of fraud. It can pick up non-verbal cues that traditional insurers can’t, since they don’t use a digital claims process,” Lemonade stated in a series of tweets that have since been deleted.
Netizens criticized Lemonade’s technology, accusing it of being potentially biased and reliant on flimsy sentiment and emotion analysis. The backlash on Twitter prompted the company to delete its posts and issue a new statement, in which it claimed it just used facial-recognition algorithms to make sure the same person wasn’t making multiple claims.
“There was a sizable discussion on Twitter around a poorly worded tweet of ours (mostly the term ‘non-verbal cues’) which led to confusion as to how we use customer videos to process claims,” the upstart stated on its website. “There were also questions about whether we use approaches like emotion recognition (we don’t), and whether AI is used to automatically decline claims (never!)”
“We do not use, and we’re not trying to build, AI that uses physical or personal features to deny claims,” it reiterated.
Our systems don’t evaluate claims based on background, gender, appearance, skin tone, disability, or any physical characteristic (nor do we evaluate any of these by proxy) (3/4)
— Lemonade (@Lemonade_Inc) May 26, 2021
And in a filing to America’s financial regulator, the SEC, Lemonade said its system collects roughly 1,700 “data points” from customers.
“We use technology and artificial intelligence to reduce hassle, time, and cost associated with purchasing insurance and the claims submission and fulfillment process. We built our entire company on a unified, proprietary, state-of-the-art technology platform. Our customers are able to purchase insurance on our website or through our app, generally in a matter of minutes. Our artificial intelligence system handles substantially all of our customer onboarding and a meaningful portion of our claims,” it said in the filing.
What those data points describe is unclear. The biz did, however, admit its own technology could have unintended consequences, such as customers being paid too much or too little, or biased and discriminatory decisions being made. On the one hand, this is a boilerplate warning to investors and the financial markets that the biz could go belly up, and thus investments could be lost; on the other hand, it is pretty specific about how things could go wrong.
“Our proprietary artificial intelligence algorithms may not operate properly or as we expect them to, which could cause us to write policies we should not write, price those policies inappropriately or overpay claims that are made by our customers. Moreover, our proprietary artificial intelligence algorithms may lead to unintentional bias and discrimination.”
The company was launched in 2016, and operates across the US and parts of Europe, including France, Germany, and the Netherlands. It has yet to turn a profit, and spends most of its money on sales and marketing.
“Our future success depends on our ability to continue to develop and implement our proprietary artificial intelligence algorithms, and to maintain the confidentiality of this technology,” it said.
The Register has asked Lemonade for further comment. ®