Machine learning and AI have enormous potential for insurance, notably in risk selection, pricing, and marketing. But insurers need to proceed thoughtfully: the pace of advancement in AI technology is outstripping legislation, creating a minefield of reputational risk.
Companies like Facebook have paid a price for thinking about data privacy only after serious breaches and damaging press. This offers a lesson to the insurance industry, which cannot afford to adopt the "move fast and break things" mentality.
Before the ACA (Affordable Care Act), data brokers bought data from pharmacies and sold it to insurance companies, which would then deny coverage based on prescription histories. Future uses of data in insurance will not be so straightforward. As machine learning works its way into more and more decisions about who gets coverage and what it costs, discrimination becomes harder to spot.

AI research should march on. But when it comes to insurance in particular, there are unanswered questions about which kinds of bias are acceptable. Discrimination based on genetics has already been deemed repugnant, even if it is actuarially rational. Poverty might likewise be a rational indicator of risk, but should society allow companies to penalize the poor?