Who is liable if a self-driving truck runs over a pedestrian? What if a black box loan underwriting model rejects an applicant as a result of their ethnicity and the applicant sues?
More and more business processes are being handled by algorithms rather than humans. Traditional commercial insurance policies cover an organisation in cases of employee negligence, but where does that leave an organisation whose AI model is at fault?
A recent Harvard Business Review article suggests large organisations should be purchasing cover to protect them against the risk of deploying a machine learning model that causes damage.
There are two main ways machine learning models can create liability issues for a company. First, an algorithm can be compromised: a hacker could reverse-engineer the data it was trained on. This is equivalent to a data breach and may be covered by existing cyber policies. Second, a faulty model could be put into production and create adverse "real world" consequences. For example, a faulty software patch to a commercially deployed robot could lead to significant physical damage. This second scenario is not contemplated by many existing commercial insurance policies.
Policyholders and insurance carriers need to adapt as technological liability starts to enter the real world.
Given that AI adoption has tripled in the last three years, insurance providers see this as the next big market. Two major insurers have also pointed out that standards bodies such as ISO and NIST are formulating trustworthy AI frameworks. Moreover, countries are drawing up national AI strategies that so far emphasise the safety, security, and privacy of ML systems, with the EU leading the effort. All of this activity could translate into regulation in the future.