Who is liable if a self-driving truck runs over a pedestrian? What if a black-box loan underwriting model rejects an applicant because of their ethnicity and the applicant sues?

More and more business processes are being handled by algorithms, rather than humans. Traditional commercial insurance policies cover an organisation in cases of employee negligence, but where does this leave a faulty AI model?

A recent Harvard Business Review article suggests large organisations should be purchasing cover to protect them against the risk of deploying a machine learning model that causes damage.

There are two ways machine learning models can create liability issues for a company. Firstly, an algorithm can be compromised: a hacker could reverse-engineer the data it was trained on. This would be equivalent to a data breach and may be covered by existing cyber policies. Secondly, a faulty model could be put into production and create adverse "real world" consequences. For example, a faulty software patch to a commercially deployed robot could lead to significant physical damage. This scenario is not contemplated by many existing commercial insurance policies.

Policyholders and insurance carriers need to adapt as technological liability starts to enter the real world.