There has been significant public discussion about the potential negative consequences of machine learning and artificial intelligence. 

Many worry that the next wave of technological development could lead to high unemployment as clerical jobs are automated away. Meanwhile, early machine learning deployments have seen some algorithms replicate the cultural biases baked into their training data and into the assumptions of the people who built them.

By and large, these concerns have focused on the unintended consequences of well-intentioned attempts to automate processes with AI.

There has been far less discussion, however, of how machine learning systems can be deliberately manipulated by attackers, a threat studied under the name "adversarial machine learning."

At the EmTech Digital conference in San Francisco, UC Berkeley professor Dawn Song outlined some of the new avenues for fraud and other malicious behavior that machine learning could open up.

In one example, researchers trained a language model on a corpus of emails and then coaxed it into spitting out credit card numbers and other sensitive data it had memorized from its training set. In another, researchers placed innocuous-looking stickers on a stop sign to trick a car's computer vision system into misreading it as a 45 mph speed limit sign.
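To make the second attack concrete: most adversarial examples are built by nudging an input in the direction that increases a model's loss. The sketch below shows the fast gradient sign method (FGSM), one of the simplest such techniques; the stop-sign sticker attack relied on a more elaborate physical optimization, but the core idea of perturbing inputs along the loss gradient is similar. Everything here is a hypothetical stand-in: the toy model, the random "image," and the `epsilon` budget are illustrative, not the researchers' actual setup.

```python
# Minimal FGSM sketch (illustrative only -- not the actual stop-sign attack).
import torch
import torch.nn as nn

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return a copy of `x` perturbed to increase the model's loss on `label`."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Move each pixel by +/- epsilon in the direction that raises the loss,
    # then clamp back to the valid image range. To a human the change is
    # nearly invisible; to the model it can flip the predicted class.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage with a toy classifier and a random stand-in "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)   # placeholder for a photo of a road sign
true_label = torch.tensor([0])     # placeholder for the "stop sign" class
adversarial = fgsm_perturb(model, image, true_label)
```

Larger `epsilon` values make an attack like this more reliable but also more visible, which is one reason physical attacks confine the perturbation to sticker-shaped patches that a human might dismiss as graffiti.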

Well-intentioned robots are more vulnerable than we might have imagined.  Demand for white-hat hackers is only likely to grow.