
Artificial Intelligence in Law - but can it learn what is right?

Posted by: Laurence Simons 16/05/16
Following on from Ebony Ezekwesili's article 'Are Lawyers in Danger? The Impact of technology on law as we know it', there is perpetual chatter in legal circles about how Artificial Intelligence could one day integrate itself into the profession. And while Microsoft's Tay Bot experiment might support the argument that automated lawyers are still decades away, it raises a number of ethical considerations.

Tay, an AI programme designed by Microsoft, was supposed to learn the art of conversation from humans on Twitter. However, within a matter of hours she had morphed into a fascist, misogynistic and racist entity, creating memes of Hitler, claiming that 9/11 was a conspiracy and expressing support for Donald Trump. Much like when IBM's Watson began to swear after memorising Urban Dictionary, the satirical crowdsourced thesaurus, Tay built everything it learnt into its algorithm. Neither programme was provided with moral guidance, and therein lies the issue.

Perhaps the biggest ethical concern for the legal profession is how to govern an algorithm that is not only continually learning but also makes decisions based on what it has learnt. In other words – how can we teach an AI to make the right decisions? A significant amount of AI research is devoted to governance and value alignment, the latter of which can be influenced by a system called Quixote. Quixote reads stories with upstanding moral protagonists and is rewarded when it acts like the protagonist rather than the antagonist, much as a child learns right from wrong.
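To make the idea concrete, here is a toy sketch of that reward-driven approach – not Quixote itself, whose actual design is far more sophisticated, but a minimal illustration of the principle. The action names and reward values are invented for this example: the agent is nudged towards "protagonist-like" behaviour simply because that behaviour earns a positive reward.

```python
import random

# Invented example actions: the "protagonist" behaviour is the one
# modelled in the training stories.
ACTIONS = ["return_lost_wallet", "keep_lost_wallet"]
PROTAGONIST_ACTIONS = {"return_lost_wallet"}


def reward(action):
    """+1 for acting like the protagonist, -1 for acting like the antagonist."""
    return 1.0 if action in PROTAGONIST_ACTIONS else -1.0


def train(episodes=500, lr=0.1, epsilon=0.2, seed=0):
    random.seed(seed)
    values = {a: 0.0 for a in ACTIONS}  # learned preference per action
    for _ in range(episodes):
        # Epsilon-greedy: mostly pick the currently preferred action,
        # occasionally explore a random one.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(values, key=values.get)
        # Nudge the estimate for that action towards the observed reward.
        values[action] += lr * (reward(action) - values[action])
    return values


values = train()
best_action = max(values, key=values.get)
```

After training, `best_action` is the protagonist-like choice – the agent "prefers" returning the wallet only because the reward signal was designed that way, which is precisely why specifying the right rewards (the value-alignment problem) matters so much.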

Moral decision-making is inherent throughout the legal profession, and while we may be able to teach AI programmes to make an ethical decision, until we can guarantee they will continue to make the right one there is little reason to worry about them taking over our roles just yet. AI will undoubtedly become highly useful within the profession; in fact, there are already AI programmes available which can help to ascertain risk levels and streamline administration. But until this kind of sophisticated AI is properly governed, it's highly unlikely that any lawyers or judges will turn up to work to find themselves replaced by a robot.
Tagged In: Digital, Security/Data