By Steven Moe (Partner) and Annemarie Mora (Solicitor) at Parry Field Lawyers

Artificial Intelligence (AI) is already influencing your legal work, even if you don't realise it. Most legal software providers have incorporated some aspects of generative AI into their ecosystems, including reviewing and summarising documents, reviewing briefs and outlining potential weaknesses, simplifying submissions, and filling out forms based on client conversations.

With AI an undisputed inevitability in our daily and professional lives, what should we be paying attention to in the near future when it comes to our work as lawyers, both positive and negative?

The positive: AI opportunities are increasing

AI promoters tend to make appealing promises about the benefits of AI for efficiency and profitability. One study by Harvard and the Boston Consulting Group found that professionals accomplished 12% more tasks in 25% less time, with a 40% improvement in quality, compared to co-workers who didn't use AI.

The implication is that this may impact legal fee structures and billable hours, with some practices considering fixed-fee models to capture the value offered to clients. However, AI tools come with a cost to implement, maintain and train, and this will partly offset the value they provide.

The negative: AI risks are increasing

AI, like all technology, is still learning. Generative AI, for example, uses large language models 'trained' on trillions of pieces of content. The nature of that content will affect the output and can lead to bias. Even more concerning is that AI will 'hallucinate', essentially making up content, as occurred when OpenAI's ChatGPT was asked to find precedent cases and ended up fabricating some of them. There is still no substitute for human confirmation.

Another potential pitfall is accidentally compromising client confidentiality by loading confidential information into public AI systems, which can make it accessible to others. Samsung engineers compromised their IP by uploading chip design material to ChatGPT, potentially exposing it to competitors. Anything confidential needs to remain in a private AI domain. Have robust systems and policies in place before choosing to use an AI tool.

It pays to approach the use of AI with eyes wide open. As AI advances and offers up more appealing options for efficiency and creativity, the same technology may be used for malevolent purposes. Phishing emails are now more convincing, with the ability to clone voices and use deepfake content (imagery that appears to represent someone doing something but is not bona fide) to masquerade as real people. The upshot is that as the capability of AI improves, so does human vulnerability to being persuaded by it. Cybersecurity training and approaches need to keep pace with these advances.

The importance of cognitive skills

The malevolent application of AI is now being compounded by human tendencies. Taking deepfakes as an example, the videos produced by the earliest GANs (generative adversarial networks) were often recognisable as fakes due to technological glitches. Early examples include 'Jacinda Ardern smoking a bong', 'President Zelenskyy asking his troops to surrender', and 'President Obama' being inappropriately scornful of President Trump. However, the outputs of the most current GANs have been perceived as 'more real' than real images, and anyone can now create convincing deepfakes with great ease.

Research has also found that people tend to overestimate their ability to detect deepfakes; they are more likely to mistake deepfakes for authentic footage than the opposite. Studies also suggest that people tend to suspend cognitive effort when consuming video content, relying instead on associative responses and intuition, which may not be sufficient to detect deepfakes. This means that when threat actors use AI to enhance their attacks, deliberate cognitive effort is important in detecting the threats.

The upshot

AI is becoming more capable, more pervasive, more persuasive and less detectable. It takes no account of personal, professional, political or geographic boundaries, and the law is finding it challenging to keep pace with AI's disruption. Given that AI is an undisputed inevitability in our personal and professional lives, we need to adapt to its use. This means taking reasonable steps to ensure people have the systems, processes and cognitive skills in place to take advantage of the benefits while managing the risks.