Risk management for legal practices

While artificial intelligence (AI) will not replace lawyers, legal practices that effectively integrate and manage AI are likely to outperform those that don’t. Understanding both the capabilities and limitations of AI is essential to using it successfully and responsibly within a legal working environment.

To help you consider what changes and safeguards may be needed so your legal practice can successfully (and responsibly) use AI tools, we have compiled a series of fact sheets and practical resources focusing on the practice management side of using AI.

Law practices are encouraged to develop clear policies on AI use and to provide staff with appropriate training. Policies should address:

  • Authorisation and access – who can use AI tools and under what circumstances
  • Identification of AI-generated materials – ensuring transparency and accuracy
  • Client consent – obtaining consent before using AI to record, transcribe, summarise, or draft client-related documents
  • Verification and oversight – requiring senior legal practitioners to review and approve outputs generated using AI

By approaching AI adoption thoughtfully, law practices can harness its benefits while maintaining professional and ethical standards.

Questions for law practices to consider 

When considering whether to use, or to continue using, any AI tool, practitioners should always exercise their own judgement. This may include seeking independent advice to ensure compliance with the Conduct Rules and any other regulatory standards relevant to the decision.

The Law Society’s Ethics Committee has prepared a list of questions that law firms should consider:

Policy and procedures

1. What is the firm’s AI policy?

Are there documented policies within the practice in relation to the use of AI?

Policies ensure alignment within your practice, which promotes accountability and supports innovation within a structured framework. Without policies, neither you nor your staff have any way of knowing what steps should be taken. At a minimum, policies should outline the enquiries to be made and the procedures to be followed concerning the use of AI.

It is almost certain that you will be unable to give a satisfactory explanation to the client, the Court and/or the regulator concerning your use of AI unless adequate policies are in place. Robust policies will assist you. As well as outlining pre-adoption considerations, policies should deal with post-adoption issues: is there a process for monitoring, reviewing and improving the use of AI tools?

2. Can AI be used for this purpose? 

Do the firm’s policies address whether generative AI tools can be used in the practice, what functions they can perform, what approval is required for their use on any particular file, and what supervision protocols apply?

If they don’t, they will not be adequate.

3. What is the firm’s policy about disclosure of AI use to clients?  

What is my firm’s policy about explaining the use of LLMs and generative AI to clients and the implications of their use, both in terms of any risks and the effect on the fees which are likely to be incurred?

The firm’s AI policy needs to address these issues. Transparency is the key.

Training and knowing the limitations

4. What training programs are available? 

Do I have a training program on the use of AI in the practice, and am I aware of my AI tool’s shortcomings?

Policies without training will not be sufficient. The training needs to teach everyone what they need to know in order to comply with the policies.

5. What is the AI program’s tendency for bias?

Do I know whether, and if so how, the large language model (LLM) identifies and mitigates bias in its training data and, if not, how am I going to find out?

LLMs are known to exhibit bias where that bias exists in their training data. It is important that practitioners can show that they have taken adequate steps to satisfy themselves that their preferred LLM or generative AI has sufficiently addressed this issue.

6. What steps should be taken to verify the results of AI use? 

Am I intending to use generative AI for legal advice and, if so, what steps do I propose to take to verify the accuracy and reliability of any such advice?

There is an increasing number of cases in which practitioners have relied on an LLM for legal research and authorities, only for the LLM to “hallucinate” and produce fictitious information. This has resulted in disciplinary proceedings, professional embarrassment and, almost certainly, professional negligence claims. AI is not a substitute for practitioners undertaking the work themselves. At most, it is an aid: the resulting output should be checked and verified by the practitioner, who must “own” the work.

Confidentiality and standards

7. Will the use of AI potentially breach any professional standards?

Am I satisfied that the use of AI in any particular case will not involve any compromise of professional standards?

If the answer is no, then you must not use AI in that instance.

8. Will my prompts be shared?

What information am I going to provide to the AI program I intend to use? Is that information privileged and/or confidential, and have I assessed whether providing it may lead to a breach of that privilege and/or confidentiality?

It is not sufficient to assume, without making appropriate enquiries, that the privilege or confidentiality in any information you provide to the AI program will be preserved. Proper enquiries must be made, and you should proceed only when you have good grounds to be satisfied that the information will be secure and its confidentiality and privilege preserved.

Useful resources

Guidance from legal regulators

Use of AI by lawyers

Use of AI by others in legal proceedings  

Commonwealth Government resources for the adoption of AI by organisations generally

Contact details

  • Law Society of Western Australia Reception: (08) 9324 8600
  • Law Mutual: (08) 9481 3111
  • Continuing Professional Development: (08) 9324 8640
  • Membership Services: (08) 9324 8692
  • Professional Standards Scheme: (08) 9324 8653
  • Old Court House Law Museum: (08) 9324 8688
  • Francis Burt Law Education Programme: (08) 9324 8686
  • Media Enquiries: (08) 9324 8650