Risk management for legal practices
While artificial intelligence (AI) will not replace lawyers, legal practices that effectively integrate and manage AI are likely to outperform those that don’t. Understanding both the capabilities and limitations of AI is essential to using it successfully and responsibly within a legal working environment.
To help you consider what changes and safeguards may be needed so your legal practice can successfully (and responsibly) use AI tools, we have compiled a series of fact sheets and practical resources focusing on the practice management side of using AI.
Law practices are encouraged to develop clear policies on AI use and to provide staff with appropriate training. Policies should address:
- Authorisation and access – who can use AI tools and under what circumstances
- Identification of AI-generated materials – ensuring transparency and accuracy
- Client consent – obtaining consent before using AI to record, transcribe, summarise, or draft client-related documents
- Verification and oversight – requiring senior legal practitioners to review and approve outputs generated using AI
By approaching AI adoption thoughtfully, law practices can harness its benefits while maintaining professional and ethical standards.
Questions for law practices to consider
When considering whether to use, or to continue using, any AI tool, practitioners should always exercise their own judgement. This may include seeking their own advice in order to maintain compliance with the Conduct Rules and/or other regulatory standards relevant to such a decision.
The Law Society’s Ethics Committee has prepared a list of questions that law firms should consider:
Policy and procedures
1. What is the firm’s AI policy?
Are there documented policies within the practice in relation to the use of AI?
Policies ensure alignment within your practice, which promotes accountability and supports innovation within a structured framework. Without policies, neither you nor your staff have any way of knowing the steps that should be taken. At a minimum, policies should outline the enquiries to be made and the procedures to be followed concerning the use of AI. Unless you have adequate policies in place, it is almost certain you will be unable to provide a satisfactory explanation of your use of AI to the client, the court and/or the regulator. As well as outlining pre-adoption considerations, policies should deal with post-adoption issues: is there a process for monitoring, reviewing and improving the use of AI tools?
2. Can AI be used for this purpose?
Do the firm’s policies cover whether generative AI tools can be used in the practice, what functions they can perform, what approval is required for their use on any particular file, and what supervision protocols apply?
If they do not, they are not adequate.
3. What is the firm’s policy about disclosure of AI use to clients?
What is my firm’s policy about explaining to clients the use of LLMs and generative AI, and the implications of that use, both in terms of any risks and the effect on the fees likely to be incurred?
The firm’s AI policy needs to address these issues. Transparency is the key.
Training and knowing the limitations
4. What training programs are available?
Do I have a training program on the use of AI in the practice, and am I aware of my AI tool’s shortcomings?
Policies without training will not be sufficient. The training needs to teach everyone what they need to know in order to comply with the policies.
5. What is the AI program’s tendency for bias?
Do I know whether, and if so how, the large language model (LLM) identifies and mitigates bias in its training data and, if not, how am I going to find out?
LLMs are known to exhibit bias where that bias exists in their training data. It is important that practitioners can show that they have taken adequate steps to satisfy themselves that their preferred LLM or generative AI has sufficiently addressed this issue.
6. What steps should be taken to verify the results of AI use?
Am I intending to use generative AI for legal advice and, if so, what steps do I propose to take to verify the accuracy and reliability of any such advice?
There is an increasing number of cases in which practitioners have relied on an LLM for legal research and authorities, only for it to produce fictitious information as a result of “hallucination”. This has resulted in disciplinary proceedings, professional embarrassment and, almost certainly, professional negligence claims. The use of AI is not a substitute for practitioners undertaking the work themselves. At most, it is an aid, and the resulting output must be checked and verified by the practitioner, who must “own” the work.
Confidentiality and standards
7. Will the use of AI potentially breach any professional standards?
Am I satisfied that the use of AI in any particular case will not involve any compromise of professional standards?
If the answer is no, then you must not use AI in that instance.
8. Will my prompts be shared?
What information am I going to provide to the AI program I intend to use? Is that information privileged and/or confidential, and have I assessed whether providing it carries any risk of breaching that privilege and/or confidentiality?
It is not sufficient to assume, without making appropriate enquiries, that the privilege or confidentiality in any information you provide to an AI program will be preserved. Proper enquiries must be made, and you should proceed only when you have good grounds to be satisfied that the information will be secure and its confidentiality and privilege preserved.
Useful resources
Guidance from legal regulators
- Legal Practice Board of Western Australia, NSW Law Society and Victorian Legal Services Board and Commissioner – Joint statement on the use of artificial intelligence in Australian legal practice
- Victorian Legal Services Board and Commissioner – Risk outlook: improper use of AI
- Legal Practitioners’ Liability Committee (Victoria) – Managing the risks of AI in law practices
Use of AI by lawyers
- Law Society’s 2025 Use of AI Survey analysis
- LIV – How lawyers can thrive in a world of uncertainty
- Law Society of NSW – Selecting the right GenAI tool for your legal practice
- Legal Practitioners’ Liability Committee (Victoria) – Limitations and risks of using AI in legal practice
- Queensland Law Society – AI Policy template
- Brief article – Is your business using AI? What you need to know about the proposed changes to privacy laws in Australia
Use of AI by others in legal proceedings
- University of New South Wales – More people are using AI in court, not a lawyer. It could cost you money and your case (explaining the risks of AI to self-represented litigants or clients)
- The Conversation (by University of New South Wales Professor of Law) – AI is creating fake legal cases and making its way into real courtrooms with disastrous results
Commonwealth Government resources for the adoption of AI by organisations generally
- Guidance for AI adoption – Foundation (for organisations new to AI use)
- Guidance for AI adoption – Implementation Practices (detailed guidance)
- AI screening tool
- AI policy guide and template (editable Word version)
- AI register template (editable Excel version)