Using AI: your questions answered
The use of AI in legal practice is constantly evolving. These questions and answers are provided by way of information and for educational purposes only. They should not be relied on as a substitute for legal advice. If you have any feedback about our AI Hub or have found a resource or recent case of interest, we would love to hear from you. Please email us.
What does the term AI refer to?
AI is often used as an umbrella term to cover a wide range of technologies.
Broadly speaking, AI is defined in the AI Glossary (a plain English guide to key AI terms for the Australian legal profession) as the programming of machines to behave in ways that mimic human intelligence and human capabilities, including reasoning, decision-making, interacting, and perceiving.
AI can take many forms, such as Large Language Models (LLMs), Machine Learning Models, and Deep Learning Models. Many people use the term AI to describe generative AI chatbots, which are computer programs that simulate online human conversation using generative AI.
What is generative AI?
Generative AI is a form of artificial intelligence that generates text (or images, video or sound) based on prompts from the user. Generative AI chatbots, such as ChatGPT, Claude, Google Gemini and Microsoft Copilot, are AI tools that have been trained to respond in a conversational, online chat style. You can enter prompts (questions) to get the chatbots to do things like generate or summarise text or answer questions.
You can also enter more prompts to refine the response.
If you are considering using generative AI to prepare court or tribunal documents, it is important for you to understand how the specific AI tool works and how its use may cause issues in your case.
How do gen AI chatbots work?
Before using generative AI chatbots (or any other AI tools), it’s important to understand what they can (and can’t) do.
Generative AI chatbots may mimic conversation, but they do not think or reason the way humans do. They lack human intelligence and operate without genuine reasoning or understanding.
Generative AI chatbots are tools that are powered by Large Language Models (LLMs), which analyse large amounts of text to predict the most likely next word in a sentence based on context. In simple terms, they repeatedly autocomplete to generate paragraphs of text that sound natural and coherent.
Because LLMs are trained to produce responses that seem most plausible or human-like, their answers reflect statistical likelihood, not factual accuracy. Generative AI tools have no understanding of meaning or truth and accordingly cannot reliably interpret complex or nuanced questions. Users need to apply their own professional judgment and verify any information produced before relying on any generative AI outputs.
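For readers who want a more concrete sense of how this "repeated autocomplete" works, the short sketch below is an illustration only: the word probabilities are invented for the example, whereas a real LLM uses a neural network trained on billions of words. It simply builds a sentence word by word according to statistical likelihood, with no regard to whether the result is true.

```python
# Illustrative toy example only: the probabilities below are invented.
# A real LLM learns probabilities over an enormous vocabulary from its training data.
import random

next_word_probs = {
    "the":     [("court", 0.5), ("solicitor", 0.3), ("evidence", 0.2)],
    "court":   [("held", 0.6), ("ordered", 0.4)],
    "ordered": [("that", 1.0)],
    "held":    [("that", 1.0)],
    "that":    [("the", 1.0)],
}

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options)
        # The next word is chosen by statistical likelihood, not by truth or meaning.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the court held that the court ordered that ..."
```

The output always sounds plausible because every word is a likely continuation of the previous one, which is precisely why fluent-sounding text is no guarantee of accuracy.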
Queensland Courts have prepared a plain English AI Guide for non-lawyers. This guide is a handy downloadable resource that can be used to explain these concepts to clients.
Where do gen AI chatbots find the information for their answers?
Generative AI chatbots create answers based on the information they have been trained on. Most public chatbots are trained using text from across the internet — including websites, online books and social media — rather than from verified or authoritative legal sources. Some also rely on data that may be outdated. As a result, the Australian legal information they provide may not be accurate or reliable. Because AI chatbots cannot distinguish between fact and opinion in their training material, their responses may include incorrect, biased or misleading statements presented as fact.
The use of generative AI chatbots, including those built into legal practice management systems, carries the risk of producing hallucinations. There are many examples of lawyers using generative AI to prepare lists of cases that contained fake or fictitious case names, with adverse consequences for the lawyers concerned, including personal costs orders on an indemnity basis and referrals to legal regulatory bodies for investigation into improper conduct.
Lawyers also need to be careful not to input confidential or personal client information into a generative AI chatbot, as doing so may breach confidentiality obligations and may waive legal professional privilege.
What is machine learning?
Machine learning is a process which involves programming a computer using a large set of relevant data (training data). Instead of being given explicit instructions, the system is provided with a goal and learns how to achieve it through trial and error. It builds a model that attempts to solve the problem and continually checks its outputs against the expected outcomes in the data (model training).
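Purely as an illustration (the data and the "model", a straight line, are invented for the example), the sketch below shows that trial-and-error loop: the model repeatedly compares its outputs against the expected outcomes in the training data and adjusts itself to reduce the error.

```python
# Illustrative toy example only: "learning" a simple rule (y = 2x + 1) from examples.
training_data = [(1, 3), (2, 5), (3, 7), (4, 9)]  # inputs paired with expected outputs

weight, bias = 0.0, 0.0   # the model starts knowing nothing
learning_rate = 0.01

for _ in range(5000):                         # repeated trial and error
    for x, expected in training_data:
        prediction = weight * x + bias        # the model's attempt
        error = prediction - expected         # check output against expected outcome
        weight -= learning_rate * error * x   # nudge the model to reduce the error
        bias -= learning_rate * error

print(round(weight, 2), round(bias, 2))  # approaches 2.0 and 1.0, i.e. y = 2x + 1
```

No explicit instructions are given; the rule emerges from repeatedly checking outputs against the training data, which is why the quality of that data matters so much.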
What is deep learning?
Deep learning is a type of artificial intelligence that teaches computers to learn by example. It uses layers of algorithms called “neural networks” to recognise patterns in data such as images, text, or speech. Over time, the system improves its accuracy by learning from large amounts of information, making it useful for tasks like image recognition, language translation, and voice assistants.
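Again purely as an illustration (the "weights" below are invented, whereas in a real system they are learnt from large amounts of example data), the sketch below shows the idea of layers: the input is passed through successive transformations, each building on the patterns detected by the previous one.

```python
# Illustrative toy example only: a two-layer "neural network" forward pass.
import math

def layer(inputs, weights):
    # Each output is a weighted mix of the inputs, squashed into the range 0..1.
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

pixels = [0.9, 0.1, 0.4]                      # a (tiny) "image" of three values
hidden = layer(pixels, [[0.2, -0.5, 0.8],     # first layer picks out simple patterns
                        [0.7, 0.1, -0.3]])
output = layer(hidden, [[1.5, -2.0]])         # second layer combines those patterns
print(output)                                 # e.g. a score for "contains a signature"
```

Stacking many such layers, and learning the weights from large datasets, is what allows deep learning systems to recognise images, translate language and power voice assistants.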
What is an LLM?
An LLM is a large language model which, through sophisticated pattern recognition and probabilistic calculations, learns to predict the next best word or part of a word in a sentence.
This type of AI tool communicates with humans using natural language, rather than relying on traditional computer code or commands. While it excels at simulating conversation and interaction, its ability to conduct research or perform calculations can be inconsistent.
Examples of large language models in common usage include GPT-4 and ChatGPT (OpenAI), Claude (Anthropic), Gemini (formerly Bard, by Google) and LLaMA (Large Language Model Meta AI, by Meta (Facebook)).
What are some commonly used examples of AI tools or programs?
Perhaps the most well-known AI tool is ChatGPT, OpenAI’s generative AI chatbot. Launched in late 2022, it catapulted the use of AI into mainstream society through its ease of access (via a website) and its use of conversational language to drive output. ChatGPT stands for “Chat Generative Pre-Trained Transformer”. Other examples of generative AI chatbots include Microsoft Copilot, Google Gemini, Harvey AI and Claude (Anthropic’s generative AI chatbot).
AI is also now built into subscription-based legal research tools (e.g. Lexis AI and Westlaw) and other litigation technologies, including e-discovery platforms (e.g. Bundledocs or eDiscovery).
Further examples of gen AI tools used in Australian legal practice are discussed here.
How is AI currently being used in legal practice?
There are many AI tools and many ways in which AI is currently being used in legal practices. AI is now built into most legal practice management systems (closed practice systems), where it can be used to:
- Prepare chronologies
- Compare documents (to note inconsistencies)
- Draft documents and emails
- Summarise lengthy texts
- Assist with legal research
- Search through documents
In 2025, the Law Society conducted a survey to find out how practitioners across the Western Australian legal profession were using AI.
128 anonymous responses were received from a variety of practitioners across private practice, government and community legal centres. 54% of respondents reported using AI platforms in their current legal practice.
Of those who reported using AI, 47% reported using it for legal research, 30% for practice management or administrative tasks, 27% for legal document generation (e.g. memos, letters, precedents, advice), 19% for discovery/document processing and 17% for drafting non-contentious legal instruments (e.g. agreements, wills, leases).
The specific AI tools used included Microsoft Copilot or Harvey AI (51%), other subscription-based legal research tools (e.g. Lexis AI) (40%), publicly available large language models (e.g. ChatGPT) (32%) and e-discovery platforms (e.g. Bundledocs or eDiscovery) (13%).
10% reported using in-house developed AI programs and 12% did not know what AI tools were being used in their legal practice.
A detailed analysis of the survey’s results is available here.
What are the risks of relying on AI tools for legal research?
While AI is increasingly being used as a research tool in litigation, the Courts have in recent times stressed that it is not an appropriate substitute for legal research by the legal practitioner.
Specifically, in the 2025 decision of JNE24 v Minister for Immigration and Citizenship [2025] FedCFamC2G 1314, Judge Gerrard stated that the use of AI for legal research “…comes with considerable risks which, if not mitigated, have the capacity to lead to actions which could be construed as a contempt of court”.
In that decision, the legal practitioner in question used AI to prepare a list of cases that included false authorities, and the practitioner was accordingly referred to the relevant legal regulatory body for investigation.
His Honour noted at para 24 of the decision that there is a concerning number of reported matters in which reliance upon AI has directly led to the citation of fictitious cases in support of a legal principle. His Honour summarised those risks as follows:
“First, if discovered, there is the potential for a good case to be undermined by rank incompetence. Second, if undiscovered, there is the potential that the Court may be embarrassed and the administration of justice risks being compromised. Relatedly, the repetition of such cases in reported cases in turn feeds the cycle, and the possibility of a tranche of cases relying upon a falsehood ensues. Further, the prevalence of this practice significantly wastes the time and resources of opposing parties and the Court. Finally, there is damage to the reputation of the profession when the clients of practitioners can genuinely feel aggrieved that they have paid for professional legal representation but received only the benefit of an amateurish and perfunctory online search.”
Other examples of legal practitioners facing adverse consequences from their use of AI for legal research include the family law matter of Handa & Mallick [2024] FedCFamC2F 957, in which a Melbourne practitioner tendered to the Court a list of authorities, prepared using LEAP (a legal software package), that allegedly contained false authorities which could not be located and were not provided by the practitioner to the Court when requested.
Further examples include the decisions in Valu (No. 2) [2025] FedCFamC2G 95, Dayal (2024) 386 FLR 359 and Wamba Wemba Native Title Claim Group v State of Victoria [2025] FCA 731 (Wamba Wemba).
What are the risks of relying on gen AI tools to prepare legal advice or court documents?
If you intend to use generative AI to assist in providing legal advice, you must take steps to verify the accuracy and reliability of any output and ensure that your use of AI does not breach ethical or professional obligations.
When using open platforms such as ChatGPT, only de-identified client information should be entered. This helps minimise the risk of breaching confidentiality or disclosing personal information.
To assess the reliability of AI-generated advice, lawyers must understand what data or sources the AI system has drawn upon to produce its response. There have been several instances of practitioners relying on AI-generated case law or legal authorities that later proved to be fictitious “hallucinations”, leading to disciplinary action, reputational damage, and potential professional negligence claims.
AI tools should never be treated as a substitute for a lawyer’s own professional judgment or legal research. At best, they can serve as an aid to legal work, but the practitioner remains responsible for checking, verifying and ultimately “owning” the work.
If generative AI is used within a closed or secure system in a legal practice, any AI-generated content should be treated as a draft, to be reviewed and finalised by the supervising lawyer or other appropriately qualified professional.
Is it safe to use AI tools that are embedded in legal practice management software?
Not necessarily.
In July 2024, the Federal Circuit and Family Court of Australia published its reasons for judgment in the case of Handa & Mallick [2024] FedCFamC2F 957.
In that case, a family lawyer in Melbourne was referred to the Victorian Legal Services Board and Commissioner for investigation after he tendered to the Court a list of authorities, prepared using LEAP (a legal software package), none of which could be located.
Practitioners should find out from their legal software providers how AI is embedded into their functionality, and what settings may be relevant, so they can consider the risks involved and what steps need to be taken to ensure the use of AI does not put the solicitor at risk of breaching their ethical and professional obligations.
All content produced should be independently verified by the supervising lawyer to confirm that it meets the lawyer’s ethical standards and professional responsibilities.
What are the ethical obligations of lawyers arising from the use of AI?
There are several core duties which are enlivened through the use of AI, including competence, care and skill, confidentiality and privilege, and the duty to supervise – specifically, Rules 4, 5, 9, 17, 19 and 37 of the Australian Solicitors’ Conduct Rules (ASCR).
Practitioners are encouraged to read the ASCR.
In short, those rules provide that:
- a solicitor must deliver legal services competently, diligently and as promptly as reasonably possible (ASCR r4.1.3) and avoid any compromise to their integrity and professional independence (ASCR r4.1.4)
- a solicitor must not engage in conduct, in the course of legal practice or otherwise, which demonstrates they are not a fit and proper person to practise law, or which may be prejudicial to, or diminish public confidence in, the administration of justice or bring the profession into disrepute (ASCR r5)
- a solicitor has a duty to maintain confidentiality (ASCR r9)
- a solicitor has a duty to maintain independence and exercise the forensic judgements called for during the case (ASCR r17)
- a solicitor owes specific duties to the court (ASCR r19)
- a solicitor with designated responsibility for a matter must exercise reasonable supervision over solicitors and other employees engaged in the provision of the legal services for that matter (ASCR r37)
The Legal Practice Board of Western Australia, the Victorian Legal Services Board and Commissioner and the Law Society of New South Wales have produced a joint statement on the use of AI in Australian legal practice that discusses the specific ASCR rules enlivened by the use of AI.
There are also useful guides issued by the Queensland Law Society and Law Institute of Victoria concerning the use of AI in legal practice which Western Australian practitioners may also find helpful.
Do lawyers have an obligation to use AI?
At present, there is no specific obligation for legal practitioners to use AI in legal practice. However, some legal academics have noted that while there is currently no express obligation for lawyers to use AI, growing adoption across the profession may lead courts to expect its use for certain tasks. Others have suggested that the requirement for solicitors to charge costs that are fair, reasonable and proportionate may, now or in the future, effectively oblige solicitors to use AI (or at least prevent them from recovering costs exceeding what would have been incurred had AI been used) where the responsible use of AI could have reduced those costs.
The ethical use of AI in legal practice is a rapidly changing landscape. Lawyers not only need to ensure they carefully consider any ethical concerns about the use of AI raised by their clients and address them proactively but also keep up to date with developments in this fast-paced area of practice.
How far does a lawyer’s duty to supervise extend to the use of AI by others in the legal practice?
In Australia, the duty to supervise is framed as an individual duty. It is worth noting that this differs from the United Kingdom, which has a different system of regulation in which law firms have their own (collective) duties separate from those of individual lawyers.
In Wamba Wemba Native Title Claim Group v State of Victoria [2025] FCA 731 (Wamba Wemba), a junior solicitor deposed that she had used Google Scholar while working from home to produce the citations for footnotes in documents filed at Court. Her supervising lawyer (the solicitor for the applicant) described the work as having been performed collaboratively between team members, but the substance of his evidence was that he was not aware that anyone had checked the junior solicitor’s work. The supervising solicitor accepted it was an error on his part to allow collaborative work to be performed remotely and described the failure to ensure that anyone checked the junior solicitor’s work as “an oversight error”.
The Court ultimately described the error (at para 15) as “…centrally one of failing to check and verify the output of the search tool, which was contributed to by the inexperience of the junior solicitor and the failure of [the lawyer on record] to have systems in place to ensure that her work was appropriately supervised and checked.”
While the Court in that case ordered the supervising solicitor to personally pay the costs of the respondents on an indemnity basis, it did not consider it appropriate to refer the solicitor’s conduct to the Victorian Legal Services Board.
What is the duty of principals with respect to the use of AI within the legal practice?
Principals are responsible for ensuring proper supervision of the provision of legal services within a law practice. They may be held accountable for ethical breaches by employees if they were, or reasonably should have been, in a position to influence that conduct.
Support staff, including paralegals, assistants and receptionists, may have access to AI tools used in the practice. Principals must therefore have effective supervision systems in place to ensure staff do not unintentionally breach the practice’s legal or ethical obligations.
While individual practitioners remain responsible for their own competence and professional judgement, the Principal is ultimately accountable for the overall conduct of the practice. A failure to provide adequate supervision may result in disciplinary action, including potential disqualification from practice.
See ASCR r37.1 and the LIV AI Ethics Guidelines for more information.
What are the risks from the use of AI to confidentiality and privilege?
A key risk in using AI is that information included in prompts to the AI tool (or used for the initial training of an AI tool) may be reused or disclosed by some AI providers. For example, ChatGPT’s terms of use make it clear that it will use content to train the system (unless the user has actively opted out) and/or give data to undefined third parties.
The level of risk to confidentiality arising from the use of AI depends in part on whether the relevant platform is open or closed.
What are the risks arising from the use of an open AI platform?
An open platform (i.e. one in which information inputted into the platform can be accessed by those outside of the firm) may destroy the quality of confidentiality and jeopardise the privileged nature of communications.
Why does the use of open AI systems carry greater risk?
Open-access AI systems share the information you enter with the system’s developer and, in some cases, may make it accessible to other users. For example, ChatGPT’s terms of use make it clear that content will be used to train the system (unless the user has actively opted out) and/or that data may be given to undefined third parties. For this reason, open systems are not suitable for handling confidential or personal client information. If a practitioner intends to use client data with an AI tool, they should obtain the client’s informed consent first.
Does having a closed AI system reduce the risk?
Some AI systems are described as “closed,” meaning the information entered is not shared outside the legal practice. However, the Law Institute of Victoria in its AI Ethical Guidance has strongly advised practitioners to carefully examine any assertion that an AI is a “closed system”. Even closed systems carry risk and may still be vulnerable to data breaches, cyber-attacks and exfiltration of training data. If a law practice wishes to train an AI model with client data, it may be prudent to anonymise the data to mitigate risks.
Can information barriers mitigate the risks of using AI?
Where a legal practice has an information barrier in place, it is important to ensure that the use of AI does not undermine its effectiveness or inadvertently create a conflict of interest.
If a client’s confidential information is entered into an AI system, there is a risk that confidentiality may be breached. This could happen if the same AI tool is later used for another client and inadvertently draws on or reveals information from the first client’s matter.
Practitioners also need to be mindful of the Harman undertaking.
What is the relevance of the Harman undertaking to the use of AI?
The Harman undertaking is an implied common law obligation that requires parties to litigation not to use any information obtained through court process for a purpose other than that for which it was originally produced (i.e. an ancillary or collateral purpose).
Practitioners who use information contained in materials produced or created by order of the court in legal proceedings for a collateral purpose (for example, to train an AI model or tool) may breach the undertaking.
Could the use of open AI (like ChatGPT) by a client to draft an email to their lawyer constitute a waiver of legal professional privilege?
Some legal academics have suggested that if a client uses ChatGPT to write an email to their lawyer containing otherwise confidential and privileged information, it is possible that the use of ChatGPT of itself may take the information outside of the protection of legal professional privilege.
In the United Kingdom, the Court has accepted that sharing data with a cloud storage system could, in certain circumstances, constitute a waiver of privilege. How Australian courts treat such conduct for the purposes of legal professional privilege is likely to evolve as the use of AI, and its risks, becomes more widespread.
How can legal practitioners safely use AI in legal practice?
Legal regulatory bodies in NSW, Victoria and Western Australia (in WA, the Legal Practice Board of Western Australia) have issued joint guidance to practitioners on the use of AI through their Statement on the use of AI in Australian legal practice.
The Statement outlines the specific conduct rules that practitioners in WA must comply with when using AI in any capacity in the delivery of legal services. It suggests that practices limit the use of AI tools to tasks which are lower-risk and easier to verify (e.g. drafting a polite email or suggesting how to structure an argument) and prohibit their use for tasks which are higher-risk and which require independent verification (e.g. translating advice into another language, analysing an unfamiliar legal concept or executive decision-making).
What tasks are generative AI chatbots best used for?
Generative AI chatbots are most effective at working with text and performing text processing tasks. They can summarise information, adjust tone or format, and create new content or outlines in a specific style. They can also provide general information on a wide range of topics and respond to questions. However, it’s important to remember that their answers are not always accurate and may contain errors.
Can gen AI chatbots ever be safely used in legal practice?
At present, generative AI chatbots cannot provide reliable legal advice tailored to individual cases.
However, they can serve as a useful support tool for legal practitioners. Generative AI tools can assist in distilling legal principles, explaining the law in plain language, and helping to communicate how it applies to a client’s circumstances. These tools may also assist with drafting by organising facts more clearly, suggesting headings, refining structure, or improving formatting, grammar, tone, and writing style.
Caution is essential when using generative AI to assist with affidavits or witness statements. Any sworn or affirmed document must accurately reflect the person’s own knowledge, words, and intent, not text generated by an AI system.
Generative AI tools can also be valuable in a review or analytical role, for example, acting as a “devil’s advocate” to identify weaknesses in an argument, highlight inconsistencies, or test clarity and logic by generating alternative drafts or summaries.
All content produced by generative AI should be independently verified by the supervising lawyer to confirm that it meets the lawyer’s ethical standards and professional responsibilities.
What is best practice when it comes to using AI in legal practice?
In September 2025, the Law Institute of Victoria published Ethical Guidelines for AI use in legal practice, which contain nine recommendations for practitioners who want to minimise the risk of ethical breaches, complaints to the regulatory body or professional indemnity insurance claims arising from the use of AI in legal practice:
- Never enter confidential client information or legally privileged information into an open source or commercial AI system without client consent;
- Check all outputs generated by AI for accuracy and suitability;
- Check all cases generated by AI and ensure they exist and are relevant to the client matter;
- Disclose the use of AI to your client, and also be in the position to inform your opponent and the Court (if put on inquiry about its use);
- Read the Privacy Policy and terms of use of an AI tool before you use it for the first time and review them regularly;
- Review the Model Card and Data Nutrition Label (if available) in conjunction with the Privacy Policy;
- Ask technology vendors relevant questions after reviewing the Privacy Policy, Model Card, and Data Nutrition Label to ensure informed choices are made when selecting an AI tool for your organisation;
- Ensure a client is charged fairly for work completed with AI; and
- Retain relevant information to conduct conflict checks after terminating a retainer as part of good practice management.
What information should a lawyer know about an AI program before they use it?
Lawyers should know enough about the AI program they intend to use that they could, if needed, explain to the client, the Court, their opponent or all three which AI program was used, what it was used for, how it was used and what information was obtained from it.
If a lawyer cannot do so, the Law Society’s Ethics Committee is of the view that they should not use the AI program. The work must be yours, which necessarily requires you to be able to provide the explanation required.
In September 2025, the Law Institute of Victoria published Ethical Guidelines for AI use in legal practice, which recommend that practitioners read the Privacy Policy and terms of use of an AI tool before using it for the first time and review them regularly; review the Model Card and Data Nutrition Label (if available) in conjunction with the Privacy Policy; and ask technology vendors relevant questions after reviewing those documents to ensure informed choices are made when selecting an AI tool for their organisation.
What steps do Courts expect legal professionals to take to verify AI assisted work?
A qualified legal practitioner should take ultimate responsibility for any document submitted to the Court regardless of how AI has been used in generating that document.
The accuracy, relevance and propriety of all pleadings, affidavits, submissions, citations, witness statements and expert reports relied upon by a represented party remains the responsibility of the legal practitioner.
In JNE24 v Minister for Immigration and Citizenship [2025] FedCFamC2G 1314, Judge Gerrard was concerned that the lawyer involved did not appear to fully comprehend what was required of him.
At para 34 Judge Gerrard explained that it was, “…not sufficient to simply check that the cases cited were not fictitious. What is expected from legal practitioners as part of their duty to the Court and to their client is that those cases (if they do exist) are reviewed to ensure they are authority for the principle the lawyer wishes to rely upon, have not been subsequently overturned or distinguished by a higher court, and are considered in respect of how and why those principles are relevant to the factual matrix of the case in which they intend to advance that proposition.”
Should legal practices which use AI have documented policies within their practice in relation to its use?
Yes.
It is the view of the Law Society’s Ethics Committee that without policies, neither the lawyer nor the staff have any way of knowing the steps that should be undertaken, the enquiries that should be made and the procedures that should be followed concerning the use of AI. It is almost certain that you will be unable to provide a satisfactory explanation to the client, the Court and/or the regulator concerning your use of AI unless you have adequate policies in place.
What should the AI policies include or address?
Policies should include whether generative AI can be used in the practice, what functions the AI tools can perform, whose approval is required for their use on any particular file and what supervision protocols apply.
Policies should cover how the use of LLMs and generative AI will be explained to clients and the implications of their use, both in terms of any risks and the effect on the fees which are likely to be incurred.
The Statement on the use of AI in Australian legal practice produced jointly between the Legal Practice Board of Western Australia and counterparts in NSW and Victoria states that lawyers who are using AI in their practice should consider implementing clear, risk-based policies to minimise data and security breaches, and set out what AI tools they have decided to use in their practice, who can use those tools, for what purposes, and with what information. These policies should also set out how they will continuously and actively supervise the use of AI tools by junior and support staff, and how documents containing AI-generated content will be reviewed for accuracy and verified before they are settled. We recommend that lawyers make these policies available to clients upon request to increase transparency.
Lawyers should carefully consider any ethical concerns about the use of AI raised by their clients and address them proactively.
Will AI policies alone be sufficient?
No.
The Law Society’s Ethics Committee has emphasised that policies without training will not be sufficient. The training needs to teach everyone in the practice what they need to know and do in order to comply with the policies.
What information does counsel need to know about the use of AI to prepare legal documents in a matter?
In DPP v Khan [2024] ACTSC 19, the Supreme Court of the Australian Capital Territory emphasised at para 43 that counsel should make appropriate enquiries and be in a position to inform the court as to whether or not any material that is being tendered has been written or rewritten with the assistance of an AI tool.
What information should be disclosed about the use of AI tools by a legal practice?
The Statement on the use of AI in Australian legal practice produced jointly between the Legal Practice Board of Western Australia and regulatory counterparts in NSW and Victoria recommends that lawyers be transparent about the use of AI within a legal practice and properly record and disclose to their clients (and where necessary or appropriate, the court and fellow practitioners) when and how they have used AI in a matter.
In the Guidelines for the use of generative AI, the Supreme Court of Western Australia states that generative AI must not be used in a way that may mislead other parties or the court as to the work undertaken by a party or a practitioner or the manner in which the content of a document has been produced. Accordingly, when directed by the court or where otherwise necessary or appropriate, the use of generative AI (including the preparation of any materials) should be disclosed to other parties and the court.
As the use of AI rapidly changes, so too will the ethical and professional obligations of legal practitioners. Lawyers need to ensure they stay up to date with developments in this fast-paced area of practice.
Is the use of AI relevant to cost disclosure?
In terms of cost disclosure, lawyers using AI to support their work should ensure that the time and work they bill clients for accurately represent the legal work done by law practice staff for their client. Lawyers who use AI should ensure that it does not unnecessarily increase costs for their client above traditional methods (e.g. because of additional time spent verifying or correcting its output).
Some legal academics have suggested that the requirement for solicitors to charge costs that are fair, reasonable and proportionate may, now or in the future, effectively oblige solicitors to use AI or at least prevent them from recovering costs exceeding what would have been incurred had AI been used, where the responsible use of AI could have reduced those costs.
The ethical use of AI in legal practice is a rapidly changing landscape. Lawyers not only need to ensure they carefully consider any ethical concerns about the use of AI raised by their clients and address them proactively but also keep up to date with developments in this fast-paced area of practice.
What are the potential consequences which may follow the misuse of AI by a legal practitioner?
The misuse of AI by a legal practitioner could result in any (or all) of the following adverse consequences: referral to the Legal Practice Board of Western Australia for investigation, costs orders (including personal costs orders on an indemnity basis) and/or claims against the practice.
There are also reputational consequences for the practitioner, the firm and the profession as a whole to consider.
When the misuse of AI is identified by the Court, the Court’s response will depend on the circumstances. For example, in the case of Wamba Wemba, the applicant’s solicitor was ordered to personally pay the costs of the respondents, on an indemnity basis, incurred through the firm’s use of artificial intelligence in the preparation of documents served on the respondents.
In JNE24 v Minister for Immigration and Citizenship [2025] FedCFamC2G 1314, the Court was satisfied that it was appropriate for a personal costs order to be made against the practitioner and that the practitioner should be referred to the Legal Practice Board of Western Australia for investigation. In that case, the Court noted concern at para 34 of the judgment that the lawyer did not appear to fully comprehend what was required of him to discharge his ethical obligations.
In what circumstances may misuse of AI result in indemnity costs being ordered against a practitioner?
In Wamba Wemba, a junior solicitor at the firm prepared footnotes to court documents while she was working out of the office, without access to the physical or electronic copies of the footnoted documents held at the law firm. She stated that she used Google Scholar to produce the document citations.
Her supervising solicitor described that work as having been performed collaboratively between team members and he was not aware that anyone had checked the junior solicitor’s work. The lawyer accepted that it had been an error on his part to allow collaborative work to be performed remotely.
The Court stated that the supervising lawyer, as the lawyer on record, failed in this case to have systems in place to ensure that [the work of the junior lawyers] was appropriately supervised and checked.
In this case, the court ordered that the supervising solicitor personally pay the costs of the respondents, on an indemnity basis.
In what circumstances may a lawyer who has misused AI be referred by the Court to a regulatory body?
There have been several cases in Australia in which Courts have referred a lawyer to the appropriate regulatory body. In these cases, the Court found that the lawyer engaged in improper conduct.
In JNE24 v Minister for Immigration and Citizenship [2025] FedCFamC2G 1314, Judge Gerrard stated at para 28 that in considering whether to make such a referral, the Court was guided by the sensible and measured approach taken in other decisions, in particular, by Judge Skaros in Valu (No 2) and Judge Humphreys in Dayal.
That approach required considering a number of matters including:
(a) The sincere apology and genuine regret of the legal representative;
(b) Her Honour’s acceptance that the conduct would not be repeated;
(c) The undertaking of the legal representative to further his knowledge and understanding of the risks of using generative AI;
(d) As soon as the legal representative became aware of the fictitious citations, he took steps to ameliorate the error;
(e) The inconvenience to the Minister and the Court, and the disruption to the hearing;
(f) The strong public interest in referring the conduct to the regulatory authority given the increased use of generative AI tools by legal practitioners.
How is the use of AI regulated by courts in WA?
In December 2025, the Supreme Court of Western Australia issued its Guidelines for the use of generative AI.
The guidelines highlight the obligations which practitioners must not breach through the use of AI and require any AI-generated content that is relied on for the purpose of conducting proceedings in the Supreme Court to be verified by a human who takes legal responsibility for the contents of that document.
How is the use of AI by lawyers regulated by courts in other parts of Australia?
Courts and tribunals around Australia are taking varied approaches to regulating the use of AI by legal practitioners in their respective jurisdictions.
These differences can create challenges for practitioners working across multiple jurisdictions as differing rules and procedures make it harder for practitioners to meet their professional and compliance obligations.
Further information about practice directions in courts around Australia is available on this AI Hub here.
International use and regulation
For practitioners interested in finding out more about the development of AI regulation globally, there is a Global AI Regulation Tracker – an interactive world map by Raymond Sun that tracks AI law, regulatory and policy development around the world which the Law Council of Australia has referenced as an online resource.
The Legaltech Hub is an insights and analysis platform which helps legal professionals find legal technology software and consultants in any language, anywhere around the world.
Are there any CPDs available?
Yes.
There is a range of CPD webinars available on demand on AI and related topics, including Navigating the Landscape of AI tools for Lawyers, Machine Learning and the Disruption of Intellectual Property, and Cybersecurity.
Members can access the recordings of these CPD webinars for free through their Member Online Portal.
Information about upcoming CPDs can be found here – CPD Seminars: Enhance Your Legal Skills.