What is AI?


Artificial Intelligence (AI) allows machines to carry out tasks that would normally require human intelligence—such as communicating in natural language, organising information, and solving complex problems. However, AI only mimics intelligence and understanding; it does not replicate them. It is important that legal practitioners understand AI and how it operates to ensure that when they use it, they do so ethically.

Below are some definitions of key terms and concepts legal practitioners must understand in order to ethically use AI in their legal practice.

Generative AI: Generative AI refers to a form of artificial intelligence capable of producing original content such as text, images, or music. While the output may appear novel, it is generated based on patterns learned from existing data. Unlike a search engine that retrieves existing information, Generative AI predicts and constructs new material based on its training data. Because it relies on these patterns, Generative AI can sometimes generate false or misleading content, known as an AI hallucination, where it fabricates facts, references, or details that do not actually exist.

Machine learning: A process in which a computer is trained using a large set of relevant data, known as training data. Instead of being given explicit instructions, the system is provided with a goal and learns how to achieve it through trial and error. It builds a model that attempts to solve the problem and continually checks its outputs against the expected outcomes in the data (i.e. model training). Over time, the model can improve its performance through further data input and feedback.
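For readers who want to see this idea concretely, the short Python sketch below is purely illustrative—a toy example, not how commercial AI systems are actually built. A model with a single adjustable number repeatedly checks its predictions against the expected outcomes in the training data and adjusts itself to reduce the error, which is the trial-and-error process described above.

```python
# Illustrative sketch of machine learning: the model is never told the
# rule behind the data; it discovers it by repeatedly comparing its
# predictions against the expected outcomes and correcting itself.

# Training data: inputs paired with expected outcomes (here, y = 3 * x,
# though the model is not told this rule).
training_data = [(1, 3), (2, 6), (3, 9), (4, 12)]

weight = 0.0          # the model's single learned parameter
learning_rate = 0.01  # how strongly each error adjusts the model

for epoch in range(1000):             # repeated trial and error
    for x, expected in training_data:
        prediction = weight * x               # the model's attempt
        error = prediction - expected         # check against expected outcome
        weight -= learning_rate * error * x   # feedback improves the model

print(round(weight, 2))  # the learned parameter settles close to 3
```

After enough repetitions, the learned parameter approaches 3, matching the pattern hidden in the training data—without anyone having programmed that rule explicitly.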

Large Language Model (LLM): This type of AI tool communicates with humans using natural language, rather than relying on traditional computer code or commands. While it excels at simulating conversation and interaction, its ability to conduct research or perform calculations can be inconsistent. Examples of large language models in common use include ChatGPT and GPT-4 (OpenAI), Claude (Anthropic), Gemini (formerly Bard, by Google), and LLaMA (Large Language Model Meta AI, by Meta).

Ingestion: The process by which data is used to train or fine-tune a machine learning model. The data is often tokenised and is typically de-identified and rendered unreadable to humans, which may place it outside the scope of many common definitions of “confidential information”. During ingestion, the AI system learns and internalises patterns from the training data, but it does not usually reproduce the data word-for-word. As a result, its use may not meet the traditional thresholds for copyright infringement or breach of confidentiality.
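As a purely illustrative sketch of tokenisation—real AI systems use far more sophisticated methods and much larger vocabularies—the following Python snippet shows how readable text can be converted into numeric token IDs, so that what the system stores is no longer the original words:

```python
# Illustrative tokenisation sketch: each distinct word in the text is
# assigned a numeric ID, and the text is stored as a list of numbers
# rather than as human-readable words.

sentence = "the client signed the agreement"

vocab = {}    # toy vocabulary mapping each distinct word to a number
tokens = []   # the tokenised (numeric) form of the sentence
for word in sentence.split():
    if word not in vocab:
        vocab[word] = len(vocab)  # assign the next unused ID
    tokens.append(vocab[word])

print(tokens)  # prints [0, 1, 2, 0, 3] — numbers, not the original words
```

Note that the repeated word "the" maps to the same ID both times: the system learns from these numeric patterns rather than from the text itself.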

Hallucination: Generative AI systems have a tendency to produce outputs that seem plausible but include invented or inaccurate elements that fill gaps in the surrounding content. This occurs because current AI does not truly understand the information it generates and cannot independently verify the accuracy of its output. For instance, a general-purpose language model may draft legal submissions that appear credible on the surface but contain fabricated case citations or fictional judicial statements. The model can tell from its training data that submissions should contain case citations, but it does not understand that those citations need to reference real cases. While the risk of such “hallucinations” is expected to decrease as AI models become more refined and specialised, it is unlikely to be eliminated entirely.

Bias: Any tool developed using machine learning will inherently reflect the limitations of its training data. If the data does not adequately represent the population or context in which the tool is applied, this can lead to bias or increased inaccuracies. For example, facial recognition software may perform poorly in identifying gender or racial characteristics if the training dataset lacks diversity. Similarly, AI systems trained to negotiate contracts in one legal jurisdiction or language might appear effective in another, but their accuracy and reliability cannot be assumed without thorough validation.

Useful resources

 

Law Society of Western Australia Reception: (08) 9324 8600
Law Mutual: (08) 9481 3111
Continuing Professional Development: (08) 9324 8640
Membership Services: (08) 9324 8692
Professional Standards Scheme: (08) 9324 8653
Old Court House Law Museum: (08) 9324 8688
Francis Burt Law Education Programme: (08) 9324 8686
Media Enquiries: (08) 9324 8650