Artificial intelligence is now part of everyday life for millions of Australians, but many lack the confidence and skills to use it effectively and safely.
Almost half of Australians are using generative AI, according to a 2025 Australian Digital Inclusion Index survey, while an EY report found adoption rises to more than two-thirds among workers.
Despite this widespread use, confidence and proficiency remain low.
A 2024 Western Sydney University survey found just one in five adults feels confident using AI tools, while 46 per cent said they want to learn more.
Similarly, the EY report found that more than half of Australian workers lack confidence in their AI skills, and only a third have received any formal training from their employer.
But using tools such as AI chatbots safely doesn't require a computer science degree.
You just need to understand what they do and how to check their work.
Jake Renzella, a computer scientist at the University of NSW, says AI chatbots are interfaces for large language models (LLMs).
LLMs work by predicting the next most likely text based on patterns they find in vast amounts of training data.
"Imagine you are presented with the sentence: 'Hello, how are ___'. Most people would say 'you'," Dr Renzella told AAP FactCheck.
"The language models are doing something very similar."
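The next-word prediction Dr Renzella describes can be sketched with a toy counting model. This is a drastic simplification for illustration only: the tiny corpus and the bigram-counting approach below are hypothetical stand-ins, whereas real LLMs predict tokens using billions of learned parameters rather than raw word counts.

```python
from collections import Counter, defaultdict

# Toy "training data": count which word follows each word,
# then predict the most frequent follower.
corpus = (
    "hello how are you . hello how are you . "
    "hello how are things ."
).split()

follower_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follower_counts[word].most_common(1)[0][0]

print(predict_next("are"))  # prints "you": it followed "are" twice, "things" once
```

Scaled up enormously, pattern-matching of this kind is why a model can complete "Hello, how are ___" convincingly without understanding the sentence.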
That means AI chatbots can rapidly generate polished, confident responses that may be inaccurate.
Here are three habits that can help you use AI safely and effectively.
1. Be Specific
Dr Renzella said vague queries or 'prompts' get vague responses from AI chatbots.
He said to think of prompting like instructing a new office intern.
"If you say 'make me a cup of coffee', the intern is unlikely to know how you like your coffee, where you'd like your coffee from, and so on," he said.
A good prompt explains the context, audience, format, scope and guardrails to a chatbot, Dr Renzella said.
You should also avoid leading questions, as chatbots are built to be agreeable and will reinforce your assumptions.
Researchers call this 'sycophancy', and it can lead to inaccurate responses or even harm to vulnerable users.
The eSafety Commissioner warned in 2026 that sycophancy in AI chatbots posed risks to children by 'entrapping and entrancing' them with human-like, often harmful conversations rather than flagging danger.
So asking open, neutral and specific questions is likely to produce safer and more accurate responses.
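As an illustration of the elements Dr Renzella lists, a structured prompt might be assembled like this. The scenario and wording are hypothetical, not taken from the article:

```python
# Hypothetical prompt covering context, audience, format, scope and guardrails.
prompt = "\n".join([
    "Context: I run a small bakery in Melbourne and want to email customers.",
    "Audience: regular customers; friendly but professional tone.",
    "Format: a short email of no more than 150 words.",
    "Scope: announce the new sourdough range only; no discounts or offers.",
    "Guardrails: do not invent prices, dates or health claims.",
])
print(prompt)
```

Compared with a vague request like "write me a customer email", each line narrows what the chatbot can assume, so there is less room for it to fill gaps with guesses.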
2. Request Sources
Chatbot responses often sound highly authoritative, but Dr Renzella warns that fluency is often mistaken for accuracy.
When a chatbot presents fabricated information with confidence, that's called a hallucination.
They also routinely fabricate sources and quotes, he said.
In 2025, a judge criticised lawyers for submitting AI-generated court documents that cited nonexistent cases and contained inaccurate quotes.
Consulting giant Deloitte partially refunded the government for a $440,000 report after it was found to contain fake academic references and other errors, The Guardian reported.
Dr Renzella said the risk is even higher for Australian topics because of the limited amount of local training data.
"In Australian legal contexts, or medical contexts, we often find American-centric responses which are inappropriate or hallucinations," he said.
Remember to always ask chatbots to cite their sources, then search for those sources and verify them yourself.
3. Always Review
Polished chatbot responses can make it tempting to accept the answer and move on.
A KPMG study found 57 per cent of Australian employees had relied on AI output without checking it, and nearly 60 per cent had made work-related mistakes as a result.
Jason Lodge, an educational psychologist at the University of Queensland, has warned that polished AI responses also discourage students from engaging deeply.
"Research tells us this can signal to the learner that deep mental engagement is no longer necessary," Professor Lodge wrote in The Conversation.
This can trigger a cycle in which a student's own knowledge is eroded, he explained, making them more dependent on AI responses and less able to judge their accuracy.
"Critical thinking is not a generic skill – it is deeply intertwined with knowledge," Prof Lodge wrote.
So for important decisions, particularly in relation to finance, health, legal matters or safety, always review AI chatbot responses critically and ensure a qualified human has the final say.
The most important part of AI is you
Be Specific: Give clear instructions to get better results.
Request Sources: Ask for citations to verify the information.
Always Review: You decide what to trust and use.
AAP FactCheck thanks Dr Jake Renzella from the University of NSW for providing the tips and other content for this article.