Software giant Atlassian will issue guidance to companies eager to use artificial intelligence technology in a bid to curb risky behaviour, dangerous bias and “oops” moments, the company has revealed.
Atlassian chief legal officer Erika Fisher told AAP the company would release an AI template to partners on Wednesday after teaming with researchers at the University of Technology Sydney to test its advice.
The guidance will come during a time of unprecedented growth for artificial intelligence technology and after a survey from tech firm UKG showed almost one in four Australians trusted AI tools “completely”.
Ms Fisher said the arrival of generative AI had an impact equivalent to that of Apple’s iPhone, and questions about safe ways to use the technology had overtaken privacy concerns as the top query for Atlassian.
Companies, she said, were right to question safe use of the technology as it could prove dangerous in some scenarios.
“Day to day, there seems to be a new (AI) headline around everything from an ‘oops’ to ‘oh, that’s really damaging’,” Ms Fisher said.
“We try to think about all of the angles through which seemingly good ideas could be exploited, could have bias introduced, could have unintentional areas of undermining and to really look at it from a risk-based analysis.”
Atlassian’s AI template, which has been tested by UTS’ Human Technology Institute, includes guidance for clear communication, building for trust, involving engineering and legal teams, and considering mitigation strategies.
Ms Fisher said the advice had come from real-world experience with the technology, including one instance in which Atlassian assessed using AI to streamline processes for a recruitment company but judged it to be too risky to deploy.
“We couldn’t get comfortable with the provenance of the data that it was providing to help us make different decisions and obviously the outcomes of that are tremendous when we think about bias, when we think about how it shapes our workforce,” she said.
“This was an area where we said we’re just not comfortable introducing it – the risk is too high in terms of unintended consequences.”
The Atlassian guidance would stop short of being a compliance framework, she said, but was designed to “get everyone past the blank page”.
The firm’s advice comes as a UKG survey of more than 1000 Australian employees found 24 per cent trusted the results from generative AI tools “completely” and 55 per cent had some faith in their findings.
More than one in three Australian workers had entered personal, identifying information into AI tools, the research found, and one in four had shared sensitive work information despite the risk it could be retained or shared.
Those surveyed said they intended to use AI to balance their workload, automate time-consuming tasks, and complete more work.