According to Austrade’s Australian AI Industry Capability Report, prepared by the Commonwealth Scientific and Industrial Research Organisation (CSIRO), more than 50 percent of organizations are using AI, and nearly half of Australians have used generative AI.
Alongside this rapid uptake, attention has increasingly turned to how AI systems are designed and governed. National policy initiatives, including the National AI Plan and the AI Plan for the Australian Public Service, emphasize the importance of safe, secure, and trustworthy AI systems. CSIRO has positioned responsible AI as a central focus of its work in this area.
CSIRO’s approach to responsible AI emphasizes building safety, security, privacy, and reliability into systems throughout their lifecycle. This includes maintaining human oversight, enabling scrutiny of AI decisions, and supporting informed and safe use by individuals and communities.
In 2025, CSIRO’s Responsible AI (RAI) team published Engineering AI Systems, a practical guide addressing the design and operation of AI systems. The publication builds on principles introduced in an earlier Responsible AI release and reflects the methods used by CSIRO researchers and engineers in applied settings.
CSIRO is also working with the Audit Office of NSW to explore how AI can be used in government auditing and decision-making, combining AI tools with established audit oversight practices.
To support broader adoption, CSIRO has partnered with the National AI Centre to develop Guidance for AI Adoption, which outlines six practices for responsible AI governance and use. Complementary guidance on AI-generated content explains how and when to disclose the use of AI in text, images, audio, and video, including approaches such as labeling, watermarking, and metadata.
Together, these initiatives are intended to support safe and informed adoption of AI technologies across Australian organizations and communities.