This blog is an excerpt from an article in Bugcrowd’s newest report, Inside the Mind of a CISO. Check out the report for the full article, along with other thought pieces, infographics, and data analyses for CISOs and security leaders.

Dan Maslin is an experienced technology executive based in Australia. For the past six years he has worked at Monash University, Australia’s largest university with around 90,000 students and 20,000 staff, where he is Group Chief Information Security Officer and Head of Infrastructure Strategy.

In 2025, Monash University announced its investment in building and operating an advanced AI supercomputer to transform AI-driven research. The supercomputer is the first in Australia to use the NVIDIA GB200 NVL72 platform and is expected to deliver unprecedented AI capability for research in areas ranging from cancer detection to climate action.

We sat down with Dan to learn more about this amazing project, AI governance, and his approach to proactive security.

Can you tell us about how you’re approaching security for this new AI supercomputer?

There are so many layers to this! To start, fortunately for me, the organization has a positive security culture and typically considers cyber, privacy, and sovereignty early in projects. As CISO, I was brought into the project very early, more than six months before anything became public, and sat on the evaluation panel for all parts of the project. I needed to be comfortable with everything from the data center where we’d host it through to the supplier of the hardware. We landed on an arrangement with CDC as the data center provider and NVIDIA and Dell for the hardware. I was able to probe the security considerations of every aspect, from physical security at the hosting site to software and hardware supply chain assurance, the vetting of staff, and all parties’ approach to vulnerability disclosure and inclusion in bug bounty programs (yes, that was a question they needed to respond to!).

The issue of AI governance extends beyond tech into realms of compliance, operations, and brand reputation. How are you approaching and prioritizing AI governance?

For Monash, AI governance runs even deeper. Aside from the usual corporate considerations around AI in operations, we also have to consider how AI affects research and education, both of which are likely to be transformed in the coming years. In early 2024, Monash established an Artificial Intelligence Steering Committee, with more than a dozen members representing every corner of the university. Reporting directly to the Vice-Chancellor (the equivalent of the CEO in a corporation), the Committee exists to create a clear understanding of the risks and strategic benefits of using AI for education, research, and operations, both in the short and long term, and it oversees and informs decision-making on the use of AI across the Monash Group into the future.

Monash also has a publicly available AI Readiness Framework that is fairly comprehensive, covering the people, technology, and scaling aspects, and this is where governance is situated. It includes an organization-wide agreement on responsible-use principles, internal policies, the risk management approach, and tracking of the evolving legal and regulatory landscape surrounding AI. So in short, AI governance is a product of organization-wide input, reporting into the most senior level of management.

How do proactive security and offensive security testing play a role in your overall security strategy?

Offensive security testing is absolutely at the core of our strategy and one of the first principles we introduced when I joined five years ago. We can’t continuously and proactively test our environment at scale with internal resources alone; we need a crowd. We will never have the breadth and depth of skills internally to deeply test and provide effective assurance across everything, from mobile apps and building management systems to corporate IT and supercomputers, so we leverage the variety of skills available within a crowd of ethical hackers to have confidence that we’ll know about a vulnerability first. I’ve always said that we can’t manage what we don’t know about, so we’re better off prioritizing scalability and continuous visibility across our environment.

Can you highlight an initiative from your team over the past year that exemplifies excellence, innovation, and resilience?

Our team created and runs the Cyber Security Student Incubation Program, which was set up to do three things: build a reliable talent pipeline for the internal cyber security team, give students meaningful paid experience while they study, and help produce job-ready graduates who don’t need to start from scratch in the industry. We recruit five students each year and give them part-time roles (usually 2–3 days a week for a year), paid at market rate and supported by structured training and mentoring. This isn’t unpaid work experience; they’re treated as part of the team. We see it as a win-win-win: we win because we get access to intelligent new talent about to enter the market, the students win because they get a year of real-life paid work experience, and the industry wins because it gets a Monash graduate with a degree and a full year of hands-on work experience.