The Security Dangers of Reliance on AI

18 March 2024

As businesses rush toward the opportunity, potential, and convenience of AI, many wise heads are warning of the dangers of trusting machines with critical functions. Google's recent embarrassment over its Gemini AI tool, after billions spent on research and development, is just one example of the pitfalls of rushing such solutions to market before the bugs have been swatted.

A group of Queensland academics from the University of the Sunshine Coast recently highlighted some of the moral and technical risks associated with rushing to AI. The research, published in the journal AI and Ethics, explored the business risks to privacy and security affecting the public, staff, and other stakeholders. “There are growing concerns that the race to integrate generative AI is not being accompanied by adequate guardrails or safety evaluations,” the study stated.

“The rapid adoption of generative AI seems to be moving faster than the industry’s understanding of the technology and its inherent ethical and cyber security risks,” said co-author Dr Declan Humphreys, a lecturer in cyber security at UniSC. Gemini's inability to produce images that reflected reality was a seemingly trivial mishap, yet many Australian companies are handing their security over to machines in the blind hope that AI can adequately defend priceless digital assets.

The authors suggested a checklist for companies exploring AI options:

  • Decision makers should practice secure and ethical AI model design.
  • Companies must rely on a trusted and fair data collection process.
  • Any security strategy must include secure data storage.
  • AI initiatives should follow ethical principles for model retraining and maintenance.
  • Any AI rollout should incorporate the upskilling and training of the affected workforce.

One obvious danger of the AI development process involves feeding sensitive data into the model without fully appreciating the use and security of the system being developed. 
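One practical mitigation is to scrub obviously sensitive material from prompts before they ever reach an external model. The sketch below is illustrative only — the patterns shown (emails, card-like digit runs, Australian phone numbers) are assumptions for the example, not a complete data-loss-prevention taxonomy, and a real deployment would use a vetted DLP layer:

```python
import re

# Minimal sketch (not production-grade): mask obvious sensitive
# patterns before a prompt is sent to an external AI model.
# These patterns are illustrative assumptions, not a full PII taxonomy.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),      # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),        # card-like digit runs
    (re.compile(r"\b\+?61[ -]?\d{3,4}(?:[ -]?\d{3,4}){2}\b"), "[PHONE]"),  # AU numbers
]

def redact(prompt: str) -> str:
    """Return the prompt with known sensitive patterns masked."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

# Example: scrub a support ticket before it leaves your environment.
print(redact("Customer jane@example.com paid with 4111 1111 1111 1111"))
```

The point is architectural rather than the specific regexes: redaction happens inside your own perimeter, so the model provider never holds the raw data at all.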

Gartner also warns of the dangers, particularly those arising from third-party relationships. Careful management of such relationships has become increasingly fraught as companies realise that their security depends largely on those up and downstream. Lengthy supply chains make for lengthy lists of third-party relationships that must be scrutinised and supervised. If one of your suppliers has entrusted critical data to an untested, emerging AI security tool, their exposure becomes yours.

While we are excited by the developing AI storyline, there must be caution when security and sensitive data are at stake. We recommend extensive research and consultation before handing your cyber security needs over to the machines. 

For more information about the growing role of AI, contact Intalock today.

