
A Brave New World: Navigating the Dangers of Generative AI

Sarah McMillan
March 5, 2024

The age of generative AI (GenAI) is upon us.

GenAI, a sophisticated blend of algorithms and neural networks, possesses the power to analyze data, make decisions, and even generate content that mirrors human intelligence.[1] Its adoption in the mainstream has been startlingly rapid, and it continues to grow and evolve as it becomes part of our everyday lives.

For employees, understanding the nuances of GenAI is essential to maintaining information security in the face of new and developing dangers.


Potential hazards

Much like technologies that came before it, the current era of GenAI can be likened to the Wild West: good and bad converge in a lawless land of possibility. Because oversight, for the most part, is nonexistent, vigilance is required to ensure that malicious forces don’t wreak havoc.

Here are a few of the most important risks to be aware of:

Intellectual Property

One of the primary concerns is the safeguarding of Intellectual Property (IP). GenAI, with its capacity to learn from vast datasets, poses a significant challenge to protecting proprietary technologies, trade secrets, and business processes.[2]

A disturbing trend is emerging: AI models are being trained on copyrighted material without sufficient licensing or royalties – in other words, they are taking proprietary information that is not theirs to take. In the absence of laws designed to regulate the incorporation of such works into GenAI systems, there exist few safeguards against this form of piracy.[3]

Sensitive Information

On a very basic level, GenAI learns from exposure: inputs into readily available tools like ChatGPT and DALL-E are logged, analyzed, and utilized in future outputs.

Employees must exercise caution when entering information into these systems, as any input can be regurgitated during subsequent queries. The potential for privacy breaches underscores the imperative to keep Personally Identifiable Information (PII) out of AI tools.

Always generalize queries and refrain from incorporating identifiable characteristics; this helps to maintain confidentiality.
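As a minimal illustration of what "generalizing" a query can look like in practice, the hypothetical sketch below scrubs a few obvious identifiers from a prompt before it is ever sent to an external GenAI tool. The patterns, names, and workflow here are assumptions for illustration only, not an approved redaction process; a real deployment would use a vetted PII-detection library with much broader coverage.

```python
import re

# Hypothetical, minimal patterns for a few common identifiers; a real
# deployment would rely on a vetted PII-detection library covering names,
# addresses, policy numbers, and more.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def generalize_prompt(prompt: str) -> str:
    """Replace obvious identifiers with generic placeholders before the
    prompt leaves the company for any external GenAI tool."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Draft a renewal letter for jane.doe@example.com, phone 206-555-0123."
    print(generalize_prompt(raw))
    # Draft a renewal letter for [EMAIL REDACTED], phone [PHONE REDACTED].
```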

Incorrect Information and Reliance

In their current state, GenAI models have a tendency to "hallucinate," presenting fabricated data as truth; for example, ChatGPT has been known to generate fictitious citations.[4][5] This raises questions about the reliability of information sourced from GenAI models.

Relying solely on these systems may lead to issues such as false intelligence, skill degradation, and an inhuman tone, emphasizing the importance of treating GenAI, especially “free-to-use” offerings, as tools rather than wholesale replacements.[6]

Impersonation, Misuse, and Malicious Attacks

Malicious actors can leverage GenAI for various fraud and scam schemes: virtual kidnapping scams, socially engineered phishing, CEO impersonation fraud – the list is expansive.[7] The creation of deepfakes – computer-generated images, video, and audio meant to impersonate real people – exacerbates this threat.

GenAI’s language models open the door to unprecedented scale, speed, and complexity in cyberattacks, optimizing phishing attempts and making them more convincing.[8] Even entry-level coders can exploit these capabilities to steal data, infect networks, and attack systems.

Ethics

The potential invasion of personal privacy, influence over decision-making processes, and the perpetuation of discrimination due to flawed algorithms and biased data strike at the core of societal values and equality.[9]

As we navigate the uncharted waters of GenAI, it becomes apparent that an ethical approach is crucial to mitigate the inherent risks and ensure the responsible development and utilization of this powerful technology.

Mitigating risk

Identify GenAI Content

Practice the skills needed to recognize and combat phishing attempts orchestrated by GenAI. To help develop these skills, have your company’s Information Security team provide Cybersecurity Awareness Training and Phishing Simulation campaigns that resemble GenAI outputs.

Information Security

First and foremost, when using GenAI tools, avoid inputting sensitive data such as company names, Personally Identifiable Information (PII), intellectual property (IP), and company assets. Instead, only use generalized queries that steer clear of compromising specifics.

Only use GenAI tools approved by your Information Security Team, ensuring a controlled and secure environment.
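As one hedged sketch of how an "approved tools only" rule might be enforced (the hosts and helper below are hypothetical placeholders, not a list WSRB maintains), outbound GenAI requests could be checked against an allow-list owned by Information Security:

```python
from urllib.parse import urlparse

# Hypothetical allow-list owned by the Information Security team; the
# domains below are placeholders, not endorsements of specific tools.
APPROVED_GENAI_HOSTS = {
    "genai.internal.example.com",
    "approved-vendor.example.net",
}

def is_approved_genai_endpoint(url: str) -> bool:
    """Return True only if the request targets an approved GenAI host."""
    host = (urlparse(url).hostname or "").lower()
    return host in APPROVED_GENAI_HOSTS

if __name__ == "__main__":
    for url in (
        "https://genai.internal.example.com/v1/chat",
        "https://chat.unvetted-tool.example.org/api",
    ):
        verdict = "allowed" if is_approved_genai_endpoint(url) else "blocked"
        print(f"{verdict}: {url}")
```

In practice a check like this would typically live in a proxy or gateway rather than in individual scripts, but the principle is the same: the tool list is maintained centrally, and anything not on it is blocked by default.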

Tightened Security

Your company’s Information Security team should regularly patch systems associated with GenAI to fortify them against vulnerabilities, and it should conduct routine penetration tests to expose weaknesses and close any security gaps. Additionally, high-quality antivirus technology helps thwart potential threats, further safeguarding company assets and activities.

Incident Response Plan

In the dynamic landscape of GenAI, the occurrence of cybersecurity attacks is not a matter of 'if' but 'when.' Developing a playbook for such eventualities enables teams to rapidly contain, investigate, and remediate any events or incidents arising from GenAI-related security breaches.
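As a purely illustrative sketch (the phases and actions below are assumptions, not WSRB's playbook), the core of such a plan can be captured in a small, version-controlled structure that responders can follow quickly under pressure:

```python
from dataclasses import dataclass, field

@dataclass
class PlaybookPhase:
    """One phase of a hypothetical GenAI incident-response playbook."""
    name: str
    actions: list[str] = field(default_factory=list)

# Hypothetical phases and actions for illustration; a real playbook would be
# tailored to the organization and reviewed by Information Security.
GENAI_INCIDENT_PLAYBOOK = [
    PlaybookPhase("contain", [
        "Revoke or rotate any credentials exposed to the GenAI tool",
        "Block the affected endpoint or account",
    ]),
    PlaybookPhase("investigate", [
        "Collect prompt and response logs for the affected time window",
        "Determine whether PII or IP was included in any prompt",
    ]),
    PlaybookPhase("remediate", [
        "Notify stakeholders and, if required, affected individuals",
        "Update training, allow-lists, and controls to prevent recurrence",
    ]),
]

if __name__ == "__main__":
    for phase in GENAI_INCIDENT_PLAYBOOK:
        print(phase.name.upper())
        for action in phase.actions:
            print(f"  - {action}")
```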

-- -- --

GenAI does a lot of wonderful things. I mean, where else can you generate a picture of your cat majestically riding a horse on the moon?

Yet, like other forms of technology, GenAI isn’t without its pitfalls.

As we step forward into this brave new world, it’s important to tread lightly, ensuring that your team is armed with the knowledge, vigilance, and safety precautions necessary to maintain security companywide.


[1] McKinsey, https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai

[2] Harvard Business Review, https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem

[3] TechTarget, https://www.techtarget.com/whatis/feature/AI-lawsuits-explained-Whos-getting-sued

[4] Forbes, https://www.forbes.com/sites/forbesbusinesscouncil/2023/09/07/ai-use-at-work-is-growing-four-risks-to-discuss-with-your-team

[5] Duke, https://blogs.library.duke.edu/blog/2023/03/09/chatgpt-and-fake-citations/

[6] Forbes, https://www.forbes.com/sites/forbestechcouncil/2023/06/29/six-risks-of-generative-ai/

[7] ABC News, https://abcnews.go.com/Technology/ai-fuel-financial-scams-online-industry-experts/story?id=103732051

[8] Malwarebytes, https://www.malwarebytes.com/cybersecurity/basics/risks-of-ai-in-cyber-security

[9] LinkedIn, https://www.linkedin.com/pulse/risks-ai-security-perspective-rakita-zika/

Sarah is WSRB's Security Analyst, with past experience in IT and clinical engineering in the healthcare industry.
