Threat Spotlight: AI and Machine Learning


Executive Summary

  • AI has many different applications, including cybersecurity
  • AI is one tool among many; cybersecurity (or any other field) should not rely on it alone
  • Though threat actors can use these tools for malicious activities, blue teams can anticipate and mimic those actions to train their organizations’ employees
  • As AI becomes more advanced and widely available, it becomes even more pressing for society to protect people’s intellectual property while still allowing use of these tools
  • To make sure we have the same definition of AI and related terms:
    • Artificial intelligence (AI) is broadly the effort to get machines to mimic human behavior
    • Machine learning (ML) is a subset of AI in which a developer provides a machine with new data to train it, so it gives increasingly better answers to questions
    • Deep learning is a subfield of machine learning that uses neural network architectures to train more complex models

Check out our full webinar recording, AI and Machine Learning: The Future of Cybersecurity in 2023, and/or keep reading for the highlights.

AI and Cybersecurity Applications

Flare Head of Software Development Alexandre Viau and Flare Director of Marketing Eric Clay discuss core AI use cases in cybersecurity

There’s a misconception that using AI simply means the machine is analyzing larger quantities of data. Rather, the value is that the machine can spot patterns that would be hard to find manually. For example, it could take a long time to hand-build a rule set of 1,000 rules covering every pattern you want to catch, but there are well-documented machine learning techniques that can spot 100 common patterns more easily than a human could.

AI also performs well when there are large quantities of data and the data, or its shape, keeps evolving. It’s difficult to assign a development team to make decisions based on constantly changing information.

A cybersecurity example: if people are creating fake profiles on a platform or trying unusual login methods, a model can be trained to distinguish normal human behavior from threat actor behavior.
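As a toy sketch of this idea (all data, names, and thresholds below are illustrative assumptions, not Flare’s actual model), a simple outlier-resistant statistic can flag accounts whose login-attempt volume deviates sharply from the norm:

```python
import statistics

# Hypothetical login-attempt counts per account over one hour; the
# last value represents a credential-stuffing attempt. Illustrative
# numbers only.
attempts = [3, 4, 5, 3, 4, 200]

def flag_anomalies(counts, threshold=3.5):
    """Return indices whose modified z-score exceeds the threshold.

    Uses the median and median absolute deviation (MAD), which, unlike
    the mean and standard deviation, are not dragged toward the outlier.
    """
    med = statistics.median(counts)
    mad = statistics.median(abs(c - med) for c in counts)
    if mad == 0:  # all counts (nearly) identical: nothing to flag
        return []
    return [i for i, c in enumerate(counts)
            if 0.6745 * abs(c - med) / mad > threshold]

print(flag_anomalies(attempts))  # → [5], the 200-attempt account
```

A real system would learn from many behavioral features (IP reputation, timing, device fingerprints) rather than a single count, but the principle is the same: model what “normal” looks like and flag deviations.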

ChatGPT for Threat Actors

Flare AI/Data Lead François Masson and Flare Head of Software Development Alexandre Viau talk about malicious applications of ChatGPT

ChatGPT is a viral chatbot that has been trained to generate sensible responses to questions. 

Threat actors can take advantage of this, and the webinar includes a demonstration. For example, a threat actor could ask ChatGPT to write an email about the World Cup. Phishing emails generally have obvious patterns (a “Nigerian prince” offering large sums of money, or a company leader urgently asking the recipient to send sensitive information), but this AI tool can write normal-looking emails well.

It could take time to research and write a convincing email about the World Cup, but ChatGPT can cut that time down to minutes. Instead of taking the time to write one phishing email that could get fingerprinted and blocked immediately, a malicious actor could use ChatGPT to write and send thousands of varied emails. In addition, ChatGPT can write a functional JavaScript form that collects credit card information.

Combining these, a malicious actor could set up a website that looks like a place to buy World Cup tickets, and victims could fall for it, believing they had bought real tickets.

These methods can also work in reverse for blue teams. For example, they could write targeted simulated phishing emails for training purposes, so employees learn to recognize the kinds of emails threat actors might send them.
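A minimal sketch of that blue-team use, using a hand-rolled template pool (in practice a team might prompt an LLM for far more variety; all names, templates, and URLs here are illustrative assumptions):

```python
import random
from string import Template

# Hypothetical message pool for an internal phishing-awareness exercise.
# The link should point to your own training landing page, never to a
# real external site.
TEMPLATES = [
    Template("Hi $name, your $event tickets are confirmed! "
             "Verify your payment details here: $link"),
    Template("$name, last chance to claim your $event seats. "
             "Complete checkout now: $link"),
    Template("Action required, $name: there is an issue with your "
             "$event order: $link"),
]

def simulated_phish(name: str, event: str, link: str, rng=random) -> str:
    """Return one randomly varied training email body."""
    return rng.choice(TEMPLATES).substitute(name=name, event=event, link=link)

print(simulated_phish("Alex", "World Cup",
                      "https://awareness.example.com/landing"))
```

Sending many varied messages like this mirrors what an attacker armed with a chatbot could do, and tracking who clicks the training link shows where awareness training is needed most.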

Ethical Considerations with (New) AI Tools

Flare Head of Software Development Alexandre Viau and Flare Director of Marketing Eric Clay have a conversation about ethical considerations and concerns about (emerging) AI tools

Publicly available AI tools like ChatGPT and DALL-E generate a lot of interest and excitement, as they are fun to tinker with and can even save time or money. However, as developers train these tools on existing art, text, code, and other creative work, what happens when those works are fed to the model without the original creator’s consent?

Artists were upset when they learned that their work had been used to train an AI image-generation tool, Stable Diffusion, which could then replicate their unique styles.

There are other similar applications, like GitHub Copilot, which has been trained on billions of lines of code so that it can help programmers write code. If those code suggestions are based on other people’s code, who owns it?

Legislation struggles to keep up with the fast pace of AI’s progress. As technology advances, society grapples with ethical and legal questions about balancing the availability of these tools with respect for people’s work and intellectual property.

How Flare Can Help

Flare enables you to automatically scan the clear and dark web for your organization’s sensitive leaked data before threat actors can utilize it. Our AI-driven system provides sophisticated analysis to prioritize threats (while cutting out the noise). 

Flare allows you and your security team to: 

  • Monitor around 10 billion leaked credentials 
  • Cut incident response time by about 95%
  • Understand your organization’s external data exposure to improve your security posture

Want to see how Flare can help your organization stay ahead of threat actors? Request a demo to learn more.
