Artificial Intelligence is not a recent invention; it has been conceptualized and developed since the 1950s. Alan Turing, widely regarded as the father of artificial intelligence, published a proposal for a test to distinguish between humans and machines, now known as the Turing Test. In 1952, computer scientist Arthur Samuel developed the first program of its kind: a checkers-playing program that could learn on its own. In more recent years, the creation of GPT-1, GPT-2, and GPT-3 made artificial intelligence far more prominent. These models were created by OpenAI, an artificial intelligence research company co-founded and led by entrepreneur and AI developer Sam Altman. The GPT models are deep learning models that progressively improved chatbot capabilities: GPT-1 was introduced in 2018, GPT-2 in 2019, and GPT-3 in 2020. In November 2022, OpenAI released ChatGPT as a free "research preview," and it became enormously popular, surpassing one million users within five days of launch. With these innovations occurring so rapidly, there has been a recent "influx of government strategies, panels, dialogues and policy papers, including efforts to regulate and standardize AI systems" (Charlotte, 2022, para. 1). This trend will only intensify as advancements in artificial intelligence continue almost daily.

Artificial intelligence was first applied to cybersecurity in the 1990s, when intrusion detection systems (IDS) were being developed. These early systems incorporated AI through simple rule-based algorithms that checked network activity for abnormal behavior.
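The rule-based approach used by those early intrusion detection systems can be illustrated with a minimal sketch. Every field name, rule, and threshold below is a hypothetical assumption for illustration, not taken from any real IDS: each rule is a predicate over a network event, and an alert fires when any rule matches.

```python
# Minimal sketch of rule-based intrusion detection.
# All field names and thresholds are illustrative assumptions.

def failed_login_burst(event):
    # Flag an unusually high number of failed logins.
    return event.get("failed_logins", 0) > 5

def odd_port_access(event):
    # Flag traffic to ports outside an allowed set.
    allowed_ports = {22, 80, 443}
    return event.get("dest_port") not in allowed_ports

RULES = [failed_login_burst, odd_port_access]

def detect(event):
    """Return the names of rules the event violates (empty list = normal)."""
    return [rule.__name__ for rule in RULES if rule(event)]

normal = {"failed_logins": 1, "dest_port": 443}
suspicious = {"failed_logins": 12, "dest_port": 4444}

print(detect(normal))      # prints []
print(detect(suspicious))  # prints ['failed_login_burst', 'odd_port_access']
```

The key limitation of this design, which motivated later machine-learning approaches, is that every pattern of "abnormal behavior" must be anticipated and written by hand.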