Introduction

Artificial intelligence has seen rapid growth over the past few years, with companies such as NVIDIA, Meta, and Google dedicating substantial resources to developing proprietary AI systems. Meta in particular, as the owner of Facebook and Instagram, two of the most widely used social media platforms, has steered the use of artificial intelligence for driving user engagement and curating content to match each user's interests on those platforms. Instagram has its own proprietary section dedicated to short-form videos, called Reels, which is its counterpart to what users find on TikTok, another popular social media platform. The recommendation AI gathers information about the kind of content a user consumes and curates similar content to maximize engagement, an effect that can be both beneficial and harmful. When the content users consume is both AI-generated and news-related, it becomes a serious concern for social media and the spread of misinformation.

There are benefits to AI use in social media, such as improving the user experience and supporting advertising for business pages, but the risks are greater, most notably the potential spread of misinformation and violations of data privacy. The spread of misinformation is an especially significant factor because social media platforms have some of the broadest reach among internet users. Misinformed users can pass false information along, creating ripples of misinformation across platforms that can distort searches and discussion on any related topic. This paper focuses on artificial intelligence in social media, specifically the spread of misinformation and breaches of data privacy, along with the ethical, social, and security concerns they raise.