Legal, Ethical, and Social Implications


Legal Challenges


In the world of social media, artificial intelligence can be used in a multitude of ways, but when it comes to content creation, AI systems often draw on preexisting works to generate content of their own, usually without permission. The generated content can mimic a preexisting work with a distinct style, such as the artwork of an established artist. A key example is the Studio Ghibli art style, a lighthearted, fantasy-like aesthetic that combines nature and vibrant colors with a whimsical feeling, which image generators such as DALL-E and Stable Diffusion frequently reproduce when creating AI art. Beyond drawn artwork, this also extends to photographs, which AI can study and learn from in the same way.

One notable legal case is Getty Images' lawsuit against Stability AI, filed over copyright infringement and the unauthorized use of Getty's images for AI development without a commercial license. Because an AI-generated work borrows from the style of the original, the owner of the original work effectively has their name tied to the generated output as well. Getty Images claims that Stability AI used nearly 12 million of its copyrighted images to train its models, and verifying how each of those images was used in training is a heavy task on its own. With over 12 million images at issue, it becomes a near-impossible undertaking, and under current regulations, copyright law would need some form of update to its policies on AI usage (Coulter 129).

In addition to the risk of copyright infringement, AI also raises legal issues around user data privacy, especially under the General Data Protection Regulation (GDPR), which protects user information unless the user explicitly consents to sharing it. With the rise of AI across various tech systems, there is a chance that a user's data could be used to train an AI system even without their consent, which carries its own share of legal problems.

The legality of AI usage on social media falls along a spectrum, but as AI use spreads across platforms, more AI-generated content appears, especially content about the news and current events. Returning to the earlier discussion of fake news, social media appears to be where the most misinformation circulates, especially when AI is involved. AI has now advanced to the point where it can convincingly replicate the voices and faces of known figures, such as politicians, and that introduces many legal troubles, from tracing a piece of content's origin to AI moderation falsely flagging something as either true or false.


Ethical Considerations


The main issue with artificial intelligence lies in its ethical usage, or the lack of ethics altogether. Returning to the topic of fake news, AI can now be divided into three distinct types: analytical, human-inspired, and humanized (Kaplan 168). Humanized AI shows characteristics of cognitive, emotional, and social intelligence and can form its own positions, particularly regarding politics. It presents as self-aware and could, in fact, hold a full discussion with another human about political topics. This indirectly influences how people think about politics: if they are not well informed by a legitimate news outlet, they may turn to artificial intelligence to shape their views, which is far from ideal and carries a higher risk of spreading misinformation.

On the topic of politics, bias is a significant factor for certain news outlets. Depending on the AI system, it can be tailored to prefer one side over another, especially when it is trained on different news outlets that lean left or right, which skews the results considerably. At the level of the algorithm, the system can also steer the user toward content with a clear bias for the left or the right, or even into a pit of fake news and unclear motivations.
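
To make that mechanism concrete, the short sketch below is purely illustrative: the counts, the labels, and the idea of a simple majority prior are assumptions for demonstration, not real data or any vendor's actual model. It simply shows how an imbalance in what a system is trained on carries through to what it produces.

```python
# Illustrative sketch only: hypothetical counts, not real data or any
# vendor's actual model. A naive majority prior shows how training-data
# imbalance alone can tilt a system's default output toward one side.
from collections import Counter

# Hypothetical corpus with an assumed 80/20 split between outlet leanings.
training_corpus = ["left-leaning"] * 8000 + ["right-leaning"] * 2000

label_counts = Counter(training_corpus)
total = sum(label_counts.values())

# With no other signal, a model falls back on these base rates, so its
# "neutral" output already reflects the imbalance in what it was fed.
for leaning, count in label_counts.items():
    print(f"{leaning}: {count / total:.0%} of training data")
```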

Social Implications


Social media is where the social implications of artificial intelligence are felt most strongly. Again, the algorithm can personalize content based on a user's preferences and past behavior, but it can also trap the user in an echo chamber where the same ideals and thoughts are fed back repeatedly, especially in a news setting. This crowds out differing viewpoints on an issue and can drift away from the truth if the user consumes too much of that content. Over time, the user becomes less open to civil discussion on the topic because of the one-sided content they have been consuming.
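
As a rough illustration of that feedback loop, the sketch below uses hypothetical topics and a deliberately simplified ranking rule; it is not any platform's actual recommender. Ranking purely by past engagement is enough to narrow a feed toward a single topic within a few steps.

```python
# Hypothetical, deliberately simplified recommender: it ranks posts purely by
# how often the user has engaged with each topic before, so early clicks
# compound into an increasingly one-sided feed (an echo-chamber loop).
from collections import Counter

POSTS = [
    {"id": 1, "topic": "politics-left"},
    {"id": 2, "topic": "politics-right"},
    {"id": 3, "topic": "sports"},
    {"id": 4, "topic": "science"},
    {"id": 5, "topic": "politics-left"},
]

def recommend(history: Counter, posts: list, k: int = 3) -> list:
    # Higher past engagement with a topic pushes similar posts up the feed.
    return sorted(posts, key=lambda p: history[p["topic"]], reverse=True)[:k]

history = Counter()
for step in range(5):
    feed = recommend(history, POSTS)
    clicked = feed[0]                  # assume the user clicks the top item
    history[clicked["topic"]] += 1     # that click feeds back into the ranking
    print(f"step {step}: feed topics = {[p['topic'] for p in feed]}")
```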

Returning to generative AI and the risks its presence poses in a social media space, from a social standpoint it falls into the same trap: misinformation can skew a user's perception of a situation, and it can also spread to other users through the algorithm. It can stir conflict among communities and erode the trust built up across the internet as fake news circulates. Generative AI also splits communities between people who advocate for generated art and those who defend human-made art, with some arguing there is no need for human creativity if machines can produce the same thing. AI has advanced to the point where it is difficult to tell the difference between something human-made and machine-made, and that alone drives a great deal of engagement and argument over the usage of AI.