Generative AI raises several key ethical themes that affect many aspects of society. This is why institutions like UNESCO and nations such as the United States have developed principles, guidelines, and policies to promote the responsible use of AI. The U.S. (under the Biden Administration) even published a "Blueprint for an AI Bill of Rights" to help address issues like:

- Safe and effective systems
- Algorithmic discrimination
- Data privacy
- Notice and explanation
- Human alternatives, consideration, and fallback
These are only a few examples of the broader ethical dilemmas we must navigate as AI technology becomes more integrated into society. (Read more about the AI Bill of Rights.)
The resources provided after each thematic overview are meant to serve as starting points for your own exploration into these topics. Many of these resources are based on U.S. perspectives, so it's important to consider how these issues may vary in different global contexts.
Algorithms are often seen as impartial decision-makers, but as the Axios video below shows, they can reflect and amplify human biases. From facial recognition errors that disproportionately affect dark-skinned women to courtroom risk assessments that unfairly target Black defendants, biased training data and a lack of diversity among developers can lead to significant injustices.

With algorithms playing critical roles in decisions about loans, jobs, and even justice, ensuring "algorithmic accountability" is essential. As these systems become more complex and less transparent, promoting diversity among their creators and scrutinizing their training data are crucial steps toward fairness and equity. Watch the Axios video (2018) to learn how biases get baked into AI.
Machine learning powers many technologies we use daily, from navigation to voice assistants. Unlike traditional programming, where solutions are explicitly coded, machine learning relies on patterns in data to "learn" solutions. However, this process can inadvertently embed human biases into technology.
The Google video (2017) highlights three key types of bias in machine learning: interaction bias, where user inputs skew results; latent bias, where historical data reinforces stereotypes; and selection bias, where unrepresentative training data excludes certain groups. Recognizing and addressing these biases is essential to ensuring technology serves everyone equitably, which makes awareness and inclusive practices critical when developing machine learning systems.
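To make selection bias concrete, here is a minimal sketch in Python. The groups, data-generating patterns, and numbers are all invented for illustration: a classifier trained on a sample that under-represents one group learns the majority group's pattern and performs far worse for the minority group.

```python
# Toy illustration of selection bias: the training sample under-represents
# group B, so the model learns group A's pattern and fails on group B.
# All data here is synthetic and invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def group_a(n):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # group A's underlying pattern
    return X, y

def group_b(n):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] - X[:, 1] > 0).astype(int)   # a different underlying pattern
    return X, y

# Selection bias: 95% of the training data comes from group A.
Xa, ya = group_a(950)
Xb, yb = group_b(50)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# On balanced held-out sets, accuracy is high for the well-represented
# group and near chance for the under-represented one.
for name, (X, y) in [("group A", group_a(2000)), ("group B", group_b(2000))]:
    print(f"{name} accuracy: {model.score(X, y):.2f}")
```

Nothing about the model here is malicious; the skew in who appears in the training data is enough to produce unequal performance.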
Deepfakes are highly realistic, computer-generated images or videos that manipulate a person's appearance or speech, often to portray them doing or saying things they never actually did. They are created with AI models that learn from large datasets of real media and then generate new content that can be strikingly convincing. While this technology enables creative applications in entertainment, marketing, and art, it also poses ethical and societal challenges, raising concerns about misinformation, privacy breaches, and malicious use. Even the Department of Homeland Security published a guide on the "Increasing Threat of DeepFake Identities" (2021), which it followed with "Phase 2: Deepfake Mitigation Measures" (2022).
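Many deepfake systems build on generative models, a common example being generative adversarial networks (GANs), in which a generator learns to produce content while a discriminator learns to tell it apart from real media. Below is a minimal, hypothetical sketch of that adversarial idea in Python with PyTorch, using toy one-dimensional data in place of images; real deepfake pipelines are vastly larger and more specialized.

```python
# Minimal sketch of adversarial (GAN-style) training, using toy 1-D data
# in place of images. Everything here is illustrative, not a real system.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=64):
    # Stand-in for "real media": samples from a simple target distribution.
    return torch.randn(n, 1) * 0.5 + 2.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator step: learn to separate real samples from generated ones.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (bce(discriminator(real), torch.ones(64, 1))
              + bce(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: learn to produce samples the discriminator calls "real".
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should cluster near the real data's mean.
print(generator(torch.randn(5, 8)).detach().squeeze())
```

The same adversarial dynamic, scaled up to faces and voices, is part of what makes deepfakes so convincing and so difficult to detect.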
Watch the following Al Jazeera video from 2021 to learn about the potential (and already realized) dangers of deepfakes.
Consider exploring and participating in the research project from Northwestern University's Kellogg School of Management. The first time you encounter this activity, you will be asked to agree to participate in the project. For more information, read their Informed Consent page. (Feel free to skip the activity!)
In the activity, you will examine up to 220 images and decide whether you think each one is fake. For some images, you'll have a time limit for your examination (for example, 10 seconds for some images, but only 1 second for others).
Once you make your choice, use the slider to indicate your level of confidence, then click Submit.