The NEW Ethics of Artificial Intelligence
DOI:
https://doi.org/10.58445/rars.2948

Keywords:
Artificial intelligence, Ethics, Training data, Moral reasoning, Algorithmic bias, Developer intent

Abstract
As artificial intelligence systems become increasingly integrated into societal decision-making, the ethical reasoning of these models demands closer examination. This paper investigates how AI models handle ethical scenarios differently depending on three key factors: training data, algorithmic architecture, and human intention. These factors are essential for understanding why ethical outcomes vary so widely across systems that may seem similar at first glance.
First, the training data that shapes an AI model fundamentally determines its moral responses. Whether diverse or biased, a dataset influences how a model interprets ethical dilemmas, reflecting the cultural, historical, and systemic values embedded in that data. Second, the architecture of an AI system, from rule-based frameworks to deep learning models, affects how information is processed and how moral decisions are made; the complexity and structure of these systems shape the depth, flexibility, and adaptability of their ethical reasoning. Finally, human intention plays a critical role in guiding AI development, from the values engineers embed in systems to the goals they prioritize. This human oversight can steer, or even distort, the ethical performance of AI.
Together, these elements show that AI's moral reasoning is neither natural nor universal; it is constructed and highly variable. Understanding these influences is crucial not only for improving current systems but also for designing future AI that can act with ethical integrity across diverse contexts. This research emphasizes that creating ethical AI is less about enforcing a single set of rules and more about intentional, representative design.
License
Copyright (c) 2025 Ehaan Sair

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.