Preprint / Version 1

Conscious AI Should Be Managed Similarly to Humans

Authors

  • Arushi Saurabh, California High School

DOI:

https://doi.org/10.58445/rars.194

Keywords:

artificial intelligence, AI, Computer Science, ethics

Abstract

AI systems are becoming increasingly prominent and ubiquitous in our daily lives. For example, AI systems power social media recommendation feeds and large language models (e.g., ChatGPT), where they are used to predict user behavior and text input. While these AI systems can be useful, they still struggle to align with human values. This paper conducts a systematic review of ethical AI design methods and discusses instances where these methods helped align AI systems with human values. It then discusses how the development of human consciousness in AI might change how these design methods are implemented. This paper provides a potential way to think about how to manage AI in the event that it develops human consciousness.


Posted

2023-05-01