CCAC Theses and Dissertations

Date of Award

2026

Document Type

Dissertation

Degree Name

Doctor of Philosophy in Cybersecurity Management

Department

College of Computing, AI, and Cybersecurity

Advisor

Ling Wang

Committee Member

Zulma Westney

Committee Member

Junping Sun

Keywords

AI, ChatGPT, Ethics

Abstract

With the general population’s recent and dramatic increase in the use of ChatGPT and similar Artificial Intelligence Generated Content (AIGC) tools across various industries, gray areas are becoming more prominent regarding whether ChatGPT use is ethical or unethical in certain situations. Examples of unethical use of ChatGPT include plagiarism, drawing conclusions from inaccurate information, and creating malicious code that harms companies. Not all of these ethical concerns are necessarily the fault of the user or of the AIGC. To date, peer-reviewed research on the ethical use of ChatGPT is limited, primarily due to its novelty. Of the research available, the primary focus has been on AI’s flaws and how they can lead users into unethical situations. A review of the current literature revealed a lack of studies specifically examining users’ perspectives on the ethical use of ChatGPT. That is, although an AIGC may have inherent flaws in handling data or generating results, the user’s intent in using the AIGC has not been analyzed as a factor in ethics. As a result, this study aimed to create and validate a survey to establish a cornerstone for ethical guidelines by identifying the boundaries of human behavior when using powerful technologies such as AIGC. The study contributed to the growing body of research on the constantly evolving recommendations for the ethical use of ChatGPT, and its results may serve the general public by improving the safety, security, and healthy use of AIGC products. The survey presented 50 scenarios with ethical implications to subject matter experts (SMEs), who were asked to evaluate which items were least impactful in creating a baseline for ethical ChatGPT use. This ultimately reduced the number of scenarios to 38.
The second part of the study administered the validated survey to participants over 18 years old, who rated each item on a Likert scale from 1 to 5, indicating how ethical or unethical they perceived it to be. The data were then analyzed using the mode to determine central tendency and frequencies to assess variability, and the results were applied to answer the study’s research questions. Among other findings discussed, the participant data indicated that ChatGPT was perceived to perform poorly in healthcare scenarios: a majority of the bottom 10 scenarios, ranked by mean score, concerned healthcare. The analyzed participant data allowed the study to answer Research Questions 1, 2, and 3; Research Questions 4 and 5 could not be answered due to a lack of demographic data from the participant survey. Overall, the study’s findings contribute to future research by providing a starting point for other researchers to continue gauging the public’s perception of the ethical use of ChatGPT. Future studies could target more specific populations using the survey, or expand it by adding or altering scenarios based on current trends in ChatGPT usage. The data could also provide perspective for policymakers looking to create or change AI-related policies.
