1. Introduction to Artificial Intelligence in Scientific Research

Artificial intelligence (AI) is a powerful tool that is gaining traction in scientific research. A plethora of AI models, such as language models, image generators, and protein folding models, now produces a broad range of predictions, outcomes, and designs, contributing to significant scientific discoveries [1]. Today's commercial AI products are powered by large generic models pre-trained on broad, public data. Applied broadly and flexibly, these models can immediately boost productivity in some disciplines. However, they also bring a spectrum of unforeseen risks arising from their misuse. The scientific community employing AI must be aware of these risks and actively mitigate them. Violence, bioweapon engineering, and, in the worst cases, disruption of the social order are all potential abuses of models like ChatGPT. Uncontrolled, falsely licensed AI-generated vaccines or medicines could be disastrous for human and environmental safety. AI models that generate toxic or illegal images, deepfakes, or jump-scare videos can stir public panic or push society toward pervasive surveillance and oppression. Disclosure of AI-generated personal data can likewise undermine foundations of human civilization such as privacy, democracy, equality, and freedom.
In academia, the irresponsible use of AI may impair the fairness of peer review, the rationality of scientific evaluation, and the quality of science itself, cutting off the fundamental source of scientific progress and knowledge. These potential misuses need to be fully understood and proactively controlled within a scientific framework. Science is a distinctive paradigm devoted to uncovering the nature of the universe and its building blocks. It is a self-correcting and transparent process that publishes robust hypotheses and models widely for everyone to test, improve, replicate, or debunk. Importantly, scientists follow the path laid out by nature, never arbitrary opinions, deities, or idols. Balancing the ethical principle of open science against security concerns such as weapons proliferation and large-scale attacks that no longer require sophisticated equipment or technical expertise is delicate and intricate. As the cost of knowledge generation decreases, brute-force research may emerge, pushing exploration from the coastal shallows into the deep sea of scientific computation. It is therefore essential to control the risk of AI misuse in science.
2. Benefits and Applications of Artificial Intelligence in Scientific Research
In recent years, artificial intelligence (AI) has transformed scientific research across numerous fields, including chemistry, biology, physics, mathematics, and materials science [1]. AI models have generated new insights and knowledge in diverse modes, such as lanthanide metal mapping, protein folding prediction, peptide sequence design, and the in silico discovery of new materials. Even in domains previously believed to be beyond AI's reach, such as scientific writing and art creation, researchers are witnessing remarkable progress. This influx of AI models is analogous to the invention of the laser, the internet, and other epoch-making technologies. Despite these astonishing achievements, the scientific community employing AI ought to remain alert to the potential misuse of these models and its implications for scientific integrity and societal welfare.
Several AI-based systems now play a vital role in pharmacological research, spanning drug discovery, drug development, and personalized medicine. To bridge the gap between data and drug discovery, various machine learning and AI approaches are being adopted across pharmacology [2]. In addition, molecular dynamics simulation studies are increasingly guided by AI and machine learning. These AI-based applications have already produced strong results in pharmacological research around the world.
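As a minimal illustration of the kind of machine learning approach referred to above, the following sketch trains a property-prediction model on simple molecular descriptors, in the spirit of QSAR-style drug discovery pipelines. The descriptors, the synthetic data, and the model choice are illustrative assumptions for this sketch, not the method of any cited study.

```python
# Minimal QSAR-style sketch: predict a bioactivity value from simple
# molecular descriptors. Data here is synthetic; real pipelines would
# compute descriptors from molecular structures (e.g., with RDKit).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical descriptors: molecular weight, logP, H-bond donors.
X = np.column_stack([
    rng.uniform(150, 600, n),   # molecular weight
    rng.uniform(-2, 6, n),      # logP (lipophilicity)
    rng.integers(0, 6, n),      # hydrogen-bond donors
])
# Synthetic "activity" with noise, standing in for assay measurements.
y = 0.01 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", r2_score(y_test, model.predict(X_test)))
```

In a real pipeline, the synthetic descriptor matrix would be replaced by descriptors computed from actual molecular structures, and performance would be validated on held-out chemical series rather than a random split.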
3. Challenges and Limitations of Artificial Intelligence in Scientific Research
The integration of artificial intelligence (AI) models into scientific research has brought remarkable progress across various disciplines, including chemistry, biology, physics, mathematics, the earth sciences, and astronomy. However, these models harbor the potential for misuse, which could pose profound risks to both scientific integrity and societal welfare. Misuse includes the dissemination of toxic, harmful, or misleading scientific knowledge; the fraudulent securing of funding, patents, or academic positions; and the facilitation of criminal and terrorist activities. Another concern is that the models could be employed to undermine the credibility of scientific publications, thereby threatening the reliability of societal knowledge systems. Lowered trust in scientific methodologies could lead to the rejection of scientific knowledge altogether, endangering public health and safety [1]. These concerns are not merely hypothetical; similar strategies have already been attempted. Thus, it is imperative that the scientific community employing AI remains cognizant of these risks and proactively endeavors to mitigate them.
The scientific community employing AI must work together to determine how best to ensure that its positive impacts are realized while the risks of misuse are effectively reduced. To that end, it is advisable to begin from the understanding that scientific integrity is inseparable from human integrity. The integrity of AI-driven science, like that of any other science, fundamentally relies on the integrity of its practitioners: researchers, developers, and users. There is thus a need for firm commitment to a set of ethical guidelines comprising awareness of associated risks, self-restraint regarding misuse, and open collaboration with the policy sector to ensure regulatory compliance. This study emphasizes the necessity of a steadfast commitment to the principles of responsible AI within the scientific arena, with involvement from all stakeholders.
4. Ethical Principles and Guidelines in Scientific Research
The ethical principles and guidelines of scientific research constitute the basic normative ethics that every scientist ought to respect. These normative proposals help to uncover ethical problems and dilemmas, and they help scientists minimize the ethical errors and violations of ethical conduct that can arise in the research process [3].
Thus, a review of the basic ethical principles and of international and national guidelines in scientific research contributes to the debate about the role of ethical principles and concerns in scientific research in general and in AI modelling in scientific research in particular. The aim is to establish the basic ethical principles and guidelines in order to provide a foundation for the ethical discussion of AI in scientific research. As such, the intention is not to discuss potential scenarios or hypothetical AI applications that could arise in scientific research, nor potential technological difficulties or problems with AI applications. AI modelling in scientific research is not merely an issue of technological concern, and its challenges cannot be settled at the level of individual applications; the ethical discussion is essential and imperative and needs to be conducted at the normative ethical level [4].
5. Ethical Issues in the Use of Artificial Intelligence in Scientific Research
The change brought about by artificial intelligence (AI) is often underestimated. AI affects essentially all parts of society, even where its influence goes unacknowledged, and scientific research is not exempt. Conducting research with AI is very different from conducting research without it, both ethically and methodologically. Ethical issues arise from AI's use in the research domain and need to be addressed. Ethical issues in research are not new, but the introduction of AI gives them a distinctive character. There are also many broader societal discussions about the ethics of AI beyond research. Even so, these ethical issues deserve to be explored from the specific perspective of scientific research [1].
AI now has a prominent role in the conduct of scientific research. AI technology has moved into direct use by scientists. Many scientists have started to leverage AI systems to carry out traditional research processes, such as scoping literature, developing hypotheses, designing experiments, running experiments, and analyzing experimental results. Some even use AI tools to automate the whole process, from iteratively generating research questions to submitting finished manuscripts with minimal human effort. AI models in today's research increasingly function as collaborating scientists ("co-scientists") working alongside human researchers. There is an ongoing, progressive merging of AI and research, with AI shifting from tool to research collaborator and co-author. This means that AI agents are already actively and autonomously involved in conducting research and handling model outputs, as the sketch below schematizes. With the development and deployment of stronger AI models, these trends are expected to become even more pronounced [3].
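To make the workflow described above concrete, here is a purely schematic sketch of an AI-assisted research loop. The `query_model` function is a hypothetical stand-in for a language-model API, and the stage names and return values are illustrative assumptions, not a description of any real system.

```python
# Schematic AI-assisted research loop. `query_model` is a hypothetical
# placeholder for a language-model call; swap in a real client as needed.
from dataclasses import dataclass, field

def query_model(prompt: str) -> str:
    """Hypothetical LLM call; returns a canned string in this sketch."""
    return f"[model output for: {prompt[:40]}...]"

@dataclass
class ResearchState:
    question: str
    notes: list = field(default_factory=list)

def research_iteration(state: ResearchState) -> ResearchState:
    # Each stage mirrors a step named in the text above.
    literature = query_model(f"Summarize prior work on: {state.question}")
    hypothesis = query_model(f"Propose a hypothesis given: {literature}")
    design = query_model(f"Design an experiment to test: {hypothesis}")
    analysis = query_model(f"Analyze expected results of: {design}")
    state.notes += [literature, hypothesis, design, analysis]
    # A human reviewer should gate each iteration in responsible use.
    return state

state = ResearchState("Does descriptor X predict catalytic activity?")
state = research_iteration(state)
print("\n".join(state.notes))
```

The point of the sketch is structural: each stage of the traditional research process becomes a model call, which is precisely why human review gates and provenance tracking matter in responsible deployments.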
6. Bias and Fairness in Artificial Intelligence
Bias and fairness are increasingly important ethical issues in artificial intelligence (AI). As AI systems have become ubiquitous in practice, it has been recognized that these systems are frequently biased and discriminatory. AI systems can produce biased outputs based on individuals' perceived characteristics, such as their race or socioeconomic status. Biases in algorithmic decision-making systems tend to mirror and amplify societal biases, generating a feedback loop that can entrench discrimination. The emergence of biased and discriminatory AI systems raises fundamental ethical questions and motivates a growing body of interdisciplinary work connecting theories of justice with lived experiences of discrimination [5].
Understanding how the “fairness” of an algorithm can be defined, assessed, and ensured raises a similarly urgent set of technical challenges, motivating a body of work seeking to engineer fairness into AI. Broadly, bias and fairness refer to the ethical questions associated with the degree to which the outputs of AI decision-making systems depend on sensitive individual attributes (e.g., race or gender). Bias can be understood to lie along a spectrum, with discrimination representing its most harmful form. A “fair” algorithm is often taken to mean one free from bias, whose output does not depend on sensitive features [6]; the sketch below illustrates one common way to quantify this.
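As a concrete instance of the fairness notion just described, the following sketch computes the demographic parity difference, a standard measure of whether an algorithm's positive-outcome rate depends on a sensitive attribute. The data, group labels, and approval rates are illustrative assumptions.

```python
# Demographic parity difference: gap in positive-prediction rates
# between two groups defined by a sensitive attribute. Zero means
# the outcome rate is independent of the attribute (by this metric).
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray,
                                  sensitive: np.ndarray) -> float:
    rate_a = y_pred[sensitive == 0].mean()  # positive rate, group 0
    rate_b = y_pred[sensitive == 1].mean()  # positive rate, group 1
    return abs(rate_a - rate_b)

# Illustrative predictions and group labels.
rng = np.random.default_rng(1)
sensitive = rng.integers(0, 2, 1000)
# A deliberately biased classifier: group 1 approved more often.
y_pred = (rng.random(1000) < np.where(sensitive == 1, 0.7, 0.5)).astype(int)

gap = demographic_parity_difference(y_pred, sensitive)
print(f"demographic parity difference: {gap:.3f}")  # roughly 0.2 here
```

Demographic parity is only one of several competing fairness criteria, and which criterion is appropriate depends on the application context.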
7. Transparency and Accountability in Artificial Intelligence
As AI is incorporated into decision-making processes that impact individuals and society, there is a fundamental ethical need to guarantee transparency in the design and outcomes of automated processes [7]. Important choices about models, training data, and algorithm parameterization can lead to irrational or socially unacceptable decisions, and transparency is increasingly required when algorithms can affect creditworthiness, job opportunities, access to public services, privacy, and safety. It is necessary to clarify what transparency means: it can refer to the openness of the design process, data provenance, the significance of the underlying models, process performance, effects on privacy, how outcomes can be interpreted, and the criteria for rendering decisions [8].

Transparency can serve several purposes, such as investigating and debugging malfunctioning systems, informing benign users, gaining trust, empowering those affected by the technologies, and enabling oversight. While an understanding of system details can help critique decisions, it does not make those decisions comprehensible to all. Providing purely mathematical characterizations of complex algorithms may still leave users in “the technical dark”; interpreting even descriptive statistics requires computer literacy, and such statistics can themselves be deceptive. How usable and intelligible data-accounting technologies would need to be in practice remains an open question.

The challenge in making algorithmic decision-making transparent stems from an essential tension. Classical algorithms are characteristically stable, predictable, and amenable to auditing. Deep AI systems, on the other hand, are often opaque black boxes, are extremely sensitive to low-level data perturbations, and can lose efficacy when continuously opened up to scrutiny. Algorithmic decisions must be correct and correspond to their mathematical intent; this is an important precondition for public trust and acceptance. A dual notion of accountability follows. The first element is traceability: the chains of algorithms, actors, decisions, and data must be tracked completely and coherently through technologies and practices, as the sketch below illustrates. The second is governance: the organizations and actors responsible for the sustained functioning of AI systems must be capable of being held to account.
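To ground the traceability notion above, here is a minimal sketch of an audit-trail record that binds a model decision to the model version, the input data, and a responsible actor. The field names and hashing scheme are illustrative assumptions rather than a prescribed standard.

```python
# Minimal decision audit record: one append-only entry per model
# decision, linking data, model version, actor, and output so the
# chain can be reconstructed later. Illustrative schema only.
import hashlib
import json
import time

def make_audit_record(model_version: str, actor: str,
                      inputs: dict, decision: str) -> dict:
    payload = json.dumps(inputs, sort_keys=True).encode()
    return {
        "timestamp": time.time(),
        "model_version": model_version,       # which algorithm acted
        "actor": actor,                       # who deployed/invoked it
        "input_digest": hashlib.sha256(payload).hexdigest(),
        "decision": decision,                 # what was decided
    }

audit_log = []  # in practice: append-only and tamper-evident
audit_log.append(make_audit_record(
    model_version="credit-model-2.3",
    actor="loan-officer-17",
    inputs={"income": 42000, "history_len": 7},
    decision="approved",
))
print(json.dumps(audit_log[-1], indent=2))
```

Hashing the inputs rather than storing them directly keeps the record linkable to the original data without duplicating potentially sensitive content in the log.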
Accountability in Artificial Intelligence
Turning to the question of autonomy in AI, the construction of AI systems must be understood as a multi-participant process in which specific choices lead to automated activities that impact human lives and social organization. From this user-centered perspective, the most immediate issues are the theoretical and conceptual implications of shifting responsibility for decisions onto insentient actors incapable of bearing it. Responsibility comes in various shapes: legal, organizational, political, or personal. It can be considered a constellation of beliefs binding punishable, risk-of-harm activities to suitable actors, or insuring against overly risky actions. In most social contexts, there is a general expectation of agency from persons and companies, which in turn assures accountability through chains linking actions to potential sanctions. Law might not fully embrace non-human actors: the current legal framework globally envisages hard and soft sanctions and performance standards solely for recognized legal persons. As agents in cooperative partnerships, software systems might become decision-makers in tort-liability causal explanations that base fault on complexity and risk rather than intent. Finally, turning to parties that shape social organization, it is unclear how companies could be expected to fulfil the moral obligations currently associated with the discretion of states. Would tepid non-compliance fines and the formal redistribution of decision-making suffice to replace sovereign states?
8. Privacy and Data Protection in Artificial Intelligence
Recent advancements in artificial intelligence (AI) have transformed the ways in which data is captured, assessed, organized, applied, and stored. Many everyday devices and tools use AI applications to upgrade and facilitate daily tasks. As a result, AI tools are being incorporated into services, products, and devices at an unprecedented rate, bringing numerous conveniences to everyday life, from search engines to payment systems and chat applications [9]. However, the exponential growth of AI applications brings an abundance of ethical considerations. Recent headlines highlight privacy concerns: chat applications altering how presentations and work are produced, news algorithms and social media filters altering the perception and flow of information, and mobility gadgets being incorporated into security systems [10]. Such AI systems gather excessive personal information about users' habits, thoughts, perceptions, and movements; change the way personal interviews and focus groups are conceived; and erode privacy and confidentiality.
Like any technologically evolved application, AI is built from code and algorithms that dictate how conditions and samples are considered and what actions are proposed, and AI applications are fed varying kinds of information in order to progress, learn, and evolve. Ethics plays a decisive role in establishing rules, obligations, and norms of application. In the context of AI, ethics can be regarded as the models, guidelines, directions, and backgrounds under which the development and application of AI systems are considered. Such ethical considerations divide into several areas, of which privacy, concerning the use, management, and application of data, is a critical one. Under this consideration, it must be assessed whether the use of AI in scientific research violates or clashes with the notion of privacy, and what information is deemed private; one privacy-preserving technique is sketched below.
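As one concrete technique for reconciling data use with the privacy concerns described above, the following sketch applies the Laplace mechanism from differential privacy to a simple count query. The epsilon value and the survey scenario are illustrative assumptions; this is the technique in miniature, not a production system.

```python
# Laplace mechanism sketch: release a noisy count so that any single
# individual's presence changes the output distribution only slightly.
import numpy as np

def dp_count(values: np.ndarray, epsilon: float) -> float:
    """Differentially private count of True entries.

    The sensitivity of a count query is 1 (one person changes the
    count by at most 1), so Laplace noise of scale 1/epsilon suffices.
    """
    true_count = float(values.sum())
    noise = np.random.default_rng().laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative data: which of 1000 survey participants answered "yes".
answers = np.random.default_rng(2).random(1000) < 0.3
print("true count:", int(answers.sum()))
print("private count (eps=0.5):", round(dp_count(answers, 0.5), 1))
```

Smaller values of epsilon add more noise and thus give stronger privacy at the cost of accuracy; choosing epsilon is a policy decision as much as a technical one.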
9. Human Autonomy and Control in Artificial Intelligence
The ethics of artificial intelligence and robotics concerns systems that can be more or less autonomous. The design of technical artefacts has ethical relevance for their use; for example, the design of automobiles influences how they can be used, although driving still requires a certain competence. This also underlines the need to consider the different stages of an artefact's life, which can be roughly divided into: before a technology or artefact is introduced into society (design policy, funding, research agendas, etc.); while it is being developed (the three Rs); and once it is in use (regulation and emergency plans).
Different disciplines look at these stages differently. The natural sciences try to describe systems as they are; the social sciences describe systems in relation to their environment, emphasizing human agency and social structures, and form norms for systems as they ought to be; economic studies look to self-regulatory mechanisms such as 'invisible hands' and market efficiency. There is also a distinction between what is ethically permitted (or not) and what is ethically required (or not) [11].
10. Responsibility and Liability in Artificial Intelligence
Responsibility and liability are core ethical dimensions of artificial intelligence (AI) and a wide variety of related technologies [11]. Applications of AI are numerous and include finance, ticketing, surveillance, information retrieval, recruitment, scientific research, industrial production, social credit systems, care for the elderly, etc. The unpredictability of AI systems makes debates about responsibility and liability pertinent.
Attribution of responsibility and liability is particularly relevant in scientific research. What happens if a faulty AI model generates a scientific breakthrough that is later found responsible for massive financial losses? Who takes responsibility? And if no one up the chain does, are the model itself or its operators to blame? AI systems are variously referred to as tools, instruments, or even agents (be they “intelligent agents” or “autonomous agents”). The core question is whether scientific, moral, and/or legal responsibility should be assigned, and to whom or to what.
11. Regulatory Frameworks and Policies in Artificial Intelligence
The growing integration of artificial intelligence (AI) technologies into scientific research poses new moral dilemmas and unintended repercussions, calling for caution and careful policy-making in this fast-paced field [9]. It is vital to recognize the importance of approaching these challenges from the outset of any AI deployment or design initiative. Given the distinctive characteristics of AI systems, the ethical considerations surrounding their use may differ from those encountered with other technologies employed in science [12]. Solutions that work for established technologies, whether simple chemicals or complex redox flow batteries, may prove insufficient for AI applications. Thus, it is evident that discussions of ethical issues ought to take place at the design stage of an AI-based technology or product.
The present section reviews the complexity of, and issues associated with, AI technologies within scientific research and the urgent need to establish frameworks and policies to ensure the integrity of the scientific endeavor.
12. Case Studies and Examples of Ethical Dilemmas in Scientific Research
Case studies and examples are presented to illustrate the ethical considerations of using artificial intelligence in scientific research. The studies show how the use of AI can lead to ethical dilemmas, including issues of privacy, transparency, accountability, bias, generalization, and the non-translatability of context, language, and methodology. The cases also highlight further dilemmas, such as the value-free ideal, the sell-out ideal, and the field's insensitivity to the importance of data, power, and voice, and to how the way knowledge is pursued and made usable shapes reality [1].
A recent study investigated ethical dilemmas associated with the use of machine learning in scientific research, using citations from Elsevier as a data source. A Web of Science Model (WoS Model) was created to identify important parameters related to such dilemmas. The 23 parameters identified were further classified into nine themes, each addressing a specific aspect of the ethical analysis of machine-learning research in the sciences. Such models can be particularly useful for conducting research in scientific fields in which AI tools are just emerging [3].
13. Future Directions and Emerging Trends in Ethical AI Research
Ethical AI research can be considered through a wide variety of lenses and dimensions. Here, attention is limited to ethical considerations for AI technologies and systems able to perform knowledge-construction tasks (such as writing texts, creating music, producing visual images, and designing experiments) that were previously undertaken only by human researchers. Within this frame of reference, both the ethics of using AI systems for knowledge construction and the normative considerations concerning such systems themselves are in scope, and 11 future directions and emerging trends are mapped and described in detail [13][9].
14. Conclusion and Recommendations
As artificial intelligence systems are swiftly developed and adopted, ethical consideration of their effects lags significantly behind. AI systems possess the unique ability to accelerate, enhance, and impede the pace of scientific research, sometimes all at the same time. Most importantly, the impact of AI systems on the scientific research landscape must be assessed. To this end, the ethical principles of the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers that apply to the role of AI in scientific research were identified and reviewed. These guiding principles were analyzed for feasibility and gaps in light of how the development and use of AI systems interact with scientific research. Finally, recommendations were put forth to guide developers, users, and organizations in the ethical use of AI systems in scientific research [3].
A working definition and guiding principles for the constructive and ethical use of artificial intelligence methods in scientific research are proposed, along with ideas for creating trustworthy AI environments, in the hope of inspiring future ethical AI research in scientific domains. There is extensive academic and industrial interest in, and a large number of ongoing initiatives on, adapting artificial intelligence tools to advance scientific research. However, ethical consideration of such tools' effects on scientific research as a whole, as opposed to their effects on individual researchers, is largely absent. It is crucial to consider, beyond individual use cases, how these tools will reshape the scientific research landscape as a whole and what unintended consequences this reshaping may have [9].
References:
[1] J. He, W. Feng, Y. Min, J. Yi et al., “Control Risk for Potential Misuse of Artificial Intelligence in Science,” 2023. [PDF]
[2] S. Singh, R. Kumar, S. Payra, and S. K. Singh, “Artificial Intelligence and Machine Learning in Pharmacological Research: Bridging the Gap Between Data and Drug Discovery,” 2023. ncbi.nlm.nih.gov
[3] S. Goltz and G. Dondoli, “A note on science, legal research and artificial intelligence,” 2019. [PDF]
[4] R. E. Tractenberg, “What is ethical AI? Leading or participating on an ethical team and/or working in statistics, data science, and artificial intelligence,” 2023. osf.io
[5] X. Ferrer, T. van Nuenen, J. M. Such, M. Coté et al., “Bias and Discrimination in AI: a cross-disciplinary perspective,” 2020. [PDF]
[6] O. Bohdal, T. Hospedales, P. H. S. Torr, and F. Barez, “Fairness in AI and Its Long-Term Implications on Society,” 2023. [PDF]
[7] N. Díaz-Rodríguez, J. Del Ser, M. Coeckelbergh, M. López de Prado et al., “Connecting the Dots in Trustworthy Artificial Intelligence: From AI Principles, Ethics, and Key Requirements to Responsible AI Systems and Regulation,” 2023. [PDF]
[8] S. Larsson, “The Socio-Legal Relevance of Artificial Intelligence (report),” 2019. [PDF]
[9] N. Kluge Corrêa, C. Galvão, J. William Santos, C. Del Pino et al., “Worldwide AI Ethics: a review of 200 guidelines and recommendations for AI governance,” 2022. [PDF]
[10] P. Radanliev, O. Santos, A. Brandon-Jones, and A. Joinson, “Ethics and responsible AI deployment,” 2024. ncbi.nlm.nih.gov
[11] V. C. Müller, “Ethics of Artificial Intelligence and Robotics,” 2020. [PDF]
[12] A. Maccaro, K. Stokes, L. Statham, L. He et al., “Clearing the Fog: A Scoping Literature Review on the Ethical Issues Surrounding Artificial Intelligence-Based Medical Devices,” 2024. ncbi.nlm.nih.gov
[13] C. J. Liang, T. H. Le, Y. Ham, B. R. K. Mantha et al., “Ethics of Artificial Intelligence and Robotics in the Architecture, Engineering, and Construction Industry,” 2023. [PDF]