Emergence of artificial intelligence chatbots in scientific research

Article information

J Exerc Rehabil Vol. 19, No. 3, 139-140, June, 2023
Publication date (electronic) : 2023 June 28
doi : https://doi.org/10.12965/jer.2346234.117
Department of Physical Therapy, Namseoul University, 91 Daehak-ro, Seonghwan-eup, Seobuk-gu, Cheonan 31020, Korea

Humans have long used their unique capacities to gather information, observe, and analyze in order to satisfy their curiosity. The journey toward enlightenment has been painstaking, and conducting research and publishing articles has been a tremendous undertaking for scientists. To ease such time-consuming labor, humans have developed various tools to aid in this tedious process. Revolutionary tools such as calculators, computers, the Internet, and cloud technology have been released over the years to expedite the research process and minimize errors and confusion.

A recent technological revolution stirring up the scientific community worldwide is the emergence of artificial intelligence (AI) chatbots such as Chat Generative Pre-Trained Transformer (ChatGPT) and Socratic by Google. OpenAI’s ChatGPT, first introduced to the public on November 30, 2022, has taken AI-embedded systems to a whole new level. While AI has commonly been applied in Internet of Things (IoT) devices in recent years, these chatbots have dramatically increased their ability to imitate intelligent human behavior through self-learning capabilities. They can understand and interact with natural human language by using a data processing approach modeled loosely on the human brain, which enables them to identify patterns and make predictions from text inputs.

AI chatbots offer various advantages. They streamline work processes with faster and simpler methods; they save time, reduce certain biases, and automate repetitive tasks, among other benefits. With their rapidly growing capabilities and access to information worldwide, the advantages of AI chatbots seem nearly limitless. These advantages can serve as powerful boosters for scientists pursuing more ambitious research. AI chatbots can assist not only in writing abstracts and articles but also in summarizing data and providing suggestions on structure, references, and language review. They are capable of generating text in multiple styles on a wide variety of topics. They can help produce initial drafts and even suggest titles with minimal input. Furthermore, they can help organize and compose the methods section, including sample sizes and data analysis techniques, and provide well-structured results and discussion sections.

However, AI chatbots have certain limitations. They cannot produce results without input from researchers, and they cannot replace researchers’ ideas, expertise, judgment, and personality. Ultimately, however intelligent they may be, AI chatbots can only act on the instructions given by their human handlers. Therefore, the responsibility for the results lies solely with the human handlers or researchers; AI chatbots can serve only as assistants to researchers.

Nevertheless, the limitations of AI chatbots and their degree of engagement in research will change as these tools grow more intelligent. Ethical concerns are among the most significant considerations regarding the emerging use of AI chatbots. A simple search on Google Scholar reveals over 54,600 articles related to AI chatbots, 91,700 related to Open GPT, and 16,900 related to ethics and Open GPT.

There is a strong consensus regarding the ethical dilemma of using AI chatbots in scientific research. The greatest concern is plagiarism. AI chatbots make plagiarism even harder to detect because they can rephrase others’ work in ways that evade traditional detection methods. To protect the rights of original sources, various countermeasure services are being introduced to detect AI-assisted plagiarism. Just as different armors are built to withstand different swords, technological protections will be developed by experts in the field to safeguard these rights. In addition, a new consensus will have to form among individual researchers and members of the scientific community on topics such as possible plagiarism and the appropriate use of this newly emerged assistant.

Since we are in the early stages of the AI chatbot era, many concerns still lack clear conclusions. No one truly knows how AI chatbots will evolve. Many predict catastrophic consequences of using AI chatbots in scientific research. However, ready or not, the AI era has arrived, and we must prepare for the future. One thing is certain: AIs can only mimic human beings. While AIs can learn from humans and express themselves in human-like ways, they cannot possess the originality and unique character of humans. One of the defining characteristics of human scientists is their willingness to provide improved solutions through scientific analysis and observation. AI-generated results will lack the expertise and critical human thinking behind the work. One of the critical values of original research is the initiative and leadership taken by the researcher.

General tips for avoiding unintentional plagiarism include using multiple sources, paraphrasing content, citing sources, and reviewing and editing with one’s own ideas. As a community, we should also build a consensus from shared experience to avoid confusion and prevent inappropriate infringement of individual originality. It is likewise the responsibility of the scientific community and its members to provide appropriate guidelines so that researchers stay on track and maintain control over their work.

The emergence of AI chatbots is like a charging wild stallion. Humans may be facing an unfamiliar vulnerability, but with the proper reins, we can leap higher in scientific advances. As a footnote, I would like to note that this editorial was proofread by ChatGPT and that approximately 5% of the suggestions made by ChatGPT were utilized.

Notes

CONFLICT OF INTEREST

No potential conflict of interest relevant to this article was reported.
