Around this time last year, many of us had already started experimenting with OpenAI's then recently released ChatGPT. Some marveled at its capacity to write text according to very precise instructions, often successfully keeping to the rules of a specific style or genre. Others, especially those working in education, may have felt a slight uneasiness about the rapid progress in this seemingly very new field. Of course, while the public release of ChatGPT had this effect on the general public, AI experts had long been aware of the ongoing advances in this area.
The question of how many chatbot-generated papers would be handed in by students loomed large over high schools and universities last year. The academic sector, too, started pondering possible rules and regulations to ensure that current standards would be maintained, especially as regards the quality of research and research assessment, as well as ethical practice more generally. In response to all of this, 2023 brought lively discussions about many aspects of generative AI tools, mostly chatbots. Updated scholarly publication and citation guidelines, as well as changes to student assessment, were some of the results. Towards the end of the year, the Faculty of Business Administration at the Prague University of Economics and Business (VŠE) announced the abolition of Bachelor's theses, since the use of generative AI tools could no longer be ruled out.
Friend or Foe?
This may have come as a slightly unexpected step to those who had – by that time in November – started to see generative AI as less of a threat in that department. AI chatbots can be awe-inspiring when it comes to a variety of tasks, but their inability to provide information and sources reliably hardly makes them fit to generate a thesis that would hold up to scrutiny (without profound editing). Large Language Models (LLMs) – in simplified terms, the technology that has made AI chatbots possible – have the significant disadvantage that text is generated based on probability. This means that while the grammar is usually correct and the style appropriate, factual inaccuracies slip in from time to time. These errors are referred to as hallucinations. New users, in particular, might not think it necessary to double-check a chatbot's output, impressed by the overall presentation. (In languages with fewer speakers, such as Czech, hallucinations occur much more frequently. Grammatical mistakes appear as well, since the LLM has been trained on a smaller dataset.)
AI chatbots can be especially frustrating when working with references. Those who attempted to use them as search engines would generally be confronted with made-up titles of non-existent papers, complete with non-functional URLs or DOIs. Even if correct bibliographic information was entered as part of the prompt, the chatbot would usually corrupt it in the generated text by creatively changing some details. Regarding ChatGPT specifically, its inability to access the internet (in the free version) is an additional hindrance. Of course, ChatGPT is not the only impressive AI chatbot on the market, with Google's Gemini (previously Bard) currently receiving a lot of attention, in part due to its later release. Gemini, as well as Microsoft Copilot (previously Bing Chat) and Perplexity AI, do have access to the internet. The latter two also consistently cite their sources as functional hyperlinks next to their answers (while still lacking the ability to generate hallucination-free text). This makes them resemble Wikipedia, and it is recommended to approach them in a similar way: remain highly wary of the text itself, but leverage the sources they reference.
What seems to be the overall consensus among the public, and teachers and instructors more specifically, is that generative AI warrants new approaches to written assignments. However, the situation is far from black-and-white. Certainly, some students will give in to the temptation to hand in AI-generated coursework even when told not to do so, which is an issue that needs to be addressed. But when used critically and with guidance, AI chatbots can also play a role in information literacy classes, serving as examples that not everything on the internet should be trusted. Additionally, their potential to save time by laying the groundwork, for instance by producing text that is then expanded by a human (or at least carefully edited), also often comes up in discussions. A brainstorming tool seems to be the most popular designation for the currently available AI chatbots.
Some AI tools are still waiting to be discovered
Perhaps the most important lesson we can draw from 2023 is that while the world's leading tech companies have captivated us with their race to release the most successful AI chatbot on the market, there is much more out there worth our attention, especially if we are students, scholars, or librarians. A variety of AI tools designed specifically to aid in academic research have been released – and are continuously being developed – without much notice from the general public, or even from students and academics.
Even if we remain in the area of prompt-based searches, meaning that we ask the tool a fully formed question (as opposed to using keywords, Boolean operators, and filters), there are many exciting options geared toward the academic sphere. SciSpace, Elicit, Consensus, and even the above-mentioned Perplexity AI can all be used to look for scholarly articles. If, on the other hand, you already have a collection of resources in your own digital library, you might be interested in tools that allow you to search for additional, similar papers (and books). You can upload your personal collection into the following apps and decide which of them works best for you: Research Rabbit, Litmaps, Inciteful, or Connected Papers, among others.
If you would like to learn more, check out our guide to AI tools for academic purposes.
So, have fun exploring in 2024!