The AI Debate: Guiding Light or Dark Tunnel for Education?

Written by: Adam Urban (NTK)

Opening the Dialogue: How AI is Shaping the Future of Education

Will AI serve as a catalyst for innovation, or will it widen the gap between effective learning and mere efficiency? AI’s impact on education is profound and multifaceted, leaving us to ponder whether it will be our guiding light into the future or a dark tunnel with unforeseen consequences.

Because of this, universities need to respond not only by integrating AI into their policies and curricula, but also by rethinking how knowledge is delivered. This brings us to another question: whether the way we teach is right for the 21st century. In the Czech context, the discussion about teaching methods has been going on for a while, even before AI. One argument related to instruction is that students are not really learning and gaining knowledge, but rather learning to decode or guess the “teacher’s password” and say the right answer to pass an exam and get credit without proper understanding.

Prof. Mazur has said in one of his speeches that lectures are problematic for many reasons, but the main flaw is that the traditional “lecture method is a process whereby the lecturer’s notes are transferred to the students’ notebooks without passing through the brains of either.” These shortcomings are exacerbated by the democratization of AI use, because students might rely on AI-generated summaries rather than engaging with the material directly. However, with proper and sensitive use, AI could bridge these weaknesses and make the learning process more effective and attractive for both students and instructors.

This post does not seek to provide definitive answers; rather, it is intended to spark a conversation about AI’s role in education.

Mastering AI: Literacy and the Art of Prompting

AI literacy refers to a set of skills that allow individuals to effectively and responsibly use artificial intelligence tools (Long & Magerko, 2020). In an era where AI is becoming increasingly embedded in educational processes, understanding these tools goes beyond mere usage. It involves knowing how AI systems make decisions, recognizing their limitations, and being aware of the ethical implications of their use.

The problem we face today is that many people using AI tools are not fully knowledgeable about how to use them effectively and responsibly. In other words, they are not aware of the limitations, capabilities, and processes that go on under the hood of these tools. Inadequate AI literacy among students and instructors can lead to ethical problems related to unintentional plagiarism, low-quality outputs, and even intentional academic misconduct.

One of the most critical aspects of AI literacy is mastering the art of creating prompts. This involves designing inputs, or “prompts,” that guide AI systems to produce useful and accurate outputs. In an educational setting, creating appropriate prompts can determine whether an AI tool enhances learning or simply generates generic and unhelpful responses; this is well captured by the concept of “garbage in, garbage out.” Working properly with AI should not be based on trial and error but rather on thoughtful steps and a clear idea of the desired outcome.
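
To make the contrast concrete, here is a minimal sketch of how a vague prompt and a structured prompt might be sent to a chat model programmatically. It assumes the OpenAI Python client with an API key in the environment; the model name is a placeholder, and any chat interface with similar parameters would illustrate the same point.

```python
# A minimal sketch contrasting a vague prompt with a structured one.
# Assumes the OpenAI Python client (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

vague_prompt = "Tell me about photosynthesis."

structured_prompt = (
    "You are a tutor for first-year biology students. "
    "Explain photosynthesis in no more than 150 words, "
    "define each technical term the first time you use it, "
    "and end with two self-check questions for the reader."
)

for label, prompt in [("vague", vague_prompt), ("structured", structured_prompt)]:
    # The model name is an assumed example; substitute one you have access to.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content)
```

The structured prompt specifies audience, length, and format, which is exactly the kind of clear idea of the desired outcome that separates useful output from generic filler.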

Mastering prompt creation and AI literacy is not without its challenges. It requires a clear understanding of both one’s subject matter and how AI systems process information. This is where universities should take a proactive approach. The trend is to actively promote the use of AI by students and instructors, but few, if any, universities have yet incorporated the teaching of AI literacy into their curricula. Related to this is the dilemma for universities of who will train the trainers (instructors) and how to deal with the generational skills gap, where students are sometimes several steps ahead of their instructors in terms of using AI tools.

AI in Education: Guiding Light or Dark Tunnel?

AI holds immense potential for enhancing the educational experience, offering personalized learning capabilities and increased efficiency in various tasks. It also promotes greater accessibility through tools that accommodate diverse learning needs, such as language translation, adaptive learning for varied paces, and assistive technologies for students with disabilities. However, this potential comes with risks—AI use could exacerbate inequalities, lead to over-reliance on the tools, and diminish the depth of human learning if not carefully managed.

When leveraged thoughtfully, AI can be a powerful tool for good. It can provide tailored educational experiences, broaden equitable access to knowledge and tools, and potentially help educators focus on more creative and meaningful aspects of teaching. By enhancing personalization and efficiency, AI can support students in reaching their full potential. For instance, integrating AI assistants into research databases like Scopus or Web of Science allows students and educators to navigate complex research landscapes more effectively, identify relevant studies, and streamline literature reviews, making high-quality research more accessible to a broader audience. Additionally, using AI tools has the potential to help reduce the inequality and discrimination faced by non-native English-speaking students and instructors in an English-dominated scholarly publishing environment. Banning the use of AI tools may do more harm than good (in fact, this would likely be futile) and a more realistic course should involve the creation of clear institutional strategies and guidelines for the use of AI, both in and out of academia.

There is admittedly a dark side to AI. If over-relied upon, it can, among other things, contribute to a lack of critical thinking, foster dependency, and perpetuate biases inherent in the algorithms. Without careful oversight, AI could lead to a future where education becomes more about surface learning than about truly understanding and engaging with educational material. Moreover, AI is entering a battlefield for our students’ information, media, and digital literacies that has not yet been won.

The future of AI in education depends on the choices we make today. Educators and institutions must take an active role in shaping how AI is integrated into learning environments. This includes developing clear ethical guidelines, promoting AI literacy (and other literacies), and ensuring that AI tools are used to complement, rather than replace, traditional educational methods. Institutions must also determine who is responsible for specific steps in the use of AI tools in the academic context.

As we stand at this crossroads, it’s up to us to decide whether AI will be our guiding light or the beginning of a dark tunnel in education. By approaching AI with caution, curiosity, and a commitment to ethical practices, we can ensure that it enhances, rather than diminishes, the educational experience.

Will AI lead us to a brighter educational future, or will it cast shadows that are difficult to escape? The answer lies in our hands.

Resources:

Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10.1145/3313831.3376727


“Language Tools: Mastering Language for Better Results”

Written by: Lenka Chladová (NTK) and Eliška Skládalová (NTK)

We have come a long way since the first release of Microsoft Grammar Checker in the 90s, with Artificial Intelligence (AI) massively expanding the possibilities. So-called language tools and writing assistants now encompass much more than grammar and can help with style through features such as paraphrasers and summarizers. Some can even generate entire texts, including abstracts, based on input you provide. When thinking about using such tools, factors to consider include not only features and reliability but also price. The golden rule of shopping, “a higher price doesn’t mean better service,” also applies here. While some tools are entirely free, many have limited free functionality, requiring payment to unlock the full range of features.

In this post, we provide an overview of current grammar tools, discussing their advantages and disadvantages, as well as considering the potential risks and future developments of these tools. We also propose key questions to consider before using them. For a list of selected language tools that we tested and compared, visit NTK’s guide.

Everyone has their preferences, but…

Everyone has their own preferences or even demands for these kinds of tools. The aesthetics of the design, functionality, reliability, the number of features, and price are among commonly important factors. For us, one of the most important factors is a tool’s interface and clarity, both of which aid in effective and easy orientation and use. This of course relates to the display language; most of the tools we tested were in English, but some were in other languages, such as German, French, or Spanish (even if they managed to make corrections in English). A feature some may find useful is the generation of new text, such as a title or abstract, based on an uploaded essay or article. Such text can provide you with inspiration and ideas while drawing on your input.

For us, the user experience fell apart with some tools that had obvious limitations. When working on a paper or with several full-page documents, having to copy and paste everything in very small fragments makes the process inefficient and slower than it could be. Limiting unique features in free versions is one thing, but limiting basic features, such as grammar correction, is frustrating. Many grammar correction tools are available with interfaces in widely spoken languages, such as German, French, or Spanish. For languages with fewer speakers, such as Czech, various tools offer features like synonym generation, punctuation correction, and grammar explanations. However, the Czech grammar correction tools we tested did not meet our expectations.

A quick mention of our favorites

During our testing, some tools particularly captured our attention due to their unique features and performance. Sentence Checkup stood out with its reliable grammar correction and various interesting features. Plus, it’s free, making it a great option for everyone. While evaluating tools, we set up our criteria, one of which was “first impression.” Some tools had good functionality, but their design or organization was a bit confusing. In this regard, Multilings really shone. Even though it only offers a one-week free trial before requiring payment, it has a plagiarism checker, translation services, and an easy-to-use interface. One more crucial factor for us was price. During our review, we found that some tools can be quite expensive, so it’s up to you to decide whether you want to invest in them or whether they’re worth the price. Of course, there are many great tools worth checking out, and you can find a more detailed list sorted by category here.

What’s the real (not just financial) cost?

Language tools, especially those based on AI, bring to the table not only effectiveness but also fears and risks. What happens with the data we provide? Will our texts be used for AI training? Will anyone have access to a text we uploaded? Many questions come to mind when dealing with these kinds of AI-based tools. If you want to be sure, we suggest checking a tool’s data policy, as some tools promise never to store, use, or sell input, while others may use data for various purposes (e.g., AI training).

Is it AI, or is it us? 

Advanced features can seem to push the boundaries of writing ethics. With features such as paraphrasing, one’s own input and words fade and the manner of expression changes, even though the meaning stays more or less the same. Where are the limits? Can we use this kind of text for academic writing? Is there a moment when a text is no longer “ours”? These questions have prompted universities and other (often academic) institutions to publish guidelines on generative AI use. Some guidelines allow the use of generative AI tools in explicitly stated, limited circumstances; some ban AI completely. Other institutions have not yet published guidelines that could assist in deciding whether or not to use generative AI. The ethical aspects of academic writing (academic integrity) are definitely something we should consider, especially given how fast these technologies are evolving and how quickly tools offer new, creative ways to make our work easier.

While dealing with these kinds of tools, especially text generation and advanced editing, a question arises when considering modern education methods. Should such tools be used? If so, how? How will they influence the student’s work, and what are the possible consequences? Outcomes and final responsibility now rest with us. 

The downfall? 

Many tools offer their own set of extensions for browsers and MS Word. Still, many just provide online features, which means you have to visit a website and copy/paste the selected document into the tool’s interface. There are exceptions, such as Writefull or Grammarly, which have incorporated features directly into MS Word (useful mainly because it improves efficiency). However, not many of us want to keep returning to a tool’s website and losing time editing text in two separate places. Is there a future for such tools? Could they be more helpful if integrated into text editors or web browsers? Only the future will provide us with better answers.


AI tools in academia: Looking back at 2023

Around this time last year, many of us had already started experimenting with the then only recently released ChatGPT (OpenAI). Some marveled at its capacity to write text according to very precise instructions, often successfully keeping to the rules of a specific style or genre. Others, especially those working in education, may have felt slight uneasiness about the rapid progress in this seemingly very new field. Of course, while the public release of ChatGPT had this effect on the general public, AI experts had long been aware of the ongoing advances in this area.

The question of how many chatbot-generated papers would be handed in by students loomed large over high schools and universities last year. The academic sector, too, started pondering possible rules and regulations to ensure that current standards would be maintained, especially as regards the quality of research and research assessment, as well as ethical practice more generally. In response to all of this, 2023 brought lively discussions about many aspects of generative AI tools, mostly chatbots. Updated scholarly publication and citation guidelines, as well as changes to student assessment, were some of the results. Towards the end of the year, the Faculty of Business Administration at the Prague University of Economics and Business (VŠE) announced the abolition of Bachelor’s theses due to the impossibility of ruling out the use of generative AI tools.

Friend or Foe?

This may have been a slightly unexpected step for those who had – by that time in November – started to see generative AI as less of a threat in that department. AI chatbots can be awe-inspiring when it comes to a variety of tasks, but their inability to provide information and sources reliably hardly makes them fit to generate a thesis that would hold up to scrutiny (without profound editing). Large Language Models (LLMs) – in simplified terms, the technology that has made AI chatbots possible – have the significant disadvantage that text is generated based on probability. This means that while grammar is usually correct and style appropriate, factual inaccuracies slip in from time to time. These are referred to as hallucinations. New users, in particular, might not think it necessary to double-check a chatbot’s output, impressed by the overall presentation. (As regards languages with fewer speakers, such as Czech, hallucinations occur much more frequently. Additionally, grammatical mistakes appear as well, since the LLM has been trained on a smaller dataset.)
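
To illustrate what “generated based on probability” means in practice, here is a deliberately toy sketch: a tiny word-level model that samples each next word from a hand-made probability table. Real LLMs operate over subword tokens with billions of learned parameters, but the sampling principle is similar, which is why fluent-sounding output can still be factually wrong.

```python
# Toy illustration of probability-based text generation: at each step the
# "model" samples the next word from a probability distribution conditioned
# on the previous word. The probabilities below are invented for illustration.
import random

next_word_probs = {
    "the": {"paper": 0.5, "author": 0.3, "journal": 0.2},
    "paper": {"was": 0.6, "argues": 0.4},
    "author": {"was": 0.5, "claims": 0.5},
    "journal": {"was": 1.0},
    "was": {"published": 0.7, "retracted": 0.3},
    "argues": {"that": 1.0},
    "claims": {"that": 1.0},
}

def generate(start="the", max_words=6):
    """Sample a short word sequence from the toy transition table."""
    words = [start]
    while len(words) < max_words and words[-1] in next_word_probs:
        options = next_word_probs[words[-1]]
        choice = random.choices(list(options), weights=list(options.values()))[0]
        words.append(choice)
    return " ".join(words)

# e.g. "the paper was retracted": plausible-sounding, but not necessarily true.
print(generate())
```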

AI chatbots can be especially frustrating when working with references. Those who attempted to use them as search engines would generally be confronted with made-up titles of non-existent papers, including non-functional URLs or DOIs. Even if correct bibliographic information was entered as part of the prompt, the chatbot would usually corrupt it in the text it generated by creatively changing some details. Regarding ChatGPT specifically, its inability to access the internet (in the free version) is an additional hindrance. Of course, ChatGPT is not the only impressive AI chatbot on the market, with Google’s Gemini (previously Bard) currently receiving a lot of attention, in part due to its later release. Gemini, as well as Microsoft Copilot (previously Bing Chat) or Perplexity AI do have access to the internet. The latter two also consistently cite their sources as functional hyperlinks next to their answers (while still lacking the ability to generate hallucination-free text). This makes them resemble Wikipedia, and it is recommended to approach them in a similar way. That means remaining highly wary of the text itself but leveraging the sources they reference.

What seems to be the overall consensus among the public, and teachers and instructors more specifically, is that generative AI warrants new approaches to written assignments. However, the situation is far from black-and-white. Surely, some students will give in to the temptation to hand in AI-generated coursework even when told not to do so, which is an issue that needs to be addressed. But when used critically and with guidance, AI chatbots can also play their role in information literacy classes, serving as examples that not everything on the internet should be trusted. Additionally, their potential to save time by laying the groundwork, for instance by producing text that is then expanded by a human (or at least carefully edited), also often comes up in discussions. A brainstorming tool seems to be the most popular designation of the currently available AI chatbots.

Some AI tools are still waiting to be discovered

Perhaps the most important lesson that we can draw from 2023 is that while the world’s leading tech companies have captivated us with their race to release the most successful AI chatbot on the market, there is much more out there worth our attention, especially if we are students, scholars, or librarians. A variety of AI tools designed specifically to aid in academic research have been released – and are continuously being developed – without much notice from the general public, or even students and academics.

Even if we remain in the area of prompt-based searches, meaning that we ask the tool a fully-formed question (as opposed to using keywords, Boolean operators, and filters), there are many exciting options geared to the academic sphere. SciSpace, Elicit, Consensus, and even the above-mentioned Perplexity AI, can all be used to look for scholarly articles. If, on the other hand, you already have a collection of resources in your own digital library, you might be interested in tools that allow you to search for additional, similar papers (and books). You can upload your personal collection into the following apps and decide which of them works the best for you: Research Rabbit, Litmaps, Inciteful, or Connected Papers, among others.

If you would like to learn more, check out our guide to AI tools for academic purposes.

So, have fun exploring in 2024!


Open Lab Notebook / Open Notebook Science

Written by Eva Karbanová, a former NTK employee and CCBC press secretary.

“Open notebook” science is a practice in which research scientists record their work online and make it publicly available while conducting research in near-real time. Such research is thus completely open to the public and includes all its aspects such as raw data and any associated material. The approach was described by Bradley (2007), the first to use the term, as “no insider information”. Open notebook science makes the research process transparent and provides unsuccessful, not very significant, or unpublished outcomes (sometimes called “dark data”) to anyone interested (Goetz 2007).

Let’s take a look at the primary advantages of this practice, according to the literature. According to Schapira, Harding and Consortium (2019, p. 3), open lab notebooks can save time, resources, and knowledge. Making the information accessible quickly means that other researchers will be able to build upon the open results, making it possible for others to avoid spending time and resources on redundant experiments (Powell 2016). Open lab notebooks should include detailed protocols to achieve experimental replicability. The necessity of more transparent, replicable experiments has been discussed recently (for example, by Nature 2016 and Wallach et al. 2018). Negative data from unpublished research might additionally provide important insights (Mlinarić et al. 2017; Nimpf and Keays 2020). Open lab notebooks can also give experts a space for discussion, to (for example) spot discrepancies in an experiment and so on.

Early career researchers can use their notebooks to connect with peers and experts in the field. One can also add a link to one’s research notebook to an academic CV when applying for a position in order to showcase technical skills (Schapira, Harding and Consortium 2019).

Possible drawbacks of open notebooks (Harding and Consortium 2019, Sanderson 2008, Schapira 2018, Zirnask 2014):

  • Possible data theft (being “scooped”): Risk can be mitigated by using repositories such as Zenodo, which assign a citable DOI (or other citable record) to a notebook.
  • Difficulty publishing open notebook results in traditional peer-reviewed journals.
  • Influencing other research projects before the research documented in the open notebook is complete and/or well analysed (this is why some experimental collaborations forgo open notebooks and maintain strict rules to avoid data leakage and issues with influencing results).
  • “Data deluge”: flooding the information space with a large amount of non-peer-reviewed material.
  • Can be difficult without a smooth process: maintaining an open notebook should be executed effectively to avoid wasting too much time (as with a regular lab notebook).

What are the necessary characteristics of open notebooks? Harding (2019, p. 2) notes she designed her notebook “to be discoverable, accessible, clear, and detailed in its presentation, and to permit dialogue between readers and me, and to pave the way for collaborations.”

Examples of platforms for open notebooks:

Picking the right open notebook platform can be daunting. Every scientific field has a different environment, with different data collection requirements and different data types (e.g., code, images, equations, values). Several open notebook options are listed below; you may wish to ask your mentor if they have a preference for a particular tool. One could additionally create a blog for an open notebook.

  • To create an open notebook, contact the coordinator. Data are uploaded and stored on zenodo.org (maintained by CERN as part of the OpenAIRE initiative). You can also link Zenodo files to your ORCID profile.
  • A code repository that allows parallel code editing, used by many open-source software developers, among others.
  • A platform designated for biology and biological engineering; join via a web form.
  • Jupyter: free software, open standards, and web services for interactive computing across all programming languages, with various interfaces (lab book, notebook, hub, Voilà) for sharing outputs. Use https://nbviewer.org/ to make a Jupyter notebook publicly shareable (see the short sketch after this list).
  • An open database for neuroscience projects.
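
As mentioned in the Jupyter item above, nbviewer can render a notebook that is already publicly hosted. Below is a minimal sketch of the URL pattern for a notebook in a public GitHub repository; the owner, repository, branch, and file path are placeholders, and you can equally paste the notebook’s address into the search box at https://nbviewer.org/.

```python
# Placeholders for a notebook that already lives in a public GitHub repository;
# substitute your own owner, repository, branch, and path.
owner, repo, branch = "your-github-user", "your-repo", "main"
notebook_path = "notebooks/experiment-01.ipynb"

# nbviewer renders GitHub-hosted notebooks at URLs of this form.
nbviewer_url = f"https://nbviewer.org/github/{owner}/{repo}/blob/{branch}/{notebook_path}"
print(nbviewer_url)
```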

If your research, for any reason, cannot be made public, another option for organising, cooperating, and sharing with other researchers or within teams, or for managing protocols, is the use of electronic laboratory notebooks. There are many options to choose from, such as https://rmarkdown.rstudio.com/, https://www.labarchives.com/, https://www.benchling.com/, and https://www.elabftw.net/ (among others).

Openly published protocols: If you do not wish to share your process but would like to publish and share a protocol you have designed, a tool like https://www.protocols.io can be considered. After publishing a protocol on protocols.io, you obtain a digital object identifier (DOI). This DOI can be cited in a manuscript so that, if the article is approved for publication in a scientific journal, the protocol can be published (automatically, if the DOI is provided upon article submission) at a later date.

Resources:

Bradley, Jean-Claude. (2007). Open notebook science using blogs and wikis. Nature Precedings. https://doi.org/10.1038/npre.2007.39.1

Goetz, Thomas. (2007). Freeing the dark data of failed scientific experiments. Wired, 15(10). Available from: https://www.wired.com/2007/09/st-essay-3/ 

Harding, Rachel J. (2019) Open notebook science can maximize impact for rare disease projects. PLoS Biol, 17(1). https://doi.org/10.1371/journal.pbio.3000120

Mlinarić, Ana, Horvat, Martina, & Šupak Smolčić, Vesna. (2017). Dealing with the positive publication bias: Why you should really publish your negative results. Biochem Med, 27(3). https://doi.org/10.11613/BM.2017.030201

Nature. (2016). Reality check on reproducibility. Nature, 533(7604), 437-437. https://doi.org/10.1038/533437a

Nimpf, Simon & Keays, David A. (2020). Why (and how) we should publish negative data. EMBO Rep., 21(1). https://doi.org/10.15252/embr.201949775

Powell, Kendall. (2016). Does it take too long to publish research? Nature, 530, 148-151. https://doi.org/10.1038/530148a

Sanderson, Katherine. (2008). Data on display. Nature, 455(18), 273. https://doi.org/10.1038/455273a

Schapira, Matthieu. (2018). Open lab notebooks to increase impact and accelerate discovery. Springer Nature. Available at: https://researchdata.springernature.com/posts/29655-open-lab-notebooks-to-increase-impact-and-accelerate-discovery

Schapira, Matthieu & Harding, Rachel J. (2019). Open laboratory notebooks: good for science, good for society, good for scientists. F1000Res, 8(87). https://doi.org/10.12688/f1000research.17710.2

Wallach, Joshua D., Boyack, Kevin W. & Ioannidis, John P. A. (2018). Reproducible research practices, transparency, and open access data in the biomedical literature, 2015-2017. PLoS Biology, 16. https://doi.org/10.1371/journal.pbio.2006930

Zirnask, Mart. (2014). Are open notebooks the future of science? UT Blog. Available at: https://blog.ut.ee/are-open-notebooks-the-future-of-science/


My experience with applying for a Fulbright scholarship in the Czech Republic

Written by Michal Hubálek, a doctoral candidate at the University of Hradec Králové and a former NTK employee.

For as long as I can remember, I have somehow always known about the opportunity to do a Fulbright Scholarship in the US. I probably learned about Fulbright from the promotional materials provided by the University of Hradec Králové (UHK) International Office at the Philosophical Faculty. Going to study or conduct research in the US has always seemed like a big challenge to me, both in terms of my language proficiency and my academic preparedness.

I cannot say, however, that I would apply for a Fulbright under any circumstances, or just to go to the US, even though this would make sense for me, since my research focuses, among other things, on American philosophical movements such as pragmatism and naturalism. Relevant ongoing research projects and various sources for my work are thus naturally there.

I decided to apply for a specifically Czech Fulbright grant, the Masaryk-Fulbright scholarship, to work on my doctoral dissertation (I am now in the fifth year of my doctoral studies), with a topic that revolves around the concept of naturalism and historical/evolutionary explanation. “The time was right in my career” is what I listed in my application as the main reason for my submitting it. In 2017, Professor Paul A. Roth from the University of California-Santa Cruz taught a Philosophy of History course for one term at UHK (during the course, we discussed a manuscript he later published, The Philosophical Structure of Historical Explanation), and it was a personally and philosophically transformative experience for me.

So I quickly realized that having the opportunity to meet up regularly with Professor Roth again in person (and, of course, also having the much-needed time for research and writing thanks to the Fulbright scholarship), would be the best possible impulse for finishing my PhD thesis. From this perspective, my case is specific because I knew Professor Roth in advance, and I knew that he would be happy to write the invitation letter for me (this proved to be an advantage because the letter had to be re-written several times, always with a short turnaround time).

I was at a stage in my career when I also knew that starting in January 2023, I would no longer receive a PhD stipend and that I would have to find other resources to finish my studies. For these reasons, I wrote various research proposal sketches and refined my CV throughout 2022. In August 2022, I returned from an Erasmus+ traineeship at the Institute Vienna Circle in Austria, and I slowly started filling out an online application for the Fulbright-Masaryk stipend (deadline: November).

This period of time (three months) was enough for me because I already had a rough-and-ready research proposal which, moreover, substantially mirrored the topic of my PhD thesis. I appreciated that I had complete freedom to apply with a subject in which I was already interested as a PhD student and a pre-doctoral researcher. For the Fulbright-Masaryk scholarship, applicants must also prove that they are “not only outstanding experts in their scientific field but also active in the civic or public life of their institutions or communities, just like Tomáš Garrigue Masaryk.” I was pretty confident that I met these criteria, so after contacting the Fulbright Commission to hear their opinion, I simply added a “Public and Community Service Statement” to my CV, emphasizing and putting into context my various past and present activities. Beyond writing the research project itself and receiving the invitation letter from the US, the most time-consuming part of preparing the proposal was putting together three additional recommendation letters from colleagues and/or former instructors.

At the end of January, the Fulbright Commission informed me that I had passed the first two rounds of the selection process (meeting the formal requirements plus an anonymous review of my project by two experts in my field) and that I was invited to an in-person interview in February. The interview was relatively short (about fifteen minutes), and the committee consisted of Fulbright scholars from the US currently staying in the Czech Republic and others.

The interview was not, I felt, primarily concerned with my research proposal or my academic, scientific, or teaching achievements. It was about me as a person, citizen, and cultural ambassador, so the commission was primarily interested in my attitudes, visions, and future professional plans. Moreover, they were very interested in my practical plans related to moving to the US with my whole family and related to my research (e.g., if I had already checked the cost of living at my host institution). I recall four questions that were explicitly posed to me in this regard:

  1. Why the US? Why is it necessary to conduct your research in the US?
  2. Why did you choose this particular departure date for your research stay?
  3. What would you do if your mentor was ill or absent?
  4. What difficulties do you think you might experience in the US?

I had to wait until March to learn that I had been awarded a Fulbright-Masaryk scholarship. I must say that I am delighted with how the Czech Fulbright Commission handles things and communicates; there are several handbooks and guidelines for us recipients of various Fulbright stipends, and the coordinators are very patient and swift in answering our questions and acknowledging any adjustments (I, for example, had to change my date of departure from October to July after discussions with Professor Roth). In May, all the scholars receiving a Fulbright for the 2023/2024 academic year had an informational meeting with Fulbright Commission coordinators and four former Fulbright scholars. This was a very friendly event (with free pizza!) during which we could ask any kind of question and raise any kind of worry we had. Thus, I wholeheartedly recommend that anyone interested in conducting research or studying in the US apply for a Fulbright stipend.

As I already indicated, active researchers, scholars, teachers, and publicly involved people are halfway there, since the Fulbright Commission does appreciate this, regardless of one’s discipline or research interests. What is sometimes underestimated, I think (although the Fulbright Commission always mentions it), is that with the current rental rates in some US states, the monthly Fulbright stipend might not cover all your costs, especially if you want to move overseas with your family. Personal savings are, therefore, really required, at least for some destinations in the US. If you are considering applying for a Fulbright stipend and want some help or just wish to chat about it all, please feel free to contact me at: hubalek.michal.42@gmail.com. Here you can also find a case study for the Fulbright application written by Stephanie Krueger.

Michal
Prague, July 2023


AI and writing: much ado about generated essays

A Reddit/Twitter discussion thread on artificial intelligence (AI) and academic writing recently emerged, following a Reddit user’s claims of having used AI to write well-graded essays.

The Guardian picked up on this discussion with an article entitled “‘Full-on robot writing’: the artificial intelligence challenge facing universities.” The article provides background links on specific developments in AI writing and describes how universities are responding to new technological developments, noting how some institutions (the article focused on Australia) are treating such works as plagiarism in their policy statements. It poses the question of how educators should view current developments:

“To put the argument another way, AI raises issues for the education sector that extend beyond whatever immediate measures might be taken to govern student use of such systems. One could, for instance, imagine the technology facilitating a “boring dystopia”, further degrading those aspects of the university already most eroded by corporate imperatives. Higher education has, after all, invested heavily in AI systems for grading, so that, in theory, algorithms might mark the output of other algorithms, in an infinite process in which nothing whatsoever ever gets learned.

But maybe, just maybe, the challenge of AI might encourage something else. Perhaps it might foster a conversation about what education is and, most importantly, what we want it to be. AI might spur us to recognise genuine knowledge, so that, as the university of the future embraces technology, it appreciates anew what makes us human.”

Despite all the hand-wringing, an Inside Higher Ed piece written by the professor of a class (“Rhetoric and Algorithms”) outlines the results of an in-class experiment with AI tools, in which the professor encouraged undergraduate students to use as many AI tools as possible to create an essay. The professor found the overall quality of the results to be poor, but perhaps more importantly for an overarching discussion of this topic, students did not like the process of using such tools:

“I asked my students to write short reflections on their AI essays’ quality and difficulty. Almost every student reported hating this assignment. They were quick to recognize that their AI-generated essays were substandard, and those used to earning top grades were loath to turn in their results. The students overwhelmingly reported that using AI required far more time than simply writing their essays the old-fashioned way would have. To get a little extra insight on the ‘writing’ process, I also asked students to hand in all the collected outputs from the AI text generation ‘pre-writing.’ The students were regularly producing 5,000 to 10,000 words (sometimes as many as 25,000 words) of outputs in order to cobble together essays that barely met the 1,800-word floor.”

The professor argues that good writers produce better AI output, noting also that such assignments can be used effectively to illustrate the writing and feedback process, with the tools providing immediate feedback that motivated students can use to learn. He argues that others worried about plagiarism in their assigned essays can mitigate the risk of AI-generated work by making assignments very specific, and notes also that educators and university policymakers must take developments in this area into account:

I am deeply skeptical that even the best models will ever really allow students to produce writing that far exceeds their current ability. Effective prompt generation and revision are dependent on high-level writing skills. Even as artificial intelligence gets better, I question the extent to which novice writers will be able to direct text generators skillfully enough to produce impressive results.

I tend to agree with this author, given the current state of technological affairs. I do wonder, however, how current plagiarism tools would be able to detect AI-written content if it is not in a tool’s corpus of comparative texts, and what burdens might be imposed on writing instructors in determining whether or not work is original.

The more I deal with written texts, the more I feel that written assignments are crucial to quality higher education. The academic writing process, in my opinion, sharpens students’ skills in many areas, particularly if work is carefully reviewed by instructors who provide appropriate and constructive feedback. And I agree with the author of the second article that AI tools can be helpful learning tools (I myself use AI grammar and language tools for this purpose).

I do, however, worry about a world, as alluded to in the first article, in which journalistic content is written by AI. Rather than questioning the role of writing in higher education, perhaps we should question where and how AI (not just its written output) interacts with the real world, perhaps skewing perceptions.

Graham, S. S. (October 24, 2022). AI-Generated Essays Are Nothing to Worry About. Inside Higher Ed. https://www.insidehighered.com/views/2022/10/24/ai-generated-essays-are-nothing-worry-about-opinion

Sparrow, J. (November 18, 2022). ‘Full-on robot writing’: the artificial intelligence challenge facing universities. The Guardian. https://www.theguardian.com/australia-news/2022/nov/19/full-on-robot-writing-the-artificial-intelligence-challenge-facing-universities


New grammar and language tools helpful, but do not replace clear ideas

Many students and colleagues I know, both native and non-native speakers, are eagerly embracing new grammar and language tools, some of which “learn” over time with artificial intelligence (AI). I myself use LanguageTool, a grammar, style, and spelling checker, as an “overlay” over Google Docs whenever I can, finding myself missing the supplementary tool when I use Microsoft Word. 
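
For readers who prefer to script their checks, LanguageTool also exposes a public HTTP API. Below is a minimal sketch, assuming the Python `requests` package and the free, rate-limited public endpoint; see LanguageTool’s API documentation for the full parameter list.

```python
# Minimal sketch of calling LanguageTool's public HTTP API to check a sentence.
# Assumes the `requests` package is installed; the public endpoint is
# rate-limited, and self-hosted LanguageTool servers expose the same route.
import requests

text = "This sentences contains a error."

response = requests.post(
    "https://api.languagetool.org/v2/check",
    data={"text": text, "language": "en-US"},
    timeout=10,
)
response.raise_for_status()

# Each match describes one issue: where it occurs, why, and suggested fixes.
for match in response.json().get("matches", []):
    span = text[match["offset"] : match["offset"] + match["length"]]
    suggestions = [r["value"] for r in match["replacements"][:3]]
    print(f"{span!r}: {match['message']} -> {suggestions}")
```

Each returned match includes the flagged span, an explanation, and suggested replacements, which is essentially the same information the browser overlay shows inline.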

While such tools are useful, they (in my opinion) do not replace clear thought. I often tell non-native students that if they cannot express themselves well in their native language, none of the English writing tools will help them present their thoughts better in this second language. 

In addition to LanguageTool, my students and colleagues find a variety of other tools useful.

I tried to find a comparison chart created by universities or libraries for these tools, but was unsuccessful. Various lists of the so-called best tools for 2022 (scroll past the paid content) are available in this sample search.

Stepping back, here is a nice subject guide to editing and proofreading in English that includes a helpful checklist.

Curtin University (2021). Editing and proofreading your assignment. https://libguides.library.curtin.edu.au/uniskills/assignment-skills/writing/editing-proofreading


NCIP enables participation in the HERMES project (“Strengthening Digital Resource Sharing during COVID and Beyond”)

Open, captioned video footage of the NTK NCIP-funded presentation by Dr. Stephanie Krueger in English on academic resource use cases at the PhD+ level is now available on YouTube as part of the HERMES project open learning channel. The thirty-seven-minute lecture and Q&A session covers use cases for doctoral students, early career researchers, and established researchers and explains gated and open resources useful for common tasks performed at each level. Live sessions, part of a pilot for the HERMES project, included audience members from the IFLA DDRS committee, bachelor and master students from Hacettepe University (Turkey), and members of the NTK Services team. Italian, Spanish, Arabic, and Turkish subtitles will be added over time, making the content even more accessible for learners.

Stephanie Krueger lecturing for HERMES on YouTube

Full video of presentation available at:

Krueger, S. (2022). Resource discovery: Use cases in the academic field. https://youtu.be/C1clHHYtPPg


Tips for improving courses based on learning theory

AAC&U has provided helpful tips (including links to many useful resources) for instructors contemplating improving their courses for the coming semester. Even if you’re a learning theory expert, these tips and resources can help you consider whether your courses are the best they can be.

Read more:

Demeter, E. (2021). Reflecting on Course Redesign: How Faculty Can Measure the Impact of Instructional Changes. Liberal Education Blog. https://www.aacu.org/blog/reflecting-course-redesign-how-faculty-can-measure-impact-instructional-changes


Planning for in-person instruction despite Delta: experiences of a small college

As universities plan for the coming semester, higher education administrators are thinking about what to do about Delta, taking various models into consideration while attempting to keep campuses open for in-person instruction. One small college describes their planning/modeling process:

…administrators believe they can bring the campus reproduction rate below 1 with a combination of vaccination and other measures, including entry testing, weekly surveillance testing for unvaccinated students, and a mask mandate. 

Other universities feel that vaccination rates are high enough to avoid such measures.

Read more:

Diep, F. (2021). Vaccination Alone Isn’t Enough to Keep the Virus Under Control This Fall, One Small College Warns. The Chronicle of Higher Education. https://www.chronicle.com/article/vaccination-alone-isnt-enough-to-keep-the-virus-under-control-this-fall-one-small-college-warns
