AI tools in academia: Looking back at 2023

Around this time last year, many of us had already started experimenting with the then only recently released ChatGPT (OpenAI). Some marveled at its capacity to write text according to very precise instructions, often successfully keeping to the rules of a specific style or genre. Others, especially those working in education, may have felt slight uneasiness about the rapid progress in this seemingly very new field. Of course, while the public release of ChatGPT had this effect on the general public, AI experts had long been aware of the ongoing advances in this area.

The question of how many chatbot-generated papers would be handed in by students loomed large over high schools and universities last year. The academic sector too started pondering possible rules and regulations to ensure that current standards would be maintained, especially as regards the quality of research and research assessment, as well as ethical practice more generally. In response to all of this, 2023 brought lively discussions about many aspects of generative AI tools, mostly chatbots. Updated scholarly publication and citation guidelines, as well as changes to student assessment, were some of the results. Towards the end of the year, the Faculty of Business Administration at the Prague University of Economics and Business (VŠE) announced the abolition of Bachelor's theses due to the impossibility of ruling out the use of generative AI tools.

Friend or Foe?

This may have been a slightly unexpected step for those who had – by that time in November – started to see generative AI as less of a threat in that department. AI chatbots can be awe-inspiring when it comes to a variety of tasks, but their inability to provide information and sources reliably hardly makes them fit to generate a thesis that would hold up to scrutiny (without profound editing). Large Language Models (LLMs) – in simplified terms, the technology that has made AI chatbots possible – have the significant disadvantage that text is generated based on probability. This means that while grammar is usually correct and style appropriate, factual inaccuracies slip in from time to time. These are referred to as hallucinations. New users, in particular, might not think it necessary to double-check a chatbot’s output, impressed by the overall presentation. (In languages with fewer speakers, such as Czech, hallucinations occur much more frequently. Grammatical mistakes appear as well, since the LLM has been trained on a smaller dataset.)
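The probability-based generation described above can be illustrated with a toy sketch (the vocabulary and probabilities below are invented for illustration and do not come from any real model): the sampler always produces fluent, grammatical text, but nothing in the mechanism checks facts.

```python
import random

# Toy next-token table: all tokens and probabilities here are invented
# for illustration only. Real LLMs learn such distributions from huge corpora.
NEXT_TOKEN_PROBS = {
    "The capital": [("of", 1.0)],
    "of": [("France", 0.5), ("Moravia", 0.5)],
    "France": [("is", 1.0)],
    "Moravia": [("is", 1.0)],
    # A plausible-sounding but sometimes wrong continuation:
    "is": [("Paris.", 0.6), ("Lyon.", 0.4)],
}

def generate(prompt: str, max_tokens: int = 4) -> str:
    """Sample one token at a time; fluency is guaranteed, truth is not."""
    text, key = prompt, prompt
    for _ in range(max_tokens):
        options = NEXT_TOKEN_PROBS.get(key)
        if not options:
            break
        tokens, weights = zip(*options)
        key = random.choices(tokens, weights=weights)[0]
        text += " " + key
    return text

print(generate("The capital"))
# Depending on the random draw, this can print a perfectly grammatical
# sentence such as "The capital of Moravia is Paris." – a "hallucination".
```

Because the sampler only ever consults co-occurrence probabilities, a confident, well-formed sentence and a factual error can be the very same output.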

AI chatbots can be especially frustrating when working with references. Those who attempted to use them as search engines would generally be confronted with made-up titles of non-existent papers, including non-functional URLs or DOIs. Even if correct bibliographic information was entered as part of the prompt, the chatbot would usually corrupt it in the text it generated by creatively changing some details. Regarding ChatGPT specifically, its inability to access the internet (in the free version) is an additional hindrance. Of course, ChatGPT is not the only impressive AI chatbot on the market, with Google’s Gemini (previously Bard) currently receiving a lot of attention, in part due to its later release. Gemini, as well as Microsoft Copilot (previously Bing Chat) or Perplexity AI, do have access to the internet. The latter two also consistently cite their sources as functional hyperlinks next to their answers (while still lacking the ability to generate hallucination-free text). This makes them resemble Wikipedia, and it is recommended to approach them in a similar way. That means remaining highly wary of the text itself but leveraging the sources they reference.

What seems to be the overall consensus among the public, and teachers and instructors more specifically, is that generative AI warrants new approaches to written assignments. However, the situation is far from black-and-white. Surely, some students will give in to the temptation to hand in AI-generated coursework even when told not to do so, which is an issue that needs to be addressed. But when used critically and with guidance, AI chatbots can also play their role in information literacy classes, serving as examples that not everything on the internet should be trusted. Additionally, their potential to save time by laying the groundwork, for instance by producing text that is then expanded by a human (or at least carefully edited), also often comes up in discussions. A brainstorming tool seems to be the most popular designation of the currently available AI chatbots.

Some AI tools are still waiting to be discovered

Perhaps the most important lesson that we can draw from 2023 is that while the world’s leading tech companies have captivated us with their race to release the most successful AI chatbot on the market, there is much more out there worth our attention, especially if we are students, scholars, or librarians. A variety of AI tools designed specifically to aid in academic research have been released – and are continuously being developed – without much notice from the general public, or even students and academics.

Even if we remain in the area of prompt-based searches, meaning that we ask the tool a fully-formed question (as opposed to using keywords, Boolean operators, and filters), there are many exciting options geared to the academic sphere. SciSpace, Elicit, Consensus, and even the above-mentioned Perplexity AI, can all be used to look for scholarly articles. If, on the other hand, you already have a collection of resources in your own digital library, you might be interested in tools that allow you to search for additional, similar papers (and books). You can upload your personal collection into the following apps and decide which of them works the best for you: Research Rabbit, Litmaps, Inciteful, or Connected Papers, among others.

If you would like to learn more, check out our guide to AI tools for academic purposes.

So, have fun exploring in 2024!

Posted in Uncategorized | Comments Off on AI tools in academia: Looking back at 2023

Open Lab Notebook / Open Notebook Science

Written by Eva Karbanová, a former NTK employee and CCBC press secretary.

“Open notebook” science is a practice in which research scientists record their work online and make it publicly available while conducting research in near-real time. Such research is thus completely open to the public and includes all its aspects such as raw data and any associated material. The approach was described by Bradley (2007), the first to use the term, as “no insider information”. Open notebook science makes the research process transparent and provides unsuccessful, not very significant, or unpublished outcomes (sometimes called “dark data”) to anyone interested (Goetz 2007).

Let’s take a look at the primary advantages of this practice, according to the literature. According to Schapira, Harding and Consortium (2019, p. 3), open lab notebooks can save time and resources and preserve knowledge. Making information accessible quickly means that other researchers can build upon the open results, allowing them to avoid spending time and resources on redundant experiments (Powell 2016). Open lab notebooks should include detailed protocols to achieve experimental replicability. The need for more transparent, replicable experiments has been discussed recently (for example, by Nature 2016 and Wallach et al. 2018). Negative data from unpublished research might additionally provide important insights (Mlinarić et al. 2017; Nimpf and Keays 2020). Open lab notebooks can also give experts a space for discussion, for example to spot discrepancies in an experiment.

Early career researchers can use their notebooks to connect with peers and experts in the field. One can also add a link to one’s research notebook in an academic CV when applying to a position in order to showcase technical skills (Schapira, Harding and Consortium 2019).

Possible drawbacks of open notebooks (Harding 2019; Sanderson 2008; Schapira 2018; Zirnask 2014):

  • Possible data theft (being “scooped”): Risk can be mitigated by using repositories such as Zenodo, which assign a citable DOI (or other citable record) to a notebook.
  • Difficulty publishing open notebook results in traditional peer-reviewed journals.
  • Influencing other research projects before the research documented in the open notebook is complete and/or well-analysed (this is why some experimental collaborations forgo open notebooks and instead enforce strict rules to avoid data leakage and undue influence on results).
  • “Data deluge”: flooding the information space with a large amount of non-peer-reviewed material.
  • Time cost: as with a regular lab notebook, maintaining an open notebook can consume too much time unless the process is kept smooth and efficient.

What are the necessary characteristics of open notebooks? Harding (2019, p. 2) notes she designed her notebook “to be discoverable, accessible, clear, and detailed in its presentation, and to permit dialogue between readers and me, and to pave the way for collaborations.”

Examples of platforms for open notebooks:

Picking the right open notebook platform can be daunting. Every scientific field has a different environment with different data collection requirements and different data types (e.g., code, images, equations, values). Several open notebook options are listed below; you may wish to ask your mentor if they have a preference for a particular tool. One could additionally create a blog for an open notebook.

To create an open notebook, contact the coordinator. Data are uploaded and stored on Zenodo (maintained by CERN as a part of the OpenAIRE initiative). You can also link Zenodo files to your ORCID profile.

Used by many open sourced software developers, among others. A code repository that allows parallel code editing.

Join via a web form. Designated for biology and biological engineering.

Free software, open standards, and web services for interactive computing across all programming languages.

Various interfaces (Lab, Notebook, Hub, Voilà) for sharing outputs.

Use it to make a Jupyter notebook publicly shareable.

Open database for neuroscience projects.

If your research, for any reason, cannot be made public, another option for organisation, for cooperating/sharing with other researchers or within teams, or for managing protocols is the use of electronic laboratory notebooks. There are many options to choose from.

Openly published protocols: If you do not wish to share your whole process but would like to publish and share a protocol you have designed, a dedicated protocol-publishing platform can be considered. After publishing a protocol there, you obtain a digital object identifier (DOI). The DOI can be cited in a manuscript so that, if the article is accepted for publication in a scientific journal, the protocol can be published (automatically, if the DOI is included upon article submission) at a later date.


Bradley, Jean-Claude. (2007). Open notebook science using blogs and wikis. Nature Precedings.

Goetz, Thomas. (2007). Freeing the dark data of failed scientific experiments. Wired, 15(10).

Harding, Rachel J. (2019). Open notebook science can maximize impact for rare disease projects. PLoS Biol, 17(1).

Mlinarić, Ana, Horvat, Martina & Šupak Smolčić, Vesna. (2017). Dealing with the positive publication bias: Why you should really publish your negative results. Biochem Med, 27(3).

Nature. (2016). Reality check on reproducibility. Nature, 533(7604), 437-437.

Nimpf, Simon & Keays, David A. (2020). Why (and how) we should publish negative data. EMBO Rep., 21(1).

Powell, Kendall. (2016). Does it take too long to publish research? Nature, 530, 148-151.

Sanderson, Katherine. (2008). Data on display. Nature, 455(18), 273.

Schapira, Matthieu. (2018). Open lab notebooks to increase impact and accelerate discovery. Springer Nature.

Schapira, Matthieu & Harding, Rachel J. (2019). Open laboratory notebooks: good for science, good for society, good for scientists. F1000Res, 8(87).

Wallach, Joshua D., Boyack, Kevin W. & Ioannidis, John P. A. (2018). Reproducible research practices, transparency, and open access data in the biomedical literature, 2015-2017. PLoS Biology, 16.

Zirnask, Mart. (2014). Are open notebooks the future of science? UT Blog.

Posted in Data Repositories, Discovery, Open Science, Uncategorized | Comments Off on Open Lab Notebook / Open Notebook Science

My experience with applying for a Fulbright scholarship in the Czech Republic

Written by Michal Hubálek, a doctoral candidate at the University of Hradec Králové and a former NTK employee.

For as long as I can remember, I have somehow always known about the opportunity to do a Fulbright Scholarship in the US. I probably learned about Fulbright from the promotional materials provided by the University of Hradec Králové (UHK) International Office at the Philosophical Faculty. Going to study or conduct research in the US has always seemed like a big challenge to me, both in terms of my language proficiency and my academic preparedness.

I cannot say, however, that I would have applied for a Fulbright under any circumstances, or just to go to the US, even though this would make sense for me, since my research focuses, among other things, on American philosophical movements such as pragmatism and naturalism; relevant ongoing research projects and various sources for my work are thus naturally located there.

I decided to apply for a specifically Czech Fulbright grant, the Fulbright-Masaryk scholarship, to work on my doctoral dissertation (I am now in the fifth year of my doctoral studies), with a topic that revolves around the concept of naturalism and historical/evolutionary explanation. “The time was right in my career” is what I listed in my application as the main reason for my submitting it. In 2017, Professor Paul A. Roth from the University of California-Santa Cruz taught a Philosophy of History course for one term at UHK (during the course, we discussed a manuscript he later published, The Philosophical Structure of Historical Explanation), and it was a personally and philosophically transformative experience for me.

So I quickly realized that having the opportunity to meet up regularly with Professor Roth again in person (and, of course, also having the much-needed time for research and writing thanks to the Fulbright scholarship), would be the best possible impulse for finishing my PhD thesis. From this perspective, my case is specific because I knew Professor Roth in advance, and I knew that he would be happy to write the invitation letter for me (this proved to be an advantage because the letter had to be re-written several times, always with a short turnaround time).

I was at a stage in my career when I also knew that starting in January 2023, I would no longer receive a PhD stipend and that I would have to find other resources to finish my studies. For these reasons, I wrote various research proposal sketches and refined my CV throughout 2022. In August 2022, I returned from an Erasmus+ traineeship at the Institute Vienna Circle in Austria, and I slowly started filling out an online application for the Fulbright-Masaryk stipend (deadline: November).

This period of time (three months) was enough for me because I already had a rough-and-ready research proposal which, moreover, substantially mirrored the topic of my PhD thesis. I appreciated that I had complete freedom to apply with a subject in which I was already interested as a PhD student and a pre-doctoral researcher. For the Fulbright-Masaryk scholarship, applicants must also prove that they are “not only outstanding experts in their scientific field but also active in the civic or public life of their institutions or communities, just like Tomáš Garrigue Masaryk.” I was pretty confident that I met these criteria, so after contacting the Fulbright Commission to hear their opinion, I simply added a “Public and Community Service Statement” to my CV emphasizing and putting into context my various past and present activities. Beyond writing the research proposal itself and securing the invitation letter from the US, the most time-consuming part of preparing the application was arranging three additional recommendation letters from colleagues and/or former instructors.

At the end of January, the Fulbright Commission informed me that I had passed the first two rounds of the selection process (meeting the formal requirements plus an anonymous review of my project by two experts in my field) and that I was invited to an in-person interview in February. The interview was relatively short (about fifteen minutes), and the committee consisted of Fulbright scholars from the US currently staying in the Czech Republic and others.

The interview was not, I felt, primarily concerned with my research proposal or my academic, scientific, or teaching achievements. It was about me as a person, citizen, and cultural ambassador, so the commission was primarily interested in my attitudes, visions, and future professional plans. Moreover, they were very interested in my practical plans related to moving to the US with my whole family and related to my research (e.g., if I had already checked the cost of living at my host institution). I recall four questions that were explicitly posed to me in this regard:

  1. Why the US? Why is it necessary to conduct your research in the US?
  2. Why did you choose this particular departure date for your research stay?
  3. What would you do if your mentor was ill or absent?
  4. What difficulties do you think you might experience in the US?

I had to wait until March to learn I had been awarded a Fulbright-Masaryk scholarship. I must say that I am delighted with how the Czech Fulbright Commission handles things and communicates; there are several handbooks and guidelines for us recipients of various Fulbright stipends, and the coordinators are very patient and swift in answering our questions and acknowledging any adjustments (I, for example, had to change my date of departure from October to July after discussions with Professor Roth). In May, all the scholars receiving a Fulbright for the 2023/2024 academic year had an informational meeting with Fulbright Commission coordinators and four former Fulbright scholars. This was a very friendly event (with free pizza!) during which we could ask any kind of question and raise any worries we had. Thus, I wholeheartedly recommend that anyone interested in conducting research or studying in the US apply for a Fulbright stipend.

As I already indicated, active researchers, scholars, teachers, and publicly involved people are halfway there, since the Fulbright Commission does appreciate this, regardless of one’s discipline or research interests. What is sometimes underestimated, I think, although the Fulbright Commission always mentions it, is that with the current rental rates in some US states, the monthly Fulbright stipend might not cover all your costs, especially if you want to move overseas with your family. Personal savings are, therefore, really required, at least for some destinations in the US. If you are considering applying for a Fulbright stipend and want some help, or just wish to chat about it all, please feel free to contact me. Here you can also find a case study for the Fulbright application written by Stephanie Krueger.

Prague, July 2023

Posted in Early Career Researchers, Funding Opportunity | Comments Off on My experience with applying for a Fulbright scholarship in the Czech Republic

AI and writing: much ado about generated essays

A Reddit/Twitter discussion thread on artificial intelligence (AI) and academic writing recently emerged, following a Reddit user’s claims to have used AI to write well-graded essays.

The Guardian picked up on this discussion with an article entitled “‘Full-on robot writing’: the artificial intelligence challenge facing universities.” The article provides background links on specific developments in AI writing and describes how universities are responding to new technological developments, noting how some institutions (the article focused on Australia) are treating such works as plagiarism in their policy statements. It poses the question of how educators should view current developments:

“To put the argument another way, AI raises issues for the education sector that extend beyond whatever immediate measures might be taken to govern student use of such systems. One could, for instance, imagine the technology facilitating a “boring dystopia”, further degrading those aspects of the university already most eroded by corporate imperatives. Higher education has, after all, invested heavily in AI systems for grading, so that, in theory, algorithms might mark the output of other algorithms, in an infinite process in which nothing whatsoever ever gets learned.

But maybe, just maybe, the challenge of AI might encourage something else. Perhaps it might foster a conversation about what education is and, most importantly, what we want it to be. AI might spur us to recognise genuine knowledge, so that, as the university of the future embraces technology, it appreciates anew what makes us human.”

Despite all the hand-wringing, an Inside Higher Ed piece written by the professor of a class (“Rhetoric and Algorithms”) outlines the results of an in-class experiment with AI tools, in which the professor encouraged undergraduate students to use as many AI tools as possible to create an essay. The professor found the overall quality of the results to be poor, but, perhaps more importantly for the overarching discussion of this topic, students did not like the process of using such tools:

“I asked my students to write short reflections on their AI essays’ quality and difficulty. Almost every student reported hating this assignment. They were quick to recognize that their AI-generated essays were substandard, and those used to earning top grades were loath to turn in their results. The students overwhelmingly reported that using AI required far more time than simply writing their essays the old-fashioned way would have. To get a little extra insight on the ‘writing’ process, I also asked students to hand in all the collected outputs from the AI text generation ‘pre-writing.’ The students were regularly producing 5,000 to 10,000 words (sometimes as many as 25,000 words) of outputs in order to cobble together essays that barely met the 1,800-word floor.”

The professor argues that good writers produce better AI output, noting also that such assignments can be used effectively to teach students about the writing, submission, and feedback process, with the tools providing immediate feedback that motivated students can use to learn. He argues that educators worried about plagiarism in their assigned essays can mitigate the risk of AI-generated work by making assignments very specific, and also notes that educators and university policymakers must take developments in this area into account:

“I am deeply skeptical that even the best models will ever really allow students to produce writing that far exceeds their current ability. Effective prompt generation and revision are dependent on high-level writing skills. Even as artificial intelligence gets better, I question the extent to which novice writers will be able to direct text generators skillfully enough to produce impressive results.”

I would tend to agree with this author, given the current state of technological affairs. I do wonder, though, how current plagiarism tools would be able to track AI-written content if it is not in a tool’s corpus of comparative texts, and what burdens this might impose on writing instructors in determining whether work is original.

And the more I deal with written texts, the more I feel that written assignments are crucial to quality higher education. The academic writing process, in my opinion, sharpens students’ skills in many areas, particularly if work is carefully reviewed by instructors with appropriate and constructive feedback. And I agree with the author of the second article that AI tools can be helpful learning tools (I myself use AI grammar and language tools for this purpose).

I do, however, worry about a world, as alluded to in the first article, in which journalistic content is written by AI. Rather than questioning the role of writing in higher education, perhaps we should question where and how AI (not just its written output) interacts with the real world, perhaps skewing perceptions.

Graham, S. S. (October 24, 2022). AI-Generated Essays Are Nothing to Worry About. Inside Higher Ed.

Sparrow, J. (November 18, 2022). ‘Full-on robot writing’: the artificial intelligence challenge facing universities. The Guardian.

Posted in Academic Integrity, Academic writing in English, Plagiarism, Science Education | Comments Off on AI and writing: much ado about generated essays

New grammar and language tools helpful, but do not replace clear ideas

Many students and colleagues I know, both native and non-native speakers, are eagerly embracing new grammar and language tools, some of which “learn” over time with artificial intelligence (AI). I myself use LanguageTool, a grammar, style, and spelling checker, as an “overlay” over Google Docs whenever I can, finding myself missing the supplementary tool when I use Microsoft Word. 

While such tools are useful, they (in my opinion) do not replace clear thought. I often tell non-native students that if they cannot express themselves well in their native language, none of the English writing tools will help them present their thoughts better in this second language. 

In addition to LanguageTool, my students and colleagues find the following tools of use:

I tried to find a comparison chart for these tools created by universities or libraries, but was unsuccessful. Various lists of the so-called best tools for 2022 (scroll past the paid content) are available in this sample search.

Stepping back, here is a helpful subject guide to editing and proofreading in English that includes a useful checklist.

Curtin University (2021). Editing and proofreading your assignment.

Posted in Academic writing in English, NCIP, Uncategorized | Comments Off on New grammar and language tools helpful, but do not replace clear ideas

NCIP enables participation in the HERMES project (“Strengthening Digital Resource Sharing during COVID and Beyond”)

Open, captioned video footage of the NTK NCIP-funded presentation by Dr. Stephanie Krueger in English on academic resource use cases at the PhD+ level is now available on YouTube as part of the HERMES project open learning channel. The thirty-seven-minute lecture and Q&A session covers use cases for doctoral students, early career researchers, and established researchers and explains gated and open resources useful for common tasks performed at each level. Live sessions, part of a pilot for the HERMES project, included audience members from the IFLA DDRS committee, bachelor and master students from Hacettepe University (Turkey), and members of the NTK Services team. Italian, Spanish, Arabic, and Turkish subtitles will be added over time, making the content even more accessible for learners.

Stephanie Krueger lecturing for HERMES on YouTube

Full video of presentation available at:

Krueger, S. (2022). Resource discovery: Use cases in the academic field.

Posted in Discovery, IFLA DDRS, NCIP, Open Access, Science Education | Comments Off on NCIP enables participation in the HERMES project (“Strengthening Digital Resource Sharing during COVID and Beyond”)

Tips for improving courses based on learning theory

AAC&U has provided helpful tips (including links to many useful resources) for instructors contemplating improving courses for students in the coming semester. Even if you’re a learning theory expert, these tips and resources can help you consider whether your courses are the best they can be.

Read more:

Demeter, E. (2021). Reflecting on Course Redesign: How Faculty Can Measure the Impact of Instructional Changes. Liberal Education Blog.

Posted in Uncategorized | Comments Off on Tips for improving courses based on learning theory

Planning for in-person instruction despite Delta: experiences of a small college

As universities plan for the coming semester, higher education administrators are thinking about what to do about Delta, taking various models into consideration while attempting to keep campuses open for in-person instruction. One small college describes its planning/modeling process:

…administrators believe they can bring the campus reproduction rate below 1 with a combination of vaccination and other measures, including entry testing, weekly surveillance testing for unvaccinated students, and a mask mandate. 

Other universities feel that vaccination rates are high enough to avoid such measures.

Read more:

Diep, F. (2021). Vaccination Alone Isn’t Enough to Keep the Virus Under Control This Fall, One Small College Warns. The Chronicle of Higher Education.

Posted in COVID & Higher Ed Strategy | Comments Off on Planning for in-person instruction despite Delta: experiences of a small college

Equipping students to deal with uncertainty

Many in the educational sector, including myself, have contemplated the value of the information we’re imparting to students over the past year. As we migrated to primarily online settings in many places, we had to revisit curricular ideas, course formats and plans, and learning goals and outcomes for our students. I personally have been surprised by how well small-group and individual coursework has been received by students, and I find the highly tailored and personalized settings and interactions ideal for fluidly reacting to the many external challenges and pressures faced by students. Together we have, regardless of course content, helped each other navigate turbulent times filled with uncertainty.

How do we better equip ourselves and our students or mentees to deal with adversity and uncertain environments? No one has the right answer yet, it seems, but I came across two recent short essays that helped me start sharpening the way I ask myself these questions and think about the future.

Flateby, T. L. & Rose, T.A. (2021). From College to Career Success: How Educators and Employers Talk about Skills.

Even as we map curricula to non-discipline-specific learning outcomes for graduates, such as the Association of American Colleges and Universities’ VALUE (Valid Assessment of Learning in Undergraduate Education) rubrics, are we sure the outcomes we in academia create align with the expectations of the places where graduates will work? Flateby and Rose (2021) describe early findings from a broader College to Career project, a small survey of line managers, which highlight gaps in graduates’ ability to think critically and communicate effectively:

Several managers observed that graduates need more experience exercising critical thinking skills throughout the curriculum and in more complex situations. Newer graduates often look for the “right” answer, the managers said, and provide employers with what they think they want to hear. Often, new graduates do not know how to proceed without direction.

For written communication, the line managers reported that newer graduates typically communicate in writing as though they are texting. Most of the written communication issues they identified pertained to a lack of audience awareness.

In response, Flateby and Rose suggest several immediate responses educators might take to address these issues in their assignments. 

But while audience awareness may be easier to incorporate into writing assignments, “critical thinking” remains, despite an articulated VALUE rubric, a rather nebulous concept.

Burke, T. (2021). An Unconvincing Argument for the Liberal Arts: We say we prepare students for undefined futures. Are they better for it? (NOTE: while this essay is freely available, it is behind a “data wall” and one must provide one’s contact information to view it.)

Burke discusses how hard it is to put our educational fingers on what we are guiding our students to think critically about, particularly in uncertain times:

Our assumptions about how to teach to uncertainty are mostly unexplored, and the empirical evidence of whether we do so successfully is debatable. To the degree that we are successful, we don’t really know why. Arguably, the capacity to navigate uncertainty has less to do with student learning than with the social capital and economic resources available to our graduates. This is where “preparation for uncertainty” lives alongside other reassuring concepts like “resilience,” “emotional intelligence,” or “grit.” These concepts may not be measuring teachable skills or habits of mind so much as access to money and social networks. Dealing with rapidly changing conditions is much easier if your parents can help with the rent or if you know someone who can get you in the door in a new line of work after your current gig closes down.

Many of us would answer “critical thinking” (which may be an equally leaky terminological boat). We’d likely assert that critical thinking suffuses our institutions in such a way that their graduates learn to view the world around them skeptically and provisionally, and that this in turn prepares them to adapt rapidly to changing economic and social conditions (and to help lead or direct processes of change for others). The major problem with this answer is that any curricular structure, any pedagogy, can likely and perhaps justifiably claim to be producing critical thinking and hence to be preparation for uncertainty. It’s so truistic and underspecified that it’s hard to be satisfied with it as an answer. Possibly, we could decompose “critical thinking” to far more specific epistemological and methodological commitments in various academic disciplines: the scientific method, thought experiments, close reading, etc., and get a better account of how to teach skepticism, provisional truth-making, and so on. Possibly.

Read Burke’s concluding thoughts about this topic, and read them again, particularly the entire last paragraph of his essay. His contemplations don’t provide a clear pathway towards the future, but they do crystallize some thoughts many of us in the educational sector have been considering these past few months:

Inhabiting the foundational uncertainty of the universe is one of the deepest challenges of human life. If we have insight into that, good. If we don’t, let’s work to develop that insight. But we mustn’t confuse this work with the drive to normalize the insecurity of our present moment. Our educational job there is different: We must teach our students to reject that project entirely.

Posted in COVID & Higher Ed Strategy, Doctoral Instruction, Science Education, Tech Ethics | Comments Off on Equipping students to deal with uncertainty

TIB Germany Launching Open Journal and Conference Services

TIB Open Publishing reaches a new stage in its development. Read more:

Tullney, M. (2021). TIB becomes Major Development Partner of the Public Knowledge Project (PKP).

Posted in Advanced Search Techniques, Open Access, Open Science, Research Showcasing, Science Communications Research, Science Gateways | Comments Off on TIB Germany Launching Open Journal and Conference Services