AI in science writing
The use of AI to assist and support science is a stated priority of UF (see link) and many other institutions, and is, moreover, unavoidable. In science writing, AI tools are now embedded in the web browsers we use (e.g., Edge), the software we work in (e.g., Copilot in Microsoft products, or Gemini now built into Google Docs), in how we revise and finalize documents (e.g., Grammarly), and in how we research throughout the entire process (e.g., various LLMs and other tools).
This does not have to be a negative. Like many technological advances before it, such as internet resources for finding citations, grammar-checking features in writing programs like Microsoft Word, or the widespread use of statistical software in place of manual computation, AI can significantly enhance both the quality of written products and the ease with which they are created.
However, it is essential to use AI resources properly to avoid issues of originality and intellectual ownership of created content. Many institutions, including leading universities, have published guidance on the appropriate use of AI in science and other forms of writing; a small sampling is provided in Links 1 and 2 below. In summary: (1) use AI as a writing support and assistant; early in the writing process it can be excellent for brainstorming, generating ideas, and developing a framework for your writing objective; (2) later in the process it can be very useful for tasks such as revising content, reformatting reference styles, or improving overall flow.
Overall, AI is a new and significantly different tool than anything that has existed until now, and it presents real challenges: it is difficult to detect reliably, numerous tools and approaches purport to bypass detection capabilities, and its appropriate use is rapidly changing and poorly defined in general. At the same time, AI offers enormous opportunities in science writing and communication, for example by giving non-native English speakers greatly enhanced capabilities to participate in English-based science venues and products. The key throughout is to use the many new AI tools properly, while recognizing that definitions of how they could or should be used will likely continue to evolve as rapidly as the technology itself grows in intelligence and capability.
1 - https://cte.ku.edu/ethical-use-ai-writing-assignments
2 - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11015711
3 - https://hai.stanford.edu/news/ai-detectors-biased-against-non-native-english-writers
4 - https://originality.ai/blog/ai-tools-vs-plagiarism
Cautionary example A
Although the use and results of AI detection systems are contentious (see Link 3 below) and often unreliable, the existence and potential increased use of such systems must be considered in your writing products, whether or not any form of correction, AI-based or otherwise, is employed. To illustrate this, I took a small subset of my 2012 dissertation abstract and ran it through a common AI detection website, https://gptzero.me. The results are provided below and highlight why this deserves consideration, if relevant to your work and products. Approaches to mitigate potential issues include avoiding grammar- and writing-correction software (AI- or LLM-based) entirely, or using it according to the principles described above while performing final checks with tools such as Turnitin, GPTZero, or the myriad similar services that now exist, followed by corrections, principally manual revision of the text, to ensure the results are best suited to your use case. It is worth noting that Grammarly is frequently recommended, and offered at a discount, by UF and other institutions, in particular as being valuable to ESL (English as a Second Language) students.
ChatGPT revision, generated with the prompt "Revise for flow and grammar":
This dissertation explores various aspects of forest degradation and regeneration in the tropics, examining these processes across multiple spatial and temporal scales. The research spans from large-scale studies of the Brazilian Amazon to detailed analyses at the level of individual leaves, and it includes timeframes ranging from minute-by-minute microclimate measurements to multi-decadal remote sensing assessments. Understanding the causes and effects of forest degradation requires crossing traditional disciplinary boundaries, and this work integrates a range of methods and concepts to investigate forest dynamics across diverse study systems. A central focus of this dissertation is the development and integration of new remote sensing techniques. These techniques involve a variety of remote sensors, including satellite-based Landsat and Quickbird sensors, as well as the Carnegie Airborne Observatory (CAO), which combines airborne LiDAR with hyperspectral imagery. Additionally, Chapter 5 specifically addresses issues related to forest dynamics following the abandonment of swidden agriculture.