I'm no longer in academia but I was asked to review a paper recently which was similar to one of my final papers, so I accepted.
The paper was written by four Chinese authors who had previously collaborated. It was one of the worst papers I've ever seen. It went over the same ground as previous studies, so in that sense there was nothing new, but more bizarrely it did things like state well-known equations incorrectly (most notably one of Maxwell's equations) and drift off onto total tangents about how one might calculate certain things in an idealised situation that had no relevance to the method the authors actually used. My assumption is that they had generated the methods section with generative AI and not gone over it with even a cursory check.
But the worst part is that I recommended rejecting it outright, and the journal just sent it on to its slightly less prestigious sister journal rather than doing so.
This is not surprising. People are going to seek the path of least resistance. I suspect many non-English-speaking academics can now get published under their own names using LLMs as translators. My question is: are LLMs making papers easier to read? Some authors make it a point to say what they want to say in the least understandable manner. That's been the case in the past. Are things changing?
Curious: how is the use of AI in papers currently being detected?
From the article, I saw that they're using "excess words" as an indicator. Is that a reliable method?
Also, is it possible that it's just autocorrect that added the "excess words" when fixing grammar? If that's the case, should it still count as "use of AI"?
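For what it's worth, my rough mental model of an "excess words" check is: compare each word's frequency in recent abstracts against a pre-LLM baseline and flag words that show up far more often than the baseline predicts. This is only a guess at the idea, not the article's actual method; the toy data, threshold, and function names below are made up.

```python
from collections import Counter

def word_frequencies(abstracts):
    """Fraction of abstracts in which each (lowercased) word appears at least once."""
    counts = Counter()
    for text in abstracts:
        for word in set(text.lower().split()):
            counts[word] += 1
    total = max(len(abstracts), 1)
    return {word: c / total for word, c in counts.items()}

def excess_words(baseline, target, ratio=2.0):
    """Words whose frequency in `target` is at least `ratio` times their baseline frequency."""
    base = word_frequencies(baseline)
    curr = word_frequencies(target)
    floor = 1.0 / max(len(baseline), 1)  # treat words unseen in the baseline as at most this frequent
    scored = []
    for word, freq in curr.items():
        expected = max(base.get(word, 0.0), floor)
        if freq >= ratio * expected:
            scored.append((word, round(freq / expected, 2)))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy data, purely illustrative: "delves" (and, tellingly, filler like "into")
# appears far more often than the pre-LLM baseline predicts. A real analysis
# would need a much larger corpus and filtering of common stop words.
pre_llm = ["we study tumor growth", "we measure tumor response", "results show tumor growth"]
post_llm = ["this study delves into tumor growth",
            "this work delves into tumor response",
            "our analysis delves into growth dynamics"]
print(excess_words(pre_llm, post_llm))
```

A signal like this can only say "something shifted the vocabulary", which is why the autocorrect/grammar-tool question above seems fair to me.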
I decided to look at it more closely. It is massive: lots of papers analyzed, but only the abstract of each.
One way you could look at it is: some authors used an LLM to create the abstract from the paper's contents. "Hey, there are a lot of new books with AI covers."
Another way is: there could be a correlation between LLM usage in the abstract and LLM usage during the production or writing of the paper. "Hey, I wonder if this book with an AI cover was also written by an AI. It should be investigated."
This reminds me of auto-generated git commit messages. I can't fathom that someone would go to the effort of authoring a PR and then not bother to describe what they did. Unless, of course, they didn't actually go through the effort of authoring the PR, and may not even be fully aware of what's in there. I've stopped giving thorough code reviews to coworkers who can't use code generation responsibly, and often they haven't reviewed the code themselves. Heck, I've been giving my PRs self-reviews since long before AI.
I don't get this take. If I work through a feature with an LLM and then task it with creating conventional atomic commits to persist the work, that's no different from generating any other documentation. Same for the PR description. Now, you want to make sure it isn't slop or purple prose and doesn't have any emojis, but other than that, commits and PR descriptions are an entirely valid automation target.
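To make that concrete, I mean something like the sketch below: the LLM only drafts a Conventional Commits message from the staged diff, and a human still reads and edits it before committing. The `ask_llm` helper is a placeholder for whatever client you use, not a real API.

```python
import subprocess

def staged_diff() -> str:
    """Diff of whatever is currently staged."""
    return subprocess.run(
        ["git", "diff", "--staged"],
        capture_output=True, text=True, check=True,
    ).stdout

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM client you actually use."""
    raise NotImplementedError("wire this up to your model of choice")

def draft_commit_message(diff: str) -> str:
    prompt = (
        "Write a Conventional Commits message for this diff: "
        "a type(scope): summary line, a blank line, then a short body. "
        "No emojis, no filler.\n\n" + diff
    )
    return ask_llm(prompt)

if __name__ == "__main__":
    # Print the draft so the author can review and edit it, e.g. by
    # saving it to a file and running `git commit -e -F <file>`.
    print(draft_commit_message(staged_diff()))
```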
While I am wary of AI being used to pump out crappy papers with bad science, I will say that many academics can be good in their fields while being quite bad at communicating clearly. I don't think it's a bad thing for a good scientist to use AI to take a genuine and scientifically interesting draft paper and improve the writing so that it's clearer to the reader.
I just think that, for the next few years at least, there should be some sort of disclosure of how a paper used AI.