Mar 26

The AI Hypocrisy in Science Research

Being an astrophysics graduate, I’ve always kept a close eye on the broader scientific community, and building software for stargazers involves a lot of data, logic, and computational problem-solving, so I naturally have an interest in computational physics. Recently, while experimenting with a new physics model I created in my spare time, I discovered that independent researchers are treated very unfairly when they try to contribute to modern science. Whether we’re citizen scientists, software engineers, or self-funded researchers working outside a university, we all seem to hit the same wall… the lack of academic affiliation and the use of AI.

Any independent researcher who even dares to submit to a traditional journal will be instantly rejected if they so much as hint that a large language model helped them. The very real problem of AI-generated rubbish, or “AI Slop”, is being used as a convenient excuse to permanently shut the door on anyone without a university affiliation. And before you even get as far as a journal, there’s virtually no chance of getting an endorsement to publish on a preprint server like arXiv either. The AI Slop stigma now attached to any indie researcher means most academics won’t even open your email. Yes, we all know much of the AI Slop really is automated AI submissions or crackpot quasi-spiritual metaphysics essays, but something desperately needs to be done to give academic parity to qualified independent researchers.

What makes this even more frustrating is the staggering double standard… While independent researchers are being heavily policed for any AI usage, the establishment is openly embracing it for themselves. Earlier this year, a collaboration between physicists from several elite universities and a major AI company produced a breakthrough theoretical physics paper on gluons, in which they used a frontier AI model to grind through complex mathematical proofs for hours on end. Strikingly, the authors didn’t even try to hide their use of AI… on the very first page they openly state that the key formula for the amplitude was first conjectured by GPT-5.2 Pro and then proved by a new internal OpenAI model. An independent researcher would face immediate rejection for using a language model, and their paper would be labelled AI Slop; yet when an elite group admits that an AI quite literally derived and proved their core theoretical physics formula, it sails through the gates as a celebrated breakthrough.

So when an institutional team uses AI to do the heavy mathematical lifting, it’s celebrated as a paradigm shift. But when an indie developer uses AI to do the exact same thing on their laptop, it’s dismissed at the editorial desk as “unaffiliated slop.” The academic research community isn’t just anti-AI, it’s anti-outsider.

Physicist Sabine Hossenfelder recently highlighted this hypocrisy in a video about the state of academic publishing, “AI Is About to Break Science… Then Save It”. In the video she reports that publishers have a financial incentive to publish high volumes of papers, as long as the authors (usually funded by university grants) can pay the fee. This means academia is using AI as a printing press to game the system for grant money while also using it to keep science a closed shop.

This hypocrisy leaves independent researchers in a tough spot. They lack the supercomputers and institutional funding, and if they just post their theories on social media, they risk well-funded university groups scooping up their open-source code, scaling it up on better hardware, and claiming the discoveries as their own. So how do they get around it? The indie science community has to bypass the walled garden entirely… Instead of begging journal editors for peer review, many are minting their own permanent Digital Object Identifiers (DOIs) on open-science platforms like Zenodo (which is operated by CERN), and putting their simulation engines and data models directly on GitHub, letting the open-source community compile their code, stress-test their algorithms, and effectively act as peer reviewers. It’s a harder path, but it’s genuine open science.
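For anyone curious what that workflow looks like in practice, here’s a minimal sketch of the first step, creating a new deposition through Zenodo’s public REST API, which reserves the record that gets a DOI. The endpoint and metadata fields come from Zenodo’s developer documentation; the token, title, and creator name below are placeholder values you’d replace with your own:

```python
import json
import urllib.request

ZENODO_API = "https://zenodo.org/api/deposit/depositions"

def build_deposit_request(title, description, creators, token):
    """Build (but don't yet send) the POST request that creates a new
    Zenodo deposition — the first step toward minting a DOI."""
    payload = {
        "metadata": {
            "title": title,
            "upload_type": "software",  # simulation engines count as software
            "description": description,
            "creators": [{"name": name} for name in creators],
        }
    }
    return urllib.request.Request(
        f"{ZENODO_API}?access_token={token}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical example — sending it with urllib.request.urlopen(req)
# returns JSON that includes the reserved DOI for your record.
req = build_deposit_request(
    title="N-body Simulation Engine v1.0",
    description="Open-source gravitational N-body code and test data.",
    creators=["Doe, Jane"],
    token="YOUR-ZENODO-TOKEN",
)
```

After that you upload your files, hit publish, and the DOI becomes permanent and citable, no journal editor required.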

Even though my undergraduate thesis was published in the Monthly Notices of the Royal Astronomical Society, that earns me no respect from the academic community. I could have gone down the academic route, but in some ways I’m glad I didn’t, as I’m now free to explore physics from any angle without being tied to a particular research group’s agenda. The downside is that even with a past publication in one of the most respected astronomy journals in the world, I’m still effectively locked out because I no longer have an institutional affiliation. I never quite realised how much of a closed shop academia really is until I started exploring the world of research from the outside.

The irony of all this is quite clear… With the advent of AI, this should be a golden age for independent research. The tools have never been more accessible or more powerful. Yet the establishment would rather use AI as an excuse to lock outsiders out, while quietly using it to bolster their own work. It reminds me a lot of what’s happened to indie musicians… They now have incredible technology in their home studios, production tools that would have cost a fortune twenty years ago, yet they still can’t make a proper living from their music because Spotify and Apple Music have handed all the power to the major labels. Different industry, same story: the gatekeepers keep changing the rules to suit themselves.

Whether you’re coding apps, building computational models or writing your own music, I think the lesson is the same: real innovation often has to come from the outside, built from the ground up and standing on its own merit, without waiting for the establishment’s stamp of approval.

Keep building, and keep looking up! 👾🔭

PS. This article was created with the help of AI 👽