Scientific discourse is considered one place where certain kinds of truth can be presented as accurately as possible: regardless of whether they conform to the prevailing orthodoxies, whether they are truths most people want to hear, and whether they agree with political ideologies. It used to be the case that most scientific discourse concerned matters which did not directly and immediately interest either the general public or, to a lesser extent, even the powers that be. And so scientists were able to pursue their research with only tolerable hindrance from the circumstances and the people among whom they lived and worked.
This started changing when the modern Industrial, and then Corporate, establishment — apart from the state — not only developed huge stakes in scientific research, but started funding most of it, and no longer merely for courtly splendour as in the age of old feudalism. With funding came control. Simultaneously, with the neoliberal/neoconservative dominance of the world, government funding for independent research started diminishing at an ever increasing rate. This inevitably meant that the scientific community came under the heavy influence of state and corporate actors.
In the 21st century, this influence is transforming into an ever tighter form of control over not just what research is carried out, but how, to what end, and even whether it produces ‘desirable’ results.
The Pandemic of 2020 has made this phenomenon of tight control over scientific research both more widespread and more visible. With it, however, has come (perhaps fittingly) an extremely shrill rhetoric of “You don’t believe in science?!” and “Science says so and so”, where so and so could very obviously be a debatable matter (or not: it doesn’t make a difference). In other words, science is becoming more like religion, both in terms of concepts like heresy, blasphemy and blind (or at least uncritical) belief, and in terms of censorship of expression, even scientific expression. Genuine scientific debates are becoming more like theological conflicts, as the science wars about the Pandemic have revealed.
This is also the time when Artificial Intelligence (AI) is all the rage. It is being touted as the Silver Bullet to solve all of humanity’s problems, current and future. No wonder, then, that AI too is in serious danger of becoming a theology and a church, rather than a science and a technology. Perhaps the best example of this is the recent case of a paper on the ethics of AI, co-authored by mainstream AI ethicists and researchers, which Google asked one of its authors to retract. Timnit Gebru, the co-lead of Google’s ethical AI team, was a co-author of the paper. She has since left her job rather than agree to retract the paper. Many researchers cannot afford to do that, and the paper may yet be published, but the case is still unprecedented.
I had my own experience with scientific censorship recently. I have been working on a paper about the impossibility of humanoid artificial intelligence, but I could not think of a suitable venue for it, since it seems to go against one of the most dearly held ideas about AI: that true humanoid AI is not only possible, but inevitable. The draft was written in a semi-formal style, using arguments against the possibility of humanoid AI that are analogous to the arguments philosophers have used for and against the possibility of a Single Supreme God. In my view, building humanoid AI will require AI as a whole to become a Single Supreme God, at least as far as human affairs are concerned. The arguments centred around the distinction between Micro-AI and Macro-AI.
Then I came across an unusual research workshop at the most well known AI conference (Neural Information Processing Systems, or NeurIPS 2020), titled ResistanceAI. It invited papers and even media, including submissions not in an academic form or format. It seemed perfect to me, so I decided to submit my draft to this workshop. It is common practice now to post such drafts (preprints) on arXiv, the best known scientific archive and preprint hosting site; I have already posted several papers there. Since such preprint sites are meant for archival purposes, they do not put papers through a peer review process, as that happens anyway when the paper is submitted to a peer reviewed venue. Usually, a paper is posted directly after a kind of sanity check. Sometimes, however, arXiv puts a paper through moderation, which usually involves reclassification of the paper under suitable categories. In very rare cases, a paper can be removed. The reasons for such removal are supposed to be:
- Unrefereeable content
- Inappropriate format
- Inappropriate topic
- Duplicated content
- Rights to submit material
- Excessive submission rate
Based on the descriptions of these reasons given on their moderation page, none of them apply in any way to my draft. I had submitted the paper on 8th October 2020. I first received a mail saying it would be ‘announced’ (that is, posted) the next day. Then, on 14th October 2020, I received a mail saying that the paper had been ‘put on hold’. Initially I assumed this must be for reasons of reclassification. However, on the same day, I received another mail saying the paper had been removed. The mail said:
Dear arXiv user,
Our moderators have determined that your submission is not of sufficient interest for inclusion within arXiv. The moderators have rejected your submission after examination, having determined that your article does not contain sufficient original or substantive scholarly research.
As a result, we have removed your submission.
Please note that our moderators are not referees and provide no reviews with such decisions. For in-depth reviews of your work, please seek feedback from another forum.
Please do not resubmit this paper without contacting arXiv moderation and obtaining a positive response. Resubmission of removed papers may result in the loss of your submission privileges.
For more information on our moderation policies, see:
The reason given (“your article does not contain sufficient original or substantive scholarly research”) was itself a kind of review, which is not supposed to figure among the reasons for removal: the closest listed ground, duplicated content, means direct duplication, not extending existing ideas. The stated reason can also reasonably be interpreted as saying simply that some references were missing from the paper — in other words, it was a kind of feedback on the paper, which arXiv is not supposed to give.
This came right before the deadline for submission at the ResistanceAI workshop. So I added a few of the missing references, given the page limit of four pages. The paper was, however, rejected at the workshop, although I did receive a review of the paper. Note that one of the reasons for removal from arXiv is “unrefereeable content”. So, clearly, the paper was not unrefereeable.
The review from the workshop is given below:
2. Please provide constructive feedback to the authors
This paper address some timely questions about what we might expect the “Singularity” to look like. Unfortunately, section three–the meat of the paper–is somewhat difficult to follow. Rather than listing many different arguments, it may be more helpful to focus on a subset of these arguments and explain how they are related. As currently written, it is difficult to understand the argument and how it reaches the conclusions that “Singularity at the level of Micro-AI is impossible” and that a Singularity at the “Macro-AI level” would be an existential threat to human intelligence.
3. Please give this submission a score
2. Please provide constructive feedback to the authors
1/ The paper, while looking at the impact of a hypothesized ‘Macro AI’ on human beings in the future, ignores the issues that AI technology is causing in the present.
2/ In particular, it fails to inspect and analyze the material impact that AI is already causing in the lives of human beings, whether or not it is a ‘humanoid’ AI which is doing that.
3/ Overall, the paper does not fit the theme of the workshop — which has more to do with how AI concentrates power in the hands of a few, rather than hypothesizing about the future of AI and what that means for humanity, without grounding it in a material analysis.
3. Please give this submission a score
Although I at least received reviews of the paper, the reasons given here are highly questionable, particularly in light of the fact that the workshop accepted not just papers, but also poems, rants, essays etc., and even an anonymous submission, which is never the case at a research venue. In particular, the reviewer statement, “ignores the issues that AI technology is causing in the present”, does not make sense. In a four page paper on a topic like this, how can one include a survey of harms already being done by AI? I have, in the past, written at least one paper on such harms, which is (ironically) hosted on arXiv. That paper was rejected without review from the conference where it was submitted, simply because I failed to notice that, at the last moment before submission, the paper had exceeded the four page limit by two or three (one-column) lines.
I had then two options, apart from working further on the paper and submitting it to another peer reviewed venue. One was to appeal the decision by arXiv, which I might still do, and the other was to post the draft on some other preprint site. I found two alternatives for the second option. One was the PhilSci Archive for preprints in philosophy of science. The second was HAL Archive.
I posted on both of them. The draft was again rejected, this time from the PhilSci Archive, with the following reason given:
Unfortunately the item could not be accepted into PhilSci-Archive. The item lies outside the range of material suitable for PhilSci-Archive. We regret that because of the volume of material posted, the archive cannot enter into correspondence concerning submissions that have been refused.
This may be debatable, since it seems to me the paper is well within the scope of philosophy of science.
The preprint has finally been accepted by the HAL Archive, after they asked me to first post a paper already published in a scientific journal ‘in order to establish a confidence contract’, which sounds reasonable.
I am working on improving the draft, with the possibility of submitting it to another venue, preferably peer reviewed. However, in the fifteen years since I first published a peer reviewed paper, this has been the strangest case of rejection by multiple venues: not just by peer review, but by two different preprint sites, one of which (PhilSci) does not even have a moderation process, according to its own policy.
Even so, this is not the first case of strange rejection that I have experienced from peer reviewed venues. Until recently, such cases could be attributed to the inherently imperfect nature of the peer review process, but now something seems to be clearly going on beyond that, as the Google case shows, if not also the case of my paper.