Scientific discourse is considered one place where certain kinds of truth can be presented as accurately as possible, regardless of whether they conform to the prevailing orthodoxies, whether they are truths most people want to hear, and whether they agree with political ideologies. It used to be the case that most scientific discourse was on matters which did not directly and immediately interest or concern the general public or, to a lesser extent, even the powers that be. And so scientists were able to pursue their research with only tolerable hindrance from the circumstances and people in which and among whom they lived and worked.
This started changing when the modern Industrial, and then Corporate, establishment (apart from the state) not only developed huge stakes in scientific research but started funding most of it, and no longer merely for courtly splendour, as was the case in the age of old feudalism. With funding came control. Simultaneously, with the neoliberal/neoconservative dominance of the world, government funding for independent research started diminishing at an ever-increasing rate. This inevitably meant that the scientific community came under the heavy influence of state and corporate actors.
In the 21st century, this influence is turning into an ever tighter form of control over not just what research is carried out, but how, to what end, and even whether it produces ‘desirable’ results.
The Pandemic of 2020 has made this phenomenon of tight control over scientific research both more widespread and more visible. With it, however, has come (perhaps fittingly) an extremely shrill rhetoric of “You don’t believe in science?!” and “Science says so and so”, where so and so may very obviously be a debatable matter (or not: it makes no difference). In other words, science is becoming more like religion, both in terms of concepts like heresy, blasphemy and blind (or at least uncritical) belief, and in terms of censorship of expression, even scientific expression. Genuine scientific debates are becoming more like theological conflicts, as the science wars about the Pandemic have revealed.
This is also the time when Artificial Intelligence (AI) is all the rage. It is being touted as the Silver Bullet that will solve all of humanity’s problems, current and future. No wonder, then, that AI too is in serious danger of becoming a theology and a church, rather than a science and a technology. Perhaps the best example of this is the recent case of a paper on the ethics of AI, co-authored by mainstream AI ethicists and researchers, which Google asked one of its authors to retract. Timnit Gebru, the co-lead of Google’s ethical AI team, was a co-author of the paper. She has since left her job rather than agree to retract the paper. Many researchers cannot afford to do that, and the paper may yet be published, but the case is unprecedented all the same.
I had my own experience with scientific censorship recently. I have been working on a paper about the impossibility of humanoid artificial intelligence, but I could not think of a suitable venue for it, since it seems to go against one of the most dearly held ideas about AI: that true humanoid AI is not only possible, but inevitable. The draft was written in a semi-formal style, using arguments against the possibility of humanoid AI that are analogous to the arguments philosophers have long used for and against the possibility of a Single Supreme God. In my view, building humanoid AI will require AI as a whole to become a Single Supreme God, at least as far as human affairs are concerned. The arguments centred around the distinction between Micro-AI and Macro-AI.
Then I came across an unusual research workshop at the best-known AI conference (Neural Information Processing Systems, or NeurIPS 2020), titled ResistanceAI. It invited papers and even media, including those not in an academic form or format. It seemed perfect to me, so I decided to submit my draft to this workshop. It is common practice now to post such drafts (preprints) on the best-known scientific archive and preprint hosting site, arXiv, and I have already posted several papers there. Since such preprint sites are meant for archival purposes, they do not put papers through a peer review process, as that will happen anyway when the paper is submitted to a peer-reviewed venue. Usually, the paper is posted directly after a kind of sanity check. Sometimes, however, arXiv puts a paper through moderation, which usually involves reclassifying the paper under more suitable categories. In very rare cases, a paper can be removed. The reasons for such removal are supposed to be:
- Unrefereeable content
- Inappropriate format
- Inappropriate topic
- Duplicated content
- Rights to submit material
- Excessive submission rate
Based on the description of these reasons given on their moderation page, none of them applies in any way to my draft. I had submitted the paper on 8th October 2020. I first received a mail saying it would be ‘announced’ (that is, posted) the next day. Then, on 14th October 2020, I received a mail saying that the paper had been ‘put on hold’. Initially I assumed this must be for reasons of reclassification. However, on the same day, I received another mail saying the paper had been removed. The mail said:
Dear arXiv user,
Our moderators have determined that your submission is not of sufficient interest for inclusion within arXiv. The moderators have rejected your submission after examination, having determined that your article does not contain sufficient original or substantive scholarly research.
As a result, we have removed your submission.
Please note that our moderators are not referees and provide no reviews with such decisions. For in-depth reviews of your work, please seek feedback from another forum.
Please do not resubmit this paper without contacting arXiv moderation and obtaining a positive response. Resubmission of removed papers may result in the loss of your submission privileges.
For more information on our moderation policies, see:
The reason given (“your article does not contain sufficient original or substantive scholarly research”) is itself a kind of review, which is not supposed to figure among the reasons for removal, since ‘duplicated content’ means direct duplication, not the extension of existing ideas. The reason can plausibly be read as saying simply that some references were missing from the paper, which would make it a kind of feedback on the paper, something arXiv is not supposed to give.
This came right before the submission deadline for the ResistanceAI workshop. So I added a few of the missing references, within the four-page limit. The paper was, however, rejected by the workshop, although I did receive reviews of it. Note that one of the reasons for removal from arXiv is “unrefereeable content”. So, clearly, the paper was not unrefereeable.
The review from the workshop is given below:
2. Please provide constructive feedback to the authors
This paper addresses some timely questions about what we might expect the “Singularity” to look like. Unfortunately, section three (the meat of the paper) is somewhat difficult to follow. Rather than listing many different arguments, it may be more helpful to focus on a subset of these arguments and explain how they are related. As currently written, it is difficult to understand the argument and how it reaches the conclusions that “Singularity at the level of Micro-AI is impossible” and that a Singularity at the “Macro-AI level” would be an existential threat to human intelligence.
3. Please give this submission a score
2. Please provide constructive feedback to the authors
1/ The paper, while looking at the impact of a hypothesized ‘Macro AI’ on human beings in the future, ignores the issues that AI technology is causing in the present.
2/ In particular, it fails to inspect and analyze the material impact that AI is already causing in the lives of human beings, whether or not it is a ‘humanoid’ AI which is doing that.
3/ Overall, the paper does not fit the theme of the workshop — which has more to do with how AI concentrates power in the hands of a few, rather than hypothesizing about the future of AI and what that means for humanity, without grounding it in a material analysis.
3. Please give this submission a score
Although I at least received reviews of the paper, the reasons given here are highly questionable, particularly in light of the fact that the workshop accepted not just papers, but also poems, rants, essays etc., and even an anonymous submission, which is never the case at a research venue. In particular, the reviewer’s statement that the paper “ignores the issues that AI technology is causing in the present” does not make sense. In a four-page paper on a topic like this, how can one include a survey of the harms already being done by AI? I have, in the past, written at least one paper on such harms, which is (ironically) hosted on arXiv. That paper was rejected without review from the conference where it was submitted, simply because I had not noticed that, just before submission, it had (at the last moment) exceeded the four-page limit by two or three (one-column) lines.
I then had two options, apart from working further on the paper and submitting it to another peer-reviewed venue. One was to appeal arXiv’s decision, which I might still do; the other was to post the draft on some other preprint site. I found two alternatives for the second option. One was the PhilSci-Archive, for preprints in the philosophy of science. The second was the HAL archive.
I posted on both of them. The draft was again rejected from the PhilSci Archive, giving the following reason:
Unfortunately the item could not be accepted into PhilSci-Archive. The item lies outside the range of material suitable for PhilSci-Archive. We regret that because of the volume of material posted, the archive cannot enter into correspondence concerning submissions that have been refused.
This may be debatable, since it seems to me the paper is well within the scope of philosophy of science.
The preprint has finally been accepted by the HAL Archive, after they asked me to first post a paper already published in a scientific journal ‘in order to establish a confidence contract’, which sounds reasonable.
I am working on improving the draft, with the possibility of submitting it to another venue, preferably a peer-reviewed one. However, in the fifteen years since I first published a peer-reviewed paper, this has been the strangest case of rejection by multiple venues: not just by peer review, but by two different preprint sites, one of which (PhilSci) does not even have a moderation process, according to its own policy.
Even so, this is not the first case of strange rejection that I have experienced from peer reviewed venues. Till recently, it could be attributed to the inherently imperfect nature of the peer review process, but now it seems to be clearly going beyond that, as the Google case shows, if not also the case of my paper.
Baby zebra named Hope
Born weeks into the pandemic
Died after fireworks were set off
(Courtesy: USA Today, as seen on Google News)
[A Rough Draft of a Work-in-progress.]
The idea of machines which are almost identical to human beings has been so seductive that it has captured the imaginations of the best minds as well as laypeople for at least a century and a half, perhaps more. Right after Artificial Intelligence (AI) came into being, it was almost taken for granted that we would soon enough be able to build Humanoid Robots. This has also led to some serious speculation about ‘transhumanism’. So far, we do not seem to be anywhere near this goal. It may be time now to ask whether it is even possible at all. We present a set of arguments to the effect that it is impossible to create or build Humanoid Robots or Humanoid Intelligence, where the said intelligence can substitute for human beings in any situation where human beings are required or exist.
1 Humanoid Intelligence, the Singularity and Transhumanism
Before we proceed to discuss the terms in the title of this section and the arguments in the following sections, we first define the foundational terms with some degree of concision and precision:
1. Human Life: Anything and everything that the full variety of human beings are capable of, both individually and collectively. This includes not just behaviour or problem solving, but the whole gamut of capabilities, emotions, desires, actions, thoughts, consciousness, conscience, empathy, creativity and so on within an individual, as well as the whole gamut of associations and relationships, and social, political and ecological structures, crafts, art and so on that can exist in a human society or societies. This is true not just at any given moment, but over the life of the planet. Perhaps it should include even spiritual experiences and ‘revelations’ or ‘delusions’, such as those hinted at in the Philip K. Dick story, Holy Quarrel [Dick et al., 1985].
2. Humanoid: A living and reproducing entity that is almost identical to humans, either with a human-like body or without it, on a different substrate (inside a computer).
3. Intelligence: Anything and everything that the full variety of human beings are capable of, both individually and collectively, as well as both synchronically and diachronically. This includes not just behaviour or problem solving, but the whole of life as defined.
4. The Singularity: The technological point at which it is possible to create (or have) intelligence that is Humanoid or better than Humanoid.
5. Transhumanism: The idea that, after the Singularity, we can have a society that is far more advanced, for the better, than current and past human societies.

From 1910 to 1927, in the three volumes of Principia Mathematica [Whitehead and Russell, 1925–1927], Whitehead and Russell set out to prove that mathematics is, in some significant sense, reducible to logic. This turned out to be impossible when Gödel published his incompleteness theorems in 1931 [Sheppard, 2014, Nagel et al., 2001]. In the early days of modern Computer Science, in and before the early 1930s, it would have been easy to assume that a computing machine would ultimately solve any problem at all. This too proved to be impossible, with Turing’s undecidability theorem [Hopcroft et al., 2006] and the Church-Turing thesis of computability [Copeland and Shagrir, 2018]. Since then, other kinds of problems have been shown to be undecidable.
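The flavour of Turing’s limitative result can be illustrated with a small sketch. No total procedure can decide halting in general; the best any actual program can do is simulate another program for a bounded number of steps and report either “halted” or “don’t know”. The toy below is my own construction for illustration (the generator-based “programs” and the `bounded_halts` helper are hypothetical names, not from any library):

```python
def loops_forever():
    # A "program" modelled as a generator: each yield is one step.
    while True:
        yield

def counts_to(n):
    # A program that halts after n steps.
    def prog():
        i = 0
        while i < n:
            i += 1
            yield
    return prog

def bounded_halts(prog, steps):
    """A *partial* stand-in for a halting decider: simulate prog()
    for at most `steps` steps. Turing's theorem says no total,
    always-correct version of this function can exist; a bounded
    simulation can answer "halted", but never "never halts"."""
    g = prog()
    for _ in range(steps):
        try:
            next(g)
        except StopIteration:
            return True   # definitely halts
    return None           # unknown: may halt later, or never

print(bounded_halts(counts_to(5), 100))   # True
print(bounded_halts(loops_forever, 100))  # None
```

The gap between `True` and `None` is exactly the gap the diagonal argument exploits: any claimed total decider can be fed a program built to do the opposite of whatever the decider predicts about it.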
Now that we are supposed to be close enough to the Singularity [Kurzweil, 2006] that it may happen within the lifetime of a large number of human beings, perhaps it is time to ask ourselves whether real intelligence, in particular Humanoid Intelligence (as defined above), is possible at all. We suggest that there are enough arguments to ‘prove’ (in an informal sense) that it is impossible to build, to create or to have Humanoid Intelligence. We argue that even though the Singularity is indeed possible, perhaps even very likely (unless we stop it), it may not be what it is supposed to be. The conjecture presented here is that the Singularity is not likely to be even benign, however powerful or advanced it may be. This follows from the idea of the impossibility of Humanoid Intelligence.
2 Some Notes about the Conjecture
We have not used the term theorem for the Impossibility, and the reasons for this should be evident from the arguments that we present. In particular, we do not, and perhaps cannot, use formal notation for this purpose. Even the term conjecture is used in an informal sense. The usage of terms here is closer to legal language than to mathematical language, because that is the best that can be done here. This may be clearer from the Definition and the Story arguments. For similar reasons, the term ‘incompleteness’ is not used; instead we use impossibility, which is more appropriate for our purposes here, although Gödel’s term ‘essentially incomplete’ is what we are informally arguing for about Humanoid AI, and perhaps AI in general. No claim is made as to whether or not a formal proof is possible in the future. What we present is an informal proof. This proof has to be centred around the distinction between Micro-AI (AI at the level of an intelligent autonomous individual entity) and Macro-AI (very large intelligent autonomous systems, possibly encompassing the whole of humanity or the world). To the best of our knowledge, such a distinction has not been proposed before. While there has been some work in this direction [Brooks, 1998, Signorelli, 2018, Yampolskiy, 2020], for lack of space we are unable to explain how our work differs from previous such works, except by noting that the argumentation and some of the terms are novel, a bit like the arguments for or against the existence of God, a question that has been debated by the best of philosophers again and again over millennia and which, as we will see at the end, is relevant to our discussion.
3 The Arguments for the Impossibility Conjecture for Micro-AI
The Definition Argument: Even Peano Arithmetic [Nagel et al., 2001] is based on three undefined terms (zero, number and is successor of), which are relatively trivial terms compared to the innumerable terms required for AI (core terms like intelligence and human, or terms like the categories of emotions, let alone terms like consciousness).
The Category Argument: A great deal of AI is about classifying things into categories, but most of these categories (e.g. anger, disgust, good or bad) have no scientifically defined boundaries. This is related to the following argument.
The Story Argument: It is almost established now that many of the essential concepts of our civilisation are convenient fictions or stories [Harari, 2015] and these often form categories and are used in definitions.
The Cultural Concept Argument: Many of the terms, concepts and stories are cultural constructs. They have a long history, most of which is unknown, without which they cannot be modelled.
The Individuality, or the Nature Argument: An individual intelligent autonomous entity has to be unique and distinct from all other such entities. It originates in nature, and we have no conception of how it can originate in machines. We are not even sure what this individuality exactly is. However, all through history, we have assigned some degree of accountability to the human individual, and we have strict provisions for punishing individuals on this basis. This indicates that we believe in the concept of the ‘self’ or the ‘autonomous individual’, even when we deny its existence, as is becoming popular today.
The Genetic Determinism Argument: Individuality is not completely determined by nature (e.g. by our genes) at birth or creation once and for all. It also develops and changes constantly as it interacts with the environment, preserving its uniqueness.
The Self-organising System Argument: Human beings and human societies are most likely self-organising [Shiva and Shiva, 2020] and organic systems, or complex, non-equilibrium systems [Nicolis and Prigogine, 1977]. If so, they are unlikely to be modelled for exact replication or reproduction.
The Environment, or the Nurture Argument: Both intelligence and individuality depend on the environment (that is, on nurture). Therefore, they cannot be modelled without completely modelling the environment, i.e., going for Macro-AI.
The Memory, or the Personality Argument: Both intelligence and individuality are aspects of personality, which is known to depend on the complete life-memory (conscious and unconscious) of an intelligent being. There is not enough evidence that it is possible to recover or model this complete temporal and environmental history of memory. A lot of our memory, and therefore our individuality and personality, is integrally connected with our bodily memories.
The Substrate Argument: It is often taken for granted that intelligence can be separated from its substrate and planted on a different substrate. This may be a wrong assumption. Perhaps our intelligence is integrally tied to its substrate, and it is not possible to separate the body from the mind, following the previous argument.
The Causality Argument: There is little progress in modelling causality. Ultimately, the cause of an event or occurrence is not one but many, perhaps even the complete history of the universe.
The Consciousness Argument: Similarly, there is no adequate theory of consciousness, even for human understanding. It is very unlikely that we can completely model human consciousness, nor is there a good reason to believe that it can emerge spontaneously under the right conditions (and which conditions would those be?).
The Incompleteness/Degeneracy of Learning Source and Representation Argument: No matter how much data or knowledge we have, it will always be both incomplete and degenerate, making it impossible to completely model intelligence.
The Explainability Argument: Deep neural networks, which are the state-of-the-art for AI, have serious problems with explainability even for specific isolated problems. Without it, we cannot be sure whether our models are developing in the right direction.
The Test Incompleteness Argument: Perfect measures of performance are not available even for problems like machine translation. We have no idea what will be the overall measure of Humanoid Intelligence. It may always be incomplete and imperfect, leading to uncertainty about intelligence.
The Parasitic Machine Argument: Machines completely depend for learning on humans and on data and knowledge provided by humans. But humans express or manifest only a small part of their intelligent capability. So machines cannot completely learn from humans without first being as intelligent as humans.
The Language Argument: Human(oid) Intelligence and its modelling depend essentially on human language(s). There is no universally accepted theory of how language works.
The Perception Interpretation Argument: Learning requires perception and perception depends on interpretation (and vice-versa), which is almost as hard a problem as modelling intelligence itself.
The Replication Argument: We are facing a scientific crisis of replication even for isolated problems. How could we be sure of replication of Humanoid Intelligence, preserving individual uniqueness?
The Human-Human Epistemic Asymmetry Argument: There is widespread inequality in human society, not just in terms of money and wealth, but also in terms of knowledge and its benefits. This will not only be reflected in modelling, but will make modelling harder.
The Diversity Representation Argument: Humanoid Intelligence that truly works will have to model the complete diversity of human existence in all its aspects, most of which are not even known or documented. It will have to at least preserve that diversity, which is a tall order.
The Data Colonialism Argument: Data is the new oil. Those with more power, money and influence (the Materialistic Holy Trinity) can mine more data from others, without sharing their own data. This is a classic colonial situation and it will hinder the development of Humanoid Intelligence.
The Ethical-Political Argument: Given some of the arguments above, and many others such as data bias, potential for weaponisation etc., there are plenty of ethical and political reasons that have to be taken into account while developing Humanoid Intelligence. We are not sure whether they can all be fully addressed.
The Prescriptivisation Argument: It is now recognised that ‘intelligent’ technology applied at large scale not only monitors behaviour, but changes it [Zuboff, 2018]. This means we are changing the very thing we are trying to model, and thus laying down new mechanical rules for what it means to be human.
The Wish Fulfilment (or Self-fulfilling Prophecy) Argument: Due to prescriptivisation of life itself by imperfect and inadequately intelligent machines, the problem of modeling of Humanoid Intelligence becomes a self-fulfilling prophecy, where we end up modeling not human life, but some corrupted and simplified form of life that we brought into being with ‘intelligent’ machines.
The Human Intervention Argument: There is no reason to believe that Humanoid Intelligence will develop freely of its own and will not be influenced by human intervention, quite likely to further vested interests. This will cripple the development of true Humanoid Intelligence. This intervention can take the form of secrecy, financial influence (such as research funding) and legal or structural coercion.
The Deepfake Argument: Although we do not yet have truly intelligent machines, we are able to generate data through deepfakes which are not recognisable as fakes by human beings. This deepfake data is going to proliferate and will become part of the data from which the machines learn, effectively modeling not human life, but something else.
The Chain Reaction Argument (or the Law of Exponential Growth Argument): As machines become more ‘intelligent’ they affect more and more of life and change it, even before achieving true intelligence. The speed of this change will increase exponentially and it will cause a chain reaction, leading to unforeseeable consequences, necessarily affecting the modelling of Humanoid Intelligence.
4 The Implications of the Impossibility
It follows from the above arguments that the Singularity at the level of Micro-AI is impossible. In trying to achieve it, and to address the above arguments, the only possible outcome is some kind of Singularity at the Macro-AI level. Such a Singularity will not lead to the replication of human intelligence or its enhancement, but to something totally different. It will, most probably, lead to the extinction (or at least the subservience and servitude) of human intelligence. To achieve just Humanoid Intelligence (Human Individual Micro-AI), even if nothing more, the AI system required will have to be nothing short of the common notion of a Single Supreme God. Singularity at the macro level will actually make the AI system, or whoever is controlling it, whether an individual or a (most probably small) collective, a Single Supreme God for all practical purposes, as far as human beings are concerned. But this will not be an All Powerful God, nor a Kind God, for it will be Supreme only within the limited scope of humanity and what humanity can affect, and it will be kind only to itself, or perhaps not even that. It may be analogous to the God in the Philip K. Dick story Faith of Our Fathers [Dick and Lethem, 2013], or to the Big Brother of Orwell’s 1984 [Orwell, 1950]. We cannot be sure of the outcome, of course, but those are as likely outcomes as any others. That is reason enough to be very wary of developing Humanoid Intelligence and any variant thereof.
Philip K. Dick, Paul Williams, and Mark. Hurst. I hope I shall arrive soon / Philip K. Dick ; edited by Mark Hurst and Paul Williams. Doubleday New York, 1st ed. edition, 1985. ISBN 0385195672.
Alfred North Whitehead and Bertrand Russell. Principia Mathematica. Cambridge University Press, 1925–1927.
Barnaby Sheppard. Gödel’s Incompleteness Theorems, page 419–428. Cambridge University Press, 2014. doi: 10.1017/CBO9781107415614.016.
E. Nagel, J.R. Newman, and D.R. Hofstadter. Gödel’s Proof. NYU Press, 2001. ISBN 9780814758014. URL https://books.google.co.in/books?id=G29G3W_hNQkC.
John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman. Introduction to Automata Theory, Languages, and Computation (3rd Edition). Addison-Wesley Longman Publishing Co., Inc., USA, 2006. ISBN 0321455363.
B. Jack Copeland and Oron Shagrir. The church-turing thesis: Logical limit or breachable barrier? Commun. ACM, 62(1):66–74, December 2018. ISSN 0001-0782. doi: 10.1145/3198448. URL https://doi.org/10.1145/3198448.
Ray Kurzweil. The Singularity Is Near: When Humans Transcend Biology. Penguin (Non-Classics), 2006. ISBN 0143037889.
Rodney Brooks. Prospects for human level intelligence for humanoid robots. 07 1998.
Camilo Miguel Signorelli. Can computers become conscious and overcome humans? Frontiers in Robotics and AI, 5:121, 2018. doi: 10.3389/frobt.2018.00121. URL https://www.frontiersin.org/article/10.3389/frobt.2018.00121.
Roman V. Yampolskiy. Unpredictability of ai: On the impossibility of accurately predicting all actions of a smarter agent. Journal of Artificial Intelligence and Consciousness, 07(01):109–118, 2020. doi: 10.1142/S2705078520500034.
Y.N. Harari. Sapiens: A Brief History of Humankind. Harper, 2015. ISBN 9780062316103. URL https://books.google.co.in/books?id=FmyBAwAAQBAJ.
V. Shiva and K. Shiva. Oneness Vs. the 1 Percent: Shattering Illusions, Seeding Freedom. CHELSEA GREEN PUB, 2020. ISBN 9781645020394. URL https://books.google.co.in/books?
G. Nicolis and I. Prigogine. Self-Organization in Nonequilibrium Systems: From Dissipative Structures to Order Through Fluctuations. A Wiley-Interscience publication. Wiley, 1977. ISBN 9780471024019. URL https://books.google.co.in/books?id=mZkQAQAAIAAJ.
Shoshana Zuboff. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. 1st edition, 2018. ISBN 1610395697.
P.K. Dick and J. Lethem. Selected Stories of Philip K. Dick. Houghton Mifflin Harcourt, 2013. ISBN 9780544040540. URL https://books.google.co.in/books?id=V1z9rzfTb2EC.
George Orwell. 1984. Tandem Library, centennial. edition, 1950. ISBN 0881030368. URL http://www.amazon.de/1984-Signet-Classics-George-Orwell/dp/0881030368.
Of galba, not of rubble

You’ve heard of gadgeteers, haven’t you?
No, not gazetteers, brother,
it’s gadgeteers we’re talking about.
Well, gadgets at least you must have heard of,
not the government gazette kind,
nor the named-entity kind either,
the phone kind of gadget,
and the camera kind of gadget,
and the laptop kind too,
those are the gadgets I’m talking about.
The thing is, of what I have,
the house is overflowing with them all,
and most of what I have of them
has by now turned to rubble,
so much money drowned in it all.
The matter stands like this, brother:
some folks have taken me for an enemy,
why they have, I don’t know,
so here I stand in penury,
with gadget rubble piling up and up.
I have given it the name ‘galba’,
because in some past life I read
a novel by that Mohan Rakesh,
and that is what lit the bulb in my head.
And one more thing, I’m telling only you,
not to be repeated to anyone else, understood?
There was a time I too had a taste for gadgets,
though I was dirt poor, a pauper even then,
and that Rajiv fellow was talking of taking
the country into the twenty-first century, into the globe.
That talk, as you now know, has spread all over the world nicely;
if anyone has escaped this flood, I for one cannot see them.
The arrangement in Delhi, I hear, is to gazettify everything,
belts tightened over bellies, resolved, law in hand;
even the elections, I hear, have all been gazettified;
if the gazette troubles you, find a corner and chant Ram’s name,
for there is no other option now that the world is a globe.
So I too was swept along in that current back then,
in Rajiv’s time, I mean, when I was a student,
for I was an engineer too, after all,
though my real taste, as you well know,
was for all that reading and writing, like Rakesh.
So now I count myself the owner of galba;
I keep my precious galba locked up very tight,
but those who took me for an enemy, and are responsible for my galba,
their hearts have still found no peace at my penury,
meaning every other day there is some new game to harass me.
Now can anyone tell me why all this enmity?
Then step up, brother, and explain to me what this matter
really is. Have I done anyone any harm?
Then say it plainly; perhaps some outcome will emerge,
and perhaps their hearts will cool, and mine too.
To the learned folk, a small entreaty:
let there be some disclosure in this matter, from both sides,
or else, brother, I shall count it an injustice.
And you people, I hear, are forever struggling
against every kind of injustice,
so do I not count in your court?
Hello, yes, do tell …
It seems that these days everyone is saying that the world is undergoing a radical change, and rightly so. It may be that the reasons for saying so and the motivations behind it span the whole of social, political, moral, economic and technological spectrum. It is also widely recognised that this change has been underway for at least two decades now. During much of this period, one has been following discussions on various kinds of forums such as mailing lists, group discussions and open digital publication venues, including blogs.
More recently, one has been following (and to some extent participating in) this particular forum*. Going through discussions like those on this forum on the one hand, and some other usual kinds of forums on the other, one can’t help observing that:
* Don’t form an opinion about the forum based on a couple of posts, as there is a wide variety of people on it.
1. If the people in a forum are only after totally selfish gains, solidarity consolidates extremely rapidly to the lowest point possible, as if enabled by gravity. It is like collective free fall that does not even harm the people involved in it, as they get a kind of immunity and can say or do things with impunity. It is like leaping down a cliff collectively. Of course, there will be a crash at some point, but things can move from one crash to another as if nothing happened, as long as life itself doesn’t become totally impossible for everyone on the planet.
2. If, on the other hand, the people in a forum are motivated by completely or mostly unselfish concerns, it is extremely hard to achieve even the minimum level of solidarity, climbing against all odds, as if against gravity, a bit like a group of people trying to fly together. Even if a good degree of solidarity is established, it comes at a great cost. And it can fall apart quite easily.
This has become more true in the last two or three decades, as we enter the hyper-digital age. One can think of it as binarisation of politics and of all the social and political (and other) issues of life on the planet. Some examples are given below.
The world, it seems, is divided into binary classes and all you have to do (in fact the only thing you are allowed to do in terms of political decision making) is to perform instant binary classification on all the individuals and groups in the world. Some example binary classes are:
– Those who are against J. K. Rowling and those who are not
– Those who are in favour of ‘fighting the virus’ and those who are not
– Those who are against Putin and those who are not
– Those who are strictly in favour of masks and those who are not
– Those who are against the new Michael Moore film and those who are not
– Those who are against Israel and those who are not
– Those who support Israel and those who don’t
– Those who are in favour of the ‘cancel culture’ and those who are not
– Most importantly, for the last four years, those who are against the Reality TV POTUS and those who are not
These are like binary constraints and after a point it becomes impossible to satisfy all the constraints in any way at all. Everyone is forced into innumerable binary classes, because if you are not in one class (that is, declare yourself into one class), then you are, by definition, in the other class. As a result, the good kind of solidarity becomes impossible. It may still be achieved, but only by collectively ignoring the existence of some or many such constraints and collectively pretending they don’t exist. This naturally implies implicit sacrifice from a large number of people who are affected by these ignored constraints, who are usually already the people most at disadvantage as far as life on the planet is concerned. It should be pointed out that most of the ignored constraints, in reality, are not binary.
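The constraint-satisfaction intuition here can be made concrete with a small sketch (the issue names and demands below are invented for illustration, not taken from any of the examples above): with even a few jointly contradictory binary demands, no consistent position remains.

```python
from itertools import product

# Hypothetical binary "issues"; each must be answered +1 (for) or -1 (against).
issues = ["A", "B", "C"]

# Each constraint is a predicate over a full assignment of stances.
# These three are deliberately contradictory: together they admit no position.
constraints = [
    lambda s: s["A"] == 1 or s["B"] == 1,   # you must back A or B
    lambda s: s["A"] == -1,                 # but you must oppose A
    lambda s: s["B"] == -1,                 # and you must oppose B
]

def satisfiable(constraints, issues):
    """Brute-force check: does any stance satisfy every constraint?"""
    for values in product([1, -1], repeat=len(issues)):
        stance = dict(zip(issues, values))
        if all(c(stance) for c in constraints):
            return True
    return False

print(satisfiable(constraints, issues))  # no consistent stance exists
```

Dropping any one of the three demands restores satisfiability, which mirrors the point above: solidarity becomes possible only by collectively ignoring some of the constraints.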
There is a name for this phenomenon and it is well known: polarisation. It was always there, but the difference is that, in the hyper-digital age, the binaries are not about complicated matters like the interactions between global social welfare, human rights, justice, truth and sustainable growth. They are like being against J. K. Rowling or not, and so on.
How do we deal with this hard, hard problem of solidarity for unselfish purposes without sacrificing a (large?) number of people? This is perhaps the biggest challenge facing us, if we stick to both truth and justice (not one or the other).
The irony is that this is happening at a time when a consensus is emerging (rightly) all over the globe against a specific kind of binarisation which had existed for ages: Gender binaries.
Why do binaries exist? Why do they proliferate? Why do they dominate?
One can try to answer in common sense terms, using informal logic and common sense psychology.
One reason is a deterministic view of the world, but that alone does not explain it, as even that view allows for non-binaries.
Another reason, which seems obvious, runs along the same lines as the question of why religions, full of superstition, originated.
In a world that they could not understand and were afraid of, human beings tried to make sense of things. As a secular view of reality became more and more popular and established, this need did seem to decrease with scientific and technological developments. However, these developments, along with social, political and economic developments (or regressions), brought about radical changes in societies.
At this point, in the 21st century, we have reached a situation where, due to Reality Shows and Social Media (among other things), it is more and more possible to manipulate the perception of what reality is, thus making it difficult to make sense of the world again. More so, one could say, than even in prehistoric days.
So once more we look for certainties where none exist, at least as far as known human knowledge is concerned. Perhaps none exist in reality.
Every belief in total certainty about any non-trivial matter usually gives rise to a new binary opposition, perhaps more than one. Sometimes binary oppositions are created through diktats. When faced with any complicated matter which leads to some kind of fear(s), a perhaps natural response of those in power (i.e., those with the blessings of the materialistic Holy Trinity, even if they claim divine blessings), particularly those with regressive minds, is to issue a diktat. A common kind of diktat is to ban something, to prohibit something, as if by that act alone the problems that give rise to the fear(s) will magically disappear.
Strict binary oppositions are very much like using diktats to ban things, even if the motives are driven by the urge to achieve truth and justice.
So again, in a world full of deadly uncertainties, we seek refuge in creating artificial certainties of our own.
If we are secular, we might even try to use science to justify these artificial certainties, working backwards with logic and evidence.
One way to deal with uncertainties is to abandon all principles and become totally cynical, as some ideologies and their followers do.
Another way is to ignore uncertainties and pretend they don’t exist, that everything has been worked out by groups of some seemingly superhuman people with some authoritative labels.
Still another way is to stick to the principles and at the same time face the uncertainties of life. This is much more difficult and it imposes a great deal more responsibility on us.
It is true that such responsibility is too much for us, but the question is: should we still face it? Because that is the ‘path of truth’. So far so good, because if we only care for the truth, then it is still relatively easy to make good enough decisions and to act on them. But if we care equally for justice (recognising the fact about the uncertainties even there), then it is much more difficult to make decisions, to act upon them, to explain them and to justify them. This is often called, in the age of neoliberalism and neoconservatism, ‘policy paralysis’. This is supposed to characterise the total inability to act, as if by just taking some action rapidly, any action, even radical action, we would have solved the problem. This is the “do something, anything, *now*!” philosophy/ideology, which has an infamous historical record. It even has a name: Kissingerism, as described so well by Greg Grandin in Kissinger’s Shadow.
One is not suggesting that all those creating these strict binaries are followers of Kissingerism. The truth is, whether we like it or not, this calamitous ideology has seeped into our global social, political and economic fabric, and is corroding that fabric quite fast. No political faction seems to be immune to this societal toxin. It has affected even arts and literature. One can argue it is not an ideology, but a meta-ideology. And a dangerously fallacious one.
Do something (specific) now is never the only option. There is always an obvious alternative: do something else. Or do something later. Or both. Statistically speaking, it is common sense to say that if we have a strict binary opposition between doing something and not doing it, then, all things being equal (which is the case when we don’t know *exactly what* to do), doing that specific something is likely to be more dangerous than not doing it. It should be emphasised that not doing something (specific) is very different from doing nothing. You can always do something else. Or do something later. Even doing nothing at a certain moment or for a certain duration can sometimes be far better than doing anything at all right then. This is true of individuals, but it is more true of collectives, because collective action has much bigger consequences. This is, perhaps, a lesson for achieving sustainability, as even some regressive people understand. So do many progressive people, but less so now. This common sense should not be mistaken for ‘historical imperative’.
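The statistical common sense in the paragraph above can be illustrated with a toy simulation (the payoffs, states and threshold here are invented assumptions, not data): when we don't know what to do, a forceful specific action is far more likely to end in a large loss than waiting is, even if, on average, the two come out the same.

```python
import random

random.seed(0)

def outcome(act: bool) -> float:
    """Toy model: the world is in an unknown state; a forceful specific
    action helps if it happens to match the state and hurts badly if not.
    Doing nothing (for now) only incurs a small drift either way."""
    state = random.choice([+1, -1])       # the unknown we cannot observe
    if act:
        guess = random.choice([+1, -1])   # acting without knowing *what* to do
        return 10.0 if guess == state else -10.0
    return random.uniform(-1.0, 1.0)      # small cost or benefit of waiting

def big_losses(act: bool, trials: int = 10_000) -> float:
    """Fraction of runs ending in a large loss (worse than -5)."""
    return sum(outcome(act) < -5.0 for _ in range(trials)) / trials

print(big_losses(act=True))   # roughly half the runs end in disaster
print(big_losses(act=False))  # waiting never produces a large loss here
```

The point of the sketch is not that waiting is always right, only that under symmetric ignorance the blind specific action carries all of the catastrophic risk.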
Coming back to well-intentioned people, perhaps naturally (?) we shy away from taking the last way, the most difficult one. And so we take refuge either in cynicism (as opposed to skepticism), or in artificial certainties (maybe for the Greater Good).
But science says there is no justice in nature, doesn’t it? I don’t agree with that. Why? That is for another day.
As I posted the above comment early this morning, a shout of “O Chhakke!” (“Hey, *untranslatable*!”) this evening, loud and clear enough for me to hear inside my house, and full of contempt, reminded me that my statement about an emerging global consensus against gender binaries was perhaps an overstatement. Or not very accurate.
The *untranslatable* Hindi word (also used in many other South Asian languages) is the foulest word used by homophobic and transphobic people, and it is used very commonly. A bit like ‘faggot’, but more offensive. Some other English words or terms similar to it are ‘fudge packer’, ‘pouf’ and ‘fairy’, but they are all unambiguous, and less offensive. The main offence here lies in the knowledge of the impunity that the word provides, and therefore the helpless humiliation it causes.
The word literally means a ‘sixer’, the cricketing term for when the batsman hits the ball over the boundary without it touching the ground, earning six ‘runs’, the maximum you can earn off a single ball. It is a word that can be used in normal conversation, but also as an expletive. Like other common expletives, for example the four letter f* word in English, it has many meanings, and its meaning is fluid under different circumstances.
It may even be possible to write an academic paper in Linguistics or Sociolinguistics about it, like that famous paper on the word (?) ‘OK’. Perhaps the word originated in card games, or became common due to them. Or it may have a relation to, yes, number theory. The logic seems to be this. There are ten basic numbers in the decimal system: 1 to 10. The number six, even though it may be called the first Perfect Number in number theory, is seen as the middle number. It perhaps then got associated with the ‘middle sex’, or the third sex. That is why the closest translation of this word in English is ‘eunuch’, and its closest synonym in Hindi is ‘hijra’ (made famous recently by Arundhati Roy in her novel The Ministry of Utmost Happiness), which also translates directly to eunuch or hermaphrodite. However, since there were no terms in common usage in Indian languages (as far as I know) for other non-binary genders, these highly pejorative words are used for all people who identify as (which is rare) or are seen as belonging to any non-binary gender. So, these words are also used for homosexuals and for effeminate or impotent men.
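The number-theoretic aside, at least, is easy to verify: 6 is a perfect number (a number equal to the sum of its proper divisors), and the smallest one.

```python
def is_perfect(n: int) -> bool:
    """A perfect number equals the sum of its proper divisors."""
    return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

# 6 = 1 + 2 + 3 is the smallest perfect number; the next is 28 = 1 + 2 + 4 + 7 + 14.
print([n for n in range(1, 30) if is_perfect(n)])  # [6, 28]
```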
Apart from the literal meaning, in which it is used relatively rarely, it is more commonly used as a slur, to insult someone or even a whole community. Communities abuse each other with these terms. However, while the word ‘hijra’ is clearly an insult when used this way, the c* word can be used in the normal course of a conversation as a dog whistle. Certain kinds of dog whistles are more hurtful and dangerous than actual unambiguous expletives.
In the context of this article, the word can be seen as a manifestation of the dangers of having strict binary oppositions. If you don’t belong to one of the two genders, then you are outside the genders, or belong to the third gender (or sex). That makes you fair game for everyone. You didn’t join either of the allowed binary categories, so you are a danger to society and will be treated as such, even more harshly than the members of the opposing binary category (think of misogyny).
You do, however, have the option to join one of the categories. Since the third (or fourth or fifth, or a scale-based) category is not allowed, you can save yourself from social condemnation and censure (abuse, even violence) by joining one of the categories (as per your ‘biological gender’) by going through the necessary ritual: getting married. Once married, you are, so to say, one of us. This is why the criticism against J. K. Rowling has validity. But cancelling her is another matter.
How does it concern me personally? That is a long story that has to be told some other time.
I understand personally how this word (or any other word like this) hurts. Should the word be banned? I think that would be counterproductive. If you send ideas — dangerous ideas — underground, they have a way of coming back at us in unexpected ways, and then we may not have any defences against them. Just as words like the c* word reduce a human being to a single trait or tendency, a binary based on whether someone uses this word or not would also reduce people to a single trait or tendency, and is not a good idea. Something similar applies to J. K. Rowling, in spite of her latest defiant action of announcing her new novel as a kind of revenge (or justification?) for the criticism against her.
If we ban certain things, people are likely to find ways around them. It takes time for deep seated prejudice to *really* go away. The c* word has multiple senses and it is hard to ban, as India is a cricket crazy country and hitting a sixer is the ultimate momentary action in a game, like getting someone ‘out’ on a ball. It invites the loudest cheers. This point is related to the idea that it is perhaps impossible to ban dog whistles, because they are born out of the ambiguity of language and of the interpretation of linguistic expression. It is also about one way that impunity works.
Even though we can’t ban the above, it is still offensive and hurtful. People still need to realise this. Related to this is the point about how widespread homophobia and transphobia are in our region.
Even so, while the strict gender binary still continues in some places, those most vociferously fighting against this binary are also creating their own strict binary oppositions, believing in often non-existent certainties. When you do that, you force those people who don’t fit neatly in either of the binary categories into a catch all ‘illegitimate’ category, just like in the case of the c* word.
Here I am in 2020. And here is The Will that was written in 2009:
(Damn Hitchcock. May he rot in hell, eternally. Perhaps he is.)
Hope someone volunteers to be the faithful executioner of The Will. If you do, please stick to the letter and spirit of the document, kind of poetic though it is. Hope you don’t mind it: the poetic part.
Two minutes silence not necessary, but if you insist … get it done at the drain where The Will is fulfilled. Better still, cheat on the two minutes. And steal a laugh or two, as one knows from experience it is difficult not to do so.
It could be any drain. The more stinky, the better, unless you can’t bear that smell. Or those smells. In that case, please just find the one you can bear, i.e., one as stinky as you can bear.
Signed in full sanity,
Dated: 30th June, 2020
Place: Varanasi (not really Kashi, but there is no harm in pretending, if you so wish)
(But The Will applies to any relevant place.)
Note: This is a serious document. Don’t take it lightly. That is, if you volunteer to be the executioner of The Will. Otherwise, of course, you can. That’s your freedom of expression, short of gaslighting.
Aim: Confirming that stimulus and reward change behaviour.
Participant (Subject): A Human Lab Rat. Consent not necessary, as that avoids chances of bias.
Target Behaviour: Waking up early based on the stimulus.
Stimulus: Holding a gun to the head of the participant and threatening to shoot them if they don’t wake up at the pre-decided time.
Expected Response: The participant wakes up. It may not happen the first time, as they might not understand or believe that the threat is real. But ultimately, as it is made clear to them that the threat is, indeed, real, they will wake up at the intended time. The intermediate steps might involve hitting them on the head with the gun with increasing force or frequency with each passing day.
Reward: The hitting on the head is the reward. The ultimate reward is shooting in the head. This is useful if you have spectators, either physically or virtually. These are negative rewards (punishments). There might also be positive rewards, which could be anything. One low cost reward can be designed like this: hit the participant on the head arbitrarily at any time of the day. Rewards can then mean decreasing the force or the frequency of this hitting on the head.
Result: The participant (subject) wakes up on providing the stimulus.
The above is a crude experiment, a kind of thought experiment, as it is possible only in certain settings such as physical concentration camps. A more realistic experiment is given below, which has become possible with the latest developments in technology, as we move towards the technological Utopia of the 2030s.
The Realistic Experiment
Aim: Confirming that stimulus and reward change behaviour.
Participant (Subject): A Human Lab Rat. Consent not necessary, as that avoids chances of bias.
Target Behaviour: Waking up early based on the stimulus.
Stimulus: Holding a ray gun that produces painful levels of radiation (radio frequency, electric field, magnetic field or any combination of these: ionizing radiation should be avoided, but can be used in exceptional cases) or physical Dog Whistles based on ultrasound or infrasound to any part of the body of the participant, and pushing the button on the emission device if they don’t wake up at the pre-decided time.
Expected Response: The participant wakes up. It may not happen the first time, as they might ascribe the pain and the discomfort to some illness or other transient problem. They may blame themselves or their bodies. Even when they finally realise the cause, they might not understand or believe that the threat is real. But ultimately, as it is made clear to them that the threat is, indeed, real, they will wake up at the intended time. The intermediate steps might involve radiating them (with electromagnetic or sonic pulses) with increasing intensity/power or frequency with each passing day.
Reward: The electromagnetic or sonic radiation on various parts of the body is the reward. The ultimate reward is *__redacted__*. This is useful if you have spectators, either physically or virtually. These are negative rewards (punishments). There might also be positive rewards, which could be anything. One low cost reward can be designed like this: hit the participant on any part of the body or the whole body with radiation (electromagnetic or sonic) arbitrarily at any time of the day. Rewards can then mean decreasing the force or the frequency of this hitting on the body or body parts.
Result: The participant (subject) wakes up on providing the stimulus.
Many experiments have been conducted based on the second design and they have produced (and reproduced) the expected results with exceptionally high accuracy. The results have been released in certain forums. The forum membership is strictly by invitation only. The results may be released publicly at an appropriate time.
The same results can be obtained even after reversing the genders.
And the results are far more diabolical when the individual mademoiselle is replaced with a collective mademoiselle. Or monsieur, or any other gender on the spectrum, because the phenomenon is gender-neutral.
The results are already quite diabolical due to the effect of the collective gravitating towards the individual evil, but they become exponentially more diabolical when the evil itself is collective and an even bigger collective gravitates towards the collective evil.
The above is an example of the malignant type of this phenomenon.
In a highly organised social collection of individuals, as we have in our world at a global scale, individual evil is (at the worst) like a cancerous cell. There exists what we call cancer only when there are a very large number of such cancerous cells. Individual cancerous cells can’t do much damage.
Even a small group of cancerous cells is usually benign. Unless, of course, the collective gravitates towards it.
Here is the benign type of the same, that is, some of the seeds of it, lest we forget completely, shown in a very much sanitised version:
We all carry some seeds of individual evil: some more, some less. Most of these seeds are supposed to lie dormant and they often do. They are there, at least partially, for evolutionary reasons. There are more than enough technologies of power (in the Foucauldian sense) to keep individual evil in check (but also keep individual good in check if it conflicts with the interests of the powers that be).
The problem is, these same technologies of power create and facilitate collective evil and/or make the collective gravitate towards it for reasons of their own (such as The Greater Good or The Higher Cause, whichever way these causes are defined, which may not be really good or higher).
So, yes, in that sense it is more a political matter, less a psychological matter.
Who decides what is Good or Higher? Who decides who decides? The collective? Those who represent the collective? Those who claim to represent the collective? Those who have the power to decide on behalf of the collective? Those who have the power and just pretend to decide on behalf of the collective? Those who convince the collective that they are deciding on behalf of the collective or for the good of the collective?
To convert a mainly political matter into a totally psychological matter has always been a tactic dear to socio-political establishments to maintain their power and to maintain the status quo (or to change it to their interests), particularly to totalitarian systems such as the Stalinist Soviet Union or the Maoist China or Nazi Germany. That is what the Re-education Camps and Gulags were for, in terms of the justification given for their existence.
There is no reason why a Capitalist Establishment can’t or won’t use this tactic.
We do know for sure about the use of medical ‘treatment’ for gender-related ‘illnesses’ or ‘disorders’ or ‘diseases’. That is not a Conspiracy Theory. People — good people, nice people — genuinely hated and dreaded the people with such ‘illnesses’ or ‘disorders’ or ‘diseases’, to the extent that we hate pedophiles, for example. In many societies, such gender related phobias (is that the right word, considering what I just said about the psychological and the political?) are still the norm. Not just phobias (or whatever the right term is), there are still laws enforcing them.
The one below is a less benign case of the same phenomenon, hinting towards the malignant form:
This one, like the others, shows the pushes and pulls (well, technically only pulls) of gravitation between entities, both good and evil, whether in the same person or not, and also (more importantly) between the individual evil and the collective evil. The political here is much more explicit. The psychological is just what humans are. The political is what humans have made for themselves, collectively. That last one is the keyword.
In that case, are there some Special Ones or Chosen Ones, or is the Higher or the Good for everyone?
In the fight between good and evil, the evil always has the upper hand. This is almost a cliche. But also in the fight between the individual evil and the collective evil, the latter is a guaranteed winner.
The collective just brushes aside the individual good. And it crushes the individual evil as a giant can crush a little thing. It does that only when the interests between the two don’t align well. Otherwise, they can get along just fine. That is part of how the world works.
There is less evil in a room with a view. A room at the top, however, is a very different matter. The evil there is immeasurably more.
The room at the top is the control centre of the technologies of power. An evil Mademoiselle or a Monsieur is just the kind of asset that they need there.
Only as long as the interests align.
A room at the top comes, not only with a view, but with much evil, with or without the Mademoiselle or the Monsieur.
Ever since our fabulous victory
And after that a repeat triumph
We have been working very hard
But what you have seen so far
Is just a small trailer, a teaser
The full movie is yet to come
That kick you feel in the stomach
That sent you all reeling for long
Accompanied by a hit on the head
As well as our aim for the jugular
Is all just one of our action scenes
Don’t you know that poem?
This is just the beginning
Of our love affair with you
You just wait and watch to see
What is still waiting to happen