I have to move tomorrow morning, early by my standards. Whereas on the usual days, they don’t let me sleep till late. Sometimes they don’t let me sleep at all when I go to bed.
So after doing some last-minute packing, I lie in bed. And there is the same familiar congestion in the upper abdomen, probably affecting the lungs and esophagus. This causes difficulty in at least two ways. One is due to the lung being affected. The other is due to arrested flatulence caused by magnetic force, as I had mentioned in an earlier post about the symptoms of radiation, sonic and EM.
So I take the triaxial EMF meter and place it near me. No reading. Then I place it somewhat away from my chest. Still no reading. Finally I place it on my chest. And there it is: a reading that fluctuates and goes above 4 mG. That is, there is a reading only when the meter is exactly above the chest, near the lung and heart or esophagus area.
As I start taking photos of the meter readings, they stop. And immediately, the chest congestion goes away and the flatulence is relieved by burping (no point at this stage in worrying about embarrassment). I wait. No reading. Then I put the meter back a foot and a half away.
I lie down on the bed and the congestion starts in exactly the same way. It stays. So I pick up the triaxial meter again and repeat the above experiments. Exactly the same results.
The same thing happens, i.e., my taking photos stops the readings. So I put the meter back half a foot away. I lie down and the same congestion starts again. I pick up the meter, and the same results are replicated exactly.
My guess is that there is something in the body which allows radiation to be directed towards a particular part of the body, controlled by either AI or human torturers; it hardly matters which.
In case this seems implausible, there is serious research going on on bio-cyber-physical systems. And it is so far advanced that there is now research starting on the cyber security of bio-cyber-physical systems.
The idea of machines which are almost identical to human beings has been so seductive that it has captured the imaginations of the best minds as well as laypeople for at least a century and a half, perhaps more. Right after Artificial Intelligence (AI) came into being, it was almost taken for granted that soon enough we would be able to build Humanoid Robots. This has also led to some serious speculation about ‘transhumanism’. So far, we do not seem to be anywhere near this goal. It may be time now to ask whether it is even possible at all. We present a set of arguments to the effect that it is impossible to create or build Humanoid Robots or Humanoid Intelligence, where the said intelligence can substitute for human beings in any situation where human beings are required or exist.
1 Humanoid Intelligence, the Singularity and Transhumanism
Before we proceed to discuss the terms in the title of this section and the arguments in the following sections, we first define the foundational terms with some degree of concision and precision:
1. Human Life: Anything and everything that the full variety of human beings are capable of, both individually and collectively. This includes not just behaviour or problem solving, but the whole gamut of capabilities, emotions, desires, actions, thoughts, consciousness, conscience, empathy, creativity and so on within an individual, as well as the whole gamut of associations and relationships, and social, political and ecological structures, crafts, art and so on that can exist in a human society or societies. This is true not just at any given moment, but over the life of the planet. Perhaps it should include even spiritual experiences and ‘revelations’ or ‘delusions’, such as those hinted at in the Philip K. Dick story, Holy Quarrel [Dick et al., 1985].
2. Humanoid: A living and reproducing entity that is almost identical to humans, either with a human-like body or without one, on a different substrate (such as inside a computer).
3. Intelligence: Anything and everything that the full variety of human beings are capable of, both individually and collectively, as well as both synchronically and diachronically. This includes not just behaviour or problem solving, but the whole of life as defined above.
4. The Singularity: The technological point at which it is possible to create (or have) intelligence that is Humanoid or better than Humanoid.
5. Transhumanism: The idea that, after the Singularity, we can have a society that is far more advanced, for the better, than the current and past human societies.

From 1910 to 1927, in the three volumes of Principia Mathematica [Whitehead and Russell, 1925–1927], Whitehead and Russell set out to prove that mathematics is, in some significant sense, reducible to logic. This turned out to be impossible when Gödel published his incompleteness theorems in 1931 [Sheppard, 2014, Nagel et al., 2001]. In the early days of modern Computer Science, before and during the early 1930s, it would have been easy to assume that a computing machine could ultimately solve any problem at all. This too proved to be impossible with Turing’s undecidability theorem [Hopcroft et al., 2006] and the Church-Turing thesis of computability [Copeland and Shagrir, 2018]. Since then, other kinds of problems have been shown to be undecidable.
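As an aside, the core of Turing’s undecidability result can be conveyed in a short sketch. The code below is illustrative only: `halts` is a hypothetical decider assumed for the sake of contradiction, not a real or implementable function.

```python
# Sketch of Turing's diagonalization argument. Assume, for contradiction,
# a total decider halts(program, data) that always correctly answers
# whether `program` halts when run on `data`.

def halts(program, data):
    """Hypothetical halting decider (assumed for contradiction)."""
    raise NotImplementedError("no such total decider can exist")

def paradox(program):
    """Halts if and only if `program` does NOT halt on its own source."""
    if halts(program, program):
        while True:   # loop forever if `program` would halt on itself
            pass
    # otherwise, halt immediately

# Feeding paradox to itself yields the contradiction:
#   if halts(paradox, paradox) is True,  then paradox(paradox) loops forever;
#   if halts(paradox, paradox) is False, then paradox(paradox) halts.
# Either way, halts gave the wrong answer, so it cannot exist.
```

The same diagonal style of argument underlies many of the later undecidability results mentioned above.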
Now that we are supposed to be close enough to the Singularity [Kurzweil, 2006] that it may happen within the lifetime of a large number of human beings, perhaps it is time to ask ourselves whether real intelligence, in particular Humanoid Intelligence (as defined above), is possible at all. We suggest that there are enough arguments to ‘prove’ (in an informal sense) that it is impossible to build, to create or to have Humanoid Intelligence. We argue that even though the Singularity is indeed possible, perhaps even very likely (unless we stop it), it may not be what it is supposed to be. The conjecture presented here is that the Singularity is not likely to be even benign, however powerful or advanced it may be. This follows from the idea of the impossibility of Humanoid Intelligence.
2 Some Notes about the Conjecture
We have not used the term theorem for the Impossibility, and the reasons for this should be evident from the arguments that we present. In particular, we do not, and perhaps cannot, use formal notation for this purpose. Even the term conjecture is used in an informal sense. The usage of terms here is closer to legal language than to mathematical language, because that is the best that can be done here. This may be clearer from the Definition and the Story arguments. It is for similar reasons that the term ‘incompleteness’ is not used and, instead, impossibility is used, which is more appropriate for our purposes here, although Gödel’s term ‘essentially incomplete’ is what we are informally arguing for about Humanoid AI, and perhaps AI in general. No claim is made as to whether or not a formal proof is possible in the future. What we present is an informal proof. This proof has to be centred on the distinction between Micro-AI (AI at the level of an intelligent autonomous individual entity) and Macro-AI (very large intelligent autonomous systems, possibly encompassing the whole of humanity or the world). To the best of our knowledge, such a distinction has not been proposed before. While there has been some work in this direction [Brooks, 1998, Signorelli, 2018, Yampolskiy, 2020], for lack of space we are unable to explain how our work differs from previous such works, except by noting that the argumentation and some of the terms are novel. This is a bit like the arguments for or against the existence of God, a question that has been debated by the best of philosophers again and again over millennia, and which, as we will see at the end, is relevant to our discussion.
3 The Arguments for the Impossibility Conjecture for Micro-AI
The Definition Argument: Even Peano Arithmetic [Nagel et al., 2001] is based on three undefined terms (zero, number and is the successor of), which are relatively trivial terms compared to the innumerable terms required for AI (core terms like intelligence and human, or terms like the categories of emotions, let alone terms like consciousness).
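For contrast, here is one standard first-order rendering of the Peano axioms; note how everything rests on the three undefined primitives of zero, number, and successor:

```latex
% One standard rendering of the Peano axioms, built entirely on the
% three undefined primitives: zero (0), number, and successor (S).
\begin{align*}
&\text{(P1)}\; 0 \text{ is a number.}\\
&\text{(P2)}\; \text{If } n \text{ is a number, then } S(n) \text{ is a number.}\\
&\text{(P3)}\; S(n) \neq 0 \text{ for every number } n.\\
&\text{(P4)}\; S(m) = S(n) \rightarrow m = n.\\
&\text{(P5)}\; \big(\varphi(0) \wedge \forall n\,(\varphi(n) \rightarrow \varphi(S(n)))\big)
  \rightarrow \forall n\,\varphi(n) \quad \text{(induction, for each formula } \varphi\text{)}.
\end{align*}
```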
The Category Argument: A great deal of AI is about classifying things into categories, but most of these categories (e.g. anger, disgust, good or bad) have no scientifically defined boundaries. This is related to the following argument.
The Story Argument: It is by now almost established that many of the essential concepts of our civilisation are convenient fictions or stories [Harari, 2015], and these often form categories and are used in definitions.
The Cultural Concept Argument: Many of the terms, concepts and stories are cultural constructs. They have a long history, most of which is unknown, and without which they cannot be modelled.
The Individuality, or the Nature Argument: An individual intelligent autonomous entity has to be unique and distinct from all other such entities. It originates in nature and we have no conception of how it can originate in machines. We are not even sure what this individuality exactly is. However, all through history, we have assigned some degree of accountability to the human individual, and we have strict provisions for punishing individuals based on this. That indicates that we believe in the concept of the ‘self’ or the ‘autonomous individual’, even when we deny its existence, as is becoming popular today.
The Genetic Determinism Argument: Individuality is not completely determined by nature (e.g. by our genes) at birth or creation once and for all. It also develops and changes constantly as it interacts with the environment, preserving its uniqueness.
The Self-organising System Argument: Human beings and human societies are most likely self-organising [Shiva and Shiva, 2020] and organic systems, or they are complex, non-equilibrium systems [Nicolis and Prigogine, 1977]. If so, they are unlikely to be modelled well enough for exact replication or reproduction.
The Environment, or the Nurture Argument: Both intelligence and individuality depend on the environment (i.e., on nurture). Therefore, they cannot be modelled without completely modelling the environment, i.e., going for Macro-AI.
The Memory, or the Personality Argument: Both intelligence and individuality are aspects of personality, which is known to be dependent on the complete life-memory (conscious and unconscious) of an intelligent being. There is not enough evidence that it is possible to recover or model this complete temporal and environmental history of memory. A lot of our memory, and therefore our individuality and personality, is integrally connected with our bodily memories.
The Substrate Argument: It is often taken for granted that intelligence can be separated from its substrate and planted on a different substrate. This may be a wrong assumption. Perhaps our intelligence is integrally tied to its substrate and, following the previous argument, it is not possible to separate the body from the mind.
The Causality Argument: There is little progress in modelling causality. Ultimately, the cause of an event or occurrence is not one but many, perhaps even the complete history of the universe.
The Consciousness Argument: Similarly, there is no adequate theory of consciousness, even for human understanding. It is very unlikely that we can completely model human consciousness, nor is there a good reason to believe that it can emerge spontaneously under the right conditions (and which conditions would those be?).
The Incompleteness/Degeneracy of Learning Source and Representation Argument: No matter how much data or knowledge we have, it will always be both incomplete and degenerate, making it impossible to completely model intelligence.
The Explainability Argument: Deep neural networks, which are the state-of-the-art for AI, have serious problems with explainability even for specific isolated problems. Without it, we cannot be sure whether our models are developing in the right direction.
The Test Incompleteness Argument: Perfect measures of performance are not available even for problems like machine translation. We have no idea what an overall measure of Humanoid Intelligence would be. It may always be incomplete and imperfect, leading to uncertainty about intelligence.
The Parasitic Machine Argument: Machines depend completely on humans for learning, and on data and knowledge provided by humans. But humans express or manifest only a small part of their intelligent capability. So machines cannot completely learn from humans without first being as intelligent as humans.
The Language Argument: Human(oid) Intelligence and its modelling depend essentially on human language(s). There is no universally accepted theory of how language works.
The Perception Interpretation Argument: Learning requires perception and perception depends on interpretation (and vice-versa), which is almost as hard a problem as modelling intelligence itself.
The Replication Argument: We are facing a scientific crisis of replication even for isolated problems. How could we be sure of replication of Humanoid Intelligence, preserving individual uniqueness?
The Human-Human Epistemic Asymmetry Argument: There is widespread inequality in human society, not just in terms of money and wealth, but also in terms of knowledge and its benefits. This will not only be reflected in the modelling, but will make modelling harder.
The Diversity Representation Argument: Humanoid Intelligence that truly works will have to model the complete diversity of human existence in all its aspects, most of which are not even known or documented. It will have to at least preserve that diversity, which is a tall order.
The Data Colonialism Argument: Data is the new oil. Those with more power, money and influence (the Materialistic Holy Trinity) can mine more data from others, without sharing their own data. This is a classic colonial situation and it will hinder the development of Humanoid Intelligence.
The Ethical-Political Argument: Given some of the arguments above, and many others such as data bias, potential for weaponisation etc., there are plenty of ethical and political reasons that have to be taken into account while developing Humanoid Intelligence. We are not sure whether they can all be fully addressed.
The Prescriptivisation Argument: It is now recognised that ‘intelligent’ technology applied at large scale not only monitors behaviour, but changes it [Zuboff, 2018]. This means we are changing the very thing we are trying to model, and thus laying down new mechanical rules for what it means to be human.
The Wish Fulfilment (or Self-fulfilling Prophecy) Argument: Due to the prescriptivisation of life itself by imperfect and inadequately intelligent machines, the problem of modelling Humanoid Intelligence becomes a self-fulfilling prophecy, where we end up modelling not human life, but some corrupted and simplified form of life that we ourselves brought into being with ‘intelligent’ machines.
The Human Intervention Argument: There is no reason to believe that Humanoid Intelligence will develop freely on its own and will not be influenced by human intervention, quite likely to further vested interests. This will cripple the development of true Humanoid Intelligence. Such intervention can take the form of secrecy, financial influence (such as research funding) and legal or structural coercion.
The Deepfake Argument: Although we do not yet have truly intelligent machines, we are able to generate data through deepfakes that are not recognisable as fakes by human beings. This deepfake data is going to proliferate and will become part of the data from which machines learn, so that they effectively model not human life, but something else.
The Chain Reaction Argument (or the Law of Exponential Growth Argument): As machines become more ‘intelligent’ they affect more and more of life and change it, even before achieving true intelligence. The speed of this change will increase exponentially and it will cause a chain reaction, leading to unforeseeable consequences, necessarily affecting the modelling of Humanoid Intelligence.
4 The Implications of the Impossibility
It follows from the above arguments that the Singularity at the level of Micro-AI is impossible. In trying to achieve it, and to address the above arguments, the only possible outcome is some kind of Singularity at the Macro-AI level. Such a Singularity will not lead to the replication of human intelligence or its enhancement, but to something totally different. It will, most probably, lead to the extinction (or at least the subservience and servitude) of human intelligence. To achieve just Humanoid Intelligence (Human Individual Micro-AI), even if nothing more, the AI system required will have to be nothing short of the common notion of a Single Supreme God. Singularity at the macro level will actually make the AI system, or whoever is controlling it, individual or (most probably small) collective, a Single Supreme God for all practical purposes, as far as human beings are concerned. But this will not be an All Powerful God, and not a Kind God, for it will be Supreme only within the limited scope of humanity and what humanity can have an effect on, and it will be kind only to itself, or perhaps not even that. It may be analogous to the God in the Philip K. Dick story Faith of Our Fathers [Dick and Lethem, 2013], or to the Big Brother of Orwell’s 1984 [Orwell, 1950]. We cannot be sure of the outcome, of course, but those are as likely outcomes as any others. That is reason enough to be very wary of developing Humanoid Intelligence and any variant thereof.
References
Philip K. Dick, Paul Williams, and Mark Hurst. I Hope I Shall Arrive Soon. Edited by Mark Hurst and Paul Williams. Doubleday, New York, 1st edition, 1985. ISBN 0385195672.
Alfred North Whitehead and Bertrand Russell. Principia Mathematica. Cambridge University Press, 1925–1927.
John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman. Introduction to Automata Theory, Languages, and Computation (3rd Edition). Addison-Wesley Longman Publishing Co., Inc., USA, 2006. ISBN 0321455363.
B. Jack Copeland and Oron Shagrir. The Church-Turing thesis: Logical limit or breachable barrier? Commun. ACM, 62(1):66–74, December 2018. ISSN 0001-0782. doi: 10.1145/3198448. URL https://doi.org/10.1145/3198448.
Ray Kurzweil. The Singularity Is Near: When Humans Transcend Biology. Penguin (Non-Classics), 2006. ISBN 0143037889.
Rodney Brooks. Prospects for human level intelligence for humanoid robots. July 1998.
Camilo Miguel Signorelli. Can computers become conscious and overcome humans? Frontiers in Robotics and AI, 5:121, 2018. doi: 10.3389/frobt.2018.00121. URL https://www.frontiersin.org/article/10.3389/frobt.2018.00121.
Roman V. Yampolskiy. Unpredictability of AI: On the impossibility of accurately predicting all actions of a smarter agent. Journal of Artificial Intelligence and Consciousness, 07(01):109–118, 2020. doi: 10.1142/S2705078520500034.
V. Shiva and K. Shiva. Oneness Vs. the 1 Percent: Shattering Illusions, Seeding Freedom. Chelsea Green Publishing, 2020. ISBN 9781645020394. URL https://books.google.co.in/books?id=4TmTzQEACAAJ.
G. Nicolis and I. Prigogine. Self-Organization in Nonequilibrium Systems: From Dissipative Structures to Order Through Fluctuations. A Wiley-Interscience publication. Wiley, 1977. ISBN 9780471024019. URL https://books.google.co.in/books?id=mZkQAQAAIAAJ.
Shoshana Zuboff. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, 1st edition, 2018. ISBN 1610395697.
Holding a gun to the head of the participant and threatening to shoot them if they don’t wake up at the pre-decided time.
Response
The participant wakes up. It may not happen the first time, as they might not understand or believe that the threat is real. But as it is made clear to them that the threat is, indeed, real, they will ultimately wake up at the intended time. The intermediate steps might involve hitting them on the head with the gun with increasing force or frequency with each passing day.
Reward/Punishment
The hitting on the head is the reward. The ultimate reward is shooting in the head. This is useful if you have spectators, either physically or virtually. These are negative rewards (punishments). There might also be positive rewards, which could be anything. One low-cost reward can be designed like this:
Hit the participant on the head arbitrarily at any time of the day. Rewards can mean decreasing the force or the frequency of this hitting on the head.
Outcome
The participant (subject) wakes up on providing the stimulus.
***
The above is a crude experiment, a kind of thought experiment, as it is possible only in certain settings, such as physical concentration camps. A more realistic experiment is given below, one which has become possible with the latest developments in technology, as we move towards the technological Utopia of the 2030s.
***
The Realistic Experiment
Confirming that stimulus and reward change behaviour.
Participant (Subject): Consent not necessary, as it avoids chances of bias.
Holding a ray gun that produces painful levels of radiation (radio frequency, electric field, magnetic field, or any combination of these; ionizing radiation should be avoided, but can be used in exceptional cases), or a physical dog whistle based on ultrasound or infrasound, to any part of the body of the participant, and pushing the button on the emission device if they don’t wake up at the pre-decided time.
Response
The participant wakes up. It may not happen the first time, as they might ascribe the pain and the discomfort to some illness or other transient problem. They may blame themselves or their bodies. Even when they finally realise the cause, they might not understand or believe that the threat is real. But as it is made clear to them that the threat is, indeed, real, they will ultimately wake up at the intended time. The intermediate steps might involve radiating them (with electromagnetic or sonic pulses) with increasing intensity/power or frequency with each passing day.
Reward/Punishment
The electromagnetic or sonic radiation on various parts of the body is the reward. The ultimate reward is *__redacted__*. This is useful if you have spectators, either physically or virtually. These are negative rewards (punishments). There might also be positive rewards, which could be anything. One low-cost reward can be designed like this:
Hit the participant on any part of the body or the whole body with radiation (electromagnetic or sonic) arbitrarily at any time of the day. Rewards can mean decreasing the force or the frequency of this hitting on the body or body parts.
Outcome
The participant (subject) wakes up on providing the stimulus.
***
Many experiments have been conducted based on the second design and they have produced (and reproduced) the expected results with exceptionally high accuracy. The results have been released in certain forums. The forum membership is strictly by invitation only. The results may be released publicly at an appropriate time.
(You just have to trust them. There is no reason not to if you have nothing to hide. Moreover, if the happy healthy Scandinavians are planning this, it definitely could not be a bad thing.)
Sometime after I started this blog, I looked up the stats page to see how the viewership was. I didn’t expect large numbers, but I wanted to check if anyone was reading it at all. It turned out that, at least officially (in a way that would register in the WordPress stats), not that many were (except for short periods), considering that even personal Facebook pages or single (personal) YouTube videos can often have very large viewerships. At the same time, a lot of people seemed to be aware of what I was writing, because either the content of my posts or the blog itself was often referred to in my conversations with other people. That’s a different story, which I am not going into today.
I also noticed that on the stats page there was a place where you could see the search queries that were put in the Search box of your blog (blog-specific queries, not web-wide queries), which is supposed to help people find content in a specific blog. It seems only I use this box for that purpose, because what I saw was that most of the searches were completely irrelevant to the blog. They were not attempts to find content in the blog at all.
Over the last 15 years, I have maintained several websites: my personal webpage (now defunct), an activist website (the Hindi version of ZNet, now defunct), a website for an Open Source toolkit that I had developed (also defunct), etc. I was maintaining these at my own expense, and now I can’t afford to.
On all of these, I noticed the same pattern. No queries to actually find content. They were all insults hurled at me in this oblique and anonymous manner, or sometimes they even sounded like threats. I even mentioned this to some of my colleagues.
As a result, when I started a post-doc in 2012 in another country, I was already aware of the weaponization of local (e.g. blog-specific) search queries.
Right from the day I arrived in that country, I had strange experiences. At the workplace, no one would even talk to me (except one Indian post-doc who joined roughly at the same time, and occasionally one or two others who seemed to be sympathetic to me, all girls, or as we say in India, ladies), or when they rarely did, they were not really talking; they were doing something else. More about that later.
There was one person (younger than me, but relatively high in the hierarchy of the lab). When we passed each other, he would make what sounded like unsavoury comments clearly directed at me, because there was no one else in sight except his friend(s). He would look at me as he spoke, so I knew he was commenting about me.
One day, while I was coming to the office, perhaps a day when I was not feeling well or was somehow not in a good mood, he and one of his friends passed me by (no one else nearby again). He looked at me and sort of shouted something like ‘le pouet a vendu’. I could guess the meaning, or at least the word ‘vendu’, but still, when I reached the computer, I typed the sentence into Google Translate, according to which the translation was ‘the squealer (or squeaker) has been sold’. This was soon after I had joined the job. Right now, today, I tried again after all these years, and Google now says ‘squirrel’. I positively remember the word ‘pouet’, not just because I heard it used near me many times again, but also because I was so fed up with it that I once put it in one of my passwords. It is probably an ambiguous colloquial word.
When I had joined, I was given a copy of the contract and was asked to go through it, which I did quickly, as I can read fast. What caught my attention was that it clearly mentioned (in 2012) that various ‘tracking devices’ were placed in various places and that the activities of the employees would be monitored. This was not very surprising in itself to me, but the fact that it was clearly written down was. This was a government research centre. I had already experienced online and other kinds of surveillance.
So, that day, that comment really got on my nerves, and finally I thought I should respond in some way. But what could I do? I was in a foreign country. I needed the job and had not even yet received my work permit (which is another story). I had no friends there. So I remembered the weaponized queries which were being used against me even then. I had also once been to a Google office and had seen Google search queries being displayed on a large board in the welcome room. I then opened a Google search page on my work terminal and typed the following (perhaps not the exact words, but very close):
Why does X always keep yapping at Singh? What has Singh ever done to X? Is there a secret history between them?
There is also a story behind why I used ‘Singh’ and not my first or second name, or both. There is even a story behind why I used the word ‘yapping’.
The office of the head of the lab was right in front of my room and from where I was asked to sit for work, I could clearly see him through glass walls. I could even see his computer, which was in a corner, though obviously I could not read what was on the screen from that distance. He could see me too and perhaps that was the point of making me sit there.
Barely a minute after I typed in the query, a person (also a post-doc, I think) whose responsibilities included working as a kind of systems administrator for the lab came to the head’s office and said something to him. I was expecting something like this to happen, because I already knew how things work in places with total surveillance. From where I was sitting, it seemed he was reporting something about which something should be done. He asked the head to go to the computer and have a look at something. The head did that and read something. He too seemed concerned, but he basically shrugged his shoulders.
From that moment on, person X never made any comments to me any more. He never even acknowledged my presence. Not that the people there started treating me any better. In some ways, it only got worse.
This was not all. When I was nearing the completion of my contract, I went to my supervisor and asked him whether my contract would be renewed. He evaded the question at first, but then said he would tell me sometime later. Later, when I asked again, we had a long conversation (which is also worth going into later), where he gave various reasons, but clearly said that my work was not the problem. Finally, when I countered all his arguments, he said that in any case he would not be associated with the lab for much longer, and X would be in charge of the lab.
He then said, ‘I can’t see you working together with X.’ I had never mentioned X to him or to anyone else.
I never even had a conversation with X. I had never said anything to him, nor even commented back at him, except that search query. There was no reason why anyone would say that my relations with him were bad (or good). In fact, there were no relations of any kind, as far as I was concerned and, if he had talked to me and wanted to work with me, I would most probably have agreed, even after that. After all, I did not really have relations (good or bad) even with my supervisor. We just discussed some research questions, mostly over email.
I did respect him (the supervisor), though. He is a seasoned and very good researcher and certainly not a bad person. The same goes for the head. X is also an accomplished researcher, although I hesitate to say that he is a good person.
Did that query cost me the extension?
***
A couple of days after I started keeping the Zersetzung 21C Journal on my blog, there was this local query in my blog Search box:
Although I have no idea what it means, it (the first one) is clearly not a genuine query.
***
And this one came when I had gone to my home town, where my parents reside:
Is it (the first one), as it appears to be, just vile abuse? Or is it supposed to be some kind of twisted sermon in vile, abusive language (and with the same kind of sick thinking)? Is it also some kind of Skinnerian or Zersetzung device?