Minimum Required Conditions for Life in an Acceptable Degree of Natural Harmony

There cannot be any duties without rights. This is true even of machine parts, because if you deliberately interfere with the working of a machine part with malevolent, or even unclear or uncertain, purpose, then you are likely to break down the machine, perhaps even irreparably.

Human beings are in any case not machines, and they cannot be expected to behave like machines. It cannot be assumed that, given a certain aggregated input, a certain predetermined aggregated output will be produced. This can work with machines under certain well-designed conditions, but it is guaranteed to fail with human beings.

It may, however, be possible for human beings to live in a certain degree of natural (not artificial) harmony, where the meaning of the word ‘harmony’ can be taken as similar to that for notes in good music, keeping in mind that being off-key within limits can, in some ways, enhance the quality of the harmony rather than spoil it.

In order for such a society of human beings to exist in natural harmony, one that allows occasional off-key notes and even violations of human-made rules of music so that the music can evolve, diversify and improve, certain conditions must exist for individual human beings, excluding none whatsoever, including the ‘freaks’ and those found (or suspected to be) ‘criminal’ or some such, based again on human-made rules, which can go wrong (or be insufficient) at times, as can the rules of music. In any case, human-made rules keep changing all the time, so they are self-evidently more than imperfect.

As a first attempt to define such minimum conditions for a naturally harmonious society to exist, the following are proposed:

Rights:

1. Basic human rights, which are already well-defined and globally accepted, at least in theory

2. Minimal basic income: This too is well known, and is now finding increasing favour globally. Minimal here means as high a basic income as is possible under the circumstances, without giving precedence to any particular individuals, sets of individuals or collectives (including entities like corporations) or sets of collectives

3. Total freedom of thought and maximal freedom of speech, without which no human being can really *be* a human being. This too is well-studied, but has now, unfortunately, become a matter of contention. Maximal here means maximum possible, i.e., maximum subject to some minimum required constraints, the fewer the better (constraints, that is, not freedom).

4. Minimal freedom of action: This is required for humans not to act like machines, because if humans are forced to behave like machines, nothing good can come of it. The word minimal is defined in a way similar to that in point 2.

5. Minimal knowledge: Just as a Minimal Basic Income is required for a minimal degree of economic equality, so is minimal knowledge required for minimal epistemic equality. As the common saying goes, knowledge is power. It is also essential in principle, because a human being, for harmonious existence, is required to think, speak, listen and act on the basis of truth, and knowledge provides that truth. Knowledge is not a static entity, whether for an individual, locally or globally. Instead, it is a constantly evolving, dynamic ‘corpus’, for want of a better word.

6. Maximal Justice: Maximum justice has to be available to any and every individual, subject to the real Rule of Law as it was supposed to be in theory when that field had matured and reached its peak. This is formal justice. It too is already well-studied, well-documented and, in some cases, has been well-practiced at some points in time.

7. The Right to Physical and Intellectual Sanctuary: This is as proposed in Shoshana Zuboff’s landmark book The Age of Surveillance Capitalism, to which the reader is referred.

Conditions of Social Participation:

1. Knowledge: It is generally believed that in social interactions, particularly of the private or intimate variety, there has to be consent. This is definitely true. However, there is another, even more basic condition that has to be satisfied before the matter of consent even comes up, and this is the condition of knowledge. This point may, perhaps, be better understood with reference to crimes, particularly of a private and intimate nature, that is, crimes where one or more individuals (or collectives) assault or harm one or more other individuals or even collectives. A simple question can illustrate the overarching importance of the condition of knowledge in social interaction. What are the most terrifying crime thrillers or horror films involving crimes of a private and intimate nature, i.e., involving violations not only of privacy, but also of consent? The answer is that they are the ones where the victim(s) have no knowledge of the perpetrator(s) or their motivations. This absence of the condition of knowledge makes the crimes (whether supernatural or not) not only more terrifying, but far more abhorrent than even those involving only the lack of consent, because the presence of at least such knowledge, even without consent, still leaves the victims with some degree of human dignity as victims (or survivors, if they do survive) of the crime. Absence of knowledge takes away even this dignity. More importantly, and more diabolically, it almost eliminates any possibility of seeking redressal, since the perpetrators and their motivations are not known and it is almost impossible to prove the crime. This was not possible earlier, except in thrillers about psychopaths, but now it is possible even for ‘normal’ people to participate in such crimes, due to weaponisable, remotely operated technologies that may not be very ‘intelligent’, but are capable of untold and unimaginable cruelty.
This should not be surprising, as it is well-known, well-documented and well-studied that it takes much less to turn a powerful technology into a weapon than it does to make it a genuinely proven source of good. You need talent, skill and an immense amount of practice, for example, to throw a basketball through the net perfectly to score points, or to repair a broken machine, which may involve the use of, say, a hammer. However, you don’t need much intelligence, skill, talent or practice to throw a rock or use a hammer to hurt someone. This should be very obvious with regard to technologies, but for some ‘understandable’ reasons, it is not.

2. Consent: After knowledge comes consent. This is too well known to be elaborated here.

3. Acknowledgement: With the presence of knowledge and consent, if one participates in social interactions or activities, the least that can be expected is acknowledgement from others. In other words, no individual can be made an unperson or an outcast.

4. Social Justice: Most of the above are conditions required for individuals. However, since individuals exist in and interact with other individuals and collectives in a society, the conditions above have to be as equal as possible for all individuals. This is where social justice comes in. It too is well-known and well-studied, but it has to be reimagined in the light of the above conditions required for individuals.

Meta-Rights:

1. Natural Self: There has been a great deal of philosophical and other debate about the existence or non-existence of the self. It seems obvious to the author that most of such debate is the result of confusing individuality with extreme individualism. For us to be human beings at all, we have to have a self. If there is no self, then the whole framework within which we live, whatever the political system, ideology or local culture, breaks down completely. Just to give one example of the schizophrenic nature of the debate (in many circles, not all): the same people who deny the existence of a self are the most extreme in ascribing accountability exclusively to individuals. If individuals have no self, then how can they be accountable for anything? This brings us back to the idea of humans as machines. If humans are just machines or machine parts, then they have no accountability; the designers, producers, maintainers, operators and so on of the machine can and should be held accountable instead. That is obviously a nonsense scenario. Yes, circumstances do matter, but they do for all individuals, less for some and more for others. So, as is also well known (outside of the self-denying ideology), individual behaviour is the result of both the self (the nature) and the societal and environmental circumstances (the nurture). Note that the term used is Natural Self, which means that nature endows us all with some kind of natural self, which cannot be wished away if we want to avoid catastrophic results.

2. Maximal Natural Privacy: Unlike the self, which only needs to be argued for, as it is a natural phenomenon, privacy is a function of circumstances and the environment, so it has to be fought for. It is the most basic or root condition for any of the other conditions above to exist, as it is a meta-right. Even the self can be crushed without privacy. Privacy, i.e., maximal privacy, not unlimited privacy, is not a matter of luxury. It is the most fundamental requirement for our existence as human beings; it is not possible to exist as a human being without meeting this condition. As in the case of the self, this is a much misunderstood topic. It has been claimed that, like the self, privacy is also the creation of a particular kind of ideology, such as the ideologies based on the idea of private property. This confusion between privacy and private property has led to much wrong in thought and action in our modern history. Just as there is a natural self, so there is natural privacy. It is the advent of invasive technologies that has converted privacy into something like property (private or otherwise). Nature didn’t evolve it that way.

3. Maximal Autonomy: Once we are allowed to be our-selves and have maximal natural privacy, we can try to fulfil our responsibilities to society as autonomous, sentient, conscious, self-aware and moral living beings, as human beings. Otherwise we are either machines, or at most pets or cattle.

4. Minimal Secrecy of those in power: The primary reason the above conditions are not fulfilled is that those in power operate in secrecy, and therefore without accountability. This is not a new idea, of course. We simply note here, again, that in order for the above conditions to obtain for a naturally harmonious human existence in a society, the secrecy of those in power has to be minimised. Otherwise, there is no possibility of achieving the above conditions or natural harmony. The only harmony possible with maximum secrecy is the 16-ton-weight kind of harmony, as we know from the Monty Python sketches, to end on a lighter note.

Zersetzung Vicious Loop

The most dreaded surveillance agency

Was, as we know very well, the Stasi

They developed a truly diabolical system

Naively they did name it and called it

Zersetzung as bad luck would have it

It means some say like decomposition

While some say it is biodegradation

Of living human beings as they try to live

Not the natural of course as you guess

It may be Human Artificial Intelligence

Later on it got integrated with the other

Artificial Intelligence to reduce clutter

It’s open season for them out in the open

Fair game to everyone, nary a just token

Why do you use it, it being diabolical?

Well, it is for us the most antithetical

We only ever use it as the last resort

For you used it against us all the time

That’s not true at all, and it’s a crime

You used it first in your very prime

We only use it because you used it

Now you are lying it was in your kit

You originated it and used it all the while

You were the ones who sat on its pile

For as long as we can go back in history

Our resort to it is just very recent story

We never used it you are peddling lies

You are dealing in conspiracy theories

We are not shape shifting reptilians, Icke

Never mind that we use it only for a cause

You just use it for your egotistic advance

Don’t hide behind your high moral ground

Your true reasons we have actually found

So don’t try to bluff us and stop doing this

We will then also refrain from the practice

No you won’t and you know that very well

It is embedded in your very way of life

You thrive on the whole mankind’s strife

Don’t you dare blame us for your crimes

We have proved this many many times

It’s no use talking to you, waste of time

Same here you sure don’t give it a dime

So it went on till very end of the times

Crime that encompasses all other crimes

Compared to it others are walks in the park

I am no prophet just made some rhymes

Apocalyptic Blues, the Hard Ones

The Bard just sang in the 21C

That someone had told him

In the last century that the Age

Of the Antichrist had begun

I don’t know about that as

I was too young and ignorant

Then, perhaps not even born

Nonetheless, I dig what he says

For from where I stand today

It seems almost too clear to me

That the Apocalypse is already

On, is on and is rolling out on

Though the Sun still rises and sets

It seems the accounts are being

Settled by whoever God has

Appointed, or whoever has

Become God or many Gods

At this point, I am tempted to

Say the same old so far so good

But there has been a catastrophic

Glitch in the divine machinery

In now settling the accounts, for

It is clear as glass that the God’s

Recordkeepers have jumbled up

All the accounts so that there are

Cases galore of one Ramprasad

Having committed the crime but

It being Shyamprasad who was

Held to account and sent to hell

Literally, that is, not as metaphor

Tons of records of actual crimes

Have gone missing and have

Evaporated, never to be found

At the same time there are too

Tons of records of many crimes

Uncommitted and misattributed

Misrepresented, miscommunicated

Ramprasads have become the Heroes

Of heavenly abodes that have come

And still to come in the Apocalypse

That is already ongoing here now

On, is on and is rolling out on

Thus even the Apocalypse has

Gone horribly wrong, all just

Because of a technical glitch

Or the carelessness of the God’s

Recordkeepers, or some mischief

By some heavenly Ramprasad

The purpose of which may be

Divine secret as they tend to be

Or maybe it’s just an indexing

Error, a kind of human error

Err, I mean a divine error here

Still one can’t completely rule

Out the possibility of the Bard’s

Song’s truth being the real truth

Channelling Bresson, it can be

Asked, who is leading us by

The nose? To which the answer

Being simply as the man on the bus said

Le Diable Probablement, quoi d’autre?

Not Letting Sleep

I have to move tomorrow morning, early by my standards. Whereas on the usual days, they don’t let me sleep till late. Sometimes they don’t let me sleep when I go to bed.

So after doing some last-minute packing, I lie in bed. And there is the same familiar congestion in the upper abdomen, probably affecting the lungs and esophagus. This causes difficulty in at least two ways. One is due to the lung being affected. The other is due to arrested flatulence caused by magnetic force, as I had mentioned in an earlier post about the symptoms of radiation, sonic and EM.

So I take the triaxial EMF meter and place it near me. No reading. Then I place it somewhat away from my chest. Still no reading. Finally I place it on my chest. And there it is: a reading that fluctuates and goes above 4 mG. That is, only when the meter is exactly above the chest, near the lung and heart or esophagus area.

As I start taking photos of the meter readings, they stop. And immediately, the chest congestion goes away and the flatulence is relieved by burping (no point at this stage in worrying about embarrassment). I wait. No reading. Then I put the meter back a foot and a half away.

I lie down on the bed and the congestion starts in exactly the same way. It stays. So I pick up the triaxial meter again and repeat the above experiments. Exactly the same results.

The same thing happens, i.e., my taking photos stops the readings. So I put the meter back half a foot away. I lie down and the same congestion starts again. I pick up the meter, and the same results are replicated exactly.
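For what it is worth, the single number a triaxial meter displays is the resultant of its three axis readings. A minimal sketch of that arithmetic, with hypothetical sample values and the 4 mG level mentioned above as the threshold (the function and variable names are illustrative, not from any meter’s software):

```python
import math

THRESHOLD_MG = 4.0  # milligauss, the level reported in the account above

def resultant_mg(bx, by, bz):
    """Combine the three axis readings of a triaxial EMF meter
    into a single resultant magnitude (all values in milligauss)."""
    return math.sqrt(bx**2 + by**2 + bz**2)

def flag_readings(samples):
    """Return the (Bx, By, Bz) samples whose resultant exceeds the threshold."""
    return [s for s in samples if resultant_mg(*s) > THRESHOLD_MG]

# Hypothetical samples: (Bx, By, Bz) in mG
samples = [(0.1, 0.2, 0.1), (2.5, 2.8, 1.9), (0.0, 0.1, 0.0)]
print(flag_readings(samples))  # only the second sample exceeds 4 mG
```

Note that no single axis in the flagged sample reaches 4 mG; only the resultant does, which is why a triaxial reading can differ from what any one axis shows.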

My guess is that there is something in the body which allows radiation to be directed towards a particular part of the body, controlled by either AI or human torturers; it hardly matters which.

In case this seems implausible, there is serious research going on on bio-cyber-physical systems. And it is so far advanced that research is now starting on the cyber security of bio-cyber-physical systems.

It seems to be a part of the Zersetzung 21C now.

EMF Reading - 5

EMF Reading - 4

EMF Reading - 2

Sun is Not an IT So Far

It is being said the Star

Of our own Solar System

The one with Seven Horses

That it can be dimmed

 

It can be made to dim they say

They want it as an IT, you know

 

You don’t know IT?

You say everyone does

But what I say doesn’t

Make any sense at all

 

That’s because the IT

For you is something else

Which everyone knows

 

That’s not the IT I am talking about

I mean quite another, is what I say

Very different from what you think

Yet not completely unrelated, I say

 

I am talking possibility of cosmic Zersetzung

I say, the Sun is not an Individual Target

 

The Sun is dimming, yes that’s very true

But it is dimming on its own, of its own

The Sun is decomposing, yes, we know

And Zersetzung means decomposition

 

It is natural decomposition, is what I say

It’s not planned artificial decomposition

By a gang playing God for the targeted

And perhaps for the whole world, who knows?

 

But the Sun is out of their reach so far

What lies beyond now, I don’t know I say

The Sun is not yet targeted is what I say

The Sun’s so far still an independent Star

 

But Zersetzung in all its variations

Has really Big Dreams, I can’t deny

The odds are against them, I say …

So I will still go to bat for the Sun

 

Though I am not at all very sure

Of the fate of the Mars and Moon

And of Venus as well as the noon

 

That’s because a chorus is heralding

The coming of Trillion Suns Shining

Shining and connecting and mining

Chiming and dining and wining, sort of

 

That’s more’n enough cause for

Serious human concern, I do say

 

Serious human concern, I do say

Then I also say, what do you say

 

Unprecedented Scientific Censorship

Scientific discourse is considered one place where you can present certain kinds of truth as accurately as possible, regardless of whether they conform to the prevailing orthodoxies, whether they are truths that most people want to listen to, and whether they agree with political ideologies. It used to be the case that most scientific discourse was on matters which did not directly and immediately interest or concern either the general public or, to a lesser extent, even the powers that be. And so, scientists were able to pursue their research with only tolerable hindrance from the circumstances and people in which and among whom they lived and worked.

This started changing when the modern Industrial, and then Corporate — apart from the state — establishment developed not only huge stakes in scientific research, but also started funding most of it, not just for courtly splendour as was the case in the age of old feudalism. With funding came control. Simultaneously, with the neoliberal/neoconservative dominance of the world, government funding for independent research started diminishing at an ever-increasing rate. This inevitably meant that the scientific community came under the heavy influence of state and corporate actors.

In the 21st century, this influence is transforming into an ever tighter form of control over not just what research is carried out, but how, to what end, and even over whether it produces ‘desirable’ results or not.

The Pandemic of 2020 has made this phenomenon of tight control over scientific research more widespread as well as more visible. With it, however, has come (perhaps fittingly) an extremely shrill rhetoric of “You don’t believe in science?!” and “Science says so and so”, where so and so could very obviously be a debatable matter (or not: it doesn’t make a difference). In other words, science is becoming more like religion, both in terms of concepts like heresy, blasphemy and blind (or at least uncritical) belief, and in terms of censorship of expression, even scientific expression. Genuine scientific debates are becoming more like theological conflicts, as the science wars about the Pandemic have revealed.

This is also the time when Artificial Intelligence (AI) is all the rage. It is being touted as the Silver Bullet to solve all of humanity’s problems, current and future. No wonder, then, that AI too is in serious danger of becoming a theology and a church, rather than a science and a technology. Perhaps the best example of this is the recent case of a paper on the ethics of AI, co-authored by mainstream AI ethicists and researchers, which Google asked one of its authors to retract. Timnit Gebru, the co-lead of Google’s ethical AI team, was that co-author. She has since left her job rather than agree to retract the paper. Many researchers cannot afford to do that, and the paper might still be published, but the case remains unprecedented.

I had my own experience with scientific censorship recently. I have been working on a paper about the impossibility of humanoid artificial intelligence, but I could not think of a suitable venue for it, since it seems to go against one of the most dearly held ideas about AI: that true humanoid AI is not only possible, but inevitable. The draft was written in a semi-formal style, using arguments against the possibility of humanoid AI analogous to the arguments philosophers have been using for and against the existence of a Single Supreme God. In my view, building humanoid AI would require AI as a whole to become a Single Supreme God, at least as far as human affairs are concerned. The arguments centred around the distinction between Micro-AI and Macro-AI.

Then I came across an unusual research workshop at the best-known AI conference (Neural Information Processing Systems, or NeurIPS 2020), titled ResistanceAI. It invited papers and even media, including submissions not in an academic form or format. It seemed perfect to me, so I decided to submit my draft to this workshop. It is common practice now to post such drafts (preprints) on the best-known scientific archive or preprint hosting site, arXiv, and I have already posted several papers there. Since such preprint sites are meant for archival purposes, they do not put papers through a peer review process, as that is going to happen anyway when the paper is submitted to a peer-reviewed venue. Usually, the paper is posted directly after a kind of sanity check. Sometimes, however, arXiv puts a paper through moderation, which usually involves reclassification of the paper under suitable categories. In very rare cases, a paper can be removed. The reasons for such removal are supposed to be:

  • Unrefereeable content
  • Inappropriate format
  • Inappropriate topic
  • Duplicated content
  • Rights to submit material
  • Excessive submission rate

Based on the description of these reasons given on their moderation page, none of these apply in any way to my draft. I had submitted the paper on 8th October 2020. I first received a mail saying it would be ‘announced’ (that is, posted) the next day. Then, on 14th October 2020, I received a mail saying that the paper had been ‘put on hold’. Initially I assumed it must be for reasons of reclassification. However, on the same day, I received another mail saying the paper had been removed. The mail said:

Dear arXiv user,

Our moderators have determined that your submission is not of sufficient interest for inclusion within arXiv. The moderators have rejected your submission after examination, having determined that your article does not contain sufficient original or substantive scholarly research.

As a result, we have removed your submission.

Please note that our moderators are not referees and provide no reviews with such decisions. For in-depth reviews of your work, please seek feedback from another forum.

Please do not resubmit this paper without contacting arXiv moderation and obtaining a positive response. Resubmission of removed papers may result in the loss of your submission privileges.

For more information on our moderation policies, see:

https://arxiv.org/help/moderation

Regards,
arXiv moderation

The reason given (“your article does not contain sufficient original or substantive scholarly research”) was itself a kind of review, which is not supposed to accompany a removal, since duplication means direct duplication, not extending existing ideas. The reason can reasonably be interpreted as saying simply that some references were missing from the paper, which would make it a kind of feedback to me about the paper, something arXiv is not supposed to give.

This came right before the submission deadline for the ResistanceAI workshop. So I added a few of the missing references, within the page limit of four pages. The paper was, however, rejected by the workshop, although I did receive reviews of it. Note that one of the reasons for removal from arXiv is “unrefereeable content”. So, clearly, the paper was not unrefereeable.

The review from the workshop is given below:

Reviewer #1
Questions

2. Please provide constructive feedback to the authors
This paper address some timely questions about what we might expect the “Singularity” to look like. Unfortunately, section three–the meat of the paper–is somewhat difficult to follow. Rather than listing many different arguments, it may be more helpful to focus on a subset of these arguments and explain how they are related. As currently written, it is difficult to understand the argument and how it reaches the conclusions that “Singularity at the level of Micro-AI is impossible” and that a Singularity at the “Macro-AI level” would be an existential threat to human intelligence.
3. Please give this submission a score
Weak Reject

Reviewer #2
Questions

2. Please provide constructive feedback to the authors
1/ The paper, while looking at the impact of a hypothesized ‘Macro AI’ on human beings in the future, ignores the issues that AI technology is causing in the present.
2/ In particular, it fails to inspect and analyze the material impact that AI is already causing in the lives of human beings, whether or not it is a ‘humanoid’ AI which is doing that.
3/ Overall, the paper does not fit the theme of the workshop — which has more to do with how AI concentrates power in the hands of a few, rather than hypothesizing about the future of AI and what that means for humanity, without grounding it in a material analysis.
3. Please give this submission a score
Strong Reject

Although I at least received reviews of the paper, the reasons given here are highly questionable, particularly in light of the fact that the workshop accepted not just papers, but also poems, rants, essays etc., and even an anonymous submission, which is never the case at a research venue. In particular, the reviewer’s statement, “ignores the issues that AI technology is causing in the present”, does not make sense. In a four-page paper on a topic like this, how can one include a survey of the harms already being done by AI? I have, in the past, written at least one paper on such harms, which is (ironically) hosted on arXiv. That paper was rejected without review from the conference where it was submitted, simply because I did not notice that the paper, before submission, had (at the last moment) exceeded the four-page limit by two or three (one-column) lines.

I then had two options, apart from working further on the paper and submitting it to another peer-reviewed venue. One was to appeal arXiv’s decision, which I might still do; the other was to post the draft on some other preprint site. I found two alternatives for the second option. One was the PhilSci Archive for preprints in the philosophy of science. The second was the HAL Archive.

I posted on both of them. The draft was again rejected, from the PhilSci Archive, with the following reason given:

Unfortunately the item could not be accepted into PhilSci-Archive. The item lies outside the range of material suitable for PhilSci-Archive. We regret that because of the volume of material posted, the archive cannot enter into correspondence concerning submissions that have been refused.

This may be debatable, since it seems to me the paper is well within the scope of philosophy of science.

The preprint has finally been accepted by the HAL Archive, after they asked me to first post a paper already published in a scientific journal ‘in order to establish a confidence contract’, which sounds reasonable.

I am working on improving the draft, with the possibility of submitting it to another venue, preferably peer-reviewed. However, in the fifteen years since I first published a peer-reviewed paper, this has been the strangest case of rejection by multiple venues: not just by peer review, but by two different preprint sites, one of which (PhilSci) does not even have a moderation process, according to their policy.

Even so, this is not the first strange rejection that I have experienced from peer-reviewed venues. Till recently, such things could be attributed to the inherently imperfect nature of the peer review process, but now matters seem to be going clearly beyond that, as the Google case shows, if not also the case of my paper.

The Impossibility Conjecture of Humanoid Artificial Intelligence and the Non-Benign Singularity

Abstract

[A Rough Draft of a Work-in-progress.]

The idea of machines which are almost identical to human beings has been so seductive that it has captured the imaginations of the best minds as well as laypeople for at least a century and a half, perhaps more. Right after Artificial Intelligence (AI) came into being, it was almost taken for granted that we would soon enough be able to build Humanoid Robots. This has also led to some serious speculation about ‘transhumanism’. So far, we do not seem to be anywhere near this goal. It may be time now to ask whether it is even possible at all. We present a set of arguments to the effect that it is impossible to create or build Humanoid Robots or Humanoid Intelligence, where the said intelligence can substitute for human beings in any situation where human beings are required or exist.

1. Humanoid Intelligence, the Singularity and Transhumanism

Before we proceed to discuss the terms in the title of this section and the arguments in the following sections, we first define the foundational terms with some degree of conciseness and precision:

1. Human Life: Anything and everything that the full variety of human beings are capable of, both individually and collectively. This includes not just behaviour or problem solving, but the whole gamut of capabilities, emotions, desires, actions, thoughts, consciousness, conscience, empathy, creativity and so on within an individual, as well as the whole gamut of associations and relationships, and social, political and ecological structures, crafts, art and so on that can exist in a human society or societies. This is true not just at any given moment, but over the life of the planet. Perhaps it should include even spiritual experiences and ‘revelations’ or ‘delusions’, such as those hinted at in the Philip K. Dick story, Holy Quarrel [Dick et al., 1985].

2. Humanoid: A living and reproducing entity that is almost identical to humans, either with a human-like body or without it, on a different substrate (inside a computer).

3. Intelligence: Anything and everything that the full variety of human beings are capable of, both individually and collectively, as well as both synchronically and diachronically. This includes not just behaviour or problem solving, but the whole of human life as defined above.

4. The Singularity: The technological point at which it is possible to create (or have) intelligence that is Humanoid or better than Humanoid.

5. Transhumanism: The idea that, after the Singularity, we can have a society that is far more advanced, for the better, than current and past human societies. From 1910 to 1927, in the three volumes of Principia Mathematica [Whitehead and Russell, 1925–1927], Whitehead and Russell set out to prove that mathematics is, in some significant sense, reducible to logic. This turned out to be impossible when Gödel published his incompleteness theorems in 1931 [Sheppard, 2014, Nagel et al., 2001]. In the early days of modern Computer Science, before and during the early 1930s, it would have been easy to assume that a computing machine would ultimately be able to solve any problem at all. This too proved to be impossible with Turing’s undecidability theorem [Hopcroft et al., 2006] and the Church-Turing thesis of computability [Copeland and Shagrir, 2018]. Since then, other kinds of problems have been shown to be undecidable.
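The undecidability result mentioned above can be made concrete with a small sketch. A genuinely runnable version of Turing's diagonal argument is impossible by definition, so the following toy Python illustration shows the weaker but related point: any computable halting checker must work within some finite budget of simulation steps, and a program that simply runs a little longer than that budget defeats it. All names below are illustrative, not from any library.

```python
def bounded_halts(make_gen, arg, max_steps):
    """Approximate halting checker: simulate make_gen(arg) step by step
    (each yield counts as one step) and report True only if it finishes
    within max_steps; otherwise guess that it never halts."""
    gen = make_gen(arg)
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration:
            return True      # observed to halt
    return False             # a guess, and sometimes a wrong one

def slow_program(n):
    """Halts after exactly n steps, however large n is."""
    for i in range(n):
        yield i

# Within budget the checker is right; just past the budget it is wrong:
assert bounded_halts(slow_program, 999, max_steps=1000) is True
assert bounded_halts(slow_program, 1001, max_steps=1000) is False  # it actually halts
```

Raising the budget only moves the failure point; no finite budget removes it, which is the intuition behind the formal result.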

Now that we are supposed to be close enough to the Singularity [Kurzweil, 2006] that it may happen within the lifetime of a large number of human beings, perhaps it is time to ask ourselves whether real intelligence, in particular Humanoid Intelligence (as defined above), is possible at all. We suggest that there are enough arguments to ‘prove’ (in an informal sense) that it is impossible to build, create or have Humanoid Intelligence. We argue that even though the Singularity is indeed possible, perhaps even very likely (unless we stop it), it may not be what it is supposed to be. The conjecture presented here is that the Singularity is not likely to be even benign, however powerful or advanced it may be. This follows from the idea of the impossibility of Humanoid Intelligence.

2 Some Notes about the Conjecture

We have not used the term theorem for the Impossibility, and the reasons for this should be evident from the arguments that we present. In particular, we do not, and perhaps cannot, use formal notation for this purpose. Even the term conjecture is used in an informal sense. The usage of terms here is closer to legal language than to mathematical language, because that is the best that can be done here. This may be clearer from the Definition and the Story arguments. For similar reasons, the term ‘incompleteness’ is not used; instead, we use impossibility, which is more appropriate for our purposes here, although Gödel’s term ‘essentially incomplete’ is what we are informally arguing for about Humanoid AI, and perhaps AI in general. No claim is made as to whether or not a formal proof is possible in the future. What we present is an informal proof. This proof has to be centred on the distinction between Micro-AI (AI at the level of an intelligent autonomous individual entity) and Macro-AI (very large intelligent autonomous systems, possibly encompassing the whole of humanity or the world). To the best of our knowledge, such a distinction has not been proposed before. While there has been some work in this direction [Brooks, 1998, Signorelli, 2018, Yampolskiy, 2020], for lack of space we cannot explain how the present work differs from previous efforts, except by noting that the argumentation and some of the terms are novel, a bit like arguments for or against the existence of God, a question that the best of philosophers have debated again and again over millennia, and which, as we will see at the end, is relevant to our discussion.

3 The Arguments for the Impossibility Conjecture for Micro-AI

The Definition Argument: Even Peano Arithmetic [Nagel et al., 2001] is based on three undefined terms (zero, number and is the successor of), which are relatively trivial compared to the innumerable terms required for AI (core terms like intelligence and human, or terms like the categories of emotions, let alone terms like consciousness).

The Category Argument: A great deal of AI is about classifying things into categories, but most of these categories (e.g. anger, disgust, good or bad) have no scientifically defined boundaries. This is related to the following argument.

The Story Argument: It is almost established now that many of the essential concepts of our civilisation are convenient fictions or stories [Harari, 2015] and these often form categories and are used in definitions.
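As a toy illustration of the point about category boundaries: forcing graded judgements into hard categories erases exactly the ambiguity that makes these concepts story-like rather than scientific. The labels and scores below are invented for illustration, not drawn from any real model.

```python
# A hypothetical hard classifier over fuzzy emotion categories.
# A near-tie and a clear-cut case receive the same label, so the
# gradedness (the interesting part) is lost at the category boundary.

def classify(scores):
    """Pick the single highest-scoring category, discarding the margin."""
    return max(scores, key=scores.get)

ambiguous = {"anger": 0.51, "disgust": 0.49}   # barely distinguishable
clear     = {"anger": 0.98, "disgust": 0.02}   # unmistakable

assert classify(ambiguous) == classify(clear) == "anger"
```

Both inputs collapse to the same output, though only one of them would strike a human judge as a clear case of anger.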

The Cultural Concept Argument: Many of the terms, concepts and stories are cultural constructs. They have a long history, most of which is unknown, without which they cannot be modelled.

The Individuality, or the Nature Argument: An individual intelligent autonomous entity has to be unique and distinct from all other such entities. It originates in nature, and we have no conception of how it could originate in machines. We are not even sure what this individuality exactly is. However, all through history we have assigned some degree of accountability to human individuals, and we have strict provisions for punishing individuals on this basis, which indicates that we believe in the concept of the ‘self’ or the ‘autonomous individual’, even when we deny its existence, as is becoming popular today.

The Genetic Determinism Argument: Individuality is not completely determined by nature (e.g. by our genes) at birth or creation once and for all. It also develops and changes constantly as it interacts with the environment, preserving its uniqueness.

The Self-organising System Argument: Human beings and human societies are most likely self-organising [Shiva and Shiva, 2020] and organic systems, or they are complex, non-equilibrium systems [Nicolis and Prigogine, 1977]. If so, they are unlikely to be modelled for exact replication or reproduction.

The Environment, or the Nurture Argument: Both intelligence and individuality depend on the environment (or on nurture). Therefore, they cannot be modelled without completely modelling the environment, i.e., going for Macro-AI.

The Memory, or the Personality Argument: Both intelligence and individuality are aspects of personality, which is known to depend on the complete life-memory (conscious and unconscious) of an intelligent being. There is not enough evidence that it is possible to recover or model this complete temporal and environmental history of memory. A lot of our memory, and therefore our individuality and personality, is integrally connected with our bodily memories.
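The point about complex, non-equilibrium systems resisting exact replication can be illustrated with even a toy nonlinear system. The sketch below uses the textbook logistic map, nothing specific to the systems discussed above: a 'replica' differing by one part in a trillion soon disagrees with the original completely.

```python
# Sensitive dependence on initial conditions in a minimal chaotic system.
# Two copies of the same deterministic rule, started a hair apart, decorrelate.

def logistic(x, r=4.0):
    """One step of the logistic map, a standard chaotic system for r = 4."""
    return r * x * (1 - x)

a, b = 0.3, 0.3 + 1e-12   # the 'replica' carries a microscopic initial error
max_gap = 0.0
for _ in range(80):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The tiny error has been amplified to the same order as the values themselves:
assert max_gap > 0.1
```

If even this one-line system cannot be replicated to within a part in a trillion, exact replication of a self-organising organism or society seems far less plausible.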

The Substrate Argument: It is often taken for granted that intelligence can be separated from its substrate and planted on a different substrate. This may be a wrong assumption. Perhaps our intelligence is integrally tied to its substrate and, following the previous argument, it is not possible to separate the body from the mind.

The Causality Argument: There is little progress in modelling causality. Ultimately, the cause of an event or occurrence is not one but many, perhaps even the complete history of the universe.

The Consciousness Argument: Similarly, there is no good enough theory of consciousness, even for human understanding. It is very unlikely that we can completely model human consciousness, and there is no good reason to believe that it can emerge spontaneously under the right conditions (and which conditions would those be?).

The Incompleteness/Degeneracy of Learning Source and Representation Argument: No matter how much data or knowledge we have, it will always be both incomplete and degenerate, making it impossible to completely model intelligence.
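The degeneracy point can be illustrated concretely: any finite set of observations is consistent with many different underlying functions, so data alone cannot single out the intelligence that produced it. The two functions below are invented purely for illustration.

```python
# Underdetermination by finite data: two different functions agree exactly
# on every observed point, yet diverge everywhere off the data.

xs = [0.0, 1.0, 2.0, 3.0]   # the finite 'training data' inputs

def f(x):
    """One candidate underlying function."""
    return 2 * x

def g(x):
    """Another candidate: f plus a polynomial that vanishes on every x in xs."""
    return 2 * x + x * (x - 1) * (x - 2) * (x - 3)

assert all(f(x) == g(x) for x in xs)   # indistinguishable on the data
assert f(4.0) != g(4.0)                # but they disagree off the data
```

Adding more observations only moves the problem: for any finite dataset, infinitely many such rival functions remain.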

The Explainability Argument: Deep neural networks, the current state of the art in AI, have serious problems with explainability even for specific, isolated problems. Without explainability, we cannot be sure whether our models are developing in the right direction.

The Test Incompleteness Argument: Perfect measures of performance are not available even for problems like machine translation. We have no idea what the overall measure of Humanoid Intelligence would be. It may always remain incomplete and imperfect, leading to uncertainty about intelligence.

The Parasitic Machine Argument: Machines depend completely on humans for learning, and on data and knowledge provided by humans. But humans express or manifest only a small part of their intelligent capability, so machines cannot completely learn from humans without first being as intelligent as humans.

The Language Argument: Human(oid) Intelligence and its modelling depend essentially on human language(s). There is no universally accepted theory of how language works.

The Perception Interpretation Argument: Learning requires perception and perception depends on interpretation (and vice-versa), which is almost as hard a problem as modelling intelligence itself.

The Replication Argument: We are facing a scientific crisis of replication even for isolated problems. How could we be sure of replication of Humanoid Intelligence, preserving individual uniqueness?

The Human-Human Epistemic Asymmetry Argument: There is widespread inequality in human society, not just in terms of money and wealth, but also in terms of knowledge and its benefits. This will not only be reflected in the models, but will also make modelling harder.

The Diversity Representation Argument: Humanoid Intelligence that truly works will have to model the complete diversity of human existence in all its aspects, most of which are not even known or documented. It will have to at least preserve that diversity, which is a tall order.

The Data Colonialism Argument: Data is the new oil. Those with more power, money and influence (the Materialistic Holy Trinity) can mine more data from others, without sharing their own data. This is a classic colonial situation and it will hinder the development of Humanoid Intelligence.

The Ethical-Political Argument: Given some of the arguments above, and many others such as data bias, potential for weaponisation etc., there are plenty of ethical and political reasons that have to be taken into account while developing Humanoid Intelligence. We are not sure whether they can all be fully addressed.

The Prescriptivisation Argument: It is now recognised that ‘intelligent’ technology applied at large scale not only monitors behaviour, but changes it [Zuboff, 2018]. This means we are changing the very thing we are trying to model, and thus laying down new mechanical rules for what it means to be human.

The Wish Fulfilment (or Self-fulfilling Prophecy) Argument: Due to the prescriptivisation of life itself by imperfect and inadequately intelligent machines, the problem of modelling Humanoid Intelligence becomes a self-fulfilling prophecy, where we end up modelling not human life, but some corrupted and simplified form of life that we ourselves brought into being with ‘intelligent’ machines.

The Human Intervention Argument: There is no reason to believe that Humanoid Intelligence will develop freely of its own and will not be influenced by human intervention, quite likely to further vested interests. This will cripple the development of true Humanoid Intelligence. This intervention can take the form of secrecy, financial influence (such as research funding) and legal or structural coercion.

The Deepfake Argument: Although we do not yet have truly intelligent machines, we are able to generate data through deepfakes which are not recognisable as fakes by human beings. This deepfake data is going to proliferate and will become part of the data from which the machines learn, effectively modelling not human life, but something else.

The Chain Reaction Argument (or the Law of Exponential Growth Argument): As machines become more ‘intelligent’ they affect more and more of life and change it, even before achieving true intelligence. The speed of this change will increase exponentially and it will cause a chain reaction, leading to unforeseeable consequences, necessarily affecting the modelling of Humanoid Intelligence.

4 The Implications of the Impossibility

It follows from the above arguments that the Singularity at the level of Micro-AI is impossible. In trying to achieve it, and to address the above arguments, the only possible outcome is some kind of Singularity at the Macro-AI level. Such a Singularity will not lead to the replication of human intelligence or its enhancement, but to something totally different. It will, most probably, lead to the extinction (or at least the subservience and servitude) of human intelligence. To achieve just Humanoid Intelligence (Human Individual Micro-AI), even if nothing more, the AI system required would have to be nothing short of the common notion of a Single Supreme God. A Singularity at the macro level will actually make the AI system, or whoever is controlling it, whether an individual or a (most probably small) collective, a Single Supreme God for all practical purposes, as far as human beings are concerned. But this will not be an All Powerful God, nor a Kind God, for it will be Supreme only within the limited scope of humanity and what humanity can affect, and it will be kind only to itself, or perhaps not even that. It may be analogous to the God in the Philip K. Dick story Faith of Our Fathers [Dick and Lethem, 2013], or to the Big Brother of Orwell’s 1984 [Orwell, 1950]. We cannot be sure of the outcome, of course, but those are as likely outcomes as any others. That is reason enough to be very wary of developing Humanoid Intelligence and any variant thereof.

References

Philip K. Dick. I Hope I Shall Arrive Soon. Edited by Mark Hurst and Paul Williams. Doubleday, New York, 1st edition, 1985. ISBN 0385195672.

Alfred North Whitehead and Bertrand Russell. Principia Mathematica. Cambridge University Press, 1925–1927.

Barnaby Sheppard. Gödel’s Incompleteness Theorems, page 419–428. Cambridge University Press, 2014. doi: 10.1017/CBO9781107415614.016.

E. Nagel, J.R. Newman, and D.R. Hofstadter. Gödel’s Proof. NYU Press, 2001. ISBN 9780814758014. URL https://books.google.co.in/books?id=G29G3W_hNQkC.

John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman. Introduction to Automata Theory, Languages, and Computation (3rd Edition). Addison-Wesley Longman Publishing Co., Inc., USA, 2006. ISBN 0321455363.

B. Jack Copeland and Oron Shagrir. The Church-Turing thesis: Logical limit or breachable barrier? Commun. ACM, 62(1):66–74, December 2018. ISSN 0001-0782. doi: 10.1145/3198448. URL https://doi.org/10.1145/3198448.

Ray Kurzweil. The Singularity Is Near: When Humans Transcend Biology. Penguin (Non-Classics), 2006. ISBN 0143037889.

Rodney Brooks. Prospects for human level intelligence for humanoid robots. July 1998.

Camilo Miguel Signorelli. Can computers become conscious and overcome humans? Frontiers in Robotics and AI, 5:121, 2018. doi: 10.3389/frobt.2018.00121. URL https://www.frontiersin.org/article/10.3389/frobt.2018.00121.

Roman V. Yampolskiy. Unpredictability of AI: On the impossibility of accurately predicting all actions of a smarter agent. Journal of Artificial Intelligence and Consciousness, 07(01):109–118, 2020. doi: 10.1142/S2705078520500034.

Y.N. Harari. Sapiens: A Brief History of Humankind. Harper, 2015. ISBN 9780062316103. URL https://books.google.co.in/books?id=FmyBAwAAQBAJ.

V. Shiva and K. Shiva. Oneness Vs. the 1 Percent: Shattering Illusions, Seeding Freedom. Chelsea Green Publishing, 2020. ISBN 9781645020394. URL https://books.google.co.in/books?id=4TmTzQEACAAJ.

G. Nicolis and I. Prigogine. Self-Organization in Nonequilibrium Systems: From Dissipative Structures to Order Through Fluctuations. A Wiley-Interscience publication. Wiley, 1977. ISBN 9780471024019. URL https://books.google.co.in/books?id=mZkQAQAAIAAJ.

Shoshana Zuboff. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, 1st edition, 2018. ISBN 1610395697.

P.K. Dick and J. Lethem. Selected Stories of Philip K. Dick. Houghton Mifflin Harcourt, 2013. ISBN 9780544040540. URL https://books.google.co.in/books?id=V1z9rzfTb2EC.

George Orwell. 1984. Tandem Library, centennial edition, 1950. ISBN 0881030368.

The Owner of the Galba

Not malba, rubble, but galba

You have heard of gazetteers, no?

No, not gazetteer, brother

It is gadgeteers we are talking about

Well, you must at least have heard of gadgets

Not the government kind (the gazette)

Nor the named-entity kind either

That phone kind of gadget

And the camera kind of gadget

And the laptop kind too

All those gadgets are what we are talking about

The thing is, what we have, you see

The house is overflowing with all of these

And most of what we have

Has all turned to malba, you see

So much money sunk into all of it

The matter stands like this, brother

Some folks have decided we are their enemy

Why they have decided this, we do not know

So here we stand, you see, on the edge of penury

The gadget malba just keeps piling up with us

We have given it the name galba, you see

Because in some past life we once

Read a novel by that Mohan Rakesh

That is what lit the bulb in our head

And one more thing, we are telling only you

Not to be repeated to anyone else, understood?

There was a time we too had a fondness for gadgets

For we were dead poor then, destitute even so

And that Rajiv fellow talked of taking the country

Into the twenty-first century, you see, into the globe

That talk, as you now know, has spread just fine across the world

If anyone has escaped this flood, we for one cannot see them

The arrangement in Delhi, such as it is, is to gadgetise everything

Belts tightened over empty stomachs, they have resolved, we hear, with the law behind them

Even the elections, we hear, have all been gadgetised

If gadgets trouble you, find yourself a corner and chant Ram's name

And there is no other recourse, for the world is now a globe

So we too, you see, were swept along in that current back then

Meaning in Rajiv's time, when we were still studying

We were an engineer too, after all

Though our real fondness, as you well know

Was for all that reading and writing, in the manner of Rakesh

So now, you see, we consider ourselves the owner of the galba

We keep our priceless galba locked away under heavy lock and key

But those who have made us their enemy and are responsible for our galba

Their hearts, you see, have still not cooled at our penury

Meaning that every other day there is some new game to harass us

Now can anyone tell us why all this enmity exists?

Then come forward, brother, and explain to us what this matter

Really is, after all. Have we ruined anything of anyone's?

Then say it plainly; perhaps some result may come of it

Then perhaps their hearts will find some peace, and ours too

To those who count as learned folk, we make this plea

Let there be some disclosure in this matter, from both sides

Otherwise, brother, we can only call it an injustice

And you people, we have heard, are forever struggling

Against every kind of injustice, so they say

Do we not count in your court?

Hello, yes, do tell …
