The Impossibility Conjecture of Humanoid Artificial Intelligence and the Non-Benign Singularity

Abstract

[A Rough Draft of a Work-in-progress.]

The idea of machines which are almost identical to human beings has been so seductive that it has captured the imaginations of the best minds as well as laypeople for at least a century and a half, perhaps more. Right after Artificial Intelligence (AI) came into being, it was almost taken for granted that soon enough we would be able to build Humanoid Robots. This has also led to some serious speculation about ‘transhumanism’. So far, we do not seem to be anywhere near this goal. It may be time now to ask whether it is even possible at all. We present a set of arguments to the effect that it is impossible to create or build Humanoid Robots or Humanoid Intelligence, where the said intelligence can substitute for human beings in any situation where human beings are required or exist.

1 Humanoid Intelligence, the Singularity and Transhumanism

Before we proceed to discuss the terms in the title of this section and the arguments in the following sections, we first define the foundational terms with some degree of concision and precision:

1. Human Life: Anything and everything that the full variety of human beings are capable of, both individually and collectively. This includes not just behaviour or problem solving, but the whole gamut of capabilities, emotions, desires, actions, thoughts, consciousness, conscience, empathy, creativity and so on within an individual, as well as the whole gamut of associations and relationships, and social, political and ecological structures, crafts, art and so on that can exist in a human society or societies. This is true not just at any given moment, but over the life of the planet. Perhaps it should include even spiritual experiences and ‘revelations’ or ‘delusions’, such as those hinted at in the Philip K. Dick story, Holy Quarrel [Dick et al., 1985].

2. Humanoid: A living and reproducing entity that is almost identical to humans, either with a human-like body or without one, on a different substrate (e.g., inside a computer).

3. Intelligence: Anything and everything that the full variety of human beings are capable of, both individually and collectively, as well as both synchronically and diachronically. This includes not just behaviour or problem solving, but the whole of life as defined.

4. The Singularity: The technological point at which it is possible to create (or have) intelligence that is Humanoid or better than Humanoid.

5. Transhumanism: The idea that, after the Singularity, we can have a society that is far more advanced, for the better, than current and past human societies.

From 1910 to 1927, in the three volumes of Principia Mathematica [Whitehead and Russell, 1925–1927], Whitehead and Russell set out to prove that mathematics is, in some significant sense, reducible to logic. This turned out to be impossible when Gödel published his incompleteness theorems in 1931 [Sheppard, 2014, Nagel et al., 2001]. During the days of the origins of modern Computer Science, before and in the early 1930s, it would have been easy to assume that a computing machine would ultimately be able to solve any problem at all. This too proved to be impossible, with Turing’s undecidability theorem [Hopcroft et al., 2006] and the Church-Turing thesis of computability [Copeland and Shagrir, 2018]. Since then, other kinds of problems have been shown to be undecidable.
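To recall what ‘undecidable’ means in this context, the diagonalization argument behind Turing’s result can be sketched in a few lines of Python (a sketch only: the function halts below is hypothetical and, as the argument itself shows, cannot actually be implemented):

    def halts(program, data):
        """Hypothetical perfect decider: True iff program(data) eventually halts."""
        raise NotImplementedError  # assumed to exist, for the sake of contradiction

    def diagonal(program):
        """Do the opposite of whatever the decider predicts about program(program)."""
        if halts(program, program):
            while True:  # loop forever if the decider says "halts"
                pass
        return  # halt at once if the decider says "loops"

    # Consider diagonal(diagonal): if halts(diagonal, diagonal) is True, then
    # diagonal(diagonal) loops forever; if it is False, it halts immediately.
    # Either way the decider is wrong, so no such total decider can exist.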

Now that we are supposed to be close enough to the Singularity [Kurzweil, 2006] that it may happen within the lifetime of a large number of human beings, perhaps it is time to ask ourselves whether real intelligence, in particular Humanoid Intelligence (as defined above), is possible at all. We suggest that there are enough arguments to ‘prove’ (in an informal sense) that it is impossible to build, to create or to have Humanoid Intelligence. We argue that even though the Singularity is indeed possible, perhaps even very likely (unless we stop it), it may not be what it is supposed to be. The conjecture presented here is that the Singularity is not likely to be even benign, however powerful or advanced it may be. This follows from the idea of the impossibility of Humanoid Intelligence.

2 Some Notes about the Conjecture

We have not used the term theorem for the Impossibility, and the reasons for this should be evident from the arguments that we present. In particular, we do not, and perhaps cannot, use formal notation for this purpose. Even the term conjecture is used in an informal sense. The usage of terms here is closer to legal language than to mathematical language, because that is the best that can be done here. This may be clearer from the Definition and the Story arguments. For a similar reason, the term ‘incompleteness’ is not used; instead we use ‘impossibility’, which is more appropriate for our purposes, although Gödel’s term ‘essentially incomplete’ is what we are informally arguing for about Humanoid AI, and perhaps AI in general. No claim is made as to whether or not a formal proof is possible in the future. What we present is an informal proof. This proof has to be centred around the distinction between Micro-AI (AI at the level of an intelligent autonomous individual entity) and Macro-AI (very large intelligent autonomous systems, possibly encompassing the whole of humanity or the world). To the best of our knowledge, such a distinction has not been proposed before. While there has been some work in this direction [Brooks, 1998, Signorelli, 2018, Yampolskiy, 2020], for lack of space we cannot explain in detail how the present work differs from it, except by noting that the argumentation and some of the terms are novel. The situation is a bit like that of the arguments for or against the existence of God, a question debated by the best of philosophers again and again over millennia, and one which, as we will see at the end, is relevant to our discussion.

3 The Arguments for the Impossibility Conjecture for Micro-AI

The Definition Argument: Even Peano Arithmetic [Nagel et al., 2001] is based on three undefined terms (zero, number and is the successor of), which are relatively trivial terms compared to the innumerable terms required for AI (core terms like intelligence and human, or terms like the categories of emotions, let alone terms like consciousness).
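For concreteness, here is a standard first-order rendering of the core Peano axioms (the textbook formulation, quoted only to make the point): zero (0), number (the domain over which the variables range) and the successor function (S) are constrained by the axioms but never defined.

    \forall x \; \neg (S(x) = 0)                                   % zero is not a successor
    \forall x \, \forall y \; (S(x) = S(y) \rightarrow x = y)      % the successor function is injective
    (\varphi(0) \wedge \forall x \, (\varphi(x) \rightarrow \varphi(S(x)))) \rightarrow \forall x \, \varphi(x)   % induction schema, for every formula \varphi

If even such trivially simple primitives resist definition, the prospects for defining ‘intelligence’, ‘emotion’ or ‘consciousness’ are correspondingly dimmer.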

The Category Argument: A great deal of AI is about classifying things into categories, but most of these categories (e.g. anger, disgust, good or bad) have no scientifically defined boundaries. This is related to the following argument.

The Story Argument: It is almost established now that many of the essential concepts of our civilisation are convenient fictions or stories [Harari, 2015] and these often form categories and are used in definitions.

The Cultural Concept Argument: Many of the terms, concepts and stories are cultural constructs. They have a long history, most of which is unknown, without which they cannot be modelled.

The Individuality, or the Nature Argument: An individual intelligent autonomous entity has to be unique and distinct from all other such entities. It originates in nature, and we have no conception of how it could originate in machines. We are not even sure what this individuality exactly is. However, all through history we have assigned some degree of accountability to the human individual, and we have strict provisions for punishing individuals on this basis, which indicates that we believe in the concept of the ‘self’ or the ‘autonomous individual’, even when we deny its existence, as is becoming popular today.

The Genetic Determinism Argument: Individuality is not completely determined by nature (e.g. by our genes) at birth or creation once and for all. It also develops and changes constantly as it interacts with the environment, preserving its uniqueness.

The Self-organising System Argument: Human beings and human societies are most likely self-organising [Shiva and Shiva, 2020] and organic systems, or they are complex, non-equilibrium systems [Nicolis and Prigogine, 1977]. If so, they are unlikely to be modelled well enough for exact replication or reproduction.

The Environment, or the Nurture Argument: Both intelligence and individuality depend on the environment (on nurture). Therefore, they cannot be modelled without completely modelling the environment, i.e., going for Macro-AI.

The Memory, or the Personality Argument: Both intelligence and individuality are aspects of personality, which is known to depend on the complete life-memory (conscious and unconscious) of an intelligent being. There is not enough evidence that it is possible to recover or model this complete temporal and environmental history of memory. A lot of our memory, and therefore our individuality and personality, is integrally connected with our bodily memories.

The Substrate Argument: It is often taken for granted that intelligence can be separated from its substrate and planted on a different substrate. This may be a wrong assumption. Perhaps our intelligence is integrally tied to its substrate and, following the previous argument, it is not possible to separate the body from the mind.

The Causality Argument: There has been little progress in modelling causality. Ultimately, the cause of an event or occurrence is not one but many, perhaps even the complete history of the universe.

The Consciousness Argument: Similarly, there is no sufficiently good theory of consciousness, even for human understanding. It is very unlikely that we can completely model human consciousness, and there is no good reason to believe that it can emerge spontaneously under the right conditions (and which conditions would those be?).

The Incompleteness/Degeneracy of Learning Source and Representation Argument: No matter how much data or knowledge we have, it will always be both incomplete and degenerate, making it impossible to completely model intelligence.

The Explainability Argument: Deep neural networks, which are the state-of-the-art for AI, have serious problems with explainability even for specific isolated problems. Without it, we cannot be sure whether our models are developing in the right direction.

The Test Incompleteness Argument: Perfect measures of performance are not available even for problems like machine translation. We have no idea what the overall measure of Humanoid Intelligence would be. It may always be incomplete and imperfect, leading to uncertainty about the intelligence achieved.

The Parasitic Machine Argument: Machines completely depend for learning on humans and on data and knowledge provided by humans. But humans express or manifest only a small part of their intelligent capability. So machines cannot completely learn from humans without first being as intelligent as humans.

The Language Argument: Human(oid) Intelligence and its modelling depend essentially on human language(s). There is no universally accepted theory of how language works.

The Perception Interpretation Argument: Learning requires perception and perception depends on interpretation (and vice-versa), which is almost as hard a problem as modelling intelligence itself.

The Replication Argument: We are facing a scientific crisis of replication even for isolated problems. How could we be sure of replicating Humanoid Intelligence while preserving individual uniqueness?

The Human-Human Epistemic Asymmetry Argument: There is widespread inequality in human society, not just in terms of money and wealth, but also in terms of knowledge and its benefits. This will not only be reflected in the modelling, but will also make modelling harder.

The Diversity Representation Argument: Humanoid Intelligence that truly works will have to model the complete diversity of human existence in all its aspects, most of which are not even known or documented. It will have to at least preserve that diversity, which is a tall order.

The Data Colonialism Argument: Data is the new oil. Those with more power, money and influence (the Materialistic Holy Trinity) can mine more data from others, without sharing their own data. This is a classic colonial situation and it will hinder the development of Humanoid Intelligence.

The Ethical-Political Argument: Given some of the arguments above, and many others, such as data bias and the potential for weaponisation, there are plenty of ethical and political issues that have to be taken into account while developing Humanoid Intelligence. We are not sure whether they can all be fully addressed.

The Prescriptivisation Argument: It is now recognised that ‘intelligent’ technology applied at large scale not only monitors behaviour, but changes it [Zuboff, 2018]. This means we are changing the very thing we are trying to model, and thus laying down new mechanical rules for what it means to be human.

The Wish Fulfilment (or Self-fulfilling Prophecy) Argument: Due to the prescriptivisation of life itself by imperfect and inadequately intelligent machines, the problem of modelling Humanoid Intelligence becomes a self-fulfilling prophecy: we end up modelling not human life, but some corrupted and simplified form of life that we ourselves brought into being with ‘intelligent’ machines.

The Human Intervention Argument: There is no reason to believe that Humanoid Intelligence will be allowed to develop freely on its own and will not be influenced by human intervention, quite likely to further vested interests. This will cripple the development of true Humanoid Intelligence. Such intervention can take the form of secrecy, financial influence (such as research funding) and legal or structural coercion.

The Deepfake Argument: Although we do not yet have truly intelligent machines, we are able to generate data through deepfakes that human beings cannot recognise as fake. This deepfake data is going to proliferate and will become part of the data from which the machines learn, so that they effectively model not human life, but something else.

The Chain Reaction Argument (or the Law of Exponential Growth Argument): As machines become more ‘intelligent’ they affect more and more of life and change it, even before achieving true intelligence. The speed of this change will increase exponentially and it will cause a chain reaction, leading to unforeseeable consequences, necessarily affecting the modelling of Humanoid Intelligence.

4 The Implications of the Impossibility

It follows from the above arguments that the Singularity at the level of Micro-AI is impossible. In trying to achieve it, and to address the above arguments, the only possible outcome is some kind of Singularity at the Macro-AI level. Such a Singularity will not lead to the replication of human intelligence or its enhancement, but to something totally different. It will, most probably, lead to the extinction (or at least the subservience and servitude) of human intelligence. To achieve just Humanoid Intelligence (Human Individual Micro-AI), even if nothing more, the AI system required will have to be nothing short of the common notion of a Single Supreme God. Singularity at the macro level will actually make the AI system, or whoever is controlling it, whether an individual or a (most probably small) collective, a Single Supreme God for all practical purposes, as far as human beings are concerned. But this will not be an All-Powerful God, nor a Kind God, for it will be Supreme only within the limited scope of humanity and what humanity can affect, and it will be kind only to itself, or perhaps not even that. It may be analogous to the God in the Philip K. Dick story Faith of Our Fathers [Dick and Lethem, 2013], or to the Big Brother of Orwell’s 1984 [Orwell, 1950]. We cannot be sure of the outcome, of course, but those are as likely outcomes as any others. That is reason enough to be very wary of developing Humanoid Intelligence and any variant thereof.

References

Philip K. Dick, Paul Williams, and Mark Hurst. I Hope I Shall Arrive Soon. Edited by Mark Hurst and Paul Williams. Doubleday, New York, 1st edition, 1985. ISBN 0385195672.

Alfred North Whitehead and Bertrand Russell. Principia Mathematica. Cambridge University Press, 1925–1927.

Barnaby Sheppard. Gödel’s Incompleteness Theorems, pages 419–428. Cambridge University Press, 2014. doi: 10.1017/CBO9781107415614.016.

E. Nagel, J.R. Newman, and D.R. Hofstadter. Gödel’s Proof. NYU Press, 2001. ISBN 9780814758014. URL https://books.google.co.in/books?id=G29G3W_hNQkC.

John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman. Introduction to Automata Theory, Languages, and Computation (3rd Edition). Addison-Wesley Longman Publishing Co., Inc., USA, 2006. ISBN 0321455363.

B. Jack Copeland and Oron Shagrir. The Church-Turing thesis: Logical limit or breachable barrier? Commun. ACM, 62(1):66–74, December 2018. ISSN 0001-0782. doi: 10.1145/3198448. URL https://doi.org/10.1145/3198448.

Ray Kurzweil. The Singularity Is Near: When Humans Transcend Biology. Penguin (Non-Classics), 2006. ISBN 0143037889.

Rodney Brooks. Prospects for human level intelligence for humanoid robots. July 1998.

Camilo Miguel Signorelli. Can computers become conscious and overcome humans? Frontiers in Robotics and AI, 5:121, 2018. doi: 10.3389/frobt.2018.00121. URL https://www.frontiersin.org/article/10.3389/frobt.2018.00121.

Roman V. Yampolskiy. Unpredictability of AI: On the impossibility of accurately predicting all actions of a smarter agent. Journal of Artificial Intelligence and Consciousness, 07(01):109–118, 2020. doi: 10.1142/S2705078520500034.

Y.N. Harari. Sapiens: A Brief History of Humankind. Harper, 2015. ISBN 9780062316103. URL https://books.google.co.in/books?id=FmyBAwAAQBAJ.

V. Shiva and K. Shiva. Oneness Vs. the 1 Percent: Shattering Illusions, Seeding Freedom. Chelsea Green Publishing, 2020. ISBN 9781645020394. URL https://books.google.co.in/books?id=4TmTzQEACAAJ.

G. Nicolis and I. Prigogine. Self-Organization in Nonequilibrium Systems: From Dissipative Structures to Order Through Fluctuations. A Wiley-Interscience publication. Wiley, 1977. ISBN 9780471024019. URL https://books.google.co.in/books?id=mZkQAQAAIAAJ.

Shoshana Zuboff. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, 1st edition, 2018. ISBN 1610395697.

P.K. Dick and J. Lethem. Selected Stories of Philip K. Dick. Houghton Mifflin Harcourt, 2013. ISBN 9780544040540. URL https://books.google.co.in/books?id=V1z9rzfTb2EC.

George Orwell. 1984. Tandem Library, centennial edition, 1950. ISBN 0881030368. URL http://www.amazon.de/1984-Signet-Classics-George-Orwell/dp/0881030368.

Weaponizable Technologies

[Image: Panopticon]

Weapons are devices

That can harm people

Can also harm property

But that’s less important

 

Weapons are technologies

Not necessarily physical

As in the Foucauldian sense

 

In that sense,

They can also

Harm society

And culture,

Civilizations

Humanity itself

 

And,

More importantly

The very idea of

What humanity is

 

In the Foucauldian sense, they

Can generate chain reactions

Just like nuclear technologies

And they can destroy humanity

Just like fission-fusion weapons

 

Weapons or technologies

Are not tied to a particular

Ideology or even a religion

 

In the Foucauldian sense,

Conventional technologies

 

Are clandestinely

Or benevolently

Developed, and

Are weaponized

 

They are proliferated

Then are exposed

Are opposed, and

Then, gradually

Are normalized

Are assimilated

Into our social fabric

 

The protests against the weapons

And weaponized technologies

As in the world we have made

Not necessarily in the world

That we could perhaps make

Are very predictable phenomena

 

They can start out very strong

Then they become a shadow of

Themselves, or even a parody

 

At best they can become, and

Exist for a longish time, even

Perhaps with ups and downs

 

With limited longish term achievements

Or with very impressive short term ones

Or with no effect on the status quo at all

 

A connoisseur’s delight

They often are reduced to

 

At worst they may become

Freak shows on the fringes

As Kipling showed in a story

Even if they are genuine

Not the fake ones: A part

Of Manufactured Dissent

 

A protest is a lot like a balm

A protest that is for a single issue

Or, at most, a few such issues

For the people who are hurting

 

In that sense, they are a good thing

But pardon me, for I feel duty bound

To spoil the positivity with some

Unallied and honest bit of truth

 

For they are mostly just balms

That give temporary relief

From the symptoms only

 

They are necessary, but not sufficient

They are not cures in the end

And they come at the expense

Of some other people, who are

Also very much hurting, and

Their issues, symptomatically,

Can be very much different

 

In fact, they can be the exact

Contraries of the issues of the

First set of people who are hurting

 

The powers that be are apt to play

The one against the other, and

The little or large bits of evil

In all of us, ensure that we play

That game, of our own volition

Collectively, so that none feels guilty

 

On our own initiative even, or

So we might convince ourselves

 

Weaponised technologies then

Not just weaponizable ones

 

Are morally

And ethically

And legally

Sanctioned finally

 

That means that

They are approved

By general society

 

And they become

An integral part

A necessary part

Of the civilization

 

They are never

Ever sufficient

 

They become fait accompli

Which is a terrifying phrase

 

After enough time

They are taken

For granted

Are not even

Noticed in our

Everyday life

 

Most of us forget what they mean

Or what they are, how they work

They become part of our natural

Reality, our very natural universe

 

Who can use weapons?

 

Anyone can use them

If they can get access

 

To them, somehow, anyhow

 

And they will be used

Later on, if not sooner

Over there, if not here

At least in the beginning

 

The good guys can use them

Or those who claim to be so

We all know what that means

 

The bad guys can use them

The ugly guys can use them

The evil guys can use them

 

Individually evil can use them

Collectively evil can use them

 

More likely the latter

 

Anyone anywhere anytime on

The whole political spectrum

Can use them, if less or more

Individually or collectively

 

More likely the latter

 

There is absolutely

No guarantee that

Any of the above

Or indeed all of them

Can’t use them at all

Ever and anywhere

 

But can the weak and the meek

Or the tired and the poor

Use them as much as the

Strong and the powerful

To the same extent, even

For the purpose of self-defense?

 

Can single individuals use them

As much as the collective

To the same extent, even

For the purpose of self-defense?

 

First they are used over there

On those we don’t care about

Then they are used over here

 

And when that happens

There are fresh protests

 

We all care about ourselves

Even if we don’t about them

 

Once again, they

Are exposed: For us

Are opposed, and

Then, gradually

Are normalized

Are assimilated

Into our social fabric

Our very own life

 

Excluding them over there

They are already included

We still don’t care about them

We still care only for ourselves

 

Like before, again

They are morally

And ethically

And legally

Sanctioned finally

 

This time, however

For us, not just them

 

Some weaponized technologies

Are so totally unthinkably evil

That their existence is not even

Acknowledged, for preserving

Collective sense of being good

 

Such technologies are only used

Clandestinely, outside all records

So they leave no evidence at all

 

Who do they mean to target?

The demonized are targeted

Mentally-ill may be targeted

Truly subversive freethinkers

May be targeted, selectively

Misfits and loners can also

Be targeted with these ones

 

And, above all

 

The uncontaminated

(Unalloyed, if you like

Or unallied, if you like)

The incorrigible

Truth seekers, As

They may be called

Justice seekers also

Unalloyed or unallied

Can be targeted with

These unacknowledged

Weaponized technologies

In the Foucauldian sense

 

For The Greater Good

Seems they are called

Coal Mine Canaries

Freelance Test Rats

They may not be paid

May not even consent

 

They don’t even know this

That they have been made that

This is the most evil part

Of the scheme, in which

 

All “schematism” had to be avoided

 

So they can’t even share

With anyone at all

Let alone lodge a protest

 

They become Dead Canaries

If they come uncomfortably

Close to the truths that matter

 

In fact, these technologies

Are, by their very nature

Made only for selective use

Personalization is their

Key feature, their identifier

 

One of them had even

Got put on the record

Perhaps due to naïveté

It was called Zersetzung

It specifically recorded

Naïvely, as it turned out

It specifically wrote down

 

This kind of weaponised technology

Is a collective, organised and mobilised

Version of what is called gaslighting

 

A later version of it was called COINTELPRO

Who knows how many different versions of it

Exist today in how many places

Officially or unofficially

Recorded or unrecorded

 

In the original version called Zersetzung

All “schematism” had to be avoided

Because that would make opposition

And protest against it easily possible

 

It being: The collective using it?

 

Individual simply can’t use it

Not to the same degree and reach

Not anywhere remotely close

 

Or the technology itself only?

 

Or why not both of them?

 

But we had better not forget

Technologies are the means

Religions and ideologies are

About the ends, not the means

For them, practically speaking

Ends always justify the means

 

Even if they are, unthinkably

Irredeemably, only pure evil

 

However, we are all endowed with

The extreme powers of self-deception

Individually yes, but also collectively

 

So we still manage to think that they

Are still for them, over there, not us

They are within our society, never us

They are still for them, not over here

Over there can be much nearer now

But it is still over there, and for them

 

Thus, once more magically

They become fait accompli

With a very different context

But actually the same context

 

They are always necessary

So it is claimed, benevolently

But they are never sufficient

 

This is a universal theorem

If you like to be very precise

Then it is at the very least

A pretty likely conjecture

 

And so we march on forward

Or even backward oftentimes

Or sideways, if necessary

Which can be very effective

If you know what I mean

 

In search of new weapons

And ever new technologies

 

That can be weaponized

Easily and yes, inevitably

Even if you don’t believe

In Inevitabilism at all

 

What really is inevitable

However, is the fact that

Some weak, or the meek

Or an isolated individual

Perhaps crazy, perhaps not

Will use them occasionally

Usually after provocation

But sometimes without it

 

Or some collective

Rogue or not rogue

 

A matter of definition

 

Will also make use of them

Regularly or occasionally

 

That is a great opportunity

A motivation for finding

Implementing and using

Ever more lethal weapons

Weaponized technologies

And some non-lethal ones

In the Foucauldian sense

 

We find new evils

We define new evils

We create new evils

 

We get new weapons

To fight newest evils

Which creates even

More ever new evils

 

Thus the circle of evil

Closes in upon us all

Over there, over here

 

So what do you think about it?

***

Originally published on 14th August, 2019. Updated on 20th September, 2019.

How Many Grams?

There is an automatically (intelligently) generated blog which I have read recently.

It appears to be (let’s give ‘seems’ some rest) quite a popular one in a certain section.

I know the corpus on which it was trained.

And the corpus on which it was retrained.

(Including most of the quotes and the comments, especially the long ones).

But I wonder whether the order of n-grams was five or six.

It is definitely better than four grams.

It could even be Se7en.

This brings up a new idea.

What about writing a paper on automatically guessing the order of n-grams, given some generated text?

It may be difficult in the general case, but in our case we know the corpus on which it was trained.

Any takers?
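For what it is worth, here is a minimal sketch of one way to start on that paper (assuming the generated text and the training corpus are available as plain-text files; the file names below are hypothetical): for increasing n, measure what fraction of the generated text’s n-grams occur verbatim in the training corpus. For a pure n-gram generator, coverage should stay at or near 100% up to the model’s order and then begin to drop.

    def ngrams(tokens, n):
        """Return the set of n-grams (as tuples) in a token sequence."""
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def coverage_by_order(corpus_tokens, generated_tokens, max_n=10):
        """For n = 2..max_n, fraction of generated n-grams found verbatim in the corpus."""
        scores = {}
        for n in range(2, max_n + 1):
            gen = ngrams(generated_tokens, n)
            if not gen:
                break
            scores[n] = len(gen & ngrams(corpus_tokens, n)) / len(gen)
        return scores

    if __name__ == "__main__":
        # Hypothetical file names; whitespace tokenisation for simplicity.
        corpus = open("training_corpus.txt", encoding="utf-8").read().split()
        generated = open("generated_blog.txt", encoding="utf-8").read().split()
        for n, score in sorted(coverage_by_order(corpus, generated).items()):
            print(f"{n}-gram coverage: {score:.2%}")

A sharp drop after, say, n = 6 would point to a six-gram model; smoothing and interpolation would blur the boundary, which is one reason the general case, without access to the training corpus, is much harder.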

An Introduction to Sanchay (संचय)

In the previous post (I am ashamed to say that I cannot find a suitable word for ‘post’), I wrote (in English) about the new version of Sanchay. The funny thing is that I have hardly written anything about Sanchay in Hindi so far. In an attempt to correct this mistake, I have decided to write something about Sanchay over the next few weeks.

So who is Sanchay? Or what is Sanchay?

The answer to the first question (in American terminology) is that Sanchay is a single-parent child who is not receiving any welfare benefits but who carries a great many responsibilities.

The answer to the second question is that Sanchay is a free (you could also say gratis) and open-source collection of computational tools useful for researchers working in computational linguistics or linguistics. But it can be of particular use to anyone who works with Indian languages on a computer. One of its features is that new languages and encodings can be added to it easily. Almost all the major Indian languages are already included, and for using them in Sanchay you do not depend on the operating system, although if the operating system does support such a language, you can use that facility in Sanchay as well. What is more, the same version of Sanchay works on both Windows and Linux/Unix, provided you have the JDK (Java Development Kit) installed. Even the font for your language does not need to be installed in the operating system.

The current version of Sanchay is 0.3.0. The biggest difference from the previous version is that all the Sanchay tools can now be used from a single place; there is no need to remember the names of separate scripts. In all, twelve tools (applications) have been included, namely:

  1. Sanchay text editor
  2. Table editor
  3. Find-replace-extract tool
  4. Word list builder
  5. Word list analyser and visualiser
  6. Language and encoding identification tool
  7. Syntactic annotation interface
  8. Parallel corpus annotation interface
  9. N-gram language modelling tool
  10. Discourse annotation interface
  11. File splitter
  12. Automatic annotation tool

If most of these make no sense to you, please wait a little. I will try to give more information about them in the posts to come.

Perhaps there is no harm in adding that Sanchay is the result of this humble author’s stubborn resolve over the last few years, with contributions from a few other people as well, even if in small measure. The names of all these people will soon appear on the Sanchay website. Almost all of them are (or were) students who worked, or are working, on some project under my ‘guidance’.

I hope that the next version of Sanchay will be out in a few months and that it will have even more tools and features.

A Tryst with the Soul of Paris (1)

As I promised, I am going to write about the movie ‘La Môme’, also known as ‘La Vie en Rose’ (‘The Life in the Pink’). The movie is about the legendary French popular singer Édith Piaf, real name Édith Giovanna Gassion, but earlier known as La Môme Piaf (The Little Sparrow).

For the last many weeks, I have been soaking myself in her songs. Not her alone, because I am never ever an exclusivist, but my playlist during this period has been almost half full of her songs. Or songs related to her, i.e., songs sung by her which were later also sung by others. As far as music is concerned, this has been one of the major obsessions so far. And it doesn’t look like I am going to get over it soon. I don’t mind it, of course.

I even found some notes and tunes familiar from Hindi film songs, which are the true melting pot of music like nothing else.

Did I say I will talk about it later?

Let it be said that I have listened to a very wide variety of music from around the world and claim to have a very good musical sense. So, now that you know about my qualifications for writing about her and the movie based on her (I guess you already know that I also claim to have a very good cinematic sense), I can get on and you better take me seriously.

Heh! Heh! Where is your degree?

First, I will say what has already been said by all. Marion Cotillard has given a great performance in this movie as the legendary singer. It’s hard for me to forget that she is not really Édith Piaf.

By the way, she became the first actor (or actress) to “ever win an Academy Award for Best Actress (‘Oscar’) for a performance entirely in French”. Given that winning an Academy Award is considered the height of achievement for people working in the movies, doesn’t it sound a bit strange? I mean, French directors (along with directors from other countries in Europe and Asia) have been making movies and setting the standards for others for a long time now, and French actors have been acting in them. Well enough to deserve world-class awards.

How easy it is to forget that the Oscars, the Academy Awards, are mainly meant for English movies. There is just one magnanimous (or guest, if you like) category for ‘Foreign language movies’. But everyone behaves as if the Academy Awards are equally for all movies of the world.

Can we expect globalization of the Academy Awards? I won’t bet on it.

Except that I have never bet.

The spell checker has identified ‘globalization’ as an invalid word. I am adding it to the dictionary. The spell checker also doesn’t recognize ‘exclusivist’ as a valid word. I am adding this word too.

I have heard the term ‘Artificial Intelligence’ somewhere. I also heard a rumor (rumour for the non-dominant party) that computers now have some of it. Why do I feel a bit relieved that it is just a rumor?

Coming back to the movie, it is about a singer who, as someone said, “belts them out, doesn’t she?” She does indeed. And she does just great. I have become her lifetime admirer. For whatever is left.

She was a born singer. She started on the street. She was the daughter of an acrobat and a street singer. For some time she lived in a brothel managed by her grandmother, where she was treated very well. One of the prostitutes became so fond of her that she was heartbroken and hysterical when the father came back for his daughter. With her father, she (the singer to be) lived in a circus. Later she accompanied her father on his acrobatic (contortionist) street shows and started singing. Then she sang on the streets with her half-sister, who remained close to her till her death, except for some time when she felt ignored and abandoned by the star singer.

She was discovered by a nightclub owner. She was suspected of involvement in his murder, but was cleared. She denied that she had anything to do with that and I would prefer to believe that. I would rather give her the benefit of doubt than to Henry Kissinger. Or so many like him, even if not his equal in douchehood.

She sang under the protection of local mafia men, who took their share, obviously. She met a composer, Marguerite Monnot, who also became her ‘most loyal friend’ for the rest of her life. Then she was mentored by a composer who was also a poet and a businessman. She became popular on the radio as well as on the stage. She became a star. Actually, in France, she became a super star. She mentored many people and helped them launch their career. And ‘dropped’ them when they became successful and no longer needed her mentoring. She helped launch many careers, including that of another legendary singer Yves Montand. Jean Cocteau wrote a successful one-act play ‘Le Bel Indifférent’ specially for her and she acted in it.

She was severely injured in a major car accident. Then she suffered more car accidents. Partly because of injuries from the car crashes, she got into addiction and suffered more. She fell in love with a married French boxer (who was a star in his own right in France) …

Well, according to the ethics of movie reviewing, I shouldn’t divulge too much. Suffice it, as the phrase goes, to say that if there was anyone whose life was the stuff of legend, she was the one.

I would say even more than Howard Hughes.

So much about her, what about the movie? It is one of the best biopics I have ever seen. It is better than ‘The Aviator’. It is better than ‘Capote’, even though I have more than a soft spot for movies made about writers or about literature. It is better even than ‘Gandhi’. More about that last movie later.

Now the reasons why it is better. First is simply that I like it more. But more specifically, everything is almost perfect in this biopic. Direction (Olivier Dahan) is really good without being pretentious or stiff. Screenplay (Isabelle Sobelman and Olivier Dahan) is as it should be for a biopic. Realistic but still interesting. Not over the top. Neither starry eyed, nor of the kind which seems to be declaring ‘I will (academically) judge this person’s personal life and cut him or her to size’.

Marion Cotillard actually became The Little Sparrow. I don’t know whether it was with or without Method Acting. The rest of the cast also gave very convincing performances, including the actress who played Marlene Dietrich. I should make special mention of Sylvie Testud who played the role of Mômone (Simone Berteaut), Édith’s half-sister and her lifelong friend. Her lifelong partner in mischief.

For now, I will stop talking about the movie here as I intend to write a second installment of this post.

I would be proud to have lived a life like the one she lived. With warts and all.

Even now, as I write, she is singing in the background. Literally.

In the words of the movie’s Marlene Dietrich, she is taking me on a voyage to Paris. Where (unlike Marlene Dietrich) I have never been, except for half an hour at the airport when I had to keep sitting in the plane as there was a strike at the airport. So I have yet to set my feet on the soil of Paris, but The Little Sparrow, who really belts them out and who embodies the soul of Paris, has flown me around there plenty of times now.

P.S.: The strike in the above paragraph doesn’t mean terrorist strike. It means labour strike. Just in case.

And yes, labor for the dominant party.