Inspiration behind Authoring the Book
Shri Rajiv elucidates:
“So, you know, when I did artificial intelligence as a computer science student in the US, nearly 50 years ago, it was a very basic field, not so advanced. Then I set it aside and got into the humanities and social sciences, started a foundation, a think tank, to promote ideas about our civilization and to do a lot of original research myself. But about five years ago, I decided to go back and update my knowledge of AI and bring it into the context of Indian civilization, Indian thought, and issues in India, because I felt I was lagging behind. And of course, India is now about 10 years behind China and the US in artificial intelligence. We have a large amount of manpower trained, but these people get outsourced. They’re working to create intellectual property for other people and not Indian intellectual property. And also, a lot of the work that the artificial-intelligence-trained people are doing is very basic, low-level kind of work. So, I felt that this needs to be addressed, and I was not satisfied. I’m still not satisfied with the policies in India: on data protection, on the way in [which it is] going about managing its artificial intelligence program, and on not being fully aware of the dangers and threats that a foreign artificial intelligence brings for India’s national security.”
Indian Journal of Artificial Intelligence and Law

Shri Rajiv explains that the five focus areas of his book are five different battlefields, like Kurukshetra. He further explains:
“One of the legal issues is: if an algorithm makes a choice, you know, whether to turn left or right in a crisis mode while it’s driving a car. If it turns left, it kills this person; if it turns right, it kills that one. Who made the choice? Who decided which one gets hurt? Similarly, algorithms make choices on hiring people: what are the consequences of hiring X and not Y? And algorithms decide who gets a rare organ when an organ transplant is to happen.”

He further dives deep into the issue of algorithmic accountability and elucidates:

“Who’s liable? Now you may say that the person who trained it [is liable]. But it’s not so clear who trained it, because, you know, when a child is growing up, there are many influences on the child: school training, friends train him, the media also trains him. Just as the child looks at so many examples, learns from them, gets influenced, and is a product of all kinds of influences […] in the same way, in the case of algorithmic training, a machine gets big data, and this big data gives it examples of what to do and what not to do. […] It’s constantly changing. It is not a fixed algorithm. Machine learning is a dynamic algorithm; it is learning from experiences.

So, for instance, if you have an algorithm that is learning about case law in India and you keep feeding it case law, it processes all the case law and understands the language; it could be English, whatever language the case is written in. And it is able to derive what actions produce what consequences, and what the likelihood is: if [you strategize] like this, then you get a favorable outcome. It builds a kind of legal understanding, just like lawyers, human beings, have, and that can be augmented by algorithms. So, the question is, if somebody asks you, okay, show me the logic in your algorithm, how you came up with this, the person cannot, because [it is] just so complex. The algorithm has learned from many, many cases, [and] from many, many examples, from so much big data. You cannot say that the algorithm works exactly one way: it works this way in some cases and that way in another situation. And the algorithm is always learning. If I explained to you today how it’s working, then tomorrow it’s different, because it’s got new data. So, this is another challenge for the law.”

Shri Rajiv elucidates that the algorithms used by Big Tech, and by mainstream entities in general, carry a Westernized perspective when it comes to studying India’s demography. He explains:

“Another thing that I’m concerned about is that these algorithms have been trained on [the basis of] Western [methodology and perception, i.e.,] the Western study of India. […] The community is looked at through what I call Western universalism, which is the lens of Western people, their history, their philosophy, and what happened in Europe. […] Based on all that, they’ve come up with a theory that this is normal for everybody in the whole world; but that’s not true. It’s normal for them, but may not be normal for us. So, this kind of training based on Western universalism, even of Indian culture, is quite misleading.”

e-ISSN: 2582-6999 | isail.in/journal

About Vedic AI and Biological Materialism

Shri Rajiv elucidates biological materialism and explains how it develops, in general, in the realm of AI as an industry:

“The AI can understand you very well, better than human psychology can. And then the AI can artificially give you that also. So, you know, people will end up with artificial life, some kind of fake life, and a lot of people will become total morons, zombies, living in this kind of world. And the digital companies become worth even more. They’re already the richest companies in the world; now they become even richer, because they are hacking the deepest desires of human beings. So you see the world’s experience becoming fake, less and less real, away from the real journey […] It is a serious issue, and gurus need to understand it.”

Shri Rajiv elucidates the role of Vedic and Indic literature and scriptures in the fostering of technology, and the manipulation of the human mind:

“It can help you in agriculture. It can help you in medical surgery. There are so many things it can do. It could also probably help in more efficient energy generation and in [climate change] areas; all those kinds of things. […] But what concerns me is when it starts manipulating the human mind, when [the companies] are doing it. When it is used by the human mind to solve disease and to solve problems here and there, all of that seems to be fine; that is one thing. But when you turn the surveillance onto the human person himself, so that the person becomes an object, an object controlled by whoever is controlling this AI machine, then I think there is a serious ethical problem.”

Shri Rajiv explains algorithmic biology:
“So, modernization means that as machines get smarter, people are getting dumber. Machines getting smarter, people getting dumber; people saying, ah, we will ask Google. Why do I need to know anything? Why do I need to study law? […] You don’t even have to type; you just speak, and you get your answer. So, the source of knowledge and authority has shifted from people to the digital algorithms. And the people who own these algorithms are [sitting in] some other country, and they don’t have any [concern] for what is happening in India. They just do marketing and make money. So, a whole generation is being raised on these digital gurus, these digital “devtas”, you know, and we are getting dumb.

[…] They can gamify a community, whether it is farmers, whether it is [of any identity]. They can take a community and understand what their hot buttons are, how they respond, what they will respond to, who their leaders are, and what their ideology is, [in order] to manipulate them, to make them think a certain way. And then they can bring in the kind of content to motivate them in a certain direction. They can create divide and rule, like the British East India Company did. […] So they have this kind of inside information and the ability to manipulate, while the public is becoming morons and the leaders are not understanding it; our leaders don’t understand what I’m telling you about. […] The people who have got this huge AI machinery going are 10, 15 years ahead of India. […] We are users of somebody else’s technology. We may have the largest number of cell phones, but the hardware is Chinese and the operating system is American. […] We are proud as consumers of somebody else’s product.

[…] If a community somewhere in the Amazon jungle has been using a particular plant to treat a certain disease, the pharma industry, [whose] people go around the world looking for such things, takes cuttings from that tree and brings them back to their labs. They find out which molecule, out of all the plant’s complicated chemistry, is the active molecule they can isolate. Then they get a patent on it, and then they sell that medicine back to make a lot of money. [Under international IP and cultural law] the community can make a claim, because, even though they did not patent it, [their ownership matters since] the medicine was based on their plant product, which they were using for a certain purpose. […] They get a certain percentage share of that intellectual property.

Now the proposal that I’m making is that, just as the plant is raw material for discovering drugs, similarly, big data is raw material for discovering algorithms, for making the algorithms stronger. So, when a foreign company comes and does some surveillance [in India], it gets a lot of diversity: diversity of genetics, diversity of language and culture and economic strata, all kinds of social situations. In studying all this, the algorithm is really studying a very complex microcosm of the whole world in one place. […] So, they are studying this, and that is very precious big data. Why are we giving it away free? Why are we even giving it at all? There is no shortage of smart people in India who could do all this. Why don’t you go to 5-10 Indian universities, put up a tender, and say, okay, we want to give three or four contracts to you; people, come up with a proposal? Why would you outsource this to foreign people? I cannot understand why Yogi Adityanath did that [for the Kumbh Mela in 2017]. According to me, [it was] a serious blunder, especially after I had gone to him personally and briefed him on what the problem is. Yet they still did it.
So that is my position on looking at biology as algorithms, as machines. […] The human being becomes a biological machine operated by some AI system, and in this way they can treat so many people in India as, sort of, biological objects that are working for them, while they are busy collecting data out of it. […] You know, there is research on making viruses that will only attack a particular DNA type. This is not science fiction. There are viruses that will spare a particular kind of DNA, which means that such a virus will go for anybody except that particular DNA; it will not attack that DNA. So this is our big data. Biology has become part of AI.”

Social Media Companies and Algorithmic Censorship

Shri Rajiv elucidates:

“The reason Facebook (if you take Facebook as a competitor, or let’s say Twitter as a competitor), the reason they are able to invest so much in artificial intelligence, is because they make a lot of money on advertising. So, you have to fund it. […] You cannot expect that some government or somebody will fund you with $50,000 a year; that’s the scale I’m talking about. […] And this requires several thousands of man-years to develop this kind of e-commerce background, because Facebook did not invent it overnight. […] So, they have an experience lead of 10-15 years. […] In 2022, Facebook is going to introduce augmented reality goggles, and so will Apple. So, now Facebook will become a hardware-related company. They will have a huge [user] base like Apple, and they will have these [goggles], and they are testing them. I know some people who are involved in the testing of this, so this is pretty awesome stuff. They will give you amazing experiences, which is what Facebook is about: people wanting experiences, having friends and what not. So, these augmented goggles will give you that, and eventually it will be implanted.”

Aesthetic & Pragmatic Influence of AI

Shri Rajiv elucidates:

“I started this philosophically. I started by asking whether the universe is pragmatic: if everything is moving very pragmatically, there is no aesthetic aspect. […] If the universe is an algorithm, it is all very pragmatic. So, I was actually studying this all my life from a philosophical point of view. Then, if it is a pragmatic algorithm, where does aesthetics fit in? What is the role of aesthetics, what is the role of pragmatics, and how do they fit with each other? That is one kind of inquiry. Then I combined this inquiry with a different one, because Karl Marx came up with the theory of the aestheticization of power. […] But AI is now getting into the emotional dimension, in terms of understanding what kind of emotion this guy has, what he would like, and how his behavior is being affected by his emotions. […] So, psychological warfare is getting better and involves the use of aesthetics. I think it is a very big topic. I’m glad you mentioned it, but it deserves a lot of time.”

Artificial General Intelligence
Shri Rajiv elucidates:

“So, you know, the thing is that AGI is not as far away as many people might think. Or let’s just say there is no disconnect between AGI and non-AGI; rather, there is a continuum. It will be gradual. It’s like you are climbing the steps towards AGI: you are five steps [along], then you will be seven steps [along], and so you will be approaching it. […] The algorithms are learning faster than a human child can learn. It takes a long time to train a child; it doesn’t take that long to train an algorithm. […] AGI is still at an academic stage, such that it’s an open book; more or less, a large part of it is quite open. […] But I’m concerned about things that are very pragmatic, which are very near-term, which are now becoming closed, which are not open source anymore.”
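A recurring technical point in the interview is that a machine-learning model is not a fixed algorithm: it is quick to retrain, and its internal "explanation" (for a linear model, its weights) shifts every time new data arrives, which is why yesterday's explanation may not describe today's behaviour. The following is a minimal, purely illustrative sketch of that idea (the model, the data, and the class name are invented for illustration and are not from the book):

```python
# Illustrative sketch: an online learner whose internal "explanation"
# (its weights) changes as new data arrives.
import math


class OnlineLogisticRegression:
    """Tiny logistic-regression model trained one example at a time (SGD)."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features  # the model's "explanation"
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, y):
        # One stochastic-gradient step on the log-loss.
        err = self.predict_proba(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err


if __name__ == "__main__":
    model = OnlineLogisticRegression(n_features=2)

    # Monday's data: feature 0 predicts a positive outcome.
    for x, y in [([1, 0], 1), ([0, 1], 0)] * 50:
        model.update(x, y)
    monday_weights = [round(w, 2) for w in model.w]

    # Tuesday's data reverses the pattern; the model keeps learning.
    for x, y in [([1, 0], 0), ([0, 1], 1)] * 50:
        model.update(x, y)
    tuesday_weights = [round(w, 2) for w in model.w]

    # Same model, different day, different "explanation".
    print("Monday :", monday_weights)
    print("Tuesday:", tuesday_weights)
```

After Monday's batch the first weight is positive; after Tuesday's reversed batch it has moved sharply the other way, so any explanation of "how the algorithm decides" given on Monday is already stale, which is the regulatory difficulty the interview describes.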