Sunday, March 25, 2018

Over a cup of coffee

[Adapted from this FB post. Preludes to it can be found here / here / here.]

The speaker of this TED talk [Youtube Link] (nearly 0.5M views in the 3.5 months since publication), Scott Galloway, is a 'Clinical' Professor of Marketing at the New York University Stern School of Business [LinkedIn profile]. Yesterday afternoon, I had the opportunity of a 90-minute adda at Tikka over a cup of coffee with an alumnus of ours who did his MBA at the same business school. He currently works on investments in start-ups and helps big corporations with mergers and acquisitions (M&A).
Such interactions are very educative for those of us who work with students and their aspirations. I learnt that about 10 percent of start-ups are genuinely promising. Many start-ups oversell or are risk-averse, while some are quite confident. A large fraction of these start-ups are in Artificial Intelligence (AI), targeting the digital marketing space.
This made me reflect upon the recent controversy about Cambridge Analytica's misuse of a social media platform on a grand scale [The Guardian follow-up]. Could it be that tighter rules and regulations on data ownership and protection are in the offing? While data is believed to be the 'new oil' of the economy, there is increasing awareness of, and concern about, the misuse of data and its unintended exploitation.
Going forward, sharing data across platforms may not remain so easy. Informed consent may come to mean much more than clicking an "I Agree" button: the person concerned, given his background, should actually be able to make out what he is consenting to and what its consequences are. He should be able to pick and choose, not be forced to agree en bloc, without the sword of denial of service (a basic service that is essential today and should be considered undeniable) hanging over his head. It should be made explicit whether the person himself (his digital self) is going to end up as a saleable product and, if so, how he can exercise his right to know who holds his digital persona at a given time and for what purpose (since he is more than a poultry animal, nurtured with what it thinks are free treats, only to be eventually sold).
Prof. Galloway's hard-hitting TED talk is followed by a Q&A session, where he says, "They (corporations) are not concerned about the condition of our souls. They are not going to take care of me when I get older. We have set up a society that values shareholder value over everything, and they are doing what they are supposed to be doing."
Zeynep Tufekci's TED talk [Youtube Link] was published just one month before Galloway's, on 11-11-2017. Zeynep, associated with the University of North Carolina and Harvard, does research on the social implications of emerging technologies in the context of politics and corporate responsibility. Her talk is very incisive on how things work in the digital space and what the implications are, and she gives specific examples.
One of her findings shows that in this AI-driven world, the algorithm may identify a person entering a manic phase of bipolar disorder as the most gullible target for a ticket to Las Vegas, since such people tend to overspend, and then push content at him that nudges him towards the purchase. Her own study of a particular rally showed YouTube's AI algorithm suggesting more and more white supremacist videos, in increasing order of extremism. She also talked about an experiment in which about 61 million people were shown one of two versions of a message, the second designed to be more persuasive than the first in getting them to perform a task; a much larger share of the second group actually did it.
She finds that, whatever the intentions of the seemingly good people writing these AI algorithms, the persuasive architecture eventually learns that showing something enticing, hardcore, something that can affect emotion and behaviour, earns more attention, and that people can then be persuaded to do whatever the algorithm is optimising for (a toy sketch of this dynamic follows below). The associated liability therefore cannot simply be transferred to the algorithm, however deep and opaque it may be, with the people behind it claiming innocence.
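To make the point concrete, here is a minimal toy sketch of an engagement-maximising recommender. This is my own illustration, not anything from the talks: the content labels, click-through rates, and the epsilon-greedy strategy are all invented assumptions. The point is only that an algorithm rewarded for clicks alone, with no notion of harm, drifts towards whatever gets clicked most.

```python
import random

# Hypothetical content buckets, ordered from mild to extreme.
# The click rates are made up for illustration; they encode the (assumed)
# observation that more provocative content draws more clicks.
CONTENT = ["mild", "edgy", "hardcore", "extreme"]
TRUE_CLICK_RATE = {"mild": 0.02, "edgy": 0.05, "hardcore": 0.09, "extreme": 0.12}

clicks = {c: 0 for c in CONTENT}  # observed clicks per bucket
shows = {c: 0 for c in CONTENT}   # how often each bucket was recommended

def recommend(epsilon=0.1):
    """Epsilon-greedy: mostly exploit the bucket with the best observed
    click rate, occasionally explore a random one."""
    if random.random() < epsilon or all(v == 0 for v in shows.values()):
        return random.choice(CONTENT)
    return max(CONTENT, key=lambda c: clicks[c] / shows[c] if shows[c] else 0.0)

# Simulate users. The recommender knows nothing about extremism;
# it only ever sees clicks, and clicks are all it optimises.
for _ in range(100_000):
    choice = recommend()
    shows[choice] += 1
    if random.random() < TRUE_CLICK_RATE[choice]:
        clicks[choice] += 1

for c in CONTENT:
    share = shows[c] / sum(shows.values())
    print(f"{c:>8}: recommended {share:5.1%} of the time")
# Typically, 'extreme' ends up with by far the largest share: the persuasive
# architecture 'discovers' extremism without anyone having asked for it.
```

No one wrote "prefer extremism" anywhere in this sketch; the drift falls out of optimising a single engagement metric, which is precisely why "the algorithm did it" is not an adequate defence.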
The last TED talk of this post [Youtube Link], dated July 2017, is by Tristan Harris [Website], who is said to be the "closest thing Silicon Valley has to a conscience." He suggests that we must embrace the following three measures, and at the earliest: (i) we need to acknowledge that we can be persuaded (our ego may find this difficult to accept); (ii) we need new models and accountability systems (transparency in the flow); and (iii) we need a design renaissance that aims at (a) protecting us and (b) empowering us.
In yesterday afternoon's adda at Tikka, our human intelligence (whatever is left of it) told us that, besides artificial intelligence, issues related to genetic engineering need our immediate attention!

Note: The above is as understood by me in a finite time; misunderstandings, if any, cannot be attributed to the other persons appearing in this post.
