
Artificial Intelligence, Natural Rights


“I propose to consider the question, ‘Can machines think?’”
– Alan Turing

For the purpose of this article, let’s agree that human beings are the world’s most intelligent creatures. Or at least that we may be regarded as more intelligent because we are not merely surviving in the world; we are also creating things [most of us 🙂].

And in the spirit of being intelligent enough to create, the humans in computer science have decided to go one step further and create what is popularly known as Artificial Intelligence (AI). Chances are that you know of AI. Apart from its growing adoption in the development of technology, it’s also showing up more in everyday lingo. But what is AI?

AI is essentially a concept that refers to systems/machines created to function intelligently and ‘independently’. In its simplest form, it is machines doing work like humans. And there are levels to this. One of the more basic levels would be speech recognition. Think of when you talk to Siri like it is your friend. Siri is not human, but when you ask a question, it recognizes your speech, converts it to text and can even respond. AI can also be observed when Netflix graciously recommends a movie for you, as a friend would. Like ‘I know you. I’ve been with you for a while. I understand you. Here’s this amazing movie you’ll love to watch. If you don’t like it, swear for me.’
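To make that Netflix bit concrete, here’s a toy sketch in Python (and I stress *toy*: this is not how Netflix actually works). It just recommends the unseen movie whose genres overlap most with what you’ve already liked. The movies, genres and watch history are all made up for illustration.

```python
# Toy recommender -- not how Netflix actually works.
# Recommend the unseen movie whose genres overlap most with the
# genres of everything the user already liked.

movies = {
    "Inception":    {"sci-fi", "thriller"},
    "The Matrix":   {"sci-fi", "action"},
    "Titanic":      {"romance", "drama"},
    "Interstellar": {"sci-fi", "drama"},
}

liked = ["Inception", "The Matrix"]  # the user's watch history

# Pool the genres of everything the user liked...
taste = set().union(*(movies[m] for m in liked))

# ...then rank unseen movies by genre overlap with that taste profile.
unseen = (m for m in movies if m not in liked)
pick = max(unseen, key=lambda m: len(movies[m] & taste))
print(f"Here's this amazing movie you'll love: {pick}")
```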

On a broader scale, AI encompasses every field that attempts to do what intelligent humans can do. Fields such as robotics (which is basically machines understanding their environment and getting around physically just as humans do), image processing (machines being able to understand what they see), pattern recognition (seeing patterns and being able to make decisions based on those patterns), predictive analysis (using data to predict the future) and so on.

All of this definitely involves the study of the human brain, how it works and how to infuse these cognitive abilities into machines. However, it is important to say that scientists are not necessarily trying to copy and paste human reasoning and behaviour (because how practicable is that?). The goal is essentially to model human reasoning and present results in a refined way. The focus is on results. So it’s cool that a machine can spot certain types of people, but how does it make judgment calls or predictions based on that?

Thinking about it practically, AI generally works with data. Even human beings learn with data. We go about the world for decades learning via what we see, sense, hear, experience and so on. And then we make decisions, inferences, classifications, predictions and other forms of output based on all the data we’ve stored. It’s the same with AI, only that decades’ worth of data is collected and fed into the machines via programming. The machine is then able to read patterns, study behaviour, recognize visuals, understand speech, classify language and so on, to make accurate decisions.

For instance, in order to make my machine intelligent enough not only to diagnose cancer, but also to determine what stage it is and the best medical approach to take, I may feed it data in the form of pictures of different tumors, medical histories of patients, medical solutions adopted or advised, developments in particular cases and so on.
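If you like to see things in code, here’s a minimal sketch of that ‘feed it data, let it learn patterns’ idea using scikit-learn. The tumor features, stage labels and every number below are invented for illustration; real diagnostic AI is vastly more involved (imaging models, clinical validation and so on).

```python
# Minimal "learn from data, then predict" sketch using scikit-learn.
# All features and labels below are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [tumor_size_mm, patient_age, marker_level]
X_train = [
    [5,  45, 0.2],
    [12, 60, 0.8],
    [30, 70, 1.5],
    [4,  38, 0.1],
    [25, 65, 1.2],
]
y_train = ["stage 1", "stage 2", "stage 3", "stage 1", "stage 3"]

model = DecisionTreeClassifier().fit(X_train, y_train)

# A new, unseen case: the model predicts from patterns it learned above.
print(model.predict([[14, 58, 0.9]]))
```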

This is not really an article about the technical abilities of AI, but about its human effects. I like to think that when AI and Big Data meet, you’ll find a bunch of excited scientists but not-so-excited human rights advocates. And the problem is not necessarily that AI uses data. It is *how* it uses that data – the ethics considered in programming, and how the law fosters the observance of those ethics. AI covers a BUNCH of industries. In education, it can be used to score exams or essays. In criminal justice, it can be used to assess risk or guilt to help inform or sway judgment. In medicine, it can be used for diagnostics. It can be used for online content moderation. It can also be used in deciding who you hire or fire.

AI & Privacy

This is the obvious one, right? AI needs data, thrives on it and eventually informs it (tbh). We are in the era of both Big Data and Hasty Agreements. So while there’s a huge collection of information going on, there’s also reduced attention from data subjects (YOU) to the terms and conditions they agree to. (I have a short piece on this in my soon-coming book (amen?).) Data collection right now is madness. There are cameras every mile, apps in every swipe, websites in every click and computer systems somewhere just collecting all of this. I watched a TED Talk where the speaker said Google holds 10–15 exabytes of data. Have you ever even heard of the term ‘exabyte’? There are levels to this.

My concern (and not just for Google) is: how is all this data protected, and what is it used for? What is the breadth of the distribution, reach and use of this information, and what should we expect in the future based on this?

AI & Freedom from Discrimination

As I said earlier, data is fed into systems to drive processing and output. The question is: what *kind* of data?

Humans are naturally biased. A lot of people say data doesn’t lie, and that might be true, but data can discriminate. The process of collecting data can itself taint the outcome. The range of your dataset may also affect judgment. How inclusive is your data collection mechanism? As I said, I believe that data might be accurate and yet discriminatory. If, for instance, a large percentage of your dataset on terrorists and terrorism points to elements such as Islam as a religion, the hijab or turban as dress, Arabic as a language and male as a gender, what you would have is a system that is being programmed to recognize all male Muslims as terrorists. That could be an airport disaster. The data fed in is *probably* accurate but still potentially discriminatory.
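To show how literally this plays out, here’s a deliberately skewed toy example. The three flags, the labels and the very idea of such a crude ‘profile’ are all invented; the point is only that a model trained on biased annotations will faithfully reproduce the bias.

```python
# A deliberately skewed toy dataset showing how "accurate" data can
# still teach a discriminatory rule. Features are invented flags
# [is_male, wears_turban, speaks_arabic]; labels are the training
# set's (biased) annotations.
from sklearn.tree import DecisionTreeClassifier

X = [
    [1, 1, 1], [1, 1, 1], [1, 0, 1], [1, 1, 0],  # annotated "threat"
    [0, 0, 0], [0, 0, 0], [1, 0, 0], [0, 0, 1],  # annotated "no threat"
]
y = ["threat"] * 4 + ["no threat"] * 4

model = DecisionTreeClassifier().fit(X, y)

# An innocent traveler who happens to match the skewed profile:
print(model.predict([[1, 1, 1]]))  # ['threat'] -- the airport disaster
```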

Another example I can think of is the use of AI in recruitment. What are your data sources, and are they inclusive enough to inform the machine accurately about the number of women in senior positions, women’s remuneration expectations, women’s productivity in the workplace and so on?
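One cheap sanity check here (a sketch, with invented records) is to audit the historical data for skew before any model ever sees it, for example by comparing hiring rates across genders:

```python
# Audit sketch: check whether historical "hired" outcomes are skewed
# by gender before training on them. Records below are invented.
from collections import Counter

records = [
    {"gender": "F", "hired": True},  {"gender": "F", "hired": False},
    {"gender": "F", "hired": False}, {"gender": "M", "hired": True},
    {"gender": "M", "hired": True},  {"gender": "M", "hired": False},
]

totals = Counter(r["gender"] for r in records)
hires = Counter(r["gender"] for r in records if r["hired"])

for g in totals:
    print(f"{g}: hired {hires[g] / totals[g]:.0%} of applicants")

# If these rates diverge sharply, a model trained on this history
# will likely learn (and automate) the same skew.
```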

I think that even more important than ensuring the datasets are not biased is ensuring that decisions based on sensitive machine predictions are never taken without additional safeguards.

The same examples apply to Black lives and policing in the US (and perhaps in other parts of the world), to low-income earners and the racial patterns tied to income, and to other minority considerations.

Techies and policymakers alike must understand the need for inclusion and diversity in the development of technology.

There are definitely other ways in which AI could influence human rights, but these are my focus right now. If I were to propose solutions, I’d say:

  1. Human Rights Impact Assessment: In the development of any AI technology, there must be some human rights consideration. And if you’re an excellent person outside of the tech world, you may think, ‘well, I’m sure this is already being done’. But I tell you that it’s rare to see this. Scientists (or programmers) are usually more focused on functionality and optimization and would not necessarily think about ethics. These impact assessments also have to be carried out periodically, not just once.
  2. Education for Techies: Computer scientists need to be enlightened. Simple. Are we teaching these things in classrooms (conventional and otherwise)? Or are we waiting for our bright minds to build the next big thing before we start doing damage control? I believe that ethics and human rights should be *seriously* incorporated into the curricula and syllabi for young scientists.
  3. Test, Test and Test: This is also one of those solutions that sound obvious. But I’ll say it anyway. Scientists should be mandated and supervised (*cringe*) to test their products, not just for functionality but for adherence to ethics. Enumerate potential rights-infringing situations and test what the machine does in each of them (see the sketch after this list).
  4. Let us see through you: Essentially, be transparent. This is mostly targeted at governments. If a system runs on AI, let your people know.
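On point 3, here’s a rough sketch of what an ‘ethics test’ could look like in practice: a unit-test-style check that a model’s positive-prediction rates for two groups stay within some tolerance (a crude demographic-parity check). The model interface, the group datasets and the 10% tolerance are all placeholders I’ve assumed, not a standard anyone has agreed on.

```python
# Sketch of testing for ethics, not just functionality: assert that
# positive-prediction rates across two groups stay within a chosen
# tolerance. `model.predict`, the group cases and the 10% tolerance
# are placeholder assumptions.

def positive_rate(model, cases):
    """Fraction of cases the model flags positive (predictions are 0/1)."""
    preds = [model.predict(c) for c in cases]
    return sum(preds) / len(preds)

def test_demographic_parity(model, group_a_cases, group_b_cases):
    gap = abs(positive_rate(model, group_a_cases)
              - positive_rate(model, group_b_cases))
    assert gap <= 0.10, f"Prediction rates differ by {gap:.0%}"
```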

Cheers!
