Why could AI be dangerous?

Artificial Intelligence vs Humans

Cyberatonica — Vitaly Sokol

You may not have noticed, but many of the scandals and big problems of recent years involved AI. Throughout, AI systems were effectively tested on live populations across the globe, with real consequences. Autonomous cars killed pedestrians, a voice-recognition system designed to detect immigration fraud ended up cancelling thousands of visas, another system made incorrect cancer treatment recommendations, Google built a secret censored search engine for China, and Facebook suffered massive data breaches without people knowing.

Why did it all happen? Because AI is made by humans without a code of conduct. Right now it’s easy to sit in an ivory tower and write lines of computer code without being mindful of the full potential consequences of the work. So where does that leave the industry? To innovate or not to innovate, that is the question. The more we innovate, however, the more we have to keep upgrading our skills to stay ahead of the growing range of possible attacks. The malicious use of AI affects how we construct and manage our digital infrastructure as well as how we design and distribute AI systems, and it will likely require policy and other institutional responses. Stepping back to the bigger picture, intelligent, learning systems are shaped by the information they absorb in a limited context and reapply that learning without considering the wider perspective. Given that roughly 40% of people lack basic digital skills and are easy targets for cybercriminals, we need not just ethics guidelines but regulations and laws. We can’t assess the value of an innovation before it reaches the market, so we had better scrutinize the people who have their hands on it. (first part of this article here)

  • How about that? Cybercrime victims number over 600 million people per year; break that down and it comes to roughly 1.6 million victims per day, or about 20 per second (the rough arithmetic is sketched just after this list). The global cost of cybercrime is predicted to hit $8 trillion annually by 2022.
  • The world’s data is expected to grow from 33 zettabytes in 2018 to 175 zettabytes in 2025 (one zettabyte is a thousand billion gigabytes). With so much data, who will be responsible for damage caused by an AI-operated device or service?
  • When power and bias hide behind the facade of ‘neutral’ math, we call this math-washing: the use of numbers to represent a complex social reality in a way that makes the AI seem factual and precise when it is not, whether by accident or on purpose.
  • Do corporations anticipate, as some speculate, that the internet will split into three: the US internet, the China internet, and the EU internet? Things don’t seem quite so simple anymore. What will consumer protections around privacy and security look like as these internets diverge? What would a cold-war-style AI battle between three powerful blocs look like?
  • How can we prevent such strange events from occurring? All AI systems should probably incorporate elaborate rules, regulations, and accountability mechanisms to prevent unfortunate, unintended capabilities.
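As a quick sanity check on the headline numbers in the first bullet, here is the back-of-the-envelope arithmetic behind the per-day and per-second figures (a minimal Python sketch, taking the 600 million figure at face value):

```python
# Back-of-the-envelope check of the cybercrime figures quoted above.
victims_per_year = 600_000_000

per_day = victims_per_year / 365                     # ≈ 1.6 million per day
per_second = victims_per_year / (365 * 24 * 3600)    # ≈ 19, roughly the "20 per second" quoted

print(f"{per_day:,.0f} victims per day, about {per_second:.0f} per second")
```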
AI Battle — Illustration by 7Cube

Specifically, AI systems now allow the automation of surveillance far beyond the limits of human review and hand-coded analytics, and they can serve to further centralize this information in the hands of a small number of actors. China is the most active in this field. In the Xinjiang Autonomous Region it has built a police state like no other: surveillance cameras, spyware, Wi-Fi sniffers, and biometric data collection, sometimes by stealth. Machine learning tools integrate these streams of data to generate extensive lists of suspects for detention in re-education camps built by the government to discipline groups it deems hostile (estimates put the number of people detained at almost 1 million). Venezuela announced the adoption of a new smart-card ID known as the “Carnet de la Patria,” which, by integrating government databases linked to social programs, could enable the government to monitor citizens’ personal finances, medical history, and voting activity (China already has a similar program running). These examples show how AI systems increase social control and amplify the power of such data, raising urgent and important questions about how basic rights and liberties will be protected.

There are also risks emerging from unregulated facial recognition systems. Once identified, a face can be linked with other forms of personal records and identifiable data, such as a credit score, social graph, or criminal record. With the help of AI, researchers aim to automatically detect inner emotional states or even hidden intentions. The idea that AI systems might be able to tell us what a student, a customer, or a criminal suspect is really feeling, or what type of person they intrinsically are, is proving attractive to both corporations and governments. Such systems can easily be used to intensify bias and discrimination, and they raise troubling ethical questions about locating someone’s “real” character and emotions outside the individual, and about the abuses of power that can be justified on the basis of these faulty claims.
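To make the linking mechanism concrete, here is a minimal, hypothetical sketch of how a face, once reduced to an embedding vector by any recognition model, becomes a lookup key into unrelated personal records; the embeddings, identifiers, and records below are invented purely for illustration.

```python
# Hypothetical sketch: a face embedding used as a key into other personal records.
import numpy as np

# Invented database: person id -> (stored face embedding, linked records)
database = {
    "person_017": (np.array([0.11, 0.92, 0.37]), {"credit_score": 540, "criminal_record": False}),
    "person_042": (np.array([0.81, 0.10, 0.58]), {"credit_score": 720, "criminal_record": True}),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query_embedding, threshold=0.95):
    """Return the records linked to the closest stored face, if similar enough."""
    best_id, best_sim = None, -1.0
    for pid, (emb, _) in database.items():
        sim = cosine(query_embedding, emb)
        if sim > best_sim:
            best_id, best_sim = pid, sim
    return database[best_id][1] if best_sim >= threshold else None

# A camera frame reduced to an embedding close to person_042's stored vector:
print(identify(np.array([0.80, 0.12, 0.57])))   # -> that person's linked records
```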

People design algorithms: they choose which data to use and how to integrate it. But data is not automatically objective. Anyone who has worked with data knows that it is messy, often incomplete, sometimes fake, and full of complex human meanings. From this perspective, there are two possible failure modes: data that is accidentally wrong and data that is wrong on purpose. For example, an algorithm trained on historical data in which women are paid less than men will treat that gap as the norm and amplify the bias. Algorithms shouldn’t be seen as simply types of tools. We should all know how ‘what is good’ is decided upon, and we each have a responsibility to avoid this nonsense. Take another example: a ProPublica report found that an algorithm used in American criminal sentencing to predict an accused person’s likelihood of committing a future crime was biased against black people. Algorithms will always reflect the design choices of the humans who built them and the data that was collected, so it’s irresponsible to assume otherwise. Who collected the data, and what criteria did they use?
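To make the pay-gap example concrete, here is a minimal sketch (using invented, synthetic data and plain least-squares regression) of how a model trained on biased historical salaries simply learns the gap and carries it forward as if it were the norm:

```python
# Minimal sketch with synthetic data: a regression model reproducing a historical pay gap.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic "historical" salaries: identical experience profiles, but one group
# (gender=1) is paid about 20% less in the training data.
experience = rng.uniform(0, 20, n)
gender = rng.integers(0, 2, n)
salary = 30_000 + 2_000 * experience
salary = salary * np.where(gender == 1, 0.8, 1.0) + rng.normal(0, 1_000, n)

# Ordinary least squares: salary ~ intercept + experience + gender
X = np.column_stack([np.ones(n), experience, gender])
coef, *_ = np.linalg.lstsq(X, salary, rcond=None)

print(f"learned 'gender penalty': {coef[2]:,.0f} per year")
# The model learns a large negative coefficient for gender=1, so any salary it
# predicts or recommends carries the historical bias forward as the baseline.
```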

People make mistakes - Algorithms make mistakes.
People are biased - Algorithms are biased.
People are not neutral - Algorithms are not neutral.

From a security perspective, a number of these developments are worth noting. The ability to recognize a target’s face and navigate through space can be applied in autonomous weapon systems. Similarly, the ability to generate synthetic images, text, and audio could be used to impersonate others online or to sway public opinion by distributing content through social media channels. There is a growing consensus that these AI systems will perpetuate and amplify hate, and that computational methods are not inherently neutral and objective. This will become even more true once we have built a quantum computer that is reliable and runs at full speed. In a conventional (“classical”) computer, one bit of binary data can have one of just two values: one or zero. In a quantum computer, these switches, called quantum bits or qubits (pronounced “cue-bits”), have more options, because they are governed by the laws of quantum theory. As a result, quantum computers can represent many more possible states of binary ones and zeros. By the time you get to 300 qubits, as opposed to the billions of classical bits in the dense ranks of transistors in your laptop’s microprocessors, you have 2³⁰⁰ options. That’s more than the number of atoms in the known universe. In the wrong hands, this power poses such a significant threat to cybersecurity that we will have to completely rethink the way we secure commercial transactions and digital footprints. Imagine being able to know every fact about every market on the planet in minutes.
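For a rough check of that claim, here is a one-off calculation comparing 2³⁰⁰ with the commonly cited estimate of about 10⁸⁰ atoms in the observable universe:

```python
# Quick sanity check of the 2^300 claim, assuming the common ~10^80 estimate
# for the number of atoms in the observable universe.
n_states = 2 ** 300            # distinct basis states representable by 300 qubits
atoms = 10 ** 80               # rough estimate, observable universe

print(f"2^300 ≈ 10^{len(str(n_states)) - 1}")   # about 10^90
print(n_states > atoms)                          # True: far more states than atoms
```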

So the issue of where responsibility and transparency lie has to be the main concern for all of us. Not regulating all-pervasive and often decisive technologies by law would effectively amount to the end of democracy. Say what?! AI that makes decisions affecting individuals should give intelligible reasons, and when a machine engages a human in political discourse, it should be required by law to disclose that it is a machine. Some of these problems may not be as serious as they first appear, but it’s better to be premature and lay the groundwork for policies involving AI. The AGI race is going on right now. Achieving AGI is job number one for Google, IBM, and many smaller companies, as well as for DARPA, the NSA, and other governments and big corporations. Profit is the main motivation for this race. Imagine one likely goal: a virtual human brain at the price of a computer. Imagine banks of thousands of AGI-quality brains working for your company. Wouldn’t you want that technology? Good regulation would improve our perception of safety, and also our confidence that humans will remain in control. And at the end of the day, AI will still have to answer the old question from Ancient Greece: who will guard the guardians?

