What is Artificial Intelligence & Risks of Artificial Intelligence

What is Artificial Intelligence:

Artificial intelligence (AI) is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. AI is an interdisciplinary science with many approaches, but advances in machine learning and deep learning are creating a paradigm shift in virtually every sector of the tech industry.

How Artificial Intelligence (AI) Works:

AI is the branch of computer science that strives to answer Turing's question, "Can machines think?", in the affirmative. It is an endeavor to replicate or simulate human intelligence in machines.

The expansive goal of artificial intelligence has given rise to many questions and debates, so much so that no single definition of the field is universally accepted.

The principal limitation of defining AI as simply "building machines that are intelligent" is that it doesn't actually explain what artificial intelligence is. What makes a machine intelligent?

AI is a computer system capable of accomplishing tasks that generally require human intelligence. Many of these systems are powered by machine learning, some are powered by deep learning, and some are powered by very mundane things like hand-written rules.
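To make that distinction concrete, here is a minimal sketch in Python contrasting a hand-written rule with a learned model on the same task (spam detection). The toy dataset, the "free" keyword rule, and the use of scikit-learn are illustrative assumptions, not a description of any particular product.

```python
# Illustrative sketch: a rule-based decision vs. a machine-learned decision.
# The toy dataset and the "free" keyword rule are assumptions for demonstration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now",       # spam
    "meeting moved to 10 am",     # not spam
    "claim your free gift card",  # spam
    "project status update",      # not spam
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam


def rule_based_is_spam(text: str) -> bool:
    """Rule-based 'AI': a fixed condition written by hand."""
    return "free" in text.lower()


# Machine-learning 'AI': the decision boundary is learned from labelled examples.
vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(messages), labels)

new_message = "free vacation, click here"
print(rule_based_is_spam(new_message))                     # True
print(model.predict(vectorizer.transform([new_message])))  # e.g. [1]
```

In both cases the program performs a task that would otherwise require human judgement; the difference is only in where the decision logic comes from, a fixed rule versus parameters learned from data.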

Narrow Artificial Intelligence:

Sometimes referred to as "Weak AI," this kind of AI operates within a limited context and is a simulation of human intelligence. Narrow AI is often focused on performing a single task extremely well. While these machines may seem intelligent, they operate under far more constraints and limitations than even the most basic human intelligence.

Artificial General Intelligence (AGI):

AGI, sometimes referred to as "Strong AI," is the kind of artificial intelligence we see in movies, like the robots from Westworld or Data from Star Trek: The Next Generation. AGI is a machine with general intelligence, and, much like a human being, it can apply that intelligence to solve any problem.

Examples of Artificial Intelligence (AI):

  • Smart assistants (Siri, Alexa & Beeb)
  • Manufacturing and drone robots
  • Social media monitoring tools for critical content or fake news
  • Song or TV show suggestions from Spotify, Amazon Prime & Netflix
  • Spam filters on email (Gmail, Yahoo Mail, and other email services)
  • Disease mapping and prediction tools
  • Optimized, personalized healthcare treatment recommendations
  • Conversational bots for marketing and customer service
  • Robo-advisors for stock trading

Risks of Artificial Intelligence (AI):

Max Tegmark said, “Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before – as long as we manage to keep the technology beneficial.”

AI is growing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google's search algorithms to IBM's Watson to autonomous weapons.

Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g., only facial recognition, only internet searches, or only driving a car).

However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans at whatever its particular task is, like playing chess or solving equations, AGI would outperform humans at almost every cognitive task.

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think two scenarios are most likely.

Autonomous weapons are artificial intelligence systems that are programmed to kill. In the wrong hands, these weapons could easily cause mass casualties.

Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply "turn off," so humans could plausibly lose control of such a situation. This risk is present even with narrow AI, but it grows as levels of AI intelligence and autonomy increase.
