
What Is Artificial Intelligence (AI)?

What is AI?

We take a look at what artificial intelligence is and how it's being used. Artificial intelligence is already everywhere. The technology is widely used in ways that are obvious, such as self-driving cars, and in others that are far less conspicuous.

It keeps your mobile phone ticking over, interprets what you say to Alexa, helps doctors analyse medical images, controls robotics in factories and much more, quietly working behind the scenes to automate both simple and complicated tasks.

Though these are small examples of its capabilities, AI is predicted to have a huge impact on our lives, with some predicting disruption to our jobs and working lives and others seeing the benefit of churning through vast data sets. Keeping up with such changes requires understanding the various technologies behind AI, whether neural networks, deep learning or machine learning, and seeing how they're already being used.


With the technology already being used to pilot driverless cars, secure our data, automate factories and even shape our society with smart cities, here’s what you need to know about AI.

Examples of AI


AI covers quite a vast area and so it’s no surprise there are numerous examples of how it’s being used in both industry and everyday life.

For example, driverless cars employ AI to decide how a car should react in certain scenarios, such as an obstacle getting in the way. Should the car stop instantly, slow down or keep going? In practice, the answer depends on what the car's sensors detect in its path.

On a more controversial level, it's also being developed to make much harder decisions, where the loss of life can be minimised but not avoided: choosing whose life to sacrifice when it's certain someone must die in order for others to survive.

Less alarming real-world examples of AI include using the technology to test humans at a strategic level. For example, Google DeepMind's AlphaGo program showed that computers can outthink humans by beating some of the world's leading players of Go, the ancient Chinese board game.

To a lesser extent, many voice assistant systems, including Siri, Google Now and Cortana, use AI to make decisions based on your commands. For example, if you ask Siri for a restaurant recommendation, it will use the data it knows about you, such as where you are and what kind of food you like, to recommend a restaurant.
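At its simplest, this kind of recommendation boils down to filtering and ranking. The toy sketch below illustrates the idea; the restaurant data, the `Restaurant` type and the `recommend` function are entirely invented for illustration, and this is not how Siri is actually implemented.

```python
# A toy illustration (not Apple's actual implementation) of how an
# assistant might combine what it knows about you to pick a restaurant.
from dataclasses import dataclass

@dataclass
class Restaurant:
    name: str
    cuisine: str
    distance_km: float  # distance from the user's current location
    rating: float       # average review score out of 5

def recommend(restaurants, preferred_cuisine, max_distance_km=2.0):
    """Filter by the user's tastes and location, then rank by rating."""
    candidates = [r for r in restaurants
                  if r.cuisine == preferred_cuisine
                  and r.distance_km <= max_distance_km]
    return max(candidates, key=lambda r: r.rating, default=None)

places = [
    Restaurant("Luigi's", "italian", 0.8, 4.5),
    Restaurant("Sakura", "japanese", 1.2, 4.7),
    Restaurant("Trattoria Roma", "italian", 1.9, 4.8),
]
print(recommend(places, preferred_cuisine="italian"))
# -> Restaurant(name='Trattoria Roma', cuisine='italian', ...)
```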

Weak AI vs strong AI

We encounter the simplest form of AI in everyday consumer products. Known as 'weak AI', these machines are designed to be extremely good at performing a certain task. An example of this is Apple's Siri, which is designed to appear intelligent but in fact leans on the internet as its information source. The virtual assistant can hold a conversation, but only in a restrictive, predefined manner that can lead to inaccurate results.

On the other hand, in its most complex form, AI may theoretically have all the cognitive functions a human possesses, such as the ability to learn, predict, reason and perceive situational cues. This ‘strong AI’ can be perceived as the ultimate goal, but humans have yet to create anything deemed to be a fully independent AI.

Currently, the most compelling work sits between these two types of AI. The idea is to use human reasoning as a guide, without necessarily replicating it entirely. For example, IBM's Watson supercomputer can sift through thousands of data sets to draw evidence-based conclusions.

Applied AI vs general AI

Applied, or 'narrow', AI refers to machines built for specific tasks. This has been the most successful application of the technology in industry, allowing systems to make recommendations based on past behaviour and to ingest huge quantities of data to make ever more accurate predictions or suggestions. In this way, they can learn to perform medical diagnoses, recognise images and even trade stocks and shares. But however brilliant this narrow form of AI is in its own field, it isn't designed for day-to-day decision-making.
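To make the "learning from past behaviour" idea concrete, here's a minimal sketch using scikit-learn's logistic regression. The behavioural features and the user data are entirely invented for illustration; real systems use far richer signals and models.

```python
# A minimal sketch of 'narrow' AI: a model trained on past behaviour to
# predict one thing well. All data here is invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [items_viewed, minutes_on_site, previous_purchases]
past_behaviour = [
    [12, 30, 3],
    [2, 4, 0],
    [8, 22, 1],
    [1, 2, 0],
]
bought_again = [1, 0, 1, 0]  # what each user actually went on to do

model = LogisticRegression().fit(past_behaviour, bought_again)

# Predict how likely a new user is to make a purchase.
new_user = [[10, 25, 2]]
print(model.predict_proba(new_user)[0][1])  # probability of a purchase
```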

General AI remains the realm of science fiction. Instead of being trained on a specific type of data to perform one task very well, like applied (or narrow) AI, general AI would see a machine able to perform any task a human can. This would involve, for instance, it being able to learn a lesson from one type of situation and apply that lesson to an entirely new situation.

While general AI is generating a lot of excitement and research, it's still a long way off. Perhaps thankfully so, because this is the type of AI sci-fi writers have in mind when they talk about the singularity: the moment when a powerful AI rises up and subjugates humanity.

Machine learning & deep learning

While general AI may attract the most public attention, it's the field of applied AI that has had the greatest success and biggest effect on the industry. Given the focused nature of applied AI, systems have been developed that not only replicate human thought processes, but are also capable of learning from the data they process – known widely as 'machine learning'.

An example of this is image recognition, which is increasingly becoming an AI-led field. Rather than relying solely on pre-scripted routines that analyse the shapes, colours and objects in a picture, a system can scan millions of labelled images in order to teach itself how to identify an image correctly.
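As a small, hedged illustration of learning from labelled images, the sketch below trains a simple neural network on scikit-learn's bundled handwritten-digit images. It's a toy stand-in for the large-scale systems described here, not any particular product's pipeline.

```python
# Learning to recognise images from labelled examples, rather than from
# hand-written rules about shapes and colours.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 1,797 small 8x8 images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# The classifier infers its own features from raw pixel values alone.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"accuracy on unseen images: {clf.score(X_test, y_test):.2f}")
```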

However, as this process developed, it quickly became clear that this style of machine learning relied far too much on human prompting and produced wide margins of error when an image was blurry or ambiguous.

Deep learning has since taken off as the next generation of AI research. The term refers to the building of artificial neural networks that resemble the interconnected neurons of the human brain. Unlike the brain, where any neuron can talk to any other in its vicinity, these artificial networks are built in layers that create a set route for data to pass through.

The idea is that once one layer finishes analysing the data being processed, the result is passed on to the next layer, where it can be re-analysed using additional contextual information. For example, in an AI system designed to combat bank fraud, a first layer may analyse basic information such as the value of a recent transaction, while a second layer may then add location data to inform the analysis.
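To make the layering concrete, here's a deliberately simplified sketch of that fraud example. The weights are hand-picked for illustration (a real network would learn them from data), and the function names are our own; no actual fraud system works on two numbers alone.

```python
# A toy sketch of the layered idea described above: the first stage
# scores a transaction on its value alone, and the second stage
# re-scores it with location context added in.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def layer_one(amount):
    """Score suspicion from the transaction value alone."""
    return sigmoid(0.004 * amount - 2.0)  # larger amounts look riskier

def layer_two(value_score, km_from_home):
    """Refine the score using where the transaction happened."""
    return sigmoid(3.0 * value_score + 0.002 * km_from_home - 2.5)

amount, km_from_home = 900.0, 5000.0   # e.g. a large purchase abroad
score = layer_two(layer_one(amount), km_from_home)
print(f"fraud score: {score:.2f}")     # closer to 1.0 = more suspicious
```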

In the case of Google's AlphaGo, the system that defeated a champion Go player in 2016, the deep learning neural network comprises hundreds of layers, each providing additional contextual information. And while machine learning is a form of AI, the two terms aren't interchangeable.

Are there risks?

There’s no getting away from the fact that if robots are capable of learning and automating complicated tasks, they’re going to be far better at performing certain roles than humans. And let’s face it, businesses are always looking to cut costs.

Lawyers, teachers and even journalists like us are thought by some to be under threat of automation. Yet there's a compelling argument that the technology will enhance, rather than replace, these jobs.

An example of this can be found in the work of the Associated Press, which recently developed AI-driven reporting. Rather than replacing a reporter, the system has allowed the AP to cover Minor League baseball for the first time in its history.

A much greater risk that’s often ignored in the media is the threat of AI being used against us, as it’s foolish to assume that only the good guys will be developing the technology. It’s highly likely that we’ll see the emergence of smart malware before long, threats that are able to adapt on the fly to attempts to halt their spread.

Another WannaCry or NotPetya campaign would almost certainly be far worse with AI behind it, able to spread further and cause disruption for longer. Even threats like email phishing scams, which today are fairly unsophisticated, are likely to become far more nuanced, particularly if AI and Internet of Things devices are used together to target vast numbers of victims.

Credit: ITPRO
