“Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don’t think Artificial Intelligence will transform in the next several years,” says Andrew Ng, Chief Scientist at Baidu.


Whether or not you fully believe this assertion, there is no question that Artificial Intelligence (AI) will impact the human race like few things before it. In fact, AI is no longer something ‘out there’ in the future – it is already among us. Organizations and people now routinely use a branch of AI called Machine Learning to automate tasks, identify patterns in huge volumes of data, and even predict outcomes with unprecedented accuracy.

Artificial Intelligence is the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. AI makes it possible for machines to learn from experience, adjust to new inputs, and perform human-like tasks. Most AI examples that you hear about today – from chess-playing computers to self-driving cars – rely heavily on deep learning and natural language processing. Using these technologies, computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing patterns in that data.

Why is artificial intelligence important?

AI automates repetitive learning and discovery through data. But AI is different from hardware-driven, robotic automation. Instead of automating manual tasks, AI performs frequent, high-volume, computerized tasks reliably and without fatigue.

AI adds intelligence to existing products. Automation, conversational platforms, bots and smart machines can be combined with large amounts of data to improve many technologies at home and in the workplace, from security intelligence to investment analysis.

AI adapts through progressive learning algorithms to let the data do the programming. AI finds structure and regularities in data so that the algorithm acquires a skill, becoming a classifier or a predictor. So, just as the algorithm can teach itself how to play chess, it can teach itself what product to recommend next online. And the models adapt when given new data.
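To make this idea concrete, here is a minimal sketch (not taken from the article) of a classifier that "lets the data do the programming," using scikit-learn's SGDClassifier; the feature values, labels, and the shopping scenario are invented toy assumptions, and partial_fit simply shows how the same model adapts when given new data.

```python
# A minimal sketch of "letting the data do the programming": a classifier that
# learns from labeled examples and keeps adapting as new observations arrive.
# The features (e.g. page visits, time on site) and labels are invented toy data.
import numpy as np
from sklearn.linear_model import SGDClassifier

X_initial = np.array([[1.0, 0.2], [3.5, 0.9], [0.5, 0.1], [4.0, 1.2]])
y_initial = np.array([0, 1, 0, 1])  # 1 = the shopper bought the recommended product

model = SGDClassifier(random_state=0)
model.partial_fit(X_initial, y_initial, classes=[0, 1])  # first training pass

# Later, new observations arrive; the same model updates rather than being rewritten.
X_new = np.array([[2.8, 0.7], [0.3, 0.05]])
y_new = np.array([1, 0])
model.partial_fit(X_new, y_new)

print(model.predict([[3.2, 0.8]]))  # predicts whether this shopper is likely to buy
```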

AI analyzes more and deeper data using neural networks that have many hidden layers. Building a fraud detection system with five hidden layers was almost impossible a few years ago. All that has changed with incredible computer power and big data. The more data you can feed the models, the more accurate they become.
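As a rough illustration of the "five hidden layers" point (again, not from the article), the sketch below builds a small fraud-detection-style network with scikit-learn's MLPClassifier; the transaction features and the labeling rule are synthetic assumptions made up purely for the example.

```python
# A minimal sketch of a fraud-detection-style network with five hidden layers,
# trained on synthetic data. Feature meanings and the "fraud" rule are invented.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                    # e.g. amount, hour, distance, ...
y = (X[:, 0] + 0.5 * X[:, 3] > 1.5).astype(int)   # synthetic "fraudulent" label

net = MLPClassifier(hidden_layer_sizes=(64, 64, 32, 32, 16),  # five hidden layers
                    max_iter=500, random_state=0)
net.fit(X, y)

print(net.predict_proba(X[:3]))  # estimated fraud probability for three transactions
```

The wider point from the paragraph above still holds for this sketch: the more (and better) data fed to such a model, the more accurate its predictions tend to become.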

AI achieves incredible accuracy through deep neural networks. For example, your interactions with Alexa, Google Search and Google Photos are all based on deep learning. In the medical field, AI techniques such as deep learning, image classification and object recognition can now be used to find cancer on MRIs with the same accuracy as highly trained radiologists.

What are some AI technologies being used today?

  • Machine learning uses methods from neural networks, statistics, operations research and physics to find hidden insights in data without explicitly being programmed for where to look or what to conclude.
  • A neural network is a type of machine learning made up of interconnected units (like neurons) that process information by responding to external inputs, relaying information between each unit. The process requires multiple passes at the data to find connections and derive meaning from undefined data.
  • Deep learning uses huge neural networks with many layers of processing units, taking advantage of advances in computing power and improved training techniques to learn complex patterns in large amounts of data. Common applications include image and speech recognition.
  • Cognitive computing is a subfield of AI that strives for a natural, human-like interaction with machines. The ultimate goal is for a machine to simulate human processes through the ability to interpret images and speech – and then speak coherently in response.
  • Computer vision relies on pattern recognition and deep learning to recognize what’s in a picture or video. When machines can process, analyze and understand images, they can capture images or videos in real time and interpret their surroundings.
  • Natural language processing (NLP) is the ability of computers to analyze, understand and generate human language, including speech (a toy text-classification sketch follows this list). The next stage of NLP is natural language interaction, which allows humans to communicate with computers using normal, everyday language to perform tasks.
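As a toy illustration of the machine learning and NLP items above (not part of the original list), the sketch below turns raw sentences into numeric features and trains a tiny classifier on them; the example sentences, labels, and the sentiment framing are invented assumptions.

```python
# A toy NLP sketch: turn raw text into numeric features (TF-IDF) and let a
# classifier learn to label it. The sentences and labels are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["the movie was wonderful", "terrible plot and bad acting",
         "I loved every minute", "boring and far too long"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

nlp_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
nlp_model.fit(texts, labels)

print(nlp_model.predict(["what a wonderful film"]))  # expect the positive label
```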

Five of the best applications of Artificial Intelligence in use today

  • Siri is a pseudo-intelligent digital personal assistant that helps us find information, gives us directions, adds events to our calendars, and sends messages. It uses machine-learning technology to get smarter and better able to predict and understand our natural-language questions and requests.
  • Alexa's usefulness and ability to decipher speech from anywhere in the room have made it a revolutionary product that can help us scour the web for information, shop, schedule appointments and set alarms, while also powering our smart homes and serving as a conduit for those who may have limited mobility.
  • Cogito fuses machine learning and behavioral science to improve customer interactions for phone professionals, and is applied to millions of voice calls every day.
  • Amazon.com's transactional AI has become adept at predicting what we are interested in purchasing based on our online behavior. On the horizon is the possibility of shipping products to us before we even know we need them.
  • Netflix analyzes billions of records to suggest films you might like based on your previous reactions and choices (a toy recommendation sketch follows this list). The drawback is that lesser-known titles can go unnoticed while big-name movies gain ever more visibility on the platform.
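In the spirit of the Netflix example above, here is a minimal recommendation sketch (not taken from the article, and far simpler than any production system): it suggests an unseen title whose ratings pattern resembles the viewer's favourite. The titles, the ratings matrix, and the similarity rule are all invented assumptions.

```python
# A toy recommendation sketch: suggest an unseen title whose ratings pattern
# resembles the viewer's favourite title. Titles and ratings are invented data.
import numpy as np

titles = ["Action Flick", "Space Saga", "Romantic Comedy", "Indie Drama"]
ratings = np.array([      # rows = viewers, columns = titles, 0 = not yet watched
    [5, 0, 1, 0],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend_for(viewer):
    favourite = ratings[:, ratings[viewer].argmax()]   # column of the viewer's top-rated title
    unseen = np.where(ratings[viewer] == 0)[0]         # titles the viewer has not watched
    best = max(unseen, key=lambda j: cosine(ratings[:, j], favourite))
    return titles[best]

print(recommend_for(0))  # viewer 0 loved "Action Flick", so expect "Space Saga"
```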

Artificial Intelligence is an incredible career opportunity

The largest technology companies are placing huge bets on artificial intelligence, banking on applications ranging from face-scanning smartphones and conversational coffee-table gadgets to computerized health care and autonomous vehicles. As they chase this future, they are paying out salaries that are astounding even in an industry that has never been shy about lavishing a fortune on its top talent.

Typical AI specialists, including both PhDs fresh out of school and people with less education and just a few years of experience, can be paid from $300,000 to $500,000 a year or more in salary and company stock, according to those who work for major tech companies or have entertained job offers from them.

Thousands of openings in artificial intelligence and machine learning posted on job boards are going unfilled. In fact, though AI is one of the fastest-growing areas for high-tech professionals, according to a recent Kiplinger report, there are too few qualified engineers. “Supply is far lower than demand,” says Boris Babenko, a machine vision engineer at Orbital Insight, a company in Palo Alto, Calif., that uses AI to make sense of data gathered from satellite images. “That’s true of all software engineering, but AI is a niche on top of that.” The need for AI specialists exists in just about every field as companies seek to give computers the ability to think, learn, and adapt.

“If you look hard enough, any industry you can think of has a need for AI and machine learning,” says Geoff Gordon, acting head of the Machine Learning Department at Carnegie Mellon University.

What are the skills needed to become an AI professional?

According to Gordon, some workers start in software engineering or a data-heavy field such as physics. “Others might come from a field like biology,” he says. “Machine learning becomes an important part of what they do, and they end up loving it.” He says many of his PhD students have returned to school to study AI after a few years in industry. A background in software engineering, experts agree, is a must-have.

“We assume that when people first come in they have not only formal thinking ability but also the know-how to code and work with computers,” Gordon says. The exact programming language doesn’t matter; most students know several. “We love seeing candidates who have had some open-source projects,” Babenko says, “so we can look at the code they’ve written.”

Beyond technical skills, AI requires an innate sense of curiosity and a drive for problem solving. “We’re trying to train people who can take on the impossible problems and solve them,” Gordon says. “Someone once described our students as elite machine-learning ninjas who would get dropped in by black helicopters to solve all your problems.” A combination of analytical ability and creativity also matters, according to Matthew Michelson, Chief Scientist at InferLink, an AI firm in El Segundo, Calif.

“This is a difficult combination to find, but you need to be analytical to understand the data and to craft algorithms,” Michelson says. Creativity is important, he adds, because “the problems are often new and require new solutions.” He looks at candidates’ hobbies—he’s partial to those who developed games—when considering how they might handle problems.

As for education, jobs exist for those with a Master's degree, and there are plenty of lower-level positions as well. Employers hiring in AI value PhD candidates for their depth of education and the work they produce during their doctoral program. Attending conferences is a good way to keep your AI knowledge up to date—vital with the field evolving so rapidly—and to find job leads. Just about every industry today needs employees with AI skills.

“My advice to those interested in working in AI is to network, attend events, and follow industry news closely—become part of the industry conversation,” says Jana Eggers, CEO of Nara Logics, a synaptic intelligence company in Cambridge, Mass., that combines neuroscience and computer science. “It is the best way for you to assess your fit with a company, as well as to learn of professional opportunities.”

Babenko praises competitions such as those run by Kaggle, which styles itself as “the world’s largest community of data scientists.” The competitions can be great for networking, he points out.

Several of our Jamati professionals are already engaged in cutting-edge AI endeavors.

Nabeel Gilani, Graduate Student at MIT Media Lab

“Our research group explores how data science and machine learning can be used to promote a healthier civil society. For example, we've conducted analyses of public discourse on Twitter during last year's election cycle, which revealed ideological fragmentation and media 'cocooning' in how social media users connect to and share information with others. Unfortunately, this type of fragmentation is characteristic of many digital media platforms and is a contributing factor to polarization and hostility between groups: if we only engage with perspectives we agree with, we remain in our own bubbles and disengaged from the 'other.' This is also one of the reasons why things like fake news, misinformation, etc. spread so easily in certain pockets on social media. Inherent human psychology and tribalism are contributing factors, but so too are the algorithms that undergird these media platforms. The recommendation systems that suggest content in our Facebook feeds learn our preferences based on what we read/click/tend to share, and suggest more of the same, creating a vicious cycle of self-reinforcing information that makes it hard to see what else is out there without taking deliberate action.

Given this, some of our research explores how we can design and deploy new technologies that motivate people to move beyond their own information bubbles and engage with one another across divisions -- both on the internet and in real life."

Dr. Sahirzeeshan Ali, Research Scientist at the Center for Computational Imaging and Personalized Diagnostics (CCIPD) at Case Western Reserve University and the Seidman Cancer Center

Dr. Ali is conducting research on using AI to examine pathologies, such as cancer cells. Using data from thousands of patients, computer software is being designed to recognize patterns and compare diagnoses and treatments to outcomes, so that it can model and predict the best course of action. Radiologists currently make decisions based on their training, perception and experience, but the mathematical relationships that AI can detect appear to be better at diagnosing and suggesting optimal treatments. Studies in the UK, perhaps the leader in the field of AI for medicine, suggest one in five patients have been misdiagnosed, representing 12,000 scans annually. Unnecessary surgeries could have been prevented at a saving of $400 million to the National Health Service. As Dr. Ali says, "Every patient deserves to have his or her own equation, and AI is going to lead to greater personalized treatment plans, rather than generalized treatment based on an average patient with particular symptoms or cells."

Author

Rafiq Ajani is a Partner with McKinsey and Company, leading its North America Knowledge Center in the United States. He oversees teams in Costa Rica, Brazil, and Mexico as they deliver industry and functional expertise, advanced analytics, business research, proprietary tools and data, and knowledge-management services.