How AI Is Accelerating Business Growth and Innovation

By Anthony de Freitas | Friday, 30 April 2021 | Feature, Business, Tech

For many of us, Spielberg’s 2001 film “A.I. Artificial Intelligence” was the introduction to the possibilities of intelligent machines. So far, however, the capabilities portrayed there have not been realized: nothing as advanced as David, the AI-driven robot child, has been developed. Yet AI applications are all around us. AI powers the personalized feeds we get on social media, our anti-spam email defenses, the online ads that appear to stalk us, digital voice assistants like Alexa, Siri, and Google Assistant, and our Netflix recommendations. Despite falling short of the most ambitious promises of its promoters, such as self-driving cars, AI is widely employed in fields that affect our everyday lives. Is this a boon or a bane? We’ll leave that question for you to address. Hopefully, answering it will become easier after reading our roundup of the salient features of this remarkable branch of computer science.


The March of Progress

Despite the many ominous connotations trumpeted in works of fiction, the adoption and growth of AI is simply another phase of the technological advance that has marked the development of human society. Yet, because we associate intelligence with living creatures, particularly our own species, the idea of machines that possess that faculty excites some trepidation. AI agents may turn out to be as unpredictable and perverse as any intelligent human.

No such worry is evident in Silicon Valley. Sundar Pichai, Google’s chief, speaking at the World Economic Forum in Davos, Switzerland, enthused about the technology: “AI is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire,” he said.

The AI Market Is Big and Getting Bigger

Google is a major participant in an AI market that is clipping along at a five-year compound annual growth rate (CAGR) of 17.5%. Globally, the industry is projected to swell to $554.3 billion by 2024. Other players of note are IBM, Intuit, Microsoft, OpenText, Palantir, SAS, and Slack.

The industry comprises three main segments: software, hardware, and services. Commanding 88% of the market, the software segment is by far the largest, and it is largely the preserve of two tech giants: IBM dominates three software categories, while Microsoft leads in four.

AI has accomplished quite a lot. One subfield in particular, machine learning, has achieved some spectacular results. In 1997, IBM’s Deep Blue computer astonished the world by defeating reigning world chess champion Garry Kasparov. Two decades later, an AI system created by DeepMind, a company owned by Google’s corporate parent Alphabet, beat Lee Sedol, one of the world’s strongest players of the ancient Chinese game Go. Such feats have raised the question of whether machines might one day rule the world.

Rise of the Machines

Fear of machines is nothing new. As the industrial revolution took off in Britain, so too did resistance to the increased use of automation. British weavers and textile workers, who took the name Luddites, went from factory to factory smashing stocking frames, afraid they would be thrown out of work.

Their alarm was to some extent justified, wrote 19th-century British economist David Ricardo: “...the opinion entertained by the labouring class, that the employment of machinery is frequently detrimental to their interests, is not founded on prejudice and error, but is conformable to the correct principles of political economy.”

Contemporary figures have voiced concerns about the intelligent machines of today. In an interview with the BBC, famed British physicist Stephen Hawking warned: “The development of full artificial intelligence could spell the end of the human race.” Elon Musk has expressed the same idea more vividly: “With artificial intelligence we are summoning the demon.”

But what exactly is AI, and why is the technology so fearsome?

What Is Artificial Intelligence?

The term artificial intelligence was coined in 1956 by computer scientist John McCarthy. It supplanted his previous designation, automata studies, a label so bland that no one seemed able to comprehend what it meant. McCarthy defined artificial intelligence as “getting a computer to do things which, when done by people, are said to involve intelligence.” But what exactly is intelligence?

It is now accepted that humans are endowed with general intelligence, which allows them to acquire knowledge, reason abstractly, adapt to novel situations, and benefit from instruction and experience. There is also recognition that individuals may possess specific intelligences (e.g., musical prodigies). Nevertheless, the general ability to learn easily and quickly, to hold extensive chains of reasoning in mind, and to originate novel ideas is today regarded as the hallmark of intelligence.

The most successful subfields of AI have, in fact, been those that, in some measure, involve learning (i.e., the acquisition and use of knowledge). For instance, so-called “deep learning” (DL) allows systems to learn and improve by working through large numbers of examples rather than by being explicitly programmed. DL techniques are already widely incorporated in browsers, language translation, the detection of credit card fraud, and a host of other applications. The CEO of chip company NVIDIA explains deep learning this way: “Instead of people writing software, we have data writing software.”
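To make the idea of “data writing software” concrete, here is a deliberately tiny Python sketch (the word counts, labels, and threshold rule are all invented for illustration). Instead of a programmer hard-coding a spam cutoff, the labeled examples choose it:

```python
# Toy illustration of "data writing software": rather than hand-coding a rule
# such as "flag any message containing 3 or more suspicious words", we let
# labeled examples pick the threshold. (Hypothetical data; real deep learning
# uses far richer models and vastly more examples.)

# Each example: (count of suspicious words in a message, is_spam label)
examples = [(0, False), (1, False), (2, False), (4, True), (5, True), (7, True)]

def accuracy(threshold):
    """Fraction of examples classified correctly by the rule: count >= threshold."""
    hits = sum((count >= threshold) == label for count, label in examples)
    return hits / len(examples)

# "Training": try candidate thresholds and keep the one the data prefers.
best_threshold = max(range(10), key=accuracy)
print(f"learned rule: flag messages with >= {best_threshold} suspicious words")
```

Real deep-learning systems replace this single threshold with millions of adjustable weights, but the principle is the same: the examples, not the programmer, settle the rule.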

In general, deploying machine learning (ML), of which DL is a subfield, faces two challenges: lack of data and the difficulty of developing generalized (inductive reasoning) algorithms. Even when data does exist, it may contain hidden assumptions that can trip the unwary. And “idiot savant” algorithms may perform well on specific tasks but fail when faced with the unexpected. Hence the delayed deployment of self-driving cars. A system that processes language, such as a chatbot, may also deceive a human into thinking it understands when, of course, it doesn’t. Being creatures of emotion, we easily succumb to the Tamagotchi effect.

Artificial Neural Networks

Deep learning really took off in 2012, after its success in the ImageNet Challenge. The challenge was to devise systems that could identify new images after “training” on hundreds of thousands of labeled images. ImageNet is an online database of images that have all been labeled, with dozens or hundreds of images associated with each label. Faced with such a test, humans are able to identify new images accurately about 95% of the time. In 2012, DL systems were close on our heels, attaining an accuracy of 85%. By 2015, DL accuracy had increased to 96%. DL systems use large amounts of computing power and vast quantities of training data to operate artificial neural networks (ANNs).

An artificial neural network (ANN) simulates the way a human brain handles information. In a biological brain, neurons are the cells that receive sensory input from within our body and without, activate responses in our muscles, and communicate what they are doing to other neurons.

An ANN—simulated entirely in software—is composed of three basic “layers”: an input layer of neurons through which data enters the system; several hidden layers that process the data; and an output layer that delivers the results. Each neuron incorporates a set of “weights” and an “activation function” that determines when and to what extent it fires. The system is trained by adjusting the weights so that, for a wide variety of inputs, it produces the desired output. Thereafter, unfamiliar input is handled on the basis of the system’s “experience.” Today’s deep-learning networks have 20 to 30 layers; Microsoft has built one with 152 layers.
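As a rough illustration of the layered structure just described, the following Python sketch (using NumPy, with random placeholder weights rather than trained ones) pushes an input through a hidden layer and an output layer:

```python
import numpy as np

# Minimal feedforward ANN sketch: an input layer, one hidden layer, and an
# output layer. The weights are random placeholders; training would adjust
# them until inputs map to the desired outputs.

rng = np.random.default_rng(0)

def relu(x):
    # A common activation function: passes positive signals, blocks negative ones.
    return np.maximum(0.0, x)

# Layer sizes: 4 input neurons -> 8 hidden neurons -> 2 output neurons.
w_hidden = rng.normal(size=(4, 8))   # weights from the input layer to the hidden layer
w_output = rng.normal(size=(8, 2))   # weights from the hidden layer to the output layer

def forward(inputs):
    """Propagate an input vector through the network and return the output layer's values."""
    hidden = relu(inputs @ w_hidden)  # each hidden neuron weighs its inputs, then activates
    return hidden @ w_output          # the output layer combines the hidden activations

print(forward(np.array([0.5, -1.2, 3.0, 0.1])))
```

Training, in this picture, consists of nudging w_hidden and w_output until the outputs match the desired answers across many examples.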

Deep Learning

Google, for example, uses deep learning to improve the quality of search results, understand voice commands, identify images, suggest answers to emails, improve language translation, and help autonomous vehicles respond better to their environment.

To some extent, the fidelity of a DL network depends on the quantity and variety of the labeled examples it trains on. Effective DL networks have become possible because the growth of the internet provides huge amounts of training data. But analyzing large quantities of data requires a lot of processing power. Fortunately, around 2009, the AI community realized that the graphics processing units (GPUs) built for video games were well suited to running deep-learning algorithms. GPUs have sped up deep-learning systems almost a hundredfold.

There are several kinds of deep learning. The approach described above, and the one most widely employed at present, is “supervised learning,” where the system learns from labeled examples. Another approach is “unsupervised learning,” in which the system reviews the examples with no specific instructions beyond looking for patterns and anomalies. Such systems are good at identifying fraudulent activity or cyber intrusions. An unsupervised learning system developed by Google was able to identify images of cats even though it had never been taught what a cat looked like.
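A minimal sketch of the unsupervised idea, using scikit-learn’s IsolationForest on made-up transaction data, looks like this; production fraud systems are far more elaborate, but the principle of flagging points that don’t fit the overall pattern is the same:

```python
import numpy as np
from sklearn.ensemble import IsolationForest  # assumes scikit-learn is installed

# Unsupervised sketch: the model is never told which transactions are fraudulent;
# it only looks for points that don't fit the overall pattern. All data is synthetic.
rng = np.random.default_rng(1)
normal_txns = rng.normal(loc=50, scale=10, size=(500, 2))   # typical amount/hour pairs
odd_txns = np.array([[900.0, 3.0], [700.0, 2.0]])           # a couple of outliers
transactions = np.vstack([normal_txns, odd_txns])

model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
labels = model.predict(transactions)   # -1 marks points flagged as anomalous
print("flagged as anomalies:", transactions[labels == -1])
```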

A third type of deep learning is reinforcement learning, which lies somewhere between supervised and unsupervised learning. This approach places the system in an interactive, game-like environment where it attempts to improve its performance by trial and error. The system is given the rules of the “game” but no instruction in how to play. The courses of action it chooses in pursuit of the game’s objective result in either a reward (moving closer to the objective) or a punishment (losing the game). Reinforcement learning systems are thought to be well suited to autonomous vehicles. Their use in managing the cooling of Google’s data centers has cut the energy used for cooling by 40%.
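The trial-and-error loop can be illustrated with a toy Q-learning agent, a classic reinforcement learning technique rather than any particular production system. The agent learns to walk to the rewarding end of a six-cell track:

```python
import random

# Tabular Q-learning sketch: the agent starts at cell 0 of a 6-cell track and is
# rewarded only for reaching cell 5. It knows the "rules" (available moves and the
# rewards they earn) but is never told how to play; it improves by trial and error.

n_states, actions = 6, [-1, +1]          # move left or move right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != n_states - 1:
        if random.random() < epsilon:
            action = random.choice(actions)                      # explore
        else:
            action = max(actions, key=lambda a: q[(state, a)])   # exploit best-known move
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else -0.01    # small penalty per step
        best_next = max(q[(next_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned policy: the preferred move from each non-terminal cell (should be +1).
print({s: max(actions, key=lambda a: q[(s, a)]) for s in range(n_states - 1)})
```

After enough episodes the agent prefers moving right from every cell, even though it was never told that this is the winning strategy.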

DL techniques have led to success in many narrow applications. The quest now is to develop artificial general intelligence (AGI): “jack of all trades” networks capable of handling a wide range of tasks rather than just specific ones.

Benefits of Using AI in Business

There are three basic ways AI can improve business capabilities: by intelligently automating internal processes, through data analysis that yields insights, and by making it easier for customers to interact with an organization and for employees to work with one another.

Robotic Process Automation

Routine, repetitive tasks are the easiest to automate and so, for centuries, have been the first to be delegated to machines. Still, the robotic process automation (RPA) technologies of today are a great deal more advanced than earlier types of automation. RPA uses robots—really just software code—that behave like humans in the way they consume and produce information. For instance, RPA robots are used to transfer data from email and call center systems into systems of record, as well as update customer files with address changes and transaction data.
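For a flavor of what such a “robot” amounts to in code, here is a hedged sketch (the file names and field names are hypothetical) that moves address changes from a call-center export into a stand-in customer database. Commercial RPA platforms drive real application interfaces and APIs, but the pattern is the same:

```python
import csv

# Hedged RPA-style sketch: a "robot" that is just code, copying data from one
# system (a CSV export of call-center address-change requests, a made-up file)
# into a system of record (another CSV standing in for a customer database).

def load_records(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def apply_address_changes(customers, requests):
    """Update each customer's address when a matching change request exists."""
    changes = {r["customer_id"]: r["new_address"] for r in requests}
    for customer in customers:
        if customer["customer_id"] in changes:
            customer["address"] = changes[customer["customer_id"]]
    return customers

customers = load_records("customers.csv")          # hypothetical system-of-record export
requests = load_records("address_requests.csv")    # hypothetical call-center export
updated = apply_address_changes(customers, requests)

with open("customers_updated.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=updated[0].keys())
    writer.writeheader()
    writer.writerows(updated)
```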

Data Analysis

AI is also ideal for scouring vast amounts of data to discover patterns that may lead to useful insights. Data analytics applications are now used to suggest the products a customer is most likely to want, identify card fraud in real time, detect insurance claims fraud, personalize marketing messages, and much more. AI-powered data analysis differs from traditional data analysis in that the AI agents learn from experience and hone their capabilities over time.

Cognitive Engagement

AI agents can be created that simulate a human’s conversational ability. Such agents are quite familiar. We know them as chatbots that engage customers using natural language processing. Chatbots can offer service at any hour and to any number of customers. No waiting in line for help from a representative. They are being trained to address an increasing number of issues, ranging from password requests to technical support questions—all in the customer’s natural language.
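The routing step inside such an agent can be sketched crudely with keyword matching; real chatbots rely on trained natural language models, and the intents and replies below are invented for illustration:

```python
# Toy intent router for a customer-service chatbot: match a message to an
# "intent" by keywords and return a canned reply. Production systems use
# trained NLP models rather than keyword lists.

INTENTS = {
    "password_reset": (["password", "reset", "locked"], "I can help you reset your password."),
    "billing": (["invoice", "charge", "refund"], "Let me pull up your billing details."),
    "tech_support": (["error", "crash", "bug"], "Sorry about that. Let's troubleshoot."),
}

def reply(message: str) -> str:
    words = message.lower().split()
    for keywords, answer in INTENTS.values():
        if any(keyword in words for keyword in keywords):
            return answer
    return "Let me connect you with a human representative."

print(reply("I am locked out and need a password reset"))
```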

AI Applications in Business

AI has found its way into a variety of business enterprises. For one, it is transforming the way we shop. Since May 2020, consumers have had access to The Yes mobile app to streamline and personalize their online shopping experiences. The app uses the answers to a series of questions to learn the consumer’s preferences and to serve up only the fashion items she is most likely to desire. The user answers questions such as “Would you wear a strapless dress?” and “Do you like mini skirts?” with a simple “yes” or “no.” At present, The Yes app is available only for women’s fashion and devices that run iOS. The app is also of benefit to smaller brands. They now have an opportunity to increase their exposure by being listed on The Yes platform.

Google’s foray into AI marks a strategic shift for the search engine, whose operation has always depended on algorithms with defined rules that could only be improved by direct programming. Now, the search function incorporates deep learning neural networks that learn on their own. Other services that use AI include Google’s advertising platforms, Google Ads and DoubleClick, both of which incorporate Smart Bidding, a machine learning-powered automated bidding system.

Google Maps’ Driving Mode uses AI to allow a user to send and hear messages, make calls, and control media by using voice commands while operating a vehicle. Gmail Smart Reply suggests replies that match a user’s style and the email they’ve received. Then there’s Nest Cam, a security camera that will detect people and large moving objects and livestream the video to a mobile app. A homeowner can be miles away and still monitor their premises. And Google Translate now uses an artificial neural network, christened Google Neural Machine Translation (GNMT), to increase the quality and accuracy of its translations.

Twitter uses AI to select the tweets that appear in a user’s feed. Previously, users would see tweets ranked by relevance to their search terms. This was probably the same for all users who entered a particular search term. With the addition of AI, feeds have become more personalized. Twitter also acquired machine learning company Magic Pony Technology (MPT) in 2016. MPT builds neural networks that enhance images.

SoundHound specializes in AI focused on recognizing sounds. The company uses its sound recognition technology to enable machines to respond to voice commands and to understand speech.

Wade and Wendy is a recruitment platform with a difference: Wade and Wendy are actually two chatbots. Wendy does the actual recruiting; Wade provides career advice. Wendy screens job candidates in Q&A sessions and highlights key information that indicates how good a match they might be for the position. Each interview Wendy conducts improves its screening ability.

TRUiC, a media and tech company specializing in tools and guides for small businesses, employs AI tools. Using the company’s business name generator, entrepreneurs can generate business names that already have an available domain name, which saves a lot of time and makes the tough job of finding the optimal name much easier. The generator kills two birds with one stone: not only does it produce a unique name, it also checks for .com domain availability, allowing businesses to set up a reputable website domain for their brand.

Creator and Destroyer of Jobs

Since the advent of the Industrial Revolution, the alarm has been sounded that machines will throw humans on the breadline. A New York Times headline of February 1928 warned: “March of the Machine Makes Idle Hands; Farm Employment Less with Increased Output.” At a February 1962 news conference, President Kennedy declared that “the major domestic challenge, really, of the ’60s, [is] to maintain full employment at a time when automation, of course, is replacing men.” And in March 1964, a group of influential figures, including Nobel Prize winner Linus Pauling and a future recipient of the Prize, Gunnar Myrdal, addressed an open memo to President Johnson raising the prospect that computers would soon create mass unemployment. In the 1980s, personal computers were lambasted for the same reasons.

We’ve become accustomed to machines taking over manual jobs. Now they’ve become smart enough to start encroaching on white-collar ones, like bookkeeping and radiology, as well. Radiology is a good example of a job at risk: DL excels at image recognition and, in some studies, has identified malignant cells in medical images as accurately as radiologists, or more so. Vulnerability depends on the extent to which a job consists of routine tasks, not on its place in the blue-collar/white-collar hierarchy. So, for example, the higher-ranking radiologist is more likely to be replaced than her personal assistant.

Proportion of Workforce Likely to Lose Jobs to Automation: 47%

A fascinating study by two University of Oxford dons—Carl Benedikt Frey and Michael A. Osborne—on the future of work examined the likelihood that some 702 occupations would be computerized. It found that 47% of workers in America were at risk of losing their jobs owing to automation. The jobs most liable to be automated are the ones that require middling skills. It’s a trend that’s been observed for decades. The phenomenon, known as job polarization, occurs when employment becomes increasingly concentrated among the highest- and lowest-skilled jobs due to the disappearance of jobs made up mostly of routine tasks.

Categorizing occupations as routine or non-routine and as manual or cognitive yields four groups. Non-routine cognitive jobs are generally high-skilled and include management, technical, and healthcare workers, such as doctors and engineers. Non-routine manual occupations are generally low-skilled and include service and protection workers, such as waiters and security guards. Routine cognitive jobs include sales and office workers, such as sales agents and office assistants. Routine manual occupations include construction workers, mechanics, and machine assemblers.

Employment data show that the proportion of the workforce in routine jobs continues to fall. And while non-routine manual jobs have held roughly steady, non-routine cognitive occupations are steadily increasing.

Yet, advances in technology, apart from other benefits they provide, generally create more jobs than they destroy. In 2016, for example, the Census Bureau reported that information technology (IT) jobs had increased tenfold, from 450,000 in 1970 to 4.6 million.

Automating tasks improves productivity, leading to greater profits as well as lower production costs. The lower costs, in turn, mean the product can be offered at a reduced price, which increases demand. In the end, the rise in demand causes industry employment to rise (i.e., jobs are created). For instance, the introduction of automated teller machines (ATMs) might have been expected to reduce the number of bank employees, but it led instead to an increase.

ATMs reduced the cost of operating a branch, providing an incentive for banks to open more branches. Banks hired more workers, but instead of performing the routine business of dispensing cash, which doesn’t contribute to revenue, they marketed revenue-earning bank services, such as the sale of investment products. Rather than destroying jobs, automation forces their reformulation in a way that upgrades skill sets from manual to cognitive.
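The arithmetic behind that argument can be made explicit with an entirely hypothetical example (every number below is invented for illustration):

```python
# Illustrative arithmetic only: if automation cuts the cost of running a branch,
# the same budget supports more branches, and total employment can rise even as
# staff per branch falls.

branches_before, staff_per_branch_before = 100, 20
cost_per_branch_before = 1_000_000                 # hypothetical annual cost per branch

automation_saving = 0.30                           # assume ATMs cut branch costs by 30%
cost_per_branch_after = cost_per_branch_before * (1 - automation_saving)

budget = branches_before * cost_per_branch_before  # total spend held constant
branches_after = int(budget // cost_per_branch_after)   # ~142 branches
staff_per_branch_after = 15                        # fewer tellers, more sales staff (assumed)

print("employment before:", branches_before * staff_per_branch_before)   # 2000
print("employment after: ", branches_after * staff_per_branch_after)     # 2130
```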

Moreover, the advances in technology that make automation possible can create entirely new occupations. It would have been difficult to imagine at the turn of the century that today one can find work as a drone pilot, data analyst, YouTube vlogger, or SEO specialist. Fears that machines will one day take all the jobs, dooming large swaths of the populace to poverty-stricken unemployment, are unfounded, say economists. Such a perspective rests on the “lump of labor” fallacy (i.e., the idea that there is a finite amount of work, and as machines do more, humans will have less to do).

Murder by Machine or the Modern Prometheus?

What about the presentiment, floated in “Terminator 3: Rise of the Machines,” that our creations will turn on us? That fear is also an old one, appearing in literature most notably in Mary Shelley’s 1818 novel, “Frankenstein; or, The Modern Prometheus.” In that gripping tale, a humanoid made by Victor Frankenstein terrorizes its creator and goes on to murder Frankenstein’s wife and friends. The Creature’s murderous rampage is motivated by a sense of injustice, something an AI application will not have. Even the most intelligent systems lack attributes that humans possess, in particular consciousness and sentience. As such, they are not susceptible to the dark emotions that would precede going rogue. A machine cannot change its mind from benign to malignant purposes for the simple reason that it doesn’t have a mind to change.

As with most inventions, there’s no telling the impact AI will eventually have on society. It has already affected the provision of education profoundly by giving impetus to the development of the massive open online course (MOOC). Two MOOC platforms — Udacity and Coursera — can trace their origins to AI. In July 2011, computer scientists Sebastian Thrun and Peter Norvig made their “Introduction to Artificial Intelligence” course freely available online. Over 160,000 people in 190 countries signed up. The overwhelming response prompted Thrun, together with others, to found Udacity. The platform now has 11.5 million users and over $100 million in revenues.

The development of another MOOC platform, Coursera, followed a similar path. Also in 2011, Stanford professors Andrew Ng and Daphne Koller began offering their machine learning courses online. The following year, encouraged by the response, they launched Coursera, which now has over 77 million users. If it continues to bestow such benefits, AI may turn out to be the cornucopia of the digital technology age.


About the Author


Anthony is the owner of Kip Art Gifts, an ecommerce store that specializes in art-inspired jewelry, fashion accessories, and other objects. Previously, he worked as an accountant and financial analyst. He enjoys writing on small business, financial intermediation, and economics. Anthony was educated at Wilson’s School and the London School of Economics and Political Science.
