FUTURE OF WORK & TECHNOLOGY
The alternative workforce: It's now mainstream
The alternative workforce can be a long-term solution to tight talent markets—but only if treated strategically.
For many years, people viewed contract, freelance, and gig employment as “alternative work,” options considered supplementary to full-time jobs. Today, this segment of the workforce has gone mainstream, and it needs to be managed strategically. Given growing skills shortages and the low birth rate in many countries, leveraging and managing “alternative” workforces will become essential to business growth in the years ahead.
ORIGINALLY conceived of as contract work, “alternative” work today includes work performed by outsourced teams, contractors, freelancers, gig workers (paid for tasks), and the crowd (outsourced networks). The world is seeing rapid growth in the number of people working under such arrangements. By 2020, for instance, the number of self-employed workers in the United States is projected to triple to 42 million people.1 Freelancers are the fastest-growing labor group in the European Union, with their number doubling between 2000 and 2014; growth in freelancing has been faster than overall employment growth in the United Kingdom, France, and the Netherlands.2 And many people are alternative workers part-time: Deloitte’s latest millennial study found that 64 percent of full-time workers want to do “side hustles” to make extra money.3
For organizations that want to grow and access critical skills, managing alternative forms of employment has become essential. Many countries are seeing declining birth rates,4 reducing the size of the labor pool. Forty-five percent of surveyed employers worldwide say they are having trouble filling open positions, the largest such percentage since 2006. Among companies with more than 250 employees, the percentage struggling to find qualified candidates rises to 67 percent.5
ORGANIZATIONAL DEVELOPMENT, DESIGN & LEARNING
Real learning in a virtual world
How can corporate trainers prepare employees for dangerous or extraordinary workplace scenarios? VR technology offers immersive learning opportunities for an increasingly broad range of experiences.
Introduction: Total immersion
AT THE oil refinery, emergency sirens begin to wail. A shift supervisor races to the scene of the emergency and sees smoke already billowing from the roof of a distillation unit. He needs to get the fire under control, but when he opens the door to the control room, a wall of flame greets him. The situation is worse than anything in his training manual. How can he locate the shut-off button when he can’t see through the flames? He hesitates—and in that moment, the pressure built up in the distillation tower releases in a massive explosion, ripping apart the building and scattering debris across the whole refinery.
A red message flashes before the supervisor’s eyes: Simulation failed. A voice comes over the intercom and says, “All right—let’s take two minutes, and then we’ll reset from the beginning.” He is covered in sweat as he takes off the headset. It had been a virtual reality (VR) simulation, but the stress was real; more importantly, the lessons on how to respond to a crisis had been real.
For decades, trainers have faced a difficult trade-off: How can you adequately prepare learners to make good decisions when facing dangerous or extraordinary situations? You can provide simple learning materials like books and classes, but these are likely inadequate preparation for stressful and highly complex situations. Or you can expose the learners to those situations in live training, but this can be extremely costly—not to mention hazardous. For many jobs and situations, training has long offered an unappealing choice between easy but ineffective, or effective but expensive and risky.
VR promises a third way: a method of training that can break this trade-off of learning and provide effective training in a safe, cost-effective environment.1 Certainly, the technology is not optimal for every learning activity. But VR has been shown to offer measurable improvement in a wide array of immersive learning outcomes, in tasks that range from flying advanced jets to making a chicken sandwich to handling dangerous chemicals.2
This article is intended to help trainers identify whether VR is right for their particular learning needs and chart a path toward successful adoption of the technology. Ultimately, learning-focused VR can turn novices into experts more swiftly, effectively, and smoothly than ever before.
It’s all about expertise
Success in business often rests on having the right expertise in the right places: having the IT expert on hand when the system goes down, or the best shift manager on duty when a huge order comes in. The more experts in an organization, the more likely an expert will be around when needed.
Of course, expertise can be purchased by hiring established experts. But their numbers are finite, and with needs constantly shifting, training often makes far more sense. Corporate learning, then, aims to create expertise as quickly and effectively as possible. We want people to learn better and more quickly. This raises a question: What exactly is expertise? Just what is it that we want people to be able to do after training?
Expertise is easiest to define in terms of what it is not. Expertise is not merely the number of years one has studied or how many academic degrees—or corporate training certificates—one has earned or even the results one has achieved. For example, simply tabulating wins and losses in tennis turns out to be a poor way of ranking the best players.3 And notwithstanding some popular theories, thousands of hours of practice don’t always generate expertise. For example, deliberate practice accounts for only 29.9 percent of the variance in expertise in music.4
Experts are not only better at executing particular tasks—they tend to think about things fundamentally differently than amateurs. In fact, they can execute better precisely because they think about things differently. Experts typically see more when looking at a situation than an amateur. Research comparing a world champion chess player with amateurs showed that the champion was better not only at playing chess but at knowing the game. The champion had a better understanding of a chessboard setup after viewing it for five seconds than a skilled amateur did after 15 minutes of studying the board.5
That result came about not because the chess champion was any smarter or had faster visual acuity than his amateur opponents—it was a product of expertise itself. Experts are able to recognize patterns behind the data we all see. Academic research has found a similar pattern-recognition story in nearly every industry from medicine to chess.6 Experts in diverse domains are better able to reorganize and make sense of scrambled information.7 Where knowledgeable amateurs rely on rules and guidelines to make decisions, experts are able to quickly read and react to situations by recognizing indicators that signal how a situation is behaving.8 A key to creating experts, it seems, is not the memorization of facts or knowledge but, rather, instilling flexible mental models that help explain why systems act the way they do.
How can we learn better?
In hindsight, trainers may have had it easy in offering certifications based on hours of study. Creating deeper expertise can be far more challenging. How can we train people to see deeper patterns in data? How do we know whether they are using flexible mental models?
For most people, experiences that expose trainees to tough or atypical cases force them to create more refined or specialized reasoning than that found in a book or procedure manual.9 The most effective learning may come from unexpected scenarios, which are difficult to present in a book or classroom.10 But unpredictable, experience-based learning has obvious limitations: It is easy to learn from experience when failure simply means losing a chess match, but what about fighting a fire, unloading hazardous chemicals, or configuring a wind turbine—all tasks for which failure means huge costs or even death? The problem facing trainers is how to create the benefits of learning from experience without incurring the costs of facing rare or dangerous experiences. The answer is to re-create those experiences.
Take medical training, for example. A cardiologist may practice for years, continually training, before reaching the peak of her profession. One reason: Many of the most serious medical problems are extremely rare, meaning that a doctor must often work for years before encountering them and building expertise in how to recognize and treat them. With some procedures requiring doctors to practice on 100 patients before reaching a critical level of skill, this means that some doctors may retire before even having the opportunity to become an expert in treating certain rare conditions.11
VR training offers a shortcut. Given its ability to present immersive, realistic situations over and over again, the technology can give doctors the opportunity to potentially build expertise on conditions before they see them for the first time in real patients (see figure 1). VR can also offer the ability to learn in new ways—not only simulating what a doctor might see but presenting it in 3D or in more detail. For example, a cardiologist could see a heart defect, not just from symptoms or test results but as a 3D model, allowing her to peek inside the heart and understand the problem more deeply and how to treat it more accurately.12
Virtual reality: Better training, faster, safer, and at lower cost
VR technology can enable more effective learning at a lower cost and in less time than many traditional learning methods. This is because VR can allow for more training repetitions, especially when dealing with costly, rare, or dangerous environments. For example, the skills of aviation maintenance personnel can degrade when budget constraints limit flying hours; if jets are not in the air, there is nothing to be fixed. But without that practice, critical maintenance skills can slip, leading to increased accidents.13 VR can allow maintenance staffers to keep up their skills by learning from experience, at a fraction of the cost of putting an actual jet in the sky.
VR is not just about saving money—it can provide better outcomes than many traditional learning methods. Most research examining the technology’s effectiveness has found that it reduces the time taken to learn, decreases the number of trainee errors, increases the amount learned, and helps learners retain knowledge longer than traditional methods.14 These effects apply to the general population as well as specialists training for unique tasks. One experiment compared how prepared airline passengers were for an emergency from reading the ubiquitous seatback safety card versus completing a brief immersive game. Passengers who used the game seemed to learn more and retain their knowledge longer than those who merely read the safety card. These better outcomes are almost certainly linked to the fact that the game was more successful than the card at engaging passengers and arousing fear, both incentivizing participants to learn and providing the neurological surprise to support that learning.15
Beyond simply improving how well learners retain information, VR-based training can help learners when they get it wrong. Because the system can track all of a trainee’s actions and inputs as he or she moves through a scenario, it can reduce the cost of providing individualized, tailored feedback. Experts need not sift through all the data and tell a trainee where he or she went wrong—the system itself may be able to determine likely causes of error and the best strategies for avoiding those errors in the future.16
All of these capabilities mean that VR can be a valuable learning tool for a variety of tasks in any industry—and some real-world applications are already bearing out what academic research has predicted:
- Better learning. Some major retailers have begun training workers using VR simulations. Staff are able to repeatedly take on new tasks such as managing the produce department or annual challenges such as dealing with Black Friday.17 Working through these challenges is designed to help people directly see the impact of their actions on customer experience. And simulations can even allow staff to virtually travel to other stores to see how operations are managed there, spreading good ideas and offering paths to improvement.18 As a result, some companies have found that not only do people seem to retain more compared to traditional methods—they appear to learn more as well.19
- Faster learning. In 2017, KFC debuted a VR training simulation to help trainees learn the chain’s “secret recipe” for preparing chicken. Using the simulation, trainees were able to master the five steps of making fried chicken in 10 minutes, compared with 25 minutes for conventional instruction.20
Linde’s experience with VR-based training illustrates the technology’s potential benefits. One of the world’s largest suppliers of industrial gases, Linde delivers hazardous chemicals to thousands of locations daily, meaning that truck drivers must handle materials that may be explosive or, at -320° F, cold enough to instantly freeze hands solid. When one slip-up can mean injury or death, how can new drivers build their skills and expertise? For Linde, VR-based training provides an answer. In the virtual environment, new drivers can get dozens of repetitions, building safe habits before stepping out on their first delivery.21 VR can even give drivers an X-ray view of what is happening inside the tanks as they work. Not only are drivers practicing the right skills—they are learning the underlying concepts of why they are the right skills. That is what can create expertise—allowing drivers to react to unexpected situations quickly and with confidence.
Linde is experimenting with more ambitious VR training environments as well. The company used CAD files for a plant currently under construction to create an immersive VR environment, aiming to train the operators who will eventually manage that plant.22 As with the earlier oil-refinery example, operators can practice emergency procedures or dangerous tasks, but they can also explore the environment, understand how all systems fit together, and even peek inside operating machinery to have a better view of the plant for which they will soon be responsible.23
When can VR enhance training?
As with any technology, VR is a tool, not a magic bullet. Incorporating VR into a training program hardly guarantees quality improvements; indeed, the coming years will doubtless bring anecdotes of VR disappointments along with successes. Trainers should bring the same careful planning in program design and learning goals to VR as to any other training effort—including focusing programs around understanding the knowledge that an organization needs learners to acquire and what they should then do with that knowledge.
The knowledge that learners must acquire can cover a wide range, but several factors are particularly relevant to VR technology: how rare the knowledge is, how observable, and how easily it can be replicated physically. A cardiologist may struggle to learn about uncommon heart defects exactly because they are rare, limiting learning opportunities. Many find organic chemistry challenging to learn partly because one can’t directly observe molecular bonds with human senses; landing on an aircraft carrier is tricky to perfect because repetitions are both costly and dangerous.
Another attribute to consider: what trainers expect learners to do with the knowledge once they have it. Do people simply need to recognize and apply it, as with reading the defense in football, or do they need to perform complicated actions such as synthesizing it with other knowledge and adjusting to context? All of these factors play into how best to present knowledge to learners.
By understanding the different factors that go into learning, a trainer can make informed decisions about when VR is appropriate and design the best training possible to maximize performance (see figure 2). For example, if learners need only acquire relatively simple information—that is, information that is common, obvious, or easy to represent—VR may be superfluous and no more effective than books, classroom instruction, or job aids.
Similarly, if learners need to do more complex tasks involving simple information, VR may help, but there may be easier, cheaper ways to accomplish the learning. Take the simple knowledge of a workflow: Workers need to understand the workflow and apply it in different contexts. VR might certainly help in learning such workflows, but it may not always be necessary. If the various contexts of the work are not rare, dangerous, or costly to recreate, using case studies or job aids may be cost-effective alternatives.
Where VR moves into a class of its own is when the knowledge that learners must acquire is complex: where trainees must grapple with difficult-to-observe phenomena that occur rarely or in dangerous situations. In these cases, VR-based training may well be an effective choice, offering the advantages of faster and better learning at lower cost.
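The triage logic described in the preceding paragraphs can be summarized in a short sketch. This is a hypothetical helper under stated assumptions, not a formal model from the article; the factor names and the three recommendation labels are illustrative:

```python
# Hypothetical triage helper reflecting the decision factors discussed above:
# how rare the knowledge is, how observable, how costly or dangerous it is to
# replicate, and how complex the intended use is. Thresholds and labels are
# illustrative assumptions, not a validated framework.

def recommend_training_medium(rare: bool, hard_to_observe: bool,
                              dangerous_or_costly: bool,
                              complex_use: bool) -> str:
    """Map the decision factors to a suggested training medium."""
    if rare or hard_to_observe or dangerous_or_costly:
        # Experience is scarce, invisible to the senses, or unsafe to stage live.
        return "VR simulation"
    if complex_use:
        # Complex use of common, observable knowledge: cheaper media may suffice.
        return "case studies / job aids"
    # Simple, common, easy-to-represent knowledge.
    return "books / classroom"

print(recommend_training_medium(rare=True, hard_to_observe=False,
                                dangerous_or_costly=True, complex_use=True))
# → VR simulation
```

Applied to the article's examples, rare heart defects and carrier landings would score as VR candidates, while an everyday workflow with common, observable contexts would fall through to case studies or job aids.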
Indeed, VR’s ability to allow for collaboration and for repeated simulation opens up entirely new learning possibilities:
- Shared scenarios. Consider a military squad that needs its members not only to individually do the right thing but to coordinate and work together. Shared scenarios can allow members to practice individual actions and communication within the squad in a variety of combat situations they could not normally face.
- Seeing the unseen. VR may be even more helpful for research scientists. Not only do they often need to collaborate within teams—they regularly struggle with concepts not easily visualized. But imagine if a team of scientists could share ideas while all looking at a 3D model of the molecules they are studying. They could come up with new ideas inspired by finally seeing the previously unseen—and they could then easily share those ideas with their colleagues.
- Test and re-test. VR technology allows trainees to test ideas as well as share them. Many Formula 1 auto racing teams use VR extensively in preparation for races, going far beyond drivers simply learning the track—after all, they already know it by heart. Instead, the teams use simulations to test different setups for their car and different race strategies.24 The aim is to prepare team members for any eventuality during the race, helping them react swiftly. This type of virtual testing represents a deeper form of learning, one in which the drivers and the teams are using VR to see into the future and discover the deeper patterns in what is likely to happen. In short, they are building expertise.
Getting started is less daunting than it may seem
Many trainers will no doubt be excited by the promise of VR as a new technology with revolutionary benefits, though CFOs and CTOs—worried about complex technical integration, high up-front costs, and years of headlines about VR hype—may express less initial enthusiasm. The good news: Implementing VR technology may be far less daunting than it seems. With standardized development kits, training design and technical integration have never been easier, and the costs of hardware, computing power, and storage continue to fall. As a result, many will find the cost of VR-based training applications increasingly reasonable. Especially when companies factor in performance gains and the savings from cutting the time lost to longer, traditional training methods, VR can show a rapid return on investment.
With technology improving and prices dropping, the major steps to consider for creating successful VR learning resemble those typically involved in designing any good learning program:
- Understand your training needs. Determine the type of knowledge that learners must absorb and how they must use that knowledge on the job; this will help you understand whether VR is right for your needs and how it should be used.
- Create your business case. Quantify the expected benefit from the training in terms of increased performance, decreased errors, and productivity gains from fewer days lost to training. Array those benefits against expected costs to understand the ROI for the project.
- Pilot the training. Start small. Begin with a pilot program to evaluate the effectiveness of the VR training and its adoption within the organization.
- Quantify the benefit and scale the program. Use the results of the pilot program to validate initial estimates of ROI, modify the program based on what worked and what did not, and scale the deployment in scope or size.
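The business-case arithmetic in the steps above can be sketched as a back-of-envelope model. Every figure below is an illustrative assumption, not a benchmark from the article; a real business case would substitute the organization's own pilot data:

```python
# Hypothetical back-of-envelope ROI model for a VR training program.
# All input figures are illustrative assumptions, not article benchmarks.

def vr_training_roi(trainees, hours_saved_per_trainee, loaded_hourly_rate,
                    error_cost_avoided, upfront_cost, annual_run_cost, years=3):
    """Return (total_benefit, total_cost, roi_ratio) over the planning horizon."""
    # Productivity gain: fewer hours lost to training, valued at loaded labor cost.
    productivity_gain = trainees * hours_saved_per_trainee * loaded_hourly_rate
    total_benefit = (productivity_gain + error_cost_avoided) * years
    # Costs: one-time content/hardware spend plus recurring run costs.
    total_cost = upfront_cost + annual_run_cost * years
    return total_benefit, total_cost, total_benefit / total_cost

benefit, cost, roi = vr_training_roi(
    trainees=500,                # workers trained per year (assumed)
    hours_saved_per_trainee=6,   # shorter VR sessions vs. classroom (assumed)
    loaded_hourly_rate=40.0,     # fully loaded labor cost, USD/hour (assumed)
    error_cost_avoided=50_000,   # annual cost of errors prevented (assumed)
    upfront_cost=150_000,        # headsets plus content development (assumed)
    annual_run_cost=30_000,      # licenses, maintenance, updates (assumed)
)
print(f"Benefit ${benefit:,.0f} vs. cost ${cost:,.0f} -> ROI {roi:.1f}x")
# → Benefit $510,000 vs. cost $240,000 -> ROI 2.1x
```

A pilot (step three) then replaces the assumed inputs with measured ones, which is what makes the scaling decision in step four defensible.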
Following these steps, companies adopting VR should get more than a shiny new technology—they can get better learning at lower cost than other options. Ultimately, the applications of VR and its ROI are limited less by dollars or technology than by imagination.
ORGANIZATIONAL DEVELOPMENT, DESIGN & LEARNING
From jobs to superjobs
As organizations embrace and adopt robotics and AI, they’re finding that virtually every job can be redesigned—creating new categories of work, including hybrid jobs and “superjobs.”
The use of artificial intelligence (AI), cognitive technologies, and robotics to automate and augment work is on the rise, prompting the redesign of jobs in a growing number of domains. The jobs of today are more machine-powered and data-driven than in the past, and they also require more human skills in problem-solving, communication, interpretation, and design. As machines take over repeatable tasks and the work people do becomes less routine, many jobs will rapidly evolve into what we call “superjobs”—the newest job category that changes the landscape of how organizations think about work.
DURING the last few years, many have been alarmed by studies predicting that AI and robotics will do away with jobs. In 2019, this topic remains very much a concern among our Global Human Capital Trends survey respondents. Almost two-thirds of this year’s respondents (64 percent) cited AI and robotics as an important or very important issue in human capital. But are fears of net job losses to technology realistic? And what additional implications does the growing adoption of these technologies in the workplace hold?
First, let’s discuss the technology. The market for technologies such as robotic process automation (RPA)—software to automate manual tasks—is growing at 20 percent per year and is likely to reach US$5 billion by 2024.1 Reflecting this growth, 41 percent of respondents to our 2019 Global Human Capital Trends survey say they are using automation extensively or across multiple functions. Among the various ways they are automating work, RPA is the most prevalent, but 26 percent of respondents are using robotics, 22 percent are using AI, and 22 percent are using cognitive technologies as well (figure 1). And their use is expected to spread. In our survey, 64 percent of respondents saw growth ahead in robotics, 80 percent predicted growth in cognitive technologies, and 81 percent predicted growth in AI. Now that organizations are using these technologies, it appears they are seeing the benefits and investing heavily in them.
The language of automation
Automation: Includes robotics, cognitive, and AI.
Robotics: Includes physical robots (such as drones and robots used for manufacturing) and robotic process automation (technology that automates highly standardized routines and transactions).
Cognitive technologies: Include natural language processing and generation (machines that understand language), and machine learning (pattern recognition).
AI: Machines that can make predictions using deep learning, neural networks, and related techniques.
Alongside this growth in adoption, our survey shows that the levels of “fear” and “uncertainty” around these technologies are also growing. Only 26 percent of respondents stated that their organizations were “ready or very ready” to address the impact of these technologies. In fact, only 6 percent of respondents said that their organizations were “very ready,” suggesting that organizations are only now beginning to understand the scale and the massive implications for job design, reskilling, and work reinvention involved in integrating people and automation more extensively across the workforce.
The jobs they are a-changin’
Are jobs going away due to technology? While some may be eliminated, our view is that many more are changing. The unemployment rate remains low in the United States, and the labor market is tight for new and critical skills around the world. Furthermore, only 38 percent of our survey respondents told us that they expect technology to eliminate jobs at their organizations within the next three years, and only 13 percent believe automation will eliminate a significant number of positions, far different from our findings on this score only a few years ago.
Earlier research by Deloitte posited that automation, by removing routine work, actually makes jobs more human, enabling the role and contribution of people in work to rise in importance and value. The value of automation and AI, according to this research, lies not in the ability to replace human labor with machines, but in augmenting the workforce and enabling human work to be reframed in terms of problem-solving and the ability to create new knowledge. “It is [the] ability to collectively make sense of the world that makes us uniquely human and separates us from the robots—and it cuts across all levels of society.”2
The ways our survey respondents tell us they are using automation, and their efforts to redesign work as a corollary to automation, speak to this idea. This year, while 62 percent of respondents are using automation to eliminate transactional work and replace repetitive tasks, 47 percent are also augmenting existing work practices to improve productivity, and 36 percent are “reimagining work.” Many respondents also told us they were doubling down on reskilling: Eighty-four percent of the respondents who said that automation would require reskilling reported that they are increasing funding for reskilling and retraining, with 18 percent characterizing this investment as “significant” (figure 2).
The picture that emerges from these findings is that, as machines replace humans in doing routine work, jobs are evolving to require new combinations of human skills and capabilities. This creates the need for organizations to redesign jobs—along with their business and work processes—to keep pace.
The advent of “superjobs”
In traditional job design, organizations create fixed, stable roles with written job descriptions and then add supervisory and management positions on top. When parts of jobs are automated by machines, the work that remains for humans is generally more interpretive and service-oriented, involving problem-solving, data interpretation, communications and listening, customer service and empathy, and teamwork and collaboration. However, these higher-level skills do not break down into fixed tasks the way traditional jobs do, so they are forcing organizations to create more flexible, evolving, less rigidly defined positions and roles.
These new types of jobs, which go under a variety of names—“manager,” “designer,” “architect,” or “analyst”—are evolving into what we call “superjobs.” New research shows that the jobs in highest demand today, and those with the fastest acceleration in wages, are so-called “hybrid jobs” that bring together technical skills, including technology operations and data analysis and interpretation, with “soft” skills in areas such as communication, service, and collaboration.3 The concept of superjobs takes this shift one step further. In a superjob, technology has not only changed the nature of the skills the job requires but has changed the nature of the work and the job itself. Superjobs require the breadth of technical and soft skills that hybrid jobs do—but also combine parts of different traditional jobs into integrated roles that leverage the significant productivity and efficiency gains that can arise when people work with smart machines, data, and algorithms.4
The evolution of jobs
Standard jobs: Roles that perform work using a specified and narrow skill set. Generally organized around repeatable tasks and standard processes.
Hybrid jobs: Roles that perform work using a combination of skill sets drawing on both technical and soft skills. Historically, these types of skills have not been combined in the same job.
Superjobs: Roles that combine work and responsibilities from multiple traditional jobs, using technology to both augment and broaden the scope of the work performed, and involving a more complex set of domain, technical, and human skills.
For instance, the Cleveland Clinic, a leading US medical center facing new competition from for-profit hospital systems that had moved into the Cleveland area, underwent a fundamental rethinking and redesign of its entire enterprise—including job definitions. Not a single role was left untouched: Whether clinical or not, whether licensed or not, each position had to be evaluated and considered for potential gains in efficiency, skill level, and viability. In this process, the clinic realized that specialist roles in medicine had to become more flexible and dynamic. It became clear that doctors had to be responsible not only for deep medical domain understanding but also for understanding broad issues of patient care. One result of this effort was an increased awareness of the hybrid roles played by nurses and other care providers—and an increased investment in training them in “care and case management” to broaden their skills beyond their technical specialties.5
From redesigning jobs to recoding work
The creation of superjobs—and the decomposition, recombination, and expansion of new roles as part of their creation—requires organizations to think about work design in new ways. If organizations take existing tasks and simply automate them, there will likely be some improvement in throughput—but if the jobs and the work are redesigned to combine the strengths of the human workforce with machines and platforms, the result can be significant improvements in customer service, output, and productivity.6 The shift from the redesign of jobs to the recoding of work—integrating machines and humans in the flow of work and creating meaningful roles for people—is a substantial challenge in front of every business and HR leader. It will require fresh thinking and high levels of collaboration across the business, including among the IT, finance, and HR functions, among others. And it will take a deliberate plan to get in front of the challenge.
Recoding work for the future demands a new approach: not just rewriting job descriptions, but rather starting with a broader canvas and then composing the work so it can take advantage of machines, workers in alternative work arrangements, and—most importantly—unique human capabilities such as imagination, curiosity, self-development, and empathy. This contrasts with the traditional approach to creating job descriptions, which have typically been defined by a narrow view of the skills, activities, tasks, and expectations of workers in highly specific roles. In many organizations, this has led to a proliferation of hundreds of very detailed and formulaic—and some would say deadening and uninspiring—job descriptions and profiles. A job canvas, on the other hand, takes a more expansive, generative, and meaningful view. In the future, work will be defined by:
- The outputs and problems the workforce solves, not the activities and tasks they execute;
- The teams and relationships people engage and motivate, not the subordinates they supervise;
- The tools and technologies that both automate work and augment the workforce to increase productivity and enhance value to customers; and
- The integration of development, learning, and new experiences into the day-to-day (often real-time) flow of work.
Imagine this construct in the context of the HR organization. Today, HR roles are shifting dramatically due to the influx of technology, from chatbots to automated workflows. A redesigned job could use technology to increase the range of questions an HR shared services representative could answer. But while doing this would add some value, a more powerful opportunity to increase productivity and value would be to start with a broader canvas of what HR shared services can be. Given that technology can provide real-time insights on worker sentiment and behavior across the enterprise, is there a way to combine these insights with the human skills needed to work in HR shared services—problem-solving, communication and listening, customer service and empathy, and teamwork and collaboration—to craft an entirely new role of an HR “experience architect”? The person in such a superjob would take advantage of technology to automate answering routine questions, while focusing primarily on the outcome of delivering an effective workforce experience. It would not be a redesigned HR shared services job, but one in which the work itself has been recoded to encompass more possibilities, greater productivity, and, ultimately, a more meaningful experience for workers who are looking for more.
The potential for backlash
The advent of superjobs carries with it the potential for societal backlash. The flip side—some would say the darker side—of the creation of superjobs is growth in commodity jobs, service jobs, and microtasks. Already, commentators are seeing a bifurcation of some work and jobs into highly augmented, complex, well-paid jobs on the one hand, and lower-wage, lower-skilled work across service sectors on the other. Recent research is capturing the impact of technology and automation on the division of the job market.7 In the face of the potential social consequences, business leaders should challenge themselves to reimagine work to meet the needs of all workforce segments in all job types—service and gig workers as well as those with superjobs.
Clearly, the full story has yet to unfold with regard to technological advances and their impact on work. We believe that organizations need to view these trends in the context of the social enterprise—and the increasingly important connections between organizations and society. Augmenting workers with technology will, no doubt, lead to work being done in new ways. The challenge before organizations now is to execute this reinvention in a manner that leads to positive results for themselves, their workers, and the economy and society as a whole.
FUTURE OF WORK & TECHNOLOGY
Creating good jobs
Fears of AI-based automation forcing humans out of work or accelerating the creation of unstable jobs may be unfounded. AI, thoughtfully deployed, could instead help create meaningful work.
WHEN it comes to work, workers, and jobs, much of the angst of the modern era boils down to the fear that we’re witnessing the automation endgame, and that there will be nowhere for humans to retreat as machines take over the last few tasks. The most recent wave of commentary on this front stems from the use of artificial intelligence (AI) to capture and automate tacit knowledge and tasks, which were previously thought to be too subtle and complex to be automated. Is there no area of human experience that can’t be quantified and mechanized? And if not, what is left for humans to do except the menial tasks involved in taking care of the machines?
At the core of this concern is our desire for good jobs—jobs that, without undue intensity or stress, make the most of workers’ natural attributes and abilities; where the work provides the worker with motivation, novelty, diversity, autonomy, and work/life balance; and where workers are duly compensated and consider the employment contract fair. Crucially, good jobs support workers in learning by doing—and, in so doing, deliver benefits on three levels: to the worker, who gains in personal development and job satisfaction; to the organization, which innovates as staff find new problems to solve and opportunities to pursue; and to the community as a whole, which reaps the economic benefits of hosting thriving organizations and workers. This is what makes good jobs productive and sustainable for the organization, as well as engaging and fulfilling for the worker. It is also what aligns good jobs with the larger community’s values and norms, since a community can hardly argue with having happier citizens and a higher standard of living.1
Does the relentless advance of AI threaten to automate away all the learning, creativity, and meaning that make a job a good job? Certainly, some have blamed technology for just such an outcome. Headlines today often express concern over technological innovation resulting in bad jobs for humans, or even the complete elimination of certain professions. Some fear that further technology advancement in the workplace will result in jobs that are little more than collections of loosely related tasks, where employers respond to cost pressures by dividing work schedules into ever smaller slivers of time, and where employees are asked to work for longer periods over more days. As the monotonic progress of technology has automated more and more of a firm’s functions, managers have fallen into the habit of considering work as little more than a series of tasks, strung end-to-end into processes, to be accomplished as efficiently as possible, with human labor as a cost to be minimized. The result has been the creation of narrowly defined, monotonous, and unstable jobs, spanning knowledge work and procedural jobs in bureaucracies and service work in the emerging “gig economy.”2
The problem here isn’t the technology; rather, it’s the way the technology is used—and, more than that, the way people think about using it. True, AI can execute certain tasks that human beings have historically performed, and it can thereby replace the humans who were once responsible for those tasks. However, just because we can use AI in this manner doesn’t mean that we should. As we have previously argued, there is tantalizing evidence that using AI on a task-by-task basis may not be the most effective way to apply it.3 Conceptualizing work in terms of tasks and processes, and using technology to automate those tasks and processes, may have served us well in the industrial era, but just as AI differs from previous generations of technologies in its ability to mimic (some) human behaviors, so too should our view of work evolve so as to allow us to best put that ability to use.
In this essay, we argue that the thoughtful use of AI-based automation, far from making humans obsolete or relegating them to busywork, can open up vast possibilities for creating meaningful work that not only allows for, but requires, the uniquely human strengths of sense-making and contextual decisions. In fact, creating good jobs that play to our strengths as social creatures might be necessary if we’re to realize AI’s latent potential and break out of the persistent period of low productivity growth that we’re experiencing today. But for AI to deliver on its promise, we must take a fundamentally different view of work and how work is organized—one that takes AI’s uniquely flexible capabilities into account, and that treats humans and intelligent machines as partners in search of solutions to a shared problem.
Problems rather than processes
Consider a chatbot—a computer program that a user can converse or chat with—typically used for product support or as a shopping assistant. The computer in the Enterprise from Star Trek is a chatbot, as is Microsoft’s Zo, and the virtual assistants that come with many smartphones. The use of AI allows a chatbot to deliver a range of responses to a range of stimuli, rather than limiting it to a single stereotyped response to a specific input. This flexibility in recognizing inputs and generating appropriate responses is the hallmark of AI-based automation, distinguishing it from automation using prior generations of technology. Because of this flexibility, AI-enabled systems can be said to display digital behaviors, actions that are driven by the recognition of what is required in a particular situation as a response to a particular stimulus.
We can consider a chatbot to embody a set of digital behaviors: how the bot responds to different utterances from the user. On the one hand, the chatbot’s ability to deliver different responses to different inputs gives it more utility and adaptability than a nonintelligent automated system. On the other hand, the behaviors that chatbots evince are fairly simple, constrained to canned responses in a conversation plan or limited by access to training data.4 More than that, chatbots are also constrained by their inability to leverage the social and cultural context they find themselves in. This is what makes chatbots—and AI-enabled systems generally—fundamentally different from humans, and an important reason that AI cannot “take over” all human jobs.
Humans rely on context to make sense of the world. The meaning of “let’s table the motion,” for example, depends on the context it’s uttered in. Our ability to refer to the context of a conversation is a significant contributor to our rich behaviors (as opposed to a chatbot’s simple ones). We can tune our response to verbal and nonverbal cues, past experience, knowledge of past or current events, anticipation of future events, knowledge of our counterparty, our empathy for the situation of others, or even cultural preferences (whether or not we’re consciously aware of them). The context of a conversation also evolves over time; we can infer new facts and come to new realizations. Indeed, the act of reaching a conclusion or realizing that there’s a better question to ask might even provide the stimulus required to trigger a different behavior.
Chatbots are limited in their ability to draw on context. They can only refer to external information that has been explicitly integrated into the solution. They don’t have general knowledge or a rich understanding of culture. Even the ability to refer back to earlier in a conversation is problematic, making it hard for earlier behaviors to influence later ones. Consequently, a chatbot’s behaviors tend to be of the simpler, functional kind, such as providing information in response to an explicit request. Nor do these behaviors interact with each other, preventing more complex behaviors from emerging.
The way chatbots are typically used exemplifies what we would argue is a “wrong” way to use AI-based automation—to execute tasks typically performed by a human, who is then considered redundant and replaceable. By only automating the simple behaviors within the reach of technology, and then treating the chatbot as a replacement for humans, we’re eliminating richer, more complex social and cultural behaviors that make interactions valuable. A chatbot cannot recognize humor or sarcasm, interpret elliptical allusions, or engage in small talk—yet we have put them in situations where, being accustomed to human interaction, people expect all these elements and more. It’s not surprising that users find chatbots frustrating and chatbot adoption is failing.5
A more productive approach is to combine digital and human behaviors. Consider the challenge of helping people who, due to a series of unfortunate events, find themselves about to become homeless. Often these people are not in a position to use a task-based interface—a website or interactive voice response (IVR) system—to resolve their situation. They need the rich interaction of a behavior-based interface, one where interaction with another human will enable them to work through the issue, quantify the problem, explore possible options, and (hopefully) find a solution.
We would like to use technology to improve the performance of the contact center such a person might call in this emergency. Reducing the effort required to serve each client would enable the contact center to serve more clients. At the same time, we don’t want to reduce the quality of the service. Indeed, ideally, we would like to take some of the time saved and use it to improve the service’s value by empowering social workers to delve deeper into problems and find more suitable (ideally, longer-term) solutions. This might also enable the center to move away from break-fix operation, where a portion of demand is due to the center’s inability to fully resolve problems at the previous point of contact. Clearly, if we can use technology appropriately, it might be possible to improve efficiency (more clients serviced), make the center more effective (more long-term solutions and less break-fix), and also increase the value of the outcome for the client (a better match between the underlying need and services provided).
If we’re not replacing the human, then perhaps we can augment the human by using a machine to automate some of the repetitive tasks. Consider oncology, a common example used to illustrate this human-augmentation strategy. Computers can already recognize cancer in a medical image more reliably than a human. We could simply pass responsibility for image analysis to machines, with the humans moving to more “complex” unautomated tasks, as we typically integrate human and machine by defining handoffs between tasks. However, the computer does not identify what is unusual with this particular tumor, or what it has in common with other unusual tumors, and launch into the process of discovering and developing new knowledge. We see a similar problem with our chatbot example, where removing the humans from the front line prevents social workers from understanding how the factors driving homelessness are changing, resulting in a system that can only service old demand, not new. If we break this link between doing and understanding, then our systems will become more precise over time (as machine operation improves) but they will not evolve outside their algorithmic box.
Our goal must be to construct work in such a way that digital behaviors are blended with human behaviors, increasing accuracy and effectiveness, while creating space for the humans to identify the unusual and build new knowledge, resulting in solutions that are superior to those that digital or human behaviors would create in isolation. Hence, if we’re to blend AI and human to achieve higher performance, then we need to find a way for human and digital behaviors to work together, rather than in sequence. To do this, we need to move away from thinking of work as a string of tasks comprising a process, to envisioning work as a set of complementary behaviors concentrated on addressing a problem. Behavior-based work can be conceptualized as a team standing around a shared whiteboard, each member holding a marker, responding to new stimuli (text and other marks) appearing on the board, carrying out their action, and drawing their result on the same board. Contrast this with task-based work, which is more like a bucket brigade where the workers stand in a line and the “work” is passed from worker to worker on its way to a predetermined destination, with each worker carrying out his or her action as the work passes by. Task-based work enables us to create optimal solutions to specific problems in a static and unchanging environment. Behavior-based work, on the other hand, provides effective solutions to ill-defined problems in a complex and changing world.
If we’re to blend AI and human to achieve higher performance, then we need to find a way for human and digital behaviors to work together, rather than in sequence.
To facilitate behavior-based work, we need to create a shared context that captures what is known about the problem to be solved, and against which both human and digital behaviors can operate. The starting point in our contact center example might be a transcript of the conversation so far, transcribed via a speech-to-text behavior. A collection of “recognize-client behaviors” monitor the conversation to determine if the caller is a returning client. This might be via voice-print or speech-pattern recognition. The client could state their name clearly enough for the AI to understand. They may have even provided a case number or be calling from a known phone number. Or the social worker might step in if they recognize the caller before the AI does. Regardless, the client’s details are fetched from case management to populate our shared context, the shared digital whiteboard, with minimal intervention.
As the conversation unfolds, digital behaviors use natural language processing to identify key facts in the dialogue. A client mentions a dependent child, for example. These facts are highlighted for both the human and other digital behaviors to see, creating a summary of the conversation updated in real time. The social worker can choose to accept the highlighted facts, or cancel or modify them. Regardless, the human’s focus is on the conversation, and they only need to step in when captured facts need correcting, rather than being distracted by the need to navigate a case management system.
Digital behaviors can encode business rules or policies. If, for example, there is sufficient data to determine that the client qualifies for emergency housing, then a business-rule behavior could recognize this and assert it in the shared context. The assertion might trigger a set of “find emergency housing behaviors” that contact suitable services to determine availability, offering the social worker a set of potential solutions. Larger services might be contacted via B2B links or robotic process automation (if no B2B integration exists). Many emergency housing services are small operations, so the contact might be via a message (email or text) to the duty manager, rather than via a computer-to-computer connection. We might even automate empathy by using AI to determine the level of stress in the client’s voice, providing a simple graphical measure of stress to the social worker to help them determine if the client needs additional help, such as talking to an external service on the client’s behalf.
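The shared context described here resembles a classic blackboard architecture: independent behaviors watch a common store of facts and post new ones, which can in turn trigger other behaviors. A minimal sketch in Python, with all behavior names, phrases, and eligibility rules purely illustrative:

```python
# Blackboard-style sketch of behavior-based work: independent "behaviors"
# watch a shared context and assert new facts, which may in turn trigger
# other behaviors. All names and rules here are hypothetical.

def extract_facts(ctx):
    """Digital behavior: flag key facts mentioned in the transcript."""
    changed = False
    if "dependent child" in ctx["transcript"] and "has_dependents" not in ctx["facts"]:
        ctx["facts"]["has_dependents"] = True
        changed = True
    if "no fixed address" in ctx["transcript"] and "homeless_risk" not in ctx["facts"]:
        ctx["facts"]["homeless_risk"] = True
        changed = True
    return changed

def emergency_housing_rule(ctx):
    """Business-rule behavior: assert eligibility once enough facts exist."""
    facts = ctx["facts"]
    if facts.get("homeless_risk") and facts.get("has_dependents") \
            and "eligible_emergency_housing" not in facts:
        facts["eligible_emergency_housing"] = True
        return True
    return False

def run_behaviors(ctx, behaviors):
    """Fire behaviors against the shared context until nothing changes."""
    while any(behavior(ctx) for behavior in behaviors):
        pass
    return ctx

context = {
    "transcript": "caller has a dependent child and no fixed address",
    "facts": {},  # the shared whiteboard; the social worker can edit it too
}
run_behaviors(context, [extract_facts, emergency_housing_rule])
```

In a real contact center the behaviors would include speech-to-text, client recognition, and the social worker's own edits; the point of the pattern is that no behavior calls another directly. They coordinate only through the shared context, so human and digital behaviors can be mixed freely.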
As this example illustrates, the superior value provided by structuring work around problems, rather than tasks, relies on our human ability to make sense of the world, to spot the unusual and the new, to discover what’s unique in this particular situation and create new knowledge. The line between human and machine cannot be delineated in terms of knowledge and skills unique to one or the other. The difference is that humans can participate in the social process of creating knowledge, while machines can only apply what has already been discovered.6
Good for workers, firms, and society
AI enables us to think differently about how we construct work. Rather than construct work from products and specialized tasks, we can choose to construct work from problems and behaviors. Individuals consulting financial advisors, for example, typically don’t want to purchase investment products as the end goal; what they really want is to secure a happy retirement. The problem can be defined as follows: What does a “happy retirement” look like; how much income is needed to support that lifestyle; how can the client balance spending and saving today to find the cash to invest while navigating the (financial) challenges that life puts in the road; and what investments give the client the best shot at getting from here to there? The financial advisor, client, and robo-advisor could collaborate around a common case file, a digital representation of their shared problem, incrementally defining what a “happy retirement” is and, consequently, the needed investment goals, income streams, and so on. This contrasts with treating the work as a process of “request investment parameters” (which the client doesn’t know) and then “recommend insurance” and “provide investment recommendations” (which the client doesn’t want, or only wants as a means to an end). The financial advisor’s job is to provide the rich human behaviors—educator to the investor’s student—to elucidate and establish the retirement goals (and, by extension, investment goals), while the robo-advisor provides simple algorithmic ones, responding to changes in the case file by updating it with an optimal investment strategy. Together, the human and robo-advisor can explore more options (thanks to the power and scope of digital behaviors) and develop a deeper understanding of the client’s needs (thanks to the human advisor’s questioning and contextual knowledge) than either could alone, creating more value as a result.
Rather than construct work from products and specialized tasks, we can choose to construct work from problems and behaviors.
If organizing work around problems and combining AI and human behaviors to help solve them can deliver greater value to customers, it similarly holds the potential to deliver greater value for businesses, as productivity is partly determined by how we construct jobs. The majority of the productivity benefits associated with a new production technology don’t come from its initial invention and introduction. They come from learning-by-doing:7 workers at the coalface identifying, sharing, and solving problems and improving techniques. Power looms are a particularly good example, with their introduction into production improving productivity by a factor of 2.5, but with a further factor of 20 provided by subsequent learning-by-doing.8
It’s important to maintain the connection between the humans—the creative problem identifiers—and the problems to be discovered. This is something that Toyota did when it realized that highly mechanized factories were efficient, but they didn’t improve. Humans were reintroduced and given roles in the production process to enable them to understand what the machines were doing, develop expertise, and consequently improve the production processes. The insights from these workers reduced waste in crankshaft production by 10 percent and helped shorten the production line. Others improved axle production and cut costs for chassis parts.9
This improvement was no coincidence. Jobs that are good for individuals—because they make the most of human sense-making nature—generally are also good for firms, because they improve productivity through learning by doing. As we will see below, they can also be good for society as a whole.
Consider bus drivers. With the development of autonomous vehicles in the foreseeable future, pundits are worried about what to do with all the soon-to-be-unemployed bus drivers. However, rather than fearing that autonomous buses will make bus drivers redundant, we should acknowledge that drivers will still find themselves in situations that only a human, and human behaviors, can deal with. Challenging weather (heavy rain or extreme glare) might require a driver to step in and take control. Unexpected events—accidents, road work, or an emergency—could require a human’s judgment to determine which road rule to break. (Is it permissible to edge into a red light while making space for an emergency vehicle?) Routes may need to be adjusted for anything from a temporarily moved stop to roadwork. A human presence might be legally required to, for example, monitor underage children or represent the vehicle at an accident.
As with chatbots, automating the simple behaviors and then eliminating the human will result in an undesirable outcome. A more productive approach is to discover the problems that bus drivers deal with, and then structure work and jobs around these problems and the kinds of behaviors needed to solve them. AI can be used to automate the simple behaviors, enabling the drivers to focus on more important ones, making the human-bus combination more productive as a result. The question is: Which problems and decision centers should we choose?
Let us assume that the simple behaviors required to drive a bus are automated. Our autonomous bus can steer, avoiding obstacles and holding its lane, maintain speed and separation from other vehicles, and obey the rules of the road. We can also assume that the bus will follow a route and schedule. If the service is frequent enough, then the collection of buses on a route might behave as a flock, adjusting speed to maintain separation and ensure that a bus arrives at each stop every five minutes or so, rather than attempting to arrive at a specific time.
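This flocking idea is essentially headway control: each bus nudges its speed to even out the gap to the bus ahead, so that a bus arrives every few minutes rather than at a fixed time. A simplified simulation in Python, where the route length, speeds, and control gain are all illustrative values:

```python
# Simplified headway control for a "flock" of buses on a loop route:
# each bus adjusts its speed based on the gap to the bus ahead, so the
# fleet converges toward even spacing (steady arrival intervals) rather
# than chasing a fixed timetable. All parameters are illustrative.

ROUTE_LENGTH = 10_000.0   # meters, treated as a loop
BASE_SPEED = 8.0          # m/s
GAIN = 0.002              # how aggressively a bus corrects its spacing

def step(positions, dt=1.0):
    """Advance every bus one time step, correcting speed by headway error."""
    n = len(positions)
    target_gap = ROUTE_LENGTH / n
    ordered = sorted(positions)
    new_positions = []
    for i, pos in enumerate(ordered):
        ahead = ordered[(i + 1) % n]
        gap = (ahead - pos) % ROUTE_LENGTH
        # Too close to the bus ahead -> slow down; too far behind -> speed up.
        speed = BASE_SPEED + GAIN * (gap - target_gap)
        new_positions.append((pos + max(0.0, speed) * dt) % ROUTE_LENGTH)
    return new_positions

# Start with badly bunched buses and let the flock even itself out.
buses = [0.0, 100.0, 200.0, 300.0, 400.0]
for _ in range(3600):  # simulate one hour
    buses = step(buses)

ordered = sorted(buses)
gaps = [(b - a) % ROUTE_LENGTH
        for a, b in zip(ordered, ordered[1:] + ordered[:1])]
```

Starting from a bunched state, the buses spread out toward roughly equal gaps with no central schedule at all, only the local rule. In the scheme described in the text, the human "driver" would supervise this flock and intervene when something the controller cannot handle occurs.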
As with the power loom, automating these simple behaviors means that drivers are not required to be constantly present for the bus (or loom) to operate. Rather than drive a single bus, they can now “drive” a flock of buses. The drivers monitor where each bus is, how it’s tracking to schedule, with the system suggesting interventions to overcome problems, such as a breakdown, congestion, or changed road conditions. The drivers can step in to pilot a particular bus should the conditions be too challenging (roadworks, perhaps, where markings and signaling are problematic), or to deal with an event that requires that human touch.
These buses could all be on the same route. A mobile driver might be responsible for four-to-five sequential buses on a route, zipping between them as needed to manage accidents or deal with customer complaints (or disagreements between customers). Or the driver might be responsible for buses in a geographic area, on multiple routes. It’s even possible to split the work, creating a desk-bound “driver” responsible for drone operation of a larger number of buses, while mobile and stationary drivers restrict themselves to incidents requiring a physical presence. School or community buses, for example, might have remote video monitoring while in transit, complemented by a human presence at stops.
Breaking the requirement that each bus have its own driver will provide us with an immediate productivity gain. If 10 drivers can manage 25 autonomous buses, then we will see productivity increase by a factor of 2.5, as we did with power looms: good jobs for the firm, as workers are more productive. Doing this requires an astute division of labor between mobile, stationary, and remote drivers, creating three different “bus driver” jobs that meet different work preferences: good jobs for the worker and the firm. Ensuring that these jobs involve workers as stakeholders in improving the system enables us to tap into learning-by-doing, allowing workers to continue to work on their craft, and the subsequent productivity improvements that learning-by-doing provides, which is good for workers and the firm.
These jobs don’t require training in software development or AI. They do require many of the same skills as existing bus drivers: understanding traffic, managing customers, dealing with accidents, and other day-to-day challenges. Some new skills will also be required, such as training a bus where to park at a new bus stop (by doing it manually the first time), or managing a flock of buses remotely (by nudging routes and separations in response to incidents), though these skills are not a stretch. Drivers will require a higher level of numeracy and literacy than in the past, though, as it is a document-driven world that we’re describing. Regardless, shifting from manual to autonomous buses does not imply making existing bus drivers redundant en masse. Many will make the transition on their own, others will require some help, and a few will require support to find new work.
The question, then, is: What to do with the productivity dividend? We could simply cut the cost of a bus ticket, passing the benefit on to existing patrons. Some of the saving might also be returned to the community, as public transport services are often subsidized. Another choice is to transform public transport, creating a more inclusive and equitable public transport system.
Buses are seen as an unreliable form of transport—schedules are sparse, with some buses running only hourly for part of the day and not at all otherwise, and route coverage is inadequate, leaving many (less fortunate) members of society in public transport deserts (locations more than 800 m from high-frequency public transport). We could rework the bus network to provide more frequent service and extend service into under-serviced areas, eliminating public transport deserts. The result could be a fairer and more equitable service at a cost similar to the old one, with the same number of jobs. This has the potential to transform lives. Reliable bus services might result in higher patronage, resulting in more bus routes being created, more frequent services on existing bus routes, and more bus “drivers” being hired. Indeed, this is the pattern we saw with power looms during the Industrial Revolution. Improved productivity resulted in lower prices for cloth, enabling a broader section of the community to buy higher quality clothing, which increased demand and created more jobs for weavers. Automation can result in jobs that are good for the worker, firm, and society as a whole.
Automation can result in jobs that are good for the worker, firm, and society as a whole.
How will we shape the jobs of the future?
There is no inevitability about the nature of work in the future. Clearly, the work will be different than it is today, though how it is different is an open question. Predictions of a jobless future, or a nirvana where we live a life of leisure, are most likely wrong. It’s true that the development of new technology has a significant effect on the shape society takes, though this is not a one-way street, as society’s preferences shape which technologies are pursued and which of their potential uses are socially acceptable. Melvin Kranzberg, a historian specializing in the history of technology, captured this in his fourth law: “Although technology might be a prime element in many public issues, nontechnical factors take precedence in technology-policy decisions.”10
The jobs first created by the development of the moving assembly line were clearly unacceptable by social standards of the time. The solution was for society to establish social norms for the employee-employer relationship—with the legislation of the eight-hour workday an example of this—and the development of the social institutions to support this new relationship. New “sharing economy” jobs and AI encroaching into the workplace suggest that we might be reaching a similar point, with many firms feeling that they have no option but to create bad jobs if they want to survive. These bad jobs can carry an economic cost, as they drag profitability down. In this essay, as well as our previous one,11 we have argued that these bad jobs are also preventing us from capitalizing on the opportunity created by AI.
Our relationship with technology has changed, and how we conceive work needs to change as a consequence. Prior to the Industrial Revolution, work was predominantly craft-based; we had an instrumental relationship with technology; and social norms and institutions were designed to support craft-based work. After the Industrial Revolution, with the development of the moving production line as the tipping point, work was based on task-specialization, and a new set of social norms and institutions were developed to support work built around products, tasks, and the skills required to prosecute them. With the advent of AI, our relationship with technology is changing again, and this automation is better thought of as capturing behaviors rather than tasks. As we stated previously, if automation in the industrial era was the replication of tasks previously isolated and defined for humans, then in this post-industrial era automation might be the replication of isolated and well-defined behaviors that were previously unique to humans.12
There are many ways to package human and digital behaviors, that is, to construct the jobs of the future. We, as a community, get to determine what these jobs look like. This future will still require bus drivers, mining engineers and machinery operators, financial advisors, social workers, and those employed in the caring professions, as it is our human proclivity for noticing the new and unusual, and for making sense of the world, that creates value. Few people want financial products for their retirement fund; what they really want is a happy retirement. In a world of robo-advisors, the value is created in the human conversation between financial advisors and clients, where they work together to discover what the client’s happy retirement looks like (and, consequently, investment goals, income streams, and so on), not in the mechanical creation and implementation of an investment strategy based on predefined parameters. If we’re to make the most of AI, realize the productivity (and, consequently, quality-of-life) improvements it promises, and deliver the opportunities for operational efficiency, then we need to choose to create good jobs:
- Jobs that make the most of our human nature as social problem identifiers and solvers
- Jobs that are productive and sustainable for organizations
- Jobs with an employee-employer relationship aligned with social norms
- Jobs that support learning by doing, providing for the worker’s personal development, for the improvement of the organization, and for the wealth of the community as a whole.
The question, then, is: What do we want these jobs of the future to look like?
DIGITAL (HR & TRANSFORMATION)
Organizational Network Analysis: The Missing Piece of Digital Transformation
Since 2000, more than half of Fortune 500 companies have “gone bankrupt, been acquired, or ceased to exist as a result of digital disruption,” according to Harvard Business Review. The same article estimates that three-quarters of today’s S&P 500 will be replaced within 10 years’ time, largely as a result of digital disruption. No industry will be an exception.
On the bright side, the rewards of successful digital transformation are plentiful. One study, for example, showed that companies that have successfully transformed themselves digitally achieve margins 16 percentage points higher than their industry average.
So we have both “the stick” and “the carrot” pushing our companies toward digital. Yet the McKinsey Global Institute’s Industry Digitization Index, which measures the degree to which digitization has penetrated sectors and firms, shows that very few companies have actually gotten far with their digital transformations. The United States, for example, operates at only 18 percent of its digital potential, and Europe at an even lower 12 percent.
So why aren’t more organizations making greater digital progress?
Before we dig any deeper, it is important to define what digital transformation actually is. My personal favorite comes from Brian Solis, the LinkedIn influencer who’s famous for his 6 stages of digital transformation. Solis defines digital transformation as “the realignment of, or new investment in, technology, business models, and processes to drive new value for customers and employees and more effectively compete in an ever-changing digital economy.”
Robust as it is, this definition covers all the ingredients: new technology, new business models and processes, and new value for customers and employees, all with the purpose of making organizations excel in a constantly changing digital economy.
How Hard Is Digital Transformation?
Deloitte’s Global Human Capital Trends 2016 found that 92% of senior executives and HR leaders believe their companies are not well organized, yet only 14% believe their companies are ready to reorganize effectively. They just might be right: business transformations have only about a 30% chance of succeeding. That figure has held for more than 20 years, and today, according to Forbes, the odds are shrinking even further.
But that’s not the worst part.
The worst part is that the much-desired digital transformation has the smallest chance of succeeding. Instead of the already daunting 30% success rate for business transformation overall, digital transformation, again according to Forbes, succeeds only 16% of the time. With so many resources pouring into it (over $1.1 trillion estimated for this year alone), and with digital transformation a top priority for so many companies, every CEO should ponder: is it acceptable for me to have only a 16% chance of success?
The answer obviously is “No!”.
But what can they do to improve these dismal odds?
An interesting study comes from two Harvard Business School professors, Marco Iansiti and Karim Lakhani, who joined forces with digital transformation expert Robert Bock. In their research on 344 companies drawn from industries such as manufacturing, consumer packaged goods, financial services, and retail, they found significant differences between the top 25% and the bottom 25% of digitally transformed companies.
The differences were clear across all four pillars of digital operations they observed: (1) Customer Interaction and Relationship Management, (2) Manufacturing, Product, and Service Delivery, (3) Product Creation and Delivery, and (4) Human Capital Management and Employee Productivity.
Having transformed those four pillars, the top 25% of companies achieved much better gross margins, earnings, and net income than the bottom 25%. And the difference in their technology budgets? There wasn’t any. Something else was stopping the other companies from producing these kinds of results.
The Biggest Obstacle
On LinkedIn, a post on digital transformation is written every single minute. But is this vast network of practitioners and academics anywhere close to a solution for this pressing issue?
It seems not.
You’ll usually see some great partial solutions: proposals based on different views of how to approach digital transformation, from needed skills to AI. Gartner, for example, offers a detailed skills-driven approach focused on how a CIO can foster the development of digital dexterity in his or her company. Yet it is questionable whether this approach will really transform employees’ mindsets, or just have them “tick the boxes” as they follow the steps.
As for AI, the most promising software to accelerate digital transformation came to light just last month. Laszlo Bock, former CHRO of Google, presented his company Humu’s flagship product, the nudge engine. This software seems to be the closest humans can get to AI’s ability to react in the moment with appropriate behavior: it reminds employees, in real time, to act in a desirable way (e.g., to thank a co-worker for doing a good job, or to ask a quieter team member for their opinion during a meeting).
But this method, like any other, will run into trouble if it “pushes” employees into behaviors they aren’t fond of, or actively oppose. Sure, employees might do what the nudge engine suggests to please their managers, but if their beliefs and values don’t change at a deeper level, real transformation won’t happen. Some employees will bypass what they don’t believe in whenever they can, because they are humans and “no AI is going to tell them how to behave.”
Aware that any digital transformation approach faces obstacles, Marcus Blosch, Gartner’s research vice president, warns about six barriers to becoming a digital business. He lists various factors, and the first barrier he names is the same one every author highlights when they investigate this topic comprehensively. It goes by different names (most often organizational culture), but with all fingers pointing in the same direction, the biggest obstacle really seems to be the human aspect.
Although other barriers must not be disregarded, the famous quote attributed to Peter Drucker, “Culture eats strategy for breakfast,” rings true. How people perceive change is decisive, since they are the ones who have to work in a new way. To make Drucker’s claim more actionable: the main reason companies fail at digital transformation is that they don’t get buy-in from most of their employees.
But how do you get people to buy into digital transformation?
Find and Motivate the Influencers
As we all know from personal experience, some people are more influential than others. The good news, proven a couple of decades ago already, is that you can actually measure someone’s influential reach within an organization. The method used for this is called Organizational Network Analysis (ONA).
This scientific methodology has been rigorously tested in the business environment by many researchers and practitioners for more than a quarter of a century. One of them, Michael J. Arena, General Motors’ Chief Talent Officer, played a critical role in transforming GM. Based on this and other experiences, he published a book this summer about how GM and other companies are transforming themselves from traditional to agile organizations using ONA. The book draws on more than a decade of research covering dozens of major companies, ranging from automotive, aerospace, health-care, and high-tech firms to consumer goods and financial services companies.
ONA can be used in different ways. While Michael J. Arena mostly uses it to create so-called adaptive space, there are other applications. Some companies simply appoint their change agents, with all the challenges that brings (e.g., not knowing their actual reach, a lack of transparency, even bad blood), while other companies identify them transparently, in a scientifically validated and business-proven way.
And that makes all the difference.
For example, Maven7, an ONA company co-founded by one of the world’s foremost network scientists, Albert-Laszlo Barabasi, has been helping companies transform their businesses using Organizational Network Analysis. Among other discoveries, they found that, on average, the roughly 4% of influencers identified by ONA can effectively reach about 70% of employees. And as Harvard Business Review emphasizes, more than 50% of influencers are typically unknown to management, which gives you a sense of how much you miss by appointing change agents without ONA. The real leverage appears when you add the influencers themselves to the employees they reach: with them on board, management can have three-quarters of employees, sometimes even more, on their side. That’s enough to seriously accelerate digital transformation.
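The arithmetic behind a small-group-reaches-most claim can be illustrated with a toy model (the names, numbers, and greedy method below are my own illustration, not Maven7’s actual data or algorithm): choose influencers one at a time so that each new pick adds the most not-yet-reached colleagues, and stop once coverage passes a target share of the workforce.

```python
def pick_influencers(reach, workforce, target=0.7):
    """Greedy max-coverage sketch: choose influencers until the target
    share of the workforce is reached (illustrative toy model only)."""
    covered, chosen = set(), []
    while len(covered) / len(workforce) < target:
        # Pick the candidate who adds the most uncovered employees.
        best = max(reach, key=lambda p: len(reach[p] - covered))
        gain = reach[best] - covered
        if not gain:
            break  # no candidate adds coverage; target is unreachable
        chosen.append(best)
        covered |= gain
    return chosen, covered

# Hypothetical reach map: influencer -> employees they directly influence.
workforce = {f"e{i}" for i in range(10)}
reach = {
    "ana": {"e0", "e1", "e2", "e3"},
    "ben": {"e3", "e4", "e5"},
    "cara": {"e6", "e7"},
    "dan": {"e8"},
}
chosen, covered = pick_influencers(reach, workforce)
print(chosen, len(covered) / len(workforce))  # ['ana', 'ben', 'cara'] 0.8
```

In this made-up example, three of four candidates already cover 80% of the ten-person workforce, which is the shape of the effect the article describes: a small, well-chosen set of influencers covers most employees.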
Organizational Network Analysis enables management to see, plan, and harness the power of the company’s informal networks, bringing teams closer than they ever were. Bersin by Deloitte also points to ONA as the tool for building the Network of Teams, the main management lever for the most advanced companies today. With companies like Google, Facebook, and Amazon championing the Network of Teams, using Organizational Network Analysis seems the natural path toward a digitally mature culture.
Josh Bersin also envisions the nudge engine mentioned earlier as a great tool to combine with ONA. Although thrilled by its huge potential, he points out that the nudge engine still has to prove its worth over time, while Organizational Network Analysis already rests on firm business foundations.
ONA – The Human Slide for Transformation
People, generally speaking, don’t like change, especially in an organizational setting. They worry about their salaries, their formal or informal status, job security, their ability to cope with the change, and so on. So they are more likely to be defensive than cheerful when they hear the term “digital transformation.”
They have to be convinced that it’s good for them. And who can convince them better than colleagues they already trust? That’s where Organizational Network Analysis comes in: ONA tells you who the most trusted employees in your organization are, the ones who can reach three-quarters of your workforce, themselves included.
There are different ways to do ONA, from short surveys to analyzing the digital traces of communication in the company (e-mails, e-calendars, phone records, etc.). Done right, this gives you a motivated group of employees eager to work with you, who can positively influence most of the other employees to accept digital transformation.
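As a rough sketch of the communication-trace approach (a minimal illustration with invented message data, not any vendor’s method; real ONA uses richer centrality measures and, crucially, consent and anonymization), one could build a contact graph from message metadata and rank employees by how many distinct colleagues they communicate with:

```python
from collections import defaultdict

# Hypothetical message metadata: (sender, recipient) pairs from e-mail logs.
messages = [
    ("ana", "ben"), ("ana", "cara"), ("ben", "ana"), ("cara", "dan"),
    ("dan", "cara"), ("eve", "ana"), ("eve", "ben"), ("cara", "ana"),
]

def degree_centrality(edges):
    """Count distinct communication partners per employee (undirected)."""
    neighbors = defaultdict(set)
    for sender, recipient in edges:
        neighbors[sender].add(recipient)
        neighbors[recipient].add(sender)
    return {person: len(partners) for person, partners in neighbors.items()}

ranking = sorted(degree_centrality(messages).items(), key=lambda kv: -kv[1])
print(ranking)  # employees with the widest direct reach come first
```

Here “ana” tops the ranking because she exchanges messages with three distinct colleagues; in practice one would also weight by message frequency and look at betweenness (who bridges otherwise disconnected groups), since bridges are often the hidden influencers management doesn’t know about.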
Gather them, lay out your strategy, and listen carefully to their feedback. Many of them come from the front lines of your business and know exactly where challenges may arise for them, for other employees, and for customers. The great thing about influencers is that you can use their feedback at any point in the digital transformation.
Therefore, I believe Organizational Network Analysis is the missing piece of the digital transformation puzzle. It helps you remove the barriers Gartner mentions, from employee resistance to increasing inter-departmental collaboration to closing the talent gap, all of which accelerates the change.
If we envision digital transformation as a steep downhill walk (a fitting metaphor, given the risks!), ONA can serve as the slide, accelerating your progress in a controlled way. Letting your influencers help shape the strategy is like setting up at the top of the slide: they can spot details you missed and clear your path by motivating other employees to move obstacles out of the way.
By doing so, they’ll help you go faster without running into anything (such as employee resistance). If at some point you hit challenges and slow down, they’ll help you realign the strategy and accelerate again. And if you’ve already fallen, they’ll suggest the next steps to get up and going again.
Working alongside other employees, these opinion makers will amplify your messages. They are credible, and they will reach the hearts and minds of other employees in a way you never could; that’s what they do naturally every day. They’ll extend your reach, help steer the conversion process on the ground, and give you real-time feedback you can act upon. They are the best help you could ever have. And those “consultants” are right there, in your company, waiting for you to consult them.
Bock, Iansiti, and Lakhani concluded their paper on digital transformation with this sentence: “To do this well, leading companies invest not only in technology but also in developing (…) network-centric capabilities and mindset to put that technology to the best use.”
The top 25% of companies we mentioned may or may not be using Organizational Network Analysis. But ONA certainly helps to empower employees, connect them, and build network-centric capabilities and mindset. The Harvard Business Review article cited at the beginning of this text concludes that digitally transformed companies “are companies that have embraced transformation as a way of life (…).”
It’s important to keep this in mind because, like any other kind of growth, digital transformation is a path, not a destination. And only if the majority of your employees accept digital transformation as a way of life will your digital transformation succeed.