HR Technology
Artificial Intelligence in HR – FAQs you need to be able to answer


For this article I am using an interview style: by ‘answering’ FAQs about AI in HR, I try to paint a picture of the ‘state of the union’ and of what is likely to happen next.

What are the most common applications of artificial intelligence in HR? How are these employed today?

On this issue we need to distinguish between the different levels of artificial intelligence. The typical three-step subdivision starts with (a) artificial narrow intelligence (ANI), which is roughly on par with an infant. The second stage, (b) artificial general intelligence (AGI), is more advanced: it covers more than one field – such as reasoning, problem solving and abstract thinking – and is broadly on par with an adult. (c) Artificial super intelligence (ASI) is the final stage of the intelligence explosion, in which AI surpasses human intelligence across all fields. We have not yet reached this last stage – cognitive self-learning is therefore not yet relevant, certainly not for HR departments. The other two levels, however, are already being applied in HR.

[Figure: The three stages of artificial intelligence – ANI, AGI, ASI]

a) Artificial Narrow Intelligence (ANI)

“Learning machines” on the first stage of artificial intelligence make decisions by tapping into large amounts of data and statistically validated algorithms. Classic examples are chatbots such as Siri, Alexa or Cortana – all so-called “personal assistants”. They can identify patterns in voice commands and react to them according to predefined algorithms. In HR they are employed when the initial contact with applicants is made – often to answer standard questions about an advertised position. As chatbots they can support an ongoing interaction between recruiter and applicant throughout the recruiting process; in this case they quite simply increase the recruiter’s accessibility for the applicant.

A second field of application is Natural Language Processing (NLP). This technology supports the scanning of application letters to characterize an applicant’s range of vocabulary and general wording. NLP can also assist in writing job advertisements by using language that is precisely targeted at the preferred group of applicants. Good examples are “X.ai” for scheduling appointments, “Mya” for answering questions about starting date, salary or the recruiting process, and “ClearFit” for creating job advertisements and evaluating applications.

The applicant typically benefits by being able to interact with the company more simply and quickly, on a 24/7 basis. For the HR professional, this use of AI means an efficiency gain.

b) Artificial General Intelligence (AGI)

On the second level of artificial intelligence, networked machines can develop new models and algorithms ad hoc. In HR this technology can improve the preselection of applicants. With this second stage of artificial intelligence we can tackle an imminent and important problem of employee selection today: unconscious bias. The world is full of it, and so is the world of employee selection: as a rule, people prefer “themselves and their peers” to people who are socialized differently. Personal experience with certain behaviors of others also shapes a person’s judgement of others. All these subconscious prejudices can impair a good selection decision. AI does not share these prejudices – which might actually have exciting consequences. One example is “HireVue” – this video interview service was originally used to increase the efficiency of employee selection: candidates interviewed via video do not have to travel and do not have to synchronize their availability with the hiring manager’s busy schedule – a good, sensible tool that many companies use. HireVue now collects and stores thousands of candidate interviews for its clients every day. By tapping into that vast amount of data, it can develop models for an automated preselection: What choice of words does the applicant use? Which facial expressions and gestures accompany the answers to standardized questions? And how do those facial expressions and gestures fit the skills the company is looking for? In a 15-minute interview, HireVue generates and evaluates around 25,000 data points – significantly more than the 50-100 data points that a recruiter would collect and document in a classic interview. In this way HireVue creates a data-driven profile of the applicant and a short list of suitable candidates without a hiring manager having to spend a second of work. And unconscious prejudice could become a thing of the past.

SAP’s recruiting solution “SuccessFactors AI” scans job ads for gender-specific terms that may deter the opposite sex from applying. This should help to minimize diversity bias in recruiting and to address a larger talent pool.

What has to be considered when introducing AI in an HR department?

  • In times of scarce talent, it is not the most advanced AI recruitment process that solves the biggest selection problems. Our collaborations with large companies such as Bertelsmann, Bosch, BMW, Cisco, Deutsche Bahn and many others clearly demonstrate this. Any use of technology – including AI – should primarily aim at improving the candidate’s experience. Too many applicants are lost during the recruiting process, and to the best applicants we are not attractive to begin with. Any HR department should first identify all contact points (“touchpoints”) in the applicant–company relationship, then identify the most important ones – and then do everything to optimize the experience at those contact points.
  • Critical contact points that often stand for a poor candidate experience are: a tedious online application process, poor accessibility of recruiters, uninformed or unreliable recruiters, and uninspiring aptitude tests. If these contact points are identified and improved using creative methods such as Design Thinking, truly interesting fields for artificial intelligence often emerge – always with the clear aim of improving the candidate’s experience.

Example of a candidate journey map:


Some ways it can be done:

  • One-click mechanisms for uploading Xing or LinkedIn profiles are absolutely sufficient for the actual application – these profiles are in turn automatically evaluated by first level AI.
  • Chatbot-“Recruiters” are available 24 hours a day, 7 days a week – another first level AI field of activity.
  • Classic, tedious application tests averaging 100 closed questions (“on a scale of 1 to 5, how much do you agree with the following statement?”) can be replaced by an unbiased, fast and modern video interview with open questions. This would require second-level AI.

How do applicants have to prepare for the new technology?

First of all: not at all. Remember: an applicant should fit a job. If a machine helps find the right person – for example by avoiding unconscious prejudice – all the better.

But applicants can use artificial intelligence themselves. A simple example is “Textio”: it provides phrasing assistance for letters of motivation, aligned to the job profile you are applying for. Recruiters can use it too: with Textio, Johnson & Johnson achieved a 25% increase in the response rate of candidates approached in active sourcing.

A widely discussed example, from almost two years ago, was “EstherBot”. A female applicant had built the chatbot – Esther – herself, without any programming skills, simply by using free chatbot technology. “EstherBot” could answer questions about Esther’s working life, work preferences and motivation. The applicant sent a link to her chatbot to companies she was interested in as employers. Potential employers did not preselect Esther by telephone interview, but by talking to her chatbot.

Can computers also conduct job interviews?

Not yet, not as we know them. What is possible: chatbots for hiring-manager interaction, and video interviews that can be evaluated for content, intonation, facial expressions and gestures. These technologies already offer astonishing experiences: 73% of the interviewed candidates who came into contact with the digital assistant “Mya” thought (and stated!) that they were in contact with a human recruiter – and not, as was actually the case, with a bot.

A real job interview would require third level AI – machine awareness, which would enable an AI to continuously develop new models, new assessments and the next questions in the actual interview.

Google introduced its “Duplex” technology at the I/O conference in May 2018; it might be a forerunner of this third level of AI. “Duplex” called a hairdresser and a restaurant to make an appointment and to reserve a table. In both cases, the person at the other end of the line did not seem to notice that he or she was talking to a computer program. Duplex sounds like a human being: it intoned, paused and understood questions. And it even sprinkled a few “hmm”s and “uh”s over the conversation.

Is it possible to analyze the potential of the existing workforce with the help of AI?

Of course: AI-supported HR does not only work for employees who are new to the company. It can be designed to come up with predictions for any kind of future behavior – and thus it can also function as a potential-analysis tool.

How can machines judge potentially good leadership capacities?

Wherever unconscious prejudice impairs a good decision, machines have a potential advantage. However, machine-supported decision making is only useful if it functions as a valuable contact point and leaves the employee being evaluated with a good experience. This, of course, also goes for managers.

Where does this technology lead us? How will we use AI in human resources in 10 years?

Ten years is a long time, and it is difficult to make an educated guess. Whether we will actually reach the third level of AI – machine consciousness – within 10 years, I don’t know. At the moment I see the AI hype in HR dwindling a little. But we will certainly have access to more and more data on people’s behavior – and that can make a lot of things possible.

Will artificial intelligence make recruiting cheaper?

Yes, very much so. As our “Candidate Journey Maps” show – we developed them together with major international companies – we can almost completely automate the preselection process and provide applicants with an even better experience than in a conventional selection process. We expect to increase efficiency by 22%. At Unilever, for example, the use of HireVue led to an increase in application completion rates from 50% to 96% across 250,000 applicants for 800 vacancies, while at the same time reducing recruitment time by 90%.

"Early adopter“ or "Wait&See“? Can you give Pro’s and Con’s on each of these stances?

A company using level 1 AI belongs to the “early majority”. A company running HR on level 2 AI can be dubbed an “early adopter”. Of course, the decision to integrate AI should be aligned with the company’s recruitment requirements and goals. In times when talent is hard to find, the first experience we leave applicants with is crucial. According to our research, the most effective instruments for optimization will probably contain elements of artificial intelligence.

Does AI have “teething troubles” when applied to HR?

AI in HR is still in its infancy, that’s right. In the narrow field of aptitude diagnostics and potential analysis, the classic test procedures were developed by organisational psychologists in the 1970s. We have been validating these tests for 50 years, and still many managers doubt their reliability. Now we are replacing these tests with artificial intelligence – and we will still face the problems of validation and distrust. Trust in algorithms will grow much more slowly than the underlying technology develops. In other words: this fundamental change in HR will be determined less by the quality of the computer scientists’ ideas than by the acceptance of their technologies in organisations. That is why we recommend using AI where it serves an improved employee or applicant experience. If that works, AI will undoubtedly be applied there.

Will corporations no longer need HR departments in 10 years – due to AI?

Unforeseeable innovative leaps – so-called “black swans” – can always occur. But barring black swans, I don’t think HR departments will disappear. They will undergo major changes and become more customer-oriented and data-driven. But they won’t disappear.

Which companies use artificial intelligence as a recruitment tool?

Many companies use first-level AI. In cooperation with IBM, Siemens is developing a chatbot called “CARL” (Cognitive Advisor for interactive user Relationship & continuous Learning); it greets employees as well as applicants with the simple question, “How can I help you today?” In this way chatbots become a valuable “employee engagement” partner.

Unilever, in cooperation with Microsoft, is pursuing a very similar strategy. I think that more than half of the major Silicon Valley companies work with recruiter chatbots. And “HireVue”’s reference list is also long (e.g. Goldman Sachs, Vodafone and Nike). Many of these companies are now using artificial intelligence – as we suggest – in connection with the candidate or employee experience. This will increase the acceptance rate of the technology.


HR Technology
What is Machine Learning?

This is the first of a series of articles intended to make Machine Learning more approachable to those who do not have technical training. I hope it is helpful.

Advancements in computer technology over the past decades have meant that the collection of electronic data has become more commonplace in most fields of human endeavor. Many organizations now find themselves holding large amounts of data spanning many prior years. This data can relate to people, financial transactions, biological information, and much, much more.

Simultaneously, data scientists have been developing iterative computer programs called algorithms that can look at this large amount of data, analyse it and identify patterns and relationships that cannot be identified by humans. Analyzing past phenomena can provide extremely valuable information about what to expect in the future from the same, or closely related, phenomena. In this sense, these algorithms can learn from the past and use this learning to make valuable predictions about the future.

While learning from data is not in itself a new concept, Machine Learning differentiates itself from other methods of learning by a capacity to deal with a much greater quantity of data, and a capacity to handle data that has limited structure. This allows Machine Learning to be successfully utilized on a wide array of topics that had previously been considered too complex for other learning methods.

Examples of Machine Learning

The following are examples of more well-developed uses of Machine Learning that you may have come across in your day-to-day lives:

  • Credit scoring: lenders use models trained on historical repayment data to predict which applicants are likely to default on a loan (this example is picked up again below).
  • Shopping basket analysis: retailers mine purchase data to find products that tend to be bought together, which powers recommendations and promotions (also referred to below).

Types of Machine Learning

Machine Learning can be classified into three main categories:

  1. Supervised learning algorithms make use of a training set of input and output data. The algorithm learns a relationship between the input and output data from the training set and then uses this relationship to predict the output for new data. One of the most common supervised learning objectives is classification. Classification learning aims to use the learned information to predict membership of a certain class. The credit scoring example represents classification learning in that it predicts which applicants will default on their loans (a brief illustrative sketch follows this list).
  2. Unsupervised learning aims to make observations in data where there is no known outcome or result, by deducing underlying patterns and structure in the data. Association learning is one of the most common forms of unsupervised learning, where the algorithm searches for associations between input data. The shopping basket analysis example represents association learning.
  3. Reinforcement learning is a form of ‘trial and error’ learning where input data stimulates the algorithm into a response, and where the algorithm is ‘punished’ or ‘rewarded’ depending on whether the response was the desired one. Robotics and autonomous technology make great use of this form of learning.
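
To make the supervised case concrete, here is a minimal sketch of classification learning in the spirit of the credit scoring example. It uses scikit-learn as one possible library; the feature values, labels and the choice of a decision tree are all invented for illustration only.

```python
# Minimal supervised classification sketch (all data is invented for illustration).
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical applicants: (annual income, existing debt) -> defaulted on a loan (1) or not (0)
X = [[52000, 4000], [31000, 12000], [78000, 2500], [24000, 9000],
     [45000, 7000], [66000, 1000], [29000, 15000], [58000, 3000]]
y = [0, 1, 0, 1, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X_train, y_train)                # learn the relationship between inputs and outputs
print(model.predict([[40000, 8000]]))      # predict the class of a previously unseen applicant
print(model.score(X_test, y_test))         # fraction of held-out examples classified correctly
```

The essential point is the labelled output column y: without it, the same data could only be explored with unsupervised techniques such as association learning.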

What are the necessary conditions for successful Machine Learning?

Machine Learning and ‘Big Data’ have become more widely known and have generated a lot of press in recent years. As a result, many individuals and organizations are considering how and whether they might apply to their specific situation, and whether there is value to be gained from them.

However, building internal capabilities for successful Machine Learning (or making use of external expertise) can be costly. Before taking on this challenge, it is wise to assess whether the right conditions exist for the organization to have a chance of success. The main considerations here relate to data and to human insight.

There are three important data requirements for effective Machine Learning. Often, not all of these requirements can be satisfactorily met, and shortcomings in one can sometimes be offset by one or both of the others. These requirements are:

  • Quantity: Machine Learning algorithms need a large number of examples in order to provide the most reliable results. Most training sets for supervised learning will involve thousands, or tens of thousands, of examples.
  • Variability: Machine Learning aims to observe similarities and differences in data. If the data is too similar (or too random), the algorithm will not be able to learn effectively from it. In classification learning, for example, the number of examples of each class in the training data is critical to the chances of success.
  • Dimensionality: Machine Learning problems often operate in multidimensional space, with each dimension associated with a certain input variable. The greater the amount of missing information in the data, the greater the amount of empty space which prevents learning. Therefore, the level of completeness of the data is an important factor in the success of the learning process (a brief data-profiling sketch follows this list).
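
A quick data profile can give a first impression of how well a dataset meets these three requirements. The sketch below is only illustrative and assumes a pandas DataFrame loaded from a hypothetical file with a hypothetical high_potential label column.

```python
# Rough checks against the three data requirements (file and column names are placeholders).
import pandas as pd

df = pd.read_csv("applications.csv")       # hypothetical dataset

# Quantity: how many examples are available?
print("examples:", len(df))

# Variability: how balanced are the classes we want to learn?
print(df["high_potential"].value_counts(normalize=True))

# Dimensionality / completeness: what share of each input variable is missing?
print(df.isna().mean().sort_values(ascending=False))
```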

Machine Learning can also be aided by high quality human insight. The permutations and combinations of analyses and scenarios that can be studied from a given set of data are often vast. The situation can be simplified by conversations with subject matter experts. Based on their knowledge of the situation, they can often highlight the aspects of the data that are most likely to provide insights. For example, a recruiting expert can help to identify what data points are most likely to drive a company’s selection decisions based on many years of being involved in and observing those decisions. Knowledge of underlying processes within an organization can also help the data scientist select the algorithm which best models that process and which, therefore, has the greatest chance of success.


HR Technology
How does Machine Learning work?

This is the second in a series of articles intended to make Machine Learning more approachable to those without technical training. The first article, which describes typical uses and examples of Machine Learning, can be found here.

In this installment of the series, a simple example will be used to illustrate the underlying process of learning from positive and negative examples, which is the simplest form of classification learning. I have erred on the side of simplicity to make the principles of Machine Learning accessible to all, but I should emphasize that real life use cases are rarely as simple as this.

Learning from a training set

Imagine that a company has a recruiting process which looks at many thousands of applications and separates them into two groups – those that have ‘high potential’ to receive a job with the company, and those that do not. Currently, human beings decide which group each application falls into. Imagine that we want to learn and predict which applications are considered ‘high potential’. We obtain some data from the company for a random set of prior applications, both those which were classified as high potential (positive examples) and those which were not (negative examples). We aim to find a description that is shared by all the positive examples and by none of the negative examples. Then, when a new application arrives, we can use this description to determine whether the new application should be considered ‘high potential’.

Further analysis of the applications reveals that there are two main characteristics that affect whether an application could be described as ‘high potential’. The first is the College GPA of the applicant, and the second is the applicant’s performance on a test that they undertake during the application process. We therefore decide only to consider these factors in our determination of whether an application is ‘high potential’. These are our input attributes.

We can therefore take a subset of current applications and represent each one by two numeric values (x,y) where x is the applicant’s college GPA, and y is the applicant’s performance in the test. We can also assign each application a value of 1 if it is a positive example and 0 if it is a negative example. This is called the training set.
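
As a minimal illustration, the training set described above could be represented as two arrays: one holding the (GPA, test score) pairs and one holding the 1/0 labels. The numbers below are invented purely to show the shape of the data and are reused in the sketches that follow.

```python
import numpy as np

# Each row is one application: (college GPA, test performance); labels: 1 = 'high potential', 0 = not.
X = np.array([[3.6, 82.0], [3.9, 91.0], [2.4, 55.0], [3.1, 60.0],
              [3.8, 77.0], [2.9, 88.0], [2.2, 47.0], [3.5, 95.0]])
y = np.array([1, 1, 0, 0, 1, 0, 0, 1])
```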

For this simple example, the training set can be plotted on a graph in two dimensions, with positive examples marked as a 1 and negative examples marked as a zero, as illustrated below.

After looking at the data further, we can establish certain minimum values on the two dimensions x and y, and we can say that any ‘high potential’ application must fall above these values. That is x > x1 and y > y1 for suitable values of x1 and y1.

This then defines the set of ‘high potential’ applications as a rectangle on our chart, as shown here.

In this way, we have made the hypothesis that our class of ‘high potential’ applications is a rectangle in two-dimensional space. We now reduce the problem to finding the values of x1 and y1 so that we have the closest ‘fit’ to the positive examples in our training set.

We now decide to try a specific rectangle to see how well it fits the training data. We call this rectangle r; r is a hypothesis function. We can try r on our training set and count how many instances in the training set it misclassifies – positive examples that fall outside the rectangle and negative examples that fall inside it. The total number of these instances is called the error of r. Our aim is to use the training set to make this error as low as possible, even to make it zero if we can.
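
Continuing the toy arrays from the earlier sketch, a rectangle hypothesis and its training error might look like this; the threshold values x1 and y1 are arbitrary guesses chosen only for illustration.

```python
def r(gpa, score, x1=3.0, y1=70.0):
    """Hypothesis r: classify as 'high potential' (1) iff gpa > x1 and score > y1."""
    return int(gpa > x1 and score > y1)

# Error of r on the training set: the number of examples it misclassifies
# (positives falling outside the rectangle plus negatives falling inside it).
predictions = np.array([r(gpa, score) for gpa, score in X])
print("training error of r:", int(np.sum(predictions != y)))
```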

One option is to use the most specific hypothesis. That is, to use the tightest rectangle that contains all of the positive examples and none of the negative examples. Another is to use the most general hypothesis, which is the largest rectangle that contains all the positive examples and none of the negative examples.

In fact, any rectangle between the most specific and most general hypothesis will work on the specific training set we have been given.
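
With the same toy arrays, the most specific hypothesis can be read directly off the positive examples: it is the tightest rectangle that still contains them all. The most general hypothesis would be found by lowering the two thresholds until just before a negative example falls inside the rectangle.

```python
# Most specific hypothesis: thresholds set just tight enough to keep every positive example inside.
pos = X[y == 1]
x1_tight, y1_tight = pos[:, 0].min(), pos[:, 1].min()
print("most specific rectangle: GPA above roughly", x1_tight, "and test score above roughly", y1_tight)
```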

However, our training set is just a sample list of applications, and does not include all applications. Therefore, even if our proposed rectangle r works on the training set, we cannot be sure that it would be free from error if applied to applications which are not in the training set. As a result, our hypothesis rectangle r could create errors when applied outside the training set, as indicated below.

Measuring error

When a hypothesis r is developed from a training set, and when it is then tried out on data that was not in the training set, one of four things can happen:

  1. True positive (TP): When r gives a positive result and it agrees with the actual data
  2. True negative (TN): When r gives a negative result and it agrees with the actual data
  3. False positive (FP): When r gives a positive result and it disagrees with the actual data
  4. False negative (FN): When r gives a negative result and it disagrees with the actual data. (This is the shaded area in the previous diagram)

The total error of the hypothesis function r is equal to the sum of FP and FN.
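
In code, the four outcomes and the resulting error could be tallied as in the sketch below, which reuses the toy hypothesis r from earlier; X_other and y_other stand in for examples that were not part of the training set.

```python
def error_of_hypothesis(X_other, y_other):
    """Count TP, TN, FP and FN for hypothesis r and return the total error FP + FN."""
    tp = tn = fp = fn = 0
    for (gpa, score), actual in zip(X_other, y_other):
        predicted = r(gpa, score)
        if predicted == 1 and actual == 1:
            tp += 1                     # true positive
        elif predicted == 0 and actual == 0:
            tn += 1                     # true negative
        elif predicted == 1 and actual == 0:
            fp += 1                     # false positive
        else:
            fn += 1                     # false negative
    return fp + fn

print("total error (FP + FN):", error_of_hypothesis(X, y))   # shown here on the training arrays
```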

Ideally we would want this to equal zero. However…

Noise

The previous example of learning ‘high potential’ applications based on two input attributes is very simplistic. Most learning scenarios will involve hundreds or thousands of input attributes, tens of thousands of examples in the training set and will take hours, days or weeks of computer capacity to process.

It is virtually impossible to create simple hypotheses that have zero error in these situations, due to noise. Noise is unwanted anomalies in the data that can disguise or complicate underlying relationships and weaken the learning process. The diagram below shows a dataset that may be affected by noise, and for which a simple rectangle hypothesis cannot work, and a more complex graphical hypothesis is necessary for a perfect fit.

Noise can be caused by:

  • Errors or omissions in the input data
  • Errors in data labeling
  • Hidden attributes which are unobservable and for which no data is available, but which affect the classification.

Despite noise, data scientists will usually aim to find the simplest hypothesis possible on a training set, for example a line, rectangle or simple polynomial expression. They will be willing to accept a certain degree of training error in order to keep the hypothesis as simple as possible. Simple hypotheses are easier to construct, explain and generally require less processing power and capacity, which is an important consideration on large datasets.

Generalization, underfit and overfit

As observed above, it is necessary for a data scientist to make a hypothesis about which function best fits the data in the training set. In practical terms, this means that the data scientist is making assumptions that a certain model or algorithm is the best one to fit the training data. The learning process requires such ingoing assumptions or hypotheses, and this is called the inductive bias of the learning algorithm.

As we also observed, it is possible for a certain algorithm to fit well to a training set, but then to fail when applied to data outside the training set. Therefore, once an algorithm is established from the training set, it becomes necessary to test the algorithm against a set of data outside the training set to determine if it is an acceptable fit for new data. How well the model predicts outcomes for new data is called generalization.

If a data scientist tries to fit a hypothesis algorithm which is too simple, although it might give an acceptable error level for the training data, it may have a much larger error when new data is processed. This is called underfitting. For example, trying to fit a straight line to a relationship that is a higher order polynomial might work reasonably well on a certain training set, but will not generalize well.

Similarly, if a hypothesis function is used which is too complex, it will not generalize well — for example, if a multi-order polynomial is used in a situation where the relationship is close to linear. This is called overfitting.
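
A small numerical sketch of under- and overfitting, using invented one-dimensional data with a curved underlying relationship: a straight line (degree 1) tends to underfit, a moderate polynomial tends to fit well, and a high-order polynomial tends to drive the training error down while doing worse on data outside the training set.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 20)
y = x**3 - 2 * x + rng.normal(scale=2.0, size=x.size)    # curved relationship plus noise

x_train, y_train = x[:12], y[:12]        # training set
x_new, y_new = x[12:], y[12:]            # data outside the training set

for degree in (1, 3, 9):                 # too simple, about right, too complex
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    new_err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree}: training error {train_err:.2f}, error on new data {new_err:.2f}")
```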

Generally the success of a learning algorithm is a finely balanced trade-off between three factors:

  1. The amount of data in the training set
  2. The level of the generalization error on new data
  3. The complexity of the original hypothesis which was fitted to the data

Problems in any one of these can often be addressed by adjusting one of the others, but only to a degree.

The typical process of Machine Learning

Putting all of the above observations together, we can now outline the typical process used in Machine Learning. This process is designed to maximize the chances of learning success and to effectively measure the error of the algorithm.

Training: A subset of real data is provided to the data scientist. The data includes a sufficient number of positive and negative examples to allow any potential algorithm to learn. The data scientist experiments with a number of algorithms before deciding on those which best fit the training data.

Validation: A further subset of real data is provided to the data scientist with similar properties to the training data. This is called the validation set. The data scientist will run the chosen algorithms on the validation set and measure the error. The algorithm that produces the least error is considered to be the best. It is possible that even the best algorithm can overfit or underfit the data, producing a level of error which is unacceptable.

Testing: It will be important to measure the error of any learning algorithm that is considered implementable. The validation set should not be used to calculate this error as we have already used the validation set to choose the algorithm so that it has minimal error. Therefore the validation set has now effectively become a part of the training set. To obtain an accurate and reliable measure of error, a third set of data should be used, known as the test set. The algorithm is run on the test set and the error is calculated.
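
The sketch below walks through this training / validation / testing workflow on synthetic data, using scikit-learn as one possible library; the two candidate algorithms and all names are chosen only for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # synthetic positive/negative labels

# Split into training data and held-out data, then split the held-out data into
# a validation set (to choose between algorithms) and a test set (to measure final error).
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_hold, y_hold, test_size=0.5, random_state=0)

candidates = {
    "logistic regression": LogisticRegression(),
    "decision tree": DecisionTreeClassifier(max_depth=3, random_state=0),
}

# Training + validation: fit every candidate on the training set and keep the one
# with the smallest error (i.e. the highest accuracy) on the validation set.
for model in candidates.values():
    model.fit(X_train, y_train)
best_name = max(candidates, key=lambda name: candidates[name].score(X_val, y_val))
best_model = candidates[best_name]

# Testing: the untouched test set gives the final estimate of the chosen model's error.
test_error = 1 - best_model.score(X_test, y_test)
print(f"chosen algorithm: {best_name}, test error: {test_error:.3f}")
```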

The typical output of a classification algorithm

The typical output of a classification algorithm can take two forms:

Discrete classifiers. A binary output (YES or NO, 1 or 0) to indicate whether the algorithm has classified the input instance as positive or negative. Using our earlier example, the algorithm would simply say that the application is ‘high potential’ or it is not. This is particularly useful if there is no expectation of human intervention in the decision making process, such as if the company has no upper or lower limit to the number of applications which could be considered ‘high potential’.

Probabilistic classifiers. A probabilistic output (a number between 0 and 1) which represents the likelihood that the input falls into the positive class. For example, the algorithm may indicate that the application has a 0.68 probability of being high potential. This is particularly useful if human intervention is to be expected in the decision making process, such as if the company has a limit to the number of applications which could be considered ‘high potential’. Note that a probabilistic output becomes a binary output as soon as a human defines a ‘cutoff’ to determine which instances fall into the positive class.
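
Both output forms are available in most classification libraries. The sketch below assumes the best_model fitted in the previous workflow sketch and shows the discrete output, the probabilistic output, and how a human-chosen cutoff turns the latter into the former; the input values and the cutoff are arbitrary.

```python
new_instances = [[0.5, 1.2], [-0.3, 0.1]]                 # illustrative new input instances

print(best_model.predict(new_instances))                  # discrete classifier: 1 or 0 per instance

probs = best_model.predict_proba(new_instances)[:, 1]     # probability of the positive class
print(probs)                                              # probabilistic classifier output

cutoff = 0.68                                             # human-defined threshold
print((probs >= cutoff).astype(int))                      # probabilistic output turned discrete
```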



HR Technology
Reconstructing jobs

Fears of AI-based automation forcing humans out of work or accelerating the creation of unstable jobs may be unfounded. AI thoughtfully deployed could instead help create meaningful work.

Creating good jobs

When it comes to work, workers, and jobs, much of the angst of the modern era boils down to the fear that we’re witnessing the automation endgame, and that there will be nowhere for humans to retreat as machines take over the last few tasks. The most recent wave of commentary on this front stems from the use of artificial intelligence (AI) to capture and automate tacit knowledge and tasks, which were previously thought to be too subtle and complex to be automated. Is there no area of human experience that can’t be quantified and mechanized? And if not, what is left for humans to do except the menial tasks involved in taking care of the machines?

At the core of this concern is our desire for good jobs—jobs that, without undue intensity or stress, make the most of workers’ natural attributes and abilities; where the work provides the worker with motivation, novelty, diversity, autonomy, and work/life balance; and where workers are duly compensated and consider the employment contract fair. Crucially, good jobs support workers in learning by doing—and, in so doing, deliver benefits on three levels: to the worker, who gains in personal development and job satisfaction; to the organization, which innovates as staff find new problems to solve and opportunities to pursue; and to the community as a whole, which reaps the economic benefits of hosting thriving organizations and workers. This is what makes good jobs productive and sustainable for the organization, as well as engaging and fulfilling for the worker. It is also what aligns good jobs with the larger community’s values and norms, since a community can hardly argue with having happier citizens and a higher standard of living.1


Does the relentless advance of AI threaten to automate away all the learning, creativity, and meaning that make a job a good job? Certainly, some have blamed technology for just such an outcome. Headlines today often express concern over technological innovation resulting in bad jobs for humans, or even the complete elimination of certain professions. Some fear that further technology advancement in the workplace will result in jobs that are little more than collections of loosely related tasks where employers respond to cost pressures by dividing work schedules into ever smaller slivers of time, and where employees are being asked to work for longer periods over more days. As the monotonic progress of technology has automated more and more of a firm’s functions, managers have fallen into the habit of considering work as little more than a series of tasks, strung end-to-end into processes, to be accomplished as efficiently as possible, with human labor as a cost to be minimized. The result has been the creation of narrowly defined, monotonous, and unstable jobs, spanning knowledge work and procedural jobs in bureaucracies and service work in the emerging “gig economy.”2

The problem here isn’t the technology; rather, it’s the way the technology is used—and, more than that, the way people think about using it. True, AI can execute certain tasks that human beings have historically performed, and it can thereby replace the humans who were once responsible for those tasks. However, just because we can use AI in this manner doesn’t mean that we should. As we have previously argued, there is tantalizing evidence that using AI on a task-by-task basis may not be the most effective way to apply it.3 Conceptualizing work in terms of tasks and processes, and using technology to automate those tasks and processes, may have served us well in the industrial era, but just as AI differs from previous generations of technologies in its ability to mimic (some) human behaviors, so too should our view of work evolve so as to allow us to best put that ability to use.

In this essay, we argue that the thoughtful use of AI-based automation, far from making humans obsolete or relegating them to busywork, can open up vast possibilities for creating meaningful work that not only allows for, but requires, the uniquely human strengths of sense-making and contextual decisions. In fact, creating good jobs that play to our strengths as social creatures might be necessary if we’re to realize AI’s latent potential and break us out of the persistent period of low productivity growth that we’re experiencing today. But for AI to deliver on its promise, we must take a fundamentally different view of work and how work is organized—one that takes AI’s uniquely flexible capabilities into account, and that treats humans and intelligent machines as partners in search of solutions to a shared problem.

Problems rather than processes

Consider a chatbot—a computer program that a user can converse or chat with—typically used for product support or as a shopping assistant. The computer in the Enterprise from Star Trek is a chatbot, as is Microsoft’s Zo, and the virtual assistants that come with many smartphones. The use of AI allows a chatbot to deliver a range of responses to a range of stimuli, rather than limiting it to a single stereotyped response to a specific input. This flexibility in recognizing inputs and generating appropriate responses is the hallmark of AI-based automation, distinguishing it from automation using prior generations of technology. Because of this flexibility, AI-enabled systems can be said to display digital behaviors, actions that are driven by the recognition of what is required in a particular situation as a response to a particular stimulus.

We can consider a chatbot to embody a set of digital behaviors: how the bot responds to different utterances from the user. On the one hand, the chatbot’s ability to deliver different responses to different inputs gives it more utility and adaptability than a nonintelligent automated system. On the other hand, the behaviors that chatbots evince are fairly simple, constrained to canned responses in a conversation plan or limited by access to training data.4 More than that, chatbots are also constrained by their inability to leverage the social and cultural context they find themselves in. This is what makes chatbots – and AI-enabled systems generally – fundamentally different from humans, and an important reason that AI cannot “take over” all human jobs.

Humans rely on context to make sense of the world. The meaning of “let’s table the motion,” for example, depends on the context it’s uttered in. Our ability to refer to the context of a conversation is a significant contributor to our rich behaviors (as opposed to a chatbot’s simple ones). We can tune our response to verbal and nonverbal cues, past experience, knowledge of past or current events, anticipation of future events, knowledge of our counterparty, our empathy for the situation of others, or even cultural preferences (whether or not we’re consciously aware of them). The context of a conversation also evolves over time; we can infer new facts and come to new realizations. Indeed, the act of reaching a conclusion or realizing that there’s a better question to ask might even provide the stimulus required to trigger a different behavior.

Chatbots are limited in their ability to draw on context. They can only refer to external information that has been explicitly integrated into the solution. They don’t have general knowledge or a rich understanding of culture. Even the ability to refer back to earlier in a conversation is problematic, making it hard for earlier behaviors to influence later ones. Consequentially, a chatbot’s behaviors tend to be of the simpler, functional kind, such as providing information in response to an explicit request. Nor do these behaviors interact with each other, preventing more complex behaviors from emerging.

The way chatbots are typically used exemplifies what we would argue is a “wrong” way to use AI-based automation—to execute tasks typically performed by a human, who is then considered redundant and replaceable. By only automating the simple behaviors within the reach of technology, and then treating the chatbot as a replacement for humans, we’re eliminating richer, more complex social and cultural behaviors that make interactions valuable. A chatbot cannot recognize humor or sarcasm, interpret elliptical allusions, or engage in small talk—yet we have put them in situations where, being accustomed to human interaction, people expect all these elements and more. It’s not surprising that users find chatbots frustrating and chatbot adoption is failing.5

A more productive approach is to combine digital and human behaviors. Consider the challenge of helping people who, due to a series of unfortunate events, find themselves about to become homeless. Often these people are not in a position to use a task-based interface—a website or interactive voice response (IVR) system—to resolve their situation. They need the rich interaction of a behavior-based interface, one where interaction with another human will enable them to work through the issue, quantify the problem, explore possible options, and (hopefully) find a solution.

We would like to use technology to improve the performance of the contact center such a person might call in this emergency. Reducing the effort required to serve each client would enable the contact center to serve more clients. At the same time, we don’t want to reduce the quality of the service. Indeed, ideally, we would like to take some of the time saved and use it to improve the service’s value by empowering social workers to delve deeper into problems and find more suitable (ideally, longer-term) solutions. This might also enable the center to move away from break-fix operation, where a portion of demand is due to the center’s inability to resolve problems at the last time of contact. Clearly, if we can use technology appropriately then it might be possible to improve efficiency (more clients serviced), make the center more effective (more long-term solutions and less break-fix), and also increase the value of the outcome for the client (a better match between the underlying need and services provided).

If we’re not replacing the human, then perhaps we can augment the human by using a machine to automate some of the repetitive tasks. Consider oncology, a common example used to illustrate this human-augmentation strategy. Computers can already recognize cancer in a medical image more reliably than a human. We could simply pass responsibility for image analysis to machines, with the humans moving to more “complex” unautomated tasks, as we typically integrate human and machine by defining handoffs between tasks. However, the computer does not identify what is unusual with this particular tumor, or what it has in common with other unusual tumors, and launch into the process of discovering and developing new knowledge. We see a similar problem with our chatbot example, where removing the humans from the front line prevents social workers from understanding how the factors driving homelessness are changing, resulting in a system that can only service old demand, not new. If we break this link between doing and understanding, then our systems will become more precise over time (as machine operation improves) but they will not evolve outside their algorithmic box.

Our goal must be to construct work in such a way that digital behaviors are blended with human behaviors, increasing accuracy and effectiveness, while creating space for the humans to identify the unusual and build new knowledge, resulting in solutions that are superior to those that digital or human behaviors would create in isolation. Hence, if we’re to blend AI and human to achieve higher performance, then we need to find a way for human and digital behaviors to work together, rather than in sequence. To do this, we need to move away from thinking of work as a string of tasks comprising a process, to envisioning work as a set of complementary behaviors concentrated on addressing a problem. Behavior-based work can be conceptualized as a team standing around a shared whiteboard, each holding a marker, responding to new stimuli (text and other marks) appearing on the board, carrying out their action, and drawing their result on the same board. Contrast this with task-based work, which is more like a bucket brigade where the workers stand in a line and the “work” is passed from worker to worker on its way to a predetermined destination, with each worker carrying out his or her action as the work passes by. Task-based work enables us to create optimal solutions to specific problems in a static and unchanging environment. Behavior-based work, on the other hand, provides effective solutions to ill-defined problems in a complex and changing world.

If we’re to blend AI and human to achieve higher performance, then we need to find a way for human and digital behaviors to work together, rather than in sequence.

To facilitate behavior-based work, we need to create a shared context that captures what is known about the problem to be solved, and against which both human and digital behaviors can operate. The starting point in our contact center example might be a transcript of the conversation so far, transcribed via a speech-to-text behavior. A collection of “recognize-client behaviors” monitor the conversation to determine if the caller is a returning client. This might be via voice-print or speech-pattern recognition. The client could state their name clearly enough for the AI to understand. They may have even provided a case number or be calling from a known phone number. Or the social worker might step in if they recognize the caller before the AI does. Regardless, the client’s details are fetched from case management to populate our shared context, the shared digital whiteboard, with minimal intervention.

As the conversation unfolds, digital behaviors use natural language to identify key facts in the dialogue. A client mentions a dependent child, for example. These facts are highlighted for both the human and other digital behaviors to see, creating a summary of the conversation updated in real time. The social worker can choose to accept the highlighted facts, or cancel or modify them. Regardless, the human’s focus is on the conversation, and they only need to step in when captured facts need correcting, rather than being distracted by the need to navigate a case management system.

Digital behaviors can encode business rules or policies. If, for example, there is sufficient data to determine that the client qualifies for emergency housing, then a business-rule behavior could recognize this and assert it in the shared context. The assertion might trigger a set of “find emergency housing behaviors” that contact suitable services to determine availability, offering the social worker a set of potential solutions. Larger services might be contacted via B2B links or robotic process automation (if no B2B integration exists). Many emergency housing services are small operations, so the contact might be via a message (email or text) to the duty manager, rather than via a computer-to-computer connection. We might even automate empathy by using AI to determine the level of stress in the client’s voice, providing a simple graphical measure of stress to the social worker to help them determine if the client needs additional help, such as talking to an external service on the client’s behalf.
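
As a purely illustrative sketch of this ‘shared whiteboard’ idea: the shared context can be pictured as a simple dictionary of facts, and each digital behavior as a small function that fires when its trigger appears, adding new facts for the human and for the other behaviors to react to. Every behavior name, fact and rule below is invented.

```python
# Toy blackboard-style sketch: digital behaviors react to facts on a shared context.
shared_context = {
    "transcript": "I'm calling again ... I have a dependent child and nowhere to stay tonight."
}

def recognize_client(ctx):
    # Stand-in for voice-print / case-number recognition populating client details.
    if "calling again" in ctx.get("transcript", "") and "client_id" not in ctx:
        ctx["client_id"] = "case-1042"

def extract_facts(ctx):
    # Stand-in for natural-language extraction of key facts from the conversation.
    if "dependent child" in ctx.get("transcript", ""):
        ctx["has_dependent_child"] = True

def emergency_housing_rule(ctx):
    # Stand-in for an encoded business rule asserting eligibility into the shared context.
    if ctx.get("has_dependent_child") and "qualifies_for_emergency_housing" not in ctx:
        ctx["qualifies_for_emergency_housing"] = True

behaviors = [recognize_client, extract_facts, emergency_housing_rule]

# Behaviors keep firing until none of them adds a new fact; a social worker could inspect
# or correct the shared context at any point instead of driving a case-management system.
changed = True
while changed:
    before = dict(shared_context)
    for behavior in behaviors:
        behavior(shared_context)
    changed = shared_context != before

print(shared_context)
```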

As this example illustrates, the superior value provided by structuring work around problems, rather than tasks, relies on our human ability to make sense of the world, to spot the unusual and the new, to discover what’s unique in this particular situation and create new knowledge. The line between human and machine cannot be delineated in terms of knowledge and skills unique to one or the other. The difference is that humans can participate in the social process of creating knowledge, while machines can only apply what has already been discovered.6

Good for workers, firms, and society

AI enables us to think differently about how we construct work. Rather than construct work from products and specialized tasks, we can choose to construct work from problems and behaviors. Individuals consulting financial advisors, for example, typically don’t want to purchase investment products as the end goal; what they really want is to secure a happy retirement. The problem can be defined as follows: What does a “happy retirement” look like? How much income is needed to support that lifestyle? How should spending and saving be balanced today to find the cash to invest and to navigate any (financial) challenges that life puts in the road? And what investments give the client the best shot at getting from here to there? The financial advisor, client, and robo-advisor could collaborate around a common case file, a digital representation of their shared problem, incrementally defining what a “happy retirement” is and, consequently, the needed investment goals, income streams, and so on. This contrasts with treating the work as a process of “request investment parameters” (which the client doesn’t know) and then “recommend insurance” and “provide investment recommendations” (which the client doesn’t want, or only wants as a means to an end). The financial advisor’s job is to provide the rich human behaviors – educator to the investor’s student – to elucidate and establish the retirement goals (and, by extension, investment goals), while the robo-advisor provides simple algorithmic ones, responding to changes in the case file by updating it with an optimal investment strategy. Together, the human and robo-advisor can explore more options (thanks to the power and scope of digital behaviors) and develop a deeper understanding of the client’s needs (thanks to the human advisor’s questioning and contextual knowledge) than either could alone, creating more value as a result.

Rather than construct work from products and specialized tasks, we can choose to construct work from problems and behaviors.

If organizing work around problems and combining AI and human behaviors to help solve them can deliver greater value to customers, it similarly holds the potential to deliver greater value for businesses, as productivity is partly determined by how we construct jobs. The majority of the productivity benefits associated with a new technology don’t come from the initial invention and introduction of new production technology. They come from learning-by-doing:7 workers at the coalface identifying, sharing, and solving problems and improving techniques. Power looms are a particularly good example, with their introduction into production improving productivity by a factor of 2.5, but with a further factor of 20 provided by subsequent learning-by-doing.8

It’s important to maintain the connection between the humans—the creative problem identifiers—and the problems to be discovered. This is something that Toyota did when it realized that highly mechanized factories were efficient, but they didn’t improve. Humans were reintroduced and given roles in the production process to enable them to understand what the machines were doing, develop expertise, and consequently improve the production processes. The insights from these workers reduced waste in crankshaft production by 10 percent and helped shorten the production line. Others improved axle production and cut costs for chassis parts.9

This improvement was no coincidence. Jobs that are good for individuals—because they make the most of human sense-making nature—generally are also good for firms, because they improve productivity through learning by doing. As we will see below, they can also be good for society as a whole.

Consider bus drivers. With the development of autonomous vehicles in the foreseeable future, pundits are worried about what to do with all the soon-to-be-unemployed bus drivers. However, rather than fearing that autonomous buses will make bus drivers redundant, we should acknowledge that buses will still find themselves in situations that only a human, and human behaviors, can deal with. Challenging weather (heavy rain or extreme glare) might require a driver to step in and take control. Unexpected events – accidents, road work, or an emergency – could require a human’s judgment to determine which road rule to break. (Is it permissible to edge into a red light while making space for an emergency vehicle?) Routes need to be adjusted for anything from a temporarily moved stop to roadwork. A human presence might be legally required to, for example, monitor underage children or represent the vehicle at an accident.

As with chatbots, automating the simple behaviors and then eliminating the human will result in an undesirable outcome. A more productive approach is to discover the problems that bus drivers deal with, and then structure work and jobs around these problems and the kinds of behaviors needed to solve them. AI can be used to automate the simple behaviors, enabling the drivers to focus on more important ones, making the human-bus combination more productive as a result. The question is: Which problems and decision centers should we choose?

Let us assume that the simple behaviors required to drive a bus are automated. Our autonomous bus can steer, avoiding obstacles and holding its lane, maintain speed and separation with other vehicles, and obey the rules of the road. We can also assume that the bus will follow a route and schedule. If the service is frequent enough, then the collection of buses on a route might behave as a flock, adjusting speed to maintain separation and ensure that a bus arrives at each stop every five minutes or so, rather than attempting to arrive at a specific time.

As with the power loom, automating these simple behaviors means that drivers are not required to be constantly present for the bus (or loom) to operate. Rather than drive a single bus, they can now “drive” a flock of buses. The drivers monitor where each bus is and how it is tracking to schedule, with the system suggesting interventions to overcome problems such as a breakdown, congestion, or changed road conditions. The drivers can step in to pilot a particular bus should the conditions be too challenging (roadworks, perhaps, where markings and signaling are problematic), or to deal with an event that requires that human touch.

These buses could all be on the same route. A mobile driver might be responsible for four to five sequential buses on a route, zipping between them as needed to manage accidents or to deal with customer complaints (or disagreements between customers). Or the driver might be responsible for buses in a geographic area, on multiple routes. It’s even possible to split the work, creating a desk-bound “driver” responsible for drone operation of a larger number of buses, while mobile and stationary drivers restrict themselves to incidents requiring a physical presence. School or community buses, for example, might have remote video monitoring while in transit, complemented by a human presence at stops.

Breaking the requirement that each bus have its own driver will provide us with an immediate productivity gain. If 10 drivers can manage 25 autonomous buses, then we will see productivity increase by a factor of 2.5, as we did with power looms: good jobs for the firm, as workers are more productive. Doing this requires an astute division of labor between mobile, stationary, and remote drivers, creating three different “bus driver” jobs that meet different work preferences: good jobs for the worker and the firm. Ensuring that these jobs involve workers as stakeholders in improving the system enables us to tap into learning-by-doing, allowing workers to continue to work on their craft, and the subsequent productivity improvements that learning-by-doing provides, which is good for workers and the firm.

These jobs don’t require training in software development or AI. They do require many of the same skills as existing bus drivers: understanding traffic, managing customers, dealing with accidents, and other day-to-day challenges. Some new skills will also be required, such as training a bus where to park at a new bus stop (by doing it manually the first time), or managing a flock of buses remotely (by nudging routes and separations in response to incidents), though these skills are not a stretch. Drivers will, however, require a higher level of numeracy and literacy than in the past, as it is a document-driven world that we’re describing. Regardless, shifting from manual to autonomous buses does not imply making existing bus drivers redundant en masse. Many will make the transition on their own, others will require some help, and a few will require support to find new work.

The question, then, is: what to do with the productivity dividend? We could simply cut the cost of a bus ticket, passing the benefit on to existing patrons. Some of the savings might also be returned to the community, as public transport services are often subsidized. Another choice is to transform public transport, creating a more inclusive and equitable public transport system.

Buses are often seen as an unreliable form of transport: schedules are sparse, with some buses running only hourly for part of the day and not at all otherwise, and route coverage is inadequate, leaving many (less fortunate) members of society in public transport deserts (locations more than 800 m from high-frequency public transport). We could rework the bus network to provide a more frequent service, as well as extending service into under-serviced areas, eliminating public transport deserts. The result could be a fairer and more equitable service at a similar cost to the old, with the same number of jobs. This has the potential to transform lives. Reliable bus services might result in higher patronage, resulting in more bus routes being created, more frequent services on existing routes, and more bus “drivers” being hired. Indeed, this is the pattern we saw with power looms during the Industrial Revolution. Improved productivity resulted in lower prices for cloth, enabling a broader section of the community to buy higher quality clothing, which increased demand and created more jobs for weavers. Automation can result in jobs that are good for the worker, the firm, and society as a whole.


How will we shape the jobs of the future?

There is no inevitability about the nature of work in the future. Clearly, the work will be different than it is today, though how it is different is an open question. Predictions of a jobless future, or a nirvana where we live a life of leisure, are most likely wrong. It’s true that the development of new technology has a significant effect on the shape society takes, though this is not a one-way street, as society’s preferences shape which technologies are pursued and which of their potential uses are socially acceptable. Melvin Kranzberg, a historian specializing in the history of technology, captured this in his fourth law: “Although technology might be a prime element in many public issues, nontechnical factors take precedence in technology-policy decisions.”10

The jobs first created by the development of the moving assembly line were clearly unacceptable by social standards of the time. The solution was for society to establish social norms for the employee-employer relationship – the legislation of the eight-hour day is one example – and to develop the social institutions to support this new relationship. New “sharing economy” jobs and AI encroaching into the workplace suggest that we might be reaching a similar point, with many firms feeling that they have no option but to create bad jobs if they want to survive. These bad jobs can carry an economic cost, as they drag profitability down. In this essay, as in our previous one,11 we have argued that these bad jobs are also preventing us from capitalizing on the opportunity created by AI.

Our relationship with technology has changed, and how we conceive work needs to change as a consequence. Prior to the Industrial Revolution, work was predominantly craft-based; we had an instrumental relationship with technology; and social norms and institutions were designed to support craft-based work. After the Industrial Revolution, with the development of the moving production line as the tipping point, work was based on task-specialization, and a new set of social norms and institutions were developed to support work built around products, tasks, and the skills required to prosecute them. With the advent of AI, our relationship with technology is changing again, and this automation is better thought of as capturing behaviors rather than tasks. As we stated previously, if automation in the industrial era was the replication of tasks previously isolated and defined for humans, then in this post-industrial era automation might be the replication of isolated and well-defined behaviors that were previously unique to humans.12

There are many ways to package human and digital behaviors, and therefore many ways of constructing the jobs of the future. We, as a community, get to determine what these jobs look like. This future will still require bus drivers, mining engineers and machinery operators, financial advisors, as well as social workers and those employed in the caring professions, because it is our human proclivity for noticing the new and unusual, and for making sense of the world, that creates value. Few people want financial products for their retirement fund; what they really want is a happy retirement. In a world of robo-advisors, all the value is created in the human conversation between financial advisor and client, where they work together to discover what the client’s happy retirement looks like (and, consequently, the investment goals, income streams, and so on), not in the mechanical creation and implementation of an investment strategy based on predefined parameters. If we’re to make the most of AI, realize the productivity (and, consequently, quality of life) improvements it promises, and deliver the opportunities for operational efficiency, then we need to choose to create good jobs:

  • Jobs that make the most of our human nature as social problem identifiers and solvers
  • Jobs that are productive and sustainable for organizations
  • Jobs with an employee-employer relationship aligned with social norms
  • Jobs that support learning by doing, providing for the worker’s personal development, for the improvement of the organization, and for the wealth of the community as a whole.

The question, then, is: What do we want these jobs of the future to look like?


Pic
HR Technology
How HR Can Jumpstart the Enterprise AI Transformation

As artificial intelligence (AI) takes hold in the workplace, both employers and employees believe it will benefit the enterprise and help workers to be more productive. In fact, 93% of employees would be willing to take instructions from a robot, according to a new study Oracle conducted together with Future Workplace.

That’s not surprising when you consider that employees encounter AI all the time as consumers. Most people today have no problem following the route their navigation app chooses or asking Siri or Alexa to put together a music playlist. Employees are ready for that kind of experience to appear in the workplace.

Yet many enterprises are moving very cautiously when it comes to implementing AI, despite the clear benefits. What’s preventing them from adopting AI more quickly?

For some companies, there is uncertainty around cost or concern about security. The biggest obstacle the survey uncovered, though, is a worker-preparation gap: organizations simply aren’t taking the steps needed to get their employees ready for the AI transition.

They are missing a significant opportunity to get ahead of the AI curve. There is huge pent-up demand among employees to change their enterprise software experience – to move from the cumbersome, form-filling designs of the past to using natural language to interact with data across (and outside) the enterprise.

The last change of this magnitude was the integration of mobile devices with enterprise software, a change that initially responded to customer demands and was therefore driven internally by sales and marketing. The AI transition, by contrast, is driven first by the internal demands of employees and their need for information, insights, and collaboration. This time around, HR is the internal driver. In fact, paving the way for AI in the enterprise is one of the most important tasks facing HR leaders today.

So, what can HR do now to jumpstart the process? Here are four ideas:

1. Begin incorporating AI into some existing processes where the data already exists, adding machine learning and chatbots in place of older UIs. The HR help desk function is an ideal starting point, allowing employees to ask common questions easily, in natural language, whether it’s how much vacation time they have left or the procedure for handling a sensitive management issue.
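As a rough illustration of the help-desk idea, here is a minimal sketch of routing employee questions to simple HR intents. The intents, keywords, and answers are invented for illustration; a production assistant would sit on a proper NLU service and the organization’s live HR data rather than keyword matching.

```python
# Minimal sketch: route common HR questions to hypothetical intents.
INTENT_KEYWORDS = {
    "vacation_balance": ["vacation", "leave balance", "days off", "pto"],
    "sensitive_issue": ["harassment", "grievance", "sensitive", "complaint"],
    "payroll": ["payslip", "salary", "payroll", "tax form"],
}

RESPONSES = {
    "vacation_balance": "You have {days} vacation days remaining this year.",
    "sensitive_issue": "I can connect you confidentially with an HR business partner.",
    "payroll": "Your latest payslip is available in the payroll portal.",
    "fallback": "I'm not sure yet; routing your question to the HR help desk.",
}

def classify(question: str) -> str:
    """Keyword-based stand-in for a real natural-language classifier."""
    q = question.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in q for k in keywords):
            return intent
    return "fallback"

def answer(question: str, employee: dict) -> str:
    intent = classify(question)
    if intent == "vacation_balance":
        return RESPONSES[intent].format(days=employee["vacation_days_left"])
    return RESPONSES[intent]

print(answer("How much vacation time do I have left?", {"vacation_days_left": 12}))
```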

Another area where HR can drive business value with AI is recruiting. The talent war is real and it’s brutal. Making it easier for candidates to find the right job posting, get personalized guidance through the application process, and be guided on the best next steps can make the difference in landing talent. The same technology can be extended to employees looking for the right opportunity within the enterprise to grow their career: not only can it point them to the best fits among open jobs, it can also connect them with the most appropriate resources and experts for them individually. Doing this can be a huge retention asset, as it demonstrates that the company is invested in workers and their career development.
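To show the shape of such matching (for external candidates or internal moves), here is a minimal sketch that scores the overlap between a person’s skills and a posting’s requirements. The skill lists and the simple Jaccard score are illustrative assumptions; production matching tools typically use much richer profiles, embeddings, and outcome data.

```python
# Minimal sketch: rank job postings for a candidate by skill overlap.
def jaccard(a: set, b: set) -> float:
    """Share of skills in common relative to all skills mentioned."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def rank_postings(candidate_skills, postings):
    scored = [
        (jaccard(set(candidate_skills), set(p["required_skills"])), p["title"])
        for p in postings
    ]
    return sorted(scored, reverse=True)

# Hypothetical postings and candidate profile
postings = [
    {"title": "People Analytics Lead", "required_skills": ["sql", "statistics", "hr"]},
    {"title": "Recruiter", "required_skills": ["sourcing", "interviewing", "hr"]},
]
print(rank_postings(["hr", "statistics", "python"], postings))
# Highest-scoring posting first: "People Analytics Lead" in this toy example.
```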

2. Make training the key to success. To take full advantage of AI, employees and leadership need to be on the same page. A workforce that doesn’t have the chance to develop an AI-based skillset won’t deliver the productivity gains or improved customer satisfaction that workplace AI promises. At the same time, when employees feel they aren’t getting help to build those skills, it creates anxiety. They can imagine a scenario that’s more sci-fi than reality, one where the machines are taking over. Instead of welcoming AI, they’ll start to worry it will cost them their jobs. That creates problems for HR and the enterprise, instead of opportunities.

HR leaders will have a critical role in addressing this skills gap – and, according to the survey, many of them want to take on this task. In fact, a chief concern in HR is that it won’t be able to keep up with the pace of AI deployment over the next three years. What holds HR leaders back is a lack of organizational commitment to preparing the workforce, including its cost – even though the long-term cost of not preparing the workforce is far greater.

3. Don’t wait until all the training resources are available; use existing resources to create the start of an AI training program. This will help HR demonstrate the value of training in getting the most out of the AI investment. (In fact, consider training executives first, as they are likely to be the earliest direct beneficiaries of AI. Make them converts and resources will follow.)

4. Move deliberately and gradually. Make the easier changes first, focusing on building an experience for employees that mirrors the one they have come to expect as consumers. That’s easiest to do with cloud technologies, because they allow AI to be embedded quickly into existing enterprise applications. Done right, it will help to allay any anxiety (among employees or management) by demonstrating how AI can make repetitive or mundane tasks much more enjoyable and collaborative. Once people experience the freedom they gain to be creative and strategic, they will welcome it and want more; once management sees the productivity gains and the contribution to retention, it will more readily support a rapid transition to AI.

Bringing AI into the organization is no longer a matter of if, but when. The benefits are too great, and companies that lag in adopting it may find they become laggards in their industries as well. There are obstacles to overcome, but workers are ready to jump in. Incorporating AI into the workplace is an enterprise-wide effort, and HR’s participation and leadership are crucial to success.

What do you think is the key to success in deploying AI into the workplace?


Pic
HR Technology
A 5-Part Process for Using Technology to Improve Your Talent Management

At the law firm Allen & Overy, the idea of replacing traditional, annual performance appraisals with a technology-enabled continuous feedback system did not come from human resources. It came from a leader within the practice. Wanting something that encouraged more-frequent conversations between associates and partners, the senior lawyer read about what companies like Adobe were doing, and then asked his firm to help him create a new approach. When the new system, Compass, was rolled out to all 44 offices, the fact that it was born of a problem identified by internal staff helped accelerate the tool’s adoption across the firm.

In an era of transformative cognitive technologies like AI and machine learning, it’s become obvious that people, practices, and systems must become nimbler too. And because organizational change tends to be driven by those who most acutely feel the pain, it’s often line managers who are the strongest champions for “talent tech”: innovations in how firms hire people, staff projects, evaluate performance, and develop talent.


But as we have observed in our research, consulting work, and partnerships with dozens of Fortune 500 companies and top professional services firms, the transition to new and different ways of managing talent is often filled with challenges and unexpected hurdles. Gaining the most from talent tech, we find, depends on the adopting firm’s ability to confront, and ultimately reinvent, an often outdated system of interlocking processes, behaviors and mindsets. Much like putting a new sofa in the living room makes the rest of the décor look outdated, experimenting with new talent technologies creates an urgency for change in the rest of the organization’s practices.

While the jury is still out on the long-term impact of many of the talent tech experiments we have witnessed, we have observed five core lessons from those firms that seem to be positioning themselves most effectively to reap their benefits:

  1. Talent tech adoption must be driven by business leaders, not the C-suite or corporate functions.
  2. HR must be a partner and enabler — but not the owner.
  3. Fast-iteration methodologies are a prerequisite, because talent tech has to be tailored to specific business needs and company context and culture.
  4. Working with new technologies in new and nimbler ways creates the need for additional innovation in talent practices.
  5. The job of leaders shifts from mandating change to fostering a culture of learning and growth.

Let’s look at these one by one.

1. Talent tech adoption must be owned and driven by business leaders.

Many business leaders we have spoken with have stressed: It’s not about the technology, it’s about solving a problem. It’s no surprise then, as we have observed, that talent tech projects have a greater likelihood of succeeding and scaling when they are driven by the business line — and not by top management or functional heads in HR or IT. Because operational managers are closer to the action, they have better insights into specific business challenges and customer pain points that can be addressed by new technologies.

As a VP charged with talent tech innovation at a large consumer products company told us: “We started our digital transformation top-down, creating a sense of urgency and cascading it down. Now it’s much more bottom-up because you have to experiment, you have to do things that are relevant in the field. The urgency has to come from inside the individual instead of top management.” The company organized a series of road shows that exposed high-potential managers to new developments in AI and enabled them to propose and run with projects of their own.

Putting responsibility for innovation in the hands of those who are closest to customers, and reducing layers of control and approval, increases the likelihood that the talent technologies will be fit for purpose. But for a generation of senior managers and functional heads raised on a steady diet of “visionary leadership,” this more adaptive approach does not always come naturally.

2. HR must be a partner and enabler — but not the owner.

Not only are line managers closely connected to business imperatives, but they are also eager to move fast in technology adoption. They want to seize on the promise of AI, machine learning, and people analytics to improve business results and enhance their career prospects. But their priorities can conflict with other parts of the business.

At one of the companies we worked with, a young, ambitious manager experimented successfully with an on-demand talent platform for staffing employees on projects. But the experiment raised questions, for example, about what latitude bosses had in deciding who’d be allowed to take on extra projects and about whether performance on these extra projects could or should count toward an employee’s annual appraisal and compensation. HR was not involved early enough, was more attuned to the risks than the opportunities, and opposed scaling the project further. Only after a lot of stakeholder management and leadership intervention did the pilot get back on track.

The ramifications of reimagining work are far-reaching, necessitating talent strategies built on the ability to access the right people and skills at the right time and then put them to work in flexible ways for which they will be coached and rewarded. But if middle managers wind up caught in bureaucratic procedures and rule-enforcement mindsets, implementations will falter. That’s why getting buy-in from HR early in the process is so important — and necessary for scaling up when pilots yield promising results.

3. Knowing how to use lean, self-managing team methodologies is a prerequisite.

Because AI-powered tools like on-demand talent platforms and project staffing algorithms are not simply “plug and play,” it can be helpful to use methods such as rapid prototyping, iterative feedback, customer-focused multidisciplinary teams, and task-centered “sprints” — the hallmarks of agile methodologies — to determine their usefulness.

For example, one large industrial company needed a better way to get people on cross-functional projects. Information about people’s skills and capabilities was dispersed across siloed business lines. Rather than attempting to build out a comprehensive system to identify and match employees across all the projects (and the silos), the company piloted the idea with only a few projects and a carefully selected pool of employees. Starting small allowed extremely fast learning and iteration, which in turn enabled broader scaling and more-complex uses of the system.

We have worked with a range of companies that are experimenting with technology platforms that catalogue projects that need doing, match project needs to skill supply, and then source appropriate talent. In each case, significant modifications were needed to adjust to specific requirements. And in most cases the data necessary to run the new systems existed in different formats residing in silos. Companies that lacked experience with lean methodologies had to be trained to operate as agile teams in order to define a specific use case for the technology. This learning curve is often the culprit behind implementation processes that take significantly longer than managers expect.
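For a sense of what such a platform’s core logic might look like, here is a minimal sketch that matches catalogued projects to available people by required skills. The data model and the greedy assignment rule are simplifying assumptions; as noted above, the hard part in practice is cleaning siloed skills data and tailoring the matching rules to the specific business.

```python
# Minimal sketch: assign each catalogued project to the available person who
# covers the most required skills. People, projects, and the one-project-per-person
# rule are hypothetical; real platforms also weigh availability, development goals,
# and manager input.

people = {
    "Ana":    {"python", "finance", "analytics"},
    "Birgit": {"ux", "research"},
    "Chen":   {"python", "ux"},
}

projects = [
    {"name": "Pricing model refresh", "skills": {"python", "finance"}},
    {"name": "Onboarding app redesign", "skills": {"ux", "research"}},
]

def assign(projects, people):
    available = dict(people)
    plan = {}
    for proj in projects:
        # Pick the remaining person with the largest skill overlap for this project.
        best = max(available, key=lambda n: len(available[n] & proj["skills"]), default=None)
        if best is not None:
            plan[proj["name"]] = best
            available.pop(best)  # one project per person in this toy model
    return plan

print(assign(projects, people))
# {'Pricing model refresh': 'Ana', 'Onboarding app redesign': 'Birgit'}
```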

4. Talent tech raises urgency for further talent innovation. 

Much has been made of the scarcity of AI engineers, along with the fact that the precious few are quickly snapped up at huge salaries by the usual suspects – Amazon, Apple, Google, and Facebook. Beyond the hype, many firms are finding that they cannot hire the talent they need (because the top experts prefer to be free agents or already work for competitors) and that the skills and capacities they need evolve rapidly or are best sourced externally. These trends are fueling a strategic shift from acquiring talent to accessing talent on an as-needed contract basis; yet the cultural hurdles to staffing externally can be as challenging as the technological ones, if not more so.

One organization we worked with did not have a good mechanism in place for prioritizing the work requested of its shared internal consulting services. Its highly skilled consultants were responding on a first-come, first-served basis and fielding more demands than they could handle. Often they were also the wrong demands. When the team didn’t have the right people for the work, they’d either do their best to complete it themselves or abandon it altogether. An analysis revealed that a good portion of the work could be done better by highly skilled external contractors, and that the team could dramatically increase its ability to provide value across the organization if it could access a specific set of external expertise. But implementing the change was a challenge because the unit’s internal clients felt “safer” working with internal employees.

Once one part of the people system changes significantly, the pressure is on to change related processes. Companies that have shifted to more-agile ways of working have also found that they can no longer evaluate people once or twice a year on their ability to hit individual targets; they now need to look at how people perform as team members, on an ongoing basis. All of this is driving a shift from annual performance assessments to systems that provide feedback and coaching on a continuous basis, as firms ranging from Allen & Overy to Microsoft have found.

5. Leaders must foster a culture of learning.

One CTO we spoke to tells a story about an AI project that “hit the wall” despite a sequence of green lights. “It was over-administered,” she explained. “We had specified detail into 2019.” As reality on the ground began to diverge from the plan, the people in charge of executing the plans failed to speak up and the project derailed. Without people who feel an “obligation to dissent,” she concluded, it’s hard to innovate.

Across industries and sectors, practitioners and academics seem to agree on one thing: Successfully piloting new technologies requires shifting from a traditional plan-and-implement approach to change to an experiment-and-learn approach. But experiment-and-learn approaches are by definition rife with opportunities for failure, embarrassment, and turf wars. Without parallel work by senior management to shift corporate cultures toward a learning mindset, change will inch along slowly if at all.

When Microsoft CEO Satya Nadella took charge, for example, he saw that fear — and the corporate politics that resulted from it — was the biggest barrier to capturing leadership in cloud computing and mobility solutions. A convert to Carol Dweck’s idea of a growth mindset — the belief that talent is malleable and expandable with effort, practice, and input from others — he prioritized a shift from a “know it all” to a “learn it all” culture as a means to achieving business goals. Today, not only does Microsoft rank among the top firms in cloud computing, but the company is also “cool” again in the minds of the top engineering talent it needs to compete.

There is a lot of fear about the speed and scope of technological change, and it’s perhaps most acutely felt by the middle-management survivors of years of corporate layoffs. Fear does not make people more open to experimenting; rather it leads us to put all our energy and ingenuity toward protecting ourselves — and that is lethal for innovation. That’s why the critical task for leaders in a world in which machines will do more and more of their routine work is to enable a shift, from valuing being right, knowing the answers, or implementing top-down changes, to valuing dissent and debate, asking good questions, and iterating to learn.


Herminia Ibarra is a professor of organizational behavior and the Cora Chaired Professor of Leadership and Learning at Insead. She is the author of Act Like a Leader, Think Like a Leader (Harvard Business Review Press, 2015) and Working Identity: Unconventional Strategies for Reinventing Your Career (Harvard Business Review Press, 2003). Follow her on Twitter @HerminiaIbarra and visit her website.


Patrick Petitti is the co-founder and co-CEO of Catalant. More than 30 percent of the Fortune 100 use Catalant to seamlessly access and deploy internal and external talent. Petitti is also co-author of Reimagining Work: Strategies to Disrupt Talent, Lead Change, and Win with a Flexible Workforce. He received his BS from the Massachusetts Institute of Technology and his MBA from Harvard Business School.