The term “big data” has become synonymous with 21st-century business. In the contingent workforce industry, the rush to amass sprawling sets of data is growing more intense. Companies of all shapes and sizes are speeding ahead in the race to automate recruiting, digitize processes and construct vaults of information about clients and candidates. However, the question contingent workforce leaders need to confront doesn’t concern the amount of data collected – it’s about determining whether they have the right data and how to interpret it.
Is Big Data Too Big?
As Maxwell Wessel observes in a recent Harvard Business Review article, the net we’re casting to capture information is getting larger by the day. Any factor that could affect hiring or managing a workforce, it seems, is thrown into the mix.
“Masses of social, weather, and government data are being leveraged to predict supply chain outages,” Wessel notes. “Enormous amounts of user data are being harnessed at scale to identify individuals among a sea of website clicks. And companies are even starting to leverage huge quantities of text exchanges to build algorithms capable of having conversations with customers.”
All of this is happening in the contingent workforce space now, right down to the introduction of artificial intelligence as a recruiting tool. FirstJob is rolling out Mya, an AI that uses natural language processing and machine learning to automate up to 75 percent of the recruiting process.
Similar to virtual assistants like Amazon’s Alexa or Apple’s Siri, Mya can simulate conversations and somewhat complex interactions with users. Through these exchanges, she continually gathers data about candidates, skill sets, engagement levels, cultural fit and more. This information, FirstJob says, is then transformed into quantifiable intelligence.
Because Mya is new, the results remain to be seen. And the question still stands: with all the information Mya will obtain from hundreds or thousands of individual candidates, how much is useful? How much is actionable? How much is too much?
Uber, a Success Story in Small, Right Data
Praise it or vilify it, Uber stands as a success story about the power of data analytics. Investors and business experts have long touted the on-demand personal transit app as a paragon of big data. The system does capture a wealth of information from drivers and passengers, allowing it to plot the “real-time logistics flows of human transportation,” as Wessel puts it. However, he also draws a crucial distinction in the actual size of Uber’s data.
“But Uber’s success isn’t a function of the big data it collects,” Wessel explains. “Uber’s success results from something very different: the small, right data it needed to do something very simple — dispatch cars.”
Prior to Uber’s meteoric rise, commuters relied on taxis for similar services. Even without a computer taking in and processing data, Wessel points out that the “network of eyeballs moving around the city” to scout for potential passengers was itself a massive program of gathering and analyzing data.
“The fact that the computation happened inside of human brains doesn’t change the quantity of data captured and analyzed,” he adds. “Uber’s elegant solution was to stop running a biological anomaly detection algorithm on visual data — and just ask for the right data to get the job done. Who in the city needs a ride and where are they? That critical piece of information let the likes of Uber, Lyft, and Didi Chuxing revolutionize an industry.”
Rightsizing Your Big Data
The key to uncovering the right size for your data comes down to identifying “waste.” Wessel uses the example of a flower shop. The average retail florist experiences up to 50 percent spoilage rates with its merchandise. That means half of all those pretty bouquets end up in the bin. However, waste is a valuable source of opportunity.
“Whether it’s in industrial production, retailing, or legal investigations, figuring out your sources of wasted effort and resources should guide the way toward the right data,” Wessel writes. The same holds true for contingent workforce programs.
For contingent workforce leaders, the first step in determining the right talent data involves looking for wasted efforts or unproductive processes. For the sake of illustration, let’s say your submittals-per-jobs ratio is 1:5 or lower: only one resume submitted for every five open positions. That means not enough resumes are coming in from your recruiters or your staffing partners. You’ve revealed a source of “waste” and an opportunity to improve your processes.
At this point, you’ve decided to concentrate on reducing the waste. It’s time to theorize on how to change the process and prevent squandered or inefficient efforts. With our submittals-per-jobs dilemma, we need to begin accumulating, dissecting and synthesizing data related to the issue.
- How are job descriptions faring? Are they compelling? Do they accurately reflect the demands and benefits of the position — for the client and the candidate?
- Are we tapping into the right sourcing channels? If traditional job boards aren’t producing results, analyze the data on social networks, online groups, communities, university systems and other media. Perhaps LinkedIn and Facebook are outshining Monster as fertile fields for harvesting top candidates.
- How successful are candidate outreach initiatives? Is the process manual? Could it be automated?
- What do successful recruiting efforts look like when compared to false starts or non-responsiveness? Analyzing this data will help you create a profile of measurable performance to replicate and deploy.
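The diagnostic step above can be made concrete in a few lines of code. The sketch below flags staffing partners whose submittals-per-job ratio falls below a target; the partner names, counts and the one-submittal-per-job threshold are illustrative assumptions, not figures from any real program or system.

```python
# Illustrative sketch: flag staffing partners whose submittals-per-job
# ratio falls below a target threshold. All names and numbers are
# hypothetical, not drawn from any specific VMS or ATS.

open_jobs = {"PartnerA": 20, "PartnerB": 15, "PartnerC": 10}
submittals = {"PartnerA": 4, "PartnerB": 45, "PartnerC": 8}

TARGET_RATIO = 1.0  # assume at least one submittal per open job


def low_submittal_partners(jobs, subs, target=TARGET_RATIO):
    """Return partners whose submittals-per-job ratio is below target."""
    flagged = {}
    for partner, job_count in jobs.items():
        ratio = subs.get(partner, 0) / job_count
        if ratio < target:
            flagged[partner] = round(ratio, 2)
    return flagged


print(low_submittal_partners(open_jobs, submittals))
# → {'PartnerA': 0.2, 'PartnerC': 0.8}
```

A report like this doesn’t answer the “why” on its own, but it narrows the investigation to the partners and requisitions where the waste actually lives.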
Build Meaningful Data Sets
- Drive thinking that extends beyond a single department or division. Consider how data affect the organization and its talent as a whole.
- Defend against confirmation biases that can arise from like perspectives or people who think the way you do. Approach the analysis as a “MythBusters” researcher would. Attempt to disprove accepted norms. Be receptive to risks, failures and unexpected outcomes — all of these situations are critical learning experiences that will improve the process.
- Use good datasets that are reliable, valid, clean and complete. The data should be objective, not based on a specific business group, category of talent, company division or hiring manager.
- Design comparisons across groups and over time.
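One way to design comparisons across groups and over time is to track the same metric by business unit and by quarter, so trends rather than single snapshots drive decisions. The sketch below does this for a hypothetical fill-rate metric; the unit names and figures are entirely illustrative.

```python
# Illustrative sketch: compare a fill-rate metric across business units
# and across quarters. All figures are made up for demonstration.

from statistics import mean

fill_rates = {
    "Engineering": {"Q1": 0.72, "Q2": 0.68, "Q3": 0.61},
    "Finance":     {"Q1": 0.80, "Q2": 0.83, "Q3": 0.85},
}


def unit_averages(data):
    """Average fill rate per business unit across all quarters."""
    return {unit: round(mean(q.values()), 2) for unit, q in data.items()}


def trend(data, unit):
    """Change from the earliest to the latest quarter for one unit."""
    quarters = list(data[unit].values())
    return round(quarters[-1] - quarters[0], 2)


print(unit_averages(fill_rates))        # averages by unit
print(trend(fill_rates, "Engineering")) # negative value = declining fill rate
```

Comparing units against each other and against their own history is what keeps a single strong (or weak) quarter from masquerading as the whole story.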
Enlist Allies, Stakeholders and Partners Immediately
Even the most thoughtful and expertly performed analysis can fail if stakeholders are not informed and included in the process. Strive to bring others along on this journey of discovery, and solicit their input. You’ll find that decision makers will be more likely to participate, review the research, understand its value and implement the recommended changes. Otherwise, the entire effort can be jeopardized.
Without prior knowledge and inclusion, other stakeholders in the process may feel as though they’re being told how to do their jobs, especially if they think things are going well at the moment. Despite best intentions, recipients in this scenario will feel blindsided. And when that happens, crucial plans languish and go unimplemented, which amounts to wasted opportunities, squandered time and lost money.
Create a Data Team
Designing the right team is imperative and should take place before any data collection or analysis occurs. Although contingent workforce program managers have mountains of useful data in their heads and their tracking systems (e.g., VMS, ATS, enterprise resource systems), the effort must be more collaborative to support comprehensive decisions. The best teams include a broad swath of representatives. In a contingent workforce program, that would incorporate professionals from the client organization, the MSP, the VMS and staffing partner firms. These subject matter experts will be required to address the Whys, the Whats and the Hows of the project.
- The Why Team: hiring managers, operational leaders and executives to provide the business expertise.
- The What Team: staffing partners, procurement leaders and HR officers to provide expertise on the talent.
- The How Team: data analytics specialists from the contingent workforce company, client organization or technology provider (e.g., VMS) — professionals who understand the information, how to gather it and how to translate it into meaningful results that decision makers can act upon.
More importantly, ensure that the assembled team embodies a diverse spectrum of thoughts and perspectives. Many companies consider themselves data driven, and they rely heavily on information gathered from a variety of sources — their clients, workers, suppliers and more. Yet, when team members are too alike (from the same department, for instance), we often find that their interpretations of the data become biased, oversimplified, overly broad or inductively reasoned to prove a hypothesis rather than deductively uncover a reality.
Let’s recall our submittals-per-jobs example. If the people responsible for gathering the data are deeply engaged in the group that sources or recruits candidates, they could wind up on the defensive. When this happens, people tend to seek out data that justify or exonerate their setbacks. Or, they’re too deeply immersed to know what to examine. Consider including team members from HR, marketing, operations, staffing partners and hiring managers, where applicable. By bringing in stakeholders from other areas of the contingent workforce program, you will get a clearer picture of the challenge and its solution.
Finding the Right Data Sweet Spot
As Wessel remarks in his article: “Sometimes the right data is big. Sometimes the right data is small. But for innovators the key is figuring out what those critical pieces of data are that drive competitive position. Those will be the pieces of right data that you should seek out fervently.”
Data about our contingent workforce programs, regardless of size, can open our eyes to a world of exceptional talent — and new innovators we didn’t see before. We just need to make sure we’re looking in the right places and turning over the proper stones.