How data-driven is your marketing department?

Marketing has evolved from creative newspaper, magazine and billboard advertising to processes designed to drive customer engagement and accelerate business growth. With all the data now at their fingertips, what does it take to become a data-driven organisation?

“Ultimately, a data-driven marketing organisation learns to use data analytics as part of all marketing campaigns; from setting-up your campaign to post-campaign review. Within a data-driven organisation, information can move freely, is consistent across all channels and decision makers at all levels use data to better serve their customers.” (source: Forbes Insight Report) 

So what does this mean and how can your organisation use data analytics to create data-driven marketing campaigns? 

Tomatoes that send an email when they need more water: sounds like the future?

Imagine cows fed and milked entirely by robots. Or tomatoes that send an e-mail when they need more water. Or a farm where all the decisions about where to plant seeds, spray fertilizer and steer tractors are made by software on servers on the other side of the sea.

This is what more and more of our agriculture may come to look like in the years ahead, as farming meets Big Data. There’s no shortage of farmers and industry gurus who think this kind of “smart” farming could bring many benefits. Pushing these tools onto fields, the idea goes, will boost our ability to control this fiendishly unpredictable activity and help farmers increase yields even while using fewer resources.
The big question is who exactly will end up owning all this data, and who gets to determine how it is used. On one side stand some of the largest corporations in agriculture, who are racing to gather and put their stamp on as much of this information as they can. Opposing them are farmers’ groups and small open-source technology start-ups, which want to ensure a farm’s data stays in the farmer’s control and serves the farmer’s interests.
Who wins will determine not just who profits from the information, but who, at the end of the day, directs life and business on the farm.

One recent round in this battle took place in October, when Monsanto spent close to $1 billion to buy the Climate Corporation, a data analytics firm. Last year the chemical and seed company also bought Precision Planting, another high-tech firm, and also launched a venture capital arm geared to fund tech start-ups.
In November, John Deere and DuPont Pioneer announced plans to partner to provide farmers information and prescriptions in near-real time. Deere has pioneered “precision farming” equipment in recent years, equipping tractors and combines to automatically transmit data collected from particular farms to company databases. DuPont, meanwhile, has rolled out a service that analyzes data into “actionable management strategies.”

Many farmers are wary that these giants could use these tools to win unprecedented levels of insight into the economics and operational workings of their farms. The issue is not that these companies would shower the farmers with ads, as Facebook does when it knows you’re looking to buy sneakers. For farmers, the risks of big data seem to pierce right to the heart of how they make a living. What would it mean, for instance, for Monsanto to know the intricacies of their business?
Farm advocacy groups are now scrambling to understand how — if given free rein — these corporations could misuse the data they collect. “We’re signing up for things without knowing what we’re giving up,” said Mark Nelson, director of commodities at the Kansas Farm Bureau. In May, the American Farm Bureau Federation, a national lobbying group, published a policy brief outlining some potential risks around these data-driven farm tools.
For farmers, the most immediate question is who owns the information these technologies capture. Many farmers have been collecting digitized yield data on their operations since the 1990s, when high-tech farm tools first emerged. But that information would sit on a tractor or monitor until the farmer manually transferred it to his computer, or handed a USB stick to an agronomist to analyze. Now, however, smart devices can wirelessly transfer data straight to a corporation’s servers, sometimes without a farmer’s knowledge.

“When I start storing information up on the Internet, I lose control of it,” said Walt Bones, who farms in Parker, S.D., and served as state agriculture secretary.
Justin Dikeman, a sales representative with DuPont, said farmers continue to own whatever data they collect on their operations. A spokesman for John Deere also said farmers own their data, and that farmers have the opportunity to opt out of the company’s cloud services. Monsanto did not reply to a request for comment.

The details of who owns what at which stage of the analytics process are less clear, though. Even if a contract guarantees that farmers own the raw data, for instance, it’s not obvious whether farmers will be able to access that data in a non-proprietary format. Nor is it evident how easily farmers can stop these devices from collecting and transmitting data.

How corporations use the information is another central concern. One worry is the giants will harness the data to engage in price discrimination, in which they charge some farmers more than others for the same product. For example, details on the economic worth of a farm operation could empower Monsanto or DuPont to calculate the exact value the farm derives from its products. Monsanto already varies its prices by region, so that Illinois farmers with a bumper crop might be charged more for seeds than Texas farmers facing a drought. Bigger heaps of data would enable these companies to price discriminate more finely, not just among different geographic regions but between neighbors.

Another issue is how the value of this information will be determined, and the profits divided. The prescription services Monsanto and DuPont are offering will draw on the vast amounts of data they amass from thousands of individual farms. Farmers consider much of this information – such as on soil fertility and crop yields – confidential, and most view details about particular farming techniques as akin to personal “trade secrets.” Even if the corporations agree not to disclose farm-specific information, some farmers worry that the information may end up being used against them in ways that dull their particular competitive edge.

“If you inadvertently teach Monsanto what it is that makes you a better farmer than your neighbor, it can sell that information to your neighbor,” said John McGuire, an agriculture technology consultant who runs Simplified Technology Services and developed geospatial tools for Monsanto in the late-1990s. And if the corporation gathers enough information, “it opens the door for Monsanto to say, ‘We know how to farm in your area better than you do,’” he said.

There are also no clear guidelines on how this information will be used within commodity markets. Real-time data is highly valuable to investors and financial traders, who bet billions of dollars in wheat, soybean and corn futures. In a market where the slightest informational edge makes the difference between huge profits and even bigger losses, corporations that gather big data will have a ready customer base if they choose to sell their knowledge. Or they could just use it to speculate themselves.

“If this real time yield data goes into the cloud and a lot of market investors get into it, there is potential for market distortion,” said Kyle Cline, policy advisor for national government relations at the Indiana Farm Bureau. “It could destabilize markets, make them more volatile,” he said.

John Deere has stated it will not share data with anyone it believes will use it to influence or gain an advantage in commodity markets. Monsanto, DuPont and other firms have not, however, issued similar public statements.

Some farmers and smaller manufacturers also worry that data analytics will give conglomerates like Monsanto and DuPont more power to compel farmers to buy other lines of products. Monsanto, for example, has proven highly adept at leveraging its wide suite of products to support one another. How Monsanto used its dominance in one business (genetic traits) to benefit others (seeds, fertilizer) was the focus of a three-year antitrust investigation by the Justice Department. (DOJ closed the probe last November without taking any action).

In recent years, Monsanto, DuPont and John Deere have also expanded into selling farmers a variety of financial services and insurance. John Deere, for one, acknowledges that its financial division may consult data from a farmer’s machinery, if the farmer permits.

Other private corporations are also competing for a share of the big data pie. Established equipment manufacturers like AgCo and Case IH have been expanding their data analytics services, and some high-tech upstarts are also joining the game. The Climate Corporation, the weather data and insurance company Monsanto bought in October, for example, was founded by a former Google employee.

Open-source groups attempting to provide farmers with some similar technologies include ISOBlue, a project based at Purdue University, which teaches farmers how to capture and independently store their own data. FarmLogs, a Michigan-based company backed by Silicon Valley money, sells software and data analytics that let farmers fully control the information collected. “We’re pushing back against the monopoly on information” that some existing vendors create, said FarmLogs founder Jesse Vollmar, who grew up on a farm.

What is not clear is whether these smaller open-source companies will be able to keep up with the established giants over the long run.

“Monsanto has its fingers awful deep within our industries,” McGuire said. “Its expansion [into data analytics] should scare a lot of people.”

To be sure, much depends on how widely farmers adopt the privately designed “decision-support” services. Monsanto has estimated this to be a $20 billion market, but there is no proof yet that the company will be able to process these reams of data into profitable farming and business strategies. Whether Monsanto’s bet will pay off is “tough to validate,” said Paul Massoud, an analyst with Stifel Nicolaus.

Some experts question whether relying on prescription-based services is in farmers’ best interest at all. “I don’t see farmers themselves crunching numbers, so [I] doubt they’ll be learning anything more about how to farm well,” said Bill Freese, an agricultural biotech expert and science policy analyst with the Center for Food Safety. “Monsanto’s scheme does not really represent farmers embracing data analytics, but Monsanto embracing it to better sell the seeds it wants to sell with a pseudo-scientific rationale.”

A new group called the Grower Information Services Cooperative thinks the best way farmers can protect their interests during the transition to big data is to organize. Formed in west Texas last December, GISC is pushing a model where farmers would store their data in a repository through the co-op, and companies would pay the group a fee to access it. The system would give farmers technical as well as legal ownership, and provide a way for them to share in its monetary value, said Mark Cox, controller and communications director for GISC.

“Growers need to be proactive in how their information is managed,” Cox said. “Otherwise all that economic power will consolidate to these corporations and the grower will be at even more of a disadvantage. We don’t want the grower to become a tenant on his own farm.” GISC began accepting members this month, and is meeting with farm bureaus around the country to publicize its mission.

Soil sensors and seed planting algorithms may be a game-changer. Whether farmers fully reap the fruits of that harvest, though, will depend not on technologies but on the legal technicalities that bind their use.

The Predictive Power of Big Data

In computer science, the unit used to measure information is the bit, short for “binary digit.” You can think about a single bit as the answer to a yes-or-no question, where 1 is yes and 0 is no. Eight bits is called a byte.

Right now, the average person’s data footprint—the annual amount of data produced worldwide, per capita—is just a little short of one terabyte. That’s equivalent to about eight trillion yes-or-no questions. As a collective, that means humanity produces five zettabytes of data every year: 40,000,000,000,000,000,000,000 (forty sextillion) bits.

Such large numbers are hard to fathom, so let’s try to make things a bit more concrete. If you wrote out the information contained in one megabyte by hand, the resulting line of 1s and 0s would be more than five times as tall as Mount Everest. If you wrote out one gigabyte by hand, it would circumnavigate the globe at the equator. If you wrote out one terabyte by hand, it would extend to Saturn and back twenty-five times. If you wrote out one petabyte by hand, you could make a round trip to the Voyager 1 probe, the most distant man-made object in the universe. If you wrote out one exabyte by hand, you would reach the star Alpha Centauri. If you wrote out all five zettabytes that humans produce each year by hand, you would reach the galactic core of the Milky Way. If instead of sending e-mails and streaming movies, you used your five zettabytes as an ancient shepherd might have—to count sheep—you could easily count a flock that filled the entire universe, leaving no empty space at all.
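The two headline figures above follow directly from the bit/byte definitions. As a quick sanity check, here is a minimal sketch assuming decimal (SI) units:

```python
# Quick check of the figures above, assuming decimal (SI) units:
# 1 terabyte = 10**12 bytes, 1 zettabyte = 10**21 bytes, 1 byte = 8 bits.

BITS_PER_BYTE = 8

def bits_in(n_bytes: int) -> int:
    """Total yes-or-no questions (bits) in a given number of bytes."""
    return n_bytes * BITS_PER_BYTE

TERABYTE = 10**12
ZETTABYTE = 10**21

print(bits_in(TERABYTE))       # eight trillion bits per person per year
print(bits_in(5 * ZETTABYTE))  # forty sextillion (4 * 10**22) bits collectively
```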

This is why people call these sorts of records big data. And today’s big data is just the tip of the iceberg. The total data footprint of Homo sapiens is doubling every two years, as data storage technology improves, bandwidth increases, and our lives gradually migrate onto the Internet. Big data just gets bigger and bigger and bigger.

THE DIGITAL LENS

Arguably the most crucial difference between the cultural records of today and those of years gone by is that today’s big data exists in digital form. Like an optic lens, which makes it possible to reliably transform and manipulate light, digital media make it possible to reliably transform and manipulate information. Given enough digital records and enough computing power, a new vantage point on human culture becomes possible, one that has the potential to make awe-inspiring contributions to how we understand the world and our place in it.

Consider the following question: Which would help you more if your quest was to learn about contemporary human society— unfettered access to a leading university’s department of sociology, packed with experts on how societies function, or unfettered access to Facebook, a company whose goal is to help mediate human social relationships online?

On the one hand, the members of the sociology faculty benefit from brilliant insights culled from many lifetimes dedicated to learning and study. On the other hand, Facebook is part of the day-to-day social lives of a billion people. It knows where they live and work, where they play and with whom, what they like, when they get sick, and what they talk about with their friends. So the answer to our question may very well be Facebook. And if it isn’t — yet —then what about a world twenty years down the line, when Facebook or some other site like it stores ten thousand times as much information, about every single person on the planet?

These kinds of ruminations are starting to cause scientists and even scholars of the humanities to do something unfamiliar: to step out of the ivory tower and strike up collaborations with major companies. Despite their radical differences in outlook and inspiration, these strange bedfellows are conducting the types of studies that their predecessors could hardly have imagined, using datasets whose sheer magnitude has no precedent in the history of human scholarship.

Jon Levin, an economist at Stanford, teamed up with eBay to examine how prices are established in real-world markets. Levin exploited the fact that eBay vendors often perform miniature experiments in order to decide what to charge for their goods. By studying hundreds of thousands of such pricing experiments at once, Levin and his co-workers shed a great deal of light on the theory of prices, a well-developed but largely theoretical subfield of economics. Levin showed that the existing literature was often right—but that it sometimes made significant errors. His work was extremely influential. It even helped him win a John Bates Clark Medal—the highest award given to an economist under forty and one that often presages the Nobel Prize.

A research group led by UC San Diego’s James Fowler partnered with Facebook to perform an experiment on sixty-one million Facebook members. The experiment showed that a person was much more likely to register to vote after being informed that a close friend had registered. The closer the friend, the greater the influence. Aside from its fascinating results, this experiment— which was featured on the cover of the prestigious scientific journal Nature—ended up increasing voter turnout in 2010 by more than three hundred thousand people. That’s enough votes to swing an election.

Albert-László Barabási, a physicist at Northeastern, worked with several large phone companies to track the movements of millions of people by analyzing the digital trail left behind by their cell phones. The result was a novel mathematical analysis of ordinary human movement, executed at the scale of whole cities. Barabási and his team got so good at analyzing movement histories that, occasionally, they could even predict where someone was going to go next.

Inside Google, a team led by software engineer Jeremy Ginsberg observed that people are much more likely to search for influenza symptoms, complications, and remedies during an epidemic.

They made use of this rather unsurprising fact to do something deeply important: to create a system that looks at what people in a particular region are Googling, in real time, and identifies emerging flu epidemics. Their early warning system was able to identify new epidemics much faster than the U.S. Centers for Disease Control could, despite the fact that the CDC maintains a vast and costly infrastructure for exactly this purpose.
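The underlying idea can be sketched in a few lines: flag a region when flu-related query volume runs far above its recent baseline. (Google's actual system fit query fractions against CDC influenza-like-illness rates; the simple threshold rule below is only an illustrative stand-in.)

```python
# Toy flu-surveillance sketch: flag an epidemic when the latest week's
# query volume is far above the preceding weeks' baseline.
from statistics import mean, stdev

def flag_epidemic(weekly_queries: list[int], z_threshold: float = 2.0) -> bool:
    """Return True if the latest week's query volume sits more than
    z_threshold standard deviations above the mean of earlier weeks."""
    baseline, latest = weekly_queries[:-1], weekly_queries[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    return sigma > 0 and (latest - mu) / sigma > z_threshold

# Quiet weeks, then a sudden spike in flu-related searches:
print(flag_epidemic([100, 110, 95, 105, 98, 300]))  # True
print(flag_epidemic([100, 110, 95, 105, 98, 104]))  # False
```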

Raj Chetty, an economist at Harvard, reached out to the Internal Revenue Service. He persuaded the IRS to share information about millions of students who had gone to school in a particular urban district. He and his collaborators then combined this information with a second database, from the school district itself, which recorded classroom assignments. Thus, Chetty’s team knew which students had studied with which teachers. Putting it all together, the team was able to execute a breathtaking series of studies on the long-term impact of having a good teacher, as well as a range of other policy interventions. They found that a good teacher can have a discernible influence on students’ likelihood of going to college, on their income for many years after graduation, and even on their likelihood of ending up in a good neighborhood later in life. The team then used its findings to help improve measures of teacher effectiveness. In 2013, Chetty, too, won the John Bates Clark Medal.

And over at the incendiary FiveThirtyEight blog, a former baseball analyst named Nate Silver has been exploring whether a big data approach might be used to predict the winners of national elections. Silver collected data from a vast number of presidential polls, drawn from Gallup, Rasmussen, RAND, Mellman, CNN, and many others. Using this data, he correctly predicted that Obama would win the 2008 election, and accurately forecast the winner of the Electoral College in forty-nine states and the District of Columbia. The only state he got wrong was Indiana. That doesn’t leave much room for improvement, but the next time around, improve he did. On the morning of Election Day 2012, Silver announced that Obama had a 90.9 percent chance of beating Romney, and correctly predicted the winner of the District of Columbia and of every single state—Indiana, too.
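At its core, this kind of forecasting starts from a weighted average over many polls. The sketch below weights only by sample size; Silver's real model also weights by recency and pollster track record, so treat this as a minimal illustration:

```python
# Minimal poll aggregation: a sample-size-weighted average of poll results.

def aggregate(polls: list[tuple[float, int]]) -> float:
    """polls: (candidate_share_percent, sample_size) pairs.
    Returns the sample-size-weighted average share."""
    total_n = sum(n for _, n in polls)
    return sum(share * n for share, n in polls) / total_n

# Three hypothetical polls for one candidate:
polls = [(52.0, 1000), (49.0, 600), (51.0, 1400)]
print(round(aggregate(polls), 2))  # 50.93
```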

The list goes on and on. Using big data, the researchers of today are doing experiments that their forebears could not have dreamed of.

Emerging Big Data Opportunities For CFOs

Big data is one of the most influential technology trends for the accounting profession, according to a recent report from IMA® and ACCA (Association of Chartered Certified Accountants). To explore what this trend specifically means for CFOs, we asked the assistant controller of IBM about emerging types of data from channels such as social media and how they can be used to inform strategy.

How do CFOs harness big data differently from other C-suite executives such as the CMO?

For CFOs, big data is an opportunity to glean deeper insights into the internal and external forces that influence their companies’ performance. The integration of big data brings new inputs and variables that help anticipate the future, enabling rolling views of leading indicators versus more traditional retrospective views. More frequent inputs allow the enterprise to respond more quickly to change.
By capitalizing on big data using business analytics tools, the role of the CFO is moving beyond optimizing the finance function to transforming the enterprise.
For other members of the C-suite, for example, the Chief Marketing Officer, big data provides insights on customer behavior, delivering information about customer sentiment and general perceptions of the company and its products to help shape future strategy. While CMOs are using big data and analytics to better understand the individual customer, CFOs are utilizing data to better understand the environment.

What new types of data have emerged that CFOs should pay attention to?

CFOs are used to dealing with highly structured and verifiable data – big data isn’t like that and it requires a change in mindset for CFOs to feel comfortable making use of it themselves.
Many CFOs are driving business agility by analyzing performance metrics such as resource productivity and inventory turnover. They are using this data to better understand their businesses and re-engineer and integrate business processes as part of a broader enterprise transformation.
From a CFO perspective, big data isn’t just about new types of data; it’s also about harnessing the availability of existing data. Business analytics tools enhance access while removing manual activity and enabling the processing of large volumes of data to make new connections and accelerate decision making.
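The performance metrics mentioned above are simple ratios. As an illustration, with made-up figures:

```python
# Two of the performance metrics named above, as simple ratios
# (all figures below are invented for illustration).

def inventory_turnover(cogs: float, avg_inventory: float) -> float:
    """How many times inventory is sold and replaced over a period:
    cost of goods sold divided by average inventory value."""
    return cogs / avg_inventory

def resource_productivity(output_value: float, resource_cost: float) -> float:
    """Output generated per unit of resource spent."""
    return output_value / resource_cost

print(inventory_turnover(cogs=1_200_000, avg_inventory=300_000))          # 4.0
print(resource_productivity(output_value=900_000, resource_cost=450_000)) # 2.0
```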

What are some examples of CFOs incorporating data from social media sites and online forums as part of their integrated financial reporting responsibilities?

CFOs can make use of data from social media sites and online forums, and we see this in areas such as regulatory compliance, supply chain and labor climate. They monitor formal and informal media to predict sales trends and understand investor sentiment.
CFOs are also starting to generate their own big data by instrumenting their businesses to collect and incorporate large volumes of performance data supporting their new challenges of driving strategy and growth.

What typical roadblocks do CFOs face in using big data to drive strategy?

Many of the roadblocks that CFOs face are similar to those experienced by other members of the C-suite – the availability and usability of the data. However, tools are becoming available which make it easier to access data from less structured sources. CFOs have the opportunity to use this data to support fact-based decisions, developing their role as trusted business advisors.
A different challenge arises where big data cuts across traditional silos, encouraging enterprises to operate in a more integrated fashion. This changes the dynamics of the C-suite, with traditional roles becoming less clear-cut.

How will the way CFOs use big data change in the next five years?

Big data and business analytics raise the expectations placed on finance; taking advantage of them will call for different skills, and finance departments should be focusing on building those skills.
CFOs are asked to analyze different options and strategies under consideration by the business; their finance teams must be able to use analytics to predict business outcomes and influence business leaders to deliver optimum results. CFOs will also become more effective advocates for change by building and communicating business cases based on big data findings.

Predictive Analytics Most Used To Gain Customer Insight

Using analytics to better understand customer satisfaction, profitability, retention and churn while increasing cross-sell and up-sell is the most dominant use of cloud-based analytics today, according to the results of a recent study.

Key takeaways of the study results include the following:

  • Customer analytics (72%) leads the areas in which respondents reported the strongest interest, followed by marketing (58%); supply chain, business optimization and marketing optimization (57%); and risk and fraud (52%).
  • When the customer analytics responses were analyzed in greater depth, they showed the most interest in customer satisfaction (50%), followed by customer profitability (34%), customer retention/churn (32%), customer management (30%), and cross-sell/up-sell (26%).
  • Adoption was increasingly widespread and growing, with over 90% of respondents reporting that they expected to deploy one or more types of predictive analytics in the cloud.
  • Industries with the most impact from predictive analytics include retail (13% more than average), financial services (12%) and hardware/software (4%). Lagging industries include health care delivery (-9%), insurance (-11%) and (surprisingly) telecommunications (-33%). The following graphic illustrates the relative impact of cloud-based predictive analytics applications by industry.

Adoption of Cloud-based Predictive Analytics by Industry

 

  • The most widespread analytics scenarios include prepackaged solutions (52%), cloud-based analytics modeling (47%) and embedding cloud-based analytics into applications (46%). Comparing the 2011 and 2013 surveys showed significant gains in all three categories, with the greatest in cloud-based analytic modeling. This category increased from 51% in 2011 to 75% in 2013, making it the most likely analytics application respondents will implement this year.

Comparison of Analytics Applications Most Likely To Deploy, 2011 versus 2013

  • 63% of respondents report that when predictive analytics are tightly integrated into operations using Decision Management, enterprises have the intelligence they need to transform their businesses.

Impact of Predictive Analytics Integration Across The Enterprise

 

  • Data security and privacy (61%), followed by regulatory compliance (50%), are the two most significant concerns respondent companies have regarding predictive analytics adoption. Compliance has grown significantly as a concern since 2011, probably because more financial services firms are adopting cloud computing for mainstream business strategies.

Concerns of Enterprises Who Are Using Cloud-based Predictive Analytics Today

 

  • Internal cloud deployments (41%) are the most common approach to implementing cloud platforms, followed by managed vendor clouds (23%) and hybrid clouds (23%). Private and managed clouds continue to grow as preferred platforms for cloud-based analytics, as respondents seek greater security and stability for their applications. The continued adoption of private and managed clouds is a direct result of respondents’ concerns regarding data security, stability, reliability and redundancy.

Approach To Cloud Deployment

  • The study concludes that structured data is the most prevalent type of data, followed by third party data and unstructured data.
  • While Big Data had no widespread impact on results, predictive analytics cloud deployments with a Big Data component are more likely to contribute to a transformative impact on their organizations’ performance. Similarly, those with more experience deploying predictive analytics in the cloud were more likely to use Big Data.
  • In those predictive analytics cloud deployments already operating or having an impact, social media data from the cloud, voice or other audio data, and image or video data were all much more broadly used as the following graphic illustrates.

Which Data Types Deliver The Most Positive Impact In A Big Data Context

BIG DATA is not hype, it’s here to stay!

The customer is king: that is the motto of most brands and companies. But what do we actually know about that (potential) customer? Nowadays a wealth of (digital) information is available, so-called big data. Examples include databases, online buying behavior, social media data and customer service data. In addition, products are increasingly equipped with data-generating technology, such as chips and tracking systems.

Promising developments. The question, however, is what companies and brands in practice want to, can and may do with this data. Nettribes therefore organized a roundtable with a number of A-brands and big data specialists from Oracle. The goal of the roundtable was, on the one hand, to discuss the opportunities, limitations and future of big data and, on the other, to exchange information and experiences about this rapidly advancing phenomenon with like-minded peers.

This article offers a behind-the-scenes look for anyone who ‘wants or needs to do something with big data’.

‘Shift’ or ‘die’
Johannes Brouwer of Oracle defines big data as ‘combining data in a way that adds new value to a product or service’. He is convinced that big data can and must contribute to overall objectives, and should not be deployed before a plan is on the table. According to Brouwer, it increasingly revolves around the so-called ‘information value’ of products. It has long since stopped being about the product itself; what matters is the data and information the product can offer. Think of Nike, for example. The brand no longer just sells shoes; it is now also your personal running and fitness coach.

Everyone around the table agrees: the question is not whether brands should do something with big data, but what they should do with it. A fresh look at business processes is also needed to make smart use of the available big data. Frank Jan Risseeuw of ING bank adds that big data is interesting for the bank, for example for extracting feedback on its services from social media data. That feedback can then be used to improve processes for customers.

Big data is not new; using it is. And that has legal consequences.
Thomas van Essen and Machteld Robichon outline the legal side of big data. They know better than anyone that legally structuring and safeguarding the use of big data in advance is crucial. Incorrect use and deployment can lead to an investigation by the Dutch Data Protection Authority (‘College Bescherming Persoonsgegevens’, CBP), which can in turn result in penalty payments or fines, reputational damage and the (mandatory) destruction of accumulated data.

Van Essen points out that collecting data is not new; it has always been done. What is new is cross-linking different data (sources) to establish new relationships. For many brands this requires a new legal framework. The key legal concern here is the collection and use of personal data (data that can be traced, directly or indirectly, to an individual).

It is impossible to sketch a one-size-fits-all legal framework in a single article. There are, however, points of attention to always keep in mind when deploying big data:

  • Define the purpose in advance: make sure the purpose of using and linking personal data is clearly defined. Simply collecting data and establishing relationships between data (sources) can violate the Dutch Personal Data Protection Act ('Wet bescherming persoonsgegevens'). The grounds on which you may collect personal data are laid down in that act.
  • Transparency: consumers are becoming more assertive and want to know what happens to their data. The fact that many people put their lives online via social media does not mean they automatically want all kinds of (commercial) things done with that data. Be clear and inform people about the purposes for which their data is collected and used.
  • Don't link all data to individuals: as indicated earlier, processing data anonymously or in aggregated form (not traceable to an individual) carries fewer risks. Bob Christiaanse of social media tool Media Injection works with many parties that hold and use large amounts of data. His position is that success does not always require personal data. A lot can also be done with aggregated data, for example to map a target audience profile more accurately.
  • Anonymous must really mean anonymous: a frequently heard argument is that the Personal Data Protection Act does not apply because the data has been anonymised. That is correct in itself, but only if the data is genuinely anonymous. If the data can be decrypted or re-identified, the act still applies. Also bear in mind that the act applies to the process of anonymising personal data itself.
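The difference between pseudonymised and truly aggregated data in the last two points can be made concrete with a short sketch. The customer records below are invented for illustration: a hashed e-mail address is only a pseudonym, so each row still maps one-to-one to an individual and remains personal data, whereas per-city totals can no longer be traced to a person.

```python
import hashlib
from collections import Counter

# Invented example records, not real customer data.
customers = [
    {"email": "anna@example.com", "city": "Utrecht", "purchases": 3},
    {"email": "bram@example.com", "city": "Utrecht", "purchases": 1},
    {"email": "cees@example.com", "city": "Delft",   "purchases": 5},
]

# Pseudonymisation: the e-mail is hashed, but each record still maps
# one-to-one to an individual, so it remains personal data under the act.
pseudonymised = [
    {"id": hashlib.sha256(c["email"].encode()).hexdigest()[:12],
     "city": c["city"], "purchases": c["purchases"]}
    for c in customers
]

# Aggregation: per-city totals (for sufficiently large groups) are no
# longer traceable to one person, which carries far less legal risk.
per_city = Counter()
for c in customers:
    per_city[c["city"]] += c["purchases"]

print(dict(per_city))  # {'Utrecht': 4, 'Delft': 5}
```

Note that the pseudonymised rows keep their full analytical value per individual, which is exactly why the law still treats them as personal data.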

Eight pointers for convincing management of big data
Big data is not a hype, it's here to stay, and it is crucial to get management on board. Without (financial) support from management there is little chance of long-term success, because truly profiting from big data requires staying power. Below are eight tips and pointers from participants with experience in convincing their management, and the rest of their organisation, of the benefits of big data:

  1. Inventory your data: before drawing up a plan, it is wise to map out which data sources are available. Many brands have so many different departments and sources that nobody knows exactly what data is actually being collected. It often turns out that departments hold valuable data without everyone being aware of it. Youness Bendahmane of KLM notes that an orientation round revealed useful data within the organisation that nobody knew existed. Precisely those insights matter for the overall picture.
  2. Inspire: once it is clear which data sources are available, an inspiration round helps, using cases and examples to demonstrate what the organisation can do with the data. By inspiring and enthusing the right people, they become aware of the endless possibilities and benefits that (combining) data sources bring. This creates the broad internal support that is essential for deploying big data.
  3. Make a plan: it is important, also from a legal perspective, that there is a clear reason to deploy big data in the organisation. Draw up a plan that matches this original reason and make sure it contains clear short- and long-term objectives. Always keep the legal requirements in view.
  4. Create proofs of concept: a pitfall after the inventory and inspiration phases is that there is often so much data and so many possibilities available that grand plans are made right away. To introduce big data into an organisation successfully and make it a permanent part of business processes, experimenting with small data sets is the right first step. If these proofs of concept succeed, they can be used to convince management of the value of big data and to develop a solid long-term plan. For management it is essential that you can show that the use of big data saves or earns money.
  5. Business Model Canvas: the Business Model Canvas can be used to map quickly and cleverly how a business model changes with the arrival of big data. The model is quite prescriptive, so it does not take months before the changes to the business model are clear within the organisation.
  6. Choose the right partners: big data is a discipline in its own right, and no company has all the required experts and tooling in house. So choose the right external party or consultant, one who understands your market or problems well and sees the opportunities. Big data demands a different way of looking at business processes; most organisations are simply not yet set up for this, and an external party can provide structure and guidance.
  7. Stick to your plan and objectives: make sure you don't lose sight of the original objectives. After seeing the proofs of concept, management sometimes wants to rush ahead, resulting in grand and often risky plans. So make sure you always return to your original plan and objectives. It is precisely by taking small steps that an organisation can deploy big data successfully.
  8. Legal compliance check: map out the legally relevant topics at an early stage. This prevents generating interesting data that subsequently cannot be used from a legal point of view.

Who are your buddies?
It's good to know that you are not alone in this big data world. Many marketeers are still finding their way, and it helps to know which departments lead the way in embracing, or even pushing, big data. A round of the table paints the following picture:

  • Joris Landa, Van der Valk Hotels, says the online team is the driving force behind many projects and initiatives, including the deployment of big data, where offline and online will increasingly be linked. ING Bank and KLM, by contrast, look for it at the intersection of IT, BI (business intelligence) and marketing.
  • Birgit Coenen, Radio 538, rightly notes that the deployment of big data often "arises from the fanaticism and dedication of a few people who manage to get the rest on board." And, as Coenen adds, "whether that is marketing or IT doesn't matter. Although it does help if you understand the ICT, and therefore the potential impact of what is being asked."
  • Pascal Hopman, BMW, sees the overlaps mentioned by the others, but notes that CRM also plays a major role in big data: CRM teams have traditionally owned a lot of data. The same goes for legal staff and external lawyers. They should be involved from the start, not only afterwards when it is already too late.

A diverse picture, then. It shows that big data cannot be put into neat boxes, and that it is important to first determine which departments take the lead and which play a supporting role.

What will the future bring?
Opinions differ on when big data will be deployed successfully in the mainstream; the expected timeline ranges from three to ten years. Acceleration will come from the (further) development of technology and resources. Joris Landa of Van der Valk Hotels expects that niche markets in particular will develop rapidly, and that consumers will go along with it as long as they benefit too. The question is where consumers draw the line in voluntarily "giving up" their data, and what they get in return.

Blockades may arise in society when ethical and legal issues come into play. In addition, it may prove difficult, especially for large multinationals, to keep up with rapid developments and overhaul their business models. Certainly in times dominated by economic crisis, new initiatives will sometimes have to join the back of the queue when it comes to budgets and resources.

Still, we all agree that big data will offer consumers and brands many opportunities, and that the work of marketeers, lawyers and specialists is about to become a lot more interesting!

Data scientists invade HR departments

The age of ‘trust me, this will work’ is over. HR is being held accountable to deliver business results. And the language of the business is analytics.

When General Motors was looking for someone to lead its global talent and organisational capability group, the $152 billion carmaker clearly wasn’t looking for a paper-pushing administrator. Michael Arena, who took the position 18 months ago, is an engineer by training. He was a visiting scientist at MIT Media Lab. He’s a Six Sigma black belt. He’s got a Ph.D.

This is not your father’s human resources executive. But it is a sign of where the corporate HR function is headed. Arena is dedicated to the hot field of talent analytics–crunching data about employees to get “the right people with the right talent in the right place at the right time at the right cost,” he says. “Talent management is a soft space. Historically, we haven’t been able to definitively measure the things that we intuitively believe to be true,” says Arena. “But businesses are mandating it.” The age of “trust me, this will work” is over, says Arena. “HR is being held accountable to deliver business results. And the language of the business is analytics.”

The growing importance of sophisticated analytics to HR–not simply reporting what already exists in an organisation but predicting what could or should be–is a result of “the recognition that the efficient use of labor and deployment of resources is critically important to the business results of the company,” says Mark Endry, CIO of Arcadis U.S. He recently spent six months as interim senior vice president of HR at the $3.3 billion company.
In recent years, enterprises have developed more mature techniques for applying analytics to customer information. “They’ve been able to see–with relatively little data–how much they can do and how powerful the results can be,” says Ben Waber, author of People Analytics: How Social Sensing Technology Will Transform Business and What It Tells Us about the Future of Work. “When you think about what’s going on within companies, you have potentially billions of records generated every day about each person. They’re starting to see how valuable and important that data is.”

IT must be at the center of the unfolding data-driven transformation. Not everyone has an HR data scientist like GM. Arena emphasizes the importance of his partnership with Bill Houghton, GM’s CIO for global corporate functions. “A big piece is integration–ensuring the right systems are connected so we know where to draw the data from,” says Arena. “IT has to play a role in that.”
Indeed, GM’s CIO is counting on a new enterprise data warehouse–and hiring more IT professionals with a business intelligence background–to support HR’s efforts. “Right now the analysis is being done by small group of smart people,” says CIO Houghton. “The next step is how do we make the analytics more available to the everyday manager or the organisational leadership. We want to get this out of the hands of the rocket scientists and into the hands of managers.”
CIOs are the key to helping the organisation figure out what data matters, says Terry Sullivan, director of applied research and consulting at office furniture maker Steelcase. “Everyone is thinking about big data and collecting all kinds of data to try to figure out how to create smarter people. CIOs can drive this effort.”

IT leaders are uniquely qualified to help their corporate counterparts navigate the minefield of issues associated with these nascent technologies and processes–including data quality, systems integration, security, privacy and change management. “The partnership with IT is critical,” says David Crumley, vice president of global HR information systems for Coca-Cola Enterprises.

There’s a broad array of uses for talent analytics: screening new hires, figuring out who should get promoted, efficiently staffing new projects, uncovering the characteristics of high-performing individuals or teams, and even predicting who’s likely to head out the door.

“The way I think about it is using data to understand how people get work done,” says Waber, CEO of Sociometric Solutions, a management-services firm that was built on his work at MIT Media Lab and that helps companies in one niche of the talent analytics field: collecting and analyzing sensor data to improve workforce performance.

Companies have collected employee data for years–from satisfaction surveys to ethnography. But, says Waber, this “next generation of stuff is moving away from those qualitative assessment modes into much harder behavioral modes, using digital data from email or sensors or ERP systems. That gives us radically more powerful information.”

Historically, HR used data to report headcount or turnover information. “We’re so far beyond that now,” says Crumley of Coca-Cola Enterprises. “HR wants to expand its capabilities to help the business grow. To do so, we need to be able to be more precise and surgical about our interventions. That’s where workforce analytics is huge–helping you determine where to place your bets.”

Laying the foundation

Employees generate petabytes of data about themselves every day, says Waber. But that data sits in disparate systems in different formats and is often messy. “To make it work, you need access to all of this information in real time,” Waber says. “IT is the backbone for this entire process.”

Implementing a single version of an HR information system itself may not sound revolutionary, but it’s a critical first step for companies interested in more advanced analytics.

Jo Stoner, senior vice president of worldwide HR for Informatica, knew predictive talent analytics could benefit the growing data-integration company. “A lot of companies don’t make it past a billion [in revenue]. We were starting to hit those awkward teenage years,” she explains. Managing the company’s assets would be critical to maintaining momentum. But “we don’t own buildings or raw materials,” says Stoner. “Our greatest asset is our talent.” First, though, the company had to bring all its HR data together, applying the master data management services Informatica delivers to clients to its own internal employee data in order to layer analytics atop it.

For most companies, just arriving at a single version of the HR truth can be beneficial. Paul Lones, senior vice president of IT at Fairchild Semiconductor, says that two years ago, managers at the chip maker lacked a single system that could provide an accurate tally of employees worldwide, let alone show the amount of employee turnover. Reports had to be compiled from multiple systems. Succession planning took place in Microsoft Word documents. Compensation decisions might be made in isolation.

Now that the company has implemented cloud-based Workday, managers can access data on all 9,000 employees in one place, including succession plans, turnover trends and salary information. “A manager in the Philippines considering a raise and promotion for an employee can see in seconds how that will compare with others in the group and with local compensation trends and make that decision,” says Lones.

It may not be rocket science, but it’s a start–one that’s been a long time coming for many HR groups. Chiquita Brands, for example, had multiple homegrown and manual HR systems.

“It was a cobbled-together thing,” says Kevin Ledford, Chiquita’s CIO. “People spent 90 percent of their time figuring out where the data was and 10 percent on analyzing it.” In 2008, the company moved to a global HR system, which came in handy when Chiquita moved its headquarters from Cincinnati, Ohio, to Charlotte, N.C., and lost 75 percent of its corporate employees.

“It was very tumultuous. We threw all of our monkeys in the air, and they all came down in different buckets,” says Ledford. “It would have been a nightmare [without the global HR system].” Now that the company is exploring predictive HR analytics, that success with master data management “is everything,” says Ledford.

At Arcadis, Endry has connected his cloud-based workforce-management system to 11 other pieces of software, including ERP, learning management, payroll and an active directory. The combined data helps the company, which provides engineering services regarding infrastructure, water, environment and buildings, to staff client projects more efficiently and effectively.

“In the past we couldn’t tell who was mobile,” says Endry. “Now when we have a giant project in Ohio, we can see on a dashboard that we’ve got these three people in Boston willing to move there.”

Marc Franciosa, CIO of Praxair, has tied the company’s HR and employee performance systems to non-HR systems like SharePoint as a foundation for the company’s talent analytics initiative–no small task for the $11 billion industrial and medical gases company with 26,000 employees in 50 countries.

“The underlying data and processes have to be consistent to be able to do any real analytics with confidence,” says Franciosa. “For companies that are fairly mature that haven’t had a global environment before, it’s going through that initial normalization and standardization process to make sure that this certification, for example, means the same thing around the world,” says Franciosa. (He implemented SumTotal’s HR management system and ElixHR platform to link disparate data.) “The cleanup has been a challenge.”

Now, when Praxair wants to make a bid or sign a new customer, managers can analyze HR implications first. Do they have people who speak Portuguese, have the necessary certification, and are willing to relocate to Rio de Janeiro? “We can do some modeling of the skill sets to determine if it’s doable or if we will have to recruit externally,” Franciosa says.
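The staffing question Franciosa describes (do we have Portuguese speakers with the necessary certification who are willing to relocate?) boils down to filtering the employee pool on a few attributes. A minimal sketch, in which the records and field names are hypothetical rather than Praxair's actual schema:

```python
# Hypothetical employee records; field names are illustrative only.
employees = [
    {"name": "A. Silva", "languages": {"pt", "en"}, "certified": True,  "will_relocate": True},
    {"name": "B. Jones", "languages": {"en"},       "certified": True,  "will_relocate": True},
    {"name": "C. Costa", "languages": {"pt"},       "certified": False, "will_relocate": True},
]

def staffable(pool, language, need_certification=True):
    """Return employees who speak the language, hold the required
    certification (if needed) and are willing to relocate."""
    return [e["name"] for e in pool
            if language in e["languages"]
            and (e["certified"] or not need_certification)
            and e["will_relocate"]]

print(staffable(employees, "pt"))  # ['A. Silva']
```

If the internal pool comes up short, the same query tells managers how many roles would have to be recruited externally, which is the modelling Franciosa refers to.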

At GM, Arena has been implementing a three-phase analytics plan. First, integrate systems in a way that ensures highly accurate data is available. Next, push much of that data into standardized reporting tools and dashboards that business managers can use on their own. Then start building models. One of the first projects Arena implemented was a means-based comparison analysis of the top talent pool. The model examines every employee data field in the PeopleSoft database to look for important insights, Arena says. “Five or six experiences may jump out. Having international experience may statistically matter. Then we dig deeper. Are there certain types of international experiences that matter more than others? Does that need to happen earlier versus later?”
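A means-based comparison of the kind Arena describes can be sketched with plain statistics: for each field, compare the mean in the top-talent pool with the mean in the rest, and dig deeper where the gap is large. The records and field names below are illustrative, not GM's PeopleSoft schema:

```python
from statistics import mean

# Illustrative records: a top-talent flag plus a few numeric
# experience fields (hypothetical names, not GM's actual data).
employees = [
    {"top": True,  "intl_assignments": 2, "functions_rotated": 3},
    {"top": True,  "intl_assignments": 1, "functions_rotated": 4},
    {"top": False, "intl_assignments": 0, "functions_rotated": 1},
    {"top": False, "intl_assignments": 1, "functions_rotated": 2},
]

def mean_gaps(records, fields):
    """For each field, the mean among top talent minus the mean among
    the rest; large gaps flag experiences worth investigating."""
    top = [r for r in records if r["top"]]
    rest = [r for r in records if not r["top"]]
    return {f: mean(r[f] for r in top) - mean(r[f] for r in rest)
            for f in fields}

gaps = mean_gaps(employees, ["intl_assignments", "functions_rotated"])
print(gaps)  # {'intl_assignments': 1.0, 'functions_rotated': 2.0}
```

A real analysis would of course test whether each gap is statistically significant before treating it as an insight, which is what Arena means by "may statistically matter."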

Divining interventions

The real power is in applying predictive analytics to a corporate population. “Everyone’s talking about it,” says Chiquita’s Ledford, “looking at all this data you have and trying to figure out the future.”

“The typical data warehouse approach is looking back, but what we wanted to do was start looking forward,” says Praxair’s Franciosa. “What are the leading indicators we should be looking for? What are those metrics or data sets we don’t have but, if we did, would really help us? What external data sources could we use to drive better decision-making?”

For example, Praxair is growing by double digits in China. “Rather than hiring a ton of people and trying to recreate the wheel [there], what I’ve been driving is how do we replicate rapidly those things that have made us successful in our mature geographies,” says Franciosa. “There’s a huge opportunity to use predictive analytics based on where we’re best-in-class.”

The predictive analytics market for HR is nascent and wide-open. “We partner with them all, from IBM to SuccessFactors to PeopleSoft,” says GM’s Arena. “They’re all trying to play in the space, but I don’t know that any of them have figured it out.”

Arena’s team has built a model that predicts what changes in attrition rates will mean for GM’s workforce. Previously, if someone proposed hiring a bunch of young engineers, no one could be certain if that was the best decision. “Now we can say, let’s see what that looks like five years from now,” Arena says. “What are the dividends if we hire 200 entry-level engineers? Might we be better off hiring 50 advanced engineers? We can take that information to the head of engineering and say, ‘Here’s what it will cost you.'”

Arena thinks that analyzing the interactions of networks of employees holds the most promise. The process starts with a survey. “We ask questions of a given network: Who do you go to when you want to shop a new idea? Where do you turn when you need resources to get things done? Then we run the analytics,” Arena explains. “We can tell you who the brokers are, who’s central in that network, who are the bridges across silos. We can even predict who’s a flight risk based on where they sit in the network.” And by identifying which employee networks are most productive, Arena says there’s a chance to improve performance across the company.
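The survey-driven network analysis Arena describes can be approximated with a small sketch: build a graph from "who do you go to?" answers, then compute who is central and who bridges otherwise unconnected people. The names, responses and the crude broker heuristic below are illustrative assumptions:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical survey answers to "who do you go to?".
goes_to = {
    "ana": ["bo", "cy"],
    "bo":  ["cy"],
    "cy":  ["ana", "bo", "dee"],
    "dee": ["cy"],
}

# Build an undirected graph from the survey responses.
neighbours = defaultdict(set)
for person, contacts in goes_to.items():
    for c in contacts:
        neighbours[person].add(c)
        neighbours[c].add(person)

# Degree centrality: who is most connected in the network.
centrality = {p: len(n) for p, n in neighbours.items()}

def broker_score(person):
    """Crude broker signal: the share of a person's contact pairs that
    are NOT directly connected to each other - a proxy for bridging
    otherwise separate parts of the network (silos)."""
    pairs = list(combinations(neighbours[person], 2))
    unlinked = sum(1 for a, b in pairs if b not in neighbours[a])
    return unlinked / len(pairs) if pairs else 0.0

print(max(centrality, key=centrality.get))  # 'cy'
```

Real network analysis would use betweenness centrality and community detection on far larger graphs, but the principle is the same: position in the network, not job title, identifies the brokers and the flight risks.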

At Coca-Cola Enterprises, Crumley is integrating business data with HR data for predictive purposes. “That’s where you can really get sexy with it,” he says. While working with IT to clean and standardize all the data, Crumley is partnering with each corporate function to find out what business metric might be the key measure of success for their employees. By combining those business metrics with people data, he hopes to be able to “reverse engineer what a successful employee is, so we can get the best candidates in the future.”

Employee engagement is a leading indicator of talent retention at Coca-Cola Enterprises. And one of the biggest boosters of employee engagement numbers is access to on-the-job learning, so Crumley’s team is trying to figure out how to make training opportunities more universal. For example, why are folks in this shift at this plant not taking classes as much as other employees in that line of business? With answers to questions like that, HR can intervene to address the core reason, whether that’s an accessibility problem or a manager who needs more coaching. Crumley says the effort will gain even more steam when HR is able to show, through data analytics, a correlation between taking a specific training course and an improvement in sales or productivity.

At call-center provider NOVO 1, CTO Mitchell Swindell has implemented a predictive hiring tool from Evolv. Applicants complete a Web-based application that screens for attitude, propensity for customer service, and voice capabilities. The software also shows the candidate what it’s like to work in a call center in hopes of screening out those who would be a poor fit in the high-turnover industry. The tool then gives the candidate a red, yellow or green rating, at which point candidates rated green or yellow are invited for in-person interviews. The hiring decision is still in the hands of a human, but the system has predicted with 80 percent accuracy the company’s top performers, based on 90-day follow-up data on the hired employees. Since introducing the algorithm-enhanced hiring system, tenure is up by 25 percent, agent productivity has increased 30 percent, and the overall staffing budget has decreased 11 percent. Swindell has integrated Evolv with the company’s payroll, workforce-management and proprietary quality systems to help develop a more nuanced profile of the best employees.
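The red/yellow/green screening step can be sketched as a simple composite-score mapping. Evolv's actual model is proprietary, so the scores and cut-offs below are invented for illustration:

```python
# Hypothetical screening scores (0-100) and illustrative cut-offs;
# Evolv's real model and thresholds are proprietary.
def traffic_light(attitude, service, voice,
                  green_cutoff=75, yellow_cutoff=50):
    """Map a composite screening score to a red/yellow/green rating;
    only green and yellow candidates advance to in-person interviews."""
    composite = (attitude + service + voice) / 3
    if composite >= green_cutoff:
        return "green"
    if composite >= yellow_cutoff:
        return "yellow"
    return "red"

applicants = {"p1": (90, 80, 85), "p2": (60, 55, 50), "p3": (30, 40, 35)}
ratings = {name: traffic_light(*scores) for name, scores in applicants.items()}
invited = [n for n, r in ratings.items() if r in ("green", "yellow")]
print(invited)  # ['p1', 'p2']
```

In a production system the cut-offs would be calibrated against the 90-day follow-up data the article mentions, so the rating tracks actual post-hire performance rather than fixed thresholds.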

At Chiquita, Ledford is exploring predictive analytics to help the company find, train and retain its “bananaeros”–experts in growing bananas. “Those guys are really hard to find, as bananas have taken a backseat to coffee and tourism,” says Ledford. Analytics could enable managers to predict which lower-level employees “could become our next wave of banana folks,” says Ledford, and determine the right training and grooming to make that happen.

Employee Tracking

There’s also a gold mine of information in how people move through an organisation, and a handful of companies are looking at physically tracking employees–often via RFID-enabled badges–to find out how people work and what impact that can have on business outcomes.

“The barrier at this point is not the technology,” says Waber, whose Sociometric Solutions is an early provider of sensor-based analysis. “I can tell you how much more money a company makes when two employees eat lunch together. We can do extremely sophisticated things. The challenge is that organisations are not used to looking at themselves this way.”

When GM’s Arena was senior vice president of leadership development at Bank of America in 2010, the financial services company used sensors to track 90 call-center workers over the course of several weeks and found that those in the most cohesive networks were the most productive. By switching from solo to group break times, encouraging more socialization, agents improved efficiency by 10 percent. “As silly as it sounds, it worked,” says Arena. “The analytics told us it was probably the right thing to do.” Sometimes it’s as simple as moving desks closer together, says Waber. Steelcase’s Sullivan has discovered that the size of lunch tables can have an impact on productivity. You can’t force people to interact more, says Waber, but based on the data, you can “engineer serendipity.”

Although Arena conducted a number of experiments using sensor data at BofA, he’s not quite ready to start tracking workers at GM. “I’m a huge advocate of sensor work,” Arena says. “But it’s laden with trust and privacy issues and a lot of organisations just aren’t ready for that. It can be a bit of a slippery slope.”

Praxair is conducting a pilot using sensors on its remote workers. The system will measure how long it takes a worker to, say, install a tank for a customer, by monitoring their movements via a sensor on their protective equipment. The sensor also monitors workers for exposure to harmful gases. If gas is detected, an alarm goes off and the monitoring center will attempt to communicate with the worker. Franciosa envisions integrating the sensor data into other corporate systems to uncover correlations between events and particular locations, types of employees, or certifications.

The Importance of Transparency

Franciosa expects employees to put up some resistance to being physically tracked, much like the pushback the company encountered when it was first placing computers onboard its trucks. “It was viewed as Big Brother wanting to know how fast I drive or how hard I brake,” says Franciosa. “The way to alleviate that is transparency. People won’t like being physically monitored if they think we’re trying to find out how long their break was. So we have to be completely transparent that we are using this for safety and long-term productivity. They’ll recognize the value in that.”

HR collects all kinds of sensitive employee information, but employees see physical tracking as particularly intrusive. “It is the boundary to cross,” says Steelcase’s Sullivan. All of Steelcase’s sensor-related experiments are opt-in. Company analysts see only aggregate data, not individual histories. And Sullivan’s team communicates the process and the intentions not just to those who have signed up, but also to everyone on the campus.

“In the U.S., employees don’t really legally have protections around this data. A company can track you wherever you go and listen to all your conversations,” says Waber. “But that defeats the purpose of this approach, which is trying to help people work better, be happier and stay at their jobs.”

Communication is critical with any collection and analysis of people data–not just sensor data. “I don’t think we’re doing anything that people haven’t been trying to do for years,” says Informatica’s Stoner. “But we have to say what we will do with that data.”

Praxair’s Franciosa has a close partnership with his legal teams around the world to navigate the various data privacy and protection issues in each country. “But even once we understand that we can have this data, we have to be very transparent and say, here’s why we want your picture or your talent profile,” Franciosa says. “That goes a long way toward gaining both credibility and traction.”

The role of data in the people business

“What’s really happening right now is a shift in HR from an art to a science,” says Crumley of Coca-Cola Enterprises, who’s currently exploring how social network data and gamification might become part of his HR analytics platform. “A lot of HR teams are trying to figure out how to make that shift quickly so it’s no longer HR sitting around waiting to be pulled in, but HR coming to the table with nuggets of wisdom.”

Data analytics could enable HR to elevate itself from a tactical support function to a business partner on strategy, which ought to sound pretty familiar to CIOs.

But there are limits to HR’s data-driven transformation. “[Analytics] are all about probability, and there’s just so far you can go with probability,” says Crumley. “If you want to figure out how many employees you need to launch a new product, it can get you in the right ballpark. When it comes to predicting turnover, it’s not an exact science. People are people.”

“It’s never black-and-white when you’re talking about people,” says Stoner of Informatica. While some folks get stars in their eyes when talking about big data, Stoner often sees a bigger haystack to sift through. But analytics, she says, help point companies in the right direction. “In HR, we live in a world where data brings more questions. You always have to look beneath it,” she says. “It’s not an exact science. But at least it gets us looking at the right part of the haystack so we can get to the answer faster.”

That’s why GM’s Arena says his talent analytics will never be fully automated. “Sometimes we get projections wrong for all kinds of reasons. It can take several iterations. But HR still loves it, because it equips them to make intelligent decisions for their business partners.”

Nine critical success factors for talent analytics

IT and HR leaders who have deployed workforce analytics systems offer these tips for success

Lay the foundation. Aim for a single source of HR information, if possible.

Account for imperfections. “We’ve got our foundational issues, for sure, but if you wait until it’s completely perfect, you won’t get anywhere,” says Michael Arena, GM’s director of global talent and organisational capability. IT can build reconciliation processes and automated audits to help HR with data issues.

Start small. Marc Franciosa, CIO of Praxair, began with an analytics pilot to map the company’s high-potential employees. “If we had tried to do one big-bang workforce analytics project, it would never have gone anywhere,” he says. “You have to get some traction in order to get credibility.”

Tap internal experts. Both Franciosa and Arena have taken advantage of statisticians and others from their corporate R&D groups to develop their talent analytics programs.

Share the load with HR. Take advantage of HR and IT’s complementary skills. IT can focus on vendor management, security and deployment, while HR might manage requirements gathering, process standardization and communication.

Bring in business know-how. David Crumley, VP of global HR information systems for Coca-Cola Enterprises, works with business leaders from functions such as supply chain, sales and finance to determine what data will drive talent analytics.

Hire external change-management help. Typically, HR leads change management in an organisation. But avoid DIY change management in analytics efforts, warns Mark Endry, CIO of Arcadis U.S., who recently spent six months as interim SVP of HR. Hire external help to guide HR through its big changes.

Take action. “Everyone wants to have more data, but we have to ensure that folks know how to use it,” says Crumley, who had to do more hand-holding than he initially anticipated. “It’s not that anyone is pushing back, but you have to embed the use of the data into the [corporate] DNA.”

Democratize the systems. For people analytics to truly deliver, they need to be self-service tools that business managers and leaders can use. “Early on, we thought the customer [for these tools] was HR,” says Crumley. “But it’s the business leaders that control these decisions daily.”
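The "automated audits" mentioned under "Account for imperfections" above can be sketched in a few lines. This is a minimal, hypothetical example: the field names and rules are invented for illustration and are not GM's actual process:

```python
# Hypothetical automated audit of HR records before they feed talent analytics.
# Flags the two issues that most often skew results: duplicates and missing fields.

REQUIRED_FIELDS = ("employee_id", "hire_date", "job_level")  # illustrative field names

def audit_records(records):
    """Return a dict of data-quality issues found in a list of HR records."""
    issues = {"duplicates": [], "missing_fields": []}
    seen = set()
    for rec in records:
        emp_id = rec.get("employee_id")
        if emp_id in seen:
            issues["duplicates"].append(emp_id)
        seen.add(emp_id)
        missing = [f for f in REQUIRED_FIELDS if not rec.get(f)]
        if missing:
            issues["missing_fields"].append((emp_id, missing))
    return issues

records = [
    {"employee_id": 1, "hire_date": "2010-03-01", "job_level": 4},
    {"employee_id": 1, "hire_date": "2010-03-01", "job_level": 4},   # duplicate record
    {"employee_id": 2, "hire_date": "2012-07-15", "job_level": None},  # missing level
]
report = audit_records(records)
print(report["duplicates"])      # [1]
print(report["missing_fields"])  # [(2, ['job_level'])]
```

In practice such checks would run on every load into the single source of HR information, with exceptions routed back to HR for reconciliation.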

Big Data Analytics – Make it a business initiative and determine ownership

Although big data has recently taken the mainstream spotlight and become a major initiative in the enterprise, it has always been at play in the wireless space. The challenge now is to deliver on the unique monetization opportunity presented by building analytics and applications on top of data of this scale and timeliness.

Mobile operators are at an important crossroads. New entrants have forced these global brands to rethink how they will effectively compete to ensure long-term viability. For operators, it’s no longer about having the best network or hottest devices; it’s about having the smartest strategy for making the most of their greatest asset – customer data.

Although mobile operators will choose a variety of paths, the monetization of big data will be key to securing a viable future. For some, success will mean the ability to generate insights that improve customer interaction and proactively address customer demands. For others it will mean the creation of new revenue streams, such as selling data to third parties. Regardless of the path chosen, successful monetization rests on one key element – the strength of the analytics at play.

Operators agree that big data analytics should be a strategic priority to drive customer monetization opportunities, but the barriers are often significant. What is required is a major shift in mindset, skillset and technology. Operators must transform their organizations from the top-down in order to become truly data-driven. They must acquire new talent as they shift their focus from aggregating data to gleaning insights from it. And most importantly, they must embrace the latest technologies to be able to act on these insights in an automated fashion.

With increased competition and the heightened risk of over-the-top players infringing on their customer profits, forward-thinking operators are prioritizing their big data analytics strategies and turning to tactics that are proven to accelerate monetization opportunities.

–Identify the strategic need before laying the groundwork: Too often, big data is seen as a technology initiative rather than a business one. Heavy emphasis is placed on determining the infrastructure required to support big data, but too little thought is given to what comes next. This can result in costly efforts that fall short on return on investment. Operators are making a point of looking ahead at what the various owners will do with the data, i.e., what problems they may solve and how they can rethink future engagement strategies. Only then do they focus on the most effective plan for extracting, compiling and acting on the data.

–Determine the owner of the problem – and the solution: Misalignment among C-levels and functional teams has been deemed a major barrier for moving big data initiatives beyond the exploration stage. Although the CIO has traditionally “owned” anything in the data realm, we are seeing business owners become more proactive in not only identifying the problems to be solved but also which solutions can best solve them. For example, CMOs within several leading operators are tackling the problem of churn by leading a big data analytics initiative from initiation to execution. This “single-owner” approach ensures alignment between the what, the why and the how.

–Focus on the data that matters: Taking an inside-out approach of analyzing streams of data to then determine the problem you want to solve can leave you trying to boil the ocean. To combat analysis paralysis, operators are identifying specific business problems that, when solved, can have a substantial impact on the bottom line, e.g. declining recharge volume, data adoption or churn. This focused effort accelerates the process of determining which behavior changes will have the greatest impact on the goal at hand, and the specific data required for modeling, analyzing and monitoring those behaviors.

–Invest in more sophisticated analytics: Many operators continue to miss the mark on when, where, and how to connect with their customers despite the vast amount of data that they have. To determine what’s relevant for each customer and deliver it in the right context, operators are adopting more sophisticated analytics to understand dynamic customer behavior over time. Flexibility, timeliness and the ability to scale to analyze millions of customers across any number of dimensions are top of mind for operators as they invest in more sophisticated analytics.

–Move toward real-time: Many operators continue to rely on batch processing of events to determine how and when to engage their customers. Yet to proactively address customer needs, they know they must be able to analyze data in real time. True customer-centric operators are moving toward real-time analytics by confronting the technology challenges that stand in the way of easily and quickly getting the data. It is not realistic to expect that every data source can be analyzed in real time, but operators are prioritizing the sources that deliver rich behavioral insights, such as usage and transactional records, to drive timelier and higher-value customer engagement.
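As a rough illustration of moving from batch reports to real-time cues, the sketch below keeps a sliding window of usage events and flags when a customer's activity drops. The window size, threshold and timestamps are invented for illustration:

```python
# Illustrative sliding-window monitor: flag low recent activity as it happens,
# rather than discovering it in a nightly batch report.
from collections import deque

class SlidingWindowMonitor:
    """Track usage events in the last `window_seconds` and flag a drop in activity."""
    def __init__(self, window_seconds=3600, low_usage_threshold=2):
        self.window = window_seconds
        self.threshold = low_usage_threshold
        self.events = deque()  # event timestamps, oldest first

    def record(self, timestamp):
        self.events.append(timestamp)
        self._evict(timestamp)

    def _evict(self, now):
        # Drop events that have aged out of the window.
        while self.events and self.events[0] <= now - self.window:
            self.events.popleft()

    def low_usage(self, now):
        """True if recent activity fell below the threshold: a cue to engage the customer."""
        self._evict(now)
        return len(self.events) < self.threshold

m = SlidingWindowMonitor(window_seconds=3600, low_usage_threshold=2)
m.record(100)
m.record(200)
print(m.low_usage(300))   # False: 2 events within the last hour
print(m.low_usage(4000))  # True: both events have aged out of the window
```

A production system would attach such a signal to a streaming pipeline per customer, but the core idea of acting on the window rather than the archive is the same.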

–Pursue plug-and-play for greater ease and speed: While analytics were at play long before the term big data hit the spotlight, many operators remain challenged to advance their capabilities, especially given the explosion of new technologies and techniques specific to mobile. To leverage more advanced analytics, such as behavioral clustering, prescriptive analytics and machine learning, operators are turning to productized solutions which ease the IT pain and expedite speed to market. Operators are also expressing openness to a cloud-based approach, given the cost and competitive value of “analytics and action in a box.”
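The "behavioral clustering" named above can be illustrated with a toy k-means over invented usage figures (calls per day, MB of data per day). A production system would use a hardened library implementation; this sketch only shows the idea of grouping customers by behavior:

```python
# Toy k-means: group customers into behavioral segments by usage. All numbers invented.
def kmeans(points, k=2, iters=10):
    centers = points[:k]  # naive initialisation: first k points as centers
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to the nearest center (squared Euclidean distance).
            i = min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster (keep old center if empty).
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

usage = [(1, 50), (2, 60), (1.5, 55),       # light users
         (10, 900), (12, 1100), (11, 950)]  # heavy users
centers, clusters = kmeans(usage, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]: two behavioral segments emerge
```

Each segment can then be targeted with its own engagement strategy, which is the monetization step the productized solutions above package up.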

With increased competition and the emergence of OTT players, it has never been more important for operators to leverage their customer data assets to get ahead of the curve from a customer experience and monetization standpoint. Strategically minded operators who are embracing the next wave of big data analytics are transforming their business and customer engagement models. They are becoming more competitive, increasing customer value and profitability.

Blog || CRM and Big Data: a successful combination?

Within most organisations, CRM is mainly used to manage customer data, support the sales and service process, and identify leads and opportunities through marketing campaigns. Meanwhile, ‘traditional’ CRM has been able to benefit from a number of newly introduced technologies in recent years:

1. Social media: marketing and customer service have changed, and webcare is now embedded in many organisations.

2. Mobile: mobile devices and applications have made CRM more accessible and created several new sales and marketing channels.

3. Software as a Service (SaaS): cloud solutions have made CRM cheaper.


Big Data

Big Data is the next big change that organisations can capitalise on, and it affects the way they interact with customers. It will also change the way we use CRM.

What is Big Data? Several definitions are in circulation and they are not unambiguous, but the definition below, from the book “Big Data: A Revolution That Will Transform How We Live, Work, and Think” by Viktor Mayer-Schonberger and Kenneth Neil Cukier, fits the application within CRM well:

‘The ability of society to harness information in novel ways to produce useful insights or goods and services of significant value’ (page 2). And further on: ‘At its core, big data is about predictions. It’s about applying math to huge quantities of data in order to infer probabilities’ (pages 11 and 12).

Big Data, then, is about the ability to use information in new ways to gain useful insights, and about prediction: applying mathematical logic to large quantities of data to infer probabilities.
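As a toy illustration of "applying math to quantities of data in order to infer probabilities", the sketch below estimates a churn probability from a small, entirely made-up history of past customers:

```python
# Made-up history of past customers: (had_service_complaint, churned) pairs.
history = [
    (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True), (False, False), (False, False),
]

def churn_probability(had_complaint):
    """Estimate P(churn | complaint status) as a simple frequency over the history."""
    matching = [churned for complaint, churned in history if complaint == had_complaint]
    return sum(matching) / len(matching)

print(round(churn_probability(True), 2))   # 0.67: complainers churned far more often
print(round(churn_probability(False), 2))  # 0.2
```

Real Big Data Analytics applies far richer models to far more data, but the principle of inferring probabilities from observed behaviour is the same.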

Is Big Data merely a technological innovation? No, certainly not. To get the maximum benefit from Big Data, your organisation must also adapt by redefining its processes to support the analytical capabilities and the decisions that follow from them. Data quality, with the accompanying vision, procedures and roles, also becomes even more important.


Application

How can you benefit from Big Data and use it to improve your customer insights and make your business perform better? By structuring all the structured data already captured in CRM, together with the unstructured data (social media, audio, video, photos) coming at us from channels such as Twitter and Youtube, and using it for analysis: also known as Big Data Analytics.
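As a minimal illustration of structuring unstructured data, the sketch below reduces a free-text social-media post to a structured record that could be joined with CRM data. The keyword lists, handle and product name are invented for the example:

```python
# Turn an unstructured post into a structured row for analytics (illustrative only).
import re

POSITIVE = {"great", "love", "fast"}   # toy sentiment lexicons
NEGATIVE = {"slow", "broken", "bad"}

def structure_post(handle, text):
    """Reduce a free-text post to a structured record for analysis."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return {
        "handle": handle,
        "sentiment": "positive" if score > 0 else "negative" if score < 0 else "neutral",
        "mentions_product": "widget" in words,  # hypothetical product name
    }

row = structure_post("@jansen", "Love the new widget, support was great!")
print(row)
# {'handle': '@jansen', 'sentiment': 'positive', 'mentions_product': True}
```

Once posts are rows like this, they can be aggregated, trended and linked to customer records just like any other CRM data.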

The number of powerful tools that apply the right logic and draw correlations across large data sets is growing. Before long, even more will become available for processing and analysing this data (including via the cloud).


Which (new) CRM insights does Big Data Analytics offer?

Consider, for example:

  • predicting your customers’ buying behaviour (recognising patterns and trends earlier);
  • better decision-making about which strategy to pursue, through new insights and (online) evaluations of customer experiences;
  • continuously refining your customer-facing business processes, because all information about the results of sales or marketing actions is immediately available;
  • spotting prospects earlier via various internal and external channels;
  • improving your conversion rate by advising customers better during their purchase;
  • identifying your most valuable customers (ambassadors) so they can be given preferential treatment, with the aim of increasing lifetime customer value; your least valuable customers can be re-approached through marketing campaigns;
  • improving your customer service by monitoring customers’ opinions about your product or service, covering both feedback via internal channels and external feedback via social media and forums.

You may be thinking: with a little effort, I can already extract these insights from my (CRM) system today. That is true up to a point. The difference, however, is that with Big Data Analytics a much larger volume of data, including previously untapped unstructured data, can be analysed. This leads to improved, deeper and more accurate predictions and insights, from which a competitive advantage can be gained.


The flip side

Are there only benefits to be gained from using Big Data? Of course not; the availability of all this information also has a flip side: think of privacy sensitivity, the risk of confusing correlation with causation, and the analysis of irrelevant unstructured data.

In my next article I will go deeper into the dangers, risks and challenges of using Big Data technology within CRM.

Also read my article on Computable.nl: http://www.computable.nl/artikel/opinie/crm/4756445/2333360/crm-en-big-data-vormen-succesvolle-combinatie.html#ixzz2XnM8NJkn