How data-driven is your marketing department?

Marketing has evolved from creative newspaper, magazine and billboard advertising to processes designed to drive customer engagement and accelerate business growth. With all the data now at their fingertips, what does it take to become a data-driven organisation?

“Ultimately, a data-driven marketing organisation learns to use data analytics as part of all marketing campaigns; from setting-up your campaign to post-campaign review. Within a data-driven organisation, information can move freely, is consistent across all channels and decision makers at all levels use data to better serve their customers.” (source: Forbes Insight Report) 

So what does this mean and how can your organisation use data analytics to create data-driven marketing campaigns? 

Data Presentation in Business Intelligence

Introduction

In the last three articles (1, 2, and 3) of this series, we discussed the overall Business Intelligence system architecture and its components: how ETL stores source data in the required form, and how OLAP (the cube) processes, aggregates, and stores huge amounts of data. In this article, we'll discuss another part of the overall BI system architecture: information delivery. How can we visualize or deliver data so that business owners and other users in the community can spot business trends and get the most out of the system? In this part of the series, you'll learn more about information delivery and the various ways to represent data that help business owners make decisions.

Importance of Data Presentation

With a basic understanding of Business Intelligence in place, we should understand the importance of data presentation in information delivery and how useful it is for business users. After data has been stored in the required form, either in a data warehouse/data mart database or in an OLAP cube, it can be showcased in forms that become the source of required information for all BI users; in the BI world, these are known as reports. Reports can be data reports, executive dashboards, or conference presentations.

Data reports may use different layouts to present the data, such as metrics, charts, funnel reports, animated representations of data, and so on. Which of these layouts to use should be decided based on the targeted audience.

All these reports may present Key Performance Indicators (KPIs) to visualize the data. A KPI is a type of performance measurement: an organization may use KPIs to evaluate its overall success or progress, or to evaluate the success or progress of a particular activity in which it is engaged. This data presentation is the logical end point of any BI system and becomes the source of truth for decision makers.

Reporting Tools

A reporting tool provides interactive data exploration and visual presentation of data stored in a database or cube. The selection of a reporting tool for your purpose is ultimately at your discretion, but it should be supported by facts, much like the selection criteria for OLAP tools that we discussed in the previous article. There is no rule of thumb for adopting a reporting tool; I want to reiterate that the identification and adoption of any BI tool is not a personal decision and can't be finalized without thorough study. Refer to the previous articles of this series to learn more about why this is important for the successful implementation of any BI system.

There is a variety of reporting tools available to present data and deliver information. Some come with Business Intelligence suites, such as Microsoft SQL Server Reporting Services, Oracle Reports, IBM Cognos Report Studio, and MicroStrategy Reporting Suite. Open source tools include BIRT, JasperReports, and Pentaho, and other reporting tools include Tableau, QlikView, and the like.

Reporting tools help visualize data in multiple, interconnected ways by using data regions. You can display data organized in tables, matrices, or cross-tabs with expandable and collapsible groups, as well as charts, gauges, indicators or KPIs, and maps, with the ability to nest charts inside tables.

Types of Reports

Once we finalize the reporting tool, the next step is to select the report design and the data display that will present the data as information to the user community. We need to consider several key factors when designing any report: the structure and tone of the report, its length, the kinds of data to include (tables, figures, graphs, maps, or pictures), the level of detail, the positioning of data within the report, and, most important of all, visual sophistication.

We can explore different types of reports while considering the preceding key factors and organizing data in a variety of ways to show the relationship of general information to detail. We can put all the data in the report but keep it hidden until a user clicks to reveal details; this is a drilldown action. Data can be displayed in a data region, such as a table or chart, that is nested inside another data region, such as a table or matrix. Another way is to display the data in a subreport that is completely contained within a main report. Or, you can put the detail data in drillthrough reports: separate reports that are displayed when a user clicks a link. We'll explore these ideas next.

Drilldown Reports: A Drilldown report provides plus and minus icons on a text box; these icons enable users to hide and display items interactively. This is called a drilldown action. For a table or matrix, it shows or hides static rows and columns, or rows and columns that are associated with groups.

Drillthrough Reports: A Drillthrough report is a report that a user opens by clicking a link within another report. Drillthrough reports commonly contain details about an item that are contained in an original summary report.

Subreport: A Subreport is a report item that displays another report inside the body of a main report. Conceptually, a subreport in a report is similar to a frame in a Web page. It is used to embed a report within another report.

Nested Reports: A nested report places one data region, such as a chart, inside another data region, such as a matrix, typically to display data summaries in a concise manner or to provide a visual display alongside a table or matrix display.

A report type can apply to a specific report item, a layout design, or a solution design. A single report can have characteristics of more than one type; for example, a report can be, at the same time, a stand-alone report, a subreport referenced by a main report, the target of a drillthrough action in a different main report, and a drilldown report.
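
None of these report types is tied to a particular product, and the core idea behind a drilldown, a summary that expands into detail on demand, can be prototyped outside any reporting tool. Below is a minimal Python/pandas sketch (the table and column names are invented for illustration) that builds the collapsed summary a drilldown report would show, alongside the detail rows revealed when a group is expanded.

```python
import pandas as pd

# Invented sales detail data; in a real report this would come from
# the data warehouse / data mart or an OLAP cube.
detail = pd.DataFrame({
    "region":  ["East", "East", "West", "West", "West"],
    "product": ["A", "B", "A", "B", "C"],
    "revenue": [1200, 800, 950, 400, 650],
})

# Summary level shown while the group is collapsed (the "minus" state).
summary = detail.groupby("region", as_index=False)["revenue"].sum()

# Detail level revealed when the user expands a region (the "plus" state).
drilldown = {region: rows for region, rows in detail.groupby("region")}

print(summary)
print(drilldown["East"])   # rows a drilldown action would reveal for East
```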

Data Sources for Reports

Reports can display data stored in databases, in the form of a data warehouse (DW) or data mart (DM), or in a cube. Data is exposed to a report through its data source and dataset properties. These data sources and datasets can be dedicated to a single report or shared across multiple reports. The report definition is always independent of the data source and dataset and can be managed separately.

Different types of reports offer different interactive properties, such as drillthrough actions, expand/collapse toggles, sort buttons, tooltips, and report parameters that enable report reader interaction. To control data display or user interaction, we can use report parameters combined with expressions, which provide the ability to customize how report data is filtered, grouped, and sorted.
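
To make the idea of parameter-driven filtering concrete, here is a minimal sketch using Python's built-in sqlite3 module; the sales table, its columns, and the region parameter are invented for the example, but the pattern is the same one a parameterized report dataset query follows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, product TEXT, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [("East", "A", 1200), ("East", "B", 800), ("West", "A", 950)])

def report_dataset(region, sort_by="revenue"):
    """Return the dataset for one report rendering, filtered and sorted by parameters."""
    assert sort_by in {"revenue", "product"}          # whitelist the sort column
    query = (f"SELECT product, revenue FROM sales "
             f"WHERE region = ? ORDER BY {sort_by} DESC")
    return conn.execute(query, (region,)).fetchall()

# The report reader picks 'East' in the parameter prompt; the dataset is
# filtered and sorted before the layout (table, chart, and so on) is rendered.
print(report_dataset("East"))
```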

Industry Trends in BI Reporting

Industry trends are changing rapidly, and user expectations of information delivery by a BI system are growing with them. In the current data world, users not only analyze structured data but also want quick information on top of huge volumes of unstructured data. We need to adopt and support new trends to meet user objectives in a rapidly changing paradigm.

We are observing great change in data visualization; the initial visual data discovery releases from big organizations such as Microsoft, SAP, IBM, MicroStrategy, SAS, and Oracle tended to have limited capabilities, but the gap is slowly closing. The specialty vendors and the heavyweights alike are trying to find the right balance between analysis and trusted data.

Another important trend is mobility. Initially, the mobility of information was “nice to have,” but now it is an absolutely essential component of any useful report. The first mobile BI products allowed users to look at data remotely on their tablets and mobile phones, and as devices improved, dashboards and other visual representations made it easier for people to access the information they needed in the format they desired.

Another new trend is presenting massive amounts of enterprise-wide data (Big Data). Everyone is talking about big data, but in the business intelligence world it means one thing: organizations are being overwhelmed by massive amounts of new information that they need to analyze quickly and accurately. Unstructured data will be a big part of this change, because the ability to look at information not stored in spreadsheets and databases lets organizations analyze information that they couldn't truly leverage a few years ago.

Organizations are well advised to change the way they operate, align with industry trends for better results, and use the best of technology in their own interest.

Summary

In this article, we discussed how data visualization plays an important role in the successful implementation of a BI system. We discussed various types of reports and the factors that guide the design and implementation of any report. Information delivery is the last layer of a BI system, but it is the presentation layer that turns data into information. It is what allows a diverse user community to consume that information; without it, users are in no position to make informed decisions.

Data visualization helps to bring data ‘alive’

Picture this: an easy-to-digest, up-to-the-minute window on how your business is performing. To some, it may sound like the panacea for the over-burdened executive desperate for details on their key targets. For others it may merely conjure up images of dispiriting data dashboards; technology that has promised so much yet delivered so little.

Now, however, a new breed of advanced data visualisation tools is arriving that claims to make good on those pledges of instant insight.

Part of the problem for many of the firms that wanted ready-made business performance information has been an over-reliance on the wrong tools, says Jeremy Pile, technical director at food supply chain firm Muddy Boots. “By the time you’ve built a report using traditional business intelligence tools, it’s out of date,” he says.

Muddy Boots acts as an intermediary between food suppliers and the UK’s leading retailers, including Marks & Spencer. For example, if a retailer wants to ensure the quality of the food from its suppliers, Muddy Boots provides the data from the field to the shelf that shows how the produce has been cared for.

To help bring that data to life, Muddy Boots relies on data visualisation tools from software maker Logi Analytics. “Customers want us to present information that shows them how any given supplier is performing at a given time. They may want to know if that performance is consistent with previous performance. Having visualisation tools makes that much easier,” says Pile.

One aspect that sets the new breed of data visualisation tools apart from their BI precursors is that they provide different ways to present data, says Eddie Short, EMEA head of data and analytics at consulting firm KPMG. “Information is beautiful,” he says. “There have been analytical tools for as long as we’ve had computers, producing charts, and so on. But what we’re seeing now are tools that make it simple to understand the data.

“In the past, business reports were written by financial analysts and consumed by them and their management,” he adds.

Such reports might typically rely on bar charts, line graphs and the like; firms using advanced data-visualisation tools are more likely to be using bubble charts, geospatial heat maps or even word clouds.

Word clouds

Business intelligence purists might be sniffy about word clouds, but they turn out to be an excellent way of rapidly conveying important data, says Muddy Boots’ Pile. “For example, if one of our customers suddenly starts seeing a word like ‘packing’ appearing prominently in their word cloud, it might give them the first inkling that there could be a problem,” he says.
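
Behind a word cloud sits nothing more exotic than term-frequency counting; the counts drive the font sizes. A minimal Python sketch, with invented comment text standing in for supplier feedback, might look like this.

```python
from collections import Counter
import re

# Invented free-text quality comments from suppliers.
comments = [
    "packing damaged on arrival",
    "good quality but minor packing issue",
    "late delivery and packing torn",
]

words = re.findall(r"[a-z']+", " ".join(comments).lower())
stopwords = {"on", "and", "but", "the", "a"}
freq = Counter(w for w in words if w not in stopwords)

# In the rendered cloud each word's size scales with its count; a sudden
# jump for 'packing' is the early-warning signal Pile describes.
print(freq.most_common(5))
```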

But there’s more to advanced data visualisation than mere fancy new graphs, says Stephen Few, principal analyst at Perceptual Edge. Users are beginning to expect to be able to ‘play’ with charts, recompile them on the fly or drill into the data.

That in turn requires a set of tools that can pull data from disparate sources – not just structured sources but, potentially, a myriad of unstructured sources. And, yes, the tools may frequently even be required to bring sense to that over-hyped buzzword, big data.

Data visualisation at Volvo

At Volvo, Andy Johnson, a support and project analyst at the car maker, has been using BI tools to monitor various aspects of business performance for many years. These were initially green-screen systems, he notes. Over the past 15 months, Volvo UK has begun to explore ways of using data visualisation tools to monitor performance using much larger data sets.

The firm has always received sales data from the UK’s automotive trade body the Society of Motor Manufacturers and Traders, which helps Volvo track its performance against competitors. But historically, much of that data has been superfluous. “We target a fairly small fraction of the overall automotive market,” says Johnson.

Such practices are a common feature of trying to get business value from massive data sets, says Perceptual Edge’s Few. “At any given moment, most of the data that we collect is noise. This will always be true, because signals in data are the exception, not the rule.” But it is the ‘signals’ that give business leaders the insights they crave.

The analytics and data visualisation tool QlikView enables Volvo not just to zoom in on the market segment it is competing in, but also to compare how it is performing across the country, says Johnson. “Now, if we notice that sales have fallen unexpectedly in a particular location we might drill down and notice that one of our rivals has been offering a promotion in that region,” he explains.

But while the ability to create fancy charts using data from multiple sources is characteristic of advanced data visualisation tools, these are still capabilities that many traditional BI suppliers will claim for their products, says Short. To understand what really sets the new tools apart, you need to understand their underpinning philosophies, he adds.

Whereas typical business intelligence tools were built from a financial reporting perspective, with a bean-counter-like fixation on data accuracy, today’s data visualisation tools have been influenced by the philosophies of human-computer interaction, says Short.

Human-computer interaction

Human-computer interaction (HCI) emerged in the early 1980s, as computer scientists began to infuse software design with ideas from cognitive science and human factors engineering. But while concepts of usability percolated into office productivity software, HCI made minimal impact on enterprise applications – at least until recently.

The impact of HCI on analytic tools may be most easily understood as yet another example of the “Apple effect” on the enterprise. As Volvo’s Johnson explains, senior executives don’t just want to be able to access data visualisations on their desktop, they expect it on their smartphone or on their tablet.

And here, users are also coming to expect business charts to embrace the same type of interactivity they’ve grown accustomed to with Apple’s pinch-to-zoom capabilities, which set the iPhone’s image-processing capabilities apart.

While traditional BI vendors might see this as a systems integration problem, HCI professionals approach it from a usability perspective. A prime example is the work presented by Mikkel R Jakobsen and his colleague Kasper Hornbæk, researchers in human-centred computing at the University of Copenhagen, at the VIS 2013 conference held in Atlanta, Georgia, in October 2013.

Their paper explored how well users could interact with data visualisations when using devices with displays ranging from that of a typical smartphone to mega-size multiple monitor type displays. It’s often thought that the larger the display, the easier it is for people to work. But what Jakobsen and Hornbæk showed is that the time people take to complete tasks – in this case using maps – can actually be longer on a large display.

Other speakers at VIS 2013 also challenged the perceived wisdom over data visualisation, including Michelle Borkin from the School of Engineering & Applied Sciences at Harvard University. Her research demonstrated that the more data visualisations included annotations and decorations, often dismissed as “chart junk”, the more memorable they became.

Such observations are testament to the difficulty of successfully creating data visualisations. “If a business wants to get real insight from data – and not simply drown in it – it needs someone who understands both what the data presents and how to display the results,” says Short.

In many ways, advanced data visualisation tools fall neatly in line with one of the big trends in BI over recent years: the democratisation of analytics, taking data tools outside the purview of financial analysts and bringing them to the masses.

To deploy advanced data visualisation tools propitiously, however, firms need to be wary of the differences between this approach to data analysis and more traditional business intelligence. Creating successful data visualisations needs someone capable of both “left brain” and “right brain”-type thinking, says Short: the ability to understand what the data means and how it can be presented – on whatever device – in such a way that it will immediately convey meaning to whoever looks at it.

“When we first started using our visualisation tools, we were quite prescriptive about how they should be used,” says Volvo’s Johnson. “Then we began to see the ideas other people had and it opened up our thinking about what could be done.”

But it can also be prudent to place a limit on users’ freedom, as not every method of visualising data is useful, says Johnson. “It can be easy to get carried away,” he adds.

Tomatoes that send an email when they need more water: Sounds of the future?

Imagine cows fed and milked entirely by robots. Or tomatoes that send an e-mail when they need more water. Or a farm where all the decisions about where to plant seeds, spray fertilizer and steer tractors are made by software on servers on the other side of the sea.

This is what more and more of our agriculture may come to look like in the years ahead, as farming meets Big Data. There’s no shortage of farmers and industry gurus who think this kind of “smart” farming could bring many benefits. Pushing these tools onto fields, the idea goes, will boost our ability to control this fiendishly unpredictable activity and help farmers increase yields even while using fewer resources.
The big question is who exactly will end up owning all this data, and who gets to determine how it is used. On one side stand some of the largest corporations in agriculture, who are racing to gather and put their stamp on as much of this information as they can. Opposing them are farmers’ groups and small open-source technology start-ups, which want to ensure a farm’s data stays in the farmer’s control and serves the farmer’s interests.
Who wins will determine not just who profits from the information, but who, at the end of the day, directs life and business on the farm.

One recent round in this battle took place in October, when Monsanto spent close to $1 billion to buy the Climate Corporation, a data analytics firm. Last year the chemical and seed company also bought Precision Planting, another high-tech firm, and launched a venture capital arm geared to funding tech start-ups.

In November, John Deere and DuPont Pioneer announced plans to partner to provide farmers information and prescriptions in near-real time. Deere has pioneered “precision farming” equipment in recent years, equipping tractors and combines to automatically transmit data collected from particular farms to company databases. DuPont, meanwhile, has rolled out a service that distills data into “actionable management strategies.”

Many farmers are wary that these giants could use these tools to win unprecedented levels of insight into the economics and operational workings of their farms. The issue is not that these companies would shower the farmers with ads, as Facebook does when it knows you’re looking to buy sneakers. For farmers, the risks of big data seem to pierce right to the heart of how they make a living. What would it mean, for instance, for Monsanto to know the intricacies of their business?
Farm advocacy groups are now scrambling to understand how — if given free rein — these corporations could misuse the data they collect. “We’re signing up for things without knowing what we’re giving up,” said Mark Nelson, director of commodities at the Kansas Farm Bureau. In May, the American Farm Bureau Federation, a national lobbying group, published a policy brief outlining some potential risks around these data-driven farm tools.
For farmers, the most immediate question is who owns the information these technologies capture. Many farmers have been collecting digitized yield data on their operations since the 1990s, when high-tech farm tools first emerged. But that information would sit on a tractor or monitor until the farmer manually transferred it to his computer, or handed a USB stick to an agronomist to analyze. Now, however, smart devices can wirelessly transfer data straight to a corporation’s servers, sometimes without a farmer’s knowledge.

“When I start storing information up on the Internet, I lose control of it,” said Walt Bones, who farms in Parker, S.D., and served as state agriculture secretary.
Justin Dikeman, a sales representative with DuPont, said farmers continue to own whatever data they collect on their operations. A spokesman for John Deere also said farmers own their data, and that farmers have the opportunity to opt-out of the company’s cloud services. Monsanto did not reply to a comment request.

Details of who owns what at which stage of the analytics process are less clear, though. Even if a contract guarantees that farmers own the raw data, for instance, it’s not obvious whether farmers will be able to access that data in a non-proprietary format. Nor is it evident how easily farmers can stop these devices from collecting and transmitting data.

How corporations use the information is another central concern. One worry is the giants will harness the data to engage in price discrimination, in which they charge some farmers more than others for the same product. For example, details on the economic worth of a farm operation could empower Monsanto or DuPont to calculate the exact value the farm derives from its products. Monsanto already varies its prices by region, so that Illinois farmers with a bumper crop might be charged more for seeds than Texas farmers facing a drought. Bigger heaps of data would enable these companies to price discriminate more finely, not just among different geographic regions but between neighbors.

Another issue is how the value of this information will be determined, and the profits divided. The prescription services Monsanto and DuPont are offering will draw on the vast amounts of data they amass from thousands of individual farms. Farmers consider much of this information – such as on soil fertility and crop yields – confidential, and most view details about particular farming techniques as akin to personal “trade secrets.” Even if the corporations agree not to disclose farm-specific information, some farmers worry that the information may end up being used against them in ways that dull their particular competitive edge.

“If you inadvertently teach Monsanto what it is that makes you a better farmer than your neighbor, it can sell that information to your neighbor,” said John McGuire, an agriculture technology consultant who runs Simplified Technology Services and developed geospatial tools for Monsanto in the late-1990s. And if the corporation gathers enough information, “it opens the door for Monsanto to say, ‘We know how to farm in your area better than you do,’” he said.

There are also no clear guidelines on how this information will be used within commodity markets. Real-time data is highly valuable to investors and financial traders, who bet billions of dollars in wheat, soybean and corn futures. In a market where the slightest informational edge makes the difference between huge profits and even bigger losses, corporations that gather big data will have a ready customer base if they choose to sell their knowledge. Or they could just use it to speculate themselves.

“If this real time yield data goes into the cloud and a lot of market investors get into it, there is potential for market distortion,” said Kyle Cline, policy advisor for national government relations at the Indiana Farm Bureau. “It could destabilize markets, make them more volatile,” he said.

John Deere has stated it will not share data with anyone it believes will use it to influence or gain an advantage in commodity markets. Monsanto, DuPont and other firms have not, however, issued similar public statements.

Some farmers and smaller manufacturers also worry that data analytics will give conglomerates like Monsanto and DuPont more power to compel farmers to buy other lines of products. Monsanto, for example, has proven highly adept at leveraging its wide suite of products to support one another. How Monsanto used its dominance in one business (genetic traits) to benefit others (seeds, fertilizer) was the focus of a three-year antitrust investigation by the Justice Department. (DOJ closed the probe last November without taking any action).

In recent years, Monsanto, DuPont and John Deere have also expanded into selling farmers a variety of financial services and insurance. John Deere, for one, acknowledges that its financial division may consult data from a farmer’s machinery, if the farmer permits.

Other private corporations are also competing for a share of the big data pie. Established equipment manufacturers like AgCo and Case IH have been expanding their data analytics services, and some high-tech upstarts are also joining the game. The Climate Corporation, the weather data and insurance company Monsanto bought in October, for example, was founded by a former Google employee.

Open-source groups attempting to provide farmers with some similar technologies include ISOBlue, a project based at Purdue University, which teaches farmers how to capture and independently store their own data. FarmLogs, a Michigan-based company backed by Silicon Valley money, sells software and data analytics that let farmers fully control the information collected. “We’re pushing back against the monopoly on information” that some existing vendors create, said FarmLogs founder Jesse Vollmar, who grew up on a farm.

What is not clear is whether these smaller open-source companies will be able to keep up with the established giants over the long run.

“Monsanto has its fingers awful deep within our industries,” McGuire said. “Its expansion [into data analytics] should scare a lot of people.”

To be sure, much depends on how widely farmers adopt the privately designed “decision-support” services. Monsanto has estimated this to be a $20 billion market, but there is no proof yet that the company will be able to process these reams of data into profitable farming and business strategies. Whether Monsanto’s bet will pay off is “tough to validate,” said Paul Massoud, an analyst with Stifel Nicolaus.

Some experts question whether relying on prescription-based services is in farmers’ best interest at all. “I don’t see farmers themselves crunching numbers, so [I] doubt they’ll be learning anything more about how to farm well,” said Bill Freese, expert on agricultural biotech and science policy analyst with Center for Food Safety. “Monsanto’s scheme does not really represent farmers embracing data analytics, but Monsanto embracing it to better sell the seeds it wants to sell with a pseudo-scientific rationale.”

A new group called the Grower Information Services Cooperative thinks the best way farmers can protect their interests during the transition to big data is to organize. Formed in west Texas last December, GISC is pushing a model where farmers would store their data in a repository through the co-op, and companies would pay the group a fee to access it. The system would give farmers technical as well as legal ownership, and provide a way for them to share in its monetary value, said Mark Cox, controller and communications director for GISC.

“Growers need to be proactive in how their information is managed,” Cox said. “Otherwise all that economic power will consolidate to these corporations and the grower will be at even more of a disadvantage. We don’t want the grower to become a tenant on his own farm.” GISC began accepting members this month, and is meeting with farm bureaus around the country to publicize its mission.

Soil sensors and seed planting algorithms may be a game-changer. Whether farmers fully reap the fruits of that harvest, though, will depend not on technologies but on the legal technicalities that bind their use.

The Predictive Power of Big Data

In computer science, the unit used to measure information is the bit, short for “binary digit.” You can think about a single bit as the answer to a yes-or-no question, where 1 is yes and 0 is no. Eight bits is called a byte.

Right now, the average person’s data footprint—the annual amount of data produced worldwide, per capita—is just a little short of one terabyte. That’s equivalent to about eight trillion yes-or-no questions. As a collective, that means humanity produces five zettabytes of data every year: 40,000,000,000,000,000,000,000 (forty sextillion) bits.
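
The unit conversions behind those figures are easy to check; a short Python snippet (using decimal prefixes, so one terabyte is 10^12 bytes) reproduces them.

```python
BITS_PER_BYTE = 8

# One person's yearly footprint: roughly a terabyte.
per_capita_bits = 10**12 * BITS_PER_BYTE       # ~8e12, eight trillion yes/no questions

# Humanity's collective footprint: about five zettabytes a year.
collective_bits = 5 * 10**21 * BITS_PER_BYTE   # 4e22, forty sextillion bits

print(f"{per_capita_bits:.0e} bits per person per year")
print(f"{collective_bits:.0e} bits worldwide per year")
```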

Such large numbers are hard to fathom, so let’s try to make things a bit more concrete. If you wrote out the information contained in one megabyte by hand, the resulting line of 1s and 0s would be more than five times as tall as Mount Everest. If you wrote out one gigabyte by hand, it would circumnavigate the globe at the equator. If you wrote out one terabyte by hand, it would extend to Saturn and back twenty-five times. If you wrote out one petabyte by hand, you could make a round trip to the Voyager 1 probe, the most distant man-made object in the universe. If you wrote out one exabyte by hand, you would reach the star Alpha Centauri. If you wrote out all five zettabytes that humans produce each year by hand, you would reach the galactic core of the Milky Way. If instead of sending e-mails and streaming movies, you used your five zettabytes as an ancient shepherd might have—to count sheep—you could easily count a flock that filled the entire universe, leaving no empty space at all.

This is why people call these sorts of records big data. And today’s big data is just the tip of the iceberg. The total data footprint of Homo sapiens is doubling every two years, as data storage technology improves, bandwidth increases, and our lives gradually migrate onto the Internet. Big data just gets bigger and bigger and bigger.

THE DIGITAL LENS

Arguably the most crucial difference between the cultural records of today and those of years gone by is that today’s big data exists in digital form. Like an optic lens, which makes it possible to reliably transform and manipulate light, digital media make it possible to reliably transform and manipulate information. Given enough digital records and enough computing power, a new vantage point on human culture becomes possible, one that has the potential to make awe-inspiring contributions to how we understand the world and our place in it.

Consider the following question: Which would help you more if your quest was to learn about contemporary human society— unfettered access to a leading university’s department of sociology, packed with experts on how societies function, or unfettered access to Facebook, a company whose goal is to help mediate human social relationships online?

On the one hand, the members of the sociology faculty benefit from brilliant insights culled from many lifetimes dedicated to learning and study. On the other hand, Facebook is part of the day-to-day social lives of a billion people. It knows where they live and work, where they play and with whom, what they like, when they get sick, and what they talk about with their friends. So the answer to our question may very well be Facebook. And if it isn’t — yet —then what about a world twenty years down the line, when Facebook or some other site like it stores ten thousand times as much information, about every single person on the planet?

These kinds of ruminations are starting to cause scientists and even scholars of the humanities to do something unfamiliar: to step out of the ivory tower and strike up collaborations with major companies. Despite their radical differences in outlook and inspiration, these strange bedfellows are conducting the types of studies that their predecessors could hardly have imagined, using datasets whose sheer magnitude has no precedent in the history of human scholarship.

Jon Levin, an economist at Stanford, teamed up with eBay to examine how prices are established in real-world markets. Levin exploited the fact that eBay vendors often perform miniature experiments in order to decide what to charge for their goods. By studying hundreds of thousands of such pricing experiments at once, Levin and his co-workers shed a great deal of light on the theory of prices, a well-developed but largely theoretical subfield of economics. Levin showed that the existing literature was often right—but that it sometimes made significant errors. His work was extremely influential. It even helped him win a John Bates Clark Medal—the highest award given to an economist under forty and one that often presages the Nobel Prize.

A research group led by UC San Diego’s James Fowler partnered with Facebook to perform an experiment on sixty-one million Facebook members. The experiment showed that a person was much more likely to register to vote after being informed that a close friend had registered. The closer the friend, the greater the influence. Aside from its fascinating results, this experiment— which was featured on the cover of the prestigious scientific journal Nature—ended up increasing voter turnout in 2010 by more than three hundred thousand people. That’s enough votes to swing an election.

Albert-László Barabási, a physicist at Northeastern, worked with several large phone companies to track the movements of millions of people by analyzing the digital trail left behind by their cell phones. The result was a novel mathematical analysis of ordinary human movement, executed at the scale of whole cities. Barabási and his team got so good at analyzing movement histories that, occasionally, they could even predict where someone was going to go next.

Inside Google, a team led by software engineer Jeremy Ginsberg observed that people are much more likely to search for influenza symptoms, complications, and remedies during an epidemic.

They made use of this rather unsurprising fact to do something deeply important: to create a system that looks at what people in a particular region are Googling, in real time, and identifies emerging flu epidemics. Their early warning system was able to identify new epidemics much faster than the U.S. Centers for Disease Control could, despite the fact that the CDC maintains a vast and costly infrastructure for exactly this purpose.
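
Google's production system was far more elaborate, but the underlying move, fitting a model that maps search-query volume to officially reported flu activity and then applying it to this week's queries, can be sketched in a few lines. All numbers below are invented for illustration.

```python
import numpy as np

# Invented weekly history for one region: share of searches that are
# flu-related, and the CDC-reported incidence for the same weeks.
query_share   = np.array([0.002, 0.004, 0.007, 0.011, 0.015])
cdc_incidence = np.array([1.1, 2.0, 3.6, 5.8, 7.9])   # cases per 10,000 people

# Fit a simple linear model on the historical weeks...
slope, intercept = np.polyfit(query_share, cdc_incidence, 1)

# ...then "nowcast" the current week from search volume alone, which is
# available immediately rather than after the official reporting lag.
this_week_share = 0.019
print(intercept + slope * this_week_share)
```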

Raj Chetty, an economist at Harvard, reached out to the Internal Revenue Service. He persuaded the IRS to share information about millions of students who had gone to school in a particular urban district. He and his collaborators then combined this information with a second database, from the school district itself, which recorded classroom assignments. Thus, Chetty’s team knew which students had studied with which teachers. Putting it all together, the team was able to execute a breathtaking series of studies on the long-term impact of having a good teacher, as well as a range of other policy interventions. They found that a good teacher can have a discernible influence on students’ likelihood of going to college, on their income for many years after graduation, and even on their likelihood of ending up in a good neighborhood later in life. The team then used its findings to help improve measures of teacher effectiveness. In 2013, Chetty, too, won the John Bates Clark Medal.

And over at the incendiary FiveThirtyEight blog, a former baseball analyst named Nate Silver has been exploring whether a big data approach might be used to predict the winners of national elections. Silver collected data from a vast number of presidential polls, drawn from Gallup, Rasmussen, RAND, Mellman, CNN, and many others. Using this data, he correctly predicted that Obama would win the 2008 election, and accurately forecast the winner of the Electoral College in forty-nine states and the District of Columbia. The only state he got wrong was Indiana. That doesn’t leave much room for improvement, but the next time around, improve he did. On the morning of Election Day 2012, Silver announced that Obama had a 90.9 percent chance of beating Romney, and correctly predicted the winner of the District of Columbia and of every single state—Indiana, too.
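
Silver's actual model corrects for pollster house effects, weights by recency and sample size, and simulates the Electoral College state by state, but the basic move, averaging many noisy polls and turning the margin into a win probability, can be shown with a toy example (all figures invented).

```python
import math

# Invented polls for one state: (margin for candidate A in points, sample size).
polls = [(+2.0, 800), (+3.5, 1200), (-0.5, 600), (+1.5, 1000)]

# Weight each poll by its sample size (a stand-in for richer weighting schemes).
total_n = sum(n for _, n in polls)
avg_margin = sum(m * n for m, n in polls) / total_n

# Treat the aggregate margin as normally distributed with an assumed
# uncertainty of 3 points, and convert it into a win probability.
sigma = 3.0
win_prob = 0.5 * (1 + math.erf(avg_margin / (sigma * math.sqrt(2))))
print(f"average margin {avg_margin:+.1f} points, P(A wins) = {win_prob:.0%}")
```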

The list goes on and on. Using big data, the researchers of today are doing experiments that their forebears could not have dreamed of.

Emerging Big Data Opportunities For CFOs

Big data is one of the most influential technology trends for the accounting profession, according to a recent report from IMA® and ACCA (Association of Chartered Certified Accountants). To explore what this trend specifically means for CFOs, consider what the assistant controller of IBM said about emerging types of data from channels such as social media and how such data can be used to inform strategy.

How do CFOs harness big data differently from other C-suite executives such as the CMO?

For CFOs, big data is an opportunity to glean deeper insights into the internal and external forces that influence their companies’ performance. The integration of big data brings new inputs and variables that help anticipate the future, enabling rolling views of leading indicators versus more traditional retrospective views. More frequent inputs allow the enterprise to respond more quickly to change.
By capitalizing on big data using business analytics tools, the role of the CFO is moving beyond optimizing the finance function to transforming the enterprise.
For other members of the C-suite, for example, the Chief Marketing Officer, big data provides insights on customer behavior, delivering information about customer sentiment and general perceptions of the company and its products to help shape future strategy. While CMOs are using big data and analytics to better understand the individual customer, CFOs are utilizing data to better understand the environment.

What new types of data have emerged that CFOs should pay attention to?

CFOs are used to dealing with highly structured and verifiable data – big data isn’t like that and it requires a change in mindset for CFOs to feel comfortable making use of it themselves.
Many CFOs are driving business agility by analyzing performance metrics such as resource productivity and inventory turnover. They are using this data to better understand their businesses and re-engineer and integrate business processes as part of a broader enterprise transformation.
From a CFO perspective, big data isn’t just about new types of data; it’s also about harnessing the availability of existing data. Business analytics tools enhance access while removing manual activity and enabling the processing of large volumes of data to make new connections and accelerate decision making.

What are some examples of CFOs incorporating data from social media sites and online forums as part of their integrated financial reporting responsibilities?

CFOs can make use of data from social media sites and online forums, and we see this in areas such as regulatory compliance, supply chain and labor climate. They monitor formal and informal media to predict sales trends and understand investor sentiment.
CFOs are also starting to generate their own big data by instrumenting their businesses to collect and incorporate large volumes of performance data supporting their new challenges of driving strategy and growth.

What typical roadblocks do CFOs face in using big data to drive strategy?

Many of the roadblocks that CFOs face are similar to those experienced by other members of the C-suite – the availability and usability of the data. However, tools are becoming available which make it easier to access data from less structured sources. CFOs have the opportunity to use this data to support fact-based decisions, developing their role as trusted business advisors.
A different challenge arises where big data cuts across traditional silos, encouraging enterprises to operate in a more integrated fashion. This changes the dynamics of the C-suite, with traditional roles becoming less clear-cut.

How will the way CFOs use big data change in the next five years?

Big data and business analytics raise expectations of finance; taking advantage of them will call for different skills, and finance departments should be focusing on building those skills.
CFOs are asked to analyze different options and strategies under consideration by the business; their finance teams must be able to use analytics to predict business outcomes and influence business leaders to deliver optimum results. CFOs will also become more effective advocates for change by building and communicating business cases based on big data findings.

Predictive Analytics Most Used To Gain Customer Insight

Using analytics to better understand customer satisfaction, profitability, retention and churn, and to increase cross-sell and up-sell, is the dominant use of cloud-based analytics today, according to the results of a recent study.

Key takeaways of the study results include the following:

  • Customer Analytics (72%), followed by supply chain, business optimization, marketing optimization (57%), risk and fraud (52%), and marketing (58%) are the areas in which respondents reported the strongest interest.
  • When the customer analytics responses were analyzed in greater depth, they showed most interest in customer satisfaction (50%), followed by customer profitability (34%), customer retention/churn (32%), customer management (30%), and cross-sell/up-sell (26%).
  • Adoption was increasingly widespread and growing, with over 90% of respondents reporting that they expected to deploy one or more types of cloud-based predictive analytics solutions.
  • Industries with the most impact from predictive analytics include retail (13% more than average), financial services (12%) and hardware/software (4%). Lagging industries include health care delivery (-9%), insurance (-11%) and (surprisingly) telecommunications (-33%). The following graphic illustrates the relative impact of cloud-based predictive analytics applications by industry.

Adoption of Cloud-based Predictive Analytics by Industry


  • The most widespread analytics scenarios include prepackaged solutions (52%), cloud-based analytics modeling (47%) and cloud-based analytic embedding of applications (46%).  Comparing the 2011 and 2013 surveys showed significant gains in all three categories, with the greatest being in the area of cloud-based analytic modeling.  This category increased from 51% in 2011 to 75% in 2013, making it the most likely analytics application respondents are going to implement this year.

Comparison of Analytics Applications Most Likely To Deploy, 2011 versus 2013

  • 63% of respondents report that when predictive analytics are tightly integrated into operations using Decision Management, enterprises have the intelligence they need to transform their businesses.

Impact of Predictive Analytics Integration Across The Enterprise


  • Data security and privacy (61%) followed by regulatory compliance (50%) are the two most significant concerns respondent companies have regarding predictive analytics adoption in their companies.  Compliance has increased as a concern significantly since 2011, probably as more financial services firms are adopting cloud computing for mainstream business strategies.

Concerns of Enterprises Who Are Using Cloud-based Predictive Analytics Today


  • Internal cloud deployments (41%) are the most common approach to implementing central cloud platforms, followed by managed vendor clouds (23%) and hybrid clouds (23%). Private and managed clouds continue to grow as preferred platforms for cloud-based analytics, as respondents seek greater security and stability for their applications. The continued adoption of private and managed clouds is a direct result of respondents’ concerns regarding data security, stability, reliability and redundancy.

Approach To Cloud Deployment

  • The study concludes that structured data is the most prevalent type of data, followed by third party data and unstructured data.
  • While there was no widespread impact on results from Big Data, predictive analytics cloud deployments that have a Big Data component are more likely to contribute to a transformative impact on their organizations’ performance.  Similarly those with more experience deploying predictive analytics in the cloud were more likely to use Big Data.
  • In those predictive analytics cloud deployments already operating or having an impact, social media data from the cloud, voice or other audio data, and image or video data were all much more broadly used as the following graphic illustrates.

Which Data Types Deliver The Most Positive Impact In A Big Data Context

Data scientists invade HR departments

The age of ‘trust me, this will work’ is over. HR is being held accountable to deliver business results. And the language of the business is analytics.

When General Motors was looking for someone to lead its global talent and organisational capability group, the $152 billion carmaker clearly wasn’t looking for a paper-pushing administrator. Michael Arena, who took the position 18 months ago, is an engineer by training. He was a visiting scientist at MIT Media Lab. He’s a Six Sigma black belt. He’s got a Ph.D.

This is not your father’s human resources executive. But it is a sign of where the corporate HR function is headed. Arena is dedicated to the hot field of talent analytics–crunching data about employees to get “the right people with the right talent in the right place at the right time at the right cost,” he says. “Talent management is a soft space. Historically, we haven’t been able to measure definitively the things that we intuitively believe to be true,” says Arena. “But businesses are mandating it.” The age of “trust me, this will work” is over, says Arena. “HR is being held accountable to deliver business results. And the language of the business is analytics.”

The growing importance of sophisticated analytics to HR–not simply reporting what already exists in an organisation but predicting what could or should be–is a result of “the recognition that the efficient use of labor and deployment of resources is critically important to the business results of the company,” says Mark Endry, CIO of Arcadis U.S. He recently spent six months as interim senior vice president of HR at the $3.3 billion company.
In recent years, enterprises have developed more mature techniques for applying analytics to customer information. “They’ve been able to see–with relatively little data–how much they can do and how powerful the results can be,” says Ben Waber, author of People Analytics: How Social Sensing Technology Will Transform Business and What It Tells Us about the Future of Work. “When you think about what’s going on within companies, you have potentially billions of records generated every day about each person. They’re starting to see how valuable and important that data is.”

IT must be at the center of the unfolding data-driven transformation. Not everyone has an HR data scientist like GM. Arena emphasizes the importance of his partnership with Bill Houghton, GM’s CIO for global corporate functions. “A big piece is integration–ensuring the right systems are connected so we know where to draw the data from,” says Arena. “IT has to play a role in that.”
Indeed, GM’s CIO is counting on a new enterprise data warehouse–and hiring more IT professionals with a business intelligence background–to support HR’s efforts. “Right now the analysis is being done by a small group of smart people,” says CIO Houghton. “The next step is how do we make the analytics more available to the everyday manager or the organisational leadership. We want to get this out of the hands of the rocket scientists and into the hands of managers.”
CIOs are the key to helping the organisation figure out what data matters, says Terry Sullivan, director of applied research and consulting at office furniture maker Steelcase. “Everyone is thinking about big data and collecting all kinds of data to try to figure out how to create smarter people. CIOs can drive this effort.”

IT leaders are uniquely qualified to help their corporate counterparts navigate the minefield of issues associated with these nascent technologies and processes–including data quality, systems integration, security, privacy and change management. “The partnership with IT is critical,” says David Crumley, vice president of global HR information systems for Coca-Cola Enterprises.

There’s a broad array of uses for talent analytics: screening new hires, figuring out who should get promoted, efficiently staffing new projects, uncovering the characteristics of high-performing individuals or teams, and even predicting who’s likely to head out the door.

“The way I think about it is using data to understand how people get work done,” says Waber, CEO of Sociometric Solutions, a management-services firm that was built on his work at MIT Media Lab and that helps companies in one niche of the talent analytics field: collecting and analyzing sensor data to improve workforce performance.

Companies have collected employee data for years–from satisfaction surveys to ethnography. But, says Waber, this “next generation of stuff is moving away from those qualitative assessment modes into much harder behavioral modes, using digital data from email or sensors or ERP systems. That gives us radically more powerful information.”

Historically, HR used data to report headcount or turnover information. “We’re so far beyond that now,” says Crumley of Coca-Cola Enterprises. “HR wants to expand its capabilities to help the business grow. To do so, we need to be able to be more precise and surgical about our interventions. That’s where workforce analytics is huge–helping you determine where to place your bets.”

Laying the foundation

Employees generate petabytes of data about themselves every day, says Waber. But that data sits in disparate systems in different formats and is often messy. “To make it work, you need access to all of this information in real time,” Waber says. “IT is the backbone for this entire process.”

Implementing a single version of an HR information system itself may not sound revolutionary, but it’s a critical first step for companies interested in more advanced analytics.

Jo Stoner, senior vice president of worldwide HR for Informatica, knew predictive talent analytics could benefit the growing data-integration company. “A lot of companies don’t make it past a billion [in revenue]. We were starting to hit those awkward teenage years,” she explains. Managing the company’s assets would be critical to maintaining momentum. But “we don’t own buildings or raw materials,” says Stoner. “Our greatest asset is our talent.” First, though, the company had to bring all its HR data together, applying the master data management services Informatica delivers to clients to its own internal employee data in order to layer analytics atop it.

For most companies, just arriving at a single version of the HR truth can be beneficial. Paul Lones, senior vice president of IT at Fairchild Semiconductor, says that two years ago, managers at the chip maker lacked a single system that could provide an accurate tally of employees worldwide, let alone show the amount of employee turnover. Reports had to be compiled from multiple systems. Succession planning took place in Microsoft Word documents. Compensation decisions might be made in isolation.

Now that the company has implemented cloud-based Workday, managers can access data on all 9,000 employees in one place, including succession plans, turnover trends and salary information. “A manager in the Philippines considering a raise and promotion for an employee can see in seconds how that will compare with others in the group and with local compensation trends and make that decision,” says Lones.

It may not be rocket science, but it’s a start–one that’s been a long time coming for many HR groups. Chiquita Brands, for example, had multiple homegrown and manual HR systems.

“It was a cobbled-together thing,” says Kevin Ledford, Chiquita’s CIO. “People spent 90 percent of their time figuring out where the data was and 10 percent on analyzing it.” In 2008, the company moved to a global HR system, which came in handy when Chiquita moved its headquarters from Cincinnati, Ohio, to Charlotte, N.C., and lost 75 percent of its corporate employees.

“It was very tumultuous. We threw all of our monkeys in the air, and they all came down in different buckets,” says Ledford. “It would have been a nightmare [without the global HR system].” Now that the company is exploring predictive HR analytics, that success with master data management “is everything,” says Ledford.

At Arcadis, Endry has connected his cloud-based workforce-management system to 11 other pieces of software, including ERP, learning management, payroll and an active directory. The combined data helps the company, which provides engineering services regarding infrastructure, water, environment and buildings, to staff client projects more efficiently and effectively.

“In the past we couldn’t tell who was mobile,” says Endry. “Now when we have a giant project in Ohio, we can see on a dashboard that we’ve got these three people in Boston willing to move there.”

Marc Franciosa, CIO of Praxair, has tied the company’s HR and employee performance systems to non-HR systems like SharePoint as a foundation for the company’s talent analytics initiative–no small task for the $11 billion industrial and medical gases company with 26,000 employees in 50 countries.

“The underlying data and processes have to be consistent to be able to do any real analytics with confidence,” says Franciosa. “For companies that are fairly mature that haven’t had a global environment before, it’s going through that initial normalization and standardization process to make sure that this certification, for example, means the same thing around the world,” says Franciosa. (He implemented SumTotal’s HR management system and ElixHR platform to link disparate data.) “The cleanup has been a challenge.”

Now, when Praxair wants to make a bid or sign a new customer, managers can analyze HR implications first. Do they have people who speak Portuguese, have the necessary certification, and are willing to relocate to Rio de Janeiro? “We can do some modeling of the skill sets to determine if it’s doable or if we will have to recruit externally,” Franciosa says.

At GM, Arena has been implementing a three-phase analytics plan. First, integrate systems in a way that ensures highly accurate data is available. Next, push much of that data into standardized reporting tools and dashboards that business managers can use on their own. Then start building models. One of the first projects Arena implemented was a means-based comparison analysis of the top talent pool. The model examines every employee data field in the PeopleSoft database to look for important insights, Arena says. “Five or six experiences may jump out. Having international experience may statistically matter. Then we dig deeper. Are there certain types of international experiences that matter more than others? Does that need to happen earlier versus later?”
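To make the idea concrete, here is a minimal sketch of such a comparison in Python, assuming a pandas DataFrame of employee records with a boolean top_talent column; the field names and the significance tests are illustrative assumptions, not GM's actual model.

```python
# Hypothetical sketch of a means-based comparison between a "top talent"
# pool and the rest of the workforce. Column names are illustrative only.
import pandas as pd
from scipy import stats

def compare_talent_pools(df: pd.DataFrame, flag_col: str = "top_talent") -> pd.DataFrame:
    """Rank employee attributes by how strongly they separate the two groups."""
    top, rest = df[df[flag_col]], df[~df[flag_col]]
    rows = []
    for col in df.columns.drop(flag_col):
        if pd.api.types.is_numeric_dtype(df[col]):
            # Welch's t-test on numeric fields such as years_of_service
            _, p = stats.ttest_ind(top[col].dropna(), rest[col].dropna(), equal_var=False)
        else:
            # Chi-squared test on categorical fields such as has_international_experience
            table = pd.crosstab(df[flag_col], df[col])
            _, p, _, _ = stats.chi2_contingency(table)
        rows.append({"field": col, "p_value": p})
    return pd.DataFrame(rows).sort_values("p_value")

# Example usage with a hypothetical export:
# df = pd.read_csv("employees.csv")          # one row per employee
# print(compare_talent_pools(df).head(10))   # the fields that "jump out"
```

The fields with the smallest p-values are the candidates for the deeper follow-up questions Arena describes, such as which types of international experience matter and when they need to happen.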

Divining interventions

The real power is in applying predictive analytics to a corporate population. “Everyone’s talking about it,” says Chiquita’s Ledford, “looking at all this data you have and trying to figure out the future.”

“The typical data warehouse approach is looking back, but what we wanted to do was start looking forward,” says Praxair’s Franciosa. “What are the leading indicators we should be looking for? What are those metrics or data sets we don’t have but, if we did, would really help us? What external data sources could we use to drive better decision-making?”

For example, Praxair is growing by double digits in China. “Rather than hiring a ton of people and trying to recreate the wheel [there], what I’ve been driving is how do we replicate rapidly those things that have made us successful in our mature geographies,” says Franciosa. “There’s a huge opportunity to use predictive analytics based on where we’re best-in-class.”

The predictive analytics market for HR is nascent and wide-open. “We partner with them all, from IBM to SuccessFactors to PeopleSoft,” says GM’s Arena. “They’re all trying to play in the space, but I don’t know that any of them have figured it out.”

Arena’s team has built a model that predicts what changes in attrition rates will mean for GM’s workforce. Previously, if someone proposed hiring a bunch of young engineers, no one could be certain if that was the best decision. “Now we can say, let’s see what that looks like five years from now,” Arena says. “What are the dividends if we hire 200 entry-level engineers? Might we be better off hiring 50 advanced engineers? We can take that information to the head of engineering and say, ‘Here’s what it will cost you.'”
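A toy version of that kind of what-if can be sketched with a simple cohort projection; the attrition rates, salaries and headcounts below are invented for illustration and are not GM's figures.

```python
# Toy five-year headcount and cost projection for two hiring scenarios.
# All rates, salaries, and headcounts are assumptions for illustration.

def project(headcount: int, annual_salary: float, attrition_rate: float, years: int = 5):
    """Return remaining headcount and cumulative payroll cost after `years`."""
    remaining, total_cost = float(headcount), 0.0
    for _ in range(years):
        total_cost += remaining * annual_salary
        remaining *= (1 - attrition_rate)   # lose a fixed share of the cohort each year
    return round(remaining), total_cost

scenarios = {
    "200 entry-level engineers": project(200, 70_000, attrition_rate=0.15),
    "50 advanced engineers":     project(50, 140_000, attrition_rate=0.07),
}
for name, (left, cost) in scenarios.items():
    print(f"{name}: ~{left} still on board after 5 years, cumulative cost ${cost:,.0f}")
```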

Arena thinks that analyzing the interactions of networks of employees holds the most promise. The process starts with a survey. “We ask questions of a given network: Who do you go to when you want to shop a new idea? Where do you turn when you need resources to get things done? Then we run the analytics,” Arena explains. “We can tell you who the brokers are, who’s central in that network, who are the bridges across silos. We can even predict who’s a flight risk based on where they sit in the network.” And by identifying which employee networks are most productive, Arena says there’s a chance to improve performance across the company.
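The mechanics behind that kind of network analysis can be sketched with a small graph library; the survey edge list and the "peripheral employees are flight risks" heuristic below are illustrative assumptions, not GM's model.

```python
# Sketch of organizational network analysis over survey responses,
# using the networkx library. The edges and the flight-risk heuristic
# are invented for illustration.
import networkx as nx

# Each tuple means "person A named person B as someone they go to for ideas or resources".
survey_edges = [
    ("ana", "bo"), ("bo", "carl"), ("carl", "ana"),      # a tight cluster
    ("dee", "bo"),                                        # dee bridges into it
    ("eve", "dee"), ("eve", "frank"), ("frank", "dee"),   # a second cluster
    ("gus", "frank"),                                     # gus sits on the edge
]
g = nx.Graph(survey_edges)

betweenness = nx.betweenness_centrality(g)   # high values = brokers, bridges across silos
degree = nx.degree_centrality(g)             # high values = central, well-connected people

print("Likely brokers:", sorted(betweenness, key=betweenness.get, reverse=True)[:2])
print("Most central:", max(degree, key=degree.get))
print("Peripheral (possible flight risk under this heuristic):", min(degree, key=degree.get))
```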

At Coca-Cola Enterprises, Crumley is integrating business data with HR data for predictive purposes. “That’s where you can really get sexy with it,” he says. While working with IT to clean and standardize all the data, Crumley is partnering with each corporate function to find out what business metric might be the key measure of success for their employees. By combining those business metrics with people data, he hopes to be able to “reverse engineer what a successful employee is, so we can get the best candidates in the future.”

Employee engagement is a leading indicator of talent retention at Coca-Cola Enterprises. And one of the biggest boosters of employee engagement numbers is access to on-the-job learning, so Crumley’s team is trying to figure out how to make training opportunities more universal. For example, why are folks in this shift at this plant not taking classes as much as other employees in that line of business? With answers to questions like that, HR can intervene to address the core reason, whether that’s an accessibility problem or a manager who needs more coaching. Crumley says the effort will gain even more steam when HR is able to show, through data analytics, a correlation between taking a specific training course and an improvement in sales or productivity.
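The correlation Crumley describes could be checked with a few lines of analysis; the column names and figures below are hypothetical, used only to show the shape of the calculation.

```python
# Hypothetical check of whether completing a specific course tracks with
# higher productivity. Column names and data are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "completed_course": [1, 1, 0, 0, 1, 0, 1, 0],
    "sales_per_quarter": [112, 98, 84, 91, 120, 79, 105, 88],
})

# Pearson correlation with a binary variable (point-biserial correlation).
r = df["completed_course"].corr(df["sales_per_quarter"])
print(f"correlation between course completion and sales: {r:.2f}")

# Compare group means as a sanity check (correlation is not causation).
print(df.groupby("completed_course")["sales_per_quarter"].mean())
```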

At call-center provider NOVO 1, CTO Mitchell Swindell has implemented a predictive hiring tool from Evolv. Applicants complete a Web-based application that screens for attitude, propensity for customer service, and voice capabilities. The software also shows the candidate what it’s like to work in a call center in hopes of screening out those who would be a poor fit in the high-turnover industry. The tool then gives each applicant a red, yellow or green rating, and candidates rated green or yellow are invited for in-person interviews. The hiring decision is still in the hands of a human, but the system has predicted the company’s top performers with 80 percent accuracy, based on 90-day follow-up data on the hired employees. Since introducing the algorithm-enhanced hiring system, tenure is up by 25 percent, agent productivity has increased 30 percent, and the overall staffing budget has decreased 11 percent. Swindell has integrated Evolv with the company’s payroll, workforce-management and proprietary quality systems to help develop a more nuanced profile of the best employees.
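As a rough illustration of how such a screening workflow might be wired up, the sketch below buckets an application score into red, yellow or green and checks predictions against 90-day outcomes; the scoring weights and thresholds are invented and are not Evolv's algorithm.

```python
# Sketch of red/yellow/green screening plus a 90-day accuracy check.
# Weights and thresholds are invented assumptions, not Evolv's method.
from dataclasses import dataclass

@dataclass
class Applicant:
    attitude: float          # 0-1 score from the web assessment
    service_aptitude: float  # 0-1 score
    voice_quality: float     # 0-1 score

def screen(a: Applicant) -> str:
    score = 0.4 * a.attitude + 0.4 * a.service_aptitude + 0.2 * a.voice_quality
    if score >= 0.7:
        return "green"
    if score >= 0.5:
        return "yellow"
    return "red"

def follow_up_accuracy(ratings: list, is_top_performer_at_90_days: list) -> float:
    """Share of hires where a green rating matched later top performance."""
    hits = sum((r == "green") == actual
               for r, actual in zip(ratings, is_top_performer_at_90_days))
    return hits / len(ratings)

print(screen(Applicant(attitude=0.9, service_aptitude=0.8, voice_quality=0.6)))   # green
print(follow_up_accuracy(["green", "yellow", "green"], [True, False, True]))      # 1.0
```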

At Chiquita, Ledford is exploring predictive analytics to help the company find, train and retain its “bananeros”–experts in growing bananas. “Those guys are really hard to find, as bananas have taken a backseat to coffee and tourism,” says Ledford. Analytics could enable managers to predict which lower-level employees “could become our next wave of banana folks,” says Ledford, and determine the right training and grooming to make that happen.

Employee Tracking

There’s also a gold mine of information in how people move through an organisation, and a handful of companies are looking at physically tracking employees–often via RFID-enabled badges–to find out how people work and what impact that can have on business outcomes.

“The barrier at this point is not the technology,” says Waber, whose Sociometric Solutions is an early provider of sensor-based analysis. “I can tell you how much more money a company makes when two employees eat lunch together. We can do extremely sophisticated things. The challenge is that organisations are not used to looking at themselves this way.”

When GM’s Arena was senior vice president of leadership development at Bank of America in 2010, the financial services company used sensors to track 90 call-center workers over the course of several weeks and found that those in the most cohesive networks were the most productive. After the company switched from solo to group break times to encourage more socialization, agents improved efficiency by 10 percent. “As silly as it sounds, it worked,” says Arena. “The analytics told us it was probably the right thing to do.” Sometimes it’s as simple as moving desks closer together, says Waber. Steelcase’s Sullivan has discovered that the size of lunch tables can have an impact on productivity. You can’t force people to interact more, says Waber, but based on the data, you can “engineer serendipity.”

Although Arena conducted a number of experiments using sensor data at BofA, he’s not quite ready to start tracking workers at GM. “I’m a huge advocate of sensor work,” Arena says. “But it’s laden with trust and privacy issues and a lot of organisations just aren’t ready for that. It can be a bit of a slippery slope.”

Praxair is conducting a pilot using sensors on its remote workers. The system will measure how long it takes a worker to, say, install a tank for a customer, by monitoring their movements via a sensor on their protective equipment. The sensor also monitors workers for exposure to harmful gases. If gas is detected, an alarm goes off and the monitoring center will attempt to communicate with the worker. Franciosa envisions integrating the sensor data into other corporate systems to uncover correlations between events and particular locations, types of employees, or certifications.

The Importance of Transparency

Franciosa expects employees to put up some resistance to being physically tracked, much like the pushback the company encountered when it was first placing computers onboard its trucks. “It was viewed as Big Brother wanting to know how fast I drive or how hard I brake,” says Franciosa. “The way to alleviate that is transparency. People won’t like being physically monitored if they think we’re trying to find out how long their break was. So we have to be completely transparent that we are using this for safety and long-term productivity. They’ll recognize the value in that.”

HR collects all kinds of sensitive employee information, but employees see physical tracking as particularly intrusive. “It is the boundary to cross,” says Steelcase’s Sullivan. All of Steelcase’s sensor-related experiments are opt-in. Company analysts see only aggregate data, not individual histories. And Sullivan’s team communicates the process and the intentions not just to those who have signed up, but also to everyone on the campus.

“In the U.S., employees don’t really legally have protections around this data. A company can track you wherever you go and listen to all your conversations,” says Waber. “But that defeats the purpose of this approach, which is trying to help people work better, be happier and stay at their jobs.”

Communication is critical with any collection and analysis of people data–not just sensor data. “I don’t think we’re doing anything that people haven’t been trying to do for years,” says Informatica’s Stoner. “But we have to say what we will do with that data.”

Praxair’s Franciosa has a close partnership with his legal teams around the world to navigate the various data privacy and protection issues in each country. “But even once we understand that we can have this data, we have to be very transparent and say, here’s why we want your picture or your talent profile,” Franciosa says. “That goes a long way toward gaining both credibility and traction.”

The role of data in the people business

“What’s really happening right now is a shift in HR from an art to a science,” says Crumley of Coca-Cola Enterprises, who’s currently exploring how social network data and gamification might become part of his HR analytics platform. “A lot of HR teams are trying to figure out how to make that shift quickly so it’s no longer HR sitting around waiting to be pulled in, but HR coming to the table with nuggets of wisdom.”

Data analytics could enable HR to elevate itself from a tactical support function to a business partner on strategy, which ought to sound pretty familiar to CIOs.

But there are limits to HR’s data-driven transformation. “[Analytics] are all about probability, and there’s just so far you can go with probability,” says Crumley. “If you want to figure out how many employees you need to launch a new product, it can get you in the right ballpark. When it comes to predicting turnover, it’s not an exact science. People are people.”

“It’s never black-and-white when you’re talking about people,” says Stoner of Informatica. While some folks get stars in their eyes when talking about big data, Stoner often sees a bigger haystack to sift through. But analytics, she says, help point companies in the right direction. “In HR, we live in a world where data brings more questions. You always have to look beneath it,” she says. “It’s not an exact science. But at least it gets us looking at the right part of the haystack so we can get to the answer faster.”

That’s why GM’s Arena says his talent analytics will never be fully automated. “Sometimes we get projections wrong for all kinds of reasons. It can take several iterations. But HR still loves it, because it equips them to make intelligent decisions for their business partners.”

Nine critical success factors for talent analytics

IT and HR leaders who have deployed workforce analytics systems offer these tips for success:

Lay the foundation. Aim for a single source of HR information, if possible.

Account for imperfections. “We’ve got our foundational issues, for sure, but if you wait until it’s completely perfect, you won’t get anywhere,” says Michael Arena, GM’s director of global talent and organisational capability. IT can build reconciliation processes and automated audits to help HR with data issues.

Start small. Marc Franciosa, CIO of Praxair, began with an analytics pilot to map the company’s high-potential employees. “If we had tried to do one big-bang workforce analytics project, it would never have gone anywhere,” he says. “You have to get some traction in order to get credibility.”

Tap internal experts. Both Franciosa and Arena have taken advantage of statisticians and others from their corporate R&D groups to develop their talent analytics programs.

Share the load with HR. Take advantage of HR and IT’s complementary skills. IT can focus on vendor management, security and deployment, while HR might manage requirements gathering, process standardization and communication.

Bring in business know-how. David Crumley, VP of global HR information systems for Coca-Cola Enterprises, works with business leaders from functions such as supply chain, sales and finance to determine what data will drive talent analytics.

Hire external change-management help. Typically, HR leads change management in an organisation. But avoid DIY change management in analytics efforts, warns Mark Endry, CIO of Arcadis U.S., who recently spent six months as interim SVP of HR. Hire external help to guide HR through its big changes.

Take action. “Everyone wants to have more data, but we have to ensure that folks know how to use it,” says Crumley, who had to do more hand-holding than he initially anticipated. “It’s not that anyone is pushing back, but you have to embed the use of the data into the [corporate] DNA.”

Democratize the systems. For people analytics to truly deliver, they need to be self-service tools that business managers and leaders can use. “Early on, we thought the customer [for these tools] was HR,” says Crumley. “But it’s the business leaders that control these decisions daily.”

Big Data Analytics – Make it a business initiative and determine ownership

Although big data has recently taken the mainstream spotlight and become a major initiative in the enterprise, it has always been at play in the wireless space. The challenge now is to deliver on the unique monetization opportunity presented by building analytics and applications on top of data of this scale and timeliness.

Mobile operators are at an important crossroads. New entrants have forced these global brands to rethink how they will effectively compete to ensure long-term viability. For operators, it’s no longer about having the best network or hottest devices; it’s about having the smartest strategy for making the most of their greatest asset – customer data.

Although mobile operators will choose a variety of paths, the monetization of big data will be key to securing a viable future. For some, success will mean the ability to generate insights that improve customer interaction and proactively address customer demands. For others it will mean the creation of new revenue streams, such as selling data to third parties. Regardless of the path chosen, successful monetization rests on one key element – the strength of the analytics at play.

Operators agree that big data analytics should be a strategic priority to drive customer monetization opportunities, but the barriers are often significant. What is required is a major shift in mindset, skillset and technology. Operators must transform their organizations from the top-down in order to become truly data-driven. They must acquire new talent as they shift their focus from aggregating data to gleaning insights from it. And most importantly, they must embrace the latest technologies to be able to act on these insights in an automated fashion.

With increased competition and the heightened risk of over-the-top players infringing on their customer profits, forward-thinking operators are prioritizing their big data analytics strategies and turning to tactics that are proven to accelerate monetization opportunities.

–Identify the strategic need before laying the groundwork: Too often, big data is seen as a technology initiative rather than a business one. Heavy emphasis is placed on determining the infrastructure required to support big data, but often not enough thought is given to what comes next. This can result in costly efforts that fall short on return on investment. Operators are making a point of looking ahead at what various owners will do with the data: what problems they may solve and how they can rethink future engagement strategies. Then they focus on the most effective plan for extracting, compiling and acting on the data.

–Determine the owner of the problem – and the solution: Misalignment among C-levels and functional teams has been deemed a major barrier for moving big data initiatives beyond the exploration stage. Although the CIO has traditionally “owned” anything in the data realm, we are seeing business owners become more proactive in not only identifying the problems to be solved but also which solutions can best solve them. For example, CMOs within several leading operators are tackling the problem of churn by leading a big data analytics initiative from initiation to execution. This “single-owner” approach ensures alignment between the what, the why and the how.

–Focus on the data that matters: Taking an inside-out approach of analyzing streams of data and only then determining the problem you want to solve can leave you trying to boil the ocean. To combat analysis paralysis, operators are identifying specific business problems that, when solved, can have a substantial impact on the bottom line, such as declining recharge volume, data adoption or churn. This focused effort accelerates the process of determining which behavior changes will have the greatest impact on the goal at hand and the specific data required for modeling, analyzing and monitoring those behaviors.

–Invest in more sophisticated analytics: Many operators continue to miss the mark on when, where, and how to connect with their customers despite the vast amount of data that they have. To determine what’s relevant for each customer and deliver it in the right context, operators are adopting more sophisticated analytics to understand dynamic customer behavior over time. Flexibility, timeliness and the ability to scale to analyze millions of customers across any number of dimensions are top of mind for operators as they invest in more sophisticated analytics.

–Move toward real-time: Many operators continue to rely on batch processing of events to determine how and when to engage their customers. Yet to proactively address customer needs, they know they must be able to analyze data in real time. Truly customer-centric operators are moving toward real-time analytics by confronting the technology challenges that stand in the way of getting the data easily and quickly. It’s not realistic to expect that every data source can be analyzed in real time, but operators are prioritizing the sources that deliver rich behavioral insights, such as usage and transactional records, to drive timelier and higher-value customer engagement.

–Pursue plug-and-play for greater ease and speed: While analytics were at play long before the term big data hit the spotlight, many operators remain challenged to advance their capabilities, especially given the explosion of new technologies and techniques specific to mobile. To leverage more advanced analytics, such as behavioral clustering, prescriptive analytics and machine learning, operators are turning to productized solutions that ease the IT pain and expedite speed to market. Operators are also expressing openness to a cloud-based approach, given the cost and competitive value of “analytics and action in a box.”

With increased competition and the emergence of OTT players, it has never been more important for operators to leverage their customer data assets to get ahead of the curve from a customer experience and monetization standpoint. Strategically minded operators who are embracing the next wave of big data analytics are transforming their business and customer engagement models. They are becoming more competitive, increasing customer value and profitability.

Blog || CRM and Big Data: a successful combination?

Within most organisations, CRM is used primarily for managing customer data, supporting the sales and service process, and identifying leads and opportunities through marketing campaigns. Meanwhile, 'traditional' CRM has been able to benefit from a number of new technologies introduced in recent years:

1. Social media: marketing and customer service have changed, and webcare is now firmly embedded in many organisations.

2. Mobile: mobile devices and applications have made CRM more accessible and have created several new sales and marketing channels.

3. Software as a Service (SaaS): cloud solutions have made CRM more affordable.


Big Data

Big Data is the next big change that organisations can benefit from, and it affects the way they interact with customers. It will also change the way we use CRM.

What is Big Data? Various definitions are in circulation and they are not unambiguous, but the definition below, from the book “Big Data: A Revolution That Will Transform How We Live, Work, and Think” by Viktor Mayer-Schonberger and Kenneth Cukier, fits the application within CRM well:

‘The ability of society to harness information in novel ways to produce useful insights or goods and services of significant value’ (page 2). And further on: ‘At its core, big data is about predictions. It’s about applying math to huge quantities of data in order to infer probabilities’ (pages 11 and 12).

Big Data is therefore about the ability to use information in new ways to gain useful insights, and about prediction: applying mathematical logic to large quantities of data in order to infer probabilities.

Is Big Data, then, merely a technological innovation? No, certainly not. To get the maximum benefit from Big Data, your organisation must also adapt, redefining its processes to support the analysis capabilities and the decisions that follow from them. Data quality, together with the accompanying vision, procedures and roles, becomes even more important.


Application

How can you benefit from Big Data and use it to improve your customer insights and make your business perform better? By structuring and analysing all the structured data already captured in CRM, together with the unstructured data (social media, audio, video, photos) that comes at us through channels such as Twitter and YouTube. This is also known as Big Data Analytics.

The number of powerful tools that are available, apply logic in the right way and find correlations across large data sets is growing. Within the foreseeable future, even more will become accessible for processing and analysing this data (also via the cloud).


What (new) CRM insights does Big Data Analytics offer?

Think, for example, of:

  • predicting your customers’ buying behaviour (recognising patterns and trends earlier);
  • better decision-making on strategy through new insights and (online) evaluations of customer experiences;
  • continuously refining your customer-facing business processes, because all information on the results of sales or marketing actions is immediately available;
  • spotting prospects earlier via various internal and external channels;
  • improving conversion rates by giving your customers better advice during their purchase;
  • identifying your most valuable customers (ambassadors) in order to give them preferential treatment, with the aim of increasing lifetime customer value, while re-approaching your least valuable customers through marketing campaigns;
  • improving your customer service by monitoring customers’ opinions of your product or service, both through feedback via internal channels and externally via social media and forums.

You may now be thinking that these are insights you can already extract from your (CRM) system today, with a little effort. That is true to a certain extent. The difference, however, is that Big Data Analytics makes it possible to analyse a much larger amount of data, including previously unused unstructured data. This leads to improved, deeper and more accurate predictions and insights, from which a competitive advantage can be gained.


The downside

Are there, then, only advantages to using Big Data? Of course not; there is also a downside to the availability of all this information: think of privacy sensitivity, the risk of mistaking correlation for causation, and the analysis of irrelevant unstructured data.

In my next article, I will look further at the dangers, risks and challenges of using Big Data technology within CRM.

Also read my article on Computable.nl: http://www.computable.nl/artikel/opinie/crm/4756445/2333360/crm-en-big-data-vormen-succesvolle-combinatie.html#ixzz2XnM8NJkn

Gartner: BI and analytics among the fastest-growing software markets

According to research firm Gartner, the market for business intelligence (BI) and analytics will remain one of the fastest-growing software markets in the coming years. Many large companies have by now completed their descriptive analyses and want to move on to diagnostic, predictive and prescriptive analytics. Gartner names ten leaders in the market: Microsoft, IBM, QlikTech, Oracle, Tableau Software, SAS, MicroStrategy, Tibco Spotfire, Information Builders and SAP.

The business intelligence (BI) and analytics market has topped the priority list of chief information officers (CIOs) for years, yet much remains to be done. Every company has departments, such as human resources (HR), marketing and social, that have not yet started with BI and analytics. This emerges from research firm Gartner's Magic Quadrant for Business Intelligence and Analytics Platforms.

Most large companies have completed descriptive analyses for areas such as finance and sales, but Gartner still expects considerable growth in diagnostic, predictive and prescriptive analytics. Many mid-sized companies have yet to start with BI and analytics. According to Gartner, the market will therefore remain one of the fastest-growing software markets, with an expected annual growth rate of 7 percent through 2016.

Software vendors in the market want to serve organisations that are trying to mature by shifting their focus from descriptive to diagnostic analyses. Many vendors also responded to what Gartner considers an important theme of 2012: data discovery becoming mainstream in the BI and analytics architecture. MicroStrategy improved its Visual Insight product, SAP launched Visual Intelligence, and Microsoft strengthened PowerPivot with Power View.

The players in the market
Gartner has named the software vendors Microsoft, IBM, QlikTech, Oracle, Tableau Software, SAS, MicroStrategy, Tibco Spotfire, Information Builders and SAP as leaders in its BI and analytics quadrant. A large share of these leaders focus on data discovery, which is accelerating the trend toward decentralisation and user empowerment in BI and analytics. This focus also helps organisations develop the ability to perform diagnostic analyses.

Challengers in the market are LogiXML and Birst. Gartner named the software vendors Prognoz, Bitam, Board International, Actuate, Salient Management Company, Panorama Software, Alteryx, Jaspersoft, Pentaho, Targit, arcplan and GoodData as niche players. No visionaries were named in this quadrant.

(source: Gartner)