Data Presentation in Business Intelligence

Introduction

In the last three articles (1, 2, and 3) of this series, we discussed the overall Business Intelligence system architecture and its components: how ETL helps store source data in the required form, and how OLAP (the cube) helps process, aggregate, and store huge amounts of data. In this article, we'll discuss another part of the overall BI system architecture: information delivery. How can we visualize or deliver data to help business owners and other users in the community understand business trends, the usability of the system, and other benefits? In this part of the series, you'll learn more about information delivery and the various ways to present data that help business owners make decisions.

Importance of Data Presentation

With a basic understanding of Business Intelligence in place, we should know the importance of data presentation in information delivery and how useful it is for business users. After data is stored in the required form, either in a data warehouse/data mart database or an OLAP cube, it can be showcased in forms that become the source of required information for all BI users; these are known as reports in the BI world. Reports can be data reports, executive dashboards, or conference presentations.

Data reports may use different types of layouts to present the data, such as metrics, charts, funnel reports, animated representations of data, and so on. Any of these layouts can be considered, based on the targeted audience.

All these reports may present Key Performance Indicators (KPIs) to visualize the data. A KPI is a type of performance measurement: an organization may use KPIs to evaluate its overall success or progress, or the success or progress of a particular business activity in which it is engaged. This data presentation is the logical end point of any BI system and becomes the source of truth for decision makers.
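
To make the idea concrete, here is a minimal Python sketch of how a KPI such as month-over-month sales growth might be computed; the figures and the 5% target are invented for illustration, not drawn from any particular BI tool.

    # Minimal KPI sketch: month-over-month sales growth (invented figures).
    monthly_sales = {"Jan": 120_000, "Feb": 135_000, "Mar": 128_000}

    months = list(monthly_sales)
    for prev, curr in zip(months, months[1:]):
        growth = (monthly_sales[curr] - monthly_sales[prev]) / monthly_sales[prev]
        status = "on target" if growth >= 0.05 else "below target"  # 5% target assumed
        print(f"{curr}: {growth:+.1%} ({status})")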

Reporting Tools

A reporting tool provides interactive data exploration and visual presentation of data stored in a database or cube. Many reporting tools are available; the selection of one for your purpose is at your discretion, but it should be backed by supporting facts, like the selection criteria for OLAP tools we discussed in the previous article. There is no rule of thumb for adopting a reporting tool, and I want to reiterate that the identification and adoption of any BI tool is not a personal decision and can't be finalized without thorough study. Refer to the previous articles of this series to learn why this is important for the successful implementation of any BI system.

A variety of reporting tools are available to present data and deliver information. Some come with Business Intelligence suites, such as Microsoft SQL Server Reporting Services, Oracle Reports, IBM Cognos Report Studio, and MicroStrategy Reporting Suite. Some open source tools are BIRT, JasperReports, and Pentaho; other available reporting tools include Tableau, QlikView, and the like.

Reporting tools help to visualize data in multiple, interconnected ways by using data regions. You can display data organized in tables, matrices, or cross-tabs with expand/collapse groups, as well as in charts, gauges, indicators or KPIs, and maps, with the ability to nest charts inside tables.
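
As a concrete illustration of a matrix or cross-tab data region, the following pandas sketch (with invented column names and data) turns flat rows into the kind of row/column layout a reporting tool would render with expand/collapse groups.

    import pandas as pd

    # Flat fact rows, as they might come from a data mart (invented sample data).
    rows = pd.DataFrame({
        "region":  ["East", "East", "West", "West"],
        "quarter": ["Q1", "Q2", "Q1", "Q2"],
        "sales":   [100, 140, 90, 110],
    })

    # Cross-tab: regions as rows, quarters as columns, summed sales in the cells.
    matrix = rows.pivot_table(index="region", columns="quarter",
                              values="sales", aggfunc="sum", margins=True)
    print(matrix)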

Types of Reports

Once we finalize the reporting tool, the next step is to select the report design and data display that will present the data as information for the user community. We need to consider several key factors when designing any report: the structure and tone of the report, the length of the report, the kinds of data to include (tables, figures, graphs, maps, or pictures), the level of detail, the positioning of data in the report, and, most important, visual sophistication.

With these key factors in mind, we can explore different types of reports, organizing data in a variety of ways to show the relationship of general information to detailed information. We can put all the data in the report but hide it until a user clicks to reveal details; this is a drilldown action. Data can be displayed in a data region, such as a table or chart, that is nested inside another data region, such as a table or matrix. Another way is to display the data in a subreport that is completely contained within a main report. Or, you can put the detail data in drillthrough reports: separate reports that are displayed when a user clicks a link. We'll explore these ideas next.

Drilldown Reports: A Drilldown report provides plus and minus icons on a text box; these icons enable users to hide and display items interactively. This is called a drilldown action. For a table or matrix, it shows or hides static rows and columns, or rows and columns that are associated with groups.

Drillthrough Reports: A Drillthrough report is a report that a user opens by clicking a link within another report. Drillthrough reports commonly contain details about an item that is summarized in an original report.

Subreport: A Subreport is a report item that displays another report inside the body of a main report. Conceptually, a subreport in a report is similar to a frame in a Web page. It is used to embed a report within another report.

Nested Reports: A Nested report places one data region, such as a chart, inside another data region, such as a matrix, typically to display data summaries concisely or to provide a visual display alongside a table or matrix display.

These report types can apply to a specific report item, a layout design, or a solution design. A single report can have characteristics of more than one type; for example, a report can simultaneously be a stand-alone report, a subreport referenced by a main report, the target of a drillthrough link in a different main report, and a drilldown report.

Data Sources for Reports

Reports can display data stored in databases, in the form of a data warehouse or data mart, or in a cube. Data is exposed by using the data source and dataset properties of a report. These data sources and datasets can be dedicated to a single report or shared across multiple reports. The report definition is always independent of the data source and dataset, and each can be managed separately.

Different types of reports offer different interactive features, such as drillthrough actions, expand/collapse toggles, sort buttons, tooltips, and report parameters that enable report reader interaction. To control data display or user interaction, we can use report parameters combined with expressions, which provide the ability to customize how report data is filtered, grouped, and sorted.
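
The following plain-Python sketch mimics what a report parameter does; the function and field names are illustrative only and do not correspond to any reporting tool's actual API. The same dataset and layout serve every reader, while filtering and sorting are driven by the parameter values the reader supplies.

    # Illustrative report-parameter sketch: filtering and sorting driven by input.
    dataset = [
        {"product": "A", "region": "East", "sales": 140},
        {"product": "B", "region": "West", "sales": 90},
        {"product": "C", "region": "East", "sales": 110},
    ]

    def render_report(region=None, sort_by="sales", descending=True):
        """Apply parameter values to the shared dataset; the layout stays fixed."""
        selected = [r for r in dataset if region is None or r["region"] == region]
        selected.sort(key=lambda r: r[sort_by], reverse=descending)
        for r in selected:
            print(f'{r["product"]:<10}{r["region"]:<10}{r["sales"]:>8}')

    render_report(region="East")  # the reader picks a region parameter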

Industry Trends in BI Reporting

Industry trends are changing rapidly, and user expectations for information delivery from a BI system are growing with them. In the current data world, a user not only analyzes structured data but also wants quick information on top of huge volumes of unstructured data. We need to adopt and support new trends to meet user objectives in a rapidly changing paradigm.

We are observing great change in data visualization. The initial visual data discovery releases from big vendors such as Microsoft, SAP, IBM, MicroStrategy, SAS, and Oracle tended to have limited capabilities, but the gap is slowly closing, as both the specialty vendors and the heavyweights try to find the right balance between analysis and trusted data.

Another important trend is mobility. Initially, the mobility of information was “nice to have,” but now it is an absolutely essential component of any useful report. The first mobile BI products allowed users to look at data remotely on their tablets and mobile phones, and as devices improved, dashboards and other visual representations made it easier for people to access the information they needed in the format they desired.

Another new trend is presenting massive amounts of enterprise-wide data (big data). Everyone is talking about big data, but in the business intelligence world it means one thing: organizations are being overwhelmed by massive amounts of new information that they need to analyze quickly and accurately. Unstructured data will be a big part of this change, because the ability to look at information not stored in spreadsheets and databases lets organizations analyze information they couldn't truly leverage a few years ago.

It is highly recommended that organizations change their way of operating, align with industry trends for better results, and use the best of technology in their own interest.

Summary

In this article, we discussed how data visualization plays an important role in the successful implementation of a BI system. We discussed various types of reports and the factors that inform the design and implementation of any report. Information delivery is the last layer of a BI system, but it is the presentation layer that turns data into information. It is what lets a diverse user community actually use that information; without it, users are in no position to make decisions.

Visualization techniques revolutionize modern business intelligence gathering

Data scientists and analysts love digging into the architecture of data to grasp its essence, exploring how it works and divining what secrets it may hold. For some expert users, the very complexity of the data is what provides the “thrill of the chase.” However, the average user wants data that’s easy to understand. Visualization has proven to be the best way to make this happen.

Why does visualization work so well?

Noah Iliinsky, data consultant and coauthor of Beautiful Visualization and Designing Data Visualizations, spoke at a LinkedIn Tech Talk about the reason visualization is so powerful for analytics. “It turns out that our eyes and our brains have very sophisticated software built into them for things like pattern recognition and detecting when there are pattern violations on a variety of factors in terms of position, skew, color, size, blur, shape, etc. They are called ‘pre-attentive properties’. We can detect very quickly when something is different or out of position. If you leverage these well, you can design things where you can get a lot of information into someone’s brain very easily and very quickly.”
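
A quick way to see a pre-attentive property at work is to give a single point a different color: the red point below "pops out" before you consciously scan the chart. This matplotlib sketch uses made-up data.

    import random
    import matplotlib.pyplot as plt

    random.seed(1)
    xs = list(range(30))
    ys = [random.gauss(50, 5) for _ in xs]

    # Color is a pre-attentive property: one red point among gray ones is
    # detected almost instantly, without reading each value.
    colors = ["red" if x == 17 else "gray" for x in xs]
    plt.scatter(xs, ys, c=colors)
    plt.title("One pre-attentively distinct point")
    plt.show()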

Enterprises are demanding more visualization

Business users may not know the science behind the way their brains process data, but they know what works. Presentation is king. Distilling data into the essential intelligence that will inform business decisions is pointless if the resulting reports are visually opaque. According to a recent TDWI white paper on self-service BI, “For information consumers, the results need to be easier to consume and use, and the solution here is to employ more sophisticated visualization techniques. These vary considerably, from using technologies such as Google Maps to display location-specific data, to visualization approaches such as small multiples, scatter plots, heat maps, enclosure diagrams, node links, arc diagrams, and more. Advanced visualization ranked third highest in the survey for enhanced user interface requirements, with 41% of respondents saying this was a ‘very important’ requirement.”

Tools must match the source, complexity, and variety of data

Being able to look at the same data in many different ways is critical since each perspective can add depth. The data visualization tools of the past, with their two-dimensional pie and bar charts, simply aren’t refined enough to offer real insight into complex data sets. Imagine trying to track the proliferation of power stations across the United States over time using a traditional Excel spreadsheet and a set of static graphs or charts. Assuming you have the relevant information stored on your SQL server, you could sort and present the data by date of initial operation, by state, by county, by power station type, and so on.

However, making sense of the data really requires a map—and some way to visually express the changes that are taking place over time. The new “GeoFlow” visualization tool from Microsoft is a good example of how geographic data can be viewed in a way that permits the eye to easily detect patterns. It also includes the ability to drill down into the data after the overall trends become apparent so that users can uncover additional intelligence.
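
GeoFlow itself ships as an Excel add-in, but the underlying idea can be sketched in a few lines of Python: plot each station at its coordinates and encode the year of initial operation as color, so spatial and temporal patterns appear together. The coordinates and years below are invented for illustration.

    import matplotlib.pyplot as plt

    # Invented sample: (longitude, latitude, year of initial operation).
    stations = [(-74.0, 40.7, 1955), (-87.6, 41.9, 1968),
                (-95.4, 29.8, 1982), (-118.2, 34.1, 1999),
                (-122.3, 47.6, 2008)]

    lons, lats, years = zip(*stations)
    sc = plt.scatter(lons, lats, c=years, cmap="viridis", s=80)
    plt.colorbar(sc, label="Year of initial operation")
    plt.xlabel("Longitude")
    plt.ylabel("Latitude")
    plt.title("Power stations over time (illustrative data)")
    plt.show()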

More features of a smart enterprise BI tool

Beyond offering many ways to present BI, the right solution will also give users more control over reporting. While visuals are important, they shouldn’t distract from the data or from business objectives. Every feature should be easy to use and fill a functional role. Here are a few key features that make a difference:

1. Interactivity, especially the ability to slice and dice the data (see the sketch after this list)
2. Full OLAP support
3. Static and dynamic capabilities
4. Visuals that relate to the real world within which the business operates
5. Multidimensional analysis
6. Collaboration for team analytics
7. Real time or near real time capability for live BI needs
8. Meta-visualization in dashboards
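
As promised in item 1, here is a minimal pandas sketch of slicing and dicing: the same fact table viewed along different dimensions on demand. The table, dimensions, and measure are invented for illustration.

    import pandas as pd

    # Invented fact table with three dimensions and one measure.
    facts = pd.DataFrame({
        "year":    [2012, 2012, 2013, 2013, 2013],
        "region":  ["East", "West", "East", "West", "West"],
        "product": ["A", "A", "B", "B", "A"],
        "revenue": [100, 80, 120, 95, 60],
    })

    # "Slice": fix one dimension; "dice": regroup by others on demand.
    print(facts[facts["year"] == 2013].groupby("region")["revenue"].sum())
    print(facts.groupby(["product", "year"])["revenue"].sum())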

Visualization is about more than individual reports designed for distinct purposes or for certain departments. Dr. Joseph Morabito, Industry Associate Professor at the Stevens Institute, maintained in his 2012 talk about Big Data that different users require different types of dashboard visualization.

“The strategic dashboard is focused on high-level measures of performance. Typically, they feature static snapshots of data on a daily, weekly, or monthly basis, and there is little user interaction. You don’t want too much here. It’s better to be simple. Analytical displays are designed for detailed data analysis. Here, you’re going to have comparatively more data (and more complex data) but richer comparisons. You’ll have extensive historical data, but still mostly periodic snapshots. You’ll have a lot of interaction here with many OLAP features. Operational data requires a dynamic environment where we are using real-time or near real-time data (as in monitoring a supply chain management system). Here we need to keep it simple, as we do with the executive dashboard, but for different reasons. We need to see problems right away and then drill on demand so we can locate problems as they arise in real time.”

In the final analysis, users want more than BI solutions that enable them to achieve goals in their business. They want tools that make them feel smart. That’s the kind of positive reinforcement that provides the motivation leading to innovation. A data visualization solution that is versatile, well-rounded, and accessible for self-service is the best BI tool for this purpose.

The Predictive Power of Big Data

In computer science, the unit used to measure information is the bit, short for “binary digit.” You can think about a single bit as the answer to a yes-or-no question, where 1 is yes and 0 is no. Eight bits is called a byte.

Right now, the average person’s data footprint—the annual amount of data produced worldwide, per capita—is just a little short of one terabyte. That’s equivalent to about eight trillion yes-or-no questions. As a collective, that means humanity produces five zettabytes of data every year: 40,000,000,000,000,000,000,000 (forty sextillion) bits.
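
The arithmetic behind these figures is easy to verify, assuming the decimal (SI) definitions of the units: a terabyte is 10^12 bytes and a zettabyte is 10^21 bytes.

    # Checking the figures above: bits are bytes times eight (SI units assumed).
    terabyte = 10**12    # bytes
    zettabyte = 10**21   # bytes

    print(f"1 TB = {8 * terabyte:.0e} bits")       # 8e+12: eight trillion questions
    print(f"5 ZB = {8 * 5 * zettabyte:.0e} bits")  # 4e+22: forty sextillion bits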

Such large numbers are hard to fathom, so let’s try to make things a bit more concrete. If you wrote out the information contained in one megabyte by hand, the resulting line of 1s and 0s would be more than five times as tall as Mount Everest. If you wrote out one gigabyte by hand, it would circumnavigate the globe at the equator. If you wrote out one terabyte by hand, it would extend to Saturn and back twenty-five times. If you wrote out one petabyte by hand, you could make a round trip to the Voyager 1 probe, the most distant man-made object in the universe. If you wrote out one exabyte by hand, you would reach the star Alpha Centauri. If you wrote out all five zettabytes that humans produce each year by hand, you would reach the galactic core of the Milky Way. If instead of sending e-mails and streaming movies, you used your five zettabytes as an ancient shepherd might have—to count sheep—you could easily count a flock that filled the entire universe, leaving no empty space at all.

This is why people call these sorts of records big data. And today’s big data is just the tip of the iceberg. The total data footprint of Homo sapiens is doubling every two years, as data storage technology improves, bandwidth increases, and our lives gradually migrate onto the Internet. Big data just gets bigger and bigger and bigger.

THE DIGITAL LENS

Arguably the most crucial difference between the cultural records of today and those of years gone by is that today’s big data exists in digital form. Like an optic lens, which makes it possible to reliably transform and manipulate light, digital media make it possible to reliably transform and manipulate information. Given enough digital records and enough computing power, a new vantage point on human culture becomes possible, one that has the potential to make awe-inspiring contributions to how we understand the world and our place in it.

Consider the following question: Which would help you more if your quest was to learn about contemporary human society—unfettered access to a leading university’s department of sociology, packed with experts on how societies function, or unfettered access to Facebook, a company whose goal is to help mediate human social relationships online?

On the one hand, the members of the sociology faculty benefit from brilliant insights culled from many lifetimes dedicated to learning and study. On the other hand, Facebook is part of the day-to-day social lives of a billion people. It knows where they live and work, where they play and with whom, what they like, when they get sick, and what they talk about with their friends. So the answer to our question may very well be Facebook. And if it isn’t—yet—then what about a world twenty years down the line, when Facebook or some other site like it stores ten thousand times as much information, about every single person on the planet?

These kinds of ruminations are starting to cause scientists and even scholars of the humanities to do something unfamiliar: to step out of the ivory tower and strike up collaborations with major companies. Despite their radical differences in outlook and inspiration, these strange bedfellows are conducting the types of studies that their predecessors could hardly have imagined, using datasets whose sheer magnitude has no precedent in the history of human scholarship.

Jon Levin, an economist at Stanford, teamed up with eBay to examine how prices are established in real-world markets. Levin exploited the fact that eBay vendors often perform miniature experiments in order to decide what to charge for their goods. By studying hundreds of thousands of such pricing experiments at once, Levin and his co-workers shed a great deal of light on the theory of prices, a well-developed but largely theoretical subfield of economics. Levin showed that the existing literature was often right—but that it sometimes made significant errors. His work was extremely influential. It even helped him win a John Bates Clark Medal—the highest award given to an economist under forty and one that often presages the Nobel Prize.

A research group led by UC San Diego’s James Fowler partnered with Facebook to perform an experiment on sixty-one million Facebook members. The experiment showed that a person was much more likely to register to vote after being informed that a close friend had registered. The closer the friend, the greater the influence. Aside from its fascinating results, this experiment—which was featured on the cover of the prestigious scientific journal Nature—ended up increasing voter turnout in 2010 by more than three hundred thousand people. That’s enough votes to swing an election.

Albert-László Barabási, a physicist at Northeastern, worked with several large phone companies to track the movements of millions of people by analyzing the digital trail left behind by their cell phones. The result was a novel mathematical analysis of ordinary human movement, executed at the scale of whole cities. Barabási and his team got so good at analyzing movement histories that, occasionally, they could even predict where someone was going to go next.

Inside Google, a team led by software engineer Jeremy Ginsberg observed that people are much more likely to search for influenza symptoms, complications, and remedies during an epidemic.

They made use of this rather unsurprising fact to do something deeply important: to create a system that looks at what people in a particular region are Googling, in real time, and identifies emerging flu epidemics. Their early warning system was able to identify new epidemics much faster than the U.S. Centers for Disease Control could, despite the fact that the CDC maintains a vast and costly infrastructure for exactly this purpose.
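
Google's actual model was a regression over millions of candidate queries, but the core idea can be sketched much more simply: track flu-related search volume and flag a region when it climbs well above its recent baseline. The weekly counts and the 1.5x threshold below are invented.

    # Toy early-warning sketch: flag weeks where flu-query volume jumps
    # well above the trailing average (all numbers invented).
    weekly_searches = [100, 98, 105, 102, 110, 160, 240, 310]

    window = 4
    for week in range(window, len(weekly_searches)):
        baseline = sum(weekly_searches[week - window:week]) / window
        if weekly_searches[week] > 1.5 * baseline:
            print(f"Week {week}: {weekly_searches[week]} searches "
                  f"(baseline {baseline:.0f}) -> possible outbreak")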

Raj Chetty, an economist at Harvard, reached out to the Internal Revenue Service. He persuaded the IRS to share information about millions of students who had gone to school in a particular urban district. He and his collaborators then combined this information with a second database, from the school district itself, which recorded classroom assignments. Thus, Chetty’s team knew which students had studied with which teachers. Putting it all together, the team was able to execute a breathtaking series of studies on the long-term impact of having a good teacher, as well as a range of other policy interventions. They found that a good teacher can have a discernible influence on students’ likelihood of going to college, on their income for many years after graduation, and even on their likelihood of ending up in a good neighborhood later in life. The team then used its findings to help improve measures of teacher effectiveness. In 2013, Chetty, too, won the John Bates Clark Medal.

And over at the incendiary FiveThirtyEight blog, a former baseball analyst named Nate Silver has been exploring whether a big data approach might be used to predict the winners of national elections. Silver collected data from a vast number of presidential polls, drawn from Gallup, Rasmussen, RAND, Mellman, CNN, and many others. Using this data, he correctly predicted that Obama would win the 2008 election, and accurately forecast the winner of the Electoral College in forty-nine states and the District of Columbia. The only state he got wrong was Indiana. That doesn’t leave much room for improvement, but the next time around, improve he did. On the morning of Election Day 2012, Silver announced that Obama had a 90.9 percent chance of beating Romney, and correctly predicted the winner of the District of Columbia and of every single state—Indiana, too.
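
Silver's model adjusts for pollster house effects, sample size, and recency, among other things; a bare-bones version of the aggregation idea is just a weighted average that trusts larger, fresher polls more. The polls and the decay formula below are invented for illustration.

    # Toy poll aggregation: weight each poll by sample size and recency.
    polls = [  # (candidate_share_pct, sample_size, days_old) -- invented
        (52.0, 800, 1),
        (49.5, 1200, 3),
        (51.0, 600, 7),
    ]

    def weight(sample_size, days_old):
        return sample_size / (1 + days_old)  # simple assumed recency decay

    total = sum(weight(n, d) for _, n, d in polls)
    estimate = sum(share * weight(n, d) for share, n, d in polls) / total
    print(f"Aggregated estimate: {estimate:.1f}%")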

The list goes on and on. Using big data, the researchers of today are doing experiments that their forebears could not have dreamed of.

Big Data Analytics – Make it a business initiative and determine ownership

Although big data has recently taken the mainstream spotlight and become a major initiative in the enterprise, it has always been at play in the wireless space. The challenge now is to deliver on the unique monetization opportunity presented by building analytics and applications on top of data of this scale and timeliness.

Mobile operators are at an important crossroads. New entrants have forced these global brands to rethink how they will effectively compete to ensure long-term viability. For operators, it’s no longer about having the best network or hottest devices; it’s about having the smartest strategy for making the most of their greatest asset – customer data.

Although mobile operators will choose a variety of paths, the monetization of big data will be key to securing a viable future. For some, success will mean the ability to generate insights that improve customer interaction and proactively address customer demands. For others it will mean the creation of new revenue streams, such as selling data to third parties. Regardless of the path chosen, successful monetization rests on one key element – the strength of the analytics at play.

Operators agree that big data analytics should be a strategic priority to drive customer monetization opportunities, but the barriers are often significant. What is required is a major shift in mindset, skillset, and technology. Operators must transform their organizations from the top down in order to become truly data-driven. They must acquire new talent as they shift their focus from aggregating data to gleaning insights from it. And most importantly, they must embrace the latest technologies to be able to act on these insights in an automated fashion.

With increased competition and the heightened risk of over-the-top players infringing on their customer profits, forward-thinking operators are prioritizing their big data analytics strategies and turning to tactics that are proven to accelerate monetization opportunities.

–Identify the strategic need before laying the groundwork: Too often, big data is seen as a technology initiative rather than a business one. Heavy emphasis is placed on determining the infrastructure required to support big data, but often too little thought goes into what comes next. This can result in costly efforts that fall short on return on investment. Operators are making a point of looking ahead at what various owners will do with the data, i.e., what problems they may solve and how they can rethink future engagement strategies. Then they focus on the most effective plan for extracting, compiling, and acting on the data.

–Determine the owner of the problem – and the solution: Misalignment among C-levels and functional teams has been deemed a major barrier for moving big data initiatives beyond the exploration stage. Although the CIO has traditionally “owned” anything in the data realm, we are seeing business owners become more proactive in not only identifying the problems to be solved but also which solutions can best solve them. For example, CMOs within several leading operators are tackling the problem of churn by leading a big data analytics initiative from initiation to execution. This “single-owner” approach ensures alignment between the what, the why and the how.

–Focus on the data that matters: Taking an inside-out approach of analyzing streams of data in order to then determine the problem you want to solve can leave you trying to boil the ocean. To combat analysis paralysis, operators are identifying specific business problems that, when solved, can have a substantial impact on the bottom line, e.g., declining recharge volume, data adoption, or churn. This focused effort accelerates the process of determining which behavior changes will have the greatest impact on the goal at hand and which specific data is required for modeling, analyzing, and monitoring those behaviors.

–Invest in more sophisticated analytics: Many operators continue to miss the mark on when, where, and how to connect with their customers despite the vast amount of data that they have. To determine what’s relevant for each customer and deliver it in the right context, operators are adopting more sophisticated analytics to understand dynamic customer behavior over time. Flexibility, timeliness and the ability to scale to analyze millions of customers across any number of dimensions are top of mind for operators as they invest in more sophisticated analytics.

–Move toward real-time: Many operators continue to rely on batch processing of events to determine how and when to engage their customers. Yet to proactively address customer needs, they know they must be able to analyze data in real time. Truly customer-centric operators are moving toward real-time analytics by confronting the technology challenges that stand in the way of getting the data easily and quickly. It’s not realistic to expect that every data source can be analyzed in real time, but operators are prioritizing the sources that deliver rich behavioral insights, such as usage and transactional records, to drive timelier and higher-value customer engagement. (A minimal sketch of the batch-versus-real-time difference follows this list.)

–Pursue plug-and-play for greater ease and speed: While analytics were at play long before the term big data hit the spotlight, many operators remain challenged to advance their capabilities, especially given the explosion of new technologies and techniques specific to mobile. To leverage more advanced analytics, such as behavioral clustering, prescriptive analytics, and machine learning, operators are turning to productized solutions that ease the IT pain and expedite speed to market. Operators are also expressing openness to a cloud-based approach, given the cost and competitive value of “analytics and action in a box.”
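
To make the batch-versus-real-time contrast concrete, here is a minimal Python sketch: instead of totaling usage events at the end of the day, a sliding window updates a metric as each event arrives, so a spike can be acted on in-stream. The event values and threshold are invented.

    from collections import deque

    # Sliding window over a stream of usage events (values invented).
    events = [3, 5, 2, 8, 40, 4, 6]  # e.g., MB of data used per session

    window = deque(maxlen=3)         # keep only the most recent events
    for value in events:
        window.append(value)
        rate = sum(window) / len(window)
        if rate > 15:                # alert threshold assumed for illustration
            print(f"Spike detected in-stream: avg {rate:.1f} MB over last {len(window)}")

    print(f"Batch view, end of day: total {sum(events)} MB")  # too late to act on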

With increased competition and the emergence of OTT players, it has never been more important for operators to leverage their customer data assets to get ahead of the curve in customer experience and monetization. Strategically minded operators who embrace the next wave of big data analytics are transforming their business and customer engagement models, becoming more competitive, and increasing customer value and profitability.

Gartner: BI and analytics one of the fastest-growing software markets

According to research firm Gartner, the market for business intelligence (BI) and analytics will remain one of the fastest-growing software markets in the coming years. Many large companies have completed their descriptive analytics and now want to move toward diagnostic, predictive, and prescriptive analytics. Gartner names ten leaders in the market: Microsoft, IBM, QlikTech, Oracle, Tableau Software, SAS, MicroStrategy, Tibco Spotfire, Information Builders, and SAP.

The business intelligence (BI) and analytics market has topped the priority list of chief information officers (CIOs) for years, yet many needs remain unmet. For example, every company has departments, such as human resources (HR), marketing, and social, that have not yet started with BI and analytics. This emerges from research firm Gartner's Magic Quadrant for Business Intelligence and Analytics Platforms.

Most large companies have completed descriptive analytics for areas such as finance and sales, but Gartner still expects substantial growth in diagnostic, predictive, and prescriptive analytics, and many midsize companies have yet to start with BI and analytics at all. According to Gartner, the market therefore remains one of the fastest-growing software markets, with an expected annual growth rate of 7 percent for the BI and analytics space through 2016.

Software vendors in the market want to serve organizations that are trying to mature by shifting their focus from descriptive analytics to diagnostic analytics. Many vendors also responded to what Gartner considers an important theme of 2012: data discovery as the mainstream approach for the BI and analytics architecture. MicroStrategy improved its Visual Insight solution, SAP introduced Visual Intelligence, and Microsoft strengthened PowerPivot with Power View.

The players in the market
Gartner has named the software vendors Microsoft, IBM, QlikTech, Oracle, Tableau Software, SAS, MicroStrategy, Tibco Spotfire, Information Builders, and SAP as leaders in the quadrant for BI and analytics. A large share of these leaders focus on data discovery, which accelerates the trend toward decentralization and user empowerment in BI and analytics. This focus also helps organizations develop the capability to perform diagnostic analytics.

The challengers in the market are LogiXML and Birst. Gartner named the software vendors Prognoz, Bitam, Board International, Actuate, Salient Management Company, Panorama Software, Alteryx, Jaspersoft, Pentaho, Targit, arcplan, and GoodData as niche players. No visionaries were named in this quadrant.

Source: Gartner