Gartner Predicts 2018 – Analytics and BI Strategy

Gareth Herschel

Published: 26 March 2018 ID: G00341404

Analyst(s): Alexander Linden, Rita L. Sallam, Svetlana Sicular, Jim Hare, Jorgen Heizenberg, Erick Brethenoux, Douglas Laney

Summary

Analytics will adapt to the needs of employees and customers instead of forcing them to adopt traditional approaches to producing and consuming analysis. Data and analytics leaders must plan for fewer constraints in how analysis is done, and more choices in how it can be used.

Overview

Key Findings

  • Analysis will become increasingly embedded in processes, devices and applications; it will therefore be both more pervasive and less visible — flipping the burden of prompting its use from the user to the machine.
  • Augmented analytics will enable easier interactions with analytic tools by pushing more of the complex analytics workload away from the user and over to the machine.
  • The shortage of data scientists will gradually cease to be a constraint as a variety of automation and people-centric innovations reduce the need for high-end specialists.

Recommendations

For data and analytics leaders involved in analytics and business intelligence strategies:

  • Begin your adoption of push-driven, conversational, augmented analytics by identifying common errors in your business processes that could have been avoided with simple analysis.
  • Change your product criteria for analytics to include elements that reflect the context in which a decision is made, evaluating a product’s ability to respond to location, activity and user profile when creating analysis.
  • Create a culture of citizen data scientists by sharing examples of how analysis is impacting business performance across your organization and encouraging the use of analysis to question assumptions and drive business decisions.
  • Create an effective process for identifying valid analytic techniques for each business decision by combining analysts, line-of-business decision makers and your legal department into an analytic governance committee.

Strategic Planning Assumptions

By 2021, 75% of prebuilt reports will be replaced with or augmented by automated insights delivered on a “most needed” basis.

By 2022, every personalized interaction between users and applications or devices will be adaptive.

By 2022, 30% of customer interactions will be influenced by real-time location analysis, up from 4% in 2017.

By 2023, artificial intelligence and deep-learning techniques will replace traditional machine learning techniques as the most common approach for new applications of data science.

By 2024, a scarcity of data scientists will no longer hinder the adoption of data science and machine learning in organizations.

Analysis

What You Need to Know

The power of analysis is transforming organizations and industries. Subjective decisions such as employee recruitment or product branding are increasingly data-driven, and objective decisions such as logistics planning or customer risk analysis are being made using more sophisticated analysis on more complex data than was previously available.

Three key trends are worth focusing on as organizations plan their analytic strategies, all of which will evolve significantly during the next few years:

  • Empower the Masses — How will organizations expand the reach and relevance of analysis across the entire organization? Two market movements will accelerate this reach to a broader range of employees than ever before.
    • The first is the flip to proactive pushing of analysis — based on context rather than requiring the user to pull the data from the system — which increases the relevance of the analysis and reduces the cognitive load for users.
    • The second will be the ability of artificial intelligence (AI) to allow casual users to access technology and insight more naturally, reducing the interface-learning barrier to adoption.
  • Embrace the Complexity — How can organizations use analysis to challenge conventional behaviors with fundamentally new approaches to traditional problems? Two changes will transform this dynamic:
    • Increasing the demand for sophisticated analysis
    • Increasing the availability of the analysis
  • Although neither will occur as quickly as the trends described in “Empower the Masses” above, over time their effects will be profound, so organizations should start thinking about the implications today. On the demand side, we anticipate that organizations will become increasingly comfortable with the idea of “black-box” analytics (analytics whose inner workings are hidden from, and not understandable by, the user). This will allow wider adoption of analytics to support a variety of use cases. At the same time, the creation of more sophisticated analysis will become less dependent on users with traditional data science skills. Whether this is the result of greater automation, of easier-to-use data science platform tools for less skilled users, or of a combination of the two, the outcome is the same: data science insights can be gained without the need for data scientists.
  • Transform the Enterprise Ecosystem — How will analysis create insights that change the way the organization understands itself and relates to its environment? Sometimes, the future is created by mixing current best practices. Our final prediction is an example of just that, with elements of real-time and context-driven analysis incorporating customer location as a typical variable in building recommendations.

Figure 1. Key Changes in Analytics

Source: Gartner (March 2018)

Strategic Planning Assumptions

Strategic Planning Assumption: By 2021, 75% of prebuilt reports will be replaced with or augmented by automated insights delivered on a “most needed” basis.

Analysis by: Erick Brethenoux and Rita Sallam

Key Findings:

  • Report stacks and static information delivery are of limited use for decision making in a digital business — competition for management attention and business velocity have rendered traditional reports obsolete.
  • Predetermined report formats are increasingly being replaced by a set of more intuitive interaction mechanisms such as conversational analytics.
  • Citizen data scientists (CDSs) and business users are being equipped with increasingly “smart” discovery techniques and selective warning capabilities, fueling the augmented analytics trend.

Market Implications:

Relevancy is one of the most critical qualities for analytics. Users should not have to look for information — the system should recognize that a piece of information is relevant to the user and deliver the insight on an intuitive and need-to-know basis.

Many analytical platforms already integrate augmented analytic techniques that automatically evaluate data quality and offer corrective actions; detect trends and correlations in the data; and suggest analytics paths and the most appropriate format for showing the results.

To ease interactions, conversational analytics enables business people to generate queries, explore data, and receive and act on insights in natural language (voice or text) via mobile devices and personal assistants. It also enables devices to generate natural-language text or speech, conveying data and insights to consumers or providing a trigger such as an alert. For example, instead of accessing a daily dashboard, a decision maker with access to Amazon Alexa might say, “Alexa, analyze my sales results for the past three months!” or “Alexa, what are the top three things I can do to improve my close rate today?” Self-learning of the users’ relevancy triggers means the device could also initiate the interaction by calling or signaling the user.
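
To make the pattern concrete, here is a minimal sketch (in Python, with hypothetical names such as parse_request and answer, and invented sample data) of the conversational flow described above: a natural-language request is mapped to a parameterized analysis, and the result is phrased back as a sentence a voice assistant could speak. It illustrates the interaction pattern, not any vendor’s implementation.

```python
# Illustrative only: naive intent/slot extraction plus a parameterized analysis,
# returning a natural-language answer a voice assistant could speak.
from datetime import date, timedelta
import re

# Hypothetical sales history: (date, amount) pairs.
SALES = [
    (date(2018, 1, 15), 12000),
    (date(2018, 2, 10), 15000),
    (date(2018, 3, 5), 9000),
    (date(2018, 3, 20), 11000),
]

def parse_request(utterance: str) -> dict:
    """Very naive slot extraction: look for a 'past N months' window."""
    match = re.search(r"past (\d+|three) months", utterance.lower())
    months = 3 if match and match.group(1) == "three" else int(match.group(1)) if match else 1
    return {"intent": "summarize_sales", "window_months": months}

def answer(utterance: str, today: date) -> str:
    """Run the parameterized analysis and phrase the insight in natural language."""
    slots = parse_request(utterance)
    cutoff = today - timedelta(days=30 * slots["window_months"])
    window = [amount for day, amount in SALES if day >= cutoff]
    total = sum(window)
    return (f"Over the past {slots['window_months']} months you closed "
            f"{len(window)} deals worth {total:,} in total.")

print(answer("Alexa, analyze my sales results for the past three months!",
             today=date(2018, 4, 1)))
```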

Recommendations

For data and analytics leaders:

  • Maximize the relevancy of business analysis by encouraging CDSs to work collaboratively and iteratively with expert data scientists and business users.
  • Invest in intuitive interface mechanisms that can adjust analytics to a consumer’s business context for increasingly natural interaction and effective collaboration.
  • Experiment with AI techniques aimed at aggressively filtering the volume, and improving the quality, of information that requires a user’s attention.
  • Explore the automatic analysis and self-discovery capabilities offered by modern analytics platforms, while also leveraging the advanced event processing capabilities offered by modern middleware solutions.

Related Research:

  • “Gartner Analytics Evolution Framework”
  • “Augmented Analytics Is the Future of Data and Analytics”

Strategic Planning Assumption: By 2022, every personalized interaction between users and applications or devices will be adaptive.

Analysis by: Erick Brethenoux and Doug Laney

Key Findings:

  • Using data gathered by our devices, applications and services often personalize interactions to ensure the needs of each specific user are met.
  • Users, both professionals and consumers, are getting buzzed, beeped, vibrated and flashed to a point of attention numbness.
  • AI techniques are starting to protect users from abusive, attention-grabbing analytics capabilities. The adaptability of solutions and devices should adhere to “calm computing” principles, whereby the technology requires the smallest amount of a user’s attention.
  • Given the dramatic rise in the information required to deliver contextually relevant services, and the accompanying computing demand putting undue pressure on the power autonomy of mobile devices, edge computing will become a critical component in personalization architectures.

Market Implications:

Adaptability is at the heart of evolution, and of survival. In the case of technology, it must adapt to our lifestyles, constraints and desires. A technology that is not intuitive, easy to use and highly relevant (that is, critical to our day-to-day life) often ends up on a shelf or at the bottom of a forgotten drawer. Calm technology (introduced by Mark Weiser and John Seely Brown at Xerox PARC in 1996) 1 is a set of principles aimed at remediating the growing level of disturbance brought about by an increasing number of devices clamoring for our attention:

  1. Technology should require the smallest amount of our attention — create ambient and highly contextual awareness through different senses.
  2. Technology should inform and calm — give users what they need to solve their problem, and nothing more.
  3. Technology should make use of the periphery — the periphery is informing without overwhelming, while giving the proper level of control if needed.

The smart fusion of the vast amount of personal data gathered (with permission) in real time requires an increasing data load to be processed and acted upon at the device level. Depending on the user’s context and situation, the application interface should self-adjust its parameters to augment relevancy while minimizing distraction. Smart devices’ cadence and speed sensors can now determine whether their users are walking, biking or driving, and therefore anticipate the manner in which to grab their attention if (and only if) the information to be delivered is critical. Self-learning mechanisms can also watch users’ behavior and adapt to their preferences without being explicitly set. Finally, to reach high levels of relevancy and adaptability, machine learning and prescriptive analytics techniques can deliver real-time process morphing capabilities.
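
As an illustration of how such adaptation might be wired up, the sketch below (hypothetical activity labels and channel names, not a real device API) chooses a delivery channel from the sensed activity and the criticality of the information, following the calm-computing idea of using the periphery unless interruption is warranted.

```python
# Illustrative sketch: the delivery channel adapts to the user's sensed activity
# and to how critical the information is, preferring peripheral (ambient) cues.
from enum import Enum

class Activity(Enum):
    STILL = "still"
    WALKING = "walking"
    BIKING = "biking"
    DRIVING = "driving"

def choose_channel(activity: Activity, criticality: int) -> str:
    """criticality: 0 (ignorable) .. 3 (safety-relevant)."""
    if criticality == 0:
        return "suppress"  # calm principle: nothing the user doesn't need
    if activity in (Activity.BIKING, Activity.DRIVING):
        # Hands and eyes are busy: interrupt only for safety-relevant items, by voice.
        return "voice_alert" if criticality >= 3 else "defer_until_stopped"
    if activity is Activity.WALKING:
        return "vibration" if criticality >= 2 else "ambient_badge"
    return "ambient_badge" if criticality == 1 else "notification"

# Example: a routine insight while driving is deferred, not flashed at the user.
print(choose_channel(Activity.DRIVING, criticality=1))   # -> defer_until_stopped
print(choose_channel(Activity.STILL, criticality=2))     # -> notification
```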

Context-aware computing focuses on increasing the relevancy of information to assist users in real time. In a nutshell, context is viewed as something that can be sensed and measured using location, time, person, activity type and other dimensions. In addition to maximizing context, personalized interactions also require the application to minimize uncertainty while condensing all available data to a set manageable by applications or devices in a short time frame. Today, most of the challenge in building “condensers” still resides upstream, in the appropriate choice of the data sources and their quality.

As more information is produced at the periphery, pushed by the combined influence of the Internet of Things (IoT) and mobile computing, a new computing platform paradigm is emerging. That platform will opportunistically connect devices without, or with minimal, intervention of a cloud capability and make extensive use of smart and autonomous software agents. That intelligence at the edge will minimize the amount of data transfer, bring contextual situational awareness for local processes and allow localized interactions to adapt faster to local situations.

Recommendations

For data and analytics leaders:

  • Enhance application and interface design capabilities by adopting user-centric disciplines such as calm technology and design-thinking methodologies.
  • Prototype experimental business applications and solutions by combining emerging techniques for contextual computing such as IoT, machine learning, prescriptive analytics and virtual assistants.
  • Optimize the localization and adaptability of applications and local devices at the periphery of the network by exploring agent-based computing paradigms.

Related Research:

  • “Cool Vendors in AI for Conversational Platforms, 2017”

Strategic Planning Assumption: By 2022, 30% of customer interactions will be influenced by real-time location analysis, up from 4% in 2017.

Analysis by: Jim Hare

Key Findings:

  • Organizations are increasingly using real-time location analytics because of the pressure to increase the speed and accuracy of business processes, as well as to improve customer experiences. Real-time location data is fueling innovation in services such as Uber and Waze, which are particularly location-aware.
  • The adoption of smartphones and IoT devices is generating location data that can be used to personalize interactions and improve experience. Penetration of smartphones in the U.S. exceeded 80% in 2016, 2 and is expected to exceed 40% worldwide by 2022. 3
  • Today, about 29% of organizations are actively using geospatial and location intelligence capabilities (see “Survey Analysis: BI and Analytics Spending Intentions, 2017”). However, we estimate that only about 4% are performing location analytics in real time.
  • Microlocation technologies such as Bluetooth, beacons and Wi-Fi are closing the gaps between online and offline customer interactions. Location data generated by smartphones is accurate only to approximately 30 meters; smartphones using microlocation technologies can enable accuracy to less than one meter.

Market Implications:

Time and place underpin everything that happens in our lives and everything we know and learn about the world. Today’s technology allows us to collect information about nearly all of these events, fueling an explosion of real-time, location-aware data. With smartphone penetration rates rising in emerging and advanced economies alike, location infrastructure such as RFID, beacons, GPS and cell towers, as well as the IoT, is set to go mainstream within a few years. Location data already is, and will continue to be, a growing component of all business data; however, rather than adding to the complexity of the data landscape, location has the power to bring order to it.

Businesses are increasingly leveraging the streaming data generated by devices, sensors and people to make faster decisions, and they are seeking new ways to improve all aspects of the customer experience. Examples include a shift to a localized view of the customer and the use of more real-time analytics at the most relevant moment in a customer’s journey, in order to optimize the experience.

It is not just real-time data about the customer that creates context for business decisions; it is also real-time data about other aspects of company operations. For example, a restaurant chain seeing real-time data that shows lower-than-anticipated sales at a location, and at the same time a food surplus at that location, could respond by texting offers to customers close to that location. The customers get a good deal, creating a great experience, and the restaurant sells all its food.
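
A minimal sketch of that restaurant scenario, with invented coordinates, an assumed 500-meter geofence and a placeholder surplus signal, might look like the following: customers who have opted in and whose last known position falls inside the geofence around the affected location are selected to receive the offer.

```python
# Illustrative only: geofence check for a location-triggered offer.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def customers_to_text(store, customers, radius_m=500):
    """Return customer IDs inside the geofence, respecting opt-in consent."""
    return [
        c["id"] for c in customers
        if c["opted_in"]
        and haversine_m(store["lat"], store["lon"], c["lat"], c["lon"]) <= radius_m
    ]

store = {"id": "downtown", "lat": 40.7128, "lon": -74.0060}
customers = [
    {"id": "c1", "lat": 40.7130, "lon": -74.0055, "opted_in": True},   # ~50 m away
    {"id": "c2", "lat": 40.7306, "lon": -73.9352, "opted_in": True},   # several km away
    {"id": "c3", "lat": 40.7129, "lon": -74.0061, "opted_in": False},  # nearby, no consent
]

sales_below_forecast = True  # placeholder for the real-time sales/surplus signal
if sales_below_forecast:
    print(customers_to_text(store, customers))   # -> ['c1']
```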

An increasing number of disruptive digital businesses are built on top of real-time, location-based insights. For example, Uber uses data to predict traffic patterns, estimate near-accurate waiting times for taxis and reroute to the fastest possible route to the destination in order to give users a seamless travel experience from point A to point B. If Uber didn’t have access to real-time location analytics, its business wouldn’t exist. The use of real-time location analytics is expected to expand to traditional businesses as well.

Here are some other sample use cases for real-time location analytics:

  • Retail — In-store location technology can be used to identify and engage customers directly through smartphone apps. Sales staff can be automatically alerted to high-value customers and can provide a concierge level of service.
  • Mobile Marketing — Hyperlocal targeted mobile messages and offers can be delivered using current or future location as a context. This ensures the customers are at the right place and time to take advantage of a personalized offer the moment it is delivered.
  • Customer Experience — Location analytics can be used to improve guest experiences in large venues such as stadiums or resorts, or to improve experiences in airports and travel. Way-finding, ordering and even queue management for services such as restrooms or drinks can all be enhanced using location. It can also be used to give customers more accurate times for package deliveries or field service repairs.

Recommendations

For data and analytics leaders:

  • Use location to add more precision and context to your customer analytics. In the digital world, it’s no longer enough to know who your customers are. You also need to know where they are (while avoiding creepiness and not bombarding people with messages just because they walk near a store) and their communication preferences, in order to deliver a truly hyperpersonalized and responsive experience.
  • Work with your business teams to explore how real-time location analytics can be used to improve customer interactions and increase engagement. Understand the difference between real-time and non-real-time location use cases: in non-real-time use cases, data can be collected and analyzed offline, whereas in real-time use cases the location data is captured and analyzed in near real time.
  • Match the speed of the location analytics to the speed of the customer interaction. Real-time use cases typically require different technologies that are able to handle spatial processing and analytics to reduce latency and automate the action.
  • Establish solid data management and governance, and the right location precision, for real-time location analytics. Inaccurate or imprecise location or consumer data can have the opposite effect and create a negative customer experience.
  • Understand the different technologies needed for indoor versus outdoor use cases. Each requires different map data in different formats. The data transmission technologies are also different, depending on where the data is collected; for example, within a building or on the road (technologies such as GPS don’t work well indoors).

Related Research:

  • “How to Move Analytics to Real Time”
  • “Forecast Snapshot: Location Intelligence Software, Worldwide, 2017”
  • “Mapping the Mobile Customer Decision Journey”
  • “Market Guide for Indoor Location Application Platforms”
  • “Market Trends: Ways CSPs Can Exploit Location Data”

Strategic Planning Assumption: By 2023, artificial intelligence and deep-learning techniques will replace traditional machine learning techniques as the most common approach for new applications of data science.

Analysis by: Gareth Herschel, Alexander Linden and Svetlana Sicular

Key Findings:

  • One of the most significant barriers to organizational adoption of analytics is trust in the analysis. The need for trust takes several forms, from statistical accuracy to ethical suitability.
  • As business problems become more complicated, the sophistication of the analysis required to solve them continues to increase. The level of knowledge required to understand the analysis also rises (albeit to a lesser degree), placing a continual brake on the adoption of more sophisticated analysis.
  • As a decision maker’s personal experience with the use of deep-learning techniques increases, the reluctance to trust analysis that is not understood decreases.

Market Implications:

To solve increasingly complex problems, organizations need to adopt more sophisticated analysis; however, adoption and usage of the analysis also require trust in the suitability and accuracy of the analysis that was performed.

Any analytic technique can be assessed for accuracy. However, the elements that determine whether the model should be used (because it incorporates certain types of data), or whether it is trustworthy (being able to provide a hypothesis about why the model works rather than just statistical evidence that it seems to work), are also important. Historically, the most commonly used analytic techniques have been comprehensible, at least in conceptual terms. Even if users don’t understand the details of how the specific model was built, they can understand the logic behind a multivariate regression or decision tree.

Deep learning (a form of AI) poses a fundamentally different question than that of pure accuracy. Although it often delivers more accurate results, its lack of transparency has limited its use to cases where traditional machine-learning techniques would not be suitable (such as virtual assistants and self-driving cars). As deep-learning techniques become more widely available, the question of whether to use them or traditional machine learning techniques to solve the same problem (such as anti-money laundering or fraud risk) will become more common.
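
The trade-off can be illustrated with a toy experiment (a generic sketch using scikit-learn, not the methodology behind this prediction): on the same synthetic, fraud-like classification problem, a logistic regression exposes per-feature coefficients a decision maker can inspect, while a layered neural network (standing in here for deeper architectures) may score comparably but offers no equally direct explanation.

```python
# Illustrative sketch of the transparency/accuracy trade-off discussed above.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic, imbalanced (fraud-like) binary classification problem.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

transparent = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
opaque = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                       random_state=0).fit(X_tr, y_tr)

print("logistic regression accuracy:", round(transparent.score(X_te, y_te), 3))
print("neural network accuracy:     ", round(opaque.score(X_te, y_te), 3))

# The interpretable model can be inspected feature by feature ...
print("top coefficients:", sorted(enumerate(transparent.coef_[0]),
                                  key=lambda kv: abs(kv[1]), reverse=True)[:3])
# ... whereas the network's stacked weight matrices have no comparably direct
# reading, which is the trust gap this prediction is about.
```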

With AI becoming a normal part of everyday life, the idea that we would trust it with our lives but not our money becomes implausible. A number of academic studies have indicated a correlation between familiarity and trust. 4 As willingness to trust analysis because it works increases, even if the user is unable to understand exactly how or why it works, deep-learning techniques will become the new normal — supporting use cases that are unlikely to result in litigation. The threat of civil liability over difficult-to-explain algorithms will pose the most severe restriction on corporate adoption of deep-learning techniques, although restrictive government regulations similar to those around genetically modified organism (GMO) produce are always a possibility.

Recommendations

For data and analytics leaders:

  • Create a process for identifying valid analytic techniques for different types of business decisions by bringing together analysts, line-of-business decision makers and your legal department into an analytics governance committee.
  • Implement a model audit process by linking assessments of how well the model is performing “in the field,” as well as the business impact of the model, back to the model development process.
  • Change your product selection criteria to include assessments of how well the product explains how a model was created and used.

Related Research:

  • “Innovation Insight for Deep Learning”

Strategic Planning Assumption: By 2024, a scarcity of data scientists will no longer hinder the adoption of data science and machine learning in organizations.

Analysis by: Gareth Herschel, Rita L. Sallam, Svetlana Sicular, Jim Hare, Jorgen Heizenberg, Doug Laney

Key Findings:

  • The role of data scientist is one of the most in-demand jobs in the analytics space.
  • The need for data-science-generated insights will continue to grow within organizations and across multiple industries.
  • Most organizations are facing a scarcity of data scientists to deliver the business benefits they are expecting.
  • Solving the problem of the shortage of data scientists is a significant opportunity that multiple organizations are pursuing from different perspectives.

Market Implications:

Markets respond to the laws of supply and demand. Not always perfectly, and not always as initially anticipated, but eventually market forces tend to increase the supply of scarce resources or find a substitute. In the field of advanced analysis, traditional limitations such as the unavailability of data or the high cost of computing power have been reduced by the advent of big data and cloud computing. However, skilled data scientists remain scarce, making them among the most sought-after (and expensive) roles in a data and analytics organization.

Universities are increasing the number of data science programs they provide, to increase the number of data scientists they produce, but this is not the only way the shortage of data scientists can be solved. During the next few years, we will see an increasing number of approaches combine to reduce, and potentially eliminate, this scarcity. That is not to say that the role of a data scientist disappears, nor is it to suggest that the role and value of a skilled data scientist become less important. What it does mean is that it will become feasible for organizations to pursue data-science-driven initiatives without the cost and difficulty of hiring a team of data scientists. These alternatives to a data scientist will deliver 80% of the benefit at 20% of the cost and effort.

There are several new approaches to fulfilling the demand for data scientists.

  • Higher levels of automation. Initially, we see a continuation and extension of the current capability for models to be self-refining over an increasing number of use cases. Over time, models have the potential to be autogenerated based on business objectives and the ubiquitous availability of data (closer to the approach of providing new classes of analytic tool to enable a CDS); a minimal sketch of this kind of automated model selection follows this list. In the longer term, the idea of AI collaborating with a data scientist may dramatically improve the productivity of traditional data scientists as well.
  • Combine off-the-shelf models created by experts that are made available as building blocks for other applications. An example might be Siri or Alexa, where the virtual assistant has been developed by one organization but can be used as an interface for a variety of other applications. Applying this concept to a variety of other analytic techniques, such as automated image classification and obstacle avoidance, would allow the creation of voice-activated smart robots to collect medicines or packages. What would once have required a team of data scientists now requires only a team of engineers — still a highly skilled team, but one that is possible to assemble.
  • The sharing economy. With the diffusion of analytics skills, relevant experience becomes a more significant differentiator in the performance of a data scientist. Anyone can build a predictive model; the question is who can build the best predictive churn model for, say, a restaurant chain. In the sharing economy, it becomes easier to find people with the specific experience to match the problem — lowering one barrier to entry for organizations wanting to incorporate data science into their analytics portfolio.
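
The automation idea in the first bullet can be sketched in a few lines (a toy example using scikit-learn; the candidate list and the choice of ROC AUC as the objective are illustrative assumptions, and real augmented DSML tools go much further): given only data and an objective metric, candidate models are generated, scored and selected without a data scientist hand-tuning each one.

```python
# Illustrative "automated model selection" loop: score candidates, keep the best.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Score every candidate against the objective metric (here: ROC AUC) and keep the best.
scores = {name: cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} {score:.4f}")
print("auto-selected model:", best)
```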

In a world where data science still offers competitive advantage but “possession” of a data scientist does not, the source of competitive advantage shifts to the organization that can apply data science analysis to the most compelling use cases and ensure adoption of the resulting insights.

Recommendations

For data and analytics leaders:

  • Plan for a future of data science abundance by setting an enterprise strategy based on differentiation by the use of analysis, not differentiation by the possession of analysis.
  • Create a portfolio of data science resources by automating parts of the data science process and by engaging with both traditional data science service providers and crowdsourced approaches to data science.
  • Update the hiring and performance assessment criteria for data scientists by increasing the emphasis on business-relevant problem solving and communication skills, rather than just programming and statistical data science skills.

Related Research:

  • “Leading Upskilling Initiatives in Data Science and Machine Learning”
  • “Market Guide for Data Science and Machine Learning Service Providers”

A Look Back

In response to your requests, we are taking a look back at some key predictions from previous years. We have intentionally selected predictions from opposite ends of the scale — one where we were wholly or largely on target, as well as one we missed.

On Target: 2014 Prediction — By 2017, analytic applications offered by software vendors will be indistinguishable from analytic applications offered by service providers.

In order to address specific business processes or market verticals, analytics service providers are adding software to their services, while analytics software vendors are adding services to their products. The result is a converged software and service offering, or “servware,” often taking the form of an analytic platform or application. This convergence is disrupting the practices and behaviors of existing players. It is also creating new opportunities for analytics leaders to weave analytics into the operational fabric of the business. Servware will enable organizations to shift from being analytics consumers to analytics producers, and will create the opportunity to monetize their domain expertise and analytics insights (see “Take Advantage of the Disruptive Convergence of Analytics Services and Software”).

Missed: 2015 Prediction — Through 2017, the number of citizen data scientists will grow five times faster than the number of highly skilled data scientists.

Searching for “data scientist” on LinkedIn yields around 68,000 hits, while “citizen data scientist” yields fewer than 100 hits. Based on these results, it would seem that this prediction was a major miss; however, that would be to misunderstand the situation. Data scientist is inscribed on business cards and resumés, while CDS is a frame of mind/activity/skill set, rather than an official job title. Gartner defines a citizen data scientist as a “person who creates or generates models that use advanced diagnostic analytics or predictive and prescriptive capabilities, but whose primary job function is outside the field of statistics and analytics.” Anybody who uses the built-in k-means clustering function in a data visualization, who models and shares a data integration flow in a self-service data preparation tool, or who uses technology to autogenerate a predictive algorithm is exhibiting CDS behavior. Based on the growth in the use of these technology functions alone, since 2015, there are certainly many hundreds of thousands of people doing this worldwide. In fact, the prediction is a miss because it understated the growth in CDS activity — it may actually be 10 times the number of data scientists.
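
For readers unfamiliar with what that built-in k-means behavior amounts to, the following sketch shows roughly the equivalent operation written out by hand (the customer features and the choice of three segments are illustrative assumptions): a business user segments customers by spend and visit frequency with a standard clustering routine.

```python
# Illustrative CDS-style analysis: segment customers with off-the-shelf k-means.
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
import numpy as np

# Hypothetical customer features: [annual_spend, visits_per_month]
customers = np.array([
    [200, 1], [250, 2], [220, 1],      # low spend, infrequent
    [900, 6], [1100, 8], [950, 7],     # mid spend, regular
    [5000, 3], [4800, 2], [5200, 4],   # high spend, occasional
], dtype=float)

scaled = StandardScaler().fit_transform(customers)        # put features on a common scale
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

for row, segment in zip(customers, segments):
    print(f"spend={row[0]:>6.0f}  visits={row[1]:>2.0f}  segment={segment}")
```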

The implications of the growing number of CDSs for organizations are felt both internally and externally. Internally, CDS-minded staff will seek self-service diagnostic and predictive capabilities that need to be catered for within the analytics and BI technology portfolio. Externally, CDS-minded customers, suppliers and partners will increasingly want access to data services that allow them to go beyond simply monitoring their relationship with the organizations they interact with.

Finally, to misuse a cliché, a little knowledge can be a dangerous thing. To ensure that powerful statistical or analytic techniques are used appropriately by CDS staff, and to encourage broader growth of higher order analytics, organizations need to focus on their data literacy. They need to build programs to develop this competency, as part of their analytic community of excellence (ACE) team’s core tasks.

Related Research:

  • “Information as a Second Language: Enabling Data Literacy for Digital Society”
  • “Pursue Citizen Data Science to Expand Analytics Use Cases”

Acronym Key and Glossary Terms

AI: artificial intelligence

citizen data scientist (CDS): Gartner defines a “citizen data scientist” as a person who creates or generates models that use predictive or prescriptive analytics but whose primary job function is outside the field of statistics and analytics. The person is not typically a member of an analytics team (for example, an analytics center of excellence) and does not necessarily have a job description that lists analytics as his or her primary role. This person is typically in a line of business, outside IT and outside a BI team. However, an IT or BI professional may be a citizen data scientist if the professional’s work on analytics is secondary to his or her primary role. Citizen data scientists are “power users” who are able to use simple and moderately sophisticated analytic applications that would previously have required more expertise.

IoT: Internet of Things

Evidence

1 “The Coming Age of Calm Technology,” CalmTechnology.com.

2 “U.S. Smartphone Penetration Surpassed 80 Percent in 2016,” comScore.

3 “Smartphone User Penetration as Percentage of Total Global Population From 2014 to 2020,” Statista.

4 “Familiarity and Trust: Measuring Familiarity With a Website,” semanticscholar.org (PDF).