Mahi Sall, Advisor, Fintech-Bank Partnerships, Payments and Financial Inclusivity
January 25th, 2023
McKinsey & Company | By Chiara Brocchi, Davide Grande, Kayvaun Rowshankish, Tamim Saleh, and Allen Weinberg | Oct 2018
The CEOs of most financial institutions have had data on their agenda for at least a decade. However, the explosion in data availability over the past few years—coupled with the dramatic fall in storage and processing costs and an increasing regulatory focus on data quality, policy, governance, models, aggregation, metrics, reporting, and monitoring—has prompted a change in focus. Most financial institutions are now engaged in transformation programs designed to reshape their business models by harnessing the immense potential of data.
Leading financial institutions that once used descriptive analytics to inform decision-making are now embedding analytics in products, processes, services, and multiple front-line activities. And where they once built relational data warehouses to store structured data from specific sources, they are now operating data lakes with large-scale distributed file systems that capture, store, and instantly update structured and unstructured data from a vast range of sources to support faster and easier data access. At the same time, they are taking advantage of cloud technology to make their businesses more agile and innovative, and their operations leaner and more efficient. Many have set up a new unit under a chief data officer to run their data transformation and ensure disciplined data governance.
Successful data transformations can yield enormous benefits. One US bank expects to see more than $400 million in savings from rationalizing its IT data assets and $2 billion in gains from additional revenues, lower capital requirements, and operational efficiencies. Another institution expects to grow its bottom line by 25 percent in target segments and products thanks to data-driven business initiatives. Yet many other organizations are struggling to capture real value from their data programs, with some seeing scant returns from investments totaling hundreds of millions of dollars.
A 2016 global McKinsey survey found that a number of common obstacles are holding financial institutions back: a lack of front-office controls that leads to poor data input and limited validation; inefficient data architecture with multiple legacy IT systems; a lack of business support for the value of a data transformation; and a lack of attention at the executive level that prevents the organization from committing itself fully (Exhibit 1). To tackle these obstacles, smart institutions follow a systematic five-step process to data transformation.
The first step is to define a clear data strategy. Obvious though this may seem, only about 30 percent of the banks in our survey had a data strategy in place. Others had embarked on ambitious programs to develop a new enterprise data warehouse or data lake without an explicit data strategy, with predictably disappointing results. Any successful data transformation begins by setting a clear ambition for the value it expects to create.
In setting this ambition, institutions should take note of the scale of improvement other organizations have achieved. In our experience, most of the value of a data transformation flows from improved regulatory compliance, lower costs, and higher revenues. Reducing the time it takes to respond to data requests from the supervisor can generate cost savings in the order of 30 to 40 percent, for instance. Organizations that simplify their data architecture, minimize data fragmentation, and decommission redundant systems can reduce their IT costs and investments by 20 to 30 percent. Banks that have captured benefits across risk, costs, and revenues have been able to boost their bottom line by 15 to 20 percent. However, the greatest value is unlocked when a bank uses its data transformation to transform its entire business model and become a data-driven digital bank.
Actions: Define the guiding vision for your data transformation journey; design a strategy to transform the organization; establish clear and measurable milestones.
Identifying use cases that create value for the business is key to getting everyone in the organization aligned behind and committed to the transformation journey. This process typically comprises four steps.
In the first step, the institution breaks down its data strategy into the main goals it wants to achieve, both as a whole and within individual functions and businesses.
Next it draws up a shortlist of use cases with the greatest potential for impact, ensures they are aligned with broader corporate strategy, and assesses their feasibility from commercial, risk, operational-efficiency, and financial-control perspectives. These use cases can range from innovations such as new reporting services to more basic data opportunities, like the successful effort by one European bank to fix quality issues with pricing data for customer campaigns, which boosted revenues by 5 percent.
Third, the institution prioritizes the use cases, taking into account the scale of impact they could achieve, the maturity of any technical solutions they rely on, the availability of the data needed, and the organization’s capabilities. It then launches pilots of the top-priority use cases to generate quick wins, drive change, and provide input into the creation of a comprehensive business case to support the overall data transformation. This business case includes the investments that will be needed for data technologies, infrastructure, and governance.
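To make that prioritization concrete, the sketch below scores a handful of hypothetical use cases with a simple weighted average across the criteria named above. The weights, ratings, and use-case names are illustrative assumptions, not figures from the survey or a prescribed methodology.

```python
# Illustrative use-case prioritization via a weighted score (all values hypothetical).
CRITERIA_WEIGHTS = {"impact": 0.4, "solution_maturity": 0.2,
                    "data_availability": 0.2, "capabilities": 0.2}

use_cases = [
    {"name": "pricing data quality fix", "impact": 4, "solution_maturity": 5,
     "data_availability": 4, "capabilities": 4},
    {"name": "new reporting service",    "impact": 3, "solution_maturity": 2,
     "data_availability": 2, "capabilities": 3},
]


def priority_score(use_case: dict) -> float:
    """Weighted average of the 1-5 ratings given to each prioritization criterion."""
    return sum(CRITERIA_WEIGHTS[c] * use_case[c] for c in CRITERIA_WEIGHTS)


# The highest-scoring use cases become the pilots that generate quick wins.
for uc in sorted(use_cases, key=priority_score, reverse=True):
    print(f"{uc['name']}: {priority_score(uc):.1f}")
```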
The final step is to mobilize data capabilities and implement the operating model and data architecture to deploy the use cases through agile sprints, facilitate scaling up, and deliver tangible business value at each step (Exhibit 2). At one large European bank, this exercise identified almost $1 billion in expected bottom-line impact.
Actions: Select a range of use cases and prioritize them in line with your goals; use top-priority use cases to boost internal capabilities and start laying solid data foundations.
Leading organizations radically remodel their data architecture to meet the needs of different functions and users and enable the business to pursue data-monetization opportunities. Many institutions are creating data lakes: large, inexpensive repositories that keep data in its raw and granular state to enable fast and easy storage and access by multiple users, with no need for pre-processing or formatting. One bank with data fragmented across more than 600 IT systems managed to consolidate more than half of this data into a new data lake, capturing enormous gains in the speed and efficiency of data access and storage. Similarly, Goldman Sachs has reportedly consolidated 13 petabytes of data into a single data lake that will enable it to develop entirely new data-science capabilities.
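As a rough illustration of what "raw and granular, with no pre-processing" means in practice, the Python sketch below lands records in a lake-style folder structure partitioned by source system and ingestion date. The paths, source names, and sample records are assumptions made for the example; a real data lake would sit on a distributed file system or object store rather than a local disk.

```python
# Minimal sketch of landing raw data in a lake-style folder structure.
import json
from datetime import date
from pathlib import Path

LAKE_ROOT = Path("data-lake/raw")   # hypothetical landing zone


def land_raw(source_system: str, records: list[dict]) -> Path:
    """Store records exactly as received, partitioned by source system and ingestion date."""
    partition = LAKE_ROOT / source_system / f"ingest_date={date.today().isoformat()}"
    partition.mkdir(parents=True, exist_ok=True)
    out_file = partition / "part-0001.jsonl"
    with out_file.open("a", encoding="utf-8") as fh:
        for record in records:
            fh.write(json.dumps(record) + "\n")   # no cleaning or reformatting on the way in
    return out_file


# Example: raw payment events from one of many legacy systems land untouched.
land_raw("legacy-payments", [{"txn_id": "A1", "amount": 120.5, "ccy": "EUR"}])
```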
Choosing an appropriate approach to data ingestion is essential if institutions are to avoid creating a “data swamp”: dumping raw data into data lakes without appropriate ownership or a clear view of business needs, and then having to undertake costly data-cleaning processes. By contrast, successful banks build into their architecture a data-governance system with a data dictionary and a full list of metadata. They ingest into their lakes only the data needed for specific use cases, and clean it only if the business case proves positive, thereby ensuring that investments are always linked to value creation and deliver impact throughout the data transformation.
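A minimal sketch of this kind of governance gate is shown below: a dataset can be ingested only if it is registered in a data dictionary with an owner, field-level metadata, and a link to an approved use case. The class and field names (DataDictionary, DatasetEntry, and the example use cases) are illustrative assumptions, not a prescribed design.

```python
# Illustrative sketch of use-case-gated ingestion into a data lake (names are hypothetical).
from dataclasses import dataclass, field
from datetime import date


@dataclass
class DatasetEntry:
    """One row in the data dictionary: what the data is, who owns it, and why it is ingested."""
    name: str
    owner: str
    source_system: str
    use_case: str            # every ingested dataset must map to an approved use case
    fields: dict             # field name -> business definition (the metadata)
    ingested_on: date = field(default_factory=date.today)


class DataDictionary:
    """Minimal metadata catalog; ingestion is refused for data with no approved use case."""

    def __init__(self, approved_use_cases):
        self.approved_use_cases = set(approved_use_cases)
        self.entries = {}

    def register(self, entry: DatasetEntry) -> DatasetEntry:
        if entry.use_case not in self.approved_use_cases:
            raise ValueError(f"'{entry.name}' has no approved use case - refusing to ingest")
        self.entries[entry.name] = entry
        return entry


# Usage: only data needed for a prioritized use case makes it into the lake.
catalog = DataDictionary(approved_use_cases={"pricing-quality", "customer-360"})
catalog.register(DatasetEntry(
    name="retail_pricing_raw",
    owner="Retail Pricing Team",
    source_system="core-banking",
    use_case="pricing-quality",
    fields={"product_id": "Internal product identifier", "list_price": "Published price, EUR"},
))
```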
However, data lakes are not a replacement for traditional technologies such as data warehouses, which will still be required to support tasks such as financial and regulatory reporting. And data-visualization tools, data marts, and other analytic methods and techniques will also be needed to support the business in extracting actionable insights from data. Legacy and new technologies will coexist side by side serving different purposes.
The benefits of this new, use-based data architecture include a 360-degree view of consumers; faster and more efficient data access; synchronous data exchange via APIs with suppliers, retailers, and customers; and dramatic cost savings as the price per unit of storage (down from $10 per gigabyte in 2000 to just 3 cents by 2015) continues to fall.
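As a simple illustration of synchronous data exchange, the sketch below exposes a consolidated customer view through an HTTP endpoint, assuming the Flask framework is available. The route, the in-memory "360-degree view," and the customer identifiers are hypothetical stand-ins for whatever a bank would actually assemble from its lake and serve to partners.

```python
# Minimal sketch of synchronous data exchange over an API (assumes Flask; data is illustrative).
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Stand-in for a customer-360 view assembled from the data lake.
CUSTOMER_360 = {
    "C-1001": {"segment": "retail", "products": ["current account", "mortgage"],
               "last_interaction": "2018-09-14"},
}


@app.route("/api/v1/customers/<customer_id>", methods=["GET"])
def get_customer_view(customer_id):
    """Return the consolidated customer view to an authorized partner system."""
    view = CUSTOMER_360.get(customer_id)
    if view is None:
        abort(404)
    return jsonify(view)


if __name__ == "__main__":
    app.run(port=8080)   # e.g. GET /api/v1/customers/C-1001 returns the JSON view
```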
In addition, the vast range of services offered by the hundreds of cloud and specialist providers—including IaaS (infrastructure as a service), GPU (graphics-processing unit) services for heavy-duty computation, and the extension of PaaS (platform as a service) computing into data management and analytics—has inspired many organizations to delegate their infrastructure management to third parties and use the resulting savings to reinvest in higher-value initiatives.
Consider ANZ’s recently announced partnership with Data Republic to create secure data-sharing environments to accelerate innovation. The bank’s CDO, Emma Gray, noted that “Through the cloud-based platform we will now be able to access trusted experts and other partners to develop useful insights for our customers in hours rather than months.”
Actions: Define the technical support needed for your roadmap of use cases; design a modular, open data architecture that makes it easy to add new components later.