7 Big Data Analytics Use Cases for Financial Institutions

Big Data Analytics

Recently we have been hearing a lot about Big Data Analytics’ ability to deliver usable insight – but what exactly does this mean for the financial services industry?

While much of the Big Data activity in the market so far has consisted of experimentation with Big Data technologies and proof-of-concept projects, in this post I would like to show seven issues that banks and insurers can address with Big Data Analytics:

1. Dynamic 360º View of the Customer:
Extend your existing customer views by incorporating dynamic internal and external information sources. Gain a full understanding of customers – what makes them tick, why they buy, how they prefer to shop, why they switch, what they’ll buy next, and what factors lead them to recommend a company to others.

2. Enhanced Commercial Scorecard Design and Implementation:
Financial institutions use Big Data solutions to analyze commercial loan origination, develop scorecards and scoring models, and ultimately improve accuracy while optimizing pricing and risk management.
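To make the idea of a scorecard concrete, here is a minimal sketch of a points-based commercial scorecard. All attribute names, bands, and point values are invented for illustration; a real scorecard would be calibrated on historical loan performance data.

```python
# Illustrative points-based scorecard: each applicant attribute maps to
# points, and the total score drives the lending decision.
# Bands and point values below are made up for demonstration only.

def score_applicant(applicant):
    points = 0
    # Years in business: a longer track record earns more points.
    years = applicant["years_in_business"]
    points += 40 if years >= 10 else 25 if years >= 3 else 10
    # Debt service coverage ratio: cash flow relative to debt obligations.
    dscr = applicant["dscr"]
    points += 50 if dscr >= 1.5 else 30 if dscr >= 1.2 else 5
    # Prior delinquencies are heavily penalised.
    points -= 35 * applicant["delinquencies"]
    return points

applicant = {"years_in_business": 7, "dscr": 1.3, "delinquencies": 1}
print(score_applicant(applicant))  # 25 + 30 - 35 = 20
```

The appeal of this representation is transparency: each point contribution can be explained to a regulator or a loan officer, while the bands themselves can still be fitted with statistical models behind the scenes.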

3. Risk Concentration Identification and Management:
Identify risk concentration hotspots by decomposing risk into customized insights. Clearly see factor contribution to risks and gain allocation consensus through downside risk budgeting.
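As a small illustration of decomposing risk into per-position contributions, the sketch below computes risk contributions from a portfolio's covariance matrix. The weights and covariances are invented numbers; the point is that the contributions sum back to total portfolio volatility, which is what makes concentration hotspots visible.

```python
import math

# Hypothetical two-position example: decompose portfolio volatility
# into per-position risk contributions that sum to the total.
weights = [0.6, 0.4]
cov = [[0.04, 0.012],
       [0.012, 0.09]]

# (Sigma * w)_i for each position i
sigma_w = [sum(cov[i][j] * weights[j] for j in range(2)) for i in range(2)]
port_var = sum(weights[i] * sigma_w[i] for i in range(2))  # w' Sigma w
port_vol = math.sqrt(port_var)

# Risk contribution of position i: w_i * (Sigma * w)_i / sigma_p
contributions = [weights[i] * sigma_w[i] / port_vol for i in range(2)]
print(port_vol, contributions)  # contributions sum exactly to port_vol
```

A position whose contribution is out of proportion to its weight is a concentration hotspot; risk budgeting then means negotiating target contributions rather than target weights.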

4. Next Best Action Recommendations:
Make “next best action” an integral part of your marketing strategy and proactive customer care. With analytical insight from Big Data, you can answer such questions as: What approach will get the most out of the customer relationship? Is selling more important than retention?

5. Fraud Detection Optimization:
Preventing fraud is a major priority for all financial services organizations. But to deal with the escalating volumes of financial transaction data, statisticians need better ways to mine data for insight. Optimizing your current fraud detection techniques helps you leverage your existing fraud detection assets.
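One of the simplest building blocks of such techniques is a statistical outlier screen. The toy sketch below flags transactions that deviate strongly from a customer's history; the amounts and the 3-sigma threshold are invented, and production systems would combine many such signals.

```python
import statistics

# Toy statistical screen: flag a transaction whose amount deviates more
# than 3 standard deviations from the customer's past amounts.
# History and threshold are illustrative only.
history = [42.0, 55.5, 38.2, 61.0, 47.3, 52.8, 44.1, 58.6]
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_suspicious(amount, threshold=3.0):
    return abs(amount - mean) / stdev > threshold

print(is_suspicious(50.0))   # typical amount -> False
print(is_suspicious(900.0))  # far outside the history -> True
```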

6. Data and Insights Monetization:
Use your customer transaction data to improve the targeting of cross-sell offers. Partners are increasingly promoting merchant-based reward programs which leverage a bank’s or credit card issuer’s data and at the same time provide discounts to customers.

7. Regulatory and Data Retention Requirements:
Robust regulatory and data retention management is a legal requirement for financial services organizations across the globe, which must comply with a myriad of local, federal, and international regulations (such as Basel III) that mandate the retention of certain types of data.

What is the Best Onshore/Offshore Ratio for Consulting Firms?

Challenges and potential of the extended usage of offshore resources for consulting firms
Screenshot of Onshore/Offshore Ratio survey

At the start of a new engagement, managers take many activities into consideration: project planning, effort estimation, defining goals and metrics, cost, expected outcome, and so on. One factor that is crucial for any project to succeed is choosing the right onshore/offshore staffing ratio to execute the project. Yet this factor is often not given adequate importance in many recent delivery models. To meet project profit margins, managers try to limit spending on project resources and execution. With a limited resourcing budget, it is not feasible to have a default onshore/offshore ratio that fits all projects.

After gathering some experience working offshore (2007–2008 in Bangalore, India) and onshore (in Germany and Switzerland), I started to wonder whether there is an optimal onshore/offshore ratio. I quickly concluded that this question is not easy to answer. So I broke it down into specific aspects and, instead of answering them myself, set up a survey – and I hope to get your support!

Start the survey: http://bit.ly/offshoreratio
[Update 15 Nov 2014]: After collecting data over four weeks (18 Oct – 14 Nov), the survey is closed. Results will follow soon.

Basically, I’d like three groups to answer this survey:

  • Employees of traditional consulting firms
  • Employees of Indian pure players (such as Infosys, TCS, HCL, Wipro, etc.)
  • Employees of clients of consulting firms

Of course, I’m going to share the results after the evaluation. Thank you for participating and for sharing the link with your colleagues! Retweets are also highly appreciated…

India still the Top Destination for Outsourcing

SAP Labs India Pvt. Ltd. in Bangalore

Asian countries, especially those in South and Southeast Asia, continue to be favored picks among organizations interested in contracting out business processes offshore. India remains the top outsourcing destination, with its unrivaled advantages in scale and people skills, according to the 2014 Global Services Location Index (GSLI) released by A.T. Kearney. China and Malaysia rank second and third, respectively.

The GSLI, which tracks offshoring patterns to lower-cost developing countries and the rise of new locations, measures the underlying fundamentals of 51 nations across three general categories: financial attractiveness, people skills and availability, and business environment.

Published since 2004, the GSLI revealed that the leading IT-services companies in India, to which IT-related functions were outsourced, are extending their traditional offerings to include research and development, product development, and other niche services. The line between IT and business-process outsourcing there is blurring, as players offer bundled and specialized services to their customers and develop skills in niche domains.

Furthermore, the GSLI identified a trend of multinationals reassessing their outsourcing strategies: after having aggressively outsourced back-office operations in the mid-2000s, some companies are starting to reclaim some of these functions and undertake them in-house again.

Goodbye Academia, Hello Capgemini

CERN Main Auditorium

I have enjoyed research for the last four years. Yet I have decided to resign from my postgraduate position at CERN and to move to Capgemini. I will continue working in the areas I love: Data and Analytics!

Capgemini is one of the world’s largest consulting corporations. Like many other consulting companies, Capgemini does not yet have a dedicated team offering effective strategies and solutions based on Big Data, Analytics, and Machine Learning.

I love these technologies, and I am confident that I can develop a business development plan to drive growth through customer and market definition, including new services such as:

  • Data Science Strategy (enabling organizations to solve business problems with insights from analytics)
  • Consulting (answering questions using data)
  • Development (building custom data-related tools such as interactive dashboards, pipelines, customized Hadoop setups, data preparation scripts…)
  • Training (across a variety of skill levels, from basic dashboard design to deep dives into R, Python, and D3.js)

This plan is also accompanied by a go-to-market strategy, which I don’t want to unveil on my blog. Maybe in a retrospective in a few years – so stay tuned…

Data Science: Enabling Research at CERN with Big Data

Wow, time flies. One year has passed since I started working at CERN as a data scientist. CERN, surrounded by snow-capped mountains and Lake Geneva, is known for its particle accelerator, the Large Hadron Collider (LHC), and its quest for the Higgs boson. Underneath the research lies a tremendous amount of data that is analysed by data scientists.

Filters, known as High Level Triggers, reduce the flow of data from a petabyte (PB) per second to a gigabyte per second, which is then transferred from the detectors to the LHC Computing Grid. Once there, the data is stored on about 50 PB of tape storage and 20 PB of disk storage. The disks are managed as a cloud service (Hadoop), on which up to two million tasks are performed every day.
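A quick back-of-the-envelope calculation shows just how aggressive this filtering is: going from a petabyte per second down to a gigabyte per second means discarding all but roughly one part in a million of the raw detector output.

```python
# Back-of-the-envelope view of the trigger chain described above:
# ~1 PB/s off the detectors is filtered down to ~1 GB/s before storage.
PB = 1024 ** 5  # bytes
GB = 1024 ** 3  # bytes

detector_rate = 1 * PB  # bytes per second produced by the detectors
stored_rate = 1 * GB    # bytes per second kept after the High Level Trigger

reduction_factor = detector_rate // stored_rate
print(reduction_factor)  # 1048576, i.e. roughly one part in a million survives
```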

High Level Trigger data flow, as applied in the ALICE experiment

CERN relies on software engineers and data scientists to streamline the management and operation of its particle accelerator. Enabling real-time analysis is crucial for research, and data extractions need to remain scalable and predictive. Machine learning is applied to identify new correlations between variables (LHC data and external data) that were not previously connected.
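A basic ingredient of such correlation screening is simply measuring how strongly two monitored variables move together. The sketch below computes a Pearson correlation coefficient on two invented series; the variable names are hypothetical and stand in for any pair of LHC and external measurements.

```python
import math

# Sketch of a correlation screen: Pearson correlation between two
# monitored time series. Variable names and data are invented.
beam_intensity = [1.0, 1.2, 1.5, 1.7, 2.0, 2.1]
detector_temp = [20.1, 20.4, 20.9, 21.2, 21.8, 21.9]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(beam_intensity, detector_temp)
print(round(r, 3))  # close to 1.0: the two series move together
```

In practice this kind of pairwise screen is only a first pass; a strong correlation flags candidate variable pairs, which then need physical interpretation before anyone acts on them.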

So what is coming next? Scalability remains a very important area, as CERN’s data will continue to grow exponentially. However, the role of data scientists goes much further: we need to transfer knowledge throughout the organisation and enable a data-driven culture. In addition, we need to evaluate and adopt innovative new technologies for data analysis that suit our use cases.