How about some visual takeaways from the IMF’s World Economic Outlook? I recently prepared two nifty data visualizations with Tableau that I’d like to share with you.
These visualizations allow you to explore plenty of economic data, including IMF staff estimates up to 2020. Don’t forget to choose “Units” after switching “Subject” in the right-side bar. A detailed description of each subject is displayed below.
We are deep in the Big Data jungle. According to Gartner’s Hype Cycle for Emerging Technologies, Big Data has officially passed the “peak of inflated expectations” and is now on a one-way trip to the “trough of disillusionment”. Gartner says it has done so rather fast, because we already have consistency in the way we approach this technology, and because most new advances are additive rather than revolutionary.
Pig, Hive, Impala, Tez and Spark: which one suits which use case?
With so much hype and so many new advances, it’s easy to get lost. This little guide gives you an overview of the data processing technologies in the Big Data jungle and tries to identify the best use cases for each.
Pig: Pig is often useful for pulling apart unstructured and nested data such as text or JSON. Since Pig Latin is a procedural language, it is a very good choice for developing data pipelines on Hadoop. Pig runs on top of MapReduce and provides tools for loading, transforming, and storing data.
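To give you a feel for the procedural style, here is a minimal Pig Latin sketch of such a pipeline; the file, schema, and field names are made up for illustration:

    -- load tab-separated web logs (hypothetical file and schema)
    raw = LOAD 'web_logs.txt' USING PigStorage('\t')
          AS (user:chararray, url:chararray, ts:long);
    -- group the records by user and count the hits per user
    by_user = GROUP raw BY user;
    counts = FOREACH by_user GENERATE group AS user, COUNT(raw) AS hits;
    -- put the most active users on top and store the result
    sorted = ORDER counts BY hits DESC;
    STORE sorted INTO 'user_hits';

Each line defines one step of the pipeline, and Pig only launches the underlying MapReduce jobs once STORE (or DUMP) is reached.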
Hive: Hive was the original “relational on Hadoop” and is the first SQL (HiveQL, to be precise) query engine for Hadoop. Hive is still the most mature of the engines in this guide, as well as the slowest one. Hive is also based on MapReduce and is a very good choice for heavy ETL tasks where reliability is important, e.g. daily aggregation jobs.
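A typical job of this kind could look like the following HiveQL sketch; the tables, columns, and partition layout are hypothetical:

    -- hypothetical daily job: rebuild one day's aggregates from a partitioned source table
    INSERT OVERWRITE TABLE daily_sales PARTITION (ds = '2014-09-01')
    SELECT product_id,
           SUM(amount) AS total_amount,
           COUNT(*) AS num_orders
    FROM orders
    WHERE ds = '2014-09-01'
    GROUP BY product_id;

Because Hive compiles this to plain MapReduce, it is slow but robust: a failed task is simply retried, which is exactly what you want from a nightly batch job.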
Impala: Impala is the only native open-source SQL query engine in the Hadoop world. It skips MapReduce entirely and is best used for SQL queries over big volumes. Impala can deliver results interactively over large volumes, at much higher speed than other Hadoop query engines.
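Since Impala speaks SQL as well, an analyst could issue an ad-hoc query like the following from impala-shell and get the answer interactively; the schema is again made up:

    -- top 10 most active customers this year (hypothetical schema)
    SELECT customer_id,
           COUNT(*) AS txn_count,
           AVG(amount) AS avg_amount
    FROM transactions
    WHERE txn_date >= '2014-01-01'
    GROUP BY customer_id
    ORDER BY txn_count DESC
    LIMIT 10;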
Tez: Tez may be considered a better and faster base for query engines like Pig and Hive. Tez gets around the limitations imposed by MapReduce and enables use cases that demand near-real-time performance or Machine Learning, which do not fit well into the MapReduce paradigm.
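On distributions that ship Hive with Tez support, moving an existing Hive workload onto Tez can be as simple as one session setting; a sketch, reusing the hypothetical daily aggregation from above:

    -- switch the execution engine from classic MapReduce to Tez
    SET hive.execution.engine=tez;
    -- existing HiveQL runs unchanged, just faster
    SELECT product_id, SUM(amount) AS total_amount
    FROM orders
    WHERE ds = '2014-09-01'
    GROUP BY product_id;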
Spark: Spark is an in-memory query engine that also skips MapReduce. Perfect use cases for Spark are streaming, interactive data processing, and ad-hoc analysis of moderately sized data sets (up to the size of the cluster’s RAM). Spark’s ability to reuse data in memory is the real highlight for these use cases. Spark SQL offers relational connectivity.
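A quick way to see the in-memory reuse at work is the spark-sql shell: cache a table once, then run several queries against the cached copy. Table and column names below are hypothetical:

    -- pin the (hypothetical) clickstream table in cluster memory
    CACHE TABLE clickstream;
    -- after the first scan, both queries are served from memory instead of disk
    SELECT page, COUNT(*) AS views FROM clickstream GROUP BY page;
    SELECT user_id, COUNT(DISTINCT page) AS pages FROM clickstream GROUP BY user_id;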
Recently we have been hearing a lot about Big Data Analytics’ ability to deliver usable insight, but what exactly does this mean for the financial services industry?
While much of the Big Data activity in the market so far has revolved around experimenting with Big Data technologies and proof-of-concept projects, in this post I’d like to show seven issues that banks and insurers can address with Big Data Analytics:
1. Dynamic 360º View of the Customer:
Extend your existing customer views by incorporating dynamic internal and external information sources. Gain a full understanding of customers: what makes them tick, why they buy, how they prefer to shop, why they switch, what they’ll buy next, and what factors lead them to recommend a company to others.
2. Enhanced Commercial Scorecard Design and Implementation:
Financial institutions use Big Data solutions to analyze commercial loan origination, develop scorecards and scoring, and ultimately improve accuracy while optimizing pricing and risk management.
3. Risk Concentration Identification and Management:
Identify risk concentration hotspots by decomposing risk into customized insights. Clearly see factor contribution to risks and gain allocation consensus through downside risk budgeting.
4. Next Best Action Recommendations:
Make “next best action” an integral part of your marketing strategy and proactive customer care. With analytical insight from Big Data, you can answer such questions as: What approach will get the most out of the customer relationship? Is selling more important than retention?
5. Fraud Detection Optimization:
Preventing fraud is a major priority for all financial services organizations. But to deal with the escalating volumes of financial transaction data, statisticians need better ways to mine data for insight. Optimizing your current fraud detection techniques helps you get more out of your existing fraud detection assets.
6. Data and Insights Monetization:
Use your customer transaction data to improve the targeting of cross-sell offers. Partners are increasingly promoting merchant-based reward programs that leverage a bank’s or credit card issuer’s data and at the same time provide discounts to customers.
7. Regulatory and Data Retention Requirements:
More robust regulatory and data retention management is a legal necessity: financial services organizations across the globe have to comply with a myriad of local, federal, and international regulations (such as Basel III) that mandate the retention of certain types of data.
Recently Tableau released an exciting new feature: R integration via RServe. Tableau with R seems to bring my data science toolbox to the next level! In this tutorial I’m going to walk you through the installation and through connecting Tableau with RServe. I will also give you an example of calling an R function with a parameter from Tableau and visualizing the results in Tableau.
1. Install and start R and RServe
You can download base R from r-project.org. Next, invoke R from the terminal to install and run the RServe package:
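    # inside the R console: install Rserve from CRAN (one-time step)
    install.packages("Rserve")
    # load the package and start the Rserve daemon
    library(Rserve)
    Rserve()

By default Rserve listens on port 6311, which is also the default port Tableau offers when you configure the external service connection.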
[Update 26 Jun 2016]: The Tableau 8.1 screenshots were replaced with Tableau 10.0 (Beta) screenshots due to my upcoming Advanced Analytics session at TC16, which is going to reference back to this blog post.
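To give you a taste of where this is heading: once Tableau is connected to RServe, a calculated field can ship data and a parameter value to R through Tableau’s SCRIPT_ functions. Here is a minimal sketch that asks R’s kmeans for cluster assignments; the [Sales] and [Profit] measures and the [Number of Clusters] parameter are placeholders for your own fields:

    // [Sales], [Profit] and [Number of Clusters] are placeholder fields
    SCRIPT_INT(
        "kmeans(data.frame(.arg1, .arg2), .arg3[1])$cluster",
        SUM([Sales]), SUM([Profit]), [Number of Clusters]
    )

Tableau evaluates SCRIPT_ functions as table calculations, so the partitioning (“compute using”) determines which marks are sent to RServe in one batch.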
There are four key issues to overcome if you want to tame Big Data: volume (the quantity of data), variety (the different forms of data), velocity (how fast the data is generated and processed) and veracity (the varying quality of the data). You have to be able to deal with lots and lots of data, of all kinds, moving really quickly.
That is why Big Data Analytics has a huge impact on how we plan CERN’s overall technology strategy, as well as specific strategies for High-Energy Physics analysis. We want to profit from our data investment and extract knowledge from it, and this has to be done in a proactive, predictive and intelligent way.
The following presentation shows how we use Big Data Analytics to improve the operation of the Large Hadron Collider.