Do you want to do some quick, in-depth technical analysis of stock prices?
After I left CERN to work as a consultant and to earn an MBA, I was engaged in many exciting projects in the finance sector, analyzing financial data such as stock prices and exchange rates. There are of course plenty of established models for fitting, analyzing and predicting this kind of data: basic time series models such as ARIMA(p,d,q), GARCH models for volatility, and multivariate approaches such as VARX and state space models.
Although it is hard to propose a new and effective model in a short time, I believe it is still worthwhile to apply existing models and methods to financial data; valuable conclusions may well come out of it. For those of you who would like data to experiment with financial models, I put together a web application written in R.
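If you want to experiment right away, here is a minimal sketch in R that downloads a stock price series and fits the ARIMA and GARCH models mentioned above. The ticker, data source and model orders are illustrative assumptions, not taken from my application:

```r
# Minimal sketch: fit ARIMA and GARCH models to daily stock returns.
# Assumes the quantmod and rugarch packages are installed; ticker and orders are illustrative.
library(quantmod)
library(rugarch)

# Download prices from Yahoo Finance and compute daily log returns on the adjusted close.
getSymbols("AAPL", src = "yahoo")
returns <- na.omit(dailyReturn(Ad(AAPL), type = "log"))

# Basic ARIMA(p,d,q) model on the return series.
arima_fit <- arima(as.numeric(returns), order = c(1, 0, 1))
print(arima_fit)

# GARCH(1,1) model with an ARMA(1,1) mean equation for the volatility of the returns.
garch_spec <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                         mean.model = list(armaOrder = c(1, 1)))
garch_fit <- ugarchfit(spec = garch_spec, data = returns)
show(garch_fit)
```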
My Data Science journey started at CERN, where I finished my master's thesis in 2009. CERN, the European Organization for Nuclear Research, is home to the Large Hadron Collider (LHC) and has some big questions to answer: how does the universe work, and what is it made of? CERN collects nearly unbelievable amounts of data – 35 petabytes per year that need analysis. After submitting my thesis, I continued my Data Science research at CERN.
Today companies have realized that Business Analytics needs to be an essential part of their competitive strategy, and the demand for Data Scientists is growing rapidly. To me, Data Science is more about asking the right questions than about the data itself. The MBA taught me that data does not provide insights unless it is appropriately questioned. Delivering excellent Big Data projects requires a full understanding of the business, developing the questions, distilling the adequate amount of data to answer those questions, and communicating the proposed solution to the target audience.
“The task of leaders is to simplify. You should be able to explain where you have to go in two minutes.” – Jeroen van der Veer, former CEO of Royal Dutch Shell
We are deep in the Big Data jungle. According to Gartner’s Hype Cycle for Emerging Technologies, Big Data has now officially passed the “peak of inflated expectations” and is on a one-way trip to the “trough of disillusionment”. Gartner says it has done so rather quickly, because we already have consistency in the way we approach this technology, and because most new advances are additive rather than revolutionary.
Pig, Hive, Impala, Tez and Spark: which one suits which use case?
With so much hype and so many new advances, it’s easy to get lost. This little guide gives you an overview of the data processing technologies in the Big Data jungle and tries to identify the best use cases for each.
Pig: Pig is often useful for pulling apart unstructured and nested data such as text or JSON. Since Pig Latin is a procedural language, it is a very good choice for developing data pipelines on Hadoop. Pig runs on top of MapReduce and provides operators for loading, transforming and storing data.
Hive: Hive brought the first “relational” layer to Hadoop and was the first SQL (HiveQL, to be precise) query engine for it. Hive is still the most mature engine of all in this guide, as well as the slowest one. Hive is also based on MapReduce and is a very good choice for heavy ETL tasks where reliability is important, e.g. daily aggregation jobs.
Impala: Impala is the only native open-source SQL query engine in the Hadoop world. It skips MapReduce entirely and is best used for SQL queries over big data volumes, delivering interactive results considerably faster than the MapReduce-based Hadoop query engines.
Tez: Tez can be considered a better and faster execution layer underneath query engines such as Pig and Hive. It gets around limitations imposed by MapReduce and enables use cases with near-real-time performance as well as Machine Learning workloads, which do not fit well into the MapReduce paradigm.
Spark: Spark is an in-memory processing engine that also skips MapReduce. Perfect use cases for Spark are streaming, interactive data processing and ad-hoc analysis of moderately sized data sets (up to roughly the cluster’s RAM). Spark’s ability to keep and reuse data in memory is the real highlight for these use cases. Spark SQL adds relational connectivity, as sketched below.
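To give a concrete feel for that relational connectivity, here is a minimal sketch that queries Spark SQL from R via the sparklyr package against a local Spark instance; the package choice and the demo data are my own assumptions for illustration:

```r
# Minimal sketch: run a Spark SQL query from R via sparklyr (local Spark instance assumed).
library(sparklyr)
library(DBI)

sc <- spark_connect(master = "local")   # spark_install() can set up a local Spark beforehand

# Copy a small demo data set into Spark and register it as a temporary table.
iris_tbl <- copy_to(sc, iris, name = "iris", overwrite = TRUE)

# Ad-hoc SQL through Spark SQL.
dbGetQuery(sc, "SELECT Species, COUNT(*) AS n FROM iris GROUP BY Species")

spark_disconnect(sc)
```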
Recently we have been hearing a lot about Big Data Analytics’ ability to deliver usable insight – but what exactly does this mean for the financial services industry?
While much of the Big Data activity in the market up to now has consisted of experimenting with Big Data technologies and proof-of-concept projects, in this post I want to show seven issues banks and insurers can address with Big Data Analytics:
1. Dynamic 360º View of the Customer:
Extend your existing customer views by incorporating dynamic internal and external information sources. Gain a full understanding of customers – what makes them tick, why they buy, how they prefer to shop, why they switch, what they’ll buy next, and what factors lead them to recommend a company to others.
2. Enhanced Commercial Scorecard Design and Implementation:
Financial institutions use Big Data solutions to analyze commercial loan origination, develop scorecards and scoring models, and ultimately improve accuracy while optimizing pricing and risk management.
3. Risk Concentration Identification and Management:
Identify risk concentration hotspots by decomposing risk into customized insights. Clearly see how individual factors contribute to risk and gain allocation consensus through downside risk budgeting.
4. Next Best Action Recommendations:
Make “next best action” an integral part of your marketing strategy and proactive customer care. With analytical insight from Big Data, you can answer such questions as: What approach will get the most out of the customer relationship? Is selling more important than retention?
5. Fraud Detection Optimization:
Preventing fraud is a major priority for all financial services organizations. But to deal with the escalating volumes of financial transaction data, statisticians need better ways to mine data for insight. Optimizing your current fraud detection techniques helps you leverage your existing fraud detection assets.
6. Data and Insights Monetization:
Use your customer transaction data to improve the targeting of cross-sell offers. Partners are increasingly promoting merchant-based reward programs that leverage a bank’s or credit card issuer’s data and provide discounts to customers at the same time.
7. Regulatory and Data Retention Requirements:
Robust regulatory and data retention management is a legal requirement for financial services organizations across the globe, which must comply with the myriad of local, federal, and international laws (such as Basel III) that mandate the retention of certain types of data.
Recently, Tableau released an exciting feature that enhances the capabilities of data analytics: R integration via RServe. By bringing together Tableau and R, data scientists and analysts can now enjoy a more comprehensive and powerful data science toolbox. Whether you’re an experienced data scientist or just starting your journey in data analytics, this tutorial will guide you through the process of integrating R with Tableau.
Step by Step: Integrating R in Tableau
1. Install and start R and RServe
You can download base R from r-project.org. Next, invoke R from the terminal to install and run the RServe package:
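For example, a minimal sketch using the standard Rserve calls:

```r
# Install the Rserve package from CRAN, load it and start the server (default port 6311).
install.packages("Rserve")
library(Rserve)
Rserve()   # on some Linux systems you may need Rserve(args = "--no-save")
```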
To ensure RServe is running, you can try Telnet to connect to it:
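For example, assuming RServe is listening on its default port 6311:

```
telnet localhost 6311
```

If the server is up, the connection opens and RServe announces itself with an ID string beginning with “Rsrv”.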
Protip: If you prefer an IDE for R, I can highly recommend installing RStudio.
2. Connecting Tableau to RServe
Now let’s open Tableau and set up the connection:
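In Tableau 10 the dialog should be under Help > Settings and Performance > Manage External Service Connection: enter localhost as the server and 6311 as the port (RServe’s default), then test the connection.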
3. Adding R code to a Calculated Field
You can invoke R scripts in Tableau’s Calculated Fields, such as k-means clustering controlled by an interactive parameter slider:
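A calculated field along the following lines does the trick; this is a sketch, and the measures [Sales] and [Profit] as well as the parameter [Number of Clusters] are illustrative placeholders rather than the fields from the original workbook:

```
SCRIPT_INT('
    result <- kmeans(data.frame(.arg1, .arg2), .arg3)
    result$cluster
',
SUM([Sales]), SUM([Profit]), [Number of Clusters])
```

SCRIPT_INT hands the aggregated measures to RServe as the vectors .arg1 and .arg2, evaluates the R expression, and expects an integer vector of cluster assignments back. Because it is evaluated as a table calculation, you may need to adjust “Compute Using” so that all marks are sent to R in a single partition.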
You can now use your R calculation like any other Calculated Field in your Tableau worksheet.
Feel free to download the Tableau Packaged Workbook (twbx) here.
Connect and Stay Updated
Stay on top of the latest in data science and analytics by following me on Twitter and LinkedIn. I frequently share tips, tricks, and insights into the world of data analytics, machine learning, and beyond. Join the conversation, and let’s explore the possibilities together!
Blog post updates:
The Tableau 8.1 screenshots were updated to Tableau 10.0 (preview) screenshots for my upcoming Advanced Analytics session at TC16, which will reference this blog post.