How to perform Text Mining at the Speed of Thought directly in Tableau

Interactive real-time text mining with Tableau Desktop 9.2

Tableau is an incredibly versatile tool, commonly known for its ability to create stunning visualizations. But did you know that with Tableau you can also perform real-time, interactive text mining? Let’s delve into how we can harness this capability to gain rapid insights from our textual data.

Previously, during text mining tasks, you might have found yourself reaching for a scripting language like R, Python, or Ruby, only to feed the results back into Tableau for visualization. In that approach, Tableau serves merely as a presentation layer for insights produced elsewhere.

However, wouldn’t it be more convenient and efficient to perform text mining and further analysis directly in Tableau?

While Tableau has some relatively basic text processing functions that can be used for calculated fields, these often fall short when it comes to performing tasks like sentiment analysis, where text needs to be split into tokens. Even Tableau’s beloved R integration does not lend a hand in these scenarios.

The Power of Postgres for Text Mining in Tableau

Faced with these challenges, I decided to harness the power of Postgres’s built-in string functions for text mining tasks. These functions perform much faster than most scripting languages. For example, I used the function regexp_split_to_table for a word count; it takes a piece of text (like a blog post), splits it by a pattern, and returns the tokens as rows:

-- split each post into lowercase tokens, then count how often each word occurs per post
select
  guid
, word
, count(*) as word_count
from (
  select
    guid
  , regexp_split_to_table(lower(post_content), '\s+') as word
  from
    alexblog_posts
) tokens
group by
  guid, word

Incorporating Custom SQL into Tableau Visualization

I added this snippet as a Custom SQL Query joined into my Tableau data source, which is connected to the database that is powering my blog:

Join with Custom SQL Query in Tableau applying the Postgres function regexp_split_to_table

And here we go, I was able to create an interactive word count visualization right in Tableau:

This example can be easily enhanced with data from Google Analytics, or adapted to analyze user comments, survey results, or social media feeds. The possibilities for Custom SQL in Tableau are vast and versatile. Do you have some more fancy ideas for real-time text mining with Tableau? Leave me a comment!
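As one possible adaptation, the same tokenize-then-count pattern could be pointed at blog comments instead of posts. The following is only a sketch: the table alexblog_comments and its columns are assumptions modeled on a typical WordPress schema and may need to be adjusted to your installation.

-- hypothetical adaptation: word counts per comment instead of per post
select
  comment_id
, word
, count(*) as word_count
from (
  select
    comment_id
  , regexp_split_to_table(lower(comment_content), '\s+') as word
  from
    alexblog_comments  -- assumed comments table, adjust to your schema
) comment_tokens
group by
  comment_id, word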

Update (TC Pro Tip): Identifying Twitter Hashtags in Tableau

A simple calculated field in Tableau can help identify words within tweets as hashtags or user references, eliminating the need for another regular expression via a Custom SQL Query:

CASE LEFT([Word], 1)
    WHEN "#" THEN "Hash Tag"
    WHEN "@" THEN "User Reference"
    ELSE "Regular Content"
END
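For comparison, here is a rough sketch of what the regular-expression route through Custom SQL could look like in Postgres, i.e. the extra step that the calculated field above saves you. The tweet_words source below is hypothetical; in practice it would be whatever tokenized result your Custom SQL Query produces.

-- sketch: classifying tokens in Postgres instead of in a Tableau calculated field
select
  word
, case
    when word ~ '^#' then 'Hash Tag'        -- token starts with #
    when word ~ '^@' then 'User Reference'  -- token starts with @
    else 'Regular Content'
  end as word_type
from
  tweet_words  -- hypothetical, pre-tokenized tweet words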

Looking for an example? Feel free to check out the Tweets featuring #tableau Dashboard on Tableau Public and download the Packaged Workbook (twbx):

Tableau dashboard that shows tweets featuring the hashtag #tableau (presented at Tableau Conference)

Any more feedback, ideas, or questions? I hope this post provides you with valuable insights into how to master text mining in Tableau, and I look forward to hearing about your experiences and creative applications. You can find more tutorials like this in my new book Visual Analytics with Tableau (Amazon).

Transparency: This blog contains affiliate links. If you click on them, you will be redirected to the merchant. If you decide to make a purchase, we will receive a small commission. The price does not change for you. Affiliate links have no influence on our writing.

How to load data to Hadoop with Alteryx and visualize it with Tableau via Impala?

This YouTube tutorial shows you a handy way to load your Excel data to Cloudera Hadoop with Alteryx, and how to see and understand your data even faster with Tableau connected to Impala.

The same tool chain for loading and accessing data can be used with Hive (e.g. on Hortonworks) or Spark SQL (e.g. on MapR). An overview of common data processing technologies can be found in the Big Data jungle guide.

How to unleash Data Science with an MBA?

Servers record a copy of LHC data and distribute it around the world for Analytics

My Data Science journey started at CERN, where I finished my master’s thesis in 2009. CERN, the European Organization for Nuclear Research, is the home of the Large Hadron Collider (LHC) and has some big questions to answer: how does the universe work, and what is it made of? CERN collects nearly unbelievable amounts of data – 35 petabytes per year that need to be analyzed. After submitting my thesis, I continued my Data Science research at CERN.

I began to wonder: which insights are waiting to be discovered beyond Particle Physics? How can traditional companies benefit from Data Science? After almost four exciting years at CERN with plenty of Hadoop and MapReduce, I decided to join Capgemini to develop business in Big Data Analytics and to boost their engagements in Business Intelligence. In order to leverage my data-driven background, I enrolled in the Executive MBA program at Frankfurt School of Finance & Management, including an Emerging Markets module at CEIBS in Shanghai.

Today, companies have realized that Business Analytics needs to be an essential part of their competitive strategy. The demand for Data Scientists grows exponentially. To me, Data Science is more about asking the right questions than about the actual data. The MBA enabled me to understand that data does not provide insights unless appropriately questioned. Delivering excellent Big Data projects requires a full understanding of the business, developing the questions, distilling the adequate amount of data to answer those questions, and communicating the proposed solution to the target audience.

"The task of leaders is to simplify. You should be able to explain where you have to go in two minutes." – Jeroen van der Veer, former CEO of Royal Dutch Shell

A Data Processing Guide in the Big Data Jungle

Too many choices? Don’t get lost!

We are deep in the Big Data jungle. According to Gartner’s Hype Cycle for Emerging Technologies, Big Data has now officially passed the “peak of inflated expectations”, and is now on a one-way trip to the “trough of disillusionment”. Gartner says it’s done so rather fast, because we already have consistency in the way we approach this technology, and because most new advances are additive rather than revolutionary.

Pig, Hive, Impala, Tez and Spark: which one suits which use case?

With so much hype and so many new advances, it’s easy to get lost. This little guide gives you an overview on data processing technologies in the Big Data jungle and tries to identify the best use cases for each.

  • Pig: Pig is often useful for pulling apart unstructured and nested data like text or JSON. Since Pig Latin is a procedural language, it is a very good choice for developing data pipelines on Hadoop. Pig is based on MapReduce and has tools for data storage, data execution and data manipulation.
  • Hive: Hive was the original "relational on Hadoop" layer and the first SQL query engine for Hadoop (HiveQL, to be precise). Hive is still the most mature engine of all in this guide, as well as the slowest one. Hive is also based on MapReduce and is a very good choice for heavy ETL tasks where reliability is important, e.g. daily aggregation jobs (see the sketch after this list).
  • Impala: Impala is the only native open-source SQL query engine in the Hadoop world. It skips MapReduce entirely and is best used for SQL queries over big volumes. Impala is also capable of delivering results interactively over bigger volumes and with a much faster speed than other Hadoop query engines.
  • Tez: Tez may be considered as a better and faster base for query engines like Pig and Hive. Tez gets around limitations imposed by MapReduce and enables use cases with near-real-time performance and Machine Learning, which do not fit well into the MapReduce paradigm.
  • Spark: Spark is an in-memory query engine that also skips MapReduce. Perfect use cases for Spark are streaming, interactive data processing and ad-hoc analysis of moderate-sized data sets (as big as the cluster’s RAM). The ability of Spark to reuse data in-memory is the real highlight for these use cases. Spark SQL offers relational connectivity.
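To make the Hive use case a bit more concrete, here is a minimal HiveQL sketch of the word count from the Tableau post above, written as a batch aggregation job. The table blog_posts and its columns are assumptions:

-- minimal HiveQL sketch, assuming a table blog_posts(guid STRING, post_content STRING)
select
  guid
, word
, count(*) as word_count
from blog_posts
lateral view explode(split(lower(post_content), '\\s+')) tokens as word
group by
  guid, word;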

7 Big Data Analytics Use Cases for Financial Institutions

Big Data Analytics

Recently we have been hearing a lot about Big Data Analytics’ ability to deliver usable insight – but what exactly does this mean for the financial services industry?

While much of the Big Data activity in the market up to now has been experimentation with Big Data technologies and proof-of-concept projects, I would like to show in this post seven issues that banks and insurers can address with Big Data Analytics:

1. Dynamic 360º View of the Customer:
Extend your existing customer views by incorporating dynamic internal and external information sources. Gain a full understanding of customers – what makes them tick, why they buy, how they prefer to shop, why they switch, what they’ll buy next, and what factors lead them to recommend a company to others.

2. Enhanced Commercial Scorecard Design and Implementation:
Financial institutions use Big Data solutions to analyze commercial loan origination, developing scorecards and scoring, and ultimately improving accuracy as well as optimizing price and risk management.

3. Risk Concentration Identification and Management:
Identify risk concentration hotspots by decomposing risk into customized insights. Clearly see factor contribution to risks and gain allocation consensus through downside risk budgeting.

4. Next Best Action Recommendations:
Make "next best action" an integral part of your marketing strategy and proactive customer care. With analytical insight from Big Data, you can answer such questions as: What approach will get the most out of the customer relationship? Is selling more important than retention?

5. Fraud Detection Optimization:
Preventing fraud is a major priority for all financial services organizations. But to deal with the escalating volumes of financial transaction data, statisticians need better ways to mine data for insight. Optimizing your current fraud detection techniques helps you leverage your existing fraud detection assets.

6. Data and Insights Monetization:
Use your customer transaction data to improve targeting of cross-sell offers. Partners are increasingly promoting merchant-based reward programs which leverage a bank’s or credit card issuer’s data and provide discounts to customers at the same time.

7. Regulatory and Data Retention Requirements:
Financial services organizations across the globe face a legal requirement for more robust regulatory and data retention management in order to comply with the myriad of local, federal, and international laws (such as Basel III) that mandate the retention of certain types of data.