Recently Tableau released an exciting new feature: R integration via Rserve. Tableau with R takes my data science toolbox to the next level! In this tutorial I’m going to walk you through installing Rserve and connecting it to Tableau. I will also show you an example of calling an R function with a parameter from Tableau and visualizing the results in Tableau.
1. Install and start R and Rserve
You can download base R from r-project.org. Next, invoke R from the terminal to install and start the Rserve package:
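The original snippet is not shown here; a minimal sketch of this step, assuming a default CRAN mirror and Rserve’s default port (6311, the one Tableau expects), would look like this in an R session:

```r
# Install the Rserve package from CRAN (one-time setup)
install.packages("Rserve")

# Load the package and start the Rserve daemon,
# which listens on TCP port 6311 by default
library(Rserve)
Rserve()
```

Rserve keeps listening in the background, so leave it running while Tableau is connected.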
[Update 26 Jun 2016]: The Tableau 8.1 screenshots were replaced with Tableau 10.0 (Beta) screenshots ahead of my Advanced Analytics session at TC16, which will reference back to this blog post.
I have enjoyed research for the last four years. Yet I have decided to resign from my postgraduate position at CERN and move to Capgemini. I will continue working in the areas I love: Data and Analytics!
Capgemini is one of the world’s largest consulting corporations. Like many other consulting companies, Capgemini does not yet have a dedicated team offering effective strategies and solutions built on Big Data, Analytics and Machine Learning.
I love these technologies, and I am confident that I can elaborate a business development plan to drive business growth through customer and market definition, including new services such as:
Data Science Strategy (enabling organizations to solve more and more business problems with insights from analytics)
Consulting (answering questions using data)
Development (building custom data-related tools such as interactive dashboards, pipelines, customized Hadoop setups, data prep scripts…)
Training (across a variety of skill levels, from basic dashboard design to deep dives into R, Python and D3.js)
This plan is also accompanied by a go-to-market strategy, which I don’t want to unveil on my blog. Maybe I’ll share a retrospective in a few years, so stay tuned…
Physics projects don’t get any bigger than this. The European Organization for Nuclear Research, aka CERN, was founded in 1954, is headquartered in Geneva, Switzerland, and employs thousands of world-class scientists at the forefront of breakthrough research. Its claim to fame is unmatched: it is the birthplace of the World Wide Web and the creator of the 17-mile-long underground particle accelerator called the Large Hadron Collider. Here, see photos of the many aspects of an international institution that may discover a way to move faster than the speed of light and how our universe was pieced together.
There are four key issues to overcome if you want to tame Big Data: volume (the quantity of data), variety (the different forms of data), velocity (how fast the data is generated and processed) and veracity (the varying quality of the data). You have to be able to deal with lots and lots of data, of all kinds, moving really quickly.
That is why Big Data Analytics has a huge impact on how we plan CERN’s overall technology strategy, as well as specific strategies for High-Energy Physics analysis. We want to profit from our data investment and extract knowledge from it, in a proactive, predictive and intelligent way.
The following presentation shows how we use Big Data Analytics to improve the operation of the Large Hadron Collider.
I have been a Python geek and gnuplot maniac since I joined CERN around three years ago. I have to admit, however, that I really enjoy the flexibility of D3.js and its ability to render histograms directly in the web browser.
The following example loads a CSV file containing 10,000 dimuon events (i.e. events with two muons) from the CMS detector and displays the distribution of the invariant mass M (in GeV) in bins of 0.1 GeV:
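The D3.js source itself isn’t reproduced here, but the heart of the example is the binning step. A minimal sketch of that logic in plain JavaScript (the 0.1 GeV bin width comes from the text; the function name and example masses are illustrative, and the real page would feed the resulting counts into a D3.js bar chart, e.g. after loading the file with d3.csv):

```javascript
// Bin invariant masses M (in GeV) into fixed-width bins of 0.1 GeV,
// as in the 10,000-event dimuon histogram described above.
function binMasses(masses, binWidth = 0.1) {
  const counts = new Map(); // bin index -> number of events
  for (const m of masses) {
    const i = Math.floor(m / binWidth); // e.g. M = 3.09 GeV -> bin 30
    counts.set(i, (counts.get(i) || 0) + 1);
  }
  return counts;
}

// Illustrative masses near the J/psi resonance (~3.1 GeV):
// two fall into the [3.0, 3.1) bin, one into [3.1, 3.2).
const counts = binMasses([3.05, 3.09, 3.14]);
// The left edge of bin i is i * binWidth, which D3 would use
// to position each bar along the x axis.
```

Working with integer bin indices (rather than floating-point bin edges) avoids rounding surprises when masses sit close to a bin boundary.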
Feel free to download the sample CSV dataset here.