Join us for the global launch of Tableau’s super fast data engine, Hyper! Hyper brings faster data refreshes and query performance to Tableau extracts, plus increased scalability in a platform-wide update.
This is your opportunity to get to know the Hyper dev team, hear from Tableau beta customers about their hands-on Hyper experience, and participate in live Q&A. Best of all, learn more about Hyper’s patent-pending technology as well as some of the other features headed your way in 10.5. (Viz in Tooltip, anyone?)
Tableau is hosting the Hyperfest meetup: come and celebrate the upcoming release of Hyper with the community and the world. In addition to the Hyper presentation, we will also have food, drinks, and Tableau swag, so don't miss it!
1. What is Hyper?
Hyper is a hybrid transactional/analytical processing (HTAP) database system and replaces Tableau Data Extracts (TDE). The change will be mostly transparent for end users, other than everything being faster: Hyper significantly improves extract refresh times, query times, and overall performance.
2. What is Hybrid transactional/analytical processing?
Hybrid transaction/analytical processing (HTAP) is an emerging application architecture that “breaks the wall” between transaction processing and analytics. It enables more informed and “in business real time” decision making.
The two areas of online transaction processing (OLTP) and online analytical processing (OLAP) present different challenges for database architectures. Currently, customers with high rates of mission-critical transactions split their data into two separate systems: one database for OLTP and a so-called data warehouse for OLAP. While allowing for decent transaction rates, this separation has many disadvantages, including data freshness issues caused by the delay of only periodically running the Extract, Transform, Load (ETL) data staging process, and excessive resource consumption due to maintaining two separate information systems.
3. Does Hyper satisfy the ACID properties?
Hyper, initially developed at the Technical University of Munich and acquired by Tableau in 2016, can handle both OLTP and OLAP simultaneously. Hyper possesses the rare quality of being able to handle data updates and insertions at the same time as queries by using hardware-assisted replication mechanisms to maintain consistent snapshots of the transactional data. Hyper is an in-memory database that guarantees the ACID properties (Atomicity, Consistency, Isolation, Durability) of OLTP transactions and executes OLAP query sessions (multiple queries) on the same, arbitrarily current and consistent snapshot.
4. What makes Hyper so fast?
The utilization of processor-inherent support for virtual memory management (address translation, caching, copy on update) yields both at the same time: unprecedentedly high transaction rates of up to 100,000 per second and very fast OLAP query response times, on a single system executing both workloads in parallel. This could support real-time streaming of data in future releases of Tableau. These performance increases come from the nature of Hyper's data structures, but also from smart use of contemporary hardware technology, particularly non-volatile RAM (NVRAM). Additional cores provide a linear increase in performance.
5. What does this mean for Tableau?
With Hyper now powering the Tableau platform, your organization will see faster extract creation and better query performance for large data sets. Since Hyper is designed to handle exceptionally large data sets, you can choose to extract your data based on what you need, not on data volume limitations. Hyper improves performance for common computationally intensive queries, such as count distinct, calculated fields, and text field manipulations. This performance boost will improve your entire enterprise analytics workflow.
Michael, a data scientist working for a German railway and logistics company, recently told me during an FATUG meetup that he loves Tableau's R and Python integration. As he continued, he raised the question of whether they could also use functions they had written in Julia. Julia, a high-level dynamic programming language for high-performance numerical analysis, is an integral part of the newly developed data strategy in Michael's organization.
Tableau, however, does not come with native support for Julia. I didn't want to let Michael's team down, so I looked for an alternative way to integrate Julia with Tableau.
This solution has been working flawlessly in a production environment for several months. In this tutorial, I'm going to walk you through the installation and show you how to connect Tableau with R and Julia. I will also give you an example of calling a Julia statement from Tableau to calculate the volume of a sphere.
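To give you a first idea of where we are headed, here is a minimal sketch of such a calculated field. It is only an illustration and makes a few assumptions: Tableau's R integration is configured against a running Rserve instance, the XRJulia package is installed in R and can find a local Julia installation, and [Radius] is a placeholder for a numeric field in your own data source.

SCRIPT_REAL("
  # sketch: evaluate the sphere volume formula 4/3 * pi * r^3 in Julia,
  # once for each radius value that Tableau passes in as .arg1
  library(XRJulia)
  sapply(.arg1, function(r) juliaEval(paste('4/3 * pi *', r, '^3')))
", AVG([Radius]))

At query time, SCRIPT_REAL ships the embedded R code to Rserve, and XRJulia hands the actual arithmetic off to Julia; the rest of this tutorial covers how to install all three pieces and get them talking to each other.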
With over 3 billion active social media users, establishing an active presence on social media networks is becoming increasingly essential to getting your business in front of your ideal audience. These days, more and more consumers are looking to engage, connect, and communicate with their favorite brands on social media.
Adding social media to your customer-centric data strategy will help boost brand awareness, increase followership, drive traffic to your website, and generate leads for your sales funnel. In 2017, no organization should be without a plan that actively places its brand on social media and analyzes its social media data.
Once you’ve started diving into social media analytics, how do you bring it to the next level? This session covers a customer-centric data strategy for scaling a social media data program.
Here are the links (i.e. additional resources) featured during the session to help you formulate your social media data program in order to build a stronger presence and retrieve powerful insights:
Woohoo, it’s already been ten years since I started this blog. Can’t believe it! Thanks to all of those who have read my posts, and who encouraged and inspired me. Without you, blogging would be only half the fun! Now, let’s have a little recap…
2007-2009 SAP and India:
It all started in 2007. I was studying Computer Science and decided to go for an internship abroad. China and India were on my shortlist. I decided on India, applied for a scholarship, and asked some companies for interesting project work. Before starting the adventure, I published my very first blog post to keep family and friends in the loop.
For the next seven months, I lived in Bangalore and worked for SAP Labs India, developing prototypes for mobile BI apps. I spent plenty of weekends exploring India and the surrounding countries. After returning from India, I continued to work for SAP at their headquarters while finishing my degree in Karlsruhe.
CERN, surrounded by snow-capped mountains and Lake Geneva, grabbed my attention toward the end of my studies. CERN has tons of data: some petabytes! Challenge accepted. CERN is known for its particle accelerator, the Large Hadron Collider (LHC). We applied machine learning to identify new correlations between variables (LHC data and external data) that were not previously connected.
2012-2015 Capgemini and MBA:
Back in Germany, I wanted to bring Big Data Analytics to companies. To one company? No, to many companies! So instead of getting hired as Head of BI for an SME, I started to work for Capgemini. I had fantastic projects, designed data-driven use cases for the financial sector, and gave advice on digital transformation initiatives.
In order to keep a balance with all the project work, I dedicated many of my weekends to studying and enrolled in Frankfurt School’s Executive MBA programme. During my studies, I focused on Emerging Markets and attended a module at CEIBS in Shanghai.
2015-201? Tableau and Futura:
I knew Tableau from my time as a consultant. It is an awesome company with a great product and a mission: help people see and understand their data. That’s me! I joined Tableau to help organizations through the transition from classic BI factories to modern self-service analytics by developing data strategies, so that data can be treated as a corporate asset. This includes education, evangelism, and establishing a data-driven culture.
In the evenings, I work for Futura Analytics, a fintech startup that I co-founded in 2017. Futura Analytics offers real-time information discovery and transforms data from social media and other public sources into actionable signals.
Currently, I’m looking forward to giving my Data Strategy talk at TC17, accompanied by a TensorFlow demo scenario. I have also been learning Mandarin, the predominant language of business, politics, and media in China and Taiwan, for quite a while. Let’s see if that is going to influence my next steps… 🙂