In the latest episode of "Die Digitalisierung und Wir", we interviewed Dilyana Bossenz on the topic of data literacy. Dilyana is a lecturer in Data Visualization & Communication at the Digital Business University of Applied Sciences (DBU) in Berlin and the founder of Datenkompetenz-Online, a training platform for companies that want to introduce their employees to the world of data. With her reviews, Dilyana has made a valuable contribution to the books Datenvisualisierung mit Tableau (Amazon) and Datenvisualisierung mit Power BI (Amazon).
We first talked about Dilyana's professional background and traced her path into the world of data. With her many years of experience as a BI consultant and enablement manager at various companies, she has a wealth of knowledge and insights to share, especially in the field of data and data analysis.
Communicating Data Effectively
Data literacy, Dilyana explains, is more than just tool proficiency. It is about not only understanding and analyzing data, but also communicating it effectively and using it to make informed decisions. The good news: data literacy can be learned, and anyone, regardless of prior experience, can improve this skill.
Most German companies still face major challenges when it comes to their employees' data literacy. Dilyana, however, has some concrete suggestions for how companies can position themselves much better in this regard. Strong data literacy can have a positive impact on corporate decision-making and significantly advance specific areas such as marketing, sales, and product development.
Data Literacy Is Crucial for Digitalization
Alongside security and stability, data literacy plays a crucial role in the digital transformation of companies. In a world where data-driven companies are the new norm, fostering employees' data literacy is essential. But how can companies ensure this? And what impact can strong data literacy have on corporate culture? Dilyana gives detailed answers to all of these questions in our podcast episode.
Dilyana also tells us about her motivation for founding a company and how the decision to offer everything online has shaped the company's direction. If you are thinking about becoming self-employed, you will surely find her experiences and advice very valuable. Fitting the topic of data analysis and statistics, Dilyana also has an exciting book recommendation: Moneyball: The Art of Winning an Unfair Game by Michael Lewis.
If you want to find your way in the world of data, or are simply curious about what data literacy means, don't miss this episode! Listen to the full story in our latest podcast episode. And if you missed the previous episode about Alex's trip to the USA and the latest developments in the tech industry, you can catch up on it here.
Stay tuned for more exciting topics around digitalization and its impact on our society. Comments are always welcome! 👇
In our latest podcast episode of "Die Digitalisierung und Wir", Florian and I take you on an exciting journey to the USA. I share my impressions of the Pacific Northwest, the unique service culture, and the AI boom that has swept the country. We also talk about the potential negative effects of artificial intelligence and, of course, about Apple's newly unveiled VR headset, the Apple Vision Pro.
Nature and High Tech in the Pacific Northwest
One of the most fascinating regions of the USA is the Pacific Northwest. The mix of breathtaking nature and innovative tech hubs like Seattle and Portland is unique. The gold-rush spirit that makes the USA so special pulses here, accompanied, however, by an underlying fear of a possible recession.
Another highlight is the service culture in the USA. It differs greatly from what we are used to in Germany and was a real experience. On my trip, I came across many examples of just how much this culture shapes everyday life, especially in comparison to Germany.
We also talk about the Apple Vision Pro, Apple's latest innovation. The VR headset, recently unveiled at Apple's developer conference WWDC23, is an exciting new step into virtual reality. In addition, we once again discussed several films, books, and apps, including the legendary VR game Half-Life: Alyx (tech review) as well as the films Her, Surrogates, Transcendence, and M3GAN, which offer interesting perspectives on AI and technology.
Book Recommendations on Artificial Intelligence
We also present the following books on AI:
"Klara and the Sun" by Kazuo Ishiguro. An emotional and thought-provoking journey into a near future in which AI has become deeply embedded in our daily lives. A must-read for anyone interested in what it means to be human in the digital age! Imagine your only source of information about humanity were an artificial friend named Klara. Quite fascinating, isn't it? That is exactly the premise of "Klara and the Sun", which takes us into a not-too-distant future in which Klara, a solar-powered, optimistic, and endearing narrator, takes on the role of companion to a sick teenager. 🤖🌞 (Full review)
"The New New Thing" by Michael Lewis. An in-depth analysis of Silicon Valley's start-up culture and an excellent read for anyone who wants insight into that culture as it was two decades ago. Michael Lewis, known for his incisive sociocultural analyses, skillfully brings to life the mindset that drives this technology-driven region. Perfect for understanding what makes this unique part of the world tick! 💡💰 (Full review)
"11/22/63" (published in German as "Der Anschlag"), perhaps Stephen King's best work, is a gripping thriller about the protagonist's journey back in time to the 1950s and its entanglement with the assassination of President John F. Kennedy. Anyone who has ever wondered what would happen if time machines existed should definitely read this book. ⏳🔮 (Full review)
We Want Your Opinion on the Future of AI
Don't miss this exciting episode and learn more about the AI boom, VR technology, and the special atmosphere of the Pacific Northwest of the USA. We look forward to your feedback and questions:
In the complex world of data analytics, a data lake serves as a centralized repository where you can store all your structured and unstructured data at any scale. It offers immense flexibility, allowing you to run big data analytics and adapt to the needs of various types of applications. But imagine having more than just a data lake. Imagine having an entire suite of data management and analytics services that work seamlessly together. That’s where Microsoft Fabric comes in.
Microsoft Fabric is an all-in-one analytics solution designed for enterprises. It spans everything from data movement and data science to Real-Time Analytics and business intelligence. It offers a comprehensive suite of services, including a data lake, data engineering, and data integration, all conveniently located in one platform.
Use Cases of Microsoft Fabric in Data-Driven Companies
Microsoft Fabric covers all analytics requirements relevant to a Data-Driven Company. Every user group, from Data Engineers to Data Analysts to Data Scientists, can work with the data in a unified way and easily share the results with others. The areas of application at a glance:
Data Engineering: Data ingested with Data Factory can be transformed with high performance on a Spark platform and democratized via the Lakehouse. Models and metrics are created directly in Fabric (see the sketch after this list).
Self-Service Analytics: Following the data mesh paradigm, individual data teams can be provided with a decentralized self-service platform for building and distributing their own data products.
Data Science: Azure Machine Learning functionalities are available by default. Machine learning models for applied AI can be trained, deployed, and operationalized in the Fabric environment.
Real-Time Analytics: With Real-Time Analytics, Fabric includes an engine optimized for analyzing streaming data from a wide variety of sources – such as apps, IoT devices, or human interaction.
Data Governance: OneLake, as a unified repository, enables IT teams to centrally manage and monitor governance and security standards across all components of the solution.
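To make the Data Engineering scenario above concrete, here is a minimal sketch of what a PySpark cell in a Fabric notebook might look like. It assumes a Lakehouse is attached to the notebook, where the Fabric runtime provides the `spark` session; the table and column names are hypothetical.

```python
# Minimal sketch of a Fabric Data Engineering notebook cell (PySpark).
# Assumes a Lakehouse is attached to the notebook; the `spark` session is
# provided by the Fabric runtime. Table and column names are hypothetical.
from pyspark.sql import functions as F

# Read raw data previously ingested via Data Factory into the Lakehouse
orders_raw = spark.read.format("delta").load("Tables/orders_raw")

# Transform: keep completed orders and aggregate into a daily revenue metric
daily_revenue = (
    orders_raw
    .filter(F.col("status") == "completed")
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)

# Democratize: persist as a Delta table that analysts can query directly
daily_revenue.write.format("delta").mode("overwrite").saveAsTable("daily_revenue")
```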
Users at all levels are also supported by AI technologies. With Microsoft Copilot, Microsoft Fabric offers an intelligent assistant that translates natural-language instructions into concrete actions. Developers can, for example, generate program code, set up data pipelines, or build machine learning models this way. Likewise, business users can use Copilot to generate reports and visualizations for data analysis from natural-language input alone.
Simplifying Data Analytics: How Microsoft Fabric Offers a Unified, End-to-End Solution
With Fabric, you don’t need to piece together different services from multiple vendors. Instead, you can enjoy a highly integrated, end-to-end, and easy-to-use product that is designed to simplify your analytics needs. One conceivable deployment scenario for the future is data mesh domains with Microsoft Fabric that are connected to an existing lakehouse based on Azure Data Lake Storage Gen2 and Databricks or Synapse. In this setup, the lakehouse continues to handle the core data preparation tasks.
Meanwhile, the decentralized domain teams can access the quality-assured lakehouse data in Microsoft Fabric via shortcuts to create and deploy their own use cases and data products. Such a setup could prove to be an ideal option, as it combines the advantages of both approaches. The platform is built on a foundation of Software as a Service (SaaS), which takes simplicity and integration to a whole new level.
Microsoft Fabric is not just another addition to the crowded data analytics landscape. Centered around Microsoft’s OneLake data lake, it boasts integrations with Amazon S3 and Google Cloud Platform. The platform consolidates data integration tools, a Spark-based data engineering platform, real-time analytics, and, thanks to upgrades in Power BI, visualization, and AI-based analytics into a single, unified experience.
Microsoft Fabric Pricing Streamlines Your Data Stack for Optimal Cost Efficiency
The rapid innovation in data analytics technologies is a double-edged sword. On one hand, businesses have a plethora of tools at their disposal. On the other, the modern data stack has become increasingly fragmented, making it a daunting task to integrate various products and technologies. Microsoft Fabric aims to eliminate this "integration tax" that companies have grown tired of paying.
Microsoft Fabric is built around a unified compute infrastructure and a single data lake. This uniformity extends to product experience, governance, and even the business model. The platform brings together all data analytics workloads—data integration, engineering, warehousing, data science, real-time analytics, and business intelligence—under one roof.
Microsoft Fabric introduces a simplified pricing model focused on a common Fabric compute unit. This virtualized, serverless computing allows businesses to optimize costs by reusing the capacity they purchase. The multi-cloud approach, with built-in support for Amazon S3 and upcoming support for Google Storage, ensures that businesses are not locked into a single cloud vendor.
Enhanced Data Governance with Microsoft Purview
Data governance is another area where Microsoft Fabric excels. Using Microsoft Purview, businesses can manage data access meticulously. For instance, confidential data exported to Power BI or Excel automatically inherits the same confidentiality labels and encryption rules, ensuring security.
Microsoft Fabric also offers a no-code developer experience, enabling real-time data monitoring and action triggering. The platform will soon incorporate AI Copilot, designed to assist users in building data pipelines, generating code, and constructing machine learning models.
My Personal Experience So Far
Having personally demoed Fabric to over 20 enterprises, I can say the excitement is palpable. The platform simplifies data infrastructure while offering the flexibility of a multi-cloud approach. Most notably, it is built around the open-source Apache Parquet format, allowing for easier data storage and retrieval.
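As a quick illustration of why the open Parquet format matters, here is a small, self-contained Python sketch; the file and column names are arbitrary examples, and any Parquet-aware engine can read the result back.

```python
# Quick illustration of the open Apache Parquet format that OneLake builds on.
# Requires pandas and pyarrow (pip install pandas pyarrow); names are arbitrary.
import pandas as pd

df = pd.DataFrame({
    "product": ["A", "B", "C"],
    "units_sold": [120, 75, 230],
})

# Parquet is columnar and compressed, which makes analytical scans cheap
df.to_parquet("sales.parquet", engine="pyarrow", index=False)

# Any Parquet-aware engine (Spark, DuckDB, Power BI, ...) can read it back
restored = pd.read_parquet("sales.parquet", engine="pyarrow")
print(restored)
```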
Microsoft Fabric is currently in public preview and will be enabled for all Power BI tenants starting July 1. The platform promises to be more than just a tool; it aims to be a community where data professionals can collaborate, share knowledge, and grow. So, when someone asks you, „What is Microsoft Fabric?“ you’ll know it’s not just a product; it’s a revolution in data analytics.
Join our Microsoft Fabric & Power Platform LinkedIn Group!
Our LinkedIn group has changed its name to Microsoft Fabric & Power Platform to reflect the evolving ecosystem and the seamless integration between Power Platform technologies like Power BI, Power Apps, and Power Automate with Microsoft Fabric tools like OneLake and Synapse.
If you're as excited as I am about the future of data analytics and business intelligence, I invite you to join our LinkedIn group, Microsoft Fabric & Power Platform, a community dedicated to professionals who are eager to stay ahead of industry trends.
Following the talk, I was inspired by a conversation to leverage the power of GPT-4 and create an automatically generated summary of the Microsoft Teams transcript. This approach not only streamlines information sharing but also showcases the practical applications of advanced AI technology.
Below, I will share the key insights generated by GPT-4 and also include some captivating images from the event:
Decisively Digital: AI’s Impact on Society
In my talk, I drew inspiration from my book Decisively Digital, which discusses the impact of AI on society. I spoke about the innovative projects underway at Microsoft's AI for Good Lab. In light of GPT-4's recent launch, I also highlighted our mission to leverage technology to benefit humanity.
By harnessing Generative AI, we can stimulate the creation of innovative ideas and accelerate the pace of advancement. This cutting-edge technology is already transforming industries by streamlining drug development, expediting material design, and inspiring novel hypotheses. AI’s ability to identify patterns in vast datasets empowers humans to uncover insights that might have gone unnoticed.
Generative AI can Augment our Thinking
For instance, researchers have employed machine learning to predict chemical combinations with the potential to improve car batteries, ultimately identifying promising candidates for real-world testing. AI can efficiently sift through and analyze extensive information from diverse sources, filtering, grouping, and prioritizing relevant data. It can also generate knowledge graphs that reveal associations between seemingly unrelated data points, which can be invaluable for drug research, discovering novel therapies, and minimizing side effects.
„Now is the time to explore how Generative AI can augment our thinking and facilitate more meaningful interactions with others.“
Alexander Loth
At the AI for Good Lab, we are currently employing satellite imagery and generative AI models for damage assessment in Ukraine, with similar initiatives taking place in Turkey and Syria for earthquake relief. In the United States, our focus is on healthcare, specifically addressing discrepancies and imbalances through AI-driven analysis.
Our commitment to diversity and inclusion centers on fostering digital equality by expanding broadband access, facilitating high-speed internet availability, and promoting digital skills development. Additionally, we are dedicated to reducing carbon footprints and preserving biodiversity. For example, we collaborate with the NOAH organization to identify whales using AI technology and have developed an election propaganda index to expose the influence of fake news. Promising initial experiments using GPT-4 showcase its potential for fake news detection.
ChatGPT Will Be Empowered to Perform Real-Time Website Crawling
While ChatGPT currently cannot crawl websites directly, it is built on a training set of crawled data up to September 2021. In the near future, the integration of plugins will empower ChatGPT to perform real-time website crawling, enhancing its ability to deliver relevant, up-to-date information and to handle sophisticated mathematics. The same training set serves as the foundation for the GPT-4 model.
GPT-4 demonstrates remarkable reasoning capabilities, while Bing Chat offers valuable references for verifying news stories. AI encompasses various machine learning algorithms, including computer vision, statistical classifications, and even software that can generate source code. A notable example is the Codex model, a derivative of GPT-3, which excels at efficiently generating source code.
Microsoft has a long-standing interest in AI and is dedicated to making it accessible to a wider audience. The company’s partnership with OpenAI primarily focuses on the democratization of AI models, such as GPT and DALL-E. We have already integrated GPT-3 into Power BI and are actively developing integrations for Copilot across various products, such as Outlook, PowerPoint, Excel, Word, and Teams. Microsoft Graph is a versatile tool for accessing XML-based objects in documents and generating results using GPT algorithms.
Hardware, particularly GPUs, has played a pivotal role in the development of GPT-3. For those interested in experimenting with Generative AI on a very technical level, I recommend Stable Diffusion, which is developed by LMU Munich. GPT-3’s emergence created a buzz, quickly amassing a vast user base and surpassing the growth of services like Uber and TikTok. Sustainability remains a crucial concern, and Microsoft is striving to achieve a CO2-positive status.
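If you want to follow that recommendation and experiment with Stable Diffusion yourself, a minimal sketch using Hugging Face's diffusers library might look like the following; the model ID and prompt are illustrative choices, not part of the original talk.

```python
# Sketch: trying out Stable Diffusion locally via Hugging Face's diffusers
# library (pip install diffusers transformers torch). Model ID and prompt
# are illustrative; a GPU makes generation much faster but is optional.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
if torch.cuda.is_available():
    pipe = pipe.to("cuda")

image = pipe("a lighthouse at sunset, oil painting").images[0]
image.save("lighthouse.png")
```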
Generative AI Models Have Garnered Criticism Due to Their Dual-Use Nature
Despite their potential, generative AI models such as GPT-3 have also garnered criticism due to their dual-use nature and potential negative societal repercussions. Some concerns include the possibility of automated hacking, photo manipulation, and the spread of fake news (➡️ deepfake discussion on LinkedIn). To ensure responsible AI development, numerous efforts are underway to minimize reported biases in the GPT models. By actively refining algorithms and incorporating feedback from users and experts, developers can mitigate potential risks and promote a more ethical and inclusive AI ecosystem.
Moving forward, it is essential to maintain open dialogue and collaboration between AI developers, researchers, policymakers, and users. This collaborative approach will enable us to strike a balance between harnessing the immense potential of AI technologies like GPT and ensuring the protection of society from unintended negative consequences.
GPT-3.5 closely mimics human cognition. However, GPT-4 transcends its forerunner with its remarkable reasoning capabilities and contextual understanding. GPT models leverage tokens to establish and maintain the context of the text, ensuring coherent and relevant output. The GPT-4-32K model boasts an impressive capacity to handle 32,000 tokens, allowing it to process extensive amounts of text efficiently. To preserve the context and ensure the continuity of the generated text, GPT-4 employs various strategies that adapt to different tasks and content types.
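To see what a token actually is, here is a small sketch using OpenAI's tiktoken library; the sample sentence is arbitrary, and the encoding name reflects the GPT-3.5/GPT-4 family at the time of writing.

```python
# Sketch: counting tokens the way GPT models segment text, using OpenAI's
# tiktoken library (pip install tiktoken). "cl100k_base" is the encoding
# used by the GPT-3.5/GPT-4 model family.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "GPT models leverage tokens to establish and maintain context."
tokens = enc.encode(text)

print(len(tokens))         # how much of the 32,000-token budget this uses
print(enc.decode(tokens))  # decoding round-trips back to the original text
```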
GPT-4 Features a Robust Foundation in Common Sense Reasoning
One of GPT-4’s defining features is its robust foundation in common sense reasoning. This attribute significantly contributes to its heightened intelligence, enabling the AI model to generate output that is not only coherent but also demonstrates a deep understanding of the subject matter. As GPT-4 continues to evolve and refine its capabilities, it promises to revolutionize the field of artificial intelligence, expanding the horizons of what AI models can achieve and paving the way for future breakthroughs in the realm of generative AI.
In the near future, advanced tools like ChatGPT will elucidate intricate relationships without requiring us to sift through countless websites and articles, further amplifying the transformative impact of Generative AI.
I appreciate the opportunity to share my insights at the German Chapter of the ACM.
Did You Enjoy This GPT-Generated Summary of My Talk?
Leveraging GPT-4 to generate a summary of my talk was an exciting experiment, and I have to admit, the results are impressive. GPT was able to provide a brief overview of the key takeaways from my talk.
Now, I would love to hear about your experiences with GPT so far. Feel free to share your thoughts in the comments section of this Twitter thread or this LinkedIn post:
With today’s launch of OpenAI’s GPT-4, the next generation of its Large Language Model (LLM), generative AI has entered a new era. This latest model is more advanced and multimodal, meaning GPT-4 can understand and generate responses based on image input as well as traditional text input (see GPT-4 launch livestream).
Generative AI has rapidly gained popularity and awareness in the last few months, making it crucial for businesses to evaluate and implement strategies across a wide range of industries, including e-commerce and healthcare. By automating tasks and creating personalized experiences for users, companies can increase efficiency and productivity in various areas of value creation. Although the underlying technology has been in development for decades, now is the time for businesses to apply generative AI to their workflows and reap its benefits.
Before you dive into OpenAI GPT-4, let’s take a quick look back at the evolution of generative AI…
The history of generative AI begins in the late 1970s and early 1980s, when researchers began developing neural networks that mimicked the structure of the human brain. The idea behind this technology was to assemble a set of neurons that could pass information from one to another with some basic logic, so that together the network of neurons could perform complicated tasks. Despite these early efforts, the field remained largely dormant until around 2010, when Google pioneered deep neural networks backed by more data, hardware, and computing resources.
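To make the "network of neurons" idea tangible, here is a tiny, untrained sketch in Python; the layer sizes and weights are arbitrary illustrative values, not a real model.

```python
# Minimal sketch of the idea described above: a tiny feed-forward network in
# which each "neuron" combines its inputs with weights and a simple rule.
# The weights are random illustrative values, not a trained model.
import numpy as np

rng = np.random.default_rng(42)

def layer(inputs, weights, bias):
    # Each neuron computes a weighted sum of its inputs, then applies a
    # nonlinearity before passing the signal on to the next layer
    return np.tanh(inputs @ weights + bias)

x = np.array([0.5, -1.0, 2.0])                        # three input signals
hidden = layer(x, rng.normal(size=(3, 4)), 0.0)       # four hidden neurons
output = layer(hidden, rng.normal(size=(4, 1)), 0.0)  # one output neuron
print(output)
```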
In 2011, Apple launched Siri, the first mass-market speech recognition application. In 2012, Google used the technology to identify cats in YouTube videos, finally reviving the field of neural networks and AI. Both Google and NVIDIA invested heavily in specialized hardware to support neural networks. In 2014, Google acquired DeepMind, which built neural networks for gaming. DeepMind's AlphaGo went on to defeat the world's top Go players, a pivotal moment because it was one of the first industrial applications of generative AI: computers generating human-like candidate moves.
OpenAI Was Founded as a Non-Profit Organization to Democratize AI
In 2015, OpenAI was founded as a non-profit organization with the mission of democratizing AI. In 2019, OpenAI released GPT-2, a large-scale language model capable of producing human-like text. However, GPT-2 sparked controversy because it could produce fake news and disinformation, raising concerns about the ethics of generative AI.
In 2021, OpenAI launched DALL-E, a neural network that can create original, realistic images and art from textual descriptions. It can combine concepts, attributes, and styles in novel ways. A year later, the independent research lab Midjourney launched its image-generation service of the same name. Also in 2022, LMU Munich released Stable Diffusion, an open-source machine learning model that can generate images from text, modify images based on text, or fill in details in low-resolution or low-detail images.
OpenAI launched ChatGPT in November 2022 as a fine-tuned version of the GPT-3.5 model. It was developed with a focus on enhancing the model’s ability to process natural language queries and generate relevant responses. The result is an AI-powered chatbot that can engage in meaningful conversations with users, providing information and assistance in real-time. One of the key advantages of ChatGPT is its ability to handle complex queries and provide accurate responses. The model has been trained on a vast corpus of data, allowing it to understand the nuances of natural language and provide contextually relevant responses.
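For readers who want to try the model behind ChatGPT themselves, a minimal sketch of an API call might look like this, using the OpenAI Python library as it worked at the time of writing (versions before 1.0); the key placeholder and prompt are only examples.

```python
# Sketch: calling the model behind ChatGPT with OpenAI's Python library as
# its API looked at the time of writing (openai < 1.0). You need your own
# API key; the prompt is only an example.
import openai

openai.api_key = "sk-..."  # replace with your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain in one sentence what a fine-tuned model is."},
    ],
)
print(response.choices[0].message.content)
```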
Today’s launch of OpenAI GPT-4 marks a significant milestone in the evolution of generative AI!
This latest model, GPT-4, is capable of answering user queries via text and image input. The multimodal model demonstrates remarkable human-level performance on various professional and academic benchmarks, indicating the potential for widespread adoption and use. One of the most significant features of OpenAI GPT-4 is its ability to understand and process image inputs, providing users with a more interactive and engaging experience.
Users can now receive responses in the form of text output based on image inputs, which is a massive step forward in the evolution of AI. Depending on the model used, a request can use up to 32,768 tokens shared between prompt and completion, which is the equivalent of about 49 pages. If your prompt is 30,000 tokens, your completion can be a maximum of 2,768 tokens.
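That budget arithmetic is easy to verify yourself; the sketch below simply restates the numbers from the paragraph above.

```python
# The prompt/completion budget from the paragraph above, as plain arithmetic.
# The context window size comes from the text; the prompt size is an example.
CONTEXT_WINDOW = 32_768   # tokens shared between prompt and completion

prompt_tokens = 30_000
max_completion_tokens = CONTEXT_WINDOW - prompt_tokens
print(max_completion_tokens)  # -> 2768 tokens available for the completion
```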
Bing has already integrated GPT-4 and offers both chat and compose modes for users to interact with the model. With the integration of GPT-4, Bing has significantly enhanced its capabilities to provide users with more accurate and personalized search results, making it easier for them to find what they are looking for.
The disruptive potential of generative AI is enormous, particularly in the retail industry. The technology can create personalized product recommendations and content, and even generate leads, saving sales teams time and increasing productivity. However, the ethical implications of generative AI cannot be ignored, particularly in the creation of disinformation and fake news.
To sum up, generative AI is here to stay, and companies must evaluate and implement strategies swiftly. As generative AI technology advances, so do the ethical concerns surrounding its use. Therefore, it is critical for companies to proceed with caution and consider the potential consequences of implementing generative AI into their operations.
Are you already using generative AI for a more productive workflow?
What improvement do you expect from OpenAI GPT-4 in this regard? I look forward to reading your ideas in the comments to this LinkedIn post: