With today’s launch of OpenAI’s GPT-4, the next generation of its Large Language Model (LLM), generative AI has entered a new era. This latest model is more advanced and multimodal, meaning GPT-4 can understand and generate responses based on image input as well as traditional text input (see GPT-4 launch livestream).
Generative AI has rapidly gained popularity and awareness in the last few months, making it crucial for businesses across a wide range of industries, including e-commerce and healthcare, to evaluate and implement strategies for it. By automating tasks and creating personalized experiences for users, companies can increase efficiency and productivity in many areas of value creation. Although the underlying technology has been in development for decades, it is high time for businesses to apply generative AI to their workflows and reap its benefits.
Before we dive into OpenAI's GPT-4, let's take a quick look back at the evolution of generative AI…
The history of generative AI reaches back to the late 1970s and early 1980s, when researchers began developing neural networks that mimicked the structure of the human brain. The idea behind this technology was to assemble a set of neurons that pass information from one to another with some basic logic, so that together the network can perform complicated tasks. Progress was modest, however, and the field remained largely dormant until around 2010, when companies such as Google began building deep neural networks backed by far more data, specialized hardware, and computing power.
In 2011, Apple launched Siri, the first mass-market speech recognition application. In 2012, Google used deep neural networks to identify cats in YouTube videos, a result that helped revive broad interest in neural networks and AI. Both Google and NVIDIA invested heavily in specialized hardware to support neural networks. In 2014, Google acquired DeepMind, which applied neural networks to games. DeepMind went on to build AlphaGo, which defeated the world's top Go players, a pivotal moment and one of the first industrial forerunners of generative AI, in the sense that the system generates human-like candidate moves.
OpenAI was founded as a non-profit organization to democratize AI
In 2015, OpenAI was founded as a non-profit organization with the mission of democratizing AI. In 2019, it released GPT-2, a large-scale language model capable of producing human-like text. However, GPT-2 sparked controversy because it could be used to produce fake news and disinformation, raising concerns about the ethics of generative AI.
In 2021, OpenAI launched DALL-E, a neural network that can create original, realistic images and art from a textual description, combining concepts, attributes, and styles in novel ways. A year later, the independent research lab Midjourney launched its image generation service of the same name. Also in 2022, Stable Diffusion was released, an open-source machine learning model developed at LMU Munich that can generate images from text, modify images based on text, or fill in details in low-resolution or low-detail images.
OpenAI launched ChatGPT in November 2022 as a fine-tuned version of its GPT-3.5 model, developed with a focus on processing natural language queries and generating relevant responses. The result is an AI-powered chatbot that can hold meaningful conversations with users, providing information and assistance in real time. One of ChatGPT's key strengths is its ability to handle complex queries: the model has been trained on a vast corpus of data, allowing it to understand the nuances of natural language and respond in a contextually relevant way.
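For teams that want to build on this capability, the same model family is available through OpenAI's API. The snippet below is a minimal sketch using the openai Python client (0.x API) and the gpt-3.5-turbo chat model; the system prompt, user prompt, and parameters are illustrative placeholders, not part of OpenAI's announcement.

```python
# pip install openai  (0.x client) and set the OPENAI_API_KEY environment variable
import openai

# Minimal chat request against the GPT-3.5 model family behind ChatGPT.
# The prompts below are hypothetical examples for an e-commerce use case.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant for an e-commerce team."},
        {"role": "user", "content": "Draft a short product description for a reusable water bottle."},
    ],
    max_tokens=150,  # cap the length of the completion
)

print(response["choices"][0]["message"]["content"])
```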
Today’s launch of OpenAI GPT-4 marks a significant milestone in the evolution of generative AI!
This latest model, GPT-4, is capable of answering user queries via text and image input. The multimodal model demonstrates remarkable human-level performance on various professional and academic benchmarks, indicating the potential for widespread adoption and use. One of the most significant features of OpenAI GPT-4 is its ability to understand and process image inputs, providing users with a more interactive and engaging experience.
Users can now receive text responses based on image inputs, which is a massive step forward in the evolution of AI. Depending on the model variant, a request can use up to 32,768 tokens shared between prompt and completion, the equivalent of about 49 pages of text. If your prompt is 30,000 tokens, your completion can therefore be at most 2,768 tokens, as sketched in the example below.
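Here is a minimal sketch of that budget calculation in Python, using OpenAI's tiktoken library to count prompt tokens. The cl100k_base encoding and the 32,768-token context limit are assumptions based on the figures above, not an official recipe.

```python
# pip install tiktoken
import tiktoken

# Assumption: the cl100k_base encoding and the 32,768-token limit of the
# 32k-context GPT-4 variant mentioned above.
CONTEXT_LIMIT = 32_768
encoding = tiktoken.get_encoding("cl100k_base")

def completion_budget(prompt: str, context_limit: int = CONTEXT_LIMIT) -> int:
    """Return how many tokens remain for the completion after the prompt."""
    prompt_tokens = len(encoding.encode(prompt))
    return max(context_limit - prompt_tokens, 0)

# Example: a 30,000-token prompt leaves at most 2,768 tokens for the answer.
print(completion_budget("Summarize the attached report ..."))
```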
Bing has already integrated GPT-4 and offers both chat and compose modes for users to interact with the model. With this integration, Bing has significantly enhanced its ability to deliver more accurate and personalized search results, making it easier for users to find what they are looking for.
The disruptive potential of generative AI is enormous, particularly in the retail industry. The technology can create personalized product recommendations and content, and even generate leads, saving sales teams time and increasing productivity. However, the ethical implications of generative AI cannot be ignored, particularly in the creation of disinformation and fake news.
To sum up, generative AI is here to stay, and companies must evaluate and implement strategies swiftly. As generative AI technology advances, so do the ethical concerns surrounding its use. Therefore, it is critical for companies to proceed with caution and consider the potential consequences of implementing generative AI into their operations.
Are you already using generative AI for a more productive workflow?
What improvement do you expect from OpenAI GPT-4 in this regard? I look forward to reading your ideas in the comments on this LinkedIn post: