OpenAI, the AI research lab co-founded by Elon Musk, has launched what is arguably the largest natural language processing model ever created: the Generative Pre-trained Transformer 3 (GPT-3).

GPT-3 can do some funky things. It can synthesize entire artifacts, including articles, music and software code, after being shown just a few examples. Some of its output is convincing enough to be mistaken for human work, pushing the technology towards passing the Turing Test.
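The "few examples" approach is known as few-shot prompting: instead of retraining the model, you show it a handful of input/output pairs in plain text and let it infer the task. A minimal sketch of what such a prompt might look like (the translation pairs and the final line are illustrative assumptions, not an actual GPT-3 call):

```python
# Hypothetical few-shot prompt: the model sees a few worked examples
# (here, English-to-French pairs) and is asked to complete the pattern.
examples = [
    ("Hello", "Bonjour"),
    ("Thank you", "Merci"),
]

# Build the prompt text from the examples.
prompt = "".join(f"English: {en}\nFrench: {fr}\n" for en, fr in examples)

# Leave the last answer blank; the model would fill it in.
prompt += "English: Good night\nFrench:"

print(prompt)
```

The key point is that no task-specific training happens; the task definition lives entirely in the prompt text.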

Shortly after its release, GPT-3 synthesized this essay when prompted with a short introduction and the phrase “Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI”.

The essay was produced by the machine iteratively predicting the next word, appending it to the text, and feeding the extended text back into the algorithm to predict the word after that. Eventually, voilà! An entire article was produced.
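This generate-one-word-at-a-time loop is called autoregressive generation. A toy sketch of the idea, where a tiny lookup table stands in for GPT-3's next-word predictor (the table contents are made up for illustration; the real model scores every word in its vocabulary with a neural network):

```python
# Toy stand-in for GPT-3's next-word predictor: maps the most recent
# word to candidate continuations (hypothetical data for illustration).
NEXT_WORDS = {
    "humans": ["have"],
    "have": ["nothing"],
    "nothing": ["to"],
    "to": ["fear"],
    "fear": ["from"],
    "from": ["AI."],
}

def generate(prompt, max_words=10):
    """Autoregressive loop: predict one word, append it, repeat."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = NEXT_WORDS.get(words[-1])
        if not candidates:
            break  # the toy model has no continuation for this word
        words.append(candidates[0])  # greedy choice: take the top candidate
    return " ".join(words)

print(generate("humans"))  # → "humans have nothing to fear from AI."
```

GPT-3 works the same way in outline, except its predictor conditions on the entire preceding text rather than just the last word, and it samples from a probability distribution rather than always taking the top candidate.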

GPT-3 can also synthesize music. This classical music piece was seeded with a handful of notes and the iteration process took care of the rest. Here, a human plays the synthesized music on a real piano.

GPT-3, however, is not perfect. The results have been mixed, with some embarrassing errors. In this music piece GPT-3 gets caught in a repetition loop, while in this piece the music has long sparse sections and jarring jumps. Furthermore, significant bias, inherited from the internet data on which GPT-3 was trained, remains a major issue.

While this is all very impressive, the question remains: what can GPT-3 do for business? I will explore the business applications of this technology in Part 2. Stay tuned!