A company co-founded by billionaire Elon Musk has launched an artificial intelligence (AI) technology that can create coherent stories, write novels and generate computer code.

However, the technology has also drawn sharp criticism, as it can reproduce the racism and sexism present in the material it learns from.

GPT-3, OpenAI’s latest AI language model, is known for its ability to sustain a dialogue between two people: it can converse, ask relevant questions and answer them in turn. It can even complete sentences for you, drawing on the enormous body of text it has been fed over time.

Bruce Delattre, AI specialist at data consulting agency Artefact, said, “It is capable of generating very natural and plausible sentences. It’s impressive to see how much the model is able to appropriate literary styles, even if there are repetitions.”

The technology has proven useful for customer service, for lawyers who need to summarize a legal precedent and, more recently, for authors in need of ideas and inspiration. OpenAI’s latest offering has been met with a great deal of fascination and praise for its ability to generate text that resembles human writing.

“The model knows that a particular word (or expression) is more or less likely to follow another,” continued Delattre.
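Delattre’s description can be illustrated with a toy example. The sketch below builds a simple bigram model, a drastic simplification of GPT-3’s actual architecture, used here only to show what “a word is more or less likely to follow another” means in practice; the corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# Tiny toy corpus; GPT-3 itself was trained on text from billions of web pages.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows another (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Return the probability of each word that follows `word` in the corpus."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# After "the", "cat" is the likeliest continuation in this toy corpus.
print(next_word_probs("the"))
```

A full language model replaces these raw counts with a neural network conditioned on much longer contexts, but the underlying idea, scoring likely continuations, is the same.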

GPT-3 has been fed content from billions of freely available web pages. According to Amine Benhenni, scientific director at AI research and development firm Dataswati, what sets the system apart from its predecessors is the sheer size of the model itself.

To put the scale of the project into perspective, the entire online encyclopedia Wikipedia accounts for only about three percent of all the text the new AI has been fed.

“It’s amazingly powerful if you know how to prime the model well. It’s going to change the ML (machine learning) paradigm,” said Shreya Shankar, a computer scientist who specializes in AI, after using GPT-3.

However, impressive as it is, the technology is not quite there yet. According to Claude de Loupy, co-founder of France-based startup Syllabs, the system lacks “pragmatism”.

Another issue, probably the most important, is that the system often replicates whatever stereotypes or hate speech it encounters during training, so the risk of it producing anti-Semitic, racist or sexist output is high.

The technology could also be misused, for instance to write fake reviews or to mass-produce stories for a disinformation campaign, risks that sit uneasily with its use in sectors such as journalism or customer service.