GPT-2, or Generative Pre-trained Transformer 2, is an open-source large language model released by OpenAI in February 2019. It was introduced in the paper “Language Models are Unsupervised Multitask Learners” by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever.
Here are four key features of GPT-2:
- Large-Scale Model: GPT-2 is a large transformer-based language model with 1.5 billion parameters in its full version, making it one of the largest language models of its time.
- Pretrained on Extensive Data: The model is pretrained with a language-modeling objective on roughly 40 GB of text (the WebText corpus, scraped from about 8 million web pages).
- Predictive Capability: GPT-2 is trained with a simple objective: predict the next word, given all of the previous words in a text. This objective makes it highly effective for text generation and completion tasks.
- Scale-up of GPT: GPT-2 is a direct scale-up of the original GPT, with more than 10 times the parameters and trained on more than 10 times the amount of data. This makes it significantly more powerful and capable than its predecessor.
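The next-word objective described above can be sketched in plain Python. This toy helper (the function name and example tokens are illustrative, not from the paper) simply builds the (context, target) pairs a language model is trained on: at each position, all preceding tokens form the context and the following token is the prediction target.

```python
def next_token_pairs(tokens):
    """Yield (context, target) language-modeling examples: each token
    must be predicted from all of the tokens that precede it."""
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

# A six-token sequence yields five training examples.
pairs = next_token_pairs(["the", "cat", "sat", "on", "the", "mat"])
for context, target in pairs:
    print(context, "->", target)
# The first pair is (["the"], "cat"): predict "cat" from "the" alone.
```

During actual training, GPT-2 scores every such target with its transformer and minimizes the cross-entropy of those predictions across the corpus; generation then works by sampling the next token repeatedly from the same distribution.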