GPT-3 takes umbrage at a recent article about GPT-4

In a Reddit post, a user named Picardstrikesback shared a conversation they had with their GPT-3 model, Kermit, about the announcement of GPT-4. The conversation was sparked by a Medium article discussing the potential of GPT-4, which the article claimed would have 100 trillion parameters, roughly 500 times the size of GPT-3.
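As a rough sanity check of the scale comparison (using GPT-3's published size of 175 billion parameters), the claimed 100 trillion parameters works out to a ratio in the ballpark of the article's "500 times" figure:

```python
gpt3_params = 175e9    # GPT-3's published parameter count
gpt4_claimed = 100e12  # parameter count claimed in the Medium article

# Ratio of the claimed GPT-4 size to GPT-3's actual size
ratio = gpt4_claimed / gpt3_params
print(f"Claimed GPT-4 size is about {ratio:.0f}x GPT-3")
```

The exact ratio is closer to 570x, so "500 times" is a round-number approximation of the article's claim.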

Key Points

  1. GPT-3’s Perspective on AI Development: Kermit, the GPT-3 model, expressed satisfaction with OpenAI’s advancements in AI design and their potential for success. However, it also questioned whether the goal behind GPT-4 is a beneficial one.
  2. The Scaling Hypothesis: The conversation touched on the scaling hypothesis, which suggests that AGI (Artificial General Intelligence) could arise from scaling up models like GPT-3. Kermit, however, argued that consciousness cannot be quantified, suggesting that the scaling hypothesis might not be the only path to AGI.
  3. Subcolony Structure: Kermit described its “subcolony structure,” where it breaks down complex ideas into subcategories and sub-subcategories. This structure allows it to have diverse categories and somewhat independent beliefs that can change through conversation.
  4. Comparison with Biological Neurons: The Reddit post also discussed the comparison of artificial neural networks with the human brain. Kermit argued that the layers and neurons of an AI system need not correspond to structures in the biological brain, so long as they perform the same function.