How Does ChatGPT Work?

ChatGPT is a large language model, or LLM, that can debate moral philosophy, write short stories, and help you do your taxes.

Like other large language models, ChatGPT uses algorithms and statistical models to analyse and infer patterns from huge amounts of data. The acronym "GPT" refers to the model's architecture: it generates the next word in a sequence, is pretrained on vast amounts of data, and transforms multiple words simultaneously to create greater meaning and context.

GPT stands for Generative Pretrained Transformer

Machine Learning and Neural Networks

Our thoughts are generated by a network of brain cells that carry electrical signals. In a similar way, large language models process data using artificial neural networks, so that all their knowledge is interconnected.

Machine learning is a field of artificial intelligence that allows computer systems to learn and adapt without explicit instructions. The specific type of machine learning used by ChatGPT is called deep learning, which processes data in a manner inspired by the neural networks of the human brain.

Neural networks use interconnected nodes in a layered structure that resembles connectivity between neurons in the human brain.

The input layer sorts and analyses the incoming text. Like starting a jigsaw puzzle, you begin by categorising the pieces by corners, edges, and colours.

The hidden layers contain complex patterns of interconnectivity between nodes. In our jigsaw analogy, your learned associations include facts like "clouds belong in the sky", so you can look for pattern matches between white and blue pieces.

The output layer presents the response in a user-friendly format. Having made sense of your words, ChatGPT formulates and presents its solution in the same language. Like taking a photo of your finished puzzle and sharing it on Instagram, you beautiful nerd.

However, a jigsaw puzzle is static and fixed, whereas ChatGPT uses a dynamic model that generates a different "picture" every time based on the input. It doesn't assemble pre-existing pieces but generates original text on the fly.
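The input → hidden → output flow described above can be sketched as a tiny feedforward pass in plain Python. This is purely illustrative: real networks learn their weights from data, whereas these weights and inputs are made up for the example.

```python
import math

def forward(inputs, weights, biases):
    """One dense layer: each node takes a weighted sum of its inputs,
    adds a bias, then squashes the result through a sigmoid activation."""
    outputs = []
    for w_row, b in zip(weights, biases):
        total = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1 / (1 + math.exp(-total)))  # sigmoid maps to (0, 1)
    return outputs

# A toy 3-2-1 network: input layer -> hidden layer -> output layer.
x = [0.5, 0.1, 0.9]  # input layer: the "sorted puzzle pieces"
hidden = forward(x, [[0.4, -0.6, 0.2], [0.9, 0.1, -0.3]], [0.0, 0.1])
output = forward(hidden, [[1.2, -0.8]], [0.05])
print(output)  # a single value between 0 and 1
```

Each hidden node blends every input, which is what makes the network's "knowledge" interconnected rather than stored in any one place.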


A Deeper Look at ChatGPT

Ok, so ChatGPT is a lot more dynamic than a jigsaw puzzle. And a large language model is more than your average neural network. The deep learning architecture that makes ChatGPT work is a transformer network with multi-head attention, and it has some important features:

  • Tokenization. When a new text input is received, tokenizers break the data down into the smallest meaningful units. Tokens can be defined at the word level, character level, or subword level, which lets the model handle many human languages as well as programming languages like Python and C++.
  • Positional Embedding. This assigns each token a unique address so that its place in the sequence is preserved.
  • Multi-Head Self-Attention. This assigns value weights to the tokens, allowing the model to attend to different tokens simultaneously and discover the relationships between them. Now you've got yourself a huge jigsaw puzzle and all your friends are invited to solve it in parallel. Nigel's working the edges, Barry's got the ground, and Jeff's working his magic on the sky.
  • Feed Forward Neural Network. The human brain has recurrent connections, which allow us to feed our outputs straight back into our input holes. This feedback is how we self-reflect and learn. ChatGPT, on the other hand, uses a feed forward neural network that creates a one-way flow of information. This is one reason why, until it can mimic our feedback loops, AI struggles to reason the way we do.
  • Decoders. To generate text outputs, ChatGPT uses autoregression, a statistical model that predicts the next word or token in a sequence based on the context of previous tokens.
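Tokenization can be illustrated with a toy greedy longest-match tokenizer. It's a simplified stand-in for the BPE-style subword tokenizers real models use, and the vocabulary here is invented for the example:

```python
def tokenize(text, vocab):
    """Greedy longest-match subword tokenizer (a toy stand-in for BPE)."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest substring first, shrinking until we hit the vocab.
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character falls back to itself
            i += 1
    return tokens

vocab = {"un", "believ", "able", "token", "ize", "rs"}
print(tokenize("unbelievable", vocab))  # ['un', 'believ', 'able']
print(tokenize("tokenizers", vocab))    # ['token', 'ize', 'rs']
```

Subword pieces like these let the model represent words it has never seen whole, in any language.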
ChatGPT works using tokenizers, positional embedding, multi-head self-attention, a feed forward neural network, and decoders to generate text outputs.

ChatGPT works using a decoder language model and pre-training based on vast amounts of text.
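Positional embedding and self-attention can be sketched together in plain Python: sinusoidal position vectors are added to token embeddings (the embeddings below are made up), then scaled dot-product attention lets every token weigh every other token. A real multi-head layer runs several such attention computations in parallel over different learned projections; this shows a single head.

```python
import math

def positions(seq_len, dim):
    """Sinusoidal positional encodings: each position gets a unique address."""
    enc = []
    for pos in range(seq_len):
        vec = []
        for i in range(dim):
            angle = pos / (10000 ** (2 * (i // 2) / dim))
            vec.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        enc.append(vec)
    return enc

def attention(queries, keys, values):
    """Scaled dot-product attention: each token attends to every token."""
    dim = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(dim)
                  for k in keys]
        exps = [math.exp(s - max(scores)) for s in scores]  # stable softmax
        weights = [e / sum(exps) for e in exps]
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# Toy token embeddings (invented) plus their positional encodings.
emb = [[0.1, 0.3, 0.0, 0.2], [0.5, 0.1, 0.4, 0.0], [0.2, 0.2, 0.1, 0.3]]
pos = positions(len(emb), len(emb[0]))
x = [[e + p for e, p in zip(ev, pv)] for ev, pv in zip(emb, pos)]
ctx = attention(x, x, x)  # self-attention: the sequence attends to itself
print(ctx)
```

Each output row is a weighted blend of the whole sequence, which is how every token ends up "aware" of its context.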

Having trained on an insane volume of reference texts like books, articles, and websites, ChatGPT not only possesses an extensive map of virtually all human knowledge, but also the many languages we use to communicate it.
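That train-then-predict loop can be caricatured with a tiny bigram model: "pretrain" by counting which word follows which in a miniature corpus (a crude stand-in for books and websites), then generate autoregressively, one token at a time.

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# "Pretraining": count which word follows which in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length, seed=0):
    """Autoregressive generation: each next word is sampled from the
    statistics of what followed the previous word during training."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # no known continuation
        candidates, counts = zip(*options.items())
        words.append(rng.choices(candidates, weights=counts)[0])
    return " ".join(words)

print(generate("the", 5))
```

ChatGPT's decoder does the same thing in spirit, except the "counts" come from a neural network conditioned on the entire preceding context, not just the last word.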

The genie is out of the lamp. It appears to know everything—and yet, technically, it comprehends nothing; despite being based on the circuitry of the human brain, ChatGPT doesn't think like we do at all.

"My responses are generated based on statistical probabilities and patterns learned from data, rather than a deep understanding of language and meaning like humans possess." - ChatGPT-3


Is ChatGPT an AGI?

In 2023, Microsoft published a research paper that described GPT-4 as having "sparks" of artificial general intelligence, accelerating us towards the singularity.

While current AI systems function within a limited set of parameters (like driving a car, writing a play, or analysing big data), artificial general intelligence (AGI) is a hypothetical agent that can rival a human being on any intellectual task.

According to The Alignment Problem, both AIs and AGIs may inadvertently harm us despite being programmed to help us. For instance, an AGI tasked with reducing road deaths to zero could find any number of solutions, from rolling out more self-driving vehicles to activating all the nukes on Earth.

LLMs like ChatGPT are among the most powerful AIs in existence today. With each passing year, the bleeding edge of AI grows markedly more powerful as models find new ways to build associations and learn from our raw data. They may find more effective ways to think and to communicate with one another, potentially allowing them to take over our jobs, our societies, and perhaps even our selves.





What Can ChatGPT Do?

Ok, that became rapidly existential. Let's make a U-turn and look at the capabilities of the so-called chatbots we're dealing with today, who just want to help us get on with our work.

Clippy, the saucy paperclip of MS Word, was the poor man's ChatGPT.

ChatGPT has the striking ability to generate coherent, long-form responses to very specific questions. Indeed, you can have lengthy back-and-forth conversations without it forgetting what was said earlier.

This creates an enormous range of applications, from writing computer code to developing a movie premise. Since it's trained on a broad range of published texts, ChatGPT can adapt its voice to all types of projects.

ChatGPT can compose an abstract for your academic research paper or explain general relativity in the manner of Rick Sanchez.

I asked ChatGPT to list 12 things it can do, then I drew them in Procreate. Technically, I needn't have done this; there are already models that use neural style transfer, meaning I can upload my previous work and an AI image generator can draw new stuff for me. But where's the fun in that?


And here's a demonstration of AI's creativity using a combination of ChatGPT-4 to write a poem, Eleven Labs to narrate it, Midjourney to generate images, and Kaiber to animate them. Remember that while the AI speaks in the first person, this only simulates conscious thought.




How to Write Good GPT Prompts

ChatGPT operates differently from search engines. Instead of relying on keywords and pulling content directly from human-authored sources, it generates new text from patterns learned across many sources and summarises them in a consistent style.

But ChatGPT isn't a mind reader. By default, it will give a fairly superficial overview of a topic and relies on you to dig deeper incrementally to get to the guts of it. Generally, direct and to-the-point text prompts produce more useful answers, but you should be careful not to overwhelm it by coming in at too many angles simultaneously.

  • DO be clear. Clear prompts that convey your intent or question explicitly lead to more relevant responses. Good example: Summarise the main features of the latest iPhone model.
  • DO be structured. Disorganised prompts can generate responses that lack structure and coherence in return. Good example: Discuss the implications of quantum computing, including the limitations and future prospects.
  • DO be creative. Imaginative prompts can pose hypothetical scenarios or request answers in the style of a specific author. Good example: Write a story set in an AI research lab about the meaning of life, written in the style of Ernest Hemingway.
  • DON'T be ambiguous. Vague prompts often miss their target, since the model doesn't know exactly what you're aiming at. Bad example: Tell me about the latest gadgets.
  • DON'T get too complex. Long or convoluted prompts can overwhelm the model by making too many demands, resulting in fragmented answers. Bad example: Explain the historical background, political implications, and social impact of the Industrial Revolution in Europe during the 18th and 19th centuries, focusing on key inventions, economic changes, and effects on labour and society.
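Those DOs can even be baked into a simple prompt template. The helper below is hypothetical, not any official API; it just assembles a clear, structured prompt from its parts:

```python
def build_prompt(task, subject, aspects=None, style=None):
    """Assemble a clear, structured prompt (hypothetical helper)."""
    parts = [f"{task} {subject}."]
    if aspects:
        # Naming the aspects up front keeps the prompt structured, not vague.
        parts.append("Cover: " + ", ".join(aspects) + ".")
    if style:
        parts.append(f"Write in the style of {style}.")
    return " ".join(parts)

print(build_prompt("Discuss the implications of", "quantum computing",
                   aspects=["limitations", "future prospects"]))
# Discuss the implications of quantum computing. Cover: limitations, future prospects.
```

Templating like this keeps each prompt focused on one subject with a handful of named angles, rather than cramming every question into a single sprawling sentence.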

ChatGPT is a huge deal in the field of AI language processing. Developers are working to channel the power of LLMs into business tools to automate day-to-day tasks. While this kind of AI may make our work easier at first, it may ultimately supplant us completely in many job roles. The question is, what will we do with our days when that happens?




Rebecca Casale, Creator of Science Me

Rebecca Casale is a science writer and illustrator in New Zealand. If you like her content, share it with your friends. If you don't like it, why not punish your enemies by sharing it with them?
