Modern AI systems are usually described in terms of numbers. A neural network takes an input, runs it through layers of mathematical operations, and produces an output. At first, this sounds mechanical and lifeless, as if the model is only pushing numbers around without any real structure inside.

But when researchers look inside these systems, something more interesting appears. Neural networks often develop internal representations that seem to correspond to recognizable concepts. Some neurons respond strongly to edges or textures. Others activate for faces, objects, animals, or even more abstract features. In language models, words and ideas arrange themselves in high-dimensional spaces where related concepts sit near each other.

This raises a deeper question: why do neural networks form concepts at all? Are these concepts real, or are they patterns we project onto the model because we are looking for familiar structure?

From Data to Representation

A neural network does not begin...