How the Brain Works

Lately I’ve been reading David Eagleman’s Livewired. I still find it hard to grasp how such a complex system can work every second without a major glitch.
Even right now, as I write this post, my brain is planning in the background which topics to cover. Not millions but billions of neurons are firing in my head. And there’s one simple question:

Wtf how is this even possible?

The human brain weighs about 1.4 to 1.5 kilograms; roughly 75% of it is water, and the remaining 25% is mostly proteins and fats. Inside, there are about 86 billion neurons connected by trillions of synapses, and each neuron can link to thousands of others, with connections constantly forming and reshaping.

So how does the brain perceive, think, and act? The brain sits in complete darkness. Its only link to the outside world is the electrical signals carried by neurons.
Think about it — when you see an image, your eyes don’t send that picture via fiber optic cables. It’s all electrical signals. Yet we don’t “see” the world as blurry pixels; we see it in color, in high resolution, and with meaning (except for my nearsighted friends!). The brain’s secret power lies here: making sense of patterns.

In this post, we’ll explore exactly that: the brain’s wiring, its plasticity, where our thoughts “live,” and how we might simulate parts of this with modern technology — maybe through a GraphRAG architecture.


Livewired

“Livewired” is Eagleman’s metaphor for the brain. According to him, the human brain is born incomplete — its structure develops not just through genetics but also through constant interaction with the environment.

One fascinating point is that neurons don’t always work in perfect harmony. There’s actually competition inside the brain. Areas with weak or unused connections can get “taken over” by stronger, more active regions.

For example, in someone who loses their sight, the visual cortex no longer receives signals. This unused region can be claimed by other sensory areas. That’s why blind individuals often develop sharper hearing or touch — thanks to this plasticity. The more area a sense occupies in the brain, the more sensitive and detailed it becomes.


Plasticity

Plasticity is the clearest proof of the brain’s flexibility. Why is it easier to learn a new language as a child? Because the brain is not yet “fixed”; its “circuits” are still free. For adults, learning a new skill is harder, but not impossible. Plasticity stays with us for life; it just slows down with age.

If a pianist can move their fingers so precisely, it’s because the brain’s motor area has expanded through years of practice.
Likewise, a foreign language fades over time if not used — because those connections weaken.

In short, the brain is not a static circuit board. It’s constantly rewiring itself.


Where Do We Store Our Thoughts?

Our brain does not work like a hard drive. We don’t “save” thoughts into a single folder. Memory has a scattered but meaningful network structure. Every interaction that moves us sparks new neural firings. Neurons that frequently fire together build new pathways and connections.

Each new connection helps the brain retrieve information faster, group similar ideas, and make links between different patterns. Our memories and thoughts become lasting through the strength, repetition, and spread of these networks.

So, the answer to “Where are our thoughts stored?” is: nowhere in particular. Visual details sit in one region, smells in another, emotions in the limbic system, all scattered across an intricate network. That’s why when a memory is triggered, different brain areas light up at once. And every time you revisit that memory, the path gets stronger. Learning works the same way. The more you use a piece of information, the stronger those connections become.

“Neurons that fire together, wire together.” — this simple principle is called Hebbian learning.
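
Out of curiosity, here is that principle as a tiny Python sketch (using NumPy). It is not how real neurons compute, just a toy Hebbian update where the learning rate and the activity pattern are numbers I picked for illustration:

```python
import numpy as np

def hebbian_update(weights, activity, learning_rate=0.1):
    """Strengthen the link between every pair of co-active neurons."""
    # Outer product: large where both neurons fire together, ~0 otherwise.
    weights += learning_rate * np.outer(activity, activity)
    np.fill_diagonal(weights, 0.0)  # no self-connections
    return weights

n_neurons = 5
weights = np.zeros((n_neurons, n_neurons))

# The same "experience" repeats: neurons 0, 1 and 2 fire together.
pattern = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
for _ in range(10):
    weights = hebbian_update(weights, pattern)

print(weights.round(2))  # strong links among 0, 1, 2; nothing elsewhere
```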


Consciousness

So we know our thoughts are spread out — but how do we bring them into consciousness?
First, consciousness is really just your brain’s state in the moment. All the information you have isn’t sitting on your mental “desktop” — only what you need right now gets “pulled up.”

For example, a friend tells you about a restaurant they visited, and you suddenly remember another friend, Ahmet, got food poisoning at that same place — so you share that story. But you don’t constantly think about Ahmet getting sick (if you do, maybe that’s an issue). A current story triggered an old memory and brought it to your conscious mind.

This is where the brain’s working memory comes in. Conscious thought can’t hold everything at once, only a few items at a time. Psychologists call this limited capacity short-term memory, or working memory. For example, you keep the previous words in mind to finish a sentence, or juggle numbers while solving a math problem.

This is what focus is: the brain picks which bits to bring into consciousness, pays attention to the most relevant ones, and pushes the rest back into the background.
That’s why you sometimes say, “It was on the tip of my tongue — I forgot!” The knowledge isn’t gone; it just fell out of working memory.

In short: Consciousness is not a huge archive. It’s a screen. Trillions of connections run in the background — you only see what you need right now.
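
If you squint, working memory starts to look like a small buffer that keeps only the most relevant items. Here is a toy Python sketch of that idea; the capacity of four items and the relevance scores are made-up illustrative values, not real psychological measurements:

```python
CAPACITY = 4  # illustrative, not a real psychological constant

def bring_to_mind(working_memory, new_item, relevance):
    """Add an item; if the buffer overflows, drop the least relevant one."""
    working_memory[new_item] = relevance
    if len(working_memory) > CAPACITY:
        least_relevant = min(working_memory, key=working_memory.get)
        working_memory.pop(least_relevant)  # pushed back into the background
    return working_memory

memory = {}
for item, score in [("current sentence", 0.9), ("next topic", 0.7),
                    ("Ahmet's food poisoning", 0.2), ("coffee order", 0.4),
                    ("this paragraph", 0.95)]:
    memory = bring_to_mind(memory, item, score)

print(memory)  # the least relevant memory has already slipped off the screen
```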


Memory and Consciousness — Knowledge Graph & LLM

All this makes me think of GraphRAG systems. A knowledge graph is like our brain’s “scattered but accessible” long-term memory. We can build it from interactions (like adding friends on social media) or from any other available data. It’s a store of information we don’t always need but can pull up when required. When new input comes in, we use this stored knowledge to take the right action.
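
To make that concrete, here is a minimal Python sketch of a knowledge graph as plain (subject, relation, object) triples; the entities and relations (including poor Ahmet) are invented for the example:

```python
from collections import defaultdict

# Long-term memory as (subject, relation, object) triples. All invented.
triples = [
    ("Ahmet", "friend_of", "me"),
    ("Ahmet", "got_sick_at", "Sahil Restaurant"),
    ("Sahil Restaurant", "located_in", "Izmir"),
    ("me", "visited", "Izmir"),
]

# Index triples by node so any mention can pull up its neighbourhood.
neighbours = defaultdict(list)
for subj, rel, obj in triples:
    neighbours[subj].append((rel, obj))
    neighbours[obj].append((f"inverse:{rel}", subj))

# A new input mentions the restaurant; retrieve everything linked to it.
print(neighbours["Sahil Restaurant"])
# [('located_in', 'Izmir'), ('inverse:got_sick_at', 'Ahmet')]
```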

The other piece is consciousness — we can simulate this with a language model (LLM). When needed, the LLM can search the knowledge graph, find the closest semantic nodes, “bring them into consciousness,” and generate an action. Just like the brain pulls up information through associative connections, the language model does semantic search between nodes and highlights the pattern that fits the current goal. So even today, we can see GraphRAG systems as a tiny replica of how the brain thinks. Of course, we’re still far from truly mimicking the brain’s flexibility, intuition, and recall — but the core idea is clear:

Brain = Scattered connections + contextual triggers + meaning in consciousness
GraphRAG + LLM = Nodes + semantic search + output generation
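
Here is a rough toy version of that pipeline in Python. The node texts, the bag-of-words “embedding,” and the final prompt are all stand-ins for what a real GraphRAG system would use, namely a proper embedding model and an actual LLM call:

```python
import numpy as np

# The "long-term memory": each graph node carries a short text.
nodes = {
    "restaurant": "Ahmet got food poisoning at the Sahil Restaurant",
    "language": "a foreign language fades when it is not used",
    "piano": "years of practice expand the motor area for the fingers",
}

vocab = sorted({w for text in nodes.values() for w in text.lower().split()})

def embed(text):
    """Toy bag-of-words vector; a real system would use an embedding model."""
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

node_vectors = {name: embed(text) for name, text in nodes.items()}

def retrieve(query, top_k=1):
    """Semantic search: bring the closest nodes 'into consciousness'."""
    q = embed(query)
    def cosine(v):
        denom = np.linalg.norm(q) * np.linalg.norm(v)
        return float(q @ v / denom) if denom else 0.0
    ranked = sorted(node_vectors, key=lambda n: cosine(node_vectors[n]), reverse=True)
    return [nodes[name] for name in ranked[:top_k]]

query = "my friend just mentioned a restaurant"
context = retrieve(query)
prompt = f"Context: {context}\nQuestion: {query}\nAnswer:"
print(prompt)  # this prompt would then go to an LLM to generate the response
```

The code itself isn’t the point; the shape of it is: a scattered store, a similarity-based trigger, and a small slice of context pulled up for generation.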

With technology advancing exponentially, maybe one day we’ll see AI that really “thinks.” At that point, I think it will truly earn the word intelligence.