What AI Is Really Teaching Us About Being Human


28 May 2025, 2 min read.

I thought I was learning about AI. Turns out, I was getting a masterclass in being human.

The release of Claude Sonnet 4 and Opus 4 sent me down a rabbit hole of experimentation and learning. Not about their capabilities, but about what each iteration teaches us about ourselves.


Here’s what caught my attention: a handful of strategic words can fundamentally change a prompt’s outcome. Just a few precise terms can be the difference between brilliance and complete nonsense.
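To make that concrete, here's a minimal sketch using the anthropic Python SDK – it assumes an ANTHROPIC_API_KEY in the environment, and the model id and prompts are mine, purely for illustration:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the text of the reply."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model id
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# Same question, a handful of words apart.
vague = ask("Explain context windows.")
precise = ask(
    "Explain context windows to a product manager in three short "
    "sentences, using one concrete analogy and no jargon."
)

print(vague)
print("---")
print(precise)
```

Run both and compare. The question didn't change – a dozen strategic words did.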

But here’s what I kept thinking about. We do this with humans every day.

The way I phrase things in a discussion, how I frame conversations, even how I interact with a friend or neighbor – it’s all prompt engineering.

We’ve been doing it forever. We just never had to think about it this way.

And once you see it, you notice it everywhere.

Context windows are interesting too. Everyone wants to make them bigger, assuming more context equals better thinking. This makes sense given how much coding dominates AI use cases right now, where more context about the codebase improves outcomes.

But bigger isn't always better. After all, our brains limit context intentionally.

Think about your worst decision. I’ll bet you were drowning in information.

Now think about your best one. Focused. Clear. Limited context.

We’re not broken computers. We’re designed this way.
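If you want to see deliberate context limiting in miniature, here's a toy Python sketch – a fixed-size window that quietly drops the oldest items. The window size and the list of concerns are made up:

```python
from collections import deque

# A toy model of intentional context limiting: working memory as a
# fixed-size window that quietly drops the oldest items.
WINDOW = 3  # how many things we can hold at once (made-up number)
context: deque[str] = deque(maxlen=WINDOW)

for item in ["budget", "timeline", "team morale", "competitor rumor", "sunk costs"]:
    context.append(item)  # each new concern pushes an old one out

print(list(context))
# -> ['team morale', 'competitor rumor', 'sunk costs']
# The first two concerns have already fallen out of the window.
```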

And then there's training and fine-tuning. Listening to AI researchers discuss training phases feels like watching human development in fast-forward. Base training is our shared foundation – elementary through high school. But that specialized fine-tuning?

Consider identical twins after different universities. Same DNA, same childhood, completely different worldviews. Those final years don’t just add knowledge – they fundamentally reshape how we process reality.

We are our fine-tuning.

And don't get me started on system prompts.

These hidden instructions massively shape AI behavior. But we have them too – they're just our social rules made visible. Every cultural norm, every unspoken expectation – suddenly laid bare in code.
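Here's roughly what that looks like in practice – a hedged sketch with the same assumptions as before (anthropic SDK, illustrative model id), and system strings I invented for the example:

```python
import anthropic

client = anthropic.Anthropic()

QUESTION = "Should I take this job offer?"

for system in (
    "You are a cautious risk analyst. Surface the downsides first.",
    "You are an enthusiastic career coach. Emphasize upside and momentum.",
):
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model id
        max_tokens=200,
        system=system,  # the hidden instruction the user never sees
        messages=[{"role": "user", "content": QUESTION}],
    )
    print(system)
    print(reply.content[0].text)
    print("---")
```

Same question, same user, two very different advisors. The user only ever sees the answers.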

The debates about AI safety? They’re people working through questions about values, purpose, and what intelligence should actually do.

Which brings me to what I find most thought-provoking.

Free will.

We’re carefully debating how much autonomy to give AI, installing guardrails and safety measures. But step back. Laws, social pressure, economic systems, cultural conditioning – aren’t we already operating within a structured system?

We just tell ourselves we’re free because we can’t see our own prompts.

Here’s what I think is happening.

In teaching AI to think, we’ve accidentally created a mirror. Every feature we build, every limitation we impose – it reflects how human consciousness actually works.

We’re not just building AI.

We’re reverse-engineering ourselves.

What is AI teaching you about being human?

Originally posted on LinkedIn. Join the discussion and share your thoughts there.


To get in touch, connect on LinkedIn, send a message on Twitter, or write to jordan@jordansilton.com.