In a world increasingly obsessed with metrics, be it the number of followers on social media, KPIs at work, or university cutoff scores, an uncomfortable question arises: to what extent are we playing a game we didn't create?

The person raising this question is the American philosopher C. Thi Nguyen, a professor at the University of Utah and a specialist in the philosophy of games applied to values and human thought.

Recognized for his work on how rule systems shape our freedom and creativity, Nguyen has just released The Score – How to Stop Playing Somebody Else's Game, which is not yet available in Brazil.

In the book, he investigates how the logic of scoring and constant measurement, so prevalent in schools, companies, and digital platforms, can both offer clarity about what matters and erode our autonomy, reducing complex human experiences to numbers.

“While studying games and gamification, I realized there was one story in which clear scoring systems were seen as the source of freedom and fun,” says Nguyen in an interview with NeoFeed.

"But the other story is that simplified scoring systems were the death of everything that was good. I thought, 'Wait, how do these two things work together?' That's how it all started, trying to understand why," he adds.

According to the philosopher, numbers in life offer, on the one hand, clarity about what matters. On the other, they drain creativity, purpose, and freedom.

"Institutions need to realize that there are many important things, and many of them are difficult to express: values of community, of happiness, of serving humanity," he says.

In a conversation with NeoFeed, Nguyen explains why the culture of metrics can be as seductive as it is dangerous, and how we can reclaim our autonomy in a world that insists on reducing us to numbers.

Below are the main excerpts from the interview:

What differences have you observed between this pursuit of points in games and in real life?
This answer has two parts. One of the things that happens with good games is that they are often carefully designed to be interesting or fun for the player. In this logic, the scoring system is like the artistic medium of the game designer, and they are constantly adjusting it. In many cases, if you change the scoring a little, suddenly players are more encouraged to be creative, or to cooperate, or to engage in interesting conflicts. Furthermore, the gaming ecosystem offers you an enormous range of choice, right? If you don't like the game you're playing, you can change it, you can create "house rules," and so on. You have a choice.

On the other hand...
Scoring systems for large-scale institutions are not adjusted in this way. They are typically tuned to provide information quickly, and are often limited to what is easy to measure quickly.

Do you have an example of this usage?
One of my favorite examples is “screen time.” I mean, everyone cares about it—I have kids. We worry about screen time, but the metric doesn’t track what really matters. Sometimes my son’s screen time is him playing games that are total garbage or watching bad shorts on YouTube. And sometimes it’s him building logic gates in Minecraft or animating a video. Or telling a story with friends on a call. And it all ends up being mixed together and becomes one thing. The reason it all gets mixed up is because screen time is something very easy for a device to measure. So often, a major limitation is just this ease of measurement.

Is there room for creativity in this points system?
I think there's room for that, but it's very unlikely, given the institutional logic. Think of it this way: I have a bit of creative latitude with grades in my classroom. I have a little control over how the grades work. I can, for example, offer my students several ways to get an 'A', or slightly different exercises. I can play around with it a bit. Here, the set of rules and the grading system are being adjusted in light of something bigger, something that isn't mechanical. The problem is that, in many institutions, this cycle doesn't happen. The mechanical grading system is the end of the line.

Is there a way out?
You're aiming to deliver more products, more profit... And if you don't have that more reflective stance, then your values become limited by the 'mechanics,' you understand? Games can be guided by something that isn't mechanical, but institutional logic rarely asks us to alter that system. And when we do alter it, the structure of justification is tied to the metrics themselves. There's no point of view behind the metrics.

"The healthiest perspective is to view metrics as a small piece of information that is very limited."

Do you believe there is a healthy way to use these metrics in the corporate world?
For me, the healthiest view is to see metrics as a small piece of information that is very limited, but useful because of its accessibility. Theodore Porter, the historian of quantification culture who was one of the main sources for my book, says that qualitative reasoning is rich, complicated, and open, but it travels poorly between contexts. Quantification, on the other hand, travels well between people with very different backgrounds because it was designed for that: because we removed the 'high context' from it.

What does this mean?
We remove everything that requires a huge amount of context and leave something that is easy for everyone to count, such as page views, clicks, likes, screen time, products—something that doesn't require specific sensitivity to be noticed. This is very powerful because it allows us to communicate quickly and allows us to aggregate. That's where big data comes from. But there are many questions…

How would you summarize it?
You can consciously use metrics as just a rough, simplified approximation of a complex, multidimensional value system; that is, realize that they are only a hyper-simplified language we have created to communicate quickly.

Have you ever felt the weight of metrics in your own life?
Every day. I started all this because I had to report the learning outcomes of philosophy education to legislators who wouldn't accept things like, 'oh, they're more curious and virtuous.' They want to see graduation rates. Right now, humanities departments are being cut. Philosophy departments, arts departments… they're being removed from students' education because what they aim for isn't readily quantifiable. A program that quickly gives students a measurable skill is valued. Another that's trying to make students more flexible, open, and ethically critical is very difficult to measure. And so they tend to lose institutional disputes where people want clarity.

Is it possible to interfere in this dispute?
It's good to say that our decisions are 'data-driven', but we have to remember that data is very limited, especially at an institutional scale: limited to the kinds of things that are easy for institutions to collect, and to the kinds of things that institutions have decided they will collect.

With artificial intelligence, these metrics only become more relevant, right?
Yes. I don't understand everything about machine learning, but I know it involves optimizing a learning algorithm toward a target, and often those targets are just whatever we have available. There are machine learning algorithms optimized to select students with an eye to 'student success', but that success is defined in terms of graduation and employment rates, not in terms of other things.

What would those other things be?
One of the stories I told in my book is about meeting artistic AI developers who were optimizing the technology to make 'good art.' And they had defined 'good art' as increasing hours of engagement with the Netflix catalog. And that's not good art, but we also don't have data for what counts as good art, because it's not the kind of thing that's a simple, transcontextual, and mechanically measurable quality. So, it's very difficult to target.

"Human value is the kind of thing that's difficult to capture in a metric because it has the two fundamental characteristics of being difficult to measure."

Where do you believe human value fits into this scenario?
Human value is precisely the kind of thing that's difficult to capture in a metric because it has the two fundamental characteristics of what is difficult to measure. One is that it is subtle, and the other is that it is variable. Metrics tend to move toward what is publicly accessible and what is stable across different contexts.

But what is valuable?
What is valuable often involves, I think, a great deal of specialization. For example, what is valuable in an art form involves extensive experience. What is valuable in a field like philosophy, literature, art, or sociology requires being 'immersed' in that field. Therefore, metrics tend not to capture that. The other issue is that many human values are highly variable from person to person. And, again, that's not the kind of thing metrics can capture.

Do you have a practical example of this?
The philosopher Elizabeth Barnes, who works on the nature of health, has a beautiful argument, which I present in my book, that is clearer than anything I've ever formulated about why health won't be captured by a metric. For her, the concept of health is relative to interests, and those interests vary from person to person. So, what do you mean by a 'healthy knee'? She says: 'Look, a healthy knee for an Olympic athlete, who needs peak performance for the next four years and cares less about the long term... that notion of health is different from that of a person who wants lower-level performance, but without pain and over the long term.'

It's more complex than we're used to.
These are different notions of health. And since health is relative to interests, and interests vary drastically from person to person, it's not the kind of stable thing you can grasp. That's why we tend to focus on things that are a little easier, like life expectancy and mortality rates, and assume that's what matters for health.

Do you believe that if we focus only on goals, everyone will mold themselves to be the same?
I think this monotonizes humanity. Values should be subtle and diverse. And part of what's good about people and our natural makeup is that we can pursue our own bizarre notions of value. We can spread ourselves out and value different things in different ways. One way I put it is that when you're captured by metrics, you're outsourcing your values. And we're all outsourcing ours...

How big is this problem?
One problem with outsourcing is that it's not tailored to you. If we all outsource our values to a single thing, suddenly we'll all be valuing in the same way. And we won't be seeking all the strange, different, and beautiful things we could find. We'll be seeking things based on the fact that they are highly accessible, understandable, and quickly quantifiable at scale. And that's a very limited way to be a person.

"The way we should interpret boredom is as a call to change our values. And often, that means rethinking things."

Does this perspective directly connect to a lack of motivation at work and in life?
The philosopher Elijah Millgram has a wonderful article called "On Being Bored Out of One's Mind," where he argues that engagement and fulfillment are signs that you have values that fit you and your environment, and boredom is a sign that you have values that don't. And the way we should interpret boredom is as a call to change our values. And often, this means rethinking things: changing jobs, or simply rethinking how you're doing what you do.

What if it's not possible?
If you're fixated on an external value and it dominates your decision-making, you won't make that transition—because part of how it works is being able to hear a subtle signal and then be able to change. And that doesn't happen if there's a fixed, clear, and noisy value reference that's constantly shouting at us.

How do you believe companies can use metrics to help create a better environment?
In my opinion, institutions need to realize that there are many important things, and many of them are difficult to express: values of community, happiness, serving humanity... It's very easy for a company to think something like: "Well, we're making a good product and it's going to help people, and how do we measure if it's helping? Well, we should just look at the number of products shipped." That's a quick measure of something. But then, obviously, you'll end up optimizing for something very different from what really matters. That's why I think we should constantly subject a metric to the scrutiny of the question: 'Does this capture what really matters?'

Taking all these points into consideration, how do you envision the future ahead of us?
I don't know. I'm a little terrified by the possibility that what's happening is what I call value collapse, which occurs when our social systems give all the power to people who are willing to ignore what's important and to focus obsessively on metrics. That would create a feedback loop where all the people in power are those willing to eliminate all the subtle signals of what's truly important in order to just chase the metrics.

What if that happens?
If this is correct, we're doomed. Maybe it's not. Maybe there are systems we can build to encourage people, reward them, and incentivize them to step outside the metrics. But, given the way institutions seem to centralize around a highly legible metric, then... I don't know. The pessimist in me thinks this is just the cycle. The optimist thinks we have many social movements moving away from metrics, but they aren't very respected.