How We Think: a Simple Model

Drilled Mind by Roman Klco

Forgive the clumsy workings of a mind untutored in philosophy of mind. But it has always been my way to read, and think, and argue, and then try to pull the threads together and see what emerges. What I lack in training I will try to make up for with brevity.

Crucial for understanding anything is understanding the context. The context is what is meant when people speak of the whole truth as opposed to partial truths. However, context—the whole truth—is boundless, and so whenever we attempt to grasp it, we’re always projecting and simplifying.

A partial truth might be dropping something, and a projection of the whole truth would be classical physics.

We can make sense of the centuries-long tug-of-war between Enlightenment rationalists and empiricists from this perspective. The rationalist argument boiled down to the idea that we can never make sense of observed truths, which are always partial, without working out the whole truth abstractly beforehand. Empiricists believed essentially the opposite: you add up enough partial observations until you arrive at the whole truth.

Philosophical hermeneutics for the past century has taken another path, saying, in essence, you need both at once.

As Jonathan Haidt points out, people don’t really have a rational model of morality in mind when they make in-the-moment ethical judgments. His observation could be extended to nearly any judgment; more of the whole gets left out than is brought in explicitly. But there is an important two-way relationship between part and whole here. So how does it work?

Crucially, we externalize most of the tools for our thinking; Andy Clark calls this “extended cognition.” Joseph Heath and Joel Anderson point out that the most important way individual knowledge is externalized is through the social environment; that is, through other people. In large part this is accomplished by trusting the people in our lives and the people we perceive to be authorities on a subject, and having faith that the apparent contradictions or limitations to what is known can either be worked out, or aren’t very important.

But neither the trust nor the faith is blind. When partial truths, or assertions by people we trust, conflict with the whole truth as we understand it, or as other people we trust have explained it, they become subject to reevaluation.

Essentially, what the typical person brings to the fore in confronting the partial truths of daily life is not an actual model of the whole, but prejudices. Hans-Georg Gadamer speaks of prejudices in just this sense as “pre-judgments,” similar to how a judge will make provisional judgments that influence decisions throughout the trial, before actually rendering a final verdict.

These prejudices aren’t simply mindless assumptions or “givens,” and neither is the process by which we come to them. They always point back toward some skeleton projection of a whole truth, which can be fleshed out and scrutinized. Haidt leans hard on the fact that people will cling to their prejudices even when unable to come up with any reasons to justify them. But the fact that I might not be able to remember how to solve a quadratic equation in the moment does not invalidate mathematical reasoning as the correct way to solve such an equation. Nor does it mean I should abandon my belief in mathematical reasoning!

Context matters, and part of specifically human context is the knowledge that we have largely externalized. Haidt and his researchers are presumably complete strangers to their subjects, and the subjects in question knew that they were inside of a psychology experiment. Would the subjects have treated the matter differently if they were in a setting they trusted to be private, discussing the questions with a priest, a teacher, or other trusted moral authority who challenged their prejudices and explained the reasoning for doing so?

The very best projections of the whole truth that humanity is capable of mustering can get incredibly complex, and take years of training to really understand. Physics, chemistry, advanced mathematics, computer science, but also moral systems, offer up huge, interconnected sets of theories. Connecting these systems to people’s prejudices is a matter of persuasion. Non-specialists must be persuaded that specialists have authority on a given subject, and specialists must also persuade one another on any given point of contention.

Persuasion isn’t a matter of cold, logical syllogisms, but of rhetoric. But that doesn’t mean it is irrational. It appeals to thought processes as they actually occur in people. Among specialists, of course, this will often involve a great deal of technical details. But technical details alone cannot tell the whole story of what we ought to believe or why we ought to change our point of view.

In every case, those attempting to persuade will draw on narratives, metaphors, and examples from life that make the subject, as well as what is at stake, concrete for the people they want to persuade. They will employ a situated reason. But reason, nonetheless.
