Procedural Definitions

In Enlightenment Europe, it was believed that in Ancient Sparta, part of the educational system was for young boys to steal things from the adults in their community. The thinking was that, by training the boys to sneak around, the Spartans were aiding their military training, which was of paramount importance to Spartan culture.

Is it theft for the boys to do that?

It doesn’t particularly feel like it ought to be understood as theft to me — the society that they’re in sanctions it, and presumably the objects taken that way could be returned. Though, some part of me still slightly feels like it is theft.

In Social Studies 10 today, the professor suggested a method of resolving such confusion. Namely, one should view the question of whether or not something is “theft” as simply the question of whether or not a particular well-defined process labels it as “theft”. On this view, in the same way that “illegal” means “judged-by-the-law-as-illegal”, “wrong” means “judged-by-X-as-wrong”, and the main difference is just that when you’re part of the process that is doing the labeling, the label feels true rather than just trivial.
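To make that move concrete, here is a minimal sketch in Python (the names Act, spartan_custom, and modern_law are all made up for illustration) of treating “theft” as nothing more than the output of whichever labeling procedure you happen to run:

```python
from dataclasses import dataclass

@dataclass
class Act:
    item: str
    owner_consented: bool
    sanctioned_by_community: bool

def spartan_custom(act: Act) -> bool:
    # Under this procedure, the community's sanction settles the label.
    return not act.sanctioned_by_community

def modern_law(act: Act) -> bool:
    # Under this procedure, the owner's consent settles the label.
    return not act.owner_consented

boys_taking = Act(item="food from an adult's table",
                  owner_consented=False,
                  sanctioned_by_community=True)

for procedure in (spartan_custom, modern_law):
    print(procedure.__name__, "labels it theft:", procedure(boys_taking))
```

The same act gets opposite labels from the two procedures, which is the professor’s point: the label tracks the procedure, not some procedure-independent fact.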

I think that there’s some truth to that, and that it’s definitely interesting to think about, but I think it’s a little unnecessarily relativistic, and that most of the interestingness/use of this thought can come through in a way that still allows for correctness in ways that I want.

Sociology of Science and History of Science have bumped into similar issues. In particular, given that scientists are responsible for making theories and that they are humans in a particular social institution, why should we think that scientists are correct in a more universal way than practitioners of other subjects, who also make claims and function within a particular social institution? Why do we feel that politicians or authors are relative, while scientists are not?

One might say that it’s because scientists make falsifiable predictions, but leaving it at that is incomplete in light of the history of science. There are many examples of scientific controversies in which a particular prediction is made, and scientists disagree as to whether the experimental results falsify or support a particular theory. Sure, the theory may be falsifiable, but any particular falsification attempt is open to interpretation. More worryingly, scientists are themselves the ones who determine the criteria by which falsification is or isn’t achieved.

And yet, I think that there’s still a fairly real sense in which Science is objective.

The resolution somewhat stems from the fact that most scientific arguments in fact center around whether or not an instrument works, or whether or not an experiment is valid, rather than being about what the instrument was observed to have said. If we take the predictions to be perfectly concrete, and rather than saying, for instance, “an electron will pass through this slit”, say “we will observe dots on this plate”, then most of the arguments are resolved.

If we talk about what the measurement instruments do, rather than what happens in the world, we can dodge the social-nature objections somewhat. What the instrument does is objective; it is its relation to the theory that is contentious and somewhat subject to “social” influences.

Every test of a theory by an instrument implicitly includes the hypothesis that the instrument measures the thing that is thought to be relevant to the theory, and when you modify theories to include that hypothesis, they all become falsifiable. If a scientist wishes to claim that their theory X was true even though experiment B failed, they are in effect arguing that they are right about X, but that they were wrong about B.

In the physical sciences, this is a pretty okay position to be in, since the theories have so much influence over the instrumentation. As you build different instruments according to your theories, you can check your theory for inconsistency by using a variety of instruments, and then comparing their measurements to your predictions about what they would claim to have measured. If they are inconsistent, then your theory is effectively disproven.

That doesn’t sound particularly powerful as a method, but it gets you surprisingly far, by virtue of the fact that assembling a body of mutually consistent scientific instruments is very hard, and often this is all you need for a disproof.

For instance, in the 18th century there was an argument over how to build thermometers. The problem was that most thermometry assumed the linear expansion of some material with respect to increases in temperature, but it was difficult to know that something expanded linearly with respect to temperature when you didn’t have a thermometer yet that could tell you what temperature it was.

As is very nicely explained in “Inventing Temperature”, the problem was still tractable though — a Frenchman built a ton of different kinds of thermometers of different shapes and sizes, then checked their temperature readings against each other across a variety of ranges. It turned out that only the air thermometers actually agreed with each other, and so the rest were abandoned.

If your theory about the instrument is correct, then it ought to behave as you expect. If you build a variety of instruments, all differing in theoretically irrelevant ways, you can check whether a theory is experimentally consistent based on whether the things that were supposed to be theoretically irrelevant turn out to in fact be theoretically irrelevant.
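A toy version of that cross-check, sketched in Python with invented expansion curves (none of the numbers or instrument names below are real data from the book), might look like this:

```python
# Build instruments that differ in theoretically irrelevant ways (working
# fluid, bulb shape), read them against the same unknown heat states, and
# keep only the family whose readings mutually agree.

def air_narrow_bulb(true_temp):
    return true_temp                           # expands (nearly) linearly

def air_wide_bulb(true_temp):
    return true_temp + 0.1                     # same principle, different shape

def spirit_of_wine(true_temp):
    return true_temp + 0.002 * true_temp ** 2  # noticeably nonlinear expansion

instruments = {
    "air (narrow bulb)": air_narrow_bulb,
    "air (wide bulb)": air_wide_bulb,
    "spirit of wine": spirit_of_wine,
}

heat_states = [10, 40, 70, 100]  # shared baths of unknown "true" temperature
tolerance = 0.5

names = list(instruments)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        gap = max(abs(instruments[a](t) - instruments[b](t)) for t in heat_states)
        verdict = "agree" if gap <= tolerance else "disagree"
        print(f"{a} vs {b}: {verdict} (max gap {gap:.2f})")
```

Only the family of instruments that agree with each other across the whole range survives; a disagreement anywhere means either an instrument, or your theory of it, is wrong.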

In both Political Philosophy and History of Science, then, it turns out to be a useful trick to view “X” as “the procedure that we use to determine X says X”.

Originally this post was called “Cybernetic Government”, because I wanted to emphasize that the professor only used the trick for definitions, and not for checking results. A government or society can define some X, but if there are other ways to look at X that, within the bounds prescribed by the society, should give the same result but don’t, then it can be called “wrong”.

Suck it relativism.


Culture, Limitations, Fixity

My Social Studies professor brought up a fairly terrifying example a while ago. Apparently in Fiji, every object is given a particular place in society, fully determining its uses and possibilities. Foreigners are (explicitly) allowed to do anything, and so if they teach you how to use a musket, you’re allowed to learn how to shoot it; but if you then want to use it as a weapon, you have to use it as a club, because clubs are for killing. What you can do in general is also culturally enumerated, so for instance you can’t kill your chief, because by definition the chief is able to kill you, and so if you tried to you would die, as an inevitable fact about the interactions of your different magics rather than as an empirical fact about the chief’s relative power and standing.

Apparently, culture can be arbitrarily limiting.

But wait! We’re totally post-Enlightenment westerners who know better, right?

I’m sort of worried about that.

In the (somewhat recent) past, there were a lot of examples suggesting otherwise. Part of the way that the Americans won the Revolutionary War was by deliberately shooting at British officers. This was considered very rude and ungentlemanly at the time, and very much against the traditions of aristocratic officer corps. The Americans just happened not to have as much of an aristocracy, and so they cared less.

Similarly, in WWI and WWII, a major innovation was using planes to drop bombs on supply lines. Prior to that, planes mostly had dogfights above the trenches.

More clearly, part of the reason that Shaka Zulu was able to found his empire was that he fought to kill, rather than to fulfill ritual obligations.

Now, to me, in retrospect, all of these seem like they would obviously work. But they apparently took a while to actually happen. Even when your life is on the line, it seems that people are mostly interested in following established, “tried and true” cultural patterns for doing whatever it is that they’re trying to do, rather than asking themselves the causal question of how they could cause X to happen. Using imitation engines rather than their motor planning.

Those examples might be somewhat unfair, since maybe people don’t actually want to effectively fight. But there are more!

For instance, one of the reasons that Ben Franklin was a better printer than his contemporaries is that he wasn’t drunk the entire time he was learning to print. The other apprentices/journeymen made fun of him for it.

Is there any similarly low-hanging fruit? How could we tell?

From my life, it seems like “You can just email people you think are cool” is one. Seems like there’ve got to be others.

Affordances

It seems weird that people might be doing things that badly, and so it’s probably worth explaining it somewhat. There are a couple of factors that seem relevant.

Affordances

One reason is just that humans seem to process the world in terms of affordances — ways of acting upon an object to achieve a goal, associated with that object. For instance, a doorknob affords turning, and a cup affords drinking. This isn’t just a way of seeing things, but also an empirical cognitive fact: when people see objects, regions of their motor cortex associated with the actions that those objects afford slightly activate.

But from the inside, this makes sense too. When I don’t know how a door opens, I look for something that looks like I can turn it, and then try to turn it. This feels perfectly natural.

Imagine a door that had a doorknob that you had to push into the door in order for the door to open.

Feels weird, right?

When objects have enough affordances, it feels natural to use them. When the affordances stop you from perceiving other possibilities, it’s called functional fixedness. Functional fixedness is pretty much what it sounds like: you are fixed in your perception of what an object can be used for, and as a result cannot perceive the possibility of using the object for something other than its “intended” purpose, and so do not act on it.

Computation is Expensive

One of the reasons that functional fixedness makes sense is just that thinking about things is calorically expensive, compared to imitating other people. I can see that when other people push on the weird bar things on some doors, the doors apparently open, without my needing to work out the mechanism myself. This means that I would be optimized for getting through my daily life with as little thinking as possible. Caching answers for what different objects do is much easier than actually needing to think it through.
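The “cached answers” framing maps pretty directly onto memoization in code: pay the expensive reasoning cost once (or never, if you can copy someone else), then reuse the stored result. A deliberately silly sketch, with a pretend half-second “thinking” cost:

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def how_do_i_open(hardware: str) -> str:
    # Stand-in for the expensive step: actually reasoning about the mechanism
    # (or watching someone else do it once).
    time.sleep(0.5)
    return {"knob": "turn it", "crash bar": "push it"}.get(hardware, "experiment")

print(how_do_i_open("crash bar"))  # first encounter: slow, you have to think
print(how_do_i_open("crash bar"))  # later encounters: instant, from the cache
```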

Empiricism is also expensive, and somewhat dangerous.

Imagine that you’re in a group of people that doesn’t know which foods are okay to eat, and that you don’t even necessarily know that poison is caused by particular molecules, rather than essences or malign spirits or somesuch. You come across a plant with leaves similar to those of a plant that you know is poisonous, and you have some idea that plants that look similar often have similar properties.

On the other hand, it has these really plump, attractive-looking red fruits that look tasty.

Do you eat it?

It turns out that this describes tomatoes, and that people thought they were poisonous for a while. And they actually are still partly poisonous: you shouldn’t eat the leaves or the stem.

What can you do about it?

There are a couple of ways to deal with this fact.

One is to just occasionally stop and ask yourself if the thing that you’re doing is actually the best way you can think of to achieve the goal that you’re ostensibly trying to accomplish, and to ask yourself what it is that you’re trying to accomplish.

For instance, with homework, I’m ostensibly trying to do it in order to learn and get good grades so that I can get a good job. However, at least in software, GPA doesn’t seem to correlate with job performance, and it seems possible to get hired without a degree, let alone a good grade. I still think that homework somewhat helps me to learn the material, but if I’m less worried about the grade, I can do it on a schedule that’s more convenient for me, even if I lose some points.

This actually gets you surprisingly far, if you haven’t done it before. It’s worth doing.

Another takeaway is that you can ask yourself whether the culture you’re living in does something a particular way because that’s the best it knows how to do, or just because of historical forces that don’t necessarily apply anymore. Since naive empiricism only guards against catastrophic failures, you can safely expect that nothing you’re doing is going to kill you in the very near future, but that doesn’t mean that it’s good for you.

I think that food is a major instance where that’s true. Nobody thinks that sugar is good for you, but we eat it anyway because we’re used to having it around. (There are also incentives for companies to make food that’s consistently, immediately rewarding, but we’re not companies, right?) Bread is sometimes really tasty, but almost all of the bread that’s around by default is just meh. So why bother eating it?

“If I were to actually think about what motor actions I anticipate would actually have this effect, would I be going about it this way? Or am I just copying people?”