Religion(s), the Self, and The Other


In my Buddhism class section today, we talked a lot about how different aspects of Buddhism relate to destroying the self in order to eliminate suffering. Roughly, the thinking goes, the reason that people suffer is that they desire things. Desire is egocentric in that it’s based on telling yourself that if only such and such were to happen, you would be happy. However, Buddhists would say, if you meditate and watch yourself form your desires — rather than just taking them as a given — you’ll find that they no longer really seem coherent or compelling, and that most of the time the things that you “want” have nothing to do with what it is that you actually seem to be trying to get.

For instance, let’s say that someone is interested in buying some diet book. They tell themselves that once they lose weight, they’ll be so much happier, because then they’ll be more attractive and they’ll be able to get into a steady relationship and people will like them more and they can fit into cooler clothes and they’ll have friends and so on and so forth.

The person in that example doesn’t seem particularly interested in diets. Mostly they seem to just want to have friends and relationships.

(I remember a really depressing comment I heard someone say in San Francisco, about how products that actually work won’t be as successful, because the actual reason people buy things is to fill narrative holes in their life so that they can tell themselves that they’re trying to make things better. If they were to buy something and have it actually do what it’s supposed to, they would need to face the fact that they were wrong about it fixing everything. If the person in the example were to actually lose weight, then rather than having the problem of “I need to lose weight”, they’d now have the problem of “I don’t have any friends, and it’s not even because I’m fat!”, and that state of knowledge would be much, much worse for them emotionally than their ignorance. On this view, most discretionary consumer products are in fact just packaged hope, and the ability to maintain the illusion of whatever story it is that you’re telling yourself about what you’re doing.)

Hopefully after reading the above, you’re now sufficiently despondent for the next point, that the Self as a narrative center of gravity describing your trajectory through life is a thing that can be eliminated, and that this is a separate question from achieving goals.

If you’re like me, reading the example you might think that the problem there isn’t that the person has desires, but that they are mistaken about them, and that if they reflected more about the question, they would be able to notice that they care more about having friends than they do about losing weight, and then they could just try to make friends, rather than doing silly diet stuff.

I think that that’s a basically reasonable response, but I also think that the Buddhists are correct to say that if you actually take the time to pick through and take apart the process whereby you try to force all of your life into a coherent-seeming thread of thought, and instead just pay attention to the relationships between the things that are happening around you, without declaring that it is imperative that things go one way or another, then you’ll also stop suffering.

Buddhism has a lot of psychological insight into how to do that, and I think that they’re mostly right about most of what they say. However, I’m not really that interested in trying to be entirely Buddhist. I think they’re right about most of the practical details of their theory, but I think that their theory is mostly about doing something that I’m not very interested in.

I have a lot of issues with that.

In particular, not-suffering seems like a kind of boring goal to me. I’m already pretty happy, and it seems like if I were to succeed then I’d have a lot of time left over after I was done. Also, I care about people, and being a perfectly not-suffering being seems like it might make me somewhat nicer to the people around me, but not push me to do anything particularly difficult to make things better.

Mahayana Buddhism tries to get around this by having the goal of trying to enlighten all beings, and then going on to say that even though one could meditate on hunger and pain, it’s really just much easier/more efficient/more expedient in most cases to feed people first, because becoming Enlightened is just much easier when you’re mostly well fed.

So I started thinking about how other religions go about dealing with this problem, without eliminating the self.

In a similar way that Buddhism develops psychological techniques for eliminating the feeling of suffering, I think that Christianity has pretty good techniques for eliminating Guilt, which can also be viewed through a mostly cognitive lens. Where Buddhism focuses on dissolving the self through examining the inter-relatedness of different perceptions (and seeing that the self is less unitary than you think), Christianity seems to focus on redeeming yourself through a personal relationship between your self and a kind of Generalized Other.

What do I mean by that?

When I say Generalized Other, I mean a sort of prototypical model of what another person might know and how they might judge you, which your brain creates for the purpose of understanding specific other people.

This Generalized Other would have witnessed everything you’ve experienced, and have as much access to your motives as you do. As such, it can judge you based on every true argument that someone could make about your actions, and so is a relatively reasonable perspective to adopt in order to form moral judgments about yourself. It also makes representing other people’s judgments about you more convenient, since everyone else’s observations of you are just the G.O.’s observations minus information.

Christianity provides a mechanism for freeing yourself of Guilt and Shame based on your relationship with this G.O. For apologizing to specific people, it seems to work well to just confess your wrongdoings, repent, and try to make amends. Sometimes, however, that’s impractical. So what Christianity does is give you a cultural institution for repenting to your own internal G.O. (a process by which you try to see what could make a generalized person forgive you), and then encourage your actual attempts at repentance. If you have a prototypical model of a person of which specific other people are just extensions, then if the G.O. can forgive you, your brain will conclude that specific other people can forgive you too, and further that they should.

This seems really effective at helping people deal with Guilt and Shame, without needing to sacrifice their selves/narrative centers of gravity. As such, it seems (at least to me) to be a firmer foundation for an agentic worldview.

There are two major problems though.

One is that God doesn’t exist. Both in the sense that there isn’t a guy who created the world and is interested in the well-being of each and every one of us, but also in the sense that the G.O. is a pretty bad model of other people’s judgments of you.

For one, the G.O. is based on what you pay attention to about yourself, not what other people notice about you. And even setting that aside, people still make different judgments than you do. The G.O. is a really good model when other people have the same G.O.s, which can happen when people share some sort of morality that they think everybody else follows, and have that moral system constantly reinforced in their minds as something universal and important, but there’s not too much of that left in modern culture.

For me, this makes the phrase “God is dead” a lot more scary. There is no universally accepted standard to which you can be held accountable. You can try really hard and do everything right and there can still be people who hate, disagree with, and resist you, and there’s nothing that you can appeal to that they have to listen to.

I can think of a couple of hacks around this problem. You could try to model a bunch of different G.O.s corresponding to different values/subcultures (modernized polytheism), try to be praiseworthy for doing things that basically everybody likes, like feeding people or preventing disease (Effective Altruism), try to please every specific person (neuroticism), or just give up on the whole thing entirely (egocentrism).

The other problem is deeper.

Even if you did have a G.O., it still seems to me that the Christian morality/worldview is kind of hollow. Basically, the upshot is that you try to be nice to people so that you can go to heaven. Why is heaven good? Because it’s really nice there. There’s also a bunch of different things about creating the kingdom of heaven on Earth and imitating Jesus and whatnot, but that still seems to ultimately boil down to “Don’t be a jerk, because if you weren’t then things would be better”. Heaven is never depicted as being particularly awesome; it’s just nice because there aren’t any assholes there.

I know I should be nice to people. I even like being nice to people. I’m not a jerk.

But what do I do besides that?

Procedural Definitions

In Enlightenment Europe, it was believed that in Ancient Sparta, part of the educational system was for young boys to steal things from the adults in their community. The thinking was that, by training the boys to sneak around, they were aiding them in their military training, which was of paramount importance to Spartan culture.

Is it theft for the boys to do that?

It doesn’t particularly feel like it ought to be understood as theft to me — the society that they’re in sanctions it, and presumably the objects taken that way could be returned. Though, some part of me still slightly feels like it is theft.

In Social Studies 10 today, the professor suggested a method of resolving such confusion. Namely, one should view the question of whether or not something is “theft” as simply being the question of whether or not a particular well-defined process labels it as “theft”. This view argues that, in the same way that “illegal” means “judged-by-the-law-as-illegal”, “wrong” means “judged-by-X-as-wrong”, and the main difference is just that when you’re part of the process that is doing the labeling, the label feels true rather than just trivial.

I think that there’s some truth to that, and that it’s definitely interesting to think about, but I think it’s a little unnecessarily relativistic, and that most of the interest/usefulness of this thought can come through in a way that still allows for the kinds of correctness that I want.

Sociology of Science and History of Science have bumped into similar issues. In particular, given that scientists are responsible for making theories, and that they are humans in a particular social institution, why should we think that scientists are correct in a more universal way than practitioners of other subjects, who also make claims and function within a particular social institution? Why do we feel that politicians or authors are relative, while scientists are not?

One might say that it’s because scientists make falsifiable predictions, but leaving it at that is incomplete in light of the history of science. There are many examples of scientific controversies in which a particular prediction is made, and scientists disagree as to whether the experimental results falsify or support a particular theory. Sure, the theory may be falsifiable, but any particular falsification attempt is open to interpretation. More worryingly, scientists are themselves the ones who determine the criteria by which falsification is or isn’t achieved.

And yet, I think that there’s still a fairly real sense in which Science is objective.

The resolution somewhat stems from the fact that most scientific arguments in fact center around whether or not an instrument works, or whether or not an experiment is valid, rather than around what the instrument was observed to have said. If we make the predictions perfectly concrete, saying “we will observe dots on this plate” rather than, for instance, “an electron will pass through this slit”, then most of the arguments are resolved.

If we talk about what the measurement instruments do, rather than what happens in the world, we can dodge the social nature objections somewhat. What the instrument does is objective, and instead, its relation to the theory is contentious and somewhat subject to “social” influences.

Every test of a theory by an instrument implicitly includes the hypothesis that the instrument measures the thing that is thought to be relevant to the theory, and when you modify the theories as such, they all become falsifiable. If a scientist wishes to claim that their theory X was true even though the experiment B failed, they are in effect arguing that they are right about X, but that they were wrong about B.

In the physical sciences, this is a pretty okay position to be in, since the theories have so much influence over the instrumentation. As you build different instruments according to your theories, you can check your theory for inconsistency by using a variety of instruments, and then comparing their measurements to your predictions about what they would claim to have measured. If they are inconsistent, then your theory is effectively disproven.

That doesn’t sound particularly powerful as a method, but it gets you surprisingly far, by virtue of the fact that assembling a body of mutually consistent scientific instruments is very hard, and often this is all you need for a disproof.

For instance, in the 18th century there was an argument over how to build thermometers. The problem was that most thermometry assumed the linear expansion of some material with respect to increases in temperature, but it was difficult to know that something expanded linearly with respect to temperature when you didn’t have a thermometer yet that could tell you what temperature it was.

As is very nicely explained in “Inventing Temperature”, the problem was still tractable, though — a Frenchman built a ton of different kinds of thermometers of different shapes and sizes, then checked their temperature readings against each other across a variety of ranges. It turned out that only the air thermometers actually agreed with each other, and so the rest were abandoned.

If your theory about the instrument is correct, then it ought to behave as you expect. If you build a variety of instruments all differing in theoretically irrelevant ways, you can determine that a theory is experimentally consistent based on whether or not the things which were theoretically irrelevant turn out to in fact be theoretically irrelevant.
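
To make that consistency check concrete, here’s a minimal sketch in Python. The thermometer designs and readings are entirely made up for illustration; the point is just the procedure of keeping only the largest group of instruments whose readings mutually agree within some tolerance.

```python
from itertools import combinations

# Made-up readings, for illustration only: several thermometer designs
# measuring the same reference conditions (e.g. ice bath, body heat,
# boiling water). No design is trusted in advance; we just look for the
# largest group whose readings all agree with each other.

TOL = 1.0  # maximum disagreement (in degrees) we're willing to tolerate

readings = {
    "air_A":   [0.1, 50.2, 99.8],
    "air_B":   [-0.2, 49.9, 100.3],
    "mercury": [0.0, 52.5, 100.0],
    "alcohol": [0.3, 55.1, 98.9],
}

def agree(a, b, tol=TOL):
    """True if two designs never differ by more than tol at any reference point."""
    return all(abs(x - y) <= tol for x, y in zip(readings[a], readings[b]))

# Search groups from largest to smallest; keep the first mutually consistent one.
designs = list(readings)
consistent = []
for size in range(len(designs), 0, -1):
    for group in combinations(designs, size):
        if all(agree(a, b) for a, b in combinations(group, 2)):
            consistent = list(group)
            break
    if consistent:
        break

print("mutually consistent designs:", consistent)  # here: ['air_A', 'air_B']
```

Notice that nothing in the procedure appeals to a “true” temperature; agreement among instruments that differ in theoretically irrelevant ways is doing all of the work.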

In both Political Philosophy and History of Science, then, it turns out to be a useful trick to view “X” as “the procedure that we use to determine X says X”.

Originally this post was called “Cybernetic Government” because I wanted to emphasize how the professor only used the trick for definitions, and not for checking results. A government or society can define some X, but if there are other ways of looking at X that, within the bounds prescribed by the society, should give the same result but don’t, then it can be called “wrong”.

Suck it, relativism.

Culture, Limitations, Fixity

My Social Studies professor brought up a fairly terrifying example a while ago. Apparently in Fiji, every object is given a particular place in society, fully determining its uses and possibilities. Foreigners are (explicitly) allowed to do anything, and so if they teach you how to use a musket, you’re allowed to learn how to shoot it, but if you then want to use it as a weapon you have to use it as a club, because clubs are for killing. What you can do in general is also culturally enumerated, so for instance you can’t kill your chief, because by definition the chief is able to kill you, and so if you tried to, you would die, as an inevitable fact about the interactions of your different magics rather than as an empirical fact about the chief’s relative power and standing.

Apparently, culture can be arbitrarily limiting.

But wait! We’re totally post-Enlightenment westerners who know better, right?

I’m sort of worried about that.

The (somewhat recent) past offers a lot of examples to suggest otherwise. Part of the way that the Americans won the Revolutionary War was to deliberately shoot at British officers. This was considered very rude and ungentlemanly at the time, and very much against the traditions of aristocratic officer corps. The Americans just happened not to have as much of an aristocracy, and so they cared less.

Similarly, in WWI and II, a major innovation was to use planes to drop bombs on supply lines. Prior to that, they had dogfights above trenches.

More clearly, part of the reason that Shaka Zulu was able to found his empire was by fighting to kill, rather than to fulfill ritual obligations.

Now, to me, in retrospect, all of these seem like they would obviously work. But they apparently took a while to actually happen. Even when your life is on the line, it seems that people are mostly interested in following established, “tried and true” cultural patterns for doing whatever it is that they’re trying to do, rather than asking themselves the causal question of how they could cause X to happen. Using imitation engines rather than their motor planning.

Those examples might be somewhat unfair, since maybe people don’t actually want to effectively fight. But there are more!

For instance, one of the reasons that Ben Franklin was a better printer than his contemporaries is that he wasn’t drunk the entire time he was learning to print. The other apprentices/journeymen made fun of him for it.

Is there any similarly low-hanging fruit? How could we tell?

From my life, it seems like “You can just email people you think are cool” is one. Seems like there’ve got to be others.


It seems weird that people might be doing things that badly, and so it’s probably worth explaining it somewhat. There are a couple of factors that seem relevant.


One reason is just that humans seem to process the world in terms of affordances — ways of acting upon an object to achieve a goal, associated with that object. For instance, a doorknob affords turning, and a cup affords drinking. This isn’t just a way of seeing things, but rather an empirical cognitive fact. When people see objects, regions of their motor cortex associated with the actions that those objects afford slightly activate.

But from the inside, this makes sense too. When I don’t know where a door is, I look for something that looks like I can turn it, and then try to turn it. This feels perfectly natural.

Imagine a door that had a doorknob that you had to push into the door in order for the door to open.

Feels weird, right?

When objects have enough affordances, it feels natural to use them. When the affordances stop you from perceiving other possibilities, it’s called functional fixedness. Functional fixedness is pretty much what it sounds like — when you are fixed in your perception of what an object can be used for, and as a result cannot perceive the possibility of using it for something other than its “intended” purpose, and thus do not act on it.

Computation is Expensive

One of the reasons that functional fixedness makes sense is just that thinking about things is calorically expensive, compared to imitating other people. I can see that when other people push on the weird bar things on some doors, the doors apparently open, without my needing to examine the mechanism myself. This means that I’m probably optimized to get through my daily life with as little thinking as possible. Caching answers for what different objects do makes things much easier than actually needing to think everything through each time.
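
The trade-off is the same one we make in software when we cache. Here’s a toy sketch (the slow “reason it out from scratch” step is just a stand-in): pay the full cost of figuring something out once, then reuse the answer.

```python
import time
from functools import lru_cache

def figure_out_from_scratch(obj):
    """Stand-in for expensive reasoning about how to use an object."""
    time.sleep(0.5)  # pretend this takes real effort
    return f"how to use a {obj}"

@lru_cache(maxsize=None)
def cached_answer(obj):
    # Pays the full cost only the first time each object is encountered.
    return figure_out_from_scratch(obj)

start = time.time()
for _ in range(100):
    cached_answer("door with a weird bar thing")
print(f"100 lookups took {time.time() - start:.2f}s")  # ~0.5s, not ~50s
```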

Empiricism is also expensive, and somewhat dangerous.

Imagine that you’re in a group of people that doesn’t know which foods are okay to eat, and that you don’t even necessarily know that poison is caused by particular molecules, rather than essences or malign spirits or somesuch. You come across a plant with leaves similar to those of a plant that you know is poisonous, and you have some idea that plants that look similar often have similar properties.

On the other hand, it has these really plump attractive looking red fruits that look tasty.

Do you eat it?

It turns out that this describes tomatoes, and that people thought they were poisonous for a while. And they actually still are poisonous — you shouldn’t eat the leaves or the stem.

What can you do about it?

There are a couple of ways to deal with this fact.

One is to just occasionally stop and ask yourself if the thing that you’re doing is actually the best way you can think of to achieve the goal that you’re ostensibly trying to accomplish, and to ask yourself what it is that you’re trying to accomplish.

For instance, with homework, I’m ostensibly trying to do it in order to learn and get good grades so that I can get a good job. However, at least in software, GPA doesn’t seem to correlate with job performance, and it seems possible to get hired without a degree, let alone a good grade. I still think that homework somewhat helps me to learn the material, but if I’m less worried about the grade, I can do it on a schedule that’s more convenient for me, even if I lose some points.

This actually gets you surprisingly far, if you haven’t done it before. It’s worth doing.

Another takeaway is that you can ask yourself whether the culture you’re living in does something a particular way because that’s the best it knows how to do, or just because of various historical forces that don’t necessarily apply anymore. Since naive empiricism only guards against catastrophic failures, you can safely expect that nothing you’re doing is going to kill you in the very near future, but that doesn’t mean that it’s good for you.

I think that food is a major instance of a case where that’s true. Nobody thinks that sugar is good for you, but we eat it anyway because we’re used to having it around. (There are also incentives for companies to make food that’s consistently immediately rewarding, but we’re not companies, right?) Bread is sometimes really tasty, but almost all of the bread that’s around by default is just meh. So why bother eating it?

“If I were to actually think about what motor actions I anticipate would actually have this effect, would I be going about it this way? Or am I just copying people?”