(Non-Political) Rights, Ethics, and Morality


Nah, I don’t think so.

Any moral or ethical code needs some foundational seed of “good” upon which to build everything else.

Self interest is just as valid as any other seed.


Self-interest seems to be coded into our DNA, superseding all other interests.


If your moral code is based solely on self-interest, then there is no reason to follow it when it is clearly no longer in your interest.

Hardly. Parents generally sacrifice more for their children than they can reasonably hope to get back.


Sure, the same goes for any moral or ethical code. It’s driven by whatever is defined as good.

But I believe that helping others actually is in your best interest, especially within the context of a society and legal framework.


There are people who have literally thrown themselves on grenades to save others. This was absolutely not out of self-interest.

Sometimes it is, and sometimes it isn’t. So what do you do when helping someone is clearly not in your interest?


Eh, it could be, especially given the complexities of the human mind.

Self-interest does not mean raw animal hedonism.

Your family and friends have value to you. Benefiting them does in fact directly translate into benefiting yourself.

You may throw yourself onto that grenade to save those other people because their value, in your mind, is higher than the value of your own life. But it’s still an evaluation based on your own self-interest. Though I realize, in writing that, that it sounds weird.

Then you don’t?
I mean, there’s nothing that dictates that any moral code needs to assign moral value to altruism.


OK, suppose I told you that I only acted out of self interest. No moral code at all.

Then, on Sunday I have an epiphany and adopt your moral code. Do you expect me to act differently on Monday?

And if my new moral code doesn’t change my behavior, then how do you know it even exists?



You realize you are just making a circular argument here, right?

You are begging the question, assuming that a moral code cannot be founded on self interest.

I would say this part of your statement here:

Is where your error lies. Acting out of self interest is in fact a moral code.

But I think an important part of this is that decisions can’t be made based on a superficial evaluation of whether something satisfies your best interest. Or rather, they can be… at which point you are essentially an animal.

But with humans, and our capacity for abstract thought and complex reasoning, we have the ability to achieve greater benefits for ourselves as individuals by pursuing actions that may appear detrimental on a superficial examination.


It doesn’t really matter to me if you want to consider “Act in your own self interest” a type of moral code. Go right ahead. But it is a very rudimentary moral code, hardly worth discussion. “Essentially an animal” is about right.

And even if you change it to “Ponder very deeply the complexity of life, then act in your own self interest” it is still a very rudimentary moral code, hardly worth discussion. Still an animal, but now a deeply pretentious one.

The moment you add, “Act against your own interests if X”, well now you have something worth discussing. I mean, would Rorschach have been nearly as interesting a character if he had chosen self-interest over the self-sacrifice demanded by his moral code?


It’s exactly as complex as “act in the interests of others”. It is no more rudimentary.

It’s simply the starting foundation of the code.

I’m not seeing how adding an arbitrary deviation makes it more interesting.

Why do you think this?

But perhaps, since we wouldn’t want to limit ourselves to being pretentious animals, as you so thoughtfully put it, you should explain what the “correct” moral foundation is, and why.


Well, yeah. “Act in the interests of others” is too vague to be very useful.

I don’t know what the “correct” answer is. Like I said earlier, there are at least three schools of thought that are considerably better than acting in a self-interested manner (or, if you prefer, in an enlightened self-interested manner). They are all worth considering, but I don’t personally have a strong preference among them.

And why are they all better than self-interest? Well for one thing, two people sharing any of those principles will agree on their goals in a given situation. Whereas if you and I are both acting only out of self-interest, then we will likely have divergent goals in many or most situations. And if so, then our goals are hardly worth discussing. All that’s left is just a matter of using game theory to help achieve our individual goals.


Those three schools are different from what we’re talking about regarding self interest.
They are essentially ways of processing actions to evaluate them according to a given ethical or moral goal. But they do not dictate what that goal is. They do not define the ultimate “good”, they merely define how to evaluate any given action against whatever your definition of good is.

Self-interest merely provides a very low-level anchor upon which to base a utilitarian system.

Yes, you will have divergent goals in many situations. Why is that bad? That’s just how the world is.
That doesn’t prohibit self-interested entities from collaborating for mutual gain.

Self interest provides a shared, easily understood goal that applies to everyone. Within the framework of a society, that self interest transforms into more apparently altruistic actions. Since we have a capacity for abstract thought, we are able to work together to benefit everyone, even within a morality framework based on self interest.

If self-interest isn’t the low-level definition of good, then what is? Simply saying “utilitarianism, deontology, and virtue ethics” doesn’t answer that question. Even if you picked one of them, it wouldn’t answer that question. You’d need to pick one and then define what its internal definition of good is. And really, all of them can work with a core value of self-interest.


Thanks for this recommendation! I’m not up to date with contemporary philosophers and this is helpful. Picked this up and will read it this weekend (hopefully!).

I also have Virtues and Vices by Philippa Foot on order; looking forward to these!


I hadn’t heard of Philippa Foot before, but she looks good. Fascinating that virtue ethics has attracted notable female contributors; Elizabeth Anscombe was a seminal writer on the topic.


Second this. I’m interested in at least skimming this as well. Interesting that John Rawls hasn’t entered the discussion yet. Utilitarianism is often ascribed to folks with libertarian (or classical liberal) leanings but Rawls makes a utilitarian case for a thoroughly liberal (modern American sense of the word) system of political ethics.


It seems that this thread may be deteriorating into personal opinions as opposed to given definitions. Can we get back to absolute definitions of certain values?


Didn’t see a violence-in-video-games thread recently, but I found this article interesting:


Violent games were found to have a positive relationship with moral reasoning, while mature content was more likely to produce a negative one, the report published in the journal Frontiers in Psychology found.

The Grand Theft Auto and Call of Duty franchises were highlighted as examples of titles related to lower moral scores, alongside variables including the length of time spent playing games, how many years they’ve been playing games, and the level of engagement and moral narrative within a game.

Male participants displayed significantly higher moral reasoning scores than their female counterparts, which contradicted previous findings, the researchers claimed. Girls also experienced higher levels of stress while playing.

Note of course, the exceptions to higher moral reasoning from video games - GTA and CoD…


Fascinating piece on moral decision-making (note: it uses eating meat as the example, but it’s not about veganism; it explores the impacts of cognitive dissonance and how social practices shape moral judgments).

This isn’t just relevant for meat-eating. When we turn animals or humans into objects, and thereby avoid the discomfort caused by knowing about the suffering behind consumer goods, we make it easier to be cruel. The same processes we see with meat, we see with all kinds of other morally unacceptable but common human behaviours that have to do with money.

We know that poverty causes great suffering, yet instead of sharing our wealth we buy another pair of expensive shoes. We fundamentally disagree with the idea of child labour or adults working under horrible conditions, but keep shopping at discount stores. We stay in the dark, to protect our delicate identities, to maintain the illusion that we are consistent and ethically sensible human beings.

In this constant effort to reduce cognitive dissonance, we may spread morally questionable behaviour to others. We begin to shape societies in ways to minimise our discomfort, to not remind us of our inconsistencies. We don’t want constant reminders. And, as Bastian and Loughnan argue, “through the process of dissonance reduction, the apparent immorality of certain behaviours can seemingly disappear.”

Hypocrisy can flourish in certain social and cultural environments. Social habits can cast a veil over our moral conflicts, by normalising behaviours and making them invisible and resistant to change.


The last few episodes of this past season (3) of ‘The Good Place’ sort of touch on this: how we do things we think are ethical, but when you look at everything surrounding them, and everything even slightly related, they become negative actions. I mean, it’s filtered through a sitcom lens, but still interesting.