Thought Experiments and Prudential Egoistic Concern


There are several thought experiments that explore the nuances of personal identity continuity and prudential egoistic concern. In this article, I will analyze some of these experiments.

1 – Sequence of Brain Fissions

One of the most classic philosophical scenarios concerning the analysis of personal identity is that of brain fission. Imagine a person whose brain is split into two parts, and assume that the technology used for the division somehow manages to keep the brain minimally conscious throughout the entire process. Each half, together with other artificially created neurons, is used to create a complete brain exhibiting the same psychological traits as the original person; each is then placed in its own artificially created body.

Now, suppose we designate the person who underwent the brain fission as X. X knows that the two individuals emerging after the fission will experience great suffering—for example, being tortured—and he can prevent that. If X is prudentially egoistically concerned about himself, should he prevent these two individuals from being tortured? Intuitively, the answer seems to be yes.

Thus, X takes the necessary measures to ensure that the persons emerging after the fission are not tortured. This is what we call prudential egoistic concern—he will care about what happens to these individuals after the fission as if they were himself, with the highest possible consideration and priority. However, this raises the following problem:

  1. If X undergoes the brain fission process, then he must prudentially care about the persons who emerge from that process.

  2. If X is prudentially concerned about someone, and that person undergoes the brain fission process, then X must also be prudentially concerned about the persons who emerge from that process.

  3. Therefore, if the brain fission process is repeated—say, 20 times in sequence, first producing 2 new individuals, then 4, then 8, 16, 32, and so on—X must be prudentially concerned with 2^20 (i.e. 1,048,576) individuals.

While premises 1 and 2 seem intuitively true, the conclusion (3) appears intuitively false. This tension suggests that we must rethink the principles postulated above. It is important to note that this analysis focuses on prudential egoistic concern, not on other forms of concern, such as altruistic concern (i.e. concern for other people).
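The growth the argument appeals to can be made explicit with a minimal sketch (the function name and structure are illustrative; the scenario itself specifies only that every individual splits at each round):

```python
def individuals_after_fissions(rounds: int) -> int:
    """Number of individuals after `rounds` sequential fissions,
    assuming every existing individual splits into two each round."""
    count = 1  # the original person, X
    for _ in range(rounds):
        count *= 2  # each round of fission doubles the population
    return count

# 1 round -> 2 individuals, 2 rounds -> 4, ..., 20 rounds -> 2**20
print(individuals_after_fissions(20))  # 1048576
```

The exponential doubling is what makes premise 2 so costly: each application of the principle compounds, so even a modest number of iterations yields over a million objects of prudential concern.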

One could reject premise 1 by arguing that it is impossible to create technology that splits the brain while keeping it minimally conscious. In that case, the fission process would kill the original person, and what emerges afterward would be two entirely new individuals—thus, prudential egoistic concern would not apply.

Another way to reject premise 1 is to assert that people are immaterial souls. When brain fission occurs, either the soul leaves the body and two new souls arise (which seems intuitively false if we reject the hypothesis that the process causes death), or the soul remains with one of the two individuals emerging after the fission while a new soul is created in the other. In this case, the issue of prudential egoistic concern would only involve the epistemological problem of determining in which individual the soul resides—even though there would be only one person to be concerned about.

A more radical way to deny premise 1 would be to adopt empty individualism. In some versions of this view, the “self” is a fixed entity that ceases to exist (i.e. dies) as soon as it undergoes any change. Thus, there would be an original person followed by successive, different individuals throughout the entire fission process, leaving no reason for prudential concern.

Another way to resolve the problem would be to deny premise 2, postulating that beyond a certain point in the fission sequence, prudential egoistic concern no longer applies. At first glance, choosing any particular point in the sequence might seem arbitrary, but further analyses or additional principles—beyond the scope of this article—could potentially resolve this apparent arbitrariness.

Yet another resolution is to accept the conclusion (3)—perhaps only for these particular cases of fission sequences. In that case, X would indeed need to be prudentially and egoistically concerned with all 2^20 individuals emerging from the sequence of fissions.

A more radical acceptance of premise 3 would involve adopting open individualism. According to open individualism, all sentient beings are, in fact, a single conscious subject; the division of identities into separate individuals is merely an illusion. Thus, X should be concerned not only with the 2^20 individuals who emerge after the fission sequence but also with all other people, the animals on farms and in nature, the ant that crosses his path, and so on.

2 – Multiplication of Brains

Another scenario, similar to the fission sequence, is one in which the fission occurs only once. If one has not already rejected premise 1 from the previous scenario, then X must be concerned with the individuals emerging after the fission. But what if we divide the brain into 3, 4, 5, 6, 7, … parts—up to the point where each neuron is used as the basis for creating a new brain? We would then be led to the conclusion that one must be concerned with tens of billions of individuals (roughly one per neuron), which is intuitively false. And if that is false, at what point in the division does prudential concern cease to apply? Any chosen point seems arbitrary.

Consider also a similar scenario in which the brain is not divided into equal parts but into unequal ones—for example, one part might constitute 90% of the original brain and the other only 10%, or 95% and 5%, or even one part might have the majority of the brain while the other is reduced to a single neuron that is later supplemented with artificial neurons to create a complete brain. It seems intuitively false to claim that there should be prudential concern for the part based on just one neuron, yet any minimal quantity we might postulate appears arbitrary.

3 – Resuscitation of Brains

Imagine there exists a technology capable of making dead brains function again, while also restoring any missing parts. Suppose a person suffers brain death and, after just 1 second of being dead, this technology is used to reactivate their brain. Is the individual who emerges after the technology is applied the same as the one who was dead before? Or does prudential egoistic concern extend to both the former and the “awakened” individual? And what if the interval were not 1 second but 1 day, 1 week, 1 month, 1 year, 10 years, or even 1 millennium?

Furthermore, consider the issue of brain damage. Imagine a person experiences brain death and is buried; later, they are exhumed, with only 10% of their brain remaining intact. The technology is then used to restore the brain to full functionality. Should the original person be concerned about the individual who “wakes up” after the use of this technology? Are they the same person? And if only a single neuron of the person had been preserved, and the technology recreated them from it, would there be prudential concern—or continuity of personal identity—in that case?

4 – Fusion of Brains

Now, imagine that persons X and Y have their brains divided into two parts each. Next, their parts are combined—X’s parts (labeled 1 and 2) are fused with Y’s parts (labeled 3 and 4), forming two new brains: one consisting of parts 1 and 3, and the other of parts 2 and 4. Intuitively, X and Y should be concerned with the individuals that emerge. But what if, after some time together—enough time to allow new neural connections to form without any loss of neurons—the brains are divided again, and parts 1 and 2 are rejoined while parts 3 and 4 are rejoined? Does the brain composed of parts 1 and 2, for example, correspond to X? Moreover, if we consider X as he was before any of these brain division processes took place, should he also be concerned with the brains composed of parts 3 and 4 after all these procedures?

Another similar scenario: imagine that 10 people have their brains fused—that is, their brains are joined together—so that these 10 individuals begin to share experiences, memories, and so on. Suppose that Z, one of these 10 people before the fusion, could prevent this 10-brain entity from being tortured. Should he be concerned? Intuitively, yes. Now, let us go further and imagine that after 5 years together, the brains are separated once again. Z knew before the fusion that the brains would eventually be separated and that W, one of the other nine fused individuals, would be tortured a few years after the separation. Should Z be non-altruistically (i.e. prudentially egoistically) concerned about W?

Conclusion

These are some of the thought experiments I have encountered that explore the issue of personal identity. Similar thought experiments exist in the academic philosophical literature, and there may be others, as yet unexplored, that I might examine in the future.
