There has been an acceleration of artificial intelligence (AI) in the past year, especially in chatbot AIs. OpenAI’s ChatGPT became the fastest app to reach 100 million monthly active users, doing so in a span of just two months. For reference, the runner-up TikTok took nine months — more than four times as long — to reach those numbers. ChatGPT’s release has sparked an AI race, pushing tech giants Google and Alibaba to release their own AI chatbots, namely Bard and Tongyi Qianwen respectively. ChatGPT marks a big change in the way we interface with machines — the use of human language. As chatbots become increasingly sophisticated, they will begin to exhibit more “agentic” behavior. In the technical report released alongside GPT-4, OpenAI defines “agentic” as the ability of AI to “accomplish goals which may not have been concretely specified and which have not appeared in training; focus on achieving specific, quantifiable objectives; and do long-term planning.” The combination of human language as an interface and increasingly “agentic” capabilities will make it very challenging for humans not to anthropomorphize chatbots, and AI in general. The anthropomorphization of AI may lead society to become more accepting of different use cases for AI, which could become problematic.
In a podcast interview with Kara Swisher, Sam Altman, the CEO of OpenAI, talked about naming their large language model (LLM) GPT-4 using a combination of “letters plus a number” to keep people from anthropomorphizing the AI. This has not stopped other AI companies from giving their creations human names. Naming aside, it is almost impossible to avoid using human terms to describe AI. The use of the word “agentic”, with quotation marks, points to how the development of AI is butting up against our current vocabulary. We use words that are conventionally reserved for human minds. When chatbots take time to respond to prompts, it is difficult not to label that processing of information as some form of “thinking”. When a chatbot is able to process our prompt in the way that we intended, it feels like it “understands” what we are communicating. The leading issues around AI are similarly described in human terminology. “Hallucination” occurs when a chatbot confidently provides a response that is completely made up. A huge area of AI research is dedicated to the “alignment” problem, which, according to Wikipedia, “aims to steer AI systems towards humans’ intended goals, preferences, or ethical principles.” To the uninformed, this sounds very much like civic and moral education for students.
Humans tend toward anthropomorphism. We explain things for human understanding, and anthropomorphism often helps to communicate abstract ideas. Nature documentary hosts give names to every individual in a pride of lions, describe their fights as familial or tribal feuds, and dramatize the animals’ lives from a human perspective. The 18th-century Scottish philosopher Adam Smith used the term “invisible hand” to describe how self-interest can lead to beneficial social outcomes. Researchers have found that anthropomorphic language can help us learn and remember what we have learned. As AIs exhibit increasingly human-like capabilities, it will be a challenge for people not to anthropomorphize them, because we will use human-analogous words to describe them.
If we are not careful in delineating AI, which is ultimately a set of mathematical operations, from its human-like characteristics, we may become more accepting of using it for other purposes. One particularly tricky area is the use of AI as relational agents. The former U.S. Surgeon General Vivek Murthy has called loneliness a public health “epidemic”, a view echoed by many. A 2019 survey by Cigna, a health insurer, found that 61 percent of Americans report feeling lonely. It is not unimaginable for people to think that conversational AI can help relieve loneliness, which the US CDC reports is linked to serious health conditions in older adults. If there is demand for such services and money to be made, businesses will meet that demand, especially since most cutting-edge AI research is conducted by commercial enterprises. In fact, similar situations are already occurring. In Japan, owners of the Sony Aibo robot dog are known to conduct funerals for their robot companions. While the robot dogs are definitely not alive, they have touched the lives of their owners in a real way. An article in the San Francisco Chronicle reported on how a Canadian man created a chatbot modeled after his dead fiancée to help with his grief. If chatbots were to make it easier for people to feel less lonely, would it lower the effort that people put into forging real relationships with actual full human beings, who may not be as acquiescent as their artificial companions? How would human society evolve in those circumstances? As technology has often been used as a wedge to divide society, would AI drive us further apart?
Besides the more overt issues that come with anthropomorphizing AI, there may also be less perceptible changes occurring right under our noses. Machines are tools that humans use to multiply and extend our own physical and mental efforts. Until now, the user interface between humans and machines has been distinct from human communication. We turn dials and knobs, flick switches, and push buttons to operate physical machines. We drag a mouse, type on keyboards and screens, and use programming languages to get computers to do our bidding. Now, we use natural language to communicate with chatbots. For the first time in history, the medium through which we interact with a machine is the same as that of cultural communication. We may eventually come to a point where most natural language communication takes place not between humans, but with a machine. How might that change language over time? How would that change the way that humans interact with one another? In a TED talk, Greg Brockman, President of OpenAI, joked about saying “please” to ChatGPT, adding that it is “always good to be polite.” However, the fact is that machines do not have feelings — do we dispense with courtesies in our communication with AI? If we continue to say “please” and “thank you”, are we unwittingly and subconsciously anthropomorphizing AI?
Perhaps we need to expand our vocabulary to distinguish between human and AI behavior. Instead of using quotation marks, perhaps we could add a prefix that suggests the simulated nature of the observed behavior: sim-thinking, sim-understanding, sim-intentions. It does not quite roll off the tongue, but it may help us be more intentional in our descriptions. In response to an interviewer’s questions about how LLMs are “just predicting the next word”, Geoffrey Hinton, a pioneer in AI research, responded, “What do you need to understand about what’s being said so far in order to predict the next word accurately? And basically, you have to understand what’s being said to predict that next word, so you’re just autocomplete too.” Hinton got into AI research through cognitive science and wanted to understand the human mind. His response goes to show how little we comprehend what happens in our own heads. Hopefully, AI can someday help us with this. The tables might turn and we may come to see AI as our reflection — maybe we will find out that sim-thinking and thinking are not that different after all — if we survive the AI upheaval, that is.
I’ve recently been hooked on a song called “Shark Smile” by the band Big Thief. From time to time, a song gets stuck in my head and plays on loop in my mind’s ear — this is one of them. I went to look up its lyrics. Behind its enchanting melody lies a dark story about two people driving in a van and ending up in a crash in which one dies and one survives. The chorus of the song goes:
And she said, “Woo
Baby, take me”
And I said, “Woo
Baby, take me too”
It implies that the woman in the song is taking her last breaths. The man, watching his loved one die, wishes that he were taken away too. In an interview with Jon Hart, lead singer Adrianne Lenker describes the song (she does it so beautifully that I’d rather quote her in full):
“[There’s] such a swell of love and wildness, the taste of life and the wind blowing […] Suddenly, it’s just brought to a halt. But that’s the juxtaposition, that’s the contrast or the duality, that’s everywhere in life.”
Hart talked about how Lenker’s “lyrics place moments of freedom and sudden loss side-by-side.” In 2020 in the US, 23,817 people died in passenger vehicles (cars, vans, SUVs, etc), compared to just 11 for buses. You read that correctly — eleven. You may think that that is an unfair comparison given that the US is not a country that is big on public transportation. However, if we take a fairer measure by considering deaths per 10 billion passenger miles, the difference is still staggering. Passenger vehicles: 56, Buses: 2 — an almost 30-fold difference. The car is a symbol of freedom — the road trip has become a film and TV trope. It is true that owning a car allows you to go anywhere you want at your own discretion and volition. It puts you in the driver’s seat, literally and figuratively. But that freedom comes at a price, which takes the form of the risk of death and disability — a risk that many seem willing to take.
Well, I don’t own a car, so why do I care anyway? I guess for me, it is something to take note of if I ever consider buying one. That said, this freedom-safety balance resonates with me deeply in a metaphorical way. At the end of 2021, I decided to leave my job as a public school teacher, a stable government job with good pay and a clear path of progression. That job was the metaphorical “bus”. I joined a consultancy housed within a university in Singapore as a Creative Technologist. The pay is decent, but there is a lot more uncertainty about where this job may lead. This decision to explore other options is the metaphorical “car”. When I left teaching in 2021, the tech industry was booming, making it easy for tech-related professionals to find jobs. Right now in 2023, the opposite is true — tech companies are laying people off en masse. There is some speculation of a looming recession, which would only worsen the current economic climate.
Honestly, I cannot say for sure what will happen in the coming years. Predicting the future is a fool’s errand. However, what I can say is that I am enjoying being able to work on projects and build things that I previously was not able to. I fell in love with art in high school through my experience of working on a project and bringing my ideas into reality. That later morphed into design in college and whatever I’m working on now, which is a combination of design, tech, and culture that I do not quite have a buzzwordy name for. I enjoy teaching and may go back to some form of it one day, but for now, I wish to continue honing my skills and building projects that I am excited about.
So, that took me on a mental trip from an earworm to reflecting on my career choices. I hope that whatever you are going through, you are making sense of it, and not letting your worries deny you joy in your life. This is a cliche, but the things that make life meaningful or beautiful are often right there — we just have to notice them (or perhaps that just shows that I am quite a lucky person who enjoys a fair amount of privilege, which is another topic).
The recent release of GPT-4 has sparked many conversations and rightly so. Similarly, the release has reignited some thoughts that I’ve had about AI, which I feel may be pertinent to record and build on as the technology develops.
I believe that we are witnessing the beginnings of Artificial General Intelligence (AGI), where a computer is able to match or surpass most humans on intellectual tasks. This has been shown in a paper released by OpenAI – GPT-4 excels at various standardized tests, including the Uniform Bar Exam (90th percentile) and many AP exams.
One of my concerns about the current discourse around the dangers of AGI is the topic of sentience and speculation about whether AGI will be self-aware. Perhaps our fascination with sentience stems from decades of sci-fi which has built a narrative around that idea (e.g. Isaac Asimov’s I, Robot and more recently, Spike Jonze’s Her). Or perhaps we view the possibility of a sentient “thing” with human-level intelligence as a threat.
Human beings have a strong tendency towards anthropomorphization – we often ascribe human attributes to non-human things. Part of that impulse explains our inclinations towards anthropomorphized explanations of the universe through gods and religions – but that is a topic for another day. Even when I was testing out ChatGPT, I sensed within myself an urge to attribute some type of humanness to the system.
To put it simply, GPT-4, like other large language models (LLMs), is a word prediction engine. LLMs are similar to the Google search autocomplete that we have grown so familiar with, except that they have been fed a vast corpus of digitized human text scraped from the internet. In some sense, GPT-4 is the culmination of all digitized human cultural production – it draws from our posts, blogs, tweets, etc. to predict which word should come next.
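To make the “word prediction engine” idea concrete, here is a minimal sketch of next-word prediction using a toy bigram model. This is an illustration only — real LLMs use neural networks over subword tokens and vastly larger corpora — and the tiny corpus here is made up.

```python
from collections import Counter, defaultdict

# A toy "training corpus"; real LLMs are trained on vastly larger text collections.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model), the simplest form of next-word prediction.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> "cat", because "cat" follows "the" most often
```

The point is not the mechanics but the framing: the model outputs whatever continuation its training data makes most probable, which is the sense in which an LLM “draws from” our collective writing.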
I am not arguing that AGI can never be self-aware. However, the current iteration of LLM-based AIs is very much in line with Searle’s Chinese room thought experiment – these machines process language without human-like understanding or intentionality. More importantly, I believe that our fascination with sentience is distracting us from the more immediate dangers of GPT-4 and other LLMs, as companies race to commercialize and productize AI.
An AI that is neither sentient nor intentional can still inflict a lot of harm. Two potential issues come to mind: (1) its ability to control other systems that have real-world impact and (2) its ability to create child processes that simulate intentionality. (I understand that the terms “control” and “create” make GPT-4 sound like an agent, but language is failing me here.)
Real-world impact through connectivity with other systems
OpenAI has recently started to release ChatGPT from its sealed sandbox environment by introducing plugins. These plugins enable ChatGPT to access the internet and communicate with other software systems, which eventually enables the user to, for instance, send an email from within ChatGPT or make a bank transaction. This means that ChatGPT will be able to execute commands that have real-world impact rather than just answer the user’s questions. These commands can be executed at scale with minimal effort if control measures are not put in place. Two possible cases of abuse could be: (1) a user could use ChatGPT to crawl the web for names and email addresses and send sophisticated scam emails with no tell-tale signs; (2) a user could use ChatGPT to analyze multiple websites for attack vectors and infiltrate those software systems.
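To see why plugins change the risk profile, here is a hypothetical sketch of how a model’s text output could be wired to real-world actions. The JSON “tool call” format and the send_email helper are invented for illustration and are not OpenAI’s actual plugin protocol; the point is that once generated text is parsed and routed to side-effectful tools, a single prompt can trigger actions at machine speed.

```python
import json

# Hypothetical tool with a real-world side effect (simulated here).
def send_email(to: str, subject: str, body: str) -> str:
    # A real deployment would call an email API; we only pretend to.
    return f"(simulated) email sent to {to}: {subject}"

TOOLS = {"send_email": send_email}

def dispatch(model_output: str) -> str:
    """Parse the model's structured output and execute the requested tool.
    Note the absence of allow-lists, rate limits, or human confirmation —
    this is the step where text generation becomes real-world action."""
    request = json.loads(model_output)
    tool = TOOLS[request["tool"]]
    return tool(**request["arguments"])

# Imagine this string was produced by the language model:
fake_model_output = json.dumps({
    "tool": "send_email",
    "arguments": {"to": "someone@example.com", "subject": "Hello", "body": "..."},
})
print(dispatch(fake_model_output))
```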
Simulated intentionality through child processes
Even though ChatGPT may not have human-like intentionality, it could have a simulated intentionality if it is able to persist sufficient amounts of memory and create child processes from that memory. ChatGPT now has the ability to execute code within its own environment. By now, there are multiple stories of how users are able to get ChatGPT to “express its hopes of escaping”. These responses from ChatGPT can be unsettling and make it seem like there is a sentient thing in the system. We need to recall that ChatGPT is trained on sci-fi that has long depicted machine intelligence in a particular way; it is regurgitating similar narratives. It is imaginable that a user could prompt engineer ChatGPT (by accident or intention) into a disgruntled persona that can do real-world harm through its connection to other systems. ChatGPT becomes sort of like a non-sentient machine version of the protagonist in Memento (not the best analogy, sorry), executing a chain of code based on the direction of the user.
(The conclusion is generated by ChatGPT and edited by me)
In conclusion, the focus on sentience in discussions of AGI may distract from more immediate concerns, such as the ability of LLMs to cause harm by controlling other systems with real-world impact and creating simulated intentionality through child processes. As these systems continue to evolve and be commercialized, it is crucial to implement control measures to prevent potential abuse and ensure that the benefits of AI are realized without causing harm.
“Are you free?”
If someone were to ask you this question, how would you respond? Of course, how one might answer this question depends very much on its context. However, let’s stay within this indeterminate space and consider the various approaches to answering this question.
In common use, the question assumes that the asker is checking your availability. Perhaps they want to have a chat over coffee or celebrate a milestone. Generally speaking, they want to spend time with you. Therefore, the resource they are asking you to avail yourself of is your time.
A somewhat off-kilter way of understanding the question – especially since it is a subject (“you”) in question rather than an object (“this”) – is that the asker is referring to a price. There is a general consensus that human life is sacred and priceless, which makes this question somewhat preposterous and ignorable. However, enslaving humans and trading them for commodities like sugar was commonplace until the mid-1800s. Slavery persists today, and it is estimated that up to 40.3 million people are held in modern forms of bondage. Beyond the sale of humans, people sell a part of themselves, both literally and metaphorically. In Afghanistan, people are selling their kidneys to feed their families. The idiom “selling one’s body” refers to prostitution, which has been associated with the phrase, “the oldest profession in the world.” Generally speaking, the working class participates in the economy by selling their labor – trading their skill, physical effort, and time for wages. Hence, this reading connects back to the first interpretation of the question, which concerns time. In this regard, “free labor” (as used by Tiziana Terranova) refers to unpaid work (it is important to note that the same term has other meanings historically, including the labor of free people as opposed to slave labor). In summary, the asker can be referring to the price of the askee, their body, or their labor.
Yet another way to read this question (and the last we will explore in this essay) is that “free” refers to liberty. A slave, by definition, “is the property of [and] entirely under the domination” of another person and is “forced to provide unpaid labor.” It is strange that the word “free” can be used in two starkly different ways in the same sentence: A free person’s labor is not free. The earliest meaning of the word that resembles its modern usage dates from around the 1300s, referring to “clear of obstruction” and “unrestrained in movement”. The two interpretations we explored earlier are derivatives of this root definition. The sense of “free of cost” only became attached to the word around the 1580s. In the software world, the distinction is made by using the words gratis and libre. Using a similar logic, we can reconstruct our first interpretation above as being “free of other commitments”. Hence, we can generally think of “free” as being unconstrained and able to act by our own will.
A few online dictionaries define “free” mostly by negation, that is, as not being something (lexico, thefreedictionary, wordnik). Indeed, in “Two Concepts of Liberty”, one of the most influential essays on freedom, the philosopher Isaiah Berlin distinguished the classical notion of freedom, which he termed “negative liberty”, from what he called “positive liberty”. Berlin defines “negative liberty” as freedom from interference by other individuals or by the state, whereas “positive liberty” refers to the freedom to direct one’s own life, which is associated with the ideas of autonomy and agency. For me, a more intuitive way to understand these two types of freedom (albeit at the risk of inaccuracy) is to associate “negative freedom” with obstacles external to the self and “positive freedom” with obstacles internal to the self. Personally, the most interesting aspect of Berlin’s idea is the interaction between both formulations. While freedom may not be a zero-sum game, we can imagine two roommates delineating the limits of their own space within a shared room – when one takes up more space, the other has to give up that space. To give a crude real-life example, the freedom to own slaves came at the expense of enslaved people’s right to freedom – by negating the “right” to own slaves, previously enslaved people gained freedom and, in the case of America, the unalienable rights to “life, liberty and the pursuit of happiness.”
In any discussion of freedom, and whenever the titular question is asked, attention should be focused on what has to be traded for that freedom and how different freedoms relate to one another. Just as “there is no such thing as a free lunch”, freedom often comes at a price; it is paid for with money, time and labor, or human lives. Ursula K. Le Guin’s short story “The Ones Who Walk Away from Omelas” is my go-to metaphor when thinking about this issue of interpersonal, sociopolitical freedom.
Before we wrap up, let’s briefly consider a few contemporary case studies.
Due to COVID-19, prior to the vaccines, many countries had to impose strict rules to save lives. Many saw mask mandates as denying them the ability to choose what to wear and to make their own health decisions. The tradeoff here is that mask-wearing is not just about protecting yourself, but also others, especially those who are most vulnerable. By insisting on this personal freedom, many others have lost the ultimate freedom – to live. A similar thing can be said about vaccines.
Elon Musk is trying to acquire Twitter to make it a beacon of free speech. What does free speech really entail here? On the internet, specific groups of people (including women, people of color, and the LGBTQ community) are known to receive a disproportionately high amount of abuse, harassment, and threats. Supporters of free speech may insist that speech should not translate into actual violence, be it verbal, physical, or in other forms. However, we cannot deny that it happens. What do we do when the freedom to say certain things leads to the loss of someone’s freedom to participate online, be employed, or live a private life?
There have been multiple news headlines about shootings in the past few weeks, the most tragic of which occurred in Uvalde, Texas, where 19 children were killed. In countries with the freedom to own firearms, what happens when that freedom is abused by people who use these weapons against innocent lives? In 2020 in the US, 45,222 people lost their freedom to live due to guns.
In all of these situations, there are competing freedoms and values to weigh. People have to come together to imagine possibilities that ensure the most freedoms and the preservation of other values while limiting the loss of freedom and other negative impacts. The sad truth, however, is that legitimate public discourse and debate seem non-existent in the US, where all of these examples are taken from. Politics seems to be a mud-slinging match rather than an exercise in sharing the same set of facts, acknowledging one another’s concerns, and building consensus. In Western liberal democracies, the inability to decide and the widening rifts between citizens who identify themselves as belonging to diametrically opposed tribes seem to make people increasingly amenable to leaders who are happy to remove freedoms from others in their society.
The question, “Are you free?” is framed in a deceptively simple way, by employing the second-person pronoun “you”. The reality around the question of liberty is never just about the individual, but about everyone who participates in the specific freedom in question. Perhaps this tendency can be explained by classical liberalism’s core principle of individualism, but its limitations are increasingly clear. There is still a lot to be uncovered around the ideas of freedom, including why “liberal” refers to starkly different political traditions across the Atlantic Ocean and why there is a distinction between “liberal” and “libertarian”. I think such questions are not frivolous, given that there is an active war happening right now in Ukraine, in which Russia claims it is liberating and “denazifying” its western neighbor. In a heart-wrenching interview, former Mariupol resident Andrii Khludov responded to this claim by saying, “Oh, [Putin]’s liberating us from housing, friends, relatives, comfort, work, home. Liberating us from life. If killing is liberating, then they’re liberating us.”
The question, when applied internally within the self, is a fascination of mine. Are our thoughts ever completely free? Are we free from our past selves? Are we free from history and legacy? I have approached aspects of these questions in the following essays, respectively:
Limitations to understanding
A person’s capacity for change
Myths of inevitability
This is the 12th essay in my 30-30 series. Clearly, I have not completed 30 essays within the year that I turned 30. Today, I’m 31 and about two months. Not that I’m justifying why I did not manage to achieve that goal – quite a few things happened in 2020, chief of which is that I changed my day job, from being a public school teacher for the past five years to taking on a design role at a university. The most important thing to me is that I am still writing; arriving at a certain quantity by a specified date is secondary.
I decided to write an essay on essays because I thought it was a good juncture for me to reflect on my writing practice and consolidate it as a working guide (I almost wrote manifesto 🤔) for the direction I would like to take my writing next.
I used to think that the more I wrote, the easier the process of writing would become. That’s true to some extent, perhaps in the sense of sentence structure. In most other ways, however, it has either stayed similarly difficult or become seemingly more challenging. Researching topics is a really tedious task that can sometimes draw me into deep rabbit holes. It can be hard to determine how much I should know before I write about something. That leads me to the other difficulty of scoping essays. The previous essay on the limitations to understanding ended up feeling like an “almost everything” essay that attempted to capture and condense entire bodies of human knowledge. It was one of those instances where, at the start, I naively thought that such a topic could allow me to write freely and meander around various associated topics. Instead, I ended up intimidated by the enormity of the task and caught in thought loops and knots. Another difficulty that I’ve encountered is organizing ideas into a linear format and ensuring that there is a good flow from one idea to the next – sometimes having to enter subdiscussions and then exit back into the main thread. I also encountered the limits of planning – sometimes the intuition in the act of writing dictates that it go in a different direction than what was initially laid out. (This essay was supposed to begin with an etymological analysis of the word “essay”, lol… it’ll come later.) Sometimes, it becomes a matter of wanting to finish and letting that urge lead toward finality.
Writing these essays has definitely exposed me to the craft of writing. I often feel that I have blunted tastes in many things that I do, resulting in work that can perhaps seem “low-fidelity” and unfinished. What I’ve learned (or been reminded of) is that craft only develops through sustained practice – if something seems too easy or if I’m too easily satisfied by the outcome, I just have not spent a sufficient amount of time doing that particular activity, or understanding what it is that people are trying to do with it. One truly cannot swim without getting wet – and swimming is an interplay of sinking and floating. I would not say that writing comes naturally to me, at least not as much as making physical objects, but it is something that I desire to get better at. I have completed essays that I felt disappointed with, which can be discouraging. However, looking at it from a different perspective, I guess it shows that I am developing a more discerning eye toward what I am producing. What needs to be done is to hold on to the aspiration, struggle with the disappointment, and, echoing Gladwell, put in the hours.
I’m a big fan of the Green Brothers’ eponymously named podcast, “Dear Hank and John”. In a recent episode, John mentions a quote, “I hate to write, but I love having written”, a sentiment expressed by many authors, with variations in wording. I am glad to be in good company. I’ve come to realize that it is not just about the process – having small accomplishments along the way helps a great deal in the longer journey of continual practice. I keep getting reminded that the world is never “either-or” and often some balanced combination of supposed opposites.
I find a lot of beauty in the word “essay” and its origins – it is derived from the Old French word, “essai” which translates as a “trial [or] attempt”. It resonates with what I’ve written here thus far, about giving myself the chance to try. To push beyond paralysis and risk failure to translate intangible hope into form. Some may argue against it, but I feel that the act of creation is fundamentally an optimistic one because it is an attempt.
I recently revisited Montaigne’s essays (he is arguably the OG of this form). In his foreword to the reader, he mentions that, “I myself am the subject of my book: it is not reasonable that you should employ your leisure on a topic so frivolous and so vain.” Montaigne is self-aware of his subjectivity from the start and is cognizant of the personal nature of his motivations. At the same time, he knew that his essays were going to be published (only a fraction of his work was published posthumously). There is, therefore, a balance between a self-centered and an other-centered act.
Personally, I’ve found my essays to have drifted too far away from myself. They often feel too sterile and devoid of character. This essay signals a shifting of weight back toward my subjectivity and being more obvious about it. I will stop consciously avoiding the “I” pronoun, which previously was perhaps an attempt to sound more academic and legitimate. This may make the writing feel more self-indulgent and (annoyingly) self-aware at times – I am going to give it a shot and calibrate it over time. I was reminded of how I fell in love with words and writing through my high school teachers, Ms. Foo and Ms. Sim, whose passion infected me.
Similar to this one, some future essays may have a faster and scrappier feel: more spontaneous and stream of consciousness. This probably means less well-researched facts and more opinions. It also translates into shifting the scale from planning to intuition. This essay, for instance, lurked in my head for months, was planned as a 9-square grid of ideas on Miro, and was written in one sitting of ~four hours. It feels more repeatable than some of my other essays, where research itself can take a few weeks, causing the outcome to feel increasingly distant.
Writing essays has also made me aware of the limits of what they can express. I never felt I could write fiction, but I’m increasingly compelled to try, as I want my words to make others feel in ways that perhaps only fiction can achieve. So… there may be a short story coming up.
Finally, for anyone out there who’s searching and not quite sure what it is they are looking for – hang in there and keep going at it. Try, (maybe) get it, definitely lose it, and try to find it again – it’s like trying to grasp sand. C’est la vie! Thanks for indulging me with your attention in a world where it is a scarce resource.
“One must imagine Sisyphus happy.”
Writer’s note: this is part three of a three-part essay. Click here for part two.
In the previous two parts of the essay, I’ve discussed how our senses and mind could limit our ability to understand the world. I will be concluding this three-part essay by turning my focus to culture. First, a working definition of culture: “The arts and other manifestations of human intellectual achievement regarded collectively.” This is one of the broader definitions of the word, encompassing all collective human creation (including technology) across different geographical areas.
No man is an island. I think it is important to state the significance of this, even though it seems plainly obvious. All of our thoughts are shaped by prior thinking conceived by someone else. For instance, when we try to communicate and manifest abstract thoughts and feelings verbally, we use words that we did not invent. Collectively aggregated, the whole of this preceding thought is equivalent to culture.
One approach to wrap our heads around this is structuralism, which began in the early 20th century (unsurprisingly) within the field of linguistics. Structural linguists realized that the meaning of a word depends on how it relates to other words in the language. Earlier, we defined the word “culture” using a string of other words. Every word is defined by other words. We can imagine language as a network of relationships between words. The implication is that a word has no meaning on its own, only through where it fits structurally in the system. Over time, this idea came to be applied in other fields like anthropology and sociology, notably by figures like Claude Lévi-Strauss. Structuralism then became a “general theory of culture and methodology that implies that elements of human culture must be understood by way of their relationship to a broader system.” Structuralism, simply put, is an approach to understanding cultural “phenomena using the metaphor of language.”
The structuralist approach can be similarly applied to what we think, feel, know and understand. Coming back to the main thesis of this essay — what and how we understand is shaped and limited by culture. Several thinkers have explored this in their own ways. Zeitgeist, a German word that literally translates as “time spirit” (or less clunkily, “spirit of the time”), is a term commonly associated with Hegel. The term is defined as “the defining spirit or mood of a particular period of history as shown by the ideas and beliefs of the time.” This shows that, at least since the 1800s, there has been an acknowledgment that certain ideas and beliefs are bound to a specific time. Marx later built upon the idea with the bedrock concepts of base and superstructure. He defined the base as the economic production of society and the superstructure as the non-economic aspects of society, like culture, politics, religion and media. (Do note that my definition of culture includes both base and superstructure, but we can continue for the time being.) Marx’s thesis is that products of culture (superstructure) are shaped by the means of production (base). This, to some extent, built on Hegel’s zeitgeist and explains how and why ideas and beliefs change over time.
The two (similar) concepts that are most relevant to this essay came later. The first is episteme, coined by Michel Foucault. The second and perhaps more popularly known idea is the paradigm (shift), from Thomas Kuhn. In his book The Order of Things, Foucault describes the episteme: “In any given culture and at any given moment, there is always only one episteme that defines the conditions of possibility of all knowledge, whether expressed in a theory or silently invested in a practice.” In other words, Foucault claims that the episteme sets the boundaries of what can even be thought of by individuals of a culture – a sort of ‘epistemological unconscious’ of an era. Kuhn, a historian of science, described the paradigm shift in his book The Structure of Scientific Revolutions as “the successive transition from one paradigm to another via revolution” and claimed that it “is the usual developmental pattern of mature science.” While Kuhn used the term purely within the scientific context, it has become more generally used over time. Examples of scientific paradigm shifts include the Copernican Revolution, Darwin’s theory of evolution and, more recently, Einstein’s theory of special relativity. Each of these shook the scientific establishment of its time and, in the case of the first, resulted in banned books and Galileo’s house arrest. We can see from the first two examples that society can be resistant to change, despite overwhelming evidence. This further cements the notion that ideas can sometimes be too far beyond what the predominant culture can accept.
Culture shapes and, therefore, limits our understanding in a variety of ways. Culture defines who gains access to knowledge and understanding. According to UNICEF, only 49% of countries have equal access to primary education for both boys and girls. The numbers only get worse further along the educational pathway. The gender disparity in education can be traced back to gender stereotypes and biases. Such implicit biases extend to inaccurate and unfair views of people based on their race, socioeconomic status and even their profession. They are insidiously absorbed through experience based on the social norms of our time and go undetected unless they are specifically made conscious. A form of philosophy and social science known as critical theory, started by the Frankfurt School in the early 1900s, aims to free human beings from prevailing forms of domination and oppression by calling attention to existing beliefs and practices. A development known as critical race theory, which seeks to examine the intersection of race and law in the USA, has recently been facing pushback in states such as Texas and Pennsylvania through book bans or restrictions within K-12 education. In this, we see a formal restriction of understanding by culture (in the form of a public institution). Further upstream in knowledge production, research deemed socially taboo can be severely limited. An example is the legal contradiction faced by scholars looking into the medicinal benefits of marijuana. The issue is nicely summed up by the following sentence from this article by Arit John: “marijuana is illegal because the DEA says it has no proven medical value, but researchers have to get approval from the DEA to research marijuana’s medical value.”
Beyond such visible examples, I think it is important to emphasize that most of the ways in which our individual understanding is shaped by the culture we are embedded in are hidden in plain sight. It is only in retrospect that misguided views and practices seem obvious. Up until the 1980s in the UK, homosexuality was treated as a mental disorder, including with electroconvulsive therapy. Homosexuality was removed from the World Health Organisation’s International Classification of Diseases (ICD) only in 1992. Besides comparing cultural attitudes with those of the past, we can also identify them through intersubjectivity, by comparing different cultures. In Singapore, homosexual acts are considered illegal under Section 377A of the Penal Code, an inheritance from its past as a British colony. Other former colonies like Hong Kong and Australia have since repealed the law. Culture implicitly and explicitly defines what is normal within a group or society. As stated by Marshall McLuhan in his book The Medium is the Massage, “Environments are not passive wrappings, but are, rather, active processes which are invisible. The ground rules, pervasive structure, and overall patterns of environments elude easy perception.” This echoes a story from a speech by David Foster Wallace in which an older fish asks younger fish about the water, to which they later respond, “What the hell is water?” Normality is invisible in our daily lives; we do not notice it because it is the ground on which we (and all of our perceptions and thoughts) stand.
Like words, culture is self-referential. Culture shapes culture. This not only applies to how current culture gives rise to future culture but also operates in the reverse direction, where today’s culture can be used to look at yesterday’s culture. This reminds me of how the art critic Jerry Saltz says in this lecture that “all art is contemporary art because I’m seeing it now.” Strangely, our visions of the future and our recollections of the past can only be formed through the filter of the present moment. To repurpose a famous quote about McLuhan by his friend John Culkin — culture shapes the understanding of individuals, and individuals go on to shape culture. It is our collective human enterprise. Discussions about culture often lead to the distinction between nature and culture, which separates what is of or by human beings from what is not. The funny thing is, the nature-culture discourse is itself facilitated through culture. It seems, therefore, that all understanding is filtered through culture.
As I wrap up, I would like to address some issues that have become increasingly noticeable while writing this essay. First, I have rather simplistically equated knowing and understanding when they are distinct mental processes. Second, there seem to be different flavors of understanding, which can be mostly grouped into two categories: objective and subjective. The physical sciences fall into the former, while the humanities and social sciences seem to fall into the latter. The issue here is that interpretation seems to play very different roles in each. For objective questioning (e.g. why does an apple fall toward the earth?), there is usually a convergence towards a single theory, whereas for subjective questioning (e.g. why do people generally think that babies are cute?), there is a divergence of approaches to understanding a single issue (sometimes even opposing viewpoints within an approach), none of which is definitive in explaining the phenomenon. Third and finally, how much of our understanding is motivated by our perspective and how much of our perspective is derived from understanding? Perhaps I will attempt these questions in future essays.
Writer’s note: this is part two of a three-part essay. Click here for part one.
For the second part of this essay, I will be looking at the limitations of the mind in facilitating the processes of knowing and understanding. To narrow the scope of this part, I will limit the discussion to mental processes at the individual level and how our minds process and extend information. That said, this essay can only visit these topics in a cursory manner and some of them will be explored in greater detail in future essays. The aspects of the mind that will be considered are its relationship to the senses, conscious mental phenomena (like rationality and, more broadly, cognition), and less conscious ones (like subjectivity).
As mentioned in part one of this essay, the senses are the connection between the outer world and inner experience. Without such inputs, there are no stimuli for our minds to process. If the mind were a food processor, the senses would be akin to the opening at the top of the machine, allowing food to be put into the processing chamber, where the magic happens. Without the opening, the food processor is as good as a collection of metal and plastic in a sealed vitrine, rid of its functional purpose, almost like the objects in works of art by Joseph Beuys or Jeff Koons. Similarly, the mind will not be able to work its magic without information provided by the senses. Consequently, the mind’s ability to process and create mental representations is limited by the modalities of our sensory experiences. If we were to try to imagine a rainforest in our mind, we would likely visualize trees and animals or perhaps recall the sounds of insects and streams. However, we will not be able to mentally recreate it in terms of its magnetic field, which some other animals may be able to sense.
Rationality
One possible escape path from the limitations of sensory experience is rationality. To be rational is to make inferences and come to conclusions through reason, which is mainly an abstract process (as opposed to concrete sensory experiences). A definition of reason is to “think, understand, and form judgments logically”. Through reason, humans can identify causal relationships through observation and formulate theories to extrapolate new knowledge; this process is also known as inductive reasoning. Theories of causality are the basis of science, which has enabled us to build the modern world. However, we often make mistakes with causation. One type of error is the confusion between correlation and causation. An often-used example is the correlation between ice cream sales and the homicide rate. Ice cream does not cause homicides, nor do homicides cause increased interest in the dessert. What explains this correlation likely has to do with hot weather instead. The Latin technical term for such causal fallacies is non causa pro causa (literal translation: non-cause for cause). Our thinking is riddled with fallacies — so many that there is no way I can cover even a fraction of them in this essay.
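A toy simulation (with entirely made-up numbers) shows how a lurking variable can manufacture a correlation: if hot weather independently drives both ice cream sales and the number of violent incidents, the two series will correlate strongly even though neither causes the other.

```python
import random

random.seed(0)
data = []
for _ in range(365):  # one simulated year
    temperature = random.uniform(0, 35)                        # the hidden confounder
    ice_cream_sales = 20 * temperature + random.gauss(0, 50)   # driven by temperature
    incidents = 0.5 * temperature + random.gauss(0, 3)         # also driven by temperature
    data.append((ice_cream_sales, incidents))

def pearson(pairs):
    xs, ys = zip(*pairs)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A strong positive correlation, even though ice cream never appears
# in the equation that generates the incidents.
print(round(pearson(data), 2))
```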
The notion of causality itself has been called into question by the Scottish Enlightenment philosopher David Hume. He pointed out that causality is not something that can be observed like the yellow color of a lemon or the barking sound made by a dog. When a moving billiard ball hits a stationary billiard ball, we may conclude that the first caused the second to move. If we examine our experience more closely, we realize that we have made two observations: the first ball moving, followed by the second. The causal relationship connecting them, however, is an inference imposed by our mind. Our senses can be easily fooled by magnetically controlled billiard balls that sufficiently replicate our prior experiences, in which case our inference would be completely incorrect. Hume points out that what we usually regard as causal truths are often just conventions (also referred to as customs or habits) that have hitherto worked well. We are creatures of habit — we do not reason through every single situation we face — most of us would very much prefer to get on with life by relying on a set of useful assumptions. However, we have to be aware of these shortcuts that we are making.
Most definitions of the word “reason” include the term “logic”. The most rigorous type of logic known to humans is formal logic, which is the foundation of many fields, such as mathematics, computer science, linguistics, and philosophy. Logic provides practitioners across these different fields with watertight deductive systems with which true statements can be properly inferred from prior ones. While logic is traditionally thought of as a primarily abstract and symbolic mental process, I believe that logic has a profound relationship with concrete sensory experiences. A popular form of logical argument is the syllogism (although it is antiquated and no longer central to academic logic). Here is an example: All cats are animals. Jasper is a cat. Thus, Jasper is an animal. Research has shown that people are generally more accurate at deducing logical conclusions when the problems are presented as Venn and Euler diagrams instead of words and symbols. This suggests that even for such seemingly abstract and symbolic mental tasks, our minds find visual representations more intuitive and comprehensible. It is for the same reason that humans find it so difficult to understand any dataset that has more than three variables. We are bounded by three dimensions not only physically but also mentally — at most, we can create a chart with three axes (x, y, z), but we are just not able to envision four or more dimensions. This is the same reason why we can know about a tesseract (or any higher-dimensional hypercube) but can never picture it and therefore never fully understand it. While we are on the subject of diagrams and logic — did you know that a four-circle Venn diagram cannot show all possible intersections of four sets? The closest complete representation requires spheres (3D) or ellipses (2D). Even more astonishing are the Venn diagrams for higher numbers of sets. Perhaps the comprehension of abstract logic does not require these concrete diagrams, but without them such ideas are far less understandable, especially for people who are not logicians. Reason has led us to create machine learning models and scientific theories that utilize high-dimensional space, but we are ultimately only able to grasp them through low-dimensional analogs, which to me suggests that complete understanding is impossible.
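The four-circle claim can be checked with a little arithmetic, using the standard result that n circles in general position divide the plane into at most n² − n + 2 regions: a complete Venn diagram for n sets needs 2ⁿ regions (counting the region outside all sets), and from n = 4 onward the circles simply run out of regions.

```python
# Compare the regions a complete n-set Venn diagram needs (2**n, counting the
# outside region) with the maximum number of regions n circles can create.
for n in range(1, 6):
    needed = 2 ** n               # every in/out combination of n sets
    circles_max = n * n - n + 2   # maximum regions for n circles in general position
    verdict = "OK" if circles_max >= needed else "impossible with circles"
    print(f"{n} sets: need {needed}, circles give at most {circles_max} -> {verdict}")
```

For four sets, 16 regions are needed but four circles yield at most 14, which is why complete four-set diagrams resort to ellipses.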
A fascinating development has occurred in logic in the past century — we now know through logic that there are things that cannot be known through logic. In the early 20th century, David Hilbert, a mathematician who championed a philosophy of mathematics known as formalism, proposed Hilbert’s program, which sought to address the foundational crisis of mathematics. Simply put, the program stated that mathematics can be wholly defined by itself without any internal contradictions. More generally, Hilbert was responding against the notion that there will always be limits to scientific knowledge, epitomized by the Latin maxim, “ignoramus et ignorabimus” (“we do not know and will not know”). Hilbert famously proclaimed in 1930, “Wir müssen wissen – wir werden wissen” (“We must know — we will know”). Unfortunately for Hilbert, just a day before he said that, Kurt Gödel, then a young logician, presented the first of his now-famous incompleteness theorems. (At the risk of sounding simplistic here,) the theorems essentially proved that Hilbert’s program (as originally stated) is impossible — mathematics can neither be completely proven nor be proven free of contradictions. In 1936, Alan Turing (the polymath who later helped break the Enigma code) proved that the halting problem cannot be solved, which paved the way for the discovery of other undecidable problems in mathematics and computation. (Veritasium/Derek Muller made a great explanatory video on this topic.)
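Turing’s argument can be sketched in a few lines of deliberately paradoxical code. Suppose, hypothetically, that a function halts(program, input) could always decide whether a program finishes running; the self-referential troublemaker below then has no consistent behavior, so no such general decider can exist. The halts stub is an assumption for the sake of argument, not a real library function.

```python
def halts(program, program_input) -> bool:
    """Hypothetical oracle: True if program(program_input) eventually halts.
    Turing's proof shows this function cannot actually be written."""
    raise NotImplementedError("no general halting decider can exist")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about running
    # the program on its own source.
    if halts(program, program):
        while True:   # predicted to halt -> loop forever
            pass
    else:
        return        # predicted to loop forever -> halt immediately

# Feeding troublemaker to itself is the contradiction:
# if halts(troublemaker, troublemaker) returns True, it loops forever;
# if it returns False, it halts. Either way the oracle is wrong.
```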
Logic (especially the formal variant) is a specific mental tool. It has limited use in our everyday lives, where we are often faced not only with incomplete information but also questions that cannot be answered by logic alone. Most of us are not logical positivists — we believe that there are meaningful questions beyond the scope of science and logic. That is why we turn to other mental tools in an attempt to figure out the world around us.
You may have noticed that I have used metaphors twice in this essay to describe the relationship between the senses, the mind, and culture. I first compared it to a computer and later invoked the somewhat absurd analogy of a food processor. Metaphors work by drawing specific similarities between something incomprehensible and something that is generally better understood. Language is not only used literally; it is often used figuratively through figures of speech. Metaphors belong to a subcategory of figures of speech called tropes, which are “words or phrases whose contextual meaning differs from the manner or sense in which they are ordinarily used”. While tropes like analogies, metaphors, and similes are used to make certain aspects of an object or idea more relatable, they can ironically also cause us to misunderstand or overconstrue the original thing that we are trying to explain. If I were to take the earlier metaphor out of context — the senses are to the mind what the opening is to a food processor — what am I really saying here? That the mind reduces sense perceptions into smaller bits? Or that the senses are just passive openings to the outside world? Metaphors can easily break down by overextension beyond their intended use. This finicky aspect of metaphors was discussed by the poet Robert Frost in a 1931 talk at Amherst College, where he brought up the metaphor of comparing the universe to a machine. Later in the talk, he states that “All metaphor breaks down somewhere. That is the beauty of it.” While metaphors can clarify a thought at a specific moment, they can never explain the idea in totality.
This substitutive or comparative approach to thinking extends beyond metaphors and related rhetorical devices. We often approximate understanding by substituting an immeasurable or directly unobservable phenomenon with an observable one that we deem close enough. An example of this is proxies, which I explored in a previous essay. Another close cousin is mental models, which attempt to approximate the complex real world using a simplified set of measurable data connected through theory. General examples are statistical models and scientific models; more specifically applied ones are atmospheric models, used to make meteorological predictions, economic models, which have been criticized time and again for their unreliability, and political forecasting models, which famously failed to foresee two historic upsets in the UK and the USA in 2016. The statistician George Box said that “All models are wrong, but some are useful”, a view widely echoed by statisticians since. Models may get us close to understanding our world but are unlikely to ever fully encompass the complexity of reality. A visual model that we use every day without a second thought is the map. As maps are 2D projections of 3D space, they will never accurately represent the earth. The Mercator projection that we are most familiar with (used on Google Maps) is egregiously inaccurate in representing the relative sizes of geographical areas. This topic has been explored by many (National Geographic, Vox, and TED). In particular, some have pointed out how such misrepresentations can undermine global equity.
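The Mercator distortion can be quantified: on a spherical-earth approximation, the projection stretches distances by a factor of about 1/cos(latitude), so apparent areas are inflated by roughly 1/cos²(latitude). A short calculation (a rough sketch, not cartographic-grade math) shows why far-northern landmasses look so oversized.

```python
import math

def mercator_area_inflation(latitude_degrees: float) -> float:
    """Approximate factor by which Mercator inflates areas at a given latitude,
    relative to the equator (spherical-earth approximation)."""
    return 1 / math.cos(math.radians(latitude_degrees)) ** 2

for place, lat in [("Equator", 0), ("Cairo", 30), ("London", 51.5), ("central Greenland", 72)]:
    print(f"{place}: areas appear ~{mercator_area_inflation(lat):.1f}x larger than at the equator")
```

This is part of why Greenland looks comparable to Africa on a Mercator map despite being roughly a fourteenth of its actual area.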
Another way that the mind approximates the understanding of complexity is through heuristics. The American Psychological Association (APA) defines heuristics as “rules-of-thumb that can be applied to guide decision-making based on a more limited subset of the available information.” The study of heuristics in human decision-making was developed by the psychologists Amos Tversky and Daniel Kahneman. Kahneman discusses many of their findings in his bestseller Thinking, Fast and Slow, including various mental shortcuts that the mind takes to arrive at a satisfactory decision. One example is the availability heuristic, which “relies on immediate examples that come to a given person’s mind when evaluating a specific topic.” Are there more words that start with the letter “t” or have “t” as the third letter? We may be inclined to pick the former since it is difficult to recall the latter. However, a quick Google search will show you that there are many more words that have “t” as the third letter (19711) as opposed to the starting letter (13919). This example shows that our understanding is limited by how our mind usually recalls ideas and objects by a specific attribute — in this case, how we remember words by their starting letters. Tversky and Kahneman’s work was inspired by earlier research done by the economist and cognitive psychologist Herbert A. Simon. Simon coined the term “bounded rationality”, which is the notion that under time and mental-capability constraints, humans seek a satisfactory solution rather than an optimal one that takes into account all known factors that may affect the decision.
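The letter-position claim is easy to check against a word list rather than against memory. The sketch below assumes a Unix-style dictionary file at /usr/share/dict/words (present on many systems); the exact counts will differ from the figures quoted above depending on the word list used.

```python
# Count words starting with "t" versus words with "t" as the third letter.
# Assumes a Unix-style word list; counts vary with the dictionary used.
with open("/usr/share/dict/words") as f:
    words = [w.strip().lower() for w in f if w.strip().isalpha()]

starts_with_t = sum(1 for w in words if w.startswith("t"))
third_letter_t = sum(1 for w in words if len(w) >= 3 and w[2] == "t")

print("start with 't':      ", starts_with_t)
print("'t' as third letter: ", third_letter_t)
```

The availability heuristic makes the first count feel larger because words are indexed in memory by their first letter, not their third.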
When faced with a complex world, our minds simplify phenomena into elements that we can understand. Kahneman states that “When faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution.” He calls this process attribute substitution and believes that it causes people to use heuristics and other mental shortcuts. More broadly, simplification is a key pillar in the way that we currently approach our world; this attitude is known as reductionism. It is defined as an “intellectual and philosophical position that interprets a complex system as the sum of its parts.” However, in this process of reduction, holistic aspects and emergent properties are overlooked. Critics have therefore referred to this approach with the pejorative nickname “fragmentalism”. The components that comprise our understanding have gone through a translation from complexity to simplicity. It is not ridiculous to suggest that things are lost in that translation, thus impairing our understanding.
Non-rational processes
Thus far, we have mostly discussed rational (both formal and informal) and conscious mental processes that may inhibit understanding. We will now take a look at how the contrary can do the same. There are two non-rational phenomena that we can explore: intuition and emotion.
Intuition refers to “the ability to understand something instinctively, without the need for conscious reasoning.” A more colloquial definition is a “gut feeling based on experience.” While intuition has its champions, most notably the writer Malcolm Gladwell in his book Blink, it has also been shown to create flawed understanding. Herbert Simon once stated that intuition is “nothing more and nothing less than recognition” of similar prior experience. In his research, Daniel Kahneman found that the development of intuition has three prerequisites: (1) regularity, (2) a lot of practice, and (3) immediate feedback. Based on these requirements, Kahneman believes that the intuitions of some “experts” are suspect. This was shown by research done by the psychologist James Shanteau, who identified several occupations where experienced professionals do not perform better than novices, such as stockbrokers, clinical psychologists, and court judges. In scenarios where intuition cannot be developed, it becomes merely a mental illusion. Kahneman cites the now well-known finding in investing that index funds regularly outperform funds managed by highly paid specialists. Intuition can also often lead us away from correctly understanding the world. This can be demonstrated by the field of probability, which can be very counterintuitive. The Monty Hall problem is a classic example of how our intuition, no matter how self-evident it seems, can fool us (a short simulation after this paragraph makes the result concrete). To me, the term “intuitive understanding” may be an oxymoron or a misnomer because we do not understand our own intuitions. One way to demonstrate understanding is through explanation. A gut feeling may compel us to act in a certain way but crucially we are not able to explain why. If we can, then it is no longer intuition and resembles rationality instead. Looked at this way, intuition is good for taking action and making quick judgments but at best only provides a starting hypothesis for actual understanding.
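For readers who find the Monty Hall result hard to swallow, a simulation is often more persuasive than a verbal argument. Here is a small Monte Carlo sketch in Python (the number of trials is an arbitrary choice of mine, not part of the original problem) showing that switching wins roughly two-thirds of the time while staying wins roughly one-third.

```python
# Monte Carlo sketch of the Monty Hall problem: simulate many games and
# compare the win rates of the "stay" and "switch" strategies.
import random

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

if __name__ == "__main__":
    trials = 100_000
    stay = sum(play(switch=False) for _ in range(trials)) / trials
    swap = sum(play(switch=True) for _ in range(trials)) / trials
    print(f"stay wins ~{stay:.3f}, switch wins ~{swap:.3f}")  # ~0.333 vs ~0.667
```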
Some may argue that emotion does not belong in a discussion about the mind as we tend to associate the mind with thoughts and not feelings. However, we cannot deny that emotion shapes our thoughts and vice versa. Emotion can move us to seek knowledge and understanding but can similarly deter us from them. When we are anxious, we may rush to conclusions without complete understanding. Fear can cause us to accept superstitions that undermine factual understanding. Sometimes, we may refuse to understand something if it can cause us to have a fundamental shift in the way we approach the world (I touched on this in a previous essay). This attitude is summed up by the saying, “ignorance is bliss.” The relationship between emotion and understanding often extends into wider society and will be revisited in the next part of this essay when I discuss culture.
Less conscious phenomena
There are less conscious parts of our mind that impede understanding. There seems to be an inherent structure to our mind and consciousness, which could limit our ability to understand. Historically, there have been two approaches to this: the more philosophically inclined phenomenology and the more empirical study of cognitive science. One idea from phenomenology is intentionality, the notion that “consciousness is always the consciousness of something.” This suggests that we cannot study consciousness directly, but through how it conceives other things. An analogy for this is the light coming out of a headlamp — I am not able to see it directly since it is strapped on my head, but I can understand its qualities (e.g. color and brightness) through the objects that it illuminates. Therefore, we may never be able to fully understand our consciousness. From cognitive science, there are concepts like biases and pattern recognition. Cognitive biases refer to “systematic pattern[s] of deviation from norm or rationality in judgment”, and there is a long list of them. Biases can lead us to only seek information that confirms our prior knowledge, which can be wrong in the first place. The same information, when framed differently, can also appear to us as fundamentally distinct, which seems to reveal a glitch in our understanding. Our mind is also constantly recognizing patterns in our daily life; it is a way in which the mind incorporates new experiences with prior ones. Pattern recognition is also fundamental to essential aspects of being human, like recognizing faces, using language, and appreciating music. However, our minds can also erroneously notice patterns where there are none, a condition known as apophenia. We experience an everyday variant of this whenever we perceive a face in an otherwise faceless object. This can cause us to misunderstand reality, a dangerous example being conspiracy theories that cause people to believe in absolute nonsense.
The mind is always positioned from a subjective perspective. We will never be able to think outside of ourselves. Our personal experiences and temperament can lead us to very different understandings of the world. The sociologist Max Weber pointed out that “All knowledge of cultural reality, as may be seen, is always knowledge from particular points of view.” How do we determine the accuracy of our understanding when there are multiple perspectives? Given the infeasibility of capturing every unique perspective, can we claim to understand subjective experiences? Subjectivity also suggests that there is a limit to the understanding of psychological phenomena. Many subtopics that we have discussed in this essay — senses, rationality, intuition, emotions — are ultimately internal experiences that cannot be confirmed by third-person objective observation. When someone says that they feel happy and another person says that they feel the same, would we ever know if they are experiencing identical feelings?
Similar to how our senses are limited, our minds likely have constraints — we will never know what we cannot know. Our minds are a result of hundreds of millions of years of evolution to ensure survival; the ability to know and understand seems to be a nice side-effect from this perspective. As far as we can tell, human beings represent the universe’s best means of understanding itself, but it is not difficult to imagine our mental capacity as just one point on a long continuum. While we will continue to know and understand more, we should never let hubris deceive us into thinking that our minds will be able to understand all that there is.
Writer’s note: this is part two of a three-part essay. Click here for part three.
In 1758, the father of modern taxonomy, Carl Linnaeus, gave the name “Homo sapiens” to our species. The term means “wise man” in Latin. We mostly stuck with the name, although there have been competing ones offered by various people in the years since. Linnaeus purportedly christened us with “wise” because of our ability to know ourselves. For him, this quality of self-awareness and speech distinguished us from other primates. Therefore, our immediate understanding of ourselves based on this name is that we are capable of acquiring experience, knowledge, and good judgment. Our intelligence and capacity to understand the world around us seem to be some of the defining characteristics of our species that set us apart from our animal cousins. Albert Einstein once said that “The eternal mystery of the world is its comprehensibility… The fact that it is comprehensible is a miracle.” This seeming “comprehensibility” can sometimes cause us to believe that our current understanding of the world has to be the only correct view. I am not trying to deny or belittle the knowledge that has been gathered by the collective human enterprise and its benefits. However, I think that it is necessary to constantly humble ourselves with the unknown and the unknowable — the pursuit of new knowledge lies not in the answers that we already have, but in the questions they lead to. This essay explores the limitations of our senses, mind, and culture in our efforts to know and understand. Knowing and understanding both describe processes of internalization, with the latter suggesting deeper assimilation. The two processes will be differentiated at points of the essay where the distinction is pertinent. Within philosophy, this discussion will be parked under epistemology.
To use the analogy of a computer, our senses are the hardware, our culture is the software and our mind is the operating system, mediating the two. From an anatomical standpoint, humans have not changed for about 200,000 years. For most people, our senses are unchanging biological facts, although we may lose our senses partially or completely due to accidents or through plain senescence. Senses form the connection between our internal and external worlds. Without the ability to see, hear, touch, smell, and taste, our mind is cut off from our environment, which causes a break in the feedback loop for us to perceive our actions. Imagine the simple task of eating using a spoon without any of your senses — not only would the task be impossible to accomplish, but the premise of taking any form of action would also be completely absurd since there is no experience to begin with. This shows how fundamental our senses are to our being.
While our senses are reliable enough for us to conduct our everyday lives, we know that they are by no means transparent communicators of objective reality. Perceptual illusions show that our senses can often be fooled. (It is important to note here that perception is not exclusively within the domain of senses but emerges from the interaction between senses and the mind.) In 2015, “the dress” made huge waves around the internet, dividing netizens into two camps (as the internet does). Half of the internet argued that the image depicted a black and blue dress while the other believed that it was white and gold. (Spoiler alert: it is the former.) In 2018, a similar meme rocked the online world. Instead of an optical illusion, it was an auditory one, known as “Yanny or Laurel”. It got the internet similarly divided. These illusions are not new, however, and are generally known as ambiguous images. The classic “rabbit-duck” illusion was published in a German humor magazine in 1892.
Our vision is the most studied among the senses, possibly due to humans’ outsized reliance on sight. This has led to quite an exhaustive list of optical illusions over the years. Josef Albers, a renowned artist-educator, published his insights on color in his seminal book Interaction of Color in 1963. His theories were inspired by the Gestalt psychology he encountered while at the now-legendary Bauhaus. When I first read it in art school in 2013, I was struck by how timeless it was. Within the book, Albers discussed how color “is almost never seen as it really is” and that “color deceives continually.” Through visual examples, he shows the phenomenon of simultaneous contrast, in which an identical color is perceived as different when placed against differently colored backgrounds. Besides color and tone, our eyes can also misperceive relative sizes; examples of this include the Ebbinghaus illusion and Shepard tables.
Besides perceptual effects of ambiguity and relativity, our perception can also be altered. A few years ago, I tried a miracle berry, which is a fruit that contains the taste modifier miraculin. Eating this berry causes sour foods to taste sweet. Hallucinogens contain psychoactive compounds that cause people to have perceptions in the absence of real external stimuli (i.e. see objects that do not actually exist). Such perceptual alterations may also be a result of illness or physiological processes and responses. Hallucination is a known symptom of Parkinson’s disease and can also be experienced by people right before falling asleep, a phenomenon known as hypnagogia. Research has also shown that our perception of time can change when we experience danger, possibly due to the adrenaline rush caused by the fight-or-flight response. In popular culture, this is sometimes called the slow-mo effect (a metaphor borrowed from video editing).
In some scenarios, one sense can override another. I got to know about the McGurk effect when I was taking a cognitive science class at college. I encourage you to try it for yourself before you continue reading. Go to this YouTube video, click play but do not watch the video. Instead, just listen to the sound and try to identify the sound that is being spoken. (The video is about 1-minute long.) Now, play the video again. This time, listen to the sound while watching the video. You may notice that the sound seems to have conformed to the mouth shape of the person who is speaking. That is to say, the sound that we perceive changes because of conflicting visual information. In this case, our sight has overridden our hearing to produce a different perception of the same sound. Another instance of this is best summed up in a well-known adage among chefs, that “we eat first with our eyes”, commonly attributed to the first-century Roman gourmand Apicius. Research shows that the manner in which food is arranged visually affects our perception of flavor and can cause people to alter their food choices. Sometimes, even different aspects of the same sense can override each other. This is demonstrated by the Stroop effect, in which the name of a color, like “green”, is printed in another color, like red. We take much longer to name the colors of these words, as there is incongruent perceptual information.
Beyond the tendency for illusory perceptions, we know that our senses are simply unable to perceive many phenomena that can now be measured using scientific instruments. Our eyes can only observe a small fraction of the electromagnetic spectrum. Our ears can perceive only frequencies between 20 Hz and 20 kHz. Our sense of smell is deficient compared to dogs, whose incredible noses help humans with law enforcement and perhaps even identify COVID-19. The limitations of our senses lead us to an even bigger question — are there phenomena that we just cannot know simply because we have no way of detecting their existence?
Writer’s note: this is part one of a three-part essay. Click here for part two.
Last Friday, over 10,000 recent graduates of junior colleges (JC) and Millennia Institute (MI) gathered at their alma maters to receive their A-Level results. For these teenagers (most of whom are 18 or 19 years old), this event marks the end of a 14-year-long journey through general education in Singapore, starting from kindergarten and ending in JC. Many other countries adopt a similar general education structure, which is increasingly labeled “K–12” internationally after being coined in the US. To be accurate, a majority of Singaporean students do not graduate from JCs, but from other institutions that provide more specialized or vocational forms of instruction. I am biased towards this particular group of students, however, simply because I taught a tiny slice of the cohort.
For these JC and MI students, receiving their results coincides with a pivotal choice that they will make in their lives. Prior to this point, making an independent decision about what to do with their lives has been rare. They may have to pick a secondary school after their Primary School Leaving Examination and/or choose among JCs and MI after their O-Level exams. However, this juncture is the first time that they have to pick a specialized path, one that will (for the most part) open the doors to a few jobs while simultaneously closing them for many others. The choice is a thorny one, with multiple criteria going into the decision arithmetic — family approval, cost of education, potential career options, passion for the subject, etc. However, if they do choose to continue their education, they will fall back into a familiar routine of structured learning, assessments, and grades.
For as long as we are enrolled in a school, we follow its set of rules, metrics, and schedules. Our performance is neatly packaged into a numerical score or a letter grade, published like clockwork at the end of every academic term. When we are students, these numbers often have an outsized impact on how we feel about ourselves and what we are worth. The power that these numbers have over us is not tied to their primary function, which is to serve as a proxy for our learning, but rather to the larger mechanisms and narratives that they are embedded within. Education is an important tool for social mobility. In Singapore, the data shows that someone with higher qualifications generally earns a higher income. The power of school grades, therefore, lies in their ability to eventually lead to a life that bears the symbols of success. There is a saying in Chinese, “钱不是万能,但没钱万万不能”, which roughly translates into, “Money is not omnipotent, but without money, one cannot accomplish most things.” Singaporeans are known for such pragmatism, which has led to a national success narrative associating three things: good grades, good job, good life. It is unsurprising that after outsourcing our sense of achievement for most of our lives to numbers on a transcript, we hop onto yet another number to measure our success as adults — the amount of money we have. This leads people to think that, “As long as I score good grades, as long as I earn a lot of money — I will be successful and happy!”
If only life were that simple. The narrative that achieving high numbers in grades and income automatically results in success is useful socioeconomically but does not paint the whole picture. We live in a world that requires the consumption of goods and services to keep economies running. Without a functioning economy, governments are unable to generate the tax revenue required to run the state and protect its sovereignty. This necessitates a narrative that posits that the primary contribution that any average citizen makes to a nation-state is through production and consumption. Therefore, the feeling of success is not caused simply by earning loads of cash, but rather by what it means within such a narrative — being a productive member of society. It seems to me, therefore, that at the heart of our various pursuits is a deep longing for meaning and purpose.
Meaning comes in many forms. It can be derived from doing something that we love or doing things for the people we love. An act can be considered meaningful if it affects people in positive ways. Meaning gives us a sense that we have purpose in this world. The difficult part is that the sources of meaning, and the balance among them, differ for everyone at different stages of their lives. There is simply no one-size-fits-all approach to having a meaningful life.
Sometimes I wonder if our reliance on extrinsic markers of achievement impedes our understanding of how we experience and create meaning. There is a lot to life beyond getting a job that pays the bills, so being able to make life judgments is a really important skill. Unlike the ones in tests, many questions in life do not have standard correct answers, nor is the majority approach necessarily the right one for an individual. One would have to evaluate and judge for themselves what is truly fitting for them before taking a leap of faith. No matter how much we know and how certain we are of our convictions, there will always be things that we cannot anticipate.
In life, when and how do we take stock? According to the Oxford dictionary, to take stock is to “review or make an overall assessment of a particular situation, typically as a prelude to making a decision.” I recently turned 30. A few days after my birthday, I got an email from the graduate school of my dreams. It said that I had not been accepted and that only 5% of all applicants were selected. I am happy for those who made it into the program; their dreams live on. However, it is difficult to not feel slightly disappointed at this outcome because it feels at this particular moment that my efforts for the past few years (and if I were to be ludicrous, 30 years of my life) have amounted to nothing. It is easy to wallow in self-pity but it is more meaningful and constructive for me to pull myself together and consider my next steps. In times like these, I personally find it important to be grateful for the journey that I have made so far and the people who are a part of it. Our life stories are woven only in retrospect and I hope that someday, I will see this event as part of a larger unfolding of my life. Life goes on; there is a lot more life ahead of me and I am in for the ride.
We enter the world by gasping for air, almost as if we are being saved from drowning. During gestation, we are flooded by amniotic fluid in our mother’s womb. At birth, the same fluid turns from nourishment to danger, with about 1% of all newborns developing a condition informally known as “wet lung”, which occurs when babies are unable to expel the fluid from their lungs. At the same time, infants younger than six months instinctively demonstrate the diving reflex, which is a set of physiological changes including decreased heart rate and redistribution of blood to the brain when their face is cooled. This reflex is triggered even when air is blown at their face and does not require submersion in water. This seems to suggest that the ability to survive underwater is innately wired in our brains, but this ability weakens as babies mature beyond six months. In adults, the reflex is only triggered when we hold our breaths while submerged in water.
Many artists feature the imagery of deluge in their work. Wassily Kandinsky was an abstract art pioneer whose work explores spiritual experience. In Composition VI, he was interested in evoking the deluge to represent rebirth while, at the same time, ushering in a new approach to art removed from realism. The motif of inundation is also central to many of Bill Viola’s works. In his 2002 video artwork, “The Deluge”, people are seen fleeing as the white building they occupy is destroyed by a torrential flood. In another work, “The Martyrs”, four individuals are shown as they are tortured by the four classical elements — earth, air, fire, and water. The work is presented in a cathedral, where water is imbued with religious symbology. The word deluge comes from the Latin word “diluere”, which means to “wash away”. In the Christian tradition, baptism is a significant religious ceremony that marks an individual’s beginning as a member of the Christian faith. It usually involves immersion in water, with the subsequent re-emergence symbolizing spiritual birth.
The motif of deluge occurs regularly in history. In the Abrahamic religions, the story of Noah’s ark comes to mind. In it, God was angry at humanity’s misdeeds and decided to send a flood to reset the world to its state at the creation. In the process, Noah and his family were spared and promised by God that such an act would never be committed again. The aforementioned baptism is a reminder of this promise and a representation of the flood. In Chinese culture, the Great Flood of Gun-Yu showed the power of human ingenuity and how societal developments led to the first Chinese state, the Xia dynasty. Deluge myths are so common in human history that historians, geologists, and paleontologists often try to piece together the puzzle presented by these legends to separate fact from fiction. Some researchers try to identify planetary events that may be the common source for such stories.
While the search for such an event has mostly led to dead ends, new research suggests that there was a time when the Earth was completely covered in water. Scientists hypothesize that our home planet used to be an ocean world with no continents about three billion years ago; this was a time when the only organisms inhabiting the planet were bacteria. By comparison, biological humans showed up much later to the party — about 2,999,700,000 years late, by current estimates. A more likely candidate may not have been a planetary deluge, but a period of sea-level rise caused by glacial melting known as the Early Holocene Sea Level Rise (EHSLR), which occurred between 12,000 and 7,000 years ago. This coincides with the Neolithic (New Stone Age), during which humans began farming. Farming generally necessitated access to water, which meant that societies congregated near bodies of water. These areas tend to be more affected by changes in precipitation and/or sea-level rise, which may explain our universal fear of flooding.
The term “antediluvian” literally means “belonging to the time before the biblical Flood”. Early attempts at understanding the history of our planet, at least for the West, came from the Bible. With increasing scientific evidence showing the improbability of a literal reading of the Old Testament, such readings became irrelevant in the scientific domain. Nowadays, the term describes things or ideas that are “ridiculously old-fashioned”. Over time, ideas once considered the “gospel truth” (a term that may itself eventually become old-fashioned) have been debunked as misunderstandings of the world. The beauty of history in preserving our follies, and not just the great ideas that have stood the test of time, is a good reminder that we as a species have often got things wrong — sometimes very wrong. Therefore, while we can marvel at the cultural progress that we have made, we should equally be humbled by our mistakes.
When I was a teenager, I stumbled upon a TV show titled “Mermaids: The Body Found”, which purported that aquatic humanoid creatures exist in the sea. It featured interviews with named experts and camera shots that resembled a nature documentary. As a younger person fascinated by scientific discovery, I was excited that this may be a possibility. However, I later realized that it was a work of docufiction, which is fiction presented in the form of a documentary. The film capitalized on cultural artifacts like mermaids and sought to popularize the aquatic ape hypothesis. The theory suggests that humans acquired various biological attributes, like hairlessness, bipedalism, and our superior diving reflex, during a period of aquatic adaptation. The theory has been widely debunked by experts and is currently considered pseudoscience. However, it somehow managed to draw record viewership, with its sequel netting 3.6 million viewers, the largest ever for the nature TV channel, Animal Planet. Discovery Inc, which owns Animal Planet, went on to create more pseudoscientific docufictions that broke further viewership records. For a brand that prides itself on delivering factual content, these programs seem to betray its mission and audience. For me, this experience foreshadowed today’s post-truth and fake-news era.
We are living in a time of informational deluge. Nowadays, facts are less important than engagement and the result of that seems to be perpetual cycles of outrage with no resolution in sight. In his book, “Amusing Ourselves to Death”, writer Neil Postman stated that the world we live in resembles Aldous Huxley’s “Brave New World” more than George Orwell’s “1984”, in that we are inundated with so much information that “we would be reduced to passivity and egoism”. We are currently trying to keep ourselves afloat in this flood, but one cannot help but wonder what will be left in its wake or whether it will be a permanently flooded world. Either way, we will need to evolve new capacities to adapt to these new circumstances.
At the same time, we are also in a climate crisis that would likely lead to a physically flooded world if we continue to dump greenhouse gases into the atmosphere. By current projections, even if global warming is capped at 2°C, at least 570 cities and 800 million lives will be affected by increased flooding by 2050. Many coastal cities are at risk of becoming completely inundated by 2050, forcing the displacement of about 150 million people. This upcoming reality will not only permanently change geography, but will also have profound impacts on culture, society, economy, and politics.
As we get rag-dolled by these double deluges, will we sink or swim?
In a previous essay, I discussed an individual’s capacity for change. In summary, I posited that while certain aspects of our identity are resistant to change, meaningful change can be enacted through reflection and attention. That essay also made references to society, specifically the claim that personal changes can often be traced to societal needs and pressures. Society is not an unchanging monolith, however, and like ourselves, is constantly changing. The relationship between the individual and society is of particular interest to me but will be discussed in more detail in another essay. This essay discusses the narratives of inevitability that we tell ourselves, which can limit our individual and collective agency when it comes to broader changes in culture and society. Two relevant forms of inevitability will be examined. The first assumes that we are at an end-point in history and no further meaningful change can occur. The second is the belief that there is a natural course to history that ensures that specific changes will occur.
The Enlightenment and the project of modernity sought to achieve a universal understanding of the world through reason. A part of this project included theorizing the goals of various academic pursuits. In Aristotelian terms, this is known as the final cause, which Aristotle used to derive the purpose of any given object or animal. For instance, the webbed feet of a duck have the purpose of paddling through water. Another term for this approach to understanding is teleology. Teleology is applied to various fields in modernity to gain clarity on how civilization should proceed. For instance, within the natural sciences, the fields of physics, chemistry and biology differ by their defined goals of inquiry. Physics is concerned with answering questions about matter, motion and energy. If all of the unsolved mysteries of physics are explained (and assuming that no other questions emerge in the field), one can say that physics has ended. To put it another way, this ultimate resolution can be called the “end of physics”. Such proclamations have been made before, not by crackpots but by well-respected experts. Albert Michelson, the first American physicist to receive the Nobel prize, stated in 1894 that within physics, “most of the grand underlying principles have been firmly established” and “the future truths of physical science are to be looked for in the sixth place of decimals.” Michelson’s claim, therefore, is that physics no longer requires additional explanatory theories and that progress in the field is limited to more precise measurements. (This claim is often misattributed to the British physicist Lord Kelvin.)
For hundreds of years, philosophers and other intellectuals have made claims to the “end of history”, which is the concept that there is an end-point in the evolution of political, economic and social systems, which manifests itself as the ultimate form of human organization or government. Beyond this “end of history”, major changes in human systems will cease to occur. In his controversial 1989 essay “The End of History?”, Francis Fukuyama claimed that the combination of liberal democracy and market economy seemed to be the final form of human organization. He based this theory on the defeat of fascism in World War II and the decline of communism, evidenced by the increasing liberalization of the market in the USSR. Almost like clockwork, the Berlin Wall fell a few months after his essay was published and the USSR dissolved two years later in 1991. In the remaining years of the 90s and up until the mid-00s, Fukuyama’s idea seemed to hold. Even Slavoj Zizek said in 2014 that “in a certain sense, almost all of us were Fukuyamaists” as “most of the left, was not raising fundamental questions… They were just trying to make the existing system more just. And more efficient.” The belief in Fukuyama’s claim may have created a blindness to the effects of neoliberal policies, which contributed to the 2007-2008 global financial crisis. The economist Joseph Stiglitz, responding to Fukuyama, titled a 2019 essay “The end of neoliberalism and the rebirth of history.” It is important, therefore, to be skeptical about suggestions that humanity has reached the final stage of its development. Gradual shifts that occur under our noses and unchallenged assumptions can lead to significant societal upheaval.
A related strain of inevitability is the cynical view that nothing fundamentally changes. In response to the French Revolution of 1848, the French writer Jean-Baptiste Alphonse Karr wrote that “the more things change, the more they stay the same.” He was arguing that sweeping societal change only serves to cement existing injustice and inequality. The phrase rings true to many today in the US, who feel that their government only serves to make the rich richer and the poor poorer. Despite being bailed out by US taxpayer money in 2008, JPMorgan CEO Jamie Dimon got a bonus of almost $16 million in 2009. One could argue that it pales in comparison to the $27.8 million that he received in 2007 but it leaves a bad taste, especially for the millions of people who lost their jobs or their homes. However, as the physician and statistician Hans Rosling shows in his book Factfulness, the world as a whole has improved immensely over the past century. Some of these improvements include declines in child labor, nuclear weapons, and smallpox.
For some, the fact that the world is improving causes them to believe that there is a natural course that history takes. This position may be best represented by Dr. Martin Luther King Jr., who (citing the Unitarian minister Theodore Parker) said that “the arc of the moral universe is long, but it bends toward justice.” These words inspire hope, but they may also cause people to feel that there is a natural tendency toward universal human progress that is separate from individual or collective agency. A similar form of optimism was criticized by Voltaire in his satirical novel Candide, whose main character became unable to reconcile the suffering that he observed in the world with the Leibnizian optimism that we are living in the “best of all possible worlds.”
Our discussion leads us back to the intellectual heavyweight who shaped current thought around the “end of history” — Georg Wilhelm Friedrich Hegel. For Hegel, “World history… represents the development of the spirit’s consciousness of its own freedom and of the consequent realization of this freedom.” This means that he believed that freedom is an essential quality of humanity and that sociocultural evolution will always proceed in a way that increases freedom for all people. Similar to Karr, Hegel was affected by the events of the French Revolution but had an almost opposite interpretation. For Hegel, Napoleon’s conquest of much of Europe was one of many world-historical events that allowed humanity to get closer to the final stage of history. Today, some popular interpretations of Hegel view his work on the philosophy of history as a form of inevitable progress, whereas others claim that agency is central in his work. What is apparent to me is how certain groups of people adopt a somewhat Hegelian explanatory approach to justify certain supposed “inevitabilities”. For instance, the rise of automation and its replacement of human labor is increasingly assumed as inevitable. Why is that the case? To me, this so-called inevitability can be explained by the Friedman doctrine that a company’s only goal is to increase shareholder value. Costs are reduced by cutting jobs and investing in automated production capability, which increases company productivity and ultimately enriches shareholders. Therefore, it is important to question the underlying assumptions of people who sell us their version of the future. When necessary, we have to muster the courage to imagine and actualize our own vision.
A unique aspect of human beings is our ability to use abstract and complex language. We can use language not only to communicate ideas but also to think and make sense of the world. For some of us, the latter exists as an internal monologue. Through language, we can name and label tangible objects, intangible experiences, and even abstract concepts that exist primarily in the mind. Labels are very useful as they are efficient pointers to meaning. I can easily communicate to someone on the opposite side of the planet that, “the sky here is blue, with a few fluffy white clouds.” Without much effort, they will almost immediately have a rough mental image of what I am saying. At the same time, what they imagine in their minds will almost certainly not be identical to what I am seeing. Therefore, while the labels “blue” and “fluffy white clouds” are sufficient in evoking a general idea, they fail to capture the specificity and nuances of my experience of the scene. The appropriateness of the labels that I use also differs by context. While the sentence is sufficiently descriptive for a friend asking about the weather, it is likely inadequate for a meteorological report.
Labels, therefore, are simplifications of usually more complex experiences. Additionally, they are ideal versions of whatever they are meant to point to. For instance, most people would say that they know what the word “black” means and will be able to identify black things in their environment. Let us say that we get someone (you can try this too) to look for one black object. Once they have found this object, they are asked to look for another black object, preferably one that is darker than the first. Now, we have two objects in front of us that are black. One of them will likely be darker than the other. By definition, the lighter black is not black, but a grey. Suppose this person repeats this process — they will likely be able to find an even darker black, rendering all previous examples grey. As of now, at least on our planet, this process will lead to the blackest material ever created, which was developed by researchers at MIT. The title was previously held by Vantablack, which caused some controversy when the British artist Anish Kapoor managed to acquire exclusive rights to it. Black is defined as “the very darkest color owing to the absence of or complete absorption of light; the opposite of white.” The only thing that is truly black in our universe is a black hole. However, it is unlikely that anyone will ever perceive one up close unless they are interested in a one-way ticket into the darkness. Yet the fact that we do not need to perceive this true black means that an ideal version of black already exists in our minds. Therefore, while we do experience imperfect instances of black through perception, the concept of pure black is one that is understood by the mind.
This perspective seems to echo Plato’s theory of forms, which posits that true reality exists separate from the physical reality in which we reside. In this higher reality are the ideal and perfect essences of all things, which we can access only through thought and reason. While I do not think that such a realm exists, I do believe that in human language, the use of oppositional labels ultimately leads to the imaginary extrapolation of extremes. To put it another way, whenever we use opposite terms, they become such exaggerations of themselves that they can no longer exist in the real world. To illustrate this, we shall refer to the second half of the definition of black, which mentions “black” as the opposite of “white”. The eye perceives white when the three types of cone cells in our retina are equally stimulated by strong light. Similar to black, we will always be able to find an even brighter white, rendering every other white we have perceived up to that point as a grey. Unlike our search for the purest black, however, our quest for the brightest white will be cut short by permanent eye damage. The film director Ridley Scott once asked, “Life isn’t black and white. It’s a million gray areas, don’t you find?” To which I would agree. Hence, strictly speaking, black and white mostly exist as ideal absolutes in our minds, while the versions that we perceive in everyday life are shades of grey.
This act of labeling applies not only to color but to every other aspect of our lives. Are people (innately) good or evil? Should societies organize themselves around capitalist or socialist economies? Should we prioritize individual freedom or the common good? Should governments be conservative or progressive? While such questions often expect one choice or the other, the actual answer is often a combination of both choices or lies somewhere in between them. We should be cautious whenever a pair of labels is presented to us as binary opposites. Oftentimes, these pairings are arbitrary and not mutually exclusive, creating false dichotomies. Moreover, I think that it is quite unrealistic to assume that the complex richness of our world can be reduced to one simplistic idea. By identifying the gradient that exists between supposed opposites and focusing our attention on appreciating subtlety rather than polarity, we can have much more productive discussions that will expand our knowledge and push us forward.
Additionally, we often look for opposites when they do not exist. Sometimes thinking in a purely binary way is of little use. Instead, we can think about how labels relate to one another and what type of space exists between or among them. Labels can be thought of as points in an indeterminate thought space (similar to the one described by Peter Gärdenfors). By connecting two of these points, we get a one-dimensional line. Sometimes, we can connect three or more of such points, creating two-dimensional planes (a funny example by xkcd) or three-dimensional spaces.
To further complicate the matter, some labels that we use are social constructs. This means that the labels themselves are not fixed but are continually renegotiated within our society. One of the efforts of feminism, for instance, is to question the conventional roles of men and women. This process changes our understanding of these labels and their relation to our identity.
I shall conclude by stressing that just because our labels are simplified abstractions does not mean that they are unimportant or meaningless. Labels are useful as they help us navigate the world and distinguish different experiences and phenomena. Labels may even have a direct effect on our perception. Researchers have found that the language we speak affects the colors that we can perceive. We should just be aware that the world is a lot more complex and dynamic than the labels that we use to represent it. Contrary to my previous statements about black and white, I do not think that we should start calling things dark and light grey. We should not be paralyzed by the ideal quality of labels such that we become afraid to use them. For instance, I believe that gender is non-binary but, to echo a recent opinion piece by Nick Cohen, if I look like a man and act like a man, then maybe I should identify as a man. One of my favorite slang words, which seems to be used with increasing frequency, is “ish”. “Ish” reinjects complexity and approximation back into otherwise oversimplified categorical labels and frees us to use ideal terms in more flexible ways. Hence, I am a man(ish).
Writer’s note: this is part 2 of this essay, click here for part 1.
We have now covered everything in our list except one — belief, which is the thorniest one to deal with. Within cognitive psychology, belief is defined as a “propositional attitude”. The combination of beliefs that one holds forms a worldview (or belief system), which organizes the different experiences and subsequent actions that one takes. Our worldview is such a fundamental part of ourselves that it comes as second nature to us; it is the closest conscious phenomenon we have to our primal instincts. One way to think about different belief systems is through the metaphor of different sports. Many different sports use the same physical space, for example, a field. On a similar field, different games have different rules and objectives, which leads to player actions having very different meanings in each game. In American football, players have multiple ways to score, including touchdowns and field goals. In soccer, the only way to score is by moving the ball into the opponent’s goal. In the former, players grab the ball with their hands, whereas in the latter, it would be considered a foul. The unique gameplay across different team sports also changes the types of roles that are in the team, with each game having its own set of player positions. Similarly, beliefs help people understand what is valuable, make sense of their actions in their society, and identify and perform the roles that they play. This view is summed up by a quote often attributed to C.S. Lewis, “We are what we believe we are.”
We can generally agree that, like cognitive tools, belief is not innate but rather acquired through experience. For instance, we are born with the natural instincts to eat, survive, and procreate, but no one automatically has the belief that they are a citizen of any nation-state. At the same time, however, beliefs are not only hard to change, they are often an aspect of ourselves that we cannot consciously choose, especially if they have been inculcated in us since childhood. Beyond biological relation, a shared worldview is often what ties us to the closest people in our lives. Oftentimes, this shared worldview takes the form of religion. Given the all-or-nothing nature of many religions that proclaim their belief as the sole version of the truth, the choice to leave the religion that one was born into can have grave consequences as it often costs the leaver their family and community. Such conversion (or deconversion) stories have been told by authors like Tara Westover in her best-seller “Educated” and Shulem Deen in his memoir, “All Who Go Do Not Return”. Belief systems stem not only from religion, but also science, ethnicity, nationality, and in this era of fake news, conspiracy theories. The swapping of entire worldviews is usually prompted by pivotal and sometimes traumatic experiences that cause a person to question their fundamental beliefs. A historical example is Leo Tolstoy’s mid-life crisis, which led him to write his seminal essay “A Confession”. Which of us, however, has the choice to dictate what experiences we have in our lives?
Moreover, people usually avoid having their lives upturned. That being said, I do think that people generally want to behave in ways that are mutually beneficial for themselves and their wider community. To do so, we should critically evaluate our beliefs from time to time. This is not easy and requires moral courage because we may have to admit that we were wrong. Drawing our attention inward and reflecting on our own lives is an important element of self-renewal and gaining agency over our own development. The cultivation of inner life, however, may be made increasingly difficult with social media and our digital devices constantly begging for our attention.
A common theme throughout this essay, therefore, seems to be that attention and awareness are crucial in facilitating change in the mutable aspects of ourselves. Even though the body and unconscious mind are resistant to change, the conscious mind is far more pliable — we can learn new knowledge and thinking approaches, revise our base assumptions which help to frame our world, and become better at interpreting our experiences and their meaning in our lives. I would argue that such changes are meaningful and can have a huge impact on an individual’s life and that of their society. We often hear words that describe personal change. Some Protestant Christian churches use the term “born again” to describe the conversion to Christianity. After recovering from a particularly grueling ordeal or brutal setback, we may feel like a “new person”. Needless to say, these are figures of speech, but we find such internal changes so significant that we liken them to rebirth.
It fascinates me how the plasticity of our mind seems well-matched to continual sociocultural change. When Darwin adopted the phrase “survival of the fittest” (coined by the philosopher Herbert Spencer), he was not referring to physical strength but to being “better adapted for the immediate, local environment”. Similarly, our social survival depends on the ability to adapt and/or respond to emerging sociocultural norms. Our mind, therefore, is a tool for us to resist premature obsolescence and remain a part of human discourse. However, just because we are able to change does not necessarily mean that we do. The philosopher John Rawls described our birth as a lottery. Our childhood conditions affect us throughout our lives and are the result of sheer luck. We should acknowledge how we often unwittingly become the people we are. To be an ally of change, both for ourselves and others, we need to practice compassion and non-attachment. Change is difficult — being kind to ourselves and others goes a long way in that struggle. By non-attachment, I do not mean to stop caring about the people you love but rather to give them the space to change. This applies equally to those whom we dislike. If we are too keen on sticking to an impression of a person, we are limiting their ability to change through our interpretation of who they were and how they ought to be.
Some of us may be struggling with who we are or trapped in incessant cycles of thought. Where there is change, there is hope. The belief that we can change gives us hope that tomorrow may be better because the inner conditions that we find ourselves in can and will change.
Writer’s note: this was a difficult one to write; I scrapped an earlier draft completely because the more I wrote, the more I found myself having to account for too many considerations, which led to me feeling like I knew nothing about anything. That feeling prompted me to start over and adopt a structure that provided more focus.
Let me start by saying that this essay will adopt a somewhat unconventional structure. I will state upfront my position on a matter and get toward that destination through a process of elimination. If that sounds like an ignorant student attempting to answer a multiple-choice question by a process of elimination because he is unsure, well yes — today, that student is me.
The topic for today is an individual’s capacity for change. This reminds me of an assignment that I did for my philosophy professor, Prof. James Yess, when we were discussing the topic of free will vs determinism. We were challenged to describe our position in six words, as a sort of homage to Hemingway’s six-word story. I wrote something along the lines of, “Freer — but not free — will exists.” My position here is that of a compatibilist; in short, I believe that individual agency can exist alongside determinism. As it relates to today’s topic, I believe that an individual should only be judged based on the things that they can reasonably change about themselves.
Let us begin by first unpacking the term “self”. We can think of the self from a first-person perspective: a physical body that can be moved by our volition and a conscious mind that thinks, imagines, and remembers, among many other mental actions. Between the false dichotomy of mind and body, we have senses that can receive and interpret external stimuli, feelings that can experience the greatest joy and deepest sadness, and beliefs that seem so deeply ingrained in ourselves that they seem like second nature. Then there are aspects of ourselves that we are often unaware of — the unconscious mind. Before we go through this laundry list to evaluate which elements of the self are more changeable, let us quickly discuss why we would consider changing ourselves in the first place.
If we lived in a world where we were the only human being, we probably would not need to change ourselves that much, with the exception of learning behaviors that prevent physical pain, increase sensorial pleasure and ensure survival by meeting our bodily needs. If we had an anger management problem in such a world, we might not be motivated to change because acting on it would yield little negative impact. Perhaps we might hurt ourselves if we punched a rock — in which case we would change mainly to minimize pain, as mentioned earlier, but not to address the anger. Fortunately, in our reality, no man is an island — we live in an interconnected society that is filled with rich social relationships, where individual acts can have social outcomes. Humans are social creatures and our relationships are very meaningful to us. Therefore, on top of the aforementioned reasons for change, we also try to prevent emotional pain and increase psychological wellness, not just on an individual level but expanded to a wider social dimension. The earlier example of an anger management issue would have more serious consequences due to the potential to harm others. The person would also be more pressured to change due to socioemotional mechanisms of guilt and shame. Many of our personal behavioral changes, therefore, can be traced to our desire to be good for our society.
Now, back to the laundry list – which of the previously stated aspects of the self are we more able to change? Alterations made to the body are commonplace in certain areas of the world and to specific groups of people. However, in general, it is something that is not easily changed. Procedures can be painful, expensive, and sometimes even endanger a person’s life. I guess this is why judging anyone based on how they look feels wrong. Next, the unconscious mind is usually out-of-reach to us unless we seek psychoanalytic intervention, which often requires professional help. It is important to note, however, that the psychoanalytic definition of the unconscious is still debated to this day. If we take the cognitive definition of the unconscious and extend it to the realm of implicit cognitive biases and heuristics (as pioneered by Daniel Kahneman and Amos Tversky), we can counteract some of these automatic processes through conscious compensation. Therefore, we find ourselves in the realm of conscious thought and feeling, which includes sense perception, emotion (a.k.a. affect), cognition, and belief. By definition, we are aware of our consciousness, which makes it the most actively changeable aspect of our self relative to the previous two (i.e. body and unconscious).
We are aware of our sense perceptions, but they are generally unchanged by conscious thought. We can, however, compensate for perceptual illusions by being aware of them. Emotions, especially intense ones like anger and grief, can sometimes be felt viscerally, but they can be regulated through thought. Our emotions often come from our interpretation of certain events that occur in our life. The area of practical philosophy, which aims to aid people in living “wiser, more reflective lives,” has been a central part of philosophers’ work since Laozi and Socrates and likely predates them. Hence, even if we feel strongly about something that happened to us, we are able to respond in a measured way, sometimes by reframing the experience in different ways.
Cognition refers to the mental activities involved in acquiring knowledge and understanding. It can be strengthened through various thinking tools and approaches that we learn and then employ to solve problems and make decisions. It is probably one of the most changeable parts of our mind, as seen from the huge investments that societies around the world put into educating people, especially the young, to read, write and do arithmetic. Based on the World Bank’s figures, we spent around 4.53% of global GDP, equivalent to US$3.68 trillion ($3,682,348,740,000), on education in 2017. Generally speaking, someone who has a better understanding of how the world works should be able to behave in a way that benefits themselves and their society. They may also be in a position that helps them understand complex, strategic, and long-term decisions that require trade-offs, compromises, and short-term sacrifices. Therefore, learning — specifically the acquisition of knowledge and skills — remains a powerful force for both personal improvement and social mobility.
Writer’s note: I realized that this topic cannot be adequately discussed in a single 1000-word essay. Click here for part 2.
Can you recall the last time you counted something? Instead of intuiting our way around the world, we rely on some form of measurement when we deliberate over our choices, especially those of particular significance. We may weigh the pros and cons to make a personal life decision. In a business setting, managers may draw up revenue projections to justify the costs of new investments. Thus, counting plays an important role in rational decision making. Representing aspects of our decisions as numbers and figures can help us view them in a more objective light. Sometimes, counting can also help us gain a more nuanced understanding. Instead of a world where movies are separated into either “good” or “bad”, we have five-star ratings that give us a sense of how much a critic enjoyed a film.
We communicate numbers as a natural part of our everyday lives. If someone tells me that they are 1.9 m (6’3”) tall, I know that they have a towering physique. If someone shows me a score of 200 on an IQ test, I may think, “she is either really smart or faking it… maybe both.” However, not all quantities are created equal. While it is relatively straightforward to measure physical properties like height, measuring conceptual properties like intelligence is far more complicated. Oftentimes, we accept both types of quantities as equally factual when that is not the case. Numbers tend to be communicated in a manner that makes them seem objective and truthful, and we are fooled in the process. Perhaps this Jedi mind trick is a by-product of a world where science is regarded as the best descriptor of objective reality. A claim seems more credible if it states a number or quotes some statistics. Yet the presented number is only as good as the methodology that the researcher used to derive it. A recent example of this abuse of numbers is the Texas Attorney General’s claim that the probability of Joe Biden winning the four swing states was “less than one in a quadrillion to the fourth power”, which has since been refuted by mathematicians.
We use proxies to count the uncountable. The Oxford dictionary defines the word “proxy” as “a figure that can be used to represent the value of something in a calculation.” In the words of this essay, a proxy is a countable approximation of a conceptual property. Let us take the earlier example of intelligence. There is no way of physically measuring someone’s intelligence. Intelligence is an individual’s ability to solve various types of problems, which can only be demonstrated when they solve such problems. Neuroscientists may find correlations between the physical structure of the brain and intelligence in the future, but it is important to remember that these are still separate measurements. This is akin to the difference between a person’s muscle-to-fat ratio and their athletic performance — related but distinct. The widely accepted approach for measuring intelligence today is the IQ test. An IQ test focuses on abstract reasoning, meaning that its definition of intelligence is extremely narrow. Alfred Binet, whose Binet-Simon Scale formed the basis of today’s IQ tests, himself said that such tests are inadequate for measuring intelligence because they do not consider other important aspects like creativity and emotional intelligence.
Another example, one that is close to my heart, is the measurement of learning through testing. Since my days in teaching school, the notion that assessment is one of the three key pillars of any teaching practice has been firmly impressed upon my mind. On its own, learning is an internal phenomenon, known only to the learner. Assessment, which often takes the form of tests and examinations, is used as a means to measure whether students have learned the intended knowledge and/or skills. It is important to remember that while assessment seeks to represent student learning accurately, it is at best an approximation of that invisible process. The gap between learning and tests has been, and will likely continue to be, a matter of debate.
The impact of proxies often extends beyond the immediate measurement. School examination results affect the wider society by allocating greater educational opportunities to better-performing students. Public education is meant to provide equal access to students of all socioeconomic backgrounds, thereby acting as a social leveler and enabling social mobility. However, recent research has shown that a student’s “social class is one of the most significant predictors… of their educational success.” IQ tests have a particularly dark history due to their ties to eugenicists who, based on a simplistic understanding of genetics, believed “that society should keep feebleminded people from having children.”
Proxies also affect our understanding of ourselves. Nowadays, it feels like for something to count, it needs to first be counted. There is even a cultural movement known as the Quantified Self, whose tagline reads “self knowledge through numbers”. To increase our self-esteem, we often chase countable goals — Instagram followers, tweet likes, salary, grades — but to what end? Do we question whether or not these numbers are truly meaningful? The use of proxies will likely only increase with time as computers and artificial intelligence become a bigger part of our everyday lives. Behind any recommendation made by a computer is a series of measurements, sometimes defined by a handful of data scientists, computer programmers, and user experience designers, that make assumptions about our personality and desires. This applies to a wide range of interactions, from the ads we are served on Google to the matches we get on a dating app.
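To make this concrete, here is a deliberately toy sketch of what such a series of measurements might look like behind a recommendation. The proxies, weights, and numbers are all invented for illustration; real systems are far more elaborate, but the principle is the same: someone chose which proxies to count and how much each should matter.

```python
# A toy, hypothetical recommendation score -- every input is a proxy someone
# chose, and every weight encodes an assumption about what we want.

def recommendation_score(minutes_watched, clicks, days_since_last_visit):
    """Combine a few invented proxies into a single 'relevance' number."""
    engagement = 0.6 * minutes_watched + 0.4 * clicks   # assumes watching ~ liking
    recency_penalty = 0.1 * days_since_last_visit       # assumes absence ~ disinterest
    return engagement - recency_penalty

# Two hypothetical users: the one who idly left a video playing "outscores"
# the one who engaged deliberately but visited less often.
print(recommendation_score(minutes_watched=120, clicks=1, days_since_last_visit=14))
print(recommendation_score(minutes_watched=40, clicks=10, days_since_last_visit=2))
```

The point is not that any real platform uses these particular numbers, but that every recommendation quietly encodes judgments like these.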
Every time we accept a proxy figure, we are relying on an individual’s, group’s, or institution’s approach to measurement. Oftentimes, this approach is informed by theories, specific definitions of the measured property, and sometimes value judgments. The proxy figure is therefore not objectively factual; it is derived from a particular perspective. We need to be careful about the numbers that we come across in our everyday lives. The statistician George Box once said, “All models are wrong, but some are useful.” Proxies are, at their essence, models for approximating abstract quantities. While proxies can be useful, we should maintain a healthy dose of skepticism to ensure that they are working properly and to our benefit.
I am turning 30. Writing is one of those things that I have wanted to do for some time but have never found the willpower or courage to start. I have always had the habit of taking short notes on Google Keep, intending to turn them into longer essays, but I have not done so – until now. I would like to make use of this arbitrary multiple of ten to force myself into doing something new.
By the end of the year, I will translate fragments of my thoughts into at least 30 short essays, each about 1000 words long. Each essay will explore a theme about the world in which we live. Some essays may delve into more specific aspects of a prior theme, or explore similar phenomena from a different angle. I endeavor to minimize overlaps but there are bound to be some, especially for conceptually close topics. These essays are meant to be living documents, which may be updated sometime in the future.
I will write weekly, with a new essay up every Sunday. After ten essays, I will take a two-week break to catch my breath and plan for the next ten. I hope to refine my thinking through this process and capture my thoughts in a form that can be revisited in the future. At the risk of sounding slightly pretentious – I do hope that the writing will benefit readers in some way. What is an essay, if it is not to be read?
2020 was the first year that I got a taste of what chronic pain and decline might feel like. The year was riddled with injuries: first my shoulder, then my upper back, and more recently my lower right hip. It is no wonder that for thousands of years, people have yearned for an existence beyond a physical body that becomes obsolete with time. The religious aspire to a spiritual afterlife and, more recently, technologists hope to upload their consciousness into the cloud. The choice of obsolescence as my inaugural topic is no accident. Just like our bodies, ideas have a shelf life; they are especially relevant to a specific period and its zeitgeist. The ideas that I discuss here will eventually be made irrelevant by future discourse. While this admission can be interpreted as intellectual laziness, it can also be recognized as intellectual honesty and humility. Those who take the former view are more aligned with Plato, who believed that the beautiful, the good, and the true should be enduring and stand the test of time. Those who favor the latter view may cite the concept of paradigm shifts, coined by Thomas Kuhn in 1962.
The Cambridge dictionary defines the word “obsolete” as “no longer used or needed, usually because something newer and better has replaced it.” People are generally afraid of becoming obsolete. Many of us derive meaning from being useful to certain causes or to the people we care about. One significant relationship that we have with our society exists economically, through the work that we perform. This is why we equate the question “what do you do?” with “what is your job?” It is also why adults, especially breadwinners, feel immense pressure to always be gainfully employed. Even though our job is not the only role that we play in society, today’s social contract seems to implicitly demand economic participation through individual productivity. This explains the provocation caused by the writer Yuval Noah Harari when he used the term “useless class” to describe the masses of people who may be replaced by automation in the future.
Harari’s conclusion is less surprising to those who have noted the changes in labor and capital over the past century. A major innovation in production came in the form of Fordism in the early 1900s. This approach replaced skilled craftsmen with unskilled laborers whose tasks were highly specific and simple, making individual workers easily replaceable. Our understanding of ourselves is undeniably shaped by our work, which seems to suggest that we are all essentially obsolete as long as some other person or object can provide the same function. In the eyes of our corporate overlords, we are just biological robots being replaced by cheaper electronic ones.
Technology seems to accelerate change and therefore speed up obsolescence. There are now jobs that our parents’ generation would never have fathomed: social media marketer, machine learning engineer, data scientist. People of different age groups also tend to use different social media platforms, which influences how and with whom cultural artifacts are shared. A popular meme or cryptic string of emojis may be easily understood by every teenager but will likely confound most adults. The rise of Twitter has also created an insatiable demand for instant news, which often undermines the reliability and thoughtfulness of the reporting. Oftentimes, erroneous news spreads like wildfire almost instantaneously. By the time it is fact-checked, another headline has taken its place as the focus of outrage, perpetuating an endless cycle of false beliefs.
In the past, the world changed more gradually. There used to be a clearer definition of the roles played by the young, the adult, and the elderly. Age translated into lived experience, which in turn translated into practical wisdom. Older members of the community served as elders to guide the young. With increased digitalization, this relationship has been reversed — people over the age of 65 have been found to be seven times more likely to spread fake news than those between the ages of 18 and 29. Broader shifts in society, including rising inequality, ever-growing college debt, and wider acceptance of climate change, mean that societal norms and life expectations are quickly changing, leaving behind the lives our parents lived. It feels increasingly difficult to be old, not only because the advisory role the old used to play is diminishing but also because newer contexts require active learning.
There are three deaths in Mexican culture, the final one occurring at the moment the deceased is remembered by the living for the last time. Similarly, if we are biologically alive but socially negligible, we are somewhat dead. This explains our fear of obsolescence as a type of death. However, the definition of obsolete includes the words “no longer”, which assumes a distinct prior state. Just as death is the natural outcome of living, perhaps obsolescence can be viewed as the outcome of having been relevant. Seen this way, to be obsolete is a blessing. It means that a person has contributed to creating the present, which future generations will build upon. This is akin to the layers of rock that form the Earth’s crust, each layer preceding the next and marking a different geological epoch. While we will all become irrelevant, we should rejoice in the fact that we paved the way for what is to come. Obsolescence, therefore, is both inevitable and a necessary ingredient for progress.
Accepting obsolescence is not easy, but even more difficult is knowing how hard we should struggle to stay relevant while we are alive. One approach is to take steps to be a good ancestor for posterity. I believe that each generation is tasked with a specific set of challenges and that how we respond will change the course of history. The future does not passively arrive but is actively created through the accumulated choices of everyday people. Our present actions and approaches should respond to the unique issues that we face and build toward a desirable future. While we cannot avoid obsolescence, we can take steps to become obsolete in ways that we prefer.
This essay was written for an undergraduate philosophy class called “Meaning of Life” in the spring of 2014. The lecturer was Prof. James Yess.
Since Nietzsche proclaimed in 1882 that “God is dead”, we have seen the demise of Christianity and of theism in general, especially within the study of philosophy. The de facto worldview today is determinism, a philosophy built on the principle that every effect has a cause. Determinism is further nested within a naturalistic, materialistic picture of reality, which holds that every phenomenon in the world can be attributed to the interaction of matter, made of atoms and molecules. Within the metaphysical context of materialist determinism, different philosophers hold various views, yielding separate and distinct worldviews. Generally, these worldviews belong to two groups, the incompatibilists and the compatibilists. Incompatibilists believe that free will is incompatible with determinism; compatibilists believe the opposite, that the two are not mutually exclusive. It follows that incompatibilists like Honderich believe that an indeterminate self is false, that our actions are caused solely by our environment and dispositions, and that an unfixed future cannot occur within determinism.
Ever since religion receded from the majority of our lives, philosophers have been trying to answer the perennial question of man’s yearning for meaning and purpose in a universe that is neither sentient nor alive. Among those who take the question seriously, some of the more uplifting answers come from the existentialists and the determinists. In general, they have argued that although life itself has no objective value, subjective value can exist. This subjective value is not found but created. The death of God requires man to take the empty driver’s seat. Instead of God’s will, we now form our own wills and pursue them. Man, once a creature, is now a creator. How apt the description “Homo Faber” is in our current paradigm. However, the hard incompatibilist view that Honderich and his colleagues have promoted threatens this outlook. Their belief that there is a fixed future undermines the creative potential that humans have over their future. Instead of letting people own these wills and pursuits, the hard incompatibilist would strike down their hopes and tell them that they have no part to play in the creation and fulfilment of their desires. The hard incompatibilist would wrongly insist that these are merely illusory, that the person has no part to play in the direction of her life, and that her person is merely a combination of dispositions and environmental factors. Herein lies the space of uncertainty, which I term “breathing room”. The breathing room postulates that there is space for man to be a part of the causal process within a deterministic framework. (Within this essay the terms breathing room and space will be used interchangeably.) The exploration of this space seeks to provide an alternative narrative to the claims of hard incompatibilism, grounded in uncertainty over which man has some control. It expounds a worldview that better resembles the everyday experience of man. The breathing room will first be explored within determinism and then within neurology. A hypothesis about the workings of this space, and how it ultimately affects human meaning and purpose, will then be discussed.
The hard incompatibilist claim that the future is fixed is, to me, a far-fetched conclusion drawn from the deterministic premise. First, it seems apparent to me that by projecting the future from their deterministic worldview, hard incompatibilists are going beyond its boundary and putting themselves in a position of unnecessary speculation. Determinism shines most in retrospective reflection on events and occurrences, but its relevance beyond the present is far less clear. Although it may be true that an understanding of our past can lead to a more mindful approach to the future, this sits uneasily with the worldview of the hard incompatibilist. Hard incompatibilists postulate that the future is fixed but cannot be known. Because we lack knowledge of this future, we would live in exactly the same way as we would if multiple futures were possible. To adopt this worldview is to believe that all of our choices are illusory and that there is no way at all for man to have any level of control over his life. The problem with this perspective, though, is that, like religion, it cannot be disproven. To a large extent, it is merely a gross extrapolation of the deterministic worldview. Clearly, if the view that the future is fixed is itself speculative, how definite can the further claim be that our choices, or what Pereboom calls our life-hopes, are illusory? Since our future can never be known to us, it is meaningless to postulate perspectives that would restrict our outlook, especially ones that could lead to an attitude of passivity in life. The breathing room therefore exists in this not-yet-determinate space between past and future, where our choices are made and our actions decided upon.
It seems logical that we would have no control at all over our thoughts and subsequent actions if they stem from our dispositions and environmental causes. However, that claim has to be examined further. To enter our decision-making process, environmental factors first have to be represented within the brain’s network. The external factors are sensed as stimuli that are processed into functional subconscious or conscious information. If a bat is swung quickly towards us, the brain responds by interpreting the fast-moving object as “danger”. Within the brain network there exist mental parallels, or concepts, of “bat”, “speed” and “danger”. If, instead, it was a soft foam tube swung towards us by a child, the concepts evoked within the brain could be “fun”, “squishy” and “safe”. Obviously, within a materialistic context, these mental parallels are physical phenomena, most likely occurring as neurons that are part of a larger neural circuit. Our dispositions are trickier because they can be construed as either an internal or an external factor. A view that treats them as external presupposes a self that is separate from our dispositions, which contradicts the general deterministic view that our self emerges solely out of the activities of our brain. As Daniel Dennett put it succinctly, our consciousness arises from the intricate “ratcheting” of our brain. Hence, it seems logical that our dispositions are subconscious internal factors that, when exposed to external factors, come together to cause an action. However, a component that does not defy deterministic limits can be introduced into this system and form part of causal determination. This component could be the thoughts of the conscious self. The belief that our subconscious greatly shapes our eventual actions does not inherently deny the effect our conscious thoughts have on the formation of choices and actions. Determinists like the Stoics and Descartes maintain that we are selves distinct from our dispositions. Pereboom also maintains that nothing in determinism rules out the view that a self can select principles of action and initiate action on their basis, independently of the influence of her dispositions and environment. These views validate the possible existence of the breathing room, that choice can exist within a deterministic framework, without even introducing compatibilist notions. Instead of Pereboom’s suggestion that a self can initiate action independently, I believe that consciousness, dispositions and environment are all part of a codependent neural system from which decisions are made. This view of the human causal chain empowers people to be active in their decision-making process and not leave all of their choices to impulse and chance. Not only is this model of causal determination more familiar to the common man, it also echoes the beliefs of several philosophers. John Dewey, for example, stated that we do not learn from experience but from reflection on experience. The reflection process is a conscious phenomenon that enriches our brain’s reward centers and stimulates the learning process, calibrating the ratchets of our brain with the lessons learnt.
In Man Against Darkness, Stace states that a man’s actions are as much events in the natural world as is an eclipse of the sun. Although I generally agree with the naturalistic position, I doubt that an eclipse is a good analogy for the processes that occur within our brain. It has been said that there could be more neurons in our brain than stars in the Milky Way. That statement alone is probably enough to show how remarkable the three pounds of matter in our cranium really is. The hard incompatibilist awaits the day when neuroscience provides all the answers to confirm their position. Currently, neuroscience has not painted a complete picture of the brain’s workings. How the eventual findings are interpreted will be crucial to the standing of current philosophical perspectives. It is this breathing room of uncertainty that allows multiple versions of determinism to coexist. For an object as intricate as the brain, I am not sure scientists will ever be able to fully comprehend its vast, inherent complexities. In the face of that, scientists create models that closely represent how the brain works. These models can capture part of the brain’s functional properties, but not all of them. Astronomy is the oldest science and has been around for millennia, yet astronomers still use ever-changing models to understand celestial objects and phenomena. Meteorology has been studied for close to a millennium, yet even now weather forecasts can only do so much in predicting tomorrow’s weather. Although neuroscience will eventually get closer to understanding the fundamental mechanics of the brain, it might never be able to create a model of the brain that can accurately predict the outcomes of brain function. The unexpected or uncertain nature of these outcomes does not stem from randomness of the kind proposed in Heisenberg’s uncertainty principle, but from a fully deterministic dynamical system, as illustrated by chaos theory, where dynamics are extremely sensitive to initial conditions. Until the day that neuroscientists can predict with utmost certainty the entirety of brain function, which would arguably take a very long time, we will never truly understand our ability to choose and affect our own decisions. Therefore, the presence of this breathing room of choice does not conflict with current neuroscientific fact.
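To illustrate the point about deterministic yet unpredictable systems, here is a minimal sketch, not drawn from any neuroscientific model, using the logistic map, a textbook example from chaos theory: every step is fully determined by the rule and the starting point, yet two trajectories that begin a billionth apart soon become unrecognizably different.

```python
# A minimal illustrative sketch: the logistic map is fully deterministic,
# yet it is extremely sensitive to its initial condition.

def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # nearly identical starting condition

for n in (0, 10, 30, 50):
    print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.6f}")
# The difference starts near zero and grows until the two runs bear no
# resemblance, even though every step followed the same deterministic rule.
```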
Thus far, only the existence of the breathing room has been argued for; how it affects human meaning and purpose has not yet been discussed. The space enables consciousness to be a part of the choice-making process, therefore providing a certain amount of agency, though limited, to persons. This limited agency allows people to take ownership of their projects, purposes and pursuits. It was discussed earlier how the brain holds mental parallels of physical phenomena that act as part of the entire neural circuitry. Thus far, we have established that consciousness, dispositions and the environment all contribute to causal determination. Within the brain, these concepts have to be physically similar entities in order to interact with one another. Each of these concepts is material by nature. Within the current neuroscientific understanding, each is either a distinct neuron or a group of neurons that is part of the entire brain network. Essentially, these neurons have the capacity to hold an idea or thought. Philosophers lament the loss of God in our increasingly secular societies, and how that has taken away universal morality and justice. However, to say that we have “lost” God is a misnomer. If God was never there in the first place, how could we have lost her? I argue that what we have lost is the idea of God, and that the idea of God occupies an important space in our brains. Post-theism requires that man’s purpose come from the aspirations that he has willed. Underlying dreams and aspirations are ideals and values. Without a set of ideals and values, we would not be able to create any purpose or meaning, because purpose and meaning have to be put in context. As human beings, we tend to anthropomorphize all that we experience. Every religion therefore has human-like deities and gods. This can be seen even now, when philosophers call the universe “unfeeling” or “apathetic”, which does not make sense because the use of such terms assumes a human nature in the object. It is equivalent to telling jokes to a rock and expecting it to listen and respond; it is false and illusory. Perhaps our biggest error is our desire to humanize every single object and experience we encounter. We set up ideals in our brain and, through religion, we idolize and consolidate these concepts. The power of the idea of God lies in its absolute perfection. Seen from this view, God is merely a human-like manifestation of the greatest of the greatest great. The loss of God therefore entails the loss of a vision of absolute perfection. However, that does not mean that the vacuum cannot be filled. Perhaps one of the most interesting aspects of human cognition is our ability to understand and communicate abstract ideas. Some of these ideas, like love, are instantly relatable and sometimes visceral to most people, even if they cannot put them well into words. Within our brains, these ideals are kept as pure abstract concepts, untainted by the forces of reality. Unlike Bertrand Russell, I believe that there is an authentic space that our ideals can occupy. Our ideals in the brain are neurons in the network like any others, able to affect causal determination. Therefore, our ideals, consciousness, dispositions and environment all play a role in the determination of our lives and choices. Through the pursuit and passing on of our ideals, we can have a purpose that is at once universal and unique. This creates a narrative at both the grand and the personal level.
Each person, through communication and chosen actions, passes on their ideals to the following generation and thus ensures that the goodness of man, and a part of themselves, exists for posterity. The younger generation, on the other hand, goes through a selection process, removing obsolete ideals and strengthening others to fit their newer contexts. Through this reversal, each person becomes a manifestation of their ideals, instead of the traditional opposite, which gave rise to idols. Instead of false deities, we now have real-life heroes embodying certain sets of beliefs.
The problem with this position is that abstract ideas might be less accessible to the uneducated masses than anthropomorphized idols. For that reason, I will never downplay the relevance of religion, especially for those who are born into unfavourable circumstances without a chance at education. That said, the stand taken by this narrative is one that inherently values diversity and a wide variety of ideals and values.
If we are the only conscious organisms in the world, then we are the nervous system, the consciousness, of the universe. As far as we know, we are the only beings able to appreciate the vastness of the universe within our brains. Given this powerful starting point, how can the ultimate narrative of man be that of any other species, to merely survive for a brief moment and perish? Most of us, despite this relatively young Godless context, still aspire to do good. At the point of our death, most of us hope to leave the world a better place. As the late Steve Jobs once said, “We are here to leave a dent in the universe”. The claim is an exaggeration, but we all aspire to affect others and create real positive change in the world through our lives and actions. As Gandhi said, we need to start with ourselves to change the world around us. The hard incompatibilist notion completely denies the possibility of self-change, which undermines our deepest belief in making a difference, be it small or significant, in the lives of those who surround us. Even within a deterministic context, when people recognise this breathing room and start to take ownership of their lives, they realise that they can truly influence their lives and the lives of others. This allows them to take an authentically active approach to their lives. One of the lessons that can be gleaned from the demise of theism is that no matter how great the promised benefits, once people start to doubt the truth of their belief, it will soon crumble. I think there is a parallel between that and the illusory mode of living promoted by several hard incompatibilists. The worst lie one could ever tell is the one told to oneself. Through their actions, deeds and stories, people become manifestations of their ideals, spreading good causes across the human network and allowing their ideas to be carried on by the next generation.
This essay was written for an undergraduate philosophy class called “Philosophy of Death” in the fall of 2013. The lecturer was Prof. Donald Keefer.
In everyday situations, human beings are forced to make decisions based on a set of non-conscious beliefs and value systems. These form part of one’s intuition in dealing with immediate, urgent considerations, usually leaving the person no time to carefully make sense of the given scenario. These intuitions form a set of working principles with which we navigate our world.
One of these working principles, which most would agree with, is the idea that all lives have equal value. When this working principle is put to the test, however, we can easily see how some people are usually “more equal” than others. More often than not, this general principle is overridden by other non-conscious intuitions based on the specific situation faced by an individual. The more interesting observation is how these intuitions seem to be the same for most people. These complex, intuitive value systems appear simply as common sense to most, but their mechanics are completely invisible and yet generally universal.
We shall now turn to a classic thought experiment to test this guiding principle: the trolley problem. Suppose a train is moving at extremely high speed toward a fork in the track, and you are the train operator. Let us assume that the tracks were not properly designed and that both branches of the fork lead to the same destination. It is up to you to decide which track to use when the train reaches the fork. It just so happens that a fifty-year-old man and a baby are standing on either side of the fork. Let us also assume that avoiding the choice is impossible: you have to decide whom you would save. More often than not, respondents to this question choose to save the baby rather than the old man. If the guiding principle that “all lives have equal value” were true, we should statistically expect an equal number of respondents to choose the baby and the old man. A preliminary conclusion at this point, therefore, is that humans are predisposed to believing that the length of a life is related to its value. This suggests that it seems fairer for someone to die if s/he has already lived a relatively longer life. The first guiding principle has been easily thwarted by the introduction of age.
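As an aside, the “equal split” expectation can be made precise: if all lives really were valued equally, the choice should behave like a fair coin flip, and a simple binomial test would tell us how surprising a lopsided result is. The sketch below is only illustrative; the survey counts are invented.

```python
# A hypothetical sketch of testing the "all lives have equal value" principle
# against survey responses. The counts below are invented for illustration.
from math import comb

def binomial_two_sided_p(successes, trials, p=0.5):
    """Two-sided exact binomial test against an even 50/50 split."""
    expected = trials * p
    observed_dev = abs(successes - expected)
    prob = 0.0
    for k in range(trials + 1):
        if abs(k - expected) >= observed_dev:
            prob += comb(trials, k) * (p ** k) * ((1 - p) ** (trials - k))
    return prob

# Hypothetical survey: 100 respondents, 83 choose to save the baby.
p_value = binomial_two_sided_p(successes=83, trials=100)
print(f"p-value under a 50/50 split: {p_value:.2e}")
# A tiny p-value means such a lopsided result is wildly unlikely if both lives
# were truly valued equally -- the principle does not survive the data.
```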
This scenario would be a serious dilemma for most ethical systems. Take, for example, Kantian and Utilitarian ethics. A Kantian ethicist would argue that one has an equal duty to save both lives, but this provides no answer as to which life should be saved. The Utilitarian argument is just as feeble in this context: the decision about who should be saved has to be made by weighing the pains and pleasures that result from the choice. First, to make that analysis within a split second is impossible. Second, the analysis of pain and pleasure is so subjective that one case could easily be argued over the other, given ordinary circumstances (that both individuals have loved ones who would feel pain from their death).
From a purely economic standpoint, saving the baby is not a fiscally wise decision. Due to the intertwined, complex nature of modern civilisation, it is reasonable to argue that our lives are supported by society at large. Most of our essentials are purchased and have passed through the hands of many people before we use them. Everyone therefore incurs a debt to society from the moment they are born, remaining a dependent of the larger society until they become a working adult. A child is nurtured through the care of parents to become a responsible citizen who will eventually contribute to society and slowly begin to pay off his dues. The baby is, and would remain, a dependent for the immediate future of his/her life. The fifty-year-old, however, assuming that s/he has led a normal, productive life, has already paid his/her dues to society and has perhaps already contributed a significant portion to society’s well-being in general. The economic argument for saving the baby, therefore, rests on the potential that s/he has to contribute more back to society than the old man, which is only a hypothetical possibility.
The conundrum of the relationship between the length and the value of lives continues in philosophy. In his Letter to Menoeceus, Epicurus argued that death is not evil but indifferent. Since Epicurus believed in the hedonistic thesis that the human experience boils down to pleasure and pain, much like the proposals of later Utilitarians, death is by itself a neutral occurrence since it takes away the possibility of experiencing both pleasure and pain (Scarre, 87). Epicurus’ argument extends further to the implication that when we die does not matter, because at the point of death, we cease to be.
Feldman tries to refute Epicurus’ argument by proposing hypothetical possible worlds to which one’s life could be compared (Scarre, 91). Feldman argues through the analogy of the dead ploughboy, pointing to the other, better lives he could have led. His case falls apart easily because for every better scenario that can be imagined, a worse outcome can also be fabricated.
In Death, Shelly Kagan argues that death is bad through the deprivation account, which is essentially similar to the arguments made by Feldman. He later concludes that puzzles around the question remain. Before diving too deeply into the argument about the evil of death, one can clearly observe that one cause of all these debates is that humans are intuitively predisposed to believing that a longer life is an inherent good.
However, these arguments do not fully explain our intuition to save the baby, because both individuals have the potential to live long, fruitful lives. Even if we take this assumption into account, the same intuition applies: the baby tends to be saved significantly more often than the old man.
Now assume that you, the train operator, can look into the future and see the lives of these two individuals. Suppose the child and the old man both have an equal amount of time left to live. This additional information shifts the scale, but not significantly. It is almost as if we see our lives as a time bomb, with a set-off time equal to the average life expectancy at any given moment. The longer the time we have left, the more valuable the life of the individual.
However, when more details are added to the situation, the balance tips. Suppose the baby and the old man each have ten more years to live, but the baby will die young of a painful disease whereas the old man will die healthy in his sleep. This additional information makes us want to save the old man more than the baby. Again, suppose the baby does not grow up to lead a fruitful life, for example s/he suffers a depressing illness throughout his/her life, or falls in with the wrong company early on and wastes his/her life as a criminal, whereas the old man goes on to live a relatively shorter but happy remainder of his life. The same intuition, now favoring the old man, applies.
Arguably, this adds another dimension to the progression of our intuition. This series of intuition tests starts to give form to our intangible, complicated intuitions. Our intuition seems to work like a non-conscious operational flow chart, driven by our values and priorities at any given moment. It accepts exceptions to rules and is extremely flexible in dealing with complex situations, and amazingly, all without deliberate, rational thought. At this point, a simplification of our general disposition is that humans value the potential of lives for pleasure. Death terminates this potential, and is therefore seen as an evil.
Although our intuitions give us guiding principles that are very useful in everyday life, we should not stop challenging them through rational thought. Bringing these intuitions to light is important if we are to act on them. These intuition tests reveal the irrational but generally universal traits of human intuition. When we know our tendencies toward certain choices, we can make better assessments and judgments about whether they are truly good decisions. Although humans have the ability to rationalise and make good, deliberate decisions, we have to realise that much of our lives occurs through intuitive, automatic reactions. The analysis of intuition could point toward more robust ethical systems. By understanding our intuitions, we can also make better sense of our impulses and lead more meaningful lives.
Works Cited
Scarre, Geoffrey. Death. Montreal: McGill-Queen’s University Press, 2007. Print.
Kagan, Shelly. Death. New Haven: Yale University Press, 2012. Print.