This ad campaign by Activista, mainly targeting SpaceX on Earth Day – I believe that was in 2021 – is brilliant. It helps put things into perspective in terms of how we approach our resources and the earth.
The message still rings true today, and in many ways it says something about the human heart. Our wandering hearts often look for something else to sustain us – something that may not be designed to sustain us, yet we want to make it what our lives depend upon.
Yes, as a Christian, I’m talking about Christ, who provides the salvation we need when we are wandering about seeking salvation through our work, relationships and other forms of addiction in our lives.
Dr Janeway's article on False Economies highlights some of the philosophical underpinnings of the modern, capitalistic study of economics that drive the system to behave in ways that, at times, endanger the entire economy's long-term prospects.
There were so many different themes brought out in the article that are worth further investigation and appreciation. The point that Arrow-Debreu's work implies our markets in reality would never be efficient is something we do not embrace enough – especially in public policy.
The lack of political courage and the unwillingness to be accountable for policy decisions drive the notion that we must 'leave things to the market'. And today, with the world facing the climate challenge, I do not believe that the market is the solution. The political will to align incentives, define standards and mobilise efforts is necessary.
The recent Oxfam study about the rich getting richer faster than the poor are being uplifted shows that, indeed, we have enough money to deal with the world's problems. But far too often, it is either in the wrong hands or working towards the wrong goals. Economics assumes the market will direct resources to the 'right goals', but this goal-selection process is at present dysfunctional.
There is a fair bit of stress associated with uncertainty, and we know it. Yet modernity gives us many tools to prepare and to make parts of the future certain, which only makes us crave more control and perhaps heightens our expectation that the uncertainty can be eliminated.
So part of our stress now comes from the expectation of certainty. We no longer know how to enjoy flexibility and embrace the dynamism that exists in uncertainty. And then, when everything is under control, we find ourselves bored, craving some kind of variation, and so on.
As the aspects of work that have complete certainty slowly get outsourced to computers, robots and perhaps even artificial intelligence, we are going to be left with the harder bits of work: the ones that require us to actually embrace uncertainty, the type where no one knows the answer. We need to regain our ability to think and solve problems bit by bit, as opposed to treating everything as though there has to be a right answer and we have to get it right.
We are embarrassed by our mistakes. We need to get over them, and often we do so by avoiding them – please don't talk about it or revisit the experience. That can be psychologically comforting. But are we doing justice to the cost that we bear for those mistakes?
I've written quite a fair bit in the past about the social or cultural attitudes towards mistakes, and I think a lot of the ideas are still worth exploring:
All of this is so that we can build and develop wisdom, where we know how to work within and navigate a dynamic environment. The problem with theoretical approaches and specific methodologies to achieving outcomes is that they assume that there is an ordered, stable environment within which we conduct our activities. Sometimes, that is just not exactly the case.
The only time you have to say something is a feature, not a bug, is when it appears to be a flaw. The notion behind this is that there was an intention: that aspect of the software, product design or service experience was not supposed to be a flaw but an intentional part of the design, serving some objective.
People might think it was a bug because:
They had different objectives from those the product designer had imagined their users to have
They were not the target audience of the product/service
They were forcing the product to fit their needs
They did not know how to use the product – which could reflect badly on the UI design or on whatever instructions were provided
The product had a poor product-market fit
The product designers were giving excuses for themselves
There isn't supposed to be a debate about whether something is a feature or a bug. It should always be resolved by the one who designed the product/service. If it was the result of something being overlooked, it is a bug, and pointing out that it could be a feature is just an excuse.
You bought an expensive foie gras meal and paid for it, but you can't finish it. So who foots the bill?
What if you finish it and get sick as a result? Is the doctor's fee part of your foie gras bill?
If you don't finish it, and it goes into a food waste heap that requires public subsidy to manage and clean up, are the taxpayers footing your bill?
Would knowing all that change your decision to buy that foie gras meal?
What if you knew the future path of your choices? Who would you allow to foot the bill? How far ahead would you care about the consequences of your actions?
This is a story about externalities, costs and consequences. Who should care? Who should we care for? How much should we care? No one teaches us all this. We have to work it out and make decisions.
The faceless corporate has been painted as the enemy of man in popular culture and broader artistic endeavour. The idea is haunting: some kind of machinery driving its machinations through its cogs and gears to achieve some broad, vague goal that sounds appealing in concept but is nefarious in practice.
Of course, the reality is that it is not just the corporate that can behave and seem this way. There is the bureaucracy that is a manifestation of a "government" or even a non-profit. There are also loose organisations centred on single-dimensional stuff (hobbies, interest groups, certain kinds of political activism, etc.).
The point is that this idea of a "corporate", or some kind of machinery, is antithetical to being human. Why would that be so? Here's the tricky part.
We are all so complex and multi-dimensional that creating singular objectives or goals and relentlessly pursuing them reduces us to something less than human. And those "big entities" essentially embody this limited dimensionality compared to what life really is. The same goes for money, when we make everything in business about it. We reduce richness to riches. What a shame.
We don’t have to be anti-corporate. But we probably would do better to understand why its reach should not be all-extending.
We all want to make the world a better place. And in Singapore, we’ve somewhat cultivated the idea that we need to force people to take the right action or they won’t. Often it is because they will point to others who have not done it and say ‘why don’t you ask them?’
The people who failed to bring their trays back to the shelves at the hawker centres before NEA's mandate had excuses – they were busy, the cleaners had to have something to do, they forgot, and so on. But it was never clear enough that they 'had to' do it. Once the mandate and the penalties came, it was clear. As clear as day. So mandates make requirements clear to a large extent. They make people sit up and recognise they have to take some action – more so than the consequences of dirty hawker centres, or having to take over a messy table.
What can we learn from this that we can apply to climate change?
If we don’t feel hit by the experience of a messy, unclean hawker centre, it is even harder to feel like we need to take any particular course of action just because we have a few more hot days. After all, one could turn up the air-conditioning (which worsens the problem at the system level). So mandates are needed to help with the coordination. The direct consequences alone are insufficient because of externalities, so the government should step in to ‘make them feel the pain’.
What does a job mean for you? What is work to you?
It used to be just a task, or a collection of tasks, that had to be done. The tasks were easily connected to the end goals.
Then things got complex: the tasks were still clear, but they felt more distant from the ultimate outcomes that the whole lot of people were trying to achieve.
Finally, we did away with task-based identification of the work and changed parts of the work to be based on creating some kind of outcome. In trying to connect the outcomes to the person, we lost clarity on the specific tasks required. That can lead to undisciplined exhaustion of energies and burnout.
On the other hand, for all the jobs where tasks can be clearly specified, technology has been used to displace human workers, leaving humans only to supervise or check through the results. In fact, at some point even the quality checks can be automated.
Where does that leave us? What does that mean about the future of work?
The future of work can be meaningful if we resume our human role of caring about whom the outcome of the work is for, and the manner in which the work is done. We carve out that higher role for ourselves by being capable of continuous improvement that focuses on the final objective of the work itself – the satisfaction of the user.
Everything that has become a norm in our lives went through some hype cycle. In essence, people overblow its usefulness and think these things are going to change the world; then, when it doesn't change the world overnight, things come crashing down for a while before it goes on to slowly change the world. The internet is probably the best example. In the 90s, people were sure that the internet was going to change everything, and so it went through a bit of a hype. So did computers in the 80s. But after the hype, things crashed, and then life went on – except it got changed bit by bit, steadily and surely.
Generative AI is itself going through a bit of that; we are all sure things are going to turn out great. Of course, some doomsayers will be warning the world of the problems and calamity it would bring – just as Socrates thought writing was a poor form of communication that would bring about a decline in men's memory. I think it is probably necessary to create more safeguards for AI and to allow governance to evolve with its development.
I think Gen AI will be helping to augment the capabilities of human workers for a really long time before it comes to 'replace' workers, so to speak. Yes, of course you could use some kind of AI technology to help handle a conversation at the call centre, but it is not going to be able to handle 100% of the queries; you will eventually still have a human in place. Consultants, for example, who might have been spending time copy-pasting or doing data-entry type work might lose their jobs, but there will still be someone senior who needs to intervene.
The real economic challenge for us is how we are going to let Gen AI do the so-called low-level jobs while maintaining a pathway to train junior workers into capable senior workers. Sure, there is grunt work that has to be done, but traditionally the juniors learn the ropes by doing that work. If it is going to be performed by Gen AI, then how on earth are they going to get the chance to learn?
There are still substantial job opportunities that are slightly underpaid but cannot be replaced by Gen AI. This work is underpaid either because of systematic biases in our economic systems or as a function of labour-market rigidities. It includes caregiving and pastoral-guidance roles, as well as all the cases where it is important to have a human example who can model moral character and other crucial human attributes. No kid is going to see the politeness of a Gen AI figure or speech bot and decide to be courteous because the bot is a role model.
To me, those problems will need to be gradually resolved before we allow AI to play a bigger part in people's lives. Part of the way some of these problems get resolved is actually through mutual cancellation with the demographic transition challenge: economies that are mature and have severely ageing populations will need to rely on AI for many things. Improving labour mobility globally should slow down the need for that, but it is inevitable that these markets, which have the resources, will play the early adopters' role.