Artificial Intelligence Will Change How We Think About Leadership
The growing attention being paid to artificial intelligence raises important questions about its integration with the humanities, according to David De Cremer, author of the book 'Leadership by Algorithm.'
The growing attention being paid to artificial intelligence raises important questions about its integration with the social sciences and the humanities, according to David De Cremer, founder and director of the Centre on AI Technology for Humankind at National University of Singapore Business School. He is the author of the recent book Leadership by Algorithm: Who Leads and Who Follows in the AI Era?
While AI today is capable of performing repetitive tasks and can replace many managerial functions, over time it may acquire the "general intelligence" that humans possess, he said in a recent interview with AI for Business (AIB), a new initiative at Analytics at Wharton. Led by Wharton operations, information and decisions professor Kartik Hosanagar, AIB is a research initiative focused on helping students broaden their knowledge and application of machine learning and understand the business and social implications of AI.
According to De Cremer, AI will never have a "soul" and cannot replace the human leadership qualities that allow people to be creative and hold different perspectives. Leadership is needed to guide the development and application of AI in ways that best serve human needs. "The job of the future may well be a philosopher who understands technology, what it means to our human identity, and what it means to the kind of society we want to see," he said.
An edited transcript of the interview appears below.
AI for Business: Much has been written about artificial intelligence. What inspired you to write Leadership by Algorithm? What gap among books on AI did you want to fill?
David De Cremer: AI has been around for a long time. The term was coined in 1956 and inspired a "first wave" of research that lasted until the mid-1970s. But since the beginning of the 21st century, more and more direct applications have become apparent, changing our attitudes toward the "real" potential of AI. This shift was driven especially by the events in which AI took on the world champions of chess and the Chinese game of Go. Most of the attention went, and still goes, to the technology itself: that the technology acts in ways that seem intelligent, which is also a simple definition of artificial intelligence.
It seems intelligent in the same way human intelligence is. I am not a computer scientist; my background is in behavioral economics. But I realized that the integration between the social sciences, the humanities, and artificial intelligence was not receiving the attention it deserves. Artificial intelligence is meant to create value for a society inhabited by humans; the end user should always be human. This means AI has to act, think, read, and produce output within a social context.
AI is very good at repetitive, routine tasks and at thinking in systematic, consistent ways. This implies that the tasks and jobs most likely to be taken over by AI involve hard skills, not soft skills. In a way, this observation fits what is called Moravec's paradox: what is easy for humans is hard for AI, and what is hard for humans seems easy for AI.
The important conclusion is that in future human development, training our soft skills will become more important, not less important as many people assume. I wanted to make this point because there are many signs today, especially since COVID-19, that we must adapt more and more to new technologies. As a result, the use and influence of AI in our society is being placed in a dominant position. As we are becoming increasingly aware, we are now entering a society where people are told by algorithms what their preferences are, and, without questioning it too much, most people simply comply. Given these circumstances, it no longer seems a wild fantasy that AI could take up leadership positions, which is why I wanted to write this book.
"We are now entering a society where people are told by algorithms what their preferences are, and, without questioning it too much, most people simply comply."
AIB: Is it possible to develop AI in such a way that the technology becomes more efficient without harming humanity? Why does this risk exist? Can it be mitigated?
De Cremer: I believe it is possible. This also relates to the topic of the book. [It is important] that we have the right leadership in place. The book is not only about whether AI will replace leaders; I also emphasize that humans have certain unique qualities that technology will never have. It is hard to put a soul into a machine. If we could do that, we would also understand the secret of life. I am not too optimistic that this will [become reality] in the coming decades, but we do carry an enormous responsibility. We are developing AI, or machines, that can do things we never imagined years ago.
At the same time, because of our unique qualities of having and taking perspectives, thinking proactively, and being able to take things into abstraction, it is up to us how we will use it. If you look at leadership today, I do not see much consensus in the world. We are not paying enough attention to training our leaders: business leaders, political leaders, and societal leaders. We need good leadership education. Training starts with our children. [It is about] how we train them to value creativity, the ability to work with others, to take each other's perspectives, and to learn the responsibilities our society carries. So yes, we can use machines for good if we are clear about what our human identity is and the value we want to create for a humane society.
AIB: Algorithms are becoming an important part of how work is managed. What are the implications?
De Cremer: Algorithms are models that make data intelligent, meaning they help us recognize trends in the world around us that are captured through the data we collect. When analyzed well, data can tell us how to deal with our environment in better and more efficient ways. This is what I try to do at the business school: looking at how we can make our business leaders more tech-savvy in understanding how, where, and why to use algorithms and automation to make decision-making more efficient.
Many business leaders have difficulty articulating the business case for why they should use AI. They struggle to understand what AI can deliver for their companies. At the moment, most of them are swayed by surveys suggesting that, as a business, you have to engage in AI adoption because everyone else is doing it too. But the benefit to your unique company is often poorly understood.
Every company has unique data. You have to leverage that in [shaping] your strategy, and in terms of the value your company can and wants to create. To achieve this, you also have to understand the values that define your company and that distinguish it from your competitors. We are not doing a good job of training our business leaders to think this way. Rather than making them think they have to become coders, they should focus on becoming more tech-savvy so they can execute their business strategy in line with their values in an environment where technology is part of the business process.
This implies that our business leaders understand what an algorithm actually does, but also what its limits are, what its potential is, and, especially, at which point in the company's decision-making chain AI can be used to increase productivity and efficiency. To achieve this, we need leaders tech-savvy enough to combine that understanding with their broad knowledge of business processes to maximize efficiency for the company and society. This is where I see a weakness in many business leaders today.
Without a doubt, AI will be the new coworker. It is important to decide which parts of your business process you automate, where it is possible to take the human out of the loop, and where you keep the human in the loop to ensure that automation and the deployment of AI do not lead to a work culture in which people feel surveilled by machines or treated like robots. We have to be sensitive to these questions. Leaders build cultures, and in doing so they communicate and represent the values and norms the company uses to decide how work needs to be done to create business value.
AIB: Are algorithms replacing the human mind as machines replaced the body? Or are algorithms and machines amplifying the capabilities of the mind and body? Should humans worry that AI will render the mental abilities of humans obsolete or simply change them?
De Cremer: That is one of the big philosophical questions. We can refer to Descartes here, [who articulated the] mind and body [problem]. With the Industrial Revolution, we can say that the body was replaced by the machine. Some people do believe that with artificial intelligence the mind will now be replaced. So, body and mind are basically taken over by machines.
“We can use machines for good if we are clear about what our human identity is and the value we want to create for a humane society.”
As I outlined in my book, there is more sophistication to that. We also know that the body and mind are connected. What connects them is the soul. And that soul is not a machine. The machine at this moment has no real grasp of what it means to understand its environment or how meaning can be inferred from it. Even more important in light of the idea of humanity and AI, a machine does not think about humans, or what it means to be a human. It does not care about humans. If you die today, AI does not worry about that.
So, AI does not have a connection to reality in terms of understanding semantics and deeply felt emotions. AI has no soul. That is essential for body and mind to function. We say that one plus one is three if you want to make a great team. But in this case if we say AI or machines replace the body and then replace the mind, we still have one plus one is two, but we do not have three, we don’t have the magic. Because of that, I do not believe AI is replacing our mind.
Secondly, the simple definition that I postulated earlier is that artificial intelligence represents behaviors, or decisions that are being made by a machine that seem intelligent. That definition is based on the idea that machine intelligence is able to imitate the intelligent behavior that humans show. But, that machines seem able to act in ways like humans does not mean that we are talking about the same kind of intelligence and existence.
When we look at machine learning, it is modeled after neural networks. But we also know, for example, that neuroscience still knows little, maybe not even 10%, of how the brain works. So, we cannot say that we know everything and put that in a machine and argue that it replicated the human mind completely.
The simplest example I always use is that a computer works in ones and zeroes, but people do not work in ones and zeroes. When we talk about ethics with humans, things are mostly never black or white, but rather gray. As humans we are able to make sense of that gray area, because we have developed an intuition, a moral compass in the way we grew up and were educated. As a result, we can make sense of ambiguity. Computers at the moment cannot do that. Interestingly, efforts are being made today to see whether we can train machines like we educate children. If that succeeds, then machines will come closer to dealing with ambiguity as we do.
AIB: What implications do these questions have for leadership? What role can leaders play in encouraging the design of better technology that is used in wiser rather than smarter ways?
“That machines seem able to act in ways like humans does not mean that we are talking about the same kind of intelligence and existence.”
De Cremer: I make a distinction between managers and leaders. When we talk about running an organization, you need both management and leadership. Management provides the foundation for companies to work in a stable and orderly manner. We have procedures so we can make things a little bit more predictable. Since the early 20th century, as companies grew in size, you had to manage companies and [avoid] chaos. Management is thus the opposite of chaos. It is about structuring and [bringing] order to chaos by employing metrics to assess whether goals and KPIs are achieved in more or less predictable ways. In a way, management as we know it is a status-quo-maintaining system.
Leadership, however, is not focused on the status quo but rather deals with change and the responsibility to give direction amid the chaos that comes along with change. That is why it is important for leadership to be able to adapt, to be agile, because once things change, as a leader you are looked upon to [provide solutions]. That is where our abilities come in: to be creative, to think in proactive ways, to understand what value people want to see, and to adapt to ensure that this kind of value is achieved when change sets in.
AI will be extremely applicable to management because management is consistent, it tries to focus on the status quo, and because of its repetitiveness it is in essence a pretty predictable activity … and this is basically also how an algorithm works. AI is already doing this kind of work by predicting the behavior of employees, whether they will leave the company, or whether they are still motivated to do their job. Many managerial decisions are areas where I see algorithms playing a big role. It starts with AI being an advisor, providing information, but then slowly moving into management jobs. I call this management by algorithm: MBA. Theoretically and from a practical point of view, this will happen, because AI as we know it today in organizations is good at working with stationary data sets. It, however, has a problem dealing with complexities. This is where AI, as we know it today, falls short on the leadership front.
Computer scientists working in robotics and with self-driving cars say the biggest challenge for robots is interacting with people, physical contact, and coordinating their movements with the execution of tasks. Basically, it is more difficult for robots to work within the context of teams than it is to send a robot to Mars. The reason for this is that the more complex the environment, the more likely it is that robots will make mistakes. Because we are less tolerant of robots inflicting harm on humans, it thus becomes a dangerous activity to have autonomous robots and vehicles interacting with humans.
Leadership is about dealing with change. It is about making decisions that you know are valuable to humans. You need to understand what it means to be a human, that you can have human concerns, taking into account that you can be compassionate, and you can be humane. At the same time, you need to be able to imagine and be proactive, because your strategy in a changing situation may need to be adjusted to create the same value. You need to be able to make abstraction of this, and AI is not able to do this.
AIB: I am glad you brought up the question of compassion. Do you believe that algorithm-based leadership is capable of empathy, compassion, curiosity, or creativity?
“[Artificial intelligence] has a problem dealing with complexities. This is where AI, as we know it today, falls short on the leadership front.”
De Cremer: Startups and scientists are working on what we call "affective AI." Can AI detect and feel emotions? Conceptually it is easy to understand. So, yes, AI will be able to detect emotions, as long as we have enough training data available. Of course, emotions are complex, also to humans, so really understanding what emotions signify to the human experience is something AI will not be able to do (at least in the decades to come). As I said before, AI does not understand what it means to be human, so taking the emotional intelligence perspective of what makes us human is clearly a limit for machines. That is also why we call it artificial intelligence. It is important to point out that we can also say that humans have an AI; I call that authentic intelligence.
At this moment AI does not have authentic intelligence. People believe that AI systems cannot have authentic emotions and an authentic sense of morality. It is impossible because they do not have the empathic and existential qualities people are equipped with. Also, I am not too sure that algorithms achieve authentic intelligence easily given the fact that they do not have a soul. So, if we cannot infuse them with a common sense that corresponds to the common sense of humans, which can make sense of gray zones and ambiguity, I don’t think they can develop a real sense of empathy, which is authentic and genuine.
What they can learn — and that is because of the imitation principle — is what we call surface-level emotions. They will be able to respond, they will scan your face, they will listen to the tone of your voice, and they will be able to identify categories of emotions and respond to them in ways that humans usually respond. That is a surface-level understanding of the emotions that humans express. And I do believe that this ability will help machines to be efficient in most interactions with humans.
Why will it work? Because as humans we are very attuned to the ability of our interaction partners to respond to our emotions. So almost immediately and unconsciously, when someone pays attention to us, we reciprocate. Recognizing surface-level emotions would already do the trick. The deeper-level emotions correspond with what I call authentic intelligence, which is genuine, and an understanding of those types of emotions is what is needed to develop friendships and long-term connections. AI as we know it today is not even close to such an ability.
With respect to creativity, it is a similar story. Creativity means bringing forward a new idea, something that is new and meaningful to people. It solves a problem that is useful, and it makes sense to people. AI can play a role there, especially in identifying something new. Algorithms are much faster than humans in connecting information because they can scan, analyze, and observe trends in data so much faster than we do. So, in the first stage of creativity, yes, AI can bring things we know together to create a new combination so much faster and better than humans. But humans will be needed to assess whether the new combination makes sense for solving the problems humans want to solve. Creative ideas gain in value when they become meaningful to people, and therefore human supervision will be needed as the final step in the creative process.
“One of the concerns we have today is that machines are not reducing inequality but enhancing it.”
Let me illustrate this point with the following example: Experiments have been conducted where AI was given several ingredients to make pizzas, and some pizzas turned out to be attractive to humans, but other pizzas ended up being products that humans were unlikely to eat, like pineapple with Marmite. Marmite is popular in the U.K., and according to the commercials, people either love it or hate it, so it is a difficult ingredient. AI, however, does not think about whether humans will like such products or find them useful; it just identifies new combinations. So, the human will always be needed to determine whether such ideas will, at the end of the day, be useful and regarded as a meaningful product.
AIB: What are the limits to management by algorithm?
De Cremer: When we look at it from the narrow point of view of management, there are no limits. I believe that AI will be able to do almost any managerial task in the future. That is because of the way we define management as being focused on the idea of creating stability, order, consistency, predictability, by means of using metrics (e.g., KPIs).
AIB: How can we move towards a future where algorithms may not lead but still be at the service of humanity?
De Cremer: First, all managers and leaders will have to understand what AI is. They must understand AI's potential and its limits — where humans must jump in and take responsibility. Humanity is important. We have to make sure that people do not look at technology only from a utility perspective, where it makes a company run more efficiently because it cuts costs by hiring fewer employees or by no longer training people to do certain tasks.
I would like to see a society where people become much more reflective. The job of the future may well be [that of] a philosopher…one who understands technology, what it means to our human identity, and what it means to the kind of society we would like to see. AI also makes us think about who we are as a species. What do we really want to achieve? Once we make AI a coworker, once we make AI a kind of citizen of our societies, I am sure the awareness of the idea “Us versus them” will become directive in the debates and discussions of the kind of institutes, organizations and society we would like to see. I called this awareness the “new diversity” in my book. Humans versus non-humans, or machines: It makes us think also about who we are, and we need that to determine what kind of value we want to create. That value will determine how we are going to use our technology.
One of the concerns we have today is that machines are not reducing inequality but enhancing it. For example, we all know that AI, in order to learn, needs data. But is data widely available to everyone or only to a select few? Well, if we look at the usual suspects — Amazon, Facebook, Apple and so forth — we see that they own most of the data. They applied a business model where the customer became the product itself. Our data are valuable to them. As a result, these companies can run more sophisticated experiments, which are needed to improve AI, which means that the technology is also in the hands of a few. Democracy of data does not exist today. Given that one important future direction in AI research is to make AI more powerful in terms of processing and predicting, there is an obvious fear that if we do not manage AI well, and we don't think about [whether] it is good for society as a whole, we may run into risks. Our future must be one where everyone can be tech-savvy but not one that eliminates our concerns and reflections on human identity. That is the kind of education I would like to see.