Last year, 23 researchers made an interesting proposal: we need a new interdisciplinary scientific field to study the societal consequences of Artificial Intelligence (AI). They coined a term for it, Machine Behaviour. The field would focus on the implications of AI agents’ behaviour rather than on engineering perspectives. It is a hybrid of computer science and social science.
The following are several sample research questions to be studied in the Machine Behaviour domain:
- Democracy: does the algorithm create a filter bubble?
- Autonomous vehicles: how does the car distribute risk between passengers and pedestrians?
- Autonomous weapons: does the weapon distinguish between combatants and civilians?
- Algorithmic pricing: does the algorithm exhibit price discrimination?
- Online dating: does the matching algorithm amplify or reduce homophily?
The motivation is valid. As AI algorithms become more ubiquitous, complex, and opaque, they may produce detrimental outcomes for human beings. But isn’t that the case with all science, technology, and innovation in general, not only AI?
In fact, there was a critical response to that piece in the same journal. The respondents argued that the proposed field has existed since the 1940s under the name Cybernetics: the science of communications and automatic control systems in machines and living things.
Other recent names for the same multidisciplinary field include Science and Technology Studies (STS); Science, Technology and Society (also STS); Science, Technology and Innovation Policy (STIP); and Science and Technology Policy (STP), which was my Master’s Degree program. Scholars in this field come not only from social, behavioural, and political science backgrounds but also from engineering.
Looking at the diagram above, it seems as if the people studying the impact of technology do not use quantitative evidence in their research, implying that they need Machine Behaviour researchers to supply it. But no: quantitative analysis has been part of STS all along.
Claiming that innovators are concerned only with developing machines that achieve specific objectives, such as self-driving cars, without considering how those machines might harm human beings undermines the ethics of computer scientists. Moreover, machines must pass a series of rigorous tests before being distributed to the public. It is common practice in research and development (R&D) to have a dedicated testing and quality-assurance team separate from the developers.
So, do we really need a new term for a specific STS in AI?