Chapter 6 Human Rights and Social Well-being

In previous chapters, we explored the benefits that many different AI applications could bring. In this chapter, we examine the impacts, both benefits and concerns, that AI has on human rights and social well-being.

AI has the potential to increase our well-being and improve society, for instance by making government social welfare operations more efficient and by helping the environment through more sustainable use of the planet's resources. On the other hand, AI also poses new risks to human rights as diverse as non-discrimination, privacy, security and freedom of expression.

6.1 Personal Identification and Surveillance

AI-enabled face and voice recognition systems provide immense potential to track and identify individuals. Such systems offer many benefits for national security; however, their widespread use also has significant privacy implications.

Microsoft, in particular, has been vocal in expressing concern over three key implications of the use of facial recognition technology (microsoft?):

“First, especially in its current state of development, certain uses of facial recognition technology increase the risk of decisions and, more generally, outcomes that are biased and, in some cases, in violation of laws prohibiting discrimination. Second, the widespread use of this technology can lead to new intrusions into people’s privacy. And third, the use of facial recognition technology by a government for mass surveillance can encroach on democratic freedoms.”
— Brad Smith, Microsoft President

6.1.1 Surveillance and national security

Case study: Surveillance technology in crisis situations

The following case study is extracted from Chapter 6.2.1 (main?).

In a trial operation to test the use of facial recognition systems, police in India scanned the faces of 45,000 children in various children's homes and established the identities of 2,930 children who had been registered as missing. After bureaucratic difficulties between different agencies and the courts, the Delhi Police were able to utilise two datasets: 60,000 children registered as missing, and 45,000 children residing in care institutions. From these two databases, they were able to identify almost 3,000 matches. Discussions are underway on how to use this system to identify missing children elsewhere in India. A key ingredient in this outcome was the ability of law enforcement to access these datasets.

While AI-enabled surveillance may increase personal safety and reduce crime, the need to ensure that privacy is protected and that such technologies are not used to persecute groups is critical. Authorities need to give careful consideration to the use of AI in surveillance to ensure an appropriate balance is struck between protecting the safety of citizens and adopting intrusive surveillance measures that unfairly harm and persecute innocent people.
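To make the matching step more concrete, the sketch below shows how matches between two face datasets might be computed once each photograph has been reduced to an embedding vector. It is an illustration only, not a description of the system used by the Delhi Police: the 128-dimensional embeddings, the 0.85 similarity threshold and the brute-force comparison are assumptions made for this example, and production systems rely on trained face-embedding models, indexing structures and carefully calibrated thresholds.

```python
# Illustrative sketch of matching one set of face embeddings against another.
# Random vectors stand in for the output of a real face-embedding model.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def find_matches(registry, observations, threshold=0.85):
    """Compare every observed embedding against every registry embedding.

    registry:     {record_id: embedding}, e.g. children registered as missing
    observations: {record_id: embedding}, e.g. children residing in care homes
    Returns (observation_id, registry_id, similarity) tuples above the threshold.
    """
    matches = []
    for obs_id, obs_vec in observations.items():
        for reg_id, reg_vec in registry.items():
            score = cosine_similarity(obs_vec, reg_vec)
            if score >= threshold:
                matches.append((obs_id, reg_id, score))
    return matches


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    registry = {f"missing-{i}": rng.normal(size=128) for i in range(5)}
    # One observation is a noisy copy of a registered face, so it should match.
    observations = {"obs-0": registry["missing-2"] + rng.normal(scale=0.05, size=128)}
    print(find_matches(registry, observations))
```

Even at this level of abstraction, the policy questions discussed above are visible: the choice of threshold determines how often innocent people are falsely matched, and everything hinges on who is permitted to hold and query the registry of faces.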

6.1.2 Employee monitoring

Enterprises are increasingly monitoring employees through a range of AI-powered technologies. Some analyses are based on employees' email and social media usage, while other organisations, such as Westpac, have been using facial recognition and mood detection. Even though these AI technologies provide an innovative (and perhaps more effective and efficient) way to monitor employees, ethical questions have to be asked, and one of the most critical is: "What is the goal of such an exercise?"
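To ground the discussion, the sketch below shows what a very simple text-based mood monitor might look like. It is a toy illustration only: the word lists, the scoring rule and the averaging are assumptions made for this example, and real mood-detection products rely on trained models rather than keyword counts.

```python
# Toy sketch of "taking the pulse" of a team from written messages.
# The lexicons and scoring are illustrative assumptions, not a real product.
import re
from statistics import mean

POSITIVE = {"great", "thanks", "happy", "good", "excellent"}
NEGATIVE = {"stressed", "tired", "frustrated", "overwhelmed", "angry"}


def mood_score(message: str) -> int:
    """Crude sentiment score: +1 per positive word, -1 per negative word."""
    words = re.findall(r"[a-z]+", message.lower())
    return sum(1 for w in words if w in POSITIVE) - sum(1 for w in words if w in NEGATIVE)


def team_pulse(messages: list[str]) -> float:
    """Average mood score across a batch of messages."""
    return mean(mood_score(m) for m in messages) if messages else 0.0


print(team_pulse(["Thanks, that was great!", "I am stressed and overwhelmed."]))
```

Even a toy like this raises the question posed above: the same score could be used to offer timely support to a struggling employee or to discipline staff for not appearing happy enough, and only the goal of the monitoring distinguishes the two.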

Case study: Monitoring employee behaviour with AI

The following case study is extracted from Chapter 6.2.2 (main?).

Westpac Bank is among companies in Australia that are exploring the use of AI-enabled facial recognition technologies to monitor the moods of employees. Representatives have indicated that the goal is to “take the pulse” of teams across the organisation.

The use of AI to monitor employees can be done in ways that are ethical or unethical. The first thing to examine is the ultimate goal of the monitoring: is it to benefit the welfare of employees, or to maximise profit? If, say, people's smiles were logged by a machine and employees were disciplined for not being happy enough, with the goal of putting on a masquerade of happiness to please customers for profit, then the machine would be treating humans as just another component of a profit-generating process. If, on the other hand, people's emotional state were assessed in order to deliver timely psychological assistance to those facing stress or an emotional breakdown, then the technology would be serving people and could be defended on ethical grounds, as long as it respected the autonomy of individuals and their right to choose not to participate.

As highlighted by the National Health and Medical Research Council (NHMRC), when researching or utilising technologies that monitor people's emotions, it is important to ensure that people's autonomy and their right to make their own decisions are respected.

6.2 AI and Employment

AI systems, including robots and chatbots, are revolutionising and automating many industrial operations.

The concern that AI will replace humans in the workforce is attracting increasing public attention. As much as AI systems play an important role in improving industrial tasks and processes, their effect on human-centred jobs and capabilities in the workplace has become a major debate (AIemployment?). As AI progresses, some believe that it will steadily and inevitably take over large sectors of the workforce, bringing mass-scale unemployment and social unrest; the jobs that most people currently do may become obsolete or automated. Others hold a different viewpoint: AI will also create new jobs. For instance, self-driving cars may still need drivers for emergency situations, more AI engineers will be needed to build chatbots for every industry, and, above all, more AI trainers will be needed to teach chatbots to act like humans.

In this section, we explore the impact of AI on employment and the workforce.

PRESCRIBED READING  
Artificial Intelligence: Australia’s Ethics Framework:
- Chapter 6.3 Artificial Intelligence and Employment
- Chapter 6.4 Gender Diversity in AI workforces
- Chapter 6.5 Artificial Intelligence and Indigenous Communities

DISCUSSION
- What impact is AI likely to have on Australian society in terms of workforce and employment?
- How should we prepare ourselves for the AI era?