AI Now’s warning on harmful artificial intelligence

The AI Now Institute’s annual report for 2019 is an eye-opener on some of the risks and impacts of artificial intelligence as its use grows in enterprise, government and domestic environments. Many of the problems are social, cultural, and political rather than primarily technical.

Last year, the Institute highlighted AI’s accountability gap, asking who is responsible when AI systems harm us and how we might remedy those harms. This year, the report focuses on the growing pushback against harmful AI (as shown in their 2019 chart).

While we at Firehead are broadly positive about the adoption of AI, we think companies and communication professionals should be aware of the wider AI environment, both to stay up to date and to inform their strategy in this area.

We’ve pulled out some of the key points for Firehead readers: those considering adopting AI in their companies, and those who work with AI or in digital communications.

Facial and affect recognition

Facial recognition is on the rise and controversial for a number of reasons, not least that it is often carried out without consent, raising data privacy and ethical concerns. According to the report, the legal frameworks to ensure its safe use are not yet fully in place. Even if all technical issues were resolved today, these systems would remain biased and “produce disparate harms, given the racial and income-based disparities of who gets surveilled, tracked and arrested”. This is one of the biggest areas of pushback at present.

Affect recognition is a subset of facial recognition technology that claims to ‘read’ and interpret our facial expressions. It is already being used in classrooms and job interviews, again often without consent. But the 2019 AI Now report notes that “this type of AI phrenology has no reliable scientific foundation”.

Swapping humans for automated decision-making

The report details a few instances where automation without human input has gone wrong, fuelling discrimination on the basis of race, gender and class. Machine-led decisions can have severe impacts on people’s lives.

Often this is because the training data going into the system is flawed, and that is not easily resolved, either at the data-collection stage or at the output stage, since end users are unlikely to challenge computerised decisions.

The politics of classification is a growing issue for those creating and deploying AI, and the report recommends that “machine learning researchers should account for potential risks and harms and better document the origins of their models and data”.

Smart city risks

Everyone from governments to tech companies is looking to maximise the potential of smart cities. But AI bias, combined with the potential rise in smart-city data collection and surveillance, could easily amplify discrimination.

Products such as Amazon’s Ring – a combined doorbell and surveillance video camera – mean privatised mass surveillance is coming in through the front door. The tech giant is both partnering with police forces, who can draw on its data, and patenting facial recognition uses for the footage it collects, for surveillance and as training data.

This is one of many AI products being deployed that could be as harmful as they are useful, given how often AI produces structural discrimination through feedback loops and embedded skews and biases.
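
To make the feedback-loop point concrete, here is a minimal Python sketch of our own (it is not from the AI Now report, and the numbers are invented): two areas with identical true incident rates, where monitoring effort follows past records, end up with an ever-widening recorded gap.

TRUE_RATE = 0.10      # identical true incident rate in both areas (assumed)
PATROLS = 100         # total monitoring capacity per round (assumed)
recorded = {"area_a": 12.0, "area_b": 10.0}   # area_a starts slightly over-recorded

for _ in range(20):
    total = sum(recorded.values())
    # Each area gets patrols in proportion to its share of past records,
    # and each patrol records incidents at the same true rate in both areas.
    recorded = {
        area: count + TRUE_RATE * PATROLS * (count / total)
        for area, count in recorded.items()
    }

print(recorded)   # the initial two-incident gap ends up roughly ten times wider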

Automated worker management

AI platforms are being used for worker management but give workers little or no right to contest their working conditions. Massive automated platforms, such as those used by Uber and Amazon, can “direct worker behaviour, set performance targets, and determine workers’ wages”, leaving workers with little control.

These platforms often hit temporary and contract workers hardest, as when a delivery company kept customer tips rather than passing them on, or when Uber slashed driver pay with a quick update to its platform.

AI’s heavy carbon footprint

If you’re trying to cut carbon emissions or put sustainability at the heart of your business model, it’s good to be aware that AI is highly energy intensive and consumes a large amount of natural resources.

The report highlighted one example where 600,000 pounds of carbon dioxide were emitted from creating just one AI model for natural-language processing – roughly equivalent to 125 roundtrip flights between New York and Beijing.

Take action

Find out about new jobs, partnerships and opportunities in the era of AI with Firehead.

Sign up to the Firehead newsletter to be kept up to date with AI news and views – subscribe here.

Image: (CC) AI Now Institute, 2019.

CJ Walker
