The importance of consensual engagement with AI is magnified by the nature of the application, because some uses of AI are inherently more controversial than others. For example, it’s important to note that with My Pictures Matter – a project that crowdsources images from adults of themselves as children, to teach computers to recognise children so that images of child sexual abuse can be tracked down – we are in no way seeking to identify particular children. Having an AI detect whether any children may be present in an image is quite a different proposition from facial recognition that would pinpoint a child’s identity.
Facial recognition, particularly what is known as one-to-many facial recognition, whereby someone’s identity is inferred by automatically matching their face against a large database of faces, is an especially fraught application of AI.
Facial recognition is just one area where AI presents a major ethical challenge for our society. Credit: Marija Ercegovac
Used to effect what is essentially biometric surveillance, it is the kind of AI application that has a lot of people worried. It is certainly more contentious than one-to-one facial recognition, whereby a face is analysed for a match against one identity only – this is the kind of facial recognition involved in logging into your phone, or being scanned at automatic passport control gates. Complicating matters is the fact that, at the time of writing, Australia does not have specific laws regulating the use of facial recognition, beyond provisions of the Privacy Act (though a model law has been proposed).
It’s not just governmental or law enforcement use of this technology – where it can be argued there is ostensibly more oversight – that has caused concern. Australian consumer advocacy group Choice raised eyebrows with a 2022 report that exposed the use of facial recognition in the retail chains Kmart, Bunnings and The Good Guys. The report found that most customers had no idea they were being surveilled – few noticed or remembered the physical signage at store entrances, which the report described as manifestly inadequate.
Bunnings argued that its facial recognition was designed to recognise individuals who had previously been involved in “incidents of concern”. Although this is broadly akin to the use of facial recognition in casinos to enforce bans on individuals, or in security applications at other premises, for many the move into more general retail settings represents a disturbing creep of the technology.
Taking things up a notch, another prominent example of the technology is Clearview AI, described by one media outlet as the “world’s most controversial company”.
Clearview unashamedly advocates one-to-many facial recognition at enormous scale: its system rapidly matches an uploaded image against a database of tens of billions of images scraped from public sources on the internet. The software is marketed largely to law enforcement agencies and has had spectacular success in identifying offenders and disrupting serious crime. But in late 2021, Clearview found itself at the centre of a worldwide storm over privacy. The Office of the Australian Information Commissioner found that the company had breached Australian privacy laws, with the commissioner stating: “The indiscriminate scraping of people’s facial images, only a fraction of whom would ever be connected with law enforcement investigations, may adversely impact the personal freedoms of all Australians who perceive themselves to be under surveillance.”
At the time of writing, Clearview was appealing this ruling, but for now the use of the company’s database in Australia is essentially banned. In the United States, a lawsuit led by the American Civil Liberties Union has seen Clearview agree to restrict the distribution of its database to local law enforcement agencies (while still marketing it to federal agencies).
Google and Facebook, the sources of much of Clearview’s data, have determined that the company breached their terms of use and have demanded that it stop scraping their sites for facial images.
The debate around Clearview remains interesting, however, because the software serves a clear public interest: it has helped apprehend serious criminal offenders, with demonstrable benefits to community safety. The technology has been used in the pursuit of child abusers, and multiple children have been rescued from harm.
When there is such evidence that technology is saving lives, the question is how to effectively regulate it worldwide so that it can be used for these purposes, while at the same time upholding the human rights and privacy of individuals.
Human rights should always be a consideration with respect to any technology. It’s just that AI, and especially the pace of its development, is challenging regulators as they’ve never been challenged before.
Whether it concerns hot-button applications such as facial recognition or the other AI systems we encounter, the issue of consent is complicated. How many of us read the privacy policies associated with the myriad devices and apps that collect our personal information? It is also becoming increasingly difficult – or at least extremely inconvenient – to opt out of the AI-powered services we interact with.
More and more of these services are being moved onto automated, algorithmic systems. Organisations known as data brokers, whose sole purpose is to buy and sell personal data, are part of a multibillion-dollar market that undoubtedly fuels much of AI.
Another consideration when it comes to all of this data harvesting is that the data needs to be stored somewhere. Yet, more and more, dedicated data centres are being recognised as an environmental concern – they collectively contributed close to 1 per cent of global carbon dioxide emissions in 2021, a number that has likely risen since then. Organisations also hold on to a lot of data they most likely don’t need, sometimes in the pursuit of data-led AI decision-making that may have only a marginal impact on their operations.
Historical data frequently is not destroyed when it should be, or data needed only for a one-time purpose is inappropriately retained. This produces numerous “honeypots” of personal data around the world that are frequently breached, with the data leaked and used for nefarious purposes such as identity theft and other cybercrimes.
Furthermore, almost every digital service asks users to create yet another account so their activities can be tracked and their data harvested – and likely fed into AI systems of various kinds for analyses and predictions. More data accounts means more personal data spread across the internet, and more opportunities for criminals to exploit this data.
Is it possible we have become the metaphorical slowly boiled frog with respect to the use of personal data by AI, perhaps only now beginning to realise that the temperature in our pot is uncomfortably high? And even if we have, do many of us really care how our data is being used?
After all, by using Facebook, billions of people are freely sharing significant amounts of personal information, probably without much regard to how it is being used. Whether we should care is a personal issue, but it’s hard to care about something if you don’t know it’s happening. And that’s where we need a lot more transparency when it comes to how data, and the AI powered by it, are being used.
This is an edited extract from Living with AI by Campbell Wilson, the 30th title in Monash University Publishing’s ‘In the National Interest’ series, out next week.