Apple has announced plans to roll out a safety feature to protect children from explicit, lewd, or offensive images. Whilst some welcome the change, privacy rights groups warn that the software may be an invasion of privacy and could lead to mass surveillance if misused.
The plans drawn up by Apple use artificial intelligence (AI) to scan images sent and received on children’s phones and compare them against known explicit imagery. If an image is deemed similar, it will be blurred and an option to notify an adult will appear. Apple hopes this technology will keep children safer on the internet, a growing concern for parents and technology companies alike. All scanning will take place “on-device”, and Apple says it will not be able to access the images.
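For illustration only, here is a minimal sketch of how this kind of on-device similarity check might work in principle, using a simple average-hash perceptual fingerprint and the Pillow imaging library. Apple has not published its matching algorithm, so the hash method, the match threshold, the file names, and the blur-and-notify step below are all assumptions for the sake of the example, not Apple’s implementation.

```python
from PIL import Image, ImageFilter

HASH_SIZE = 8            # 8x8 grid -> 64-bit hash
MATCH_THRESHOLD = 10     # max differing bits to count as "similar" (assumed value)

def average_hash(path: str) -> int:
    """Compute a simple perceptual hash: shrink to 8x8, convert to
    greyscale, then set a bit for each pixel brighter than the mean."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of bits that differ between two hashes."""
    return bin(a ^ b).count("1")

def is_similar(path: str, known_hashes: set[int]) -> bool:
    """True if the image falls within the threshold of any known hash."""
    h = average_hash(path)
    return any(hamming_distance(h, k) <= MATCH_THRESHOLD for k in known_hashes)

# Hypothetical flow: blur the image locally and offer to notify a parent.
# Nothing leaves the device; only compact hashes are ever compared.
if is_similar("incoming.jpg", known_hashes={0x8F3A_22C1_90DE_47B5}):
    Image.open("incoming.jpg").filter(ImageFilter.GaussianBlur(12)).save("blurred.jpg")
    print("Image blurred; option to notify an adult would be shown here.")
```

The key design point, and the basis of Apple’s “on-device” claim, is that only compact fingerprints need to be compared, so the matching can happen entirely on the phone without the photos themselves being uploaded anywhere.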
This type of data scanning is known as client-side scanning (CSS), and privacy groups say it could lead to “surveillance at a new level”. Experts worry that CSS could be misused, granting access to billions of devices and exposing people’s data without their knowledge. Activists fear that, however well-meaning, this could be the start of a slippery slope leading to further invasions of individual privacy. If governments gained control of data from citizens’ devices, it would be a clear breach of privacy laws. Apple has caved to pressure from states before, whether by removing Russian opposition figure Alexei Navalny’s tactical voting app or by moving servers to state-owned Chinese data centres.
Apple maintains that the measures are intended solely to protect children, and experts agree that protections need to be in place. That, however, fails to convince some in the industry that Apple and other companies won’t overreach when it comes to personal data.
What do you think?