LETTERS TO THE EDITOR
Surveillance vs. safety dilemma of AI
June 12, 2025 00:00:00
As Artificial Intelligence (AI) increasingly integrates into public infrastructure, especially surveillance systems, a critical tension arises: How do we balance public safety with personal privacy and civil liberties?
The government and law enforcement use AI for crime prevention, crowd monitoring, and facial recognition systems to enhance security and emergency response, all the while raising serious ethical, legal, and social concerns. China's Social Credit System, which rewards or penalises citizens based on their behaviour, utilises AI surveillance to prevent crime and terrorism. However, constant monitoring can erode individual privacy.
In other contexts, AI expedites police investigations and improves response times. Yet the same tools, notably the facial recognition technology deployed in London's CCTV network, have provoked public outcry over privacy violations and due process. In response to similar concerns, the use of facial recognition by law enforcement has been banned in cities such as San Francisco, which cited its potential to infringe on civil liberties. AI can reduce bias by relying on data instead of human judgment, but it often inherits biases from its training data, resulting in unfair targeting. AI surveillance also protects vulnerable environments like schools and hospitals, yet it normalises a "surveillance state," making citizens feel constantly observed.
The surveillance versus safety dilemma highlights a fundamental challenge of AI: using technology to protect society without undermining freedoms and rights. The answer is not a complete rejection of AI surveillance; it is ensuring that the technology is transparent, fair, accountable, and centred on human values.
Tanjim Bin Noor
Bachelor of Business Administration
North South University
tanjim.noor@northsouth.edu