
# Don’t look at me

Ordinary surveillance cameras in big cities have long since given way to entire systems equipped with artificial intelligence. For better or worse, many of us now have to accept as fact what we once read about in dystopian fiction.

Krzysztof Szpak has analyzed how AI surveillance systems on city streets are built and why governments rushed to implement them everywhere.

## Flock Police

In September 2025, a police officer turned up with a court summons at the door of Krisanna Elzer’s house in Denver. She was accused of stealing a package from someone else’s porch in a neighboring town.

The evidence was footage from the automated surveillance system of the commercial company Flock Safety showing Elzer’s car. The officer, however, refused to share those details with the suspect and suggested she present her objections in court.

“You know we have cameras in that city. Not a single breath of fresh air goes unnoticed by us,” the officer explained.

Confident in her innocence, Elzer began gathering her own evidence. On the day of the crime, she was nearby—visiting a tailor—but she did not steal any packages.

She collected data from GPS trackers in her phone and car apps, dashcam recordings, oral testimonies, and even photos of her clothing worn that day.

After numerous unsuccessful attempts to share information with authorities, the suspect contacted the police chief directly. He praised her efforts and told her that the court summons had been canceled.

As of December 2025, Flock Safety offered access to 80,000 cameras across 49 U.S. states.

## From a Van with Monitors to Crime Prediction

Surveillance cameras on streets, in stores, and institutions have long been commonplace. However, modern smart cameras and data processing methods are something new compared to their predecessors.

### Analog CCTV Era

Once, the term CCTV (Closed-circuit television) referred to a closed network of cameras whose signals fed to a dozen monitors in front of a bored security guard at a shopping mall.

The technology was limited to video sensors, screens, and recording equipment.

Law enforcement agencies have been experimenting with surveillance systems since at least the mid-20th century.

*Test of a British police CCTV system at Trafalgar Square, 1960. Source: The National Archives.*

In 1960, British police tested two surveillance cameras at Trafalgar Square during a visit by the King and Queen of Thailand. Monitors were placed in a van near the installation site. The experiment revealed several technical issues and elicited mixed reactions.

In 1979, the UK government research agency Police Scientific Development Branch developed ANPR (Automatic Number Plate Recognition) technology based on the optical character recognition methods available at the time.
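
The basic ANPR pipeline has stayed conceptually the same: find a plate-shaped region in the frame, then run character recognition on it. Below is a minimal sketch of that idea in Python, using the open-source OpenCV and Tesseract libraries as stand-ins; it is an illustration of the general approach, not the PSDB’s original implementation.

```python
# Illustrative ANPR pipeline: locate a plate-like region, then run OCR on it.
# A sketch of the general idea, not any deployed system's code.
import cv2
import pytesseract

def read_plate(image_path: str) -> str | None:
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)

    # Look for a roughly rectangular, plate-proportioned contour.
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    for contour in sorted(contours, key=cv2.contourArea, reverse=True):
        x, y, w, h = cv2.boundingRect(contour)
        if 2.0 < w / h < 6.0 and w > 80:              # typical plate aspect ratio
            plate = gray[y:y + h, x:x + w]
            text = pytesseract.image_to_string(
                plate, config="--psm 7"                # treat the crop as one text line
            )
            cleaned = "".join(ch for ch in text if ch.isalnum())
            return cleaned or None
    return None
```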

By the 1990s, cameras at intersections and building facades became standard. Law enforcement integrated CCTV and ANPR into their daily toolkit.

### Smart Cameras

With miniaturization of computer components, the growth of universal connectivity, and the advent of AI, traditional CCTV is giving way to smart cameras with centralized databases and automatic data analysis systems.

These devices are equipped with their own processors running an OS, storage, interfaces for local and internet connections, and sometimes microphones for audio recording.

*Flock Safety camera with ANPR features. Source: Wikimedia.*

Some manufacturers embed AI accelerators and NPU (neural processing unit) modules for real-time data processing directly on the device. Others rely on external hardware for AI analysis.

These systems can identify objects, recognize vehicle license plates and faces, and generate summaries of what they see. The capabilities depend on the software configuration and the equipment provider’s preferences.
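
As a rough illustration of this kind of edge processing, the loop below runs an off-the-shelf open-source detector on each frame and emits only compact detection events. The ultralytics YOLO model here is an assumption standing in for whatever proprietary model a vendor ships on its NPU.

```python
# Sketch of an edge-processing loop: detect objects on-device and emit compact
# events instead of streaming raw video. Uses an open-source detector as a
# stand-in for a vendor's on-camera model; not any vendor's actual firmware.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # small general-purpose detector
capture = cv2.VideoCapture(0)     # 0 = local camera sensor

while True:
    ok, frame = capture.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]
    for box in results.boxes:
        label = model.names[int(box.cls)]
        confidence = float(box.conf)
        if label in {"car", "truck", "person"} and confidence > 0.5:
            # In a real system this event would be timestamped, signed, and
            # uploaded to an analytics backend rather than printed.
            print(f"detected {label} ({confidence:.2f})")
```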

### The Brain Behind the Scenes

A smart camera can recognize objects and record their identifiers—license plates, faces, or gait patterns. An analytical center collects data from the cameras, combines it with information from other sources, and sends its conclusions to operators.
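
One way to picture this pipeline is each camera emitting small structured detection events that the analytical center then indexes by identifier. The sketch below is purely illustrative: the field names and classes are assumptions, not any vendor’s actual schema.

```python
# Hypothetical shape of a detection event and a central index keyed by identifier.
# Field names are illustrative assumptions, not any real vendor's data model.
from dataclasses import dataclass
from collections import defaultdict
from datetime import datetime

@dataclass
class DetectionEvent:
    camera_id: str
    timestamp: datetime
    object_type: str                    # e.g. "vehicle", "person"
    identifier: str                     # e.g. plate text or a face/gait embedding ID
    location: tuple[float, float]       # (latitude, longitude)

class AnalyticsCenter:
    """Collects events from many cameras and groups them by identifier."""

    def __init__(self) -> None:
        self._by_identifier: dict[str, list[DetectionEvent]] = defaultdict(list)

    def ingest(self, event: DetectionEvent) -> None:
        self._by_identifier[event.identifier].append(event)

    def movement_history(self, identifier: str) -> list[tuple[datetime, tuple[float, float]]]:
        # A time-sorted list of sightings is already a crude "movement map".
        events = sorted(self._by_identifier[identifier], key=lambda e: e.timestamp)
        return [(e.timestamp, e.location) for e in events]
```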

Flock Safety offers such a system called Nova—“a data platform for public safety,” which includes not only surveillance footage but also data from leaks, data brokers, and other commercial sources.

This system creates dossiers with movement maps, preferences, viewing history, habits, police records, and any other data.

With such a wealth of information, AI can make assumptions about people’s behavior and alert operators to suspicious situations. This feature is already available to Flock clients.

According to the company, Nova allows law enforcement to close cases “with a single click.”

Critics argue that this is a way to bypass the legal process of obtaining warrants and a basis for large-scale privacy violations.

## Colorful Hair and Code Injections

Many people are indifferent to mass surveillance. To them, it’s just a tool to help catch and prevent crimes. But not everyone is so unconcerned about personal freedom.

The confrontation between smart cameras and those wishing to preserve privacy unfolds on several levels.

Besides legal battles at the state policy level, enthusiasts turn to camouflage art and more traditional hacking methods.

### Spoofing

The most interesting attack method on such devices is spoofing or “presentation attacks.” This category involves manipulating the image received by the camera.

It includes masks, reflectors, specialized textures, and other techniques to “spoof” the image, preventing the system from recognizing or correctly identifying objects.

In 2016, designer Scott Urban’s Reflectacles project offered a line of glasses with reflectors that redirect infrared illumination from surveillance cameras back, overexposing the face image.

*Reflectacles in surveillance camera footage. Source: Kickstarter.*

This brute-force technique leaves a single camera with no usable data, but it is ineffective against multi-angle surveillance.

Berlin-based researcher and artist Adam Harvey developed CV Dazzle solutions to counter facial recognition systems.

Samples from the 2010s included asymmetric hairstyles and makeup elements designed to fool the then-popular Viola-Jones algorithm. That algorithm detects faces by analyzing contrast patterns such as the shadows around the eyes and nose, facial symmetry, and the position of the nose.

As a solution, the artist used unconventional shadow configurations and skin-contrasting colors.
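
The Viola-Jones detector that these early looks targeted still ships with OpenCV as a Haar cascade, so the effect of a given hairstyle or makeup pattern can be checked directly. A minimal sketch follows; the image path is a placeholder.

```python
# Run OpenCV's bundled Viola-Jones (Haar cascade) detector on a photo to see
# whether a given hairstyle/makeup configuration still registers as a face.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def count_faces(image_path: str) -> int:
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

# Zero detections on the styled photo (and at least one on a plain portrait)
# suggests the look defeats this particular detector; modern CNN-based
# recognition systems are a different matter.
print(count_faces("styled_portrait.jpg"))   # placeholder path
```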

*CV Dazzle Look 5. Source: Adam.harvey.studio.*

With the advent of AI-based facial recognition systems, the earlier methods became obsolete, and in 2020 Harvey proposed an updated version of the makeup.

*CV Dazzle Looks 6 and 7. Source: Adam.harvey.studio.*

The artist emphasized that he demonstrates techniques rather than specific patterns, and that the optimal solution depends on the surveillance conditions.

Similar methods apply to license plate recognition systems. American enthusiast Benn Jordan described ways to create “adversarial” textures for ANPR detectors.

Using open recognition models, Jordan trained neural networks to generate visual noise that, when overlaid on a car’s license plate, causes the model to read incorrect characters or not see the plate at all.
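
Jordan’s own models and code are not reproduced here, but the underlying technique is a standard adversarial example: nudge the input pixels in the direction that increases the recognizer’s loss. Below is a minimal FGSM-style sketch in PyTorch, where the recognizer and its output shape are assumptions rather than Jordan’s actual pipeline.

```python
# FGSM-style adversarial perturbation: perturb a plate image so a character
# recognizer misreads it. "recognizer" is a stand-in for any differentiable
# open recognition model; this is not Benn Jordan's actual tooling.
import torch
import torch.nn.functional as F

def adversarial_plate(recognizer: torch.nn.Module,
                      plate: torch.Tensor,        # shape (1, C, H, W), values in [0, 1]
                      true_labels: torch.Tensor,  # correct character class indices
                      epsilon: float = 0.05) -> torch.Tensor:
    plate = plate.detach().clone().requires_grad_(True)
    logits = recognizer(plate)                    # assumed (1, num_chars, num_classes)
    loss = F.cross_entropy(logits.squeeze(0), true_labels)
    loss.backward()

    # Step in the direction that increases the loss, clamped to valid pixel range.
    noise = epsilon * plate.grad.sign()
    return (plate + noise).clamp(0.0, 1.0).detach()
```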

Visual methods often face reliability issues. Their effectiveness depends on conditions and the number of cameras. Meanwhile, surveillance system providers expand the set of features used for recognition, such as gait, vehicle color, and external modifications.

Researchers continue seeking ways to bypass advanced models, but the more immediate threat to smart camera systems comes from hackers.

### Device Hacking and Network Attacks

Like any internet-connected computer, regardless of AI capabilities, smart cameras and their server infrastructure are potentially vulnerable to hacking.

Over the years these systems have been in use, numerous vulnerabilities of varying severity have been documented.

In 2021, vulnerabilities allowing code injection attacks were found in Hikvision surveillance cameras. The flaw enabled full control over affected devices, installation of software, and access to other cameras on the same network.

In 2023, vulnerabilities in Axis cameras’ operating system allowed arbitrary command execution during ACAP app installation.

In 2025, Dahua’s surveillance systems had two vulnerabilities related to remote command execution and buffer overflow, both allowing attackers full control over the camera.

A separate attack vector involves direct interaction with the equipment, often located outdoors in public places. Malicious actors can exploit service interfaces, access local storage, or modify devices for their purposes.

To defend against direct attacks, manufacturers use data encryption, hardware-based verification, and cryptographic signatures for video files.

A device configured with proper security measures cannot be simply “reflashed” or have its data easily extracted. But sometimes mistakes happen.
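
One common form of the cryptographic-signature protection mentioned above is signing each recording’s hash with a key held by the device, so tampering shows up at verification time. Here is a minimal sketch using Ed25519 from the Python `cryptography` package; it illustrates the general approach, not any vendor’s scheme.

```python
# Sign a video file's hash with a device-held key and verify it later.
# Illustrates the general idea of signed recordings, not any vendor's design.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def file_digest(path: str) -> bytes:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

def sign_recording(device_key: Ed25519PrivateKey, path: str) -> bytes:
    return device_key.sign(file_digest(path))

def verify_recording(public_key: Ed25519PublicKey, path: str, signature: bytes) -> bool:
    try:
        public_key.verify(signature, file_digest(path))
        return True
    except InvalidSignature:
        return False
```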

In 2025, 404 Media reported that at least 60 Condor AI cameras from Flock Safety with people-tracking features remained unsecured against unauthorized access.

Cybersecurity expert John Gaines and researcher Benn Jordan found IP addresses of these devices via the Shodan search engine and discovered they could connect without login credentials.

Any interested party could watch live streams, download archives from the past 30 days, change settings, and read system logs.

The vendor explained the incident as a “configuration error affecting a limited number of devices” and claimed the issues had been fixed.

The same researchers reported that another Flock camera model provided an open Wi-Fi access point that could be activated by pressing certain buttons on the device’s body, allowing full control over the device and its software.

Gaines published an analysis of these and other vulnerabilities in a separate document covering 55 points.

In an official response, the company stated that these issues were already known and that potential hackers rely on direct access to cameras and “deep knowledge of internal device architecture.”

The equipment provider emphasized that all necessary updates are delivered automatically, and there is no threat to system operation.

## Fighting the “Partially Competent”

Automated surveillance systems, especially with AI integration, have become a convenient tool for law enforcement.

Equipment vendors assure clients of their solutions’ capabilities—here’s a suspect’s car and its movement map, here’s the address. Cases can now be closed with a click.

It’s easy to get used to this. People tend to over-rely on automated output, a well-documented tendency known as automation bias.

AI-driven automation follows the same pattern: many users tend to trust ChatGPT’s answers and ignore possible contradictions. In everyday life this can distort perception and, in some cases, even lead to psychosis.

Even under ideal engineering conditions and full control by authorized operators, large-scale AI-powered surveillance can cause harm.

In 2025, U.S. authorities launched an investigation into possible misuse of Flock Safety technology for illegal surveillance. Law enforcement was suspected of using the system to track immigrants and to monitor women crossing state lines in search of jurisdictions where abortion is legal.

In this case, the system functioned correctly; no one hacked cameras or used deepfakes to manipulate video.

## Not Breaking, but Improving

CCTV systems have long been ubiquitous; video surveillance coupled with AI analytics is the new reality.

Even the most elaborate masks and fully obscured license plates won’t help preserve privacy amid total data collection.

Like any powerful tool, AI surveillance systems require regulation to prevent misuse and negligent security practices by providers and operators.
