If you had the choice, would you live in a digital surveillance state?
Stated so flatly, the instinctive answer is “no.” All-encompassing surveillance sounds awful and privacy-destroying, if not soul-destroying.
And then there are Benjamin Franklin’s famous words: “Those who would give up essential liberty, to purchase a little temporary safety, deserve neither liberty nor safety.”
In reality, the answer isn’t so clear-cut. There are benefits to a digital surveillance state, which is why various countries are running experiments.
It’s also more a case of creeping surveillance, like the proverbial frog in a pot of slowly heating water, than of flipping a switch all at once. And a lot of it is private, like porch cameras that watch for package thieves.
Which leads to another question: Does a private homeowner’s doorbell camera count as part of a surveillance state?
What if that footage is kept in the cloud by the giant tech company that runs the service? What if that footage is made available in bulk to national and local law enforcement? And what if the footage is mass-scanned for criminal image matches by machine-learning software?
The argument in favor of a surveillance state, increasingly enabled by 21st-century tools, is that bad guys everywhere become easier to catch and crimes become a lot harder to commit.
Imagine, for example, an urban center where all the cars are self-driving and all of the cars have video cameras. Then imagine the video footage from this “ambient surveillance” is fed into a central database.
In the old days, a mountain of video footage was useless because no human could sift through it. For a human, finding an event in footage where nothing happens 99.999% of the time is like searching for a needle in a hundred haystacks.
But artificial intelligence bots move fast and never get tired or bored. That means databases of ambient footage can be combed 24 hours a day, 365 days a year, with machine-learning techniques growing increasingly skilled at spotting faces or flagging suspicious behavior patterns.
That, in turn, means crime in the presence of ambient surveillance — from any kind of camera hooked into a central cloud — would become increasingly hard to commit.
And if these systems can match faces to a database, identities of known offenders could be determined instantly — or the offender could be flagged before a crime was even committed, based on algorithmic evaluation of behavior patterns (like, say, hanging around an ATM late at night).
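To make the face-matching idea concrete, here is a minimal sketch of how such a system might compare a face from camera footage against a watchlist. Everything here is hypothetical and simplified: real systems use neural networks to turn each face image into a numeric "embedding," then check whether a new face's embedding is close to any stored one. The toy numbers, names, and threshold below are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_against_watchlist(probe, watchlist, threshold=0.9):
    """Return the watchlist entries whose stored embedding is close to the probe."""
    return [name for name, embedding in watchlist.items()
            if cosine_similarity(probe, embedding) >= threshold]

# Toy embeddings; in a real system these would come from a face-recognition model.
watchlist = {
    "suspect_a": [0.9, 0.1, 0.4],
    "suspect_b": [0.1, 0.8, 0.2],
}
probe = [0.88, 0.12, 0.41]  # hypothetical face embedding from a street camera frame

print(match_against_watchlist(probe, watchlist))  # prints ['suspect_a']
```

The essential policy questions live in the details this sketch glosses over: who sets the threshold, how often near-matches produce false flags, and who is on the watchlist in the first place.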
The scary aspect of this is the degree to which power is placed in the hands of the state.
Advocates of personal liberty point out that, due to a proliferation of confusing laws and regulations, normal citizens break rules all the time without realizing it. What if surveillance tools were used to go after someone on any violation that popped up?
That would be the next level of “illegal search and seizure” techniques. Except laws don’t really exist around this topic yet. How much right to ambient surveillance should governments have? What should they be allowed to do with cloud footage, or facial recognition technology, or behavior-flagging algorithms?
At one point, these questions were more hypothetical than immediate, but not anymore. The city of London is living them out in real time — and has chosen the path of more surveillance, not less.
On Jan. 24, London’s Metropolitan Police announced the use of facial recognition technology not just to identify criminals after the fact, but to pick up their identities in real time.
That means if someone is on a police watchlist and a camera scans them, the recognition software will, at least in theory, send a red flag to the authorities.
This is the real-world version of an all-seeing eye, as London is already one of the most highly surveilled cities in the world. It is the beginning of ambient surveillance, algorithmic scanning, computerized behavioral pattern matching, and everything else that comes with it.
Technology like this has become commonplace in China, where facial recognition software can spot criminal offenders among huge crowds. But China is not a democracy, and privacy advocates in London are deeply worried.
The Metropolitan Police, meanwhile, point to London’s history of terrorist attacks — with another in the news just recently — and argue the technology will not just prevent crime, but save lives.
The West will be wrestling with issues like these — how much surveillance is too much? — all through the 2020s and beyond.
The questions will be tough because, when it comes to privacy versus safety, the legal framework for what is and what isn’t permissible doesn’t exist yet.
Then, too, the questions don’t have obvious answers. They will depend less on straightforward calculations and more on societal value weightings, like, “Is it more important to catch criminals and terrorists, or to safeguard civil liberties? How far is too far for the state? And how much can we trust big tech?”