Hardly a week goes by without the news media picking up on a report concerning the vulnerability of U.S. information and infrastructure to attack through the Internet and other networked computer systems.
Earlier this year, computers at the New York Times and Wall Street Journal were hacked, allegedly by agents of the Chinese government, in an attempt to learn the identities of the newspapers’ sources for their coverage in the People’s Republic.
The loose-knit hacker group Anonymous has claimed credit for a series of hacks of the Federal Reserve Bank, Bank of America, and American Express, which it sees as responsible for the 2008 mortgage meltdown.

The government of Iran is believed to have been behind a network attack last year that erased all data on 30,000 personal computers at Saudi Arabia-based ARAMCO, reportedly the most damaging cyberattack on U.S. interests so far.
Meanwhile, frightful scenarios abound. A November 2012 report from the National Research Council speculated that a cyberattack on the U.S. power system could knock out power to large regions of the nation for several months.
The prospect of such “cyberattacks” has both Congress and the White House calling for aggressive action. In February, alongside his State of the Union Address, President Obama issued an Executive Order to create a “framework” for government-mandated cybersecurity processes in systems that manage critical infrastructure. The order is roughly in line with provisions of the Cybersecurity Act currently pending in Congress.
Trouble is, these proposed measures are vague and sweeping. They would impose a drastic increase in federal authority and control over the Internet and the information that resides on it, yet none of them includes any way to measure effectiveness in preventing or deterring a cyberattack.
That the government’s primary response to cybersecurity is to expand its information-gathering powers even further is reason for skepticism. Cybersecurity is not inherently different from other aspects of personal protection. Responsibility for protection of assets falls chiefly to the owner.
We lock our doors, keep valuables out of sight, and walk in well-lit areas at night. Businesses refer to such precautions as “best practices,” and they see them as the first line of cybersecurity defense. In a 2011 survey by security consultancy Bit9, 1,861 IT professionals were asked which factors have the biggest impact on improving cybersecurity. Fifty-eight percent said implementing best practices and better security policies, 20 percent said employee awareness, and just 7 percent said government regulation and law enforcement were the answer.
Government imposing elaborate security protocols and trawling private records for obscure clues to potential attacks is both inefficient and intrusive — it just clogs the information superhighway with roadblocks. That’s not good security policy in any context.
If a crime takes place, that’s when government justifiably gets involved, overseeing the tasks of investigation and prosecution, subject to due process and constitutional safeguards. In combating cyberterrorism, the same rule applies: The most effective anti-terror efforts involve old-fashioned methods such as infiltration, legal information-gathering, and the like.
Instead of rushing to implement an intrusive set of cybersecurity regulations, legislators should step back, rationally assess the real cyberthreats, and consider how existing laws apply. Theft, fraud, vandalism, and sabotage have been against the law since long before the Internet emerged. They are as illegal as ever when committed through the Internet.
Today’s cybersecurity challenges can and should be met within a constitutional framework that respects liberty, privacy, property, and legal due process. There is no reason the law should favor state power at the expense of individual rights in combating computer crime or defending the nation’s information systems from foreign attack.
[First Published by The Washington Examiner]