I am grateful that Senator John Thune, Ranking Member of the Senate Committee on Commerce, Science, and Transportation, and FCC Commissioner Ajit Pai spoke at the Free State Foundation’s June 25 seminar, “Reforming Communications Policy in the Digital Age: The Path Forward.” And because it is such a pivotal time for communications policymaking, I am especially grateful that each used the occasion to deliver such important, substantive addresses.
Among those engaged in the debate, there are divergent views concerning the proper path forward for communications policy – in essence, one view embodies a pro-regulatory vision and the other a free market-oriented one. At the Free State Foundation, we work hard, based on our research and analysis, to articulate, on a principled basis, the case for the less regulatory, free market-oriented vision.
Senator Thune’s and Commissioner Pai’s Free State Foundation addresses constitute important contributions to the ongoing discussion concerning reform of our nation’s communications laws and policies. I urge you to review the full texts of their speeches here and here. But, in the meantime, please do take a few minutes right now to read the excerpts immediately below.
SENATOR JOHN THUNE – EXCERPTS
While some pro-regulatory advocates claim our communications sector is dominated by monopolies and duopolies, the evidence in the marketplace doesn’t support that view. Monopoly markets are typically characterized by a lack of investment, a lack of innovation, no new entrants, and excessive profits.
Since 1996, the private sector has invested $1.2 trillion into building and constantly upgrading our nation’s communications networks, including about $60 billion annually in capital investments over the last few years. Regarding market entry, we have already seen rampant intermodal competition in the telephone and video markets, not to mention the efforts of companies like Google and DISH Network that are committed to becoming serious new broadband players.
As for excessive profits for communications providers, again, there’s little evidence. Former Clinton administration official Everett Ehrlich found that Fortune 500 broadband companies had an average profit margin of just 3.7 percent. The average profit margin for Fortune 500 Internet companies that offer services on top of the broadband infrastructure? A whopping 24 percent. As Ehrlich points out, “this sizeable difference makes clear that providers of broadband connectivity are not extracting undue profits from broadband users.”
Why does this all matter? Because painting a picture of a dysfunctional communications and broadband marketplace is central to the efforts of pro-regulatory advocates who claim more government intervention into the online world is needed to fix a “broken system.” Many of those who seek to regulate the Internet are using mistruths and hyperbole to scare both the public and policymakers into restricting economic and individual liberty.
* * *
The last time Congress significantly updated our communications laws was in 1996. Back then, you had to pay for the Internet by the hour, and going online meant tying up your home’s telephone line. There were only 100,000 websites in 1996, and Google and Wikipedia had not been created yet. Today there are nearly 900 million websites.
The bipartisan and deregulatory Telecommunications Act of 1996 encouraged intermodal competition and provided a light regulatory touch for information services. Bipartisan leadership at the FCC reinforced the light touch for the Internet when implementing the law. All of this fostered an era of convergence and innovation in the communications space. Cable companies started to compete with telcos, telcos got into the cable TV business, and everyone started offering Internet access.
The Telecom Act was far from perfect, but it got the job done. Even so, it is best to view the Telecom Act as a transitional law for a transitional time, rather than as a permanent statute that will last 62 years without major revision, like its predecessor, the Communications Act of 1934. The original Communications Act was designed for an era of actual communications monopolies; the Telecom Act was designed for the transitional era that took us from monopoly to competition; and now, we need a new policy framework for today’s converged, competitive, and Internet-powered world.
This, of course, is much easier said than done. Modernizing the laws governing the communications and technology sectors is no small task, which is why I am glad my colleagues in the House of Representatives have already begun examining the regulation of the communications industry.
* * *
Now, I’m not saying that the Internet should be a lawless frontier free from any government oversight. That is the sort of straw-man accusation leveled by those who want to avoid doing the hard work of justifying regulations for the Internet ecosystem. Even so, policymakers must be careful to preserve the light-touch regime, first implemented by the Clinton Administration, that has been so successful in making us the digital envy of the world.
Some people, however, want to completely upset that regime and instead want to see the Internet shackled with Title II of the Communications Act. Title II is certainly not a “light touch,” not with its burdensome rate regulation, property valuation, and discontinuance provisions, along with many others.
Traditional wireline telephony now makes up just 22 percent of the 443 million phone lines in America, and that rate continues to decline each year. When consumers are rapidly abandoning traditional Title II services, it makes little sense to apply Title II regulations to today’s new technologies and business models. Even Google seems keen on avoiding the morass of Title II: the Internet giant has specifically chosen not to offer telephone services with its Google Fiber broadband product because it wants to avoid the regulatory burdens that come along with it.
Another reason I oppose Title II reclassification is that regulating an industry as if it were a public utility monopoly is the surest way to guarantee the industry will become a monopoly. As I discussed earlier, the evidence in the marketplace makes it clear that our broadband market is dynamic and competitive, not at all like the early days of Ma Bell that Title II was intended for. Public utility regulation traditionally is intended to do two things: protect the public from the harms of a monopoly, while simultaneously protecting that monopoly. Since the broadband market is demonstrably not a monopoly, regulating it as a public utility would only make the industry less competitive and less innovative. Or, in other words, make it more like a monopoly.
COMMISSIONER AJIT PAI – EXCERPTS
I don’t mean to suggest that our nation’s broadband policy has been perfect. It hasn’t. There’s certainly more we should be doing to clear out the regulatory underbrush that deters infrastructure investment and broadband deployment. But when it comes to our fundamental choice of a regulatory model, the United States has gotten it right.
Of course, there are those who disagree, and their voices have become louder of late. Many are now claiming that the only way to protect the Internet from ruin is to reclassify broadband as a Title II service. In other words, they want to end the minimal regulatory environment for broadband and replace it with rules based on 19th century railroad regulation.
This makes no sense. The common-carriage rules of Title II were designed to control one company that had a monopoly on long-distance telephone service, not the 1,712 companies that now compete to provide broadband service to the American consumer.
And beyond the sloganeering, there are any number of complicated questions to which I have yet to hear an answer. How much would consumers’ broadband prices go up to pay for the universal service charges all carriers must contribute? Why should we apply anti-consumer rules like tariffing to the broadband world? How would the Part 36 separations process apply to apportion the various components of the network between the several states and the FCC for regulatory purposes? And why should we open the door to actual access charges, imposed on edge providers, content delivery networks, and transit operators without their consent?
* * *
This means that uncertainty will hang over the marketplace for a long time. How many years would it take us to decide which parts of Title II merit forbearance? How many provisions must we even examine? When we still haven’t collected data in the special access proceeding, about a year-and-a-half after authorizing that collection, how could we possibly expect to timely gather data to handle the wider broadband market? And in a rapidly changing industry, how enduring would a particular FCC snapshot of the marketplace, upon which critical investment decisions would rely, really be?
But aside from the mechanics of implementing Title II, we need to ask a more basic question. Where would Title II regulation lead? One good indication is to compare the results produced by the American regulatory model to those of a more intrusive regulatory model: Europe’s. Rather than taking a light-touch regulatory approach to broadband, the European model treats broadband as a public utility, imposes telephone-style regulation, and purports to focus on promoting service-based (rather than facilities-based) competition.
The results of the public-utility model speak for themselves. Eighty-two percent of Americans (and 48 percent of rural Americans) have access to 25 Mbps broadband speeds. In Europe, those figures are only 54 percent and 12 percent, respectively. And these figures aren’t skewed by less developed countries; in France, the figures are 24 percent and 1 percent, respectively. Similarly, American broadband companies are investing more than twice as much as their European counterparts ($562 per household v. $244), and deploying fiber-to-the-premises about twice as often (23 percent v. 12 percent). Small wonder, then, that the European Commission itself has said that “Europe is losing the global race to build fast fixed broadband connections.”
* * *
NOTE: There also was an excellent panel discussion at the event with John Bergmayer, Public Knowledge; Scott Cleland, Precursor LLC; and Adam Thierer, Mercatus Center at George Mason University. A transcript of that session will be published in due course.
“I cannot undertake to lay my finger on that article of the Constitution which granted a right to Congress of expending, on objects of benevolence, the money of their constituents. … If Congress can do whatever in their discretion can be done by money, and will promote the General Welfare, the Government is no longer a limited one, possessing enumerated powers, but an indefinite one. …
The powers delegated by the proposed Constitution to the federal government are few and defined. Those which are to remain in the State governments are numerous and indefinite. … The government of the United States is a definite government, confined to specified objects. It is not like the state governments, whose powers are more general.
Charity is no part of the legislative duty of the government. … There are more instances of the abridgment of the freedom of the people by gradual and silent encroachments of those in power than by violent and sudden usurpations.”
- James Madison
When people are asked to name the Founding Fathers of the nation, they usually reel off Washington, Adams, and Jefferson, the first, second, and third Presidents, in addition to their earlier roles in guiding the Revolution to success.
Occasionally, someone who, like myself, loves history will add Madison, the fourth President, but Lynne Cheney’s new biography of Madison rightly identifies him as the man most responsible “for creating the United States of America in the form we know it today.” It was Madison who guided the process by which the Founders arrived at the Constitution, contributing the fundamental principles it incorporated and writing the Bill of Rights, amendments that ensured its ratification by the original states.
Cheney’s biography, “James Madison: A Life Reconsidered” ($36.00, Viking), benefits not only from her scholarship, but from her facility with the written word, making it a continual pleasure to read for a book of 563 pages, including its notes, bibliography, and index. If you were to set aside the summer to read just one book, this would be the one I would recommend.
If Cheney’s name rings a bell, it is because she is the wife of former Vice President Dick Cheney, but she also holds a Ph.D. and has been studying Madison since 1987, when she was a member of the Commission on the Bicentennial of the Constitution. These days she is a senior fellow at the American Enterprise Institute.
The Cheneys reside in Wilson, Wyoming. She is making the rounds of radio and television shows to promote the book and, notably, interviewers tend to ignore it in order to pry opinions out of her about current events and politics. One gets the feeling that most did not read it.
Though he was short in stature and, compared to the other Founders, quite young, all who came to know him swiftly developed a profound respect for his intellect and his knowledge of how governments were structured, with some succeeding while others failed. When Madison spoke, they listened. There were in those days “factions” (which today we call political parties) that opposed his and the other Founders’ views.
“Jefferson,” wrote Cheney, “would later say that it was a wonder that Madison accomplished so much as he had, given that he faced ‘the endless quibbles, chicaneries, perversions, vexations, and delays of lawyers and demi-lawyers’” and Madison himself was often struck “by the way that ‘important bills prepared at leisure by skillful hands’ were treated to ‘crudeness and tedious discussion’, and he had seen legislative tricks of the most blatant sort.” So the politics of Madison’s time was not unlike much of today’s.
After the Constitution was written to replace the failed Articles of Confederation, it needed to be vigorously defended. America benefited greatly from the fact that its population was highly literate, and it was through the Federalist Papers, a series of essays written by Madison along with Hamilton and Jay, that its principles and protections were explained to the public. Cheney notes that the Federalist essay that would eventually become most famous was the first one Madison wrote.
“In Federalist 10, published November 22, 1787, he set forth the failures of ‘our governments’ (rather than ‘our states’ where, after all, the Constitution would be ratified), noting the instability and injustices that had caused good citizens across the country to increasingly distrust those governments and feel ‘alarm for private rights.’”
These alarms are reflected in our times by concerns that the President is bypassing Congress to govern by executive orders, is failing to enforce laws with which he disagrees, and that we have a Department of Justice and an IRS that cannot be trusted to apply laws fairly, acting against groups and individuals with whom they disagree such as the Tea Party movement and other conservative organizations. A rogue agency such as the Environmental Protection Agency is so out of control that Congress must at some point exert powerful restraints on it.
What is remarkable about Madison’s time is the fact that he, Jefferson, his lifelong friend, and Adams all lived long lives, unlike the bulk of the population. Madison would devote his life to the creation of our extraordinary government and, throughout the early presidencies, including his own, to ensuring the existence of the new nation, challenged as it was by Great Britain, first during the Revolution and then in the War of 1812.
On his last day as President, Madison vetoed an improvements bill, “arguing as he had since the days of The Federalist that the general government did not have general powers. It had specified powers, and recognizing its limits was essential to ‘the permanent success of the Constitution.’”
Cheney wrote that Madison understood that “if the limits the Constitution imposed on government were unrecognized, ‘the parchment had better be thrown into the fire at once,’” but Madison was all about protecting the Constitution and the new nation. For that he is owed the gratitude of all the generations that have followed him.
It is now our responsibility to protect it because freedom and liberty always have domestic and foreign enemies.
© Alan Caruba, 2014
[Originally published at Warning Signs]
A Creative Commons license is a kind of copyright license that gives people the right to use, share, and expand upon a creator’s work, whether it is an artwork, a piece of literature, or scientific or academic material. It offers significant protection against accusations of copyright infringement and is believed by some to offer artists a degree of flexibility they may desire. It is also in the interest of citizens to see that the artwork they pay for through government funding for the arts is made available for their benefit in some fashion. Mandating Creative Commons licensing for all state-funded artwork would accomplish that goal.
Government Funding for the Arts
Work for government is almost by definition carried out for the people in one form or another. So should all work done by government carry a Creative Commons license making it open to those people?
The state engages in a lot of work that is licensed: it funds art and culture (usually only in part), as well as scientific and academic research, the creation of significant amounts of software, and the creation of large data resources. Creative Commons licensing could apply to all of these.
Some governments, such as the United States, already go partway toward making the work they fund available to the public. Any government work “prepared by an officer or employee of the United States government as part of that person’s official duties” is in the public domain. This means such works are similarly open to reuse and reproduction as if they were under a Creative Commons license. However, it is notable that this does not apply to works produced by government contractors or by institutions that are largely or fully funded by government, such as the Smithsonian.
Taxpayers Should Own What they Pay for
Everyone benefits and is enriched by open access to resources that the government can provide. A work is the province of its creator in most respects, since it is from the mind and hand of its creator that it is born. But when the state opts to fund a project, it too becomes a part-owner of the ideas and creation that springs forth. The state should thus seek to make public the work it spends taxpayer money to create. This is in exactly the same way that when an employee of a company creates something, the rights to that work go to the company and not the employee.
The best way to get the most out of government-funded art, if it is going to exist at all, is through mandating that all such works be made publicly available. This allows the work to be redistributed, re-explored, and to be used as springboards for new, derivative works.
The right of the people to the fruits of their tax dollars is hampered by either the creator, or the government, retaining stricter forms of copyright, which effectively entitles the holder of the copyright to full control of the work; work that would not exist had it not been for the largesse of society. If state-funded work is to have meaning it must be in the public sphere and reusable by the public in whatever form they wish. Simply put, the taxpayers paid for it, so they own it.
Chicago faces a significant and growing public pension problem. According to the Chicago Sun-Times, Chicago’s four pension plans (including those for teachers and public safety workers) face a combined debt of around $20 billion, a number that, without reform, is likely to continue to grow. In order to fill this gap in pension funding, Chicago Mayor Rahm Emanuel has proposed several new or expanded taxes to cover the growing debt.
After a proposed property tax hike was rejected by the Chicago City Council, Mayor Emanuel turned to another source of revenue: telephone bills. Emanuel’s proposed plan would increase the current tax on both wireless and landline phones by 56 percent. The hike, from $2.50 a month to $3.90 a month, would be the maximum allowed by state law. Thirty-six of Chicago’s 50 aldermen co-sponsored the tax hike. The Illinois Policy Institute criticized the hike, noting that the increase would leave Chicago’s wireless tax rates higher than those of all of its regional neighbors.
“Illinois’ cell phone tax rate already is higher than all of the state’s neighbors. Residents in nearby states pay an average of 6 percent less in taxes on their phone bills.
On top of paying an effective federal tax rate of 5.82 percent and a 7 percent state of Illinois telecom excise tax, Chicago wireless consumers already pay a 7 percent municipal tax to the city and a $2.50 per line wireless 911 fee.”
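The arithmetic behind the proposed hike is easy to verify. A minimal sketch (all dollar figures come from the article; the variable names are illustrative):

```python
# Per-line monthly 911 fee figures cited in the article, in dollars.
current_fee = 2.50
proposed_fee = 3.90

# Percentage increase: (3.90 - 2.50) / 2.50 = 0.56, i.e. the 56 percent
# hike reported above.
increase_pct = (proposed_fee - current_fee) / current_fee * 100
print(f"Proposed hike: {increase_pct:.0f}%")

# Extra cost per line over a full year at the proposed rate.
annual_extra = (proposed_fee - current_fee) * 12
print(f"Extra cost per line per year: ${annual_extra:.2f}")
```

The $1.40-a-month difference works out to $16.80 per line per year, before the federal, state, and municipal taxes quoted above are layered on top.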
John Nothdurft, Director of Government Relations at the Heartland Institute, argues that the new tax hike is only the most recent in a series of tax hikes that have made Chicago one of the most overtaxed municipalities in the country.
“The City of Chicago is about to have another Number 1 ranking when it comes to high taxes. Mayor Emanuel’s proposed $50 million increase in the city’s phone tax would give Chicago the highest tax of its kind in the nation. The city already boasts the nation’s highest cigarette tax, which the city raised just last year, and is knocking on the doorstep of having the highest sales tax, property tax, and meal tax, to name a few,” commented Nothdurft. “Sure, the city might not raise property taxes this year, but unless the city’s spending and unfunded liabilities are addressed in a significant manner, it won’t be long before Mayor Emanuel comes back for more.”
Sam Karnick, Director of Research at the Heartland Institute, agrees and cautions Chicago taxpayers that these new taxes do not preclude significant tax hikes down the road.
“Mayor Emanuel is desperately looking for a solution to pension problems he didn’t create, but certainly would have, based on his record and party affiliation,” argues Karnick. “Given that any property tax hike is off the table for only one year, his and Gov. Quinn’s obvious goal is to get through the next state and local elections without raising property taxes or suffering further credit downgrades. If the mayor and governor have their way, Chicagoans will ultimately get two tax hikes out of this, and one-party rule will continue. It’s the very opposite of fiscal responsibility.”
Wireless taxes have quickly become the latest slush fund irresponsible governments are seeking to use to fund their out-of-control spending. In many states, wireless tax rates have already reached all-time highs. Almost half the states nationwide now impose a wireless tax above 10 percent (the national average is more than 16.3 percent). Even as revenue earned per wireless phone falls, taxes and fees climb. Many of these taxes, like Mayor Emanuel’s, are being used to fund programs and services that are in no way related to telecommunications.
Critics of the tax hike have argued that the city phone tax was designated for the 911 call center and should not be used for other purposes. Raiding 9-1-1 funds, vehicle taxes, or any dedicated revenues for reasons other than their intended purpose is bad public policy. Public safety groups have criticized states for using 9-1-1 funds for other purposes. The National Emergency Number Association, National Association of State 911 Administrators, and 9-1-1 Industry Alliance called these sweeps “less than honest” and stated the diversion of funds places the nation’s 9-1-1 systems at risk while breaking “the trust established with the public.”
In addition to the public safety problems these fund raids create, taxpayers also should be concerned about how their tax dollars are being managed. When states are allowed to raid dedicated funds and divert those taxes from their stated purpose, these dedicated revenues become de facto slush funds and additional phone taxes will likely be tacked onto phone users’ bills. If a dedicated 9-1-1 fund builds up “extra” revenue, lawmakers should reduce the tax to a more reasonable level and not raid the fund for other expenditures.
Steve Stanek, the Managing Editor of Budget & Tax News contends that the 911 fund as it currently exists has not been used properly.
“The phone tax was supposed to fund a 911 call center that should have been paid for years ago, but isn’t because of huge cost overruns,” argued Stanek. “The tax was never to be used for government pensions. What’s next? Phone taxes to patch potholes and plow snow?”
High wireless taxes are a drag on both consumers and the wireless market, deterring innovation and infrastructure improvements while disproportionately affecting minority and low-income populations. Before these taxes spin out of control, making wireless services less accessible for everyone, measures must be taken to stop them. One possibility is a moratorium on these discriminatory tax hikes, such as the one proposed in the Wireless Tax Fairness Act, which would benefit both the economy and consumers.
“We don’t care about money here.”
“Well, that’s because you have it.”
“Would you repeat that?”
“You don’t care about money because you’ve always had it.”
—“The Aviator,” 2004
This Telegraph interview with Chelsea Clinton reveals a number of facets of the once and future first daughter which make her the perfect representative of her Millennial generation. She has the fickle but sincere flightiness over everything from career to diet, the waywardness of the overeducated and underchallenged, the comfort of comprehensive knowledge of the new sins, the inner child of Bart Simpson, the gluten allergy … but of course the gluten allergy.
“Fried chicken is my husband’s favourite food,” she divulges in her office at the Clinton Foundation in Manhattan, where she lives in a 10 million dollar apartment. The first time her then-boyfriend, now-husband, Marc Mezvinsky visited Little Rock, she whisked him off to her favourite childhood fried-chicken hole. In New York, she explains, he’ll now “gorge himself on fried chicken”. Chelsea insists she would too, were it not for an allergy to gluten. “I was a vegetarian for 10 years, a pescatarian for eight. Then I woke up one day when I was 29 and craved red meat,” says Chelsea, now 34, who recently announced she is expecting her first child. “I’m a big believer in listening to my body’s cravings.”
The primary difference between Chelsea and most of her fellow Millennials, of course, is that she has the luxury of having enough money to not care about money. She lives in a 10 million dollar apartment, had a 3 million dollar wedding, and gets paid 600,000 dollars a year to spend most of her time not working, and when she does work, it’s on camera… which, if you think about it, is pretty much the Millennial dream. Her career track reads like the resume of someone with more connections than she knows what to do with:
For a decade after graduating from Stanford in 2001, Chelsea experimented with the world beyond the Clinton machine. In peripatetic bursts, she tried out international relations, then management consulting, then Wall Street, then a PhD. She even signed on as an NBC News “special correspondent”. She rationalises this career promiscuity as a hallmark of being just another Millennial, experimenting until she figures out her professional purpose. But, of course, she’s not just another Millennial. She’s political royalty. And now, finally, she has decided to join the Clinton family business.
Yes, she’s now vice-chair of that little non-profit, the “recently rebranded” Bill, Hillary & Chelsea Clinton Foundation. Why did she choose to ditch the glamour of the go-go business life for the relative quiet of philanthropic endeavors?:
“It is frustrating, because who wants to grow up and follow their parents?” admits Chelsea. “I’ve tried really hard to care about things that were very different from my parents. I was curious if I could care about [money] on some fundamental level, and I couldn’t. That wasn’t the metric of success I wanted in my life. I’ve talked about this to my friends who are doctors and whose parents are doctors, or who are lawyers and their parents are lawyers. It’s a funny thing to realise I feel called to this work both as a daughter and also as someone who believes I have contributions to make.”
Within this conversation with Chelsea, meant to be little more than a puff piece, you see all the reasons why Hillary Clinton lost the youth vote in 2008, and how she could lose the whole thing in 2016. How could her daughter’s generation have fallen for the inexperienced Barack Obama over the wiser, more tested woman? And how could a generation of wayward slackers once again pass on the opportunity to break that last glass ceiling? Chelsea shows us how. A fickle generation making up a sizable portion of a party’s voting base (and an expected third of voters in 2016), paired with an out-of-touch one-percent candidate who hasn’t run for anything in eight years, is looking like a worse deal all the time.
Hillary may, of course, cruise to the White House over a Republican Party that can’t decide what it is. Two months ago, she looked like a solid presidential candidate, with all the wind at her back and the clearing assistance of a generation of women in media dedicated to her advancement. But after the past few weeks, the contours of a crash-and-burn scenario are now in pretty clear view. She could still pull it off, of course – but the possibility of disaster, once so foreign to the conversation, now seems more feasible.
There’s something that Democratic leaders don’t seem to understand about Hillary now versus Hillary in 2008 or 2000. We’re a country that overwhelmingly caters to the biases of the youngest voters – terrified of growing old, we’re always chasing after the whims of the young. And it is going to be difficult to do so with a candidate who is so “old”. This has nothing to do with her age, mind you: it’s that her cultural apex came over a decade ago. It’s not that she’s decrepit, it’s that she’s terribly uncool. Shepard Fairey can’t do anything with this that won’t come across as a nostalgic meme. It’d be like rebooting Friends or trying to bring back slap bracelets. If the Hillary of 2000 was Seinfeld, the Hillary of 2016 is the Seinfeld Super Bowl commercial.
In the American past, experience, stability, and reliability in the public square were viewed as virtues, something you wanted in a president. But presidential contests don’t look like that any more. The stable, solid, and familiar is just boring. A contest between old and busted versus new hotness is no contest at all. The kid-glove questions Hillary gets when being interviewed by fawning female reporters are a far cry from what happens when someone asks her an actual, you know, question. Looking at how Hillary struggled with the gay marriage question is just a part of this. That’s an issue she was on the opposite side of in her electoral career because of the context of the times – but where she apparently expected that to play in her favor, it doesn’t at all for a generation of listeners for whom there is no history prior to Google. Hanging around TED, Gstaad, and the Aspen Institute, where Hillary is all women, no one’s going to be rude about it – but we’ll see what it’s like out on the trail.
What could that look like? There’s a hilarious little moment in Louis CK’s show where he’s trying and failing to hit on a nineteen-year-old NFL cheerleader who’s performing on a USO tour with him. He asks her about music, and she says she loves all kinds – he promptly names a series of prominent rock bands, none of which she’s ever heard of. When he asks about Aerosmith, he adds that the lead singer is Steven Tyler. Oh no, she says, as if correcting him – you mean The American Idol judge. That’s all she knows him as, and she has a hard time believing he was ever a singer.
Yes, Hillary Clinton could still get in touch with this generation. But the more her campaign resembles a fond resurrection of nineties nostalgia, the more it forces the “remember how great Windows 95 was” conversation, the more it reveals the dreadful truth that Hillary Clinton is quite possibly the least cool thing in American pop culture right now. The Clintons are morphing into Tom Wolfe characters before our eyes – except for the special unique snowflakes of Chelsea’s generation, Wolfe is 83 years old now, and his last book bombed.
[Originally published at The Federalist]
As announced yesterday, Aereo, a streaming broadcast TV company, was found to be violating copyrights on the programming it provided: the nearly live broadcasts it made available constituted a public performance of the content and hence were illegal under copyright law. In plain speak, Aereo’s entire business model was to take that which didn’t belong to it and sell it. Try selling access to your neighbor’s guest room on Airbnb, or taking your neighbor’s otherwise unused car for your own Uber sideline, and see how things work out.
But the innovative service apparently appealed particularly to cord cutters, suggesting that if done legally it might be a legitimate marketplace success. Recognizing the need not to impede innovation, the Supreme Court wisely tailored its ruling narrowly where technologies are concerned, even while protecting intellectual property rights. Innovation and creators both saved!
As Madery Bridge guest writer Stevan Mitchell wrote earlier this year, “There is no question that an appropriate balance must be struck to preserve incentives for creators. The easier it becomes to replicate, transmit, record and re-experience a work the more this proposition holds true. Our overarching policy preferences may be best served, and the right balances most cleanly and predictably struck, however, by analogies that more closely resemble today’s bit stream communications and how they are used. The alternative is to continue to retrofit and stretch yesterday’s physical world analogies.”
[Originally published at Madery Bridge]
Interconnection is Different for Internet than Railroads or Electricity – Part 55 FCC Open Internet Order Series
The FCC has asserted a foundational regulatory premise that warrants rebuttal, given that the FCC is considering whether Internet access, and Internet backbone peering, should be regulated like a utility under Title II telephone common carrier regulation.
In an important speech on Internet interconnection last month to the Progressive Policy Institute, the very able and experienced Ruth Milkman, Chairman Tom Wheeler’s Chief of Staff, asserted that “communications networks are no different” than railroad and electricity networks when it comes to interconnection. “… At bottom… the fact is that a network without connections and interconnections is one that simply doesn’t work. Disconnected networks do not serve the public interest.”
The grand asserted regulatory premise here is that because communications networks are “no different” than railroad or electricity networks, they should face prescriptive government regulation to ensure that they are, and remain, interconnected and that the public is protected.
If this sweeping assertion is accepted at face value without challenge, the FCC could have unfettered incentive and justification to begin regulating Internet backbone peering for the first time.
The fact is that this foundational FCC assertion – that communications interconnection is no different than railroad/electricity interconnection – is fundamentally untrue.
Internet communications networks are completely different than railroad and electricity networks and the Internet backbone has worked successfully and almost flawlessly for two decades without FCC regulation.
How are Internet networks completely different than railroad and electricity networks?
First, railroad and electricity interconnection is place-dependent; Internet “interconnection” is not place- or physical-location-dependent.
This is a huge difference, as physical-place-dependency can create a physical interconnection chokepoint in railroads or electricity. In contrast, there are no physical-place-dependent chokepoints for the Internet, because one can access/connect to the Internet from many different places, through many different entities, and via many different technologies – e.g., electrically via wires like copper or coax, optically via different fiber configurations, or wirelessly via many different licensed and unlicensed frequencies and providers.
Choice of place, facility, provider, and technology means multiple dimensions of competition and no lasting chokepoints, because if one encounters a temporary congestion problem in one part of the Internet, one has the choice to take one’s traffic and business elsewhere. No chokepoints means no need for prescriptive regulation of Internet peering arrangements.
In the Netflix example, Netflix has a wide variety of choices (by place, facility, provider, or technology) to connect to any other Internet network, whether it be one of many CDNs or transit providers, or directly with a network provider. Netflix’ complaint is not over a chokepoint interconnection problem, but that it does not want to pay anything to ISPs for the costs of sending 34% of the Internet’s downstream traffic.
Netflix maintains, under its self-serving re-imagination of “net neutrality,” that the FCC must permanently mandate a price of zero for Netflix traffic, so users are forced to shoulder the entire cost burden of Netflix’ 34% of downstream Internet traffic.
Second, railroad and electricity interconnection is hardware-dependent, whereas Internet interconnection is software-dependent. Railroads and electric networks require one universal physical standard of wheel gauge and axle width, or physical electrical transformers and wall plugs, to interconnect to these respective networks. In contrast, the software design and protocol of Internet connections make interconnection hardware-agnostic, seamless and automatic, and hence inherently competitive and choice-rich.
Simply, the genius of Internet packet-technology networks is that they do not require any interconnection, permission, or negotiation points, because inherent in Internet Protocol is that packets are automatically and seamlessly routed between different Internet networks to their destination by design. Internet packet technology inherently makes the concept of telephone interconnection obsolete, because the technology supplants what used to require hardware and regulation to achieve. Most simply, Internet Protocol innovation inherently obviates an FCC role in regulating the Internet backbone.
Third, railroad and electricity interconnection involves analog technology, whereas Internet interconnection involves digital computer technology. Importantly, railroads required a set, continuous physical path or circuit from point A to point B. Electricity networks require a continuous electrical circuit from origin to destination.
In contrast, digital technology in general, and Internet packet technology in particular, is discontinuous – the antithesis of a continuous telephone or electrical circuit. It is this inherently discontinuous digital innovation that enables Internet networks to be place-agnostic and hardware-agnostic, and hence inherently competitive and choice-rich.
More specifically, the innovation of digital IP packet networks subdivides information into many small packets to enable more efficient transmission. The packets get individually routed unpredictably and commingled with other packets to minimize bandwidth waste. At the ultimate destination, the packets get immediately reassembled by any device anywhere. Internet Protocol is inherently a competitive technology, made even more competitive by Moore’s law, which ensures that digital networks continually enjoy rapidly declining computing costs.
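The packet idea described above can be sketched in a few lines of code. This is a purely illustrative toy (the function names, packet size, and message are invented, not any real protocol implementation): a message is split into small numbered packets, the packets travel in an unpredictable order, and the receiver reassembles them by sequence number regardless of the path each one took.

```python
import random

def packetize(message: str, size: int = 4) -> list[tuple[int, str]]:
    """Split a message into (sequence_number, payload) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets: list[tuple[int, str]]) -> str:
    """Sort packets by sequence number and rebuild the original message."""
    return "".join(payload for _, payload in sorted(packets))

packets = packetize("Packets route independently")
random.shuffle(packets)  # simulate unpredictable, commingled routing
assert reassemble(packets) == "Packets route independently"
```

Because the sequence numbers travel with the payload, no single fixed path – and hence no single physical chokepoint – is needed for delivery, which is the design property the paragraph describes.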
In sum, the Clinton Administration knew when it privatized the Internet backbone twenty years ago that it did not require FCC involvement, and that it should not be subject to Title II common carrier regulation of prices, terms and conditions.
Twenty years of phenomenal success — where the competitive Internet backbone continually adapted exceptionally to handle the exponential growth of Internet traffic without material or lasting incident — is overwhelming evidence that the place-agnostic, software-driven, digital Internet backbone does not need any type of utility interconnection regulation.
Don’t let anyone assert unchallenged that Internet interconnection is no different than railroad or electricity interconnection. If that patently untrue assertion – that interconnection will not happen without government — is unchallenged, it enables regulators to justify unnecessary, unwarranted, and unjustified regulation of the Internet backbone.
The FCC does not need to regulate or intercede in Internet peering disputes, because if a company like Netflix does not like the prices, terms, or conditions offered by an ISP, it has the competitive choice to negotiate with any number of CDNs or transit providers to deliver its traffic to users.
Since there are so many CDN and transit choices, by definition a peering dispute at a particular place on the Internet cannot result in the “disconnected network” problem FCC Chief of Staff Milkman apparently feared in her recent speech on the subject.
The success and growth of the unregulated model for the Internet backbone peering marketplace has been nothing short of phenomenal in enabling and ensuring everyone reasonable connection to the Internet.
The Internet backbone peering marketplace works near perfectly. As the old adage goes: “if it ain’t broke, don’t fix it.”
FCC Open Internet Order Series
Part 1: The Many Vulnerabilities of an Open Internet [9-24-09]
Part 2: Why FCC proposed net neutrality regs unconstitutional, NPR Online Op-ed [9-24-09]
Part 3: Takeaways from FCC’s Proposed Open Internet Regs [10-22-09]
Part 4: How FCC Regulation Would Change the Internet [10-30-09]
Part 5: Is FCC Declaring ‘Open Season’ on Internet Freedom? [11-17-09]
Part 6: Critical Gaps in FCC’s Proposed Open Internet Regulations [11-30-09]
Part 7: Takeaways from the FCC’s Open Internet Further Inquiry [9-2-10]
Part 8: An FCC “Data-Driven” Double Standard? [10-27-10]
Part 9: Election Takeaways for the FCC [11-3-10]
Part 10: Irony of Little Openness in FCC Open Internet Reg-making [11-19-10]
Part 11: FCC Regulating Internet to Prevent Companies from Regulating Internet [11-22-10]
Part 12: Where is the FCC’s Legitimacy? [11-22-10]
Part 13: Will FCC Preserve or Change the Internet? [12-17-10]
Part 14: FCC Internet Price Regulation & Micro-management? [12-20-10]
Part 15: FCC Open Internet Decision Take-aways [12-21-10]
Part 16: FCC Defines Broadband Service as “BIAS”-ed [12-22-10]
Part 17: Why FCC’s Net Regs Need Administration/Congressional Regulatory Review [1-3-11]
Part 18: Welcome to the FCC-Centric Internet [1-25-11]
Part 19: FCC’s Net Regs in Conflict with President’s Pledges [1-26-11]
Part 20: Will FCC Respect President’s Call for “Least Burdensome” Regulation? [2-3-11]
Part 21: FCC’s In Search of Relevance in 706 Report [5-23-11]
Part 22: The FCC’s public wireless network blocks lawful Internet traffic [6-13-11]
Part 23: Why FCC Net Neutrality Regs Are So Vulnerable [9-8-11]
Part 24: Why Verizon Wins Appeal of FCC’s Net Regs [9-30-11]
Part 25: Supreme Court likely to leash FCC to the law [10-10-12]
Part 26: What Court Data Roaming Decision Means for FCC Open Internet Order [12-4-12]
Part 27: Oops! Crawford’s Model Broadband Nation, Korea, Opposes Net Neutrality [2-26-13]
Part 28: Little Impact on FCC Open Internet Order from SCOTUS Chevron Decision [5-21-13]
Part 29: More Legal Trouble for FCC’s Open Internet Order & Net Neutrality [6-2-13]
Part 30: U.S. Competition Beats EU Regulation in Broadband Race [6-21-13]
Part 31: Defending Google Fiber’s Reasonable Network Management [7-30-13]
Part 32: Capricious Net Neutrality Charges [8-7-13]
Part 33: Why FCC won’t pass Appeals Court’s oral exam [9-2-13]
Part 34: 5 BIG Implications from Court Signals on Net Neutrality – A Special Report [9-13-13]
Part 35: Dial-up Rules for the Broadband Age? My Daily Caller Op-ed Rebutting Marvin Ammori’s [11-6-13]
Part 36: Nattering Net Neutrality Nonsense Over AT&T’s Sponsored Data Offering [1-6-14]
Part 37: Is Net Neutrality Trying to Mutate into an Economic Entitlement? [1-12-14]
Part 38: Why Professor Crawford Has Title II Reclassification All Wrong [1-16-14]
Part 39: Title II Reclassification Would Violate President’s Executive Order [1-22-14]
Part 40: The Narrowing Net Neutrality Dispute [2-24-14]
Part 41: FCC’s Open Internet Order Do-over – Key Going Forward Takeaways [3-5-14]
Part 42: Net Neutrality is about Consumer Benefit not Corporate Welfare for Netflix [3-21-14]
Part 43: The Multi-speed Internet is Getting More Faster Speeds [4-28-14]
Part 44: Reality Check on the Electoral Politics of Net Neutrality [5-2-14]
Part 45: The “Aristechracy” Demands Consumers Subsidize Their Net Neutrality Free Lunch [5-8-14]
Part 46: Read AT&T’s Filing that Totally Debunks Title II Reclassification [5-9-14]
Part 47: Statement on FCC Open Internet NPRM [5-15-14]
Part 48: Net Neutrality Rhetoric: “Believe it or not!” [5-16-14]
Part 49: Top Ten Reasons Broadband Internet is not a Public Utility [5-20-14]
Part 50: Top Ten Reasons to Oppose Broadband Utility Regulation [5-28-14]
Part 51: Google’s Title II Broadband Utility Regulation Risks [6-3-14]
Part 52: Exposing Netflix’ Biggest Net Neutrality Deceptions [6-5-14]
Part 53: Silicon Valley Naïve on Broadband Regulation (3 min video) [6-15-14]
Part 54: FCC’s Netflix Internet Peering Inquiry – Top Ten Questions [6-17-14]
[Originally published at Precursor Blog]
The recent meeting in Mozambique of the signers of the Ottawa Convention, which bans the use of landmines, has brought the subject of landmines back into the spotlight. To date, 161 countries have signed the treaty, and its aims were included as official United Nations policy in 1999.
Long a vocal opponent of landmine proliferation and usage, President Barack Obama opened a review of America’s landmine policy in 2009. He has yet to take a major action, but many Obama-watchers fear he will soon take action to sign the treaty. He would be wrong to do so.
A time may come when landmines are no longer useful to the security of nations, but that time is not now. Whilst armies still depend on conventional weapons and movement – moving tanks and large infantry groups – the defensive tactic of landmines is highly useful and appropriate: it is cheap and helps maintain borders. Landmines can slow or stop an advance by breaking up an attack and forcing attackers along certain routes, delaying or even halting conflict.
Mines can even deter invasion in the first place. This has been the case in South Korea. The defense of South Korea from North Korean aggression depends upon the thick belt of landmines that lines the demilitarized zone. Without it, North Korea’s million-man army could easily cross into South Korea and take Seoul before sufficient defenses could be marshalled. South Korea is a key ally of the USA and to join in the ban on landmines would be to betray that ally. The failure of the Ottawa Convention to grant an exception for the Korean peninsula was the key reason for USA non-participation in the first place.
The convention also fails to adequately distinguish between different kinds of landmines. The US military has developed mines that can deactivate themselves and can even self-destruct. America only manufactures smart mines, and since 1976 the USA has tested 32,000 mines with a successful self-destruction rate of 99.996 per cent. The ban also fails to distinguish between responsible and irresponsible users. Under American deployment, only smart mines are used, and they are used responsibly, being set and removed in a methodical manner.
Another issue with a landmine ban is that it is easily circumvented by state and non-state actors alike. Landmines are merely a convenient way of providing what can be rigged in many ways – an explosion triggered when movement occurs in a particular area. Without landmines being legally available, soldiers and fighters will improvise landmines – they will wire up pressure plates and hand grenades and trip wires and high explosive charges, with much the same result. These tend to be much more difficult to disarm as they will not have a standard design and they may also have much more explosive power. This behavior was widespread in the Iraq and Afghanistan conflicts. The only differences are that these weapons are less efficient, and more dangerous to the user who prepares them.
It is not in America’s interest to ban landmines. Our ally relies on them and they still represent a valuable weapon system. Obama should not attach the United States to yet another treaty that diminishes our independence and denies us a tool for our defense and the defense of our allies.
Every transportation service is coming up with apps so customers can track their buses and trains to better plan their trips. Unfortunately, delays and construction still plague public transportation systems, which remain less reliable than they could be. Difficulty hailing cabs, in addition to their prices, has turned many away from using taxis. Having access to a ride at a moment’s notice is something that had not been perfected. Then, four years ago, Uber came along.
Bret Swanson, president of Entropy Economics LLC, recently joined our Jim Lakely on The Heartland Daily Podcast to discuss Uber. He explained how Uber taps into the transportation market by connecting drivers with people looking for a ride. The app gives customers access to Uber drivers in the area who can pick them up and drive them wherever they need to go.
Uber drivers who are online show up on customers’ maps, and Uber connects you with the closest driver. Riders have the choice of an Uber SUV, sedan, black car, or even a regular taxi cab. Unlike with taxis, customers do not need to worry about cash or credit cards, because payment is all handled through the app. Customers can also request a fare quote before requesting a ride or split their fare with a friend, all through the app. The best part of Uber: it’s about 50 percent cheaper than taxi cabs.
Uber was launched in San Francisco, and in four years it has expanded to not only 72 cities in the United States but 39 countries. The company has also become wildly popular with investors and is now valued at $18.2 billion. Last summer, Uber even launched a service to request ice cream trucks in a select number of cities. Despite its popularity, the company constantly offers discounts and promotions to encourage people not only to get the app but actually use the service. Uber’s approach provides safe, convenient, and cheaper options for everyone. However, there is one party that would disagree: taxi cab drivers.
Taxi drivers all over the country have staged protests against Uber, arguing that Uber’s fares should be the same as those of taxis. Though Uber allows customers to request taxis through the app, most opt to select an Uber vehicle instead. Uber’s fares are calculated by a basic supply-and-demand algorithm: the more demand for drivers rises, the more prices go up. Uber applies surcharges, but not always at rush-hour times the way taxi cabs do. Some days you may only have to pay an extra 5 percent during rush hour but an extra 25 percent at 2 p.m. The whole system utilizes the free market and weighs the value of each ride by how in demand it is at the time.
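The supply-and-demand pricing described above can be sketched as a simple multiplier on the base fare. Everything here is hypothetical – the ratio, thresholds, and cap are invented for illustration, and Uber’s actual algorithm is proprietary – but it shows the basic mechanism: prices rise as ride requests outstrip available drivers, regardless of the time of day.

```python
def surge_multiplier(ride_requests: int, available_drivers: int) -> float:
    """Return a fare multiplier that rises as demand outstrips supply.

    Illustrative only: thresholds, ramp rate, and the 2x cap are assumptions.
    """
    if available_drivers == 0:
        return 2.0                   # cap applies when no drivers are free
    ratio = ride_requests / available_drivers
    if ratio <= 1.0:
        return 1.0                   # supply meets demand: base fare only
    # Ramp the price up with excess demand, capped at 2x the base fare
    return min(2.0, 1.0 + 0.25 * (ratio - 1.0))

base_fare = 10.00
fare = base_fare * surge_multiplier(30, 10)  # demand is 3x supply
print(f"${fare:.2f}")  # prints $15.00
```

Note that the multiplier depends only on the demand/supply ratio, not the clock – which is why, in the example above, a quiet rush hour can cost less than a busy mid-afternoon.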
Listen to the podcast in the player above.
Hillary Clinton’s memoir, Hard Choices, has failed the one test even the Obama White House cannot rig (or simply chose not to): book sales numbers. Although the legacy media have commonly characterized sales of her book as lukewarm so far, the numbers are significantly worse than that, considering her name recognition and public prominence.
As the Washington Examiner reports, sales of Clinton’s book have been less than one-quarter of what Sarah Palin achieved with her book, Going Rogue: An American Life, when the latter was released while the former Alaska governor was enduring near-universal scorn from the mainstream media.
Palin’s book hit number one on the New York Times bestseller list. The performance of Clinton’s book is best described as underwhelming. The Washington Examiner story notes the contrast:
Palin’s book, which was released the same year President Obama moved into the White House, sold approximately 496,000 copies in its first week of release, according to figures cited by the New York Times.
That’s almost half a million copies in one week.
In contrast, Clinton’s book, with all her softball interviews and a massive amount of free publicity from an excited press, sold only 100,000 copies from its Tuesday release through Saturday, Politico reported.
Mrs. Clinton, of course, will keep the massive advance her publisher chose to pay her. The politician will thrive, and the business will suffer: how typical of contemporary American life.
[Originally published at The American Culture]
But we’re witnessing this spectacle on behalf of the Export-Import Bank of the United States, which for many decades, and for good reason, has been called by its critics “The Bank of Boeing.” Its charter expires September 30, and a battle over its possible extension is brewing between the political establishment and reformers.
The Export-Import Bank got its start in 1934. It’s a Great Depression-era relic that has always favored the largest and most politically powerful companies.
Here’s how entrenched establishment defense of the Export-Import Bank has become: In 2008, presidential candidate Barack Obama correctly declared the Bank is “little more than a fund for corporate welfare.” Now that he is President Obama, firmly seated atop the federal government, he defends the Bank.
At the opposite end of the political spectrum but also near the center of the political establishment sits the U.S. Chamber of Commerce, whose lobbyists were among those who worked a “lobbying day” event for the Bank last month.
President Obama and other Export-Import Bank defenders have been claiming the bank makes a “profit” for America’s taxpayers.
The Congressional Budget Office recently debunked this claim in a report that finds, under proper accounting standards, the Bank costs U.S. taxpayers an average of $200 million a year in losses.
The CBO explains this by noting the Export-Import Bank does not account for “market risk,” the danger that borrowers will become delinquent in repaying their loans or stop repaying them. Private banks have to account for market risk. The CBO report says the Bank’s current accounting standards “do not provide a comprehensive measure of what federal credit programs actually cost the government and, by extension, taxpayers.”
The largest beneficiaries of the Export-Import Bank have been huge companies, with Boeing standing out in this regard. Boeing’s customers include both domestic and foreign airlines. Because of the loans and guarantees the Export-Import Bank gives to overseas buyers of Boeing airplanes, those overseas airlines often end up paying less for Boeing planes than domestic airlines pay. In helping Boeing, the Export-Import Bank can end up hurting domestic airlines.
Boeing in 2012 received more than 80 percent of the Export-Import Bank’s largesse. Virtually every year, at least 40 percent of Bank backing aids Boeing.
Boeing’s revenue tops $80 billion annually. Other huge companies that show up on the list of companies receiving Export-Import Bank backing include General Electric Co., Caterpillar Inc., and even Pemex (the Mexican government-owned oil company). In most years, 10 companies receive at least 75 percent of the Bank’s backing. Some years it’s more than 90 percent.
All of these companies have smaller competitors, and those companies often receive little or no support from the Bank. It’s another example of Big Business being in league with Big Government.
The billions of dollars of Export-Import Bank backing go to less than 2 percent of total U.S. exports. And there is every reason to believe that sliver of exports would have happened without the Bank.
The United States exported $2.27 trillion of goods and services in 2013, a $61 billion or 2.7 percent increase from 2012. Yet Export-Import Bank loans actually declined by $8.5 billion in 2013. So U.S. exports grew even when Bank loans declined.
Congress should heed the words of our president when he was a candidate who stood for something: The Export-Import Bank is “little more than a fund for corporate welfare.”
End it, especially now that the Congressional Budget Office has shown us the bank’s dodgy accounting has been covering up hundreds of millions of dollars of annual losses.
With a surprisingly wide margin of victory, Congressman James Lankford won the Oklahoma Republican U.S. Senate primary, defeating former Speaker of the State House of Representatives T.W. Shannon by 23 points and avoiding a runoff election. Lankford now becomes the prohibitive favorite to replace outgoing Senator Tom Coburn, who is retiring with two years remaining in his current term.
This was a very different race from the one taking place in Mississippi. Despite negative ads run against Lankford by conservative groups, the Oklahoma contest was not an example of an “establishment” Republican or RINO versus a Tea Party candidate. In short, both Lankford and Shannon are credible, likeable conservatives, both are qualified for higher elected office, and both are likely to be on the scene in the future—to Oklahoma’s credit.
A former Baptist minister (or is a Baptist minister, like a Marine, never “former”?), Lankford directed a large Christian youth camp for more than a decade before winning election to Congress in 2010 in the Tea Party tsunami.
T.W. Shannon, the first black Speaker of the House in Oklahoma and a member of the Chickasaw Nation, has worked for former Oklahoma Congressman J.C. Watts and current Rep. Tom Cole (who won his primary on Tuesday and will seek a 7th term in Congress). He is a business consultant with a law degree from Oklahoma City University.
Although it makes life a little dull for reporters, the two candidates were exceptionally similar in their positions on issues. This made the race about retail politics, framing the opponent, and eventually about the perhaps back-firing impact of out-of-state and PAC money spent trying to influence the race.
Shannon was boosted by a Tea Party blitz, drawing support from Senator Ted Cruz representing the Senate Conservatives Fund. FreedomWorks PAC also endorsed Shannon, calling him “a principled leader…He has blocked ObamaCare implementation in Oklahoma, signed a pledge to fight Common Core, founded the first States’ Rights Committee to protect Oklahomans from overreaching federal regulation, and consistently voted for lower taxes and more individual freedom.”
The Sunlight Foundation, a campaign finance watchdog group, argues that “dark money” was “the key factor driving Oklahoma’s Senate battle,” referencing especially a group called Oklahomans for a Conservative Future which spent $1.3 million, mostly attacking Congressman Lankford.
But primary voters are better informed than the electorate overall, so attacks against Lankford for “voting with liberals to raise the debt ceiling twice” – despite the fact that both Tom Coburn and Oklahoma’s other conservative Republican senator, James Inhofe, also voted for the debt ceiling measure – landed with a thud. Instead, it seems that Oklahomans took minor offense at being told what to do, including by groups that consistently support conservatives but whose mailing addresses are within spitting distance of Capitol Hill, making them little more than possibly well-intended interlopers.
This result was predicted five months ago by Congressman Tom Cole, who said in an interview with Roll Call that “Groups coming from outside the state, coming to try and set the agenda, sorry. You are welcome to come, but you ought to look at your track record.”
Oklahomans should hope that T.W. Shannon runs for office again in the future. That said, nothing in James Lankford’s two terms in Congress should have made him unappealing to Sooner voters. And they were not going to let negative ads, whether by outsiders or even Oklahomans, fool them.

A similar story played out in Colorado’s Republican primary for governor, in which former Congressman Bob Beauprez eked out a victory in a four-man field. The race ended up far more competitive than most elections with a handful of candidates: Beauprez received 30 percent in victory, beating former Congressman Tom Tancredo (26.5 percent) and Colorado Secretary of State Scott Gessler (23 percent), while former State Senate Minority Leader Mike Kopp came in fourth with nearly 20 percent. It was as tight a four-person race as I have seen, with Gessler and Kopp outperforming many people’s expectations.

Beauprez, who lost a prior race for governor by a wide margin to Democrat Bill Ritter in 2006, never abandoned a vision of himself returning to office. Seeing what he perceived to be a weak field encouraged him to throw his hat into the ring; with Tuesday’s victory, Beauprez faces a difficult challenge in defeating incumbent Democrat Governor John Hickenlooper who, despite angering many Coloradoans with attacks on gun rights, refusing to execute a mass murderer, and supporting radical environmentalist plans to increase electricity costs (through increased renewable energy mandates) in rural Colorado, remains a fairly popular figure in the state.
The media have already reported the Colorado result as an “establishment” win. While Bob Beauprez is reasonably characterized as an Establishment candidate, the others were hardly Tea Party representatives.
Given his outspoken opposition to both illegal and legal immigration, Tancredo is a breed unto himself. To be fair, he has principled constitutional and libertarian leanings that I admire, and I belatedly endorsed him in 2010. But I believe that his reputation as a one-trick pony would not only have made him unelectable, but also would have poisoned the ticket for other Republicans, particularly Congressman Cory Gardner, whose race to unseat Senator Mark Udall is winnable.
There was very little public polling done in Colorado in recent months. Earlier in the campaign, the front-runner appeared to be Tancredo, who lost a three-way contest for governor in 2010 when he switched to the American Constitution Party after the Colorado GOP nominated an unelectable candidate in a bit of misdirected Tea Party mania. Why did Tancredo, whose name recognition is roughly equal to Beauprez’s (both of whom are better known than Messrs. Gessler and Kopp), lose his early lead? In part, similar to what happened in Oklahoma, because political ads backfired.
One of the first widely run ads in the campaign accused Tancredo of being “too conservative for Colorado” because of his strong opposition to Obamacare. This transparent ploy to make Tancredo more appealing to Republican primary voters by pretending to criticize him was paid for by a Democrat-affiliated 527 group called Protect Colorado Values. Clearly the Democrats perceived Tancredo’s potential negatives the same way I did, but their obvious involvement was a major miscalculation.
The same Democrats ran an ad accusing Bob Beauprez of supporting an individual health insurance mandate—which in fact he “reluctantly” did in 2007, though it never translated into support for Obamacare and he later changed his view. But despite Beauprez’s imperfect record (which is no worse than most other Republicans who served during the George W. Bush years) nobody who follows Colorado politics believes him to be anything but a solid conservative.
Again, primary voters, who tend to be better informed than the population overall, took umbrage at the transparent attempt at manipulation.
Republicans also ran unfair—and almost certainly ineffective, despite Tuesday’s results—ads against Tancredo, such as one supported by the popular former Senator Bill Armstrong that suggested Tancredo would legalize heroin and other hard drugs. In fact, Tancredo has taken a bold position for marijuana legalization and has said he would consider legalizing other drugs (mostly in the interest of reducing violence caused by gangs protecting drug profits), but the ad was so hyperbolic that its effect was likely minimal.
The outspoken social conservative Mike Kopp campaigned aggressively on opposition to marijuana legalization, but Colorado voters were not overwhelmed with a backward-looking message on an issue where the people have spoken.
Perhaps with the memory of Republicans’ enormous mistake in 2010 of nominating an unelectable small businessman whose personal story was, to put it kindly, exaggerated, and perhaps because many voices (such as on my radio show) urged GOP primary voters to consider first and foremost the candidate most likely to win in November, Bob Beauprez came from behind to earn his second shot at the Governor’s Mansion. While I think it will be a serious challenge to beat John Hickenlooper in November, Beauprez’s victory is welcome news to Republican Senate candidate Cory Gardner and other Republicans down the ticket. My suggested motto for participants in Tuesday’s primary, borrowing 1,500-year-old wisdom: First, do no harm. By selecting Beauprez, they’ve heeded that advice.
In an under-the-radar local election in Loveland, Colorado, voters rejected by 52 percent to 48 percent a moratorium on fracking, despite an onslaught of misleading ads from liberal opponents of energy development. Voters may have noticed that Weld County, which Loveland borders, produces most of the oil in Colorado and, according to a pro-energy development group, “had the largest percentage increase in employment in the US in 2013.” Fracking bans, many disguised as measures supporting “local control”—the backing of which by Democrats should make anyone suspicious since liberals always want political power to be as far from the people as possible—may be on many other ballots across the state in November. Thus, Tuesday’s result is a welcome potential harbinger of sanity when it comes to one of Colorado’s most important industries.
Just a few comments on Mississippi (which my colleague Matt Purple is covering here): Thad Cochran represents everything that is wrong with the Republican Party; if that weren’t already clear, the fact that John McCain campaigned for him should have been the final necessary proof.
Pork king Cochran won his race by using Mississippi’s unfortunate election rules—which allow Democrats to vote in the Republican primary if they haven’t already voted in the Democratic primary—to win support from the opposition party by unashamedly promising more federal spending for his state. A typically inept mainstream media analysis was provided by CNN’s Gloria Borger, who suggested that the GOP could learn something from Cochran’s winning coalition of establishment Republicans and Democrats, since many of those Democrats had never before in their lives voted for a Republican. The problem is that approximately none of those Democrats will ever again vote for a Republican. In the meantime, Cochran’s supporters unsubtly played up the worst (e.g., racist) stereotypes of Tea Party candidates.
Republicans like Thad Cochran are the raison d’être for the Tea Party and candidates like the unsuccessful Chris McDaniel. A Republican senator who wins a primary election on the strength of Democratic support by making promises that should come from Democrats and other proponents of redistribution, pork, wasteful spending, and fundamentally unlimited government is the very definition of been-there-too-long. (Cochran has been in Congress, including the House, for more than 40 years—and it shows.) The GOP and every Republican who supported Cochran should feel something between slight embarrassment and outright shame.
A final note: One has to wonder how Thomas Carey feels today. Carey was the third Republican candidate in the original Mississippi primary race. He had no business in the race and no chance to win. Yet his presence almost certainly cost McDaniel the outright win on June 3, forcing the run-off election and allowing Cochran the time to organize Democrats to hold on to the seat he uses to buy votes with our money. Mr. Carey owes the nation an apology.
[Originally published at The American Spectator]
Since the end of the initial open enrollment period, there has been a marked rise in the frequency of a certain type of argument – an argument I hear with regularity inside the Acela corridor, but almost never outside of it. The argument goes something like this: regardless of the political toxicity of Obamacare, it is here to stay, and the law’s opponents and Congressional Republicans need to wake up to that fact, or else.
The “or else” could be anything, and is essentially interchangeable. The most common prediction is of electoral doom; less common are predictions of revolutionary protests in the streets turning violent in defense of Medicaid benefits, of Republicans losing broad swathes of traditionally red states in this year’s Senate contests, or, most recently, of Republicans losing 90 percent of women voters in 2016. And yes, I’ve heard all of these and more in recent weeks.
This argument has a milder version which is repeated in the more sensible press. These observers concede that yes, Obamacare is still very unpopular, and yes, premiums are still going up, and yes, it’s signed up fewer of the uninsured than expected, and even those newly insured are barely favorable toward it… but still, they insist, talk of repeal and replace is just politicians irresponsibly playing to the more radical elements of their conservative base. Forget the polls – Obamacare is here to stay.
I think this is a mistaken view of the political realities at play here. Perhaps it is driven by the drumbeat of “good news, everyone” put forward by supporters of the law. But in an era when wonks are so plentiful, data journalists fall fully ripened from the trees, and explainers flower with the glorious frequency of endless summer, it’s easy to lose sight of the simple factors that will determine whether a policy endures or is dramatically reformed.
It’s a mistake to assume there is a magic number, whether a share of the uninsured who gained coverage, a tally of Medicaid signups, or a percentage of average premium increases, that will mark the point where Obamacare is safe from Republican assault. The average American voter and policymaker is not watching these factors; they are aware of Obamacare’s performance primarily through how it impacts their livelihoods, costs, and constituents. The opponents of the law are far louder and more motivated than its supporters. And that is very unlikely to change any time soon.
This is why I do not understand the assumptions of inevitability on the part of the law’s supporters. The Republican Party has put the repeal of President Obama’s signature law at the center of its agenda for years. It has taken repeal vote after repeal vote and made pledge after pledge. As a matter of partisan priority, there is nothing greater. And one more year of Obamacare will not change that.
Every single feasible candidate for the 2016 Republican nomination will loudly declare their support for repealing the law. Most will also offer a policy replacement, culled from the various technocratic and free market think tanks or from the legislation currently introduced in Congress. Whoever Republicans choose as their nominee, their favored replacement will become the de facto alternative Republican plan which party leaders and elected officials will all be expected to defend. And should the Republican candidate win, it is inconceivable that they will not have run on making the replacement of Obamacare a top priority for the first 100 days in office.
Republicans are not going to back off their efforts for repeal. It is a top priority for their national base, for their donors, and for their constituents. If Republicans have the Senate, it becomes that much easier – but even without it, the margin will be narrow, and the possibility for dealmaking outranks the likelihood that every single Democratic Senator will toe the line and pass on the opportunity to help remake health policy as they see fit. And while the election of Hillary Clinton or another Democrat would prevent this circumstance and protect Obamacare from assault, assuming that such an election is inevitable is really what you’re saying when you say Obamacare is here to stay.
The political legacy of Obamacare and the 2012 election is a vindication of monopartisan governance. Great domestic policies are no longer achieved via bipartisan give and take or the leadership of careful compromisers – they are rammed through with the support of your party and your base when you have the power to do so. I fully expect to see Republicans attempt to do that should they retake the White House.
So what are we to do in the time until November 2016? Well, in the meantime, we can discuss the other factors and outcomes of this policy in the ways they impact America’s insurers, hospitals, drugmakers, and industries. But we should not lose sight of the fact that it is this political outcome, and this outcome alone, which will determine whether Obamacare survives or not. It’s just not that complicated.
Alexander Hamilton was America’s first Secretary of the Treasury under President George Washington. When he entered office in 1789, America was an agricultural nation of just 4 million people, still broke from its financially costly victory over the British Empire in the Revolutionary War.
The states had accumulated relatively massive debts to finance that war, which mostly remained unpaid. The United States did not even have a national currency, with Spanish coins still in wide circulation and use. Steve Forbes explains in his recently published definitive work, Money: How the Destruction of the Dollar Threatens the Global Economy and What We Can Do About It, “America’s finances were in a state of disarray after the wild inflation resulting from massive money printing during the American Revolution.” As a result, “Hamilton faced the challenge of restoring the economy of the young republic that had been devastated by the Revolutionary War….”
Hamilton boosted America’s economy first by advancing legislation for the federal government to assume and pay off the debts of the states, establishing the foundation for America’s historic creditworthiness. That was recognized by America’s AAA credit rating for over 200 years, until 2011 when the relentless spending of the Obama Democrats led to the first credit downgrade of the nation in history.
But even more importantly for the nation’s long term economic growth and prosperity, Hamilton promoted The Coinage Act of 1792, which established the first U.S. Mint, and fixed the value of the dollar at $19.39 per ounce. That was devalued slightly in 1834 to $20.67, which prevailed for 100 years, until President Roosevelt adopted the only major U.S. devaluation in history during the Depression, to $35 an ounce. That prevailed until President Nixon took America off the gold standard in 1971.
Forbes explained the results: “Overnight the economy sprang to life. Capital poured in from the Dutch and also America’s former enemies, the British. Barely a century after Hamilton’s reforms, the United States was the premier industrial power in the world, surpassing even Great Britain.” He added, “Hamilton’s system of banking and stable money quickly attracted and generated capital. It turned the American economy into the leading industrial power in the world.”
Forbes further explains that while America was under the gold standard, the economy boomed at an astounding 4% real rate of economic growth. At that rate, our economy, incomes and standard of living would double every 17 years. That was the foundation of the American dream and our historic, geometric explosion into the world’s leading “hyperpower.” Forbes adds that in the U.S., “Between 1870 and 1914, real wages more than doubled even though the country had millions of immigrants [greatly expanding the supply of labor]. Agricultural output tripled. Industrial production…surged a jaw-dropping 682%.”
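Forbes’s doubling claim is just compound-growth arithmetic: at a steady annual growth rate g, output doubles in ln(2)/ln(1+g) years. A quick sketch in Python (the rates come from the text; the function name is my own):

```python
import math

def doubling_time(growth_rate: float) -> float:
    """Years for output to double at a steady annual growth rate."""
    return math.log(2) / math.log(1 + growth_rate)

# 4% real growth under the gold standard, per Forbes
print(round(doubling_time(0.04), 1))  # 17.7 -- roughly "every 17 years"

# 3% average growth since 1971, for comparison
print(round(doubling_time(0.03), 1))  # 23.4
```

At 4% the economy doubles roughly every 17 to 18 years; at the post-1971 3% average it takes about six years longer per doubling, which is the gap Forbes emphasizes.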
The question is why Hamilton understood economics so much better than the Ivy League poobahs of today, like Paul Krugman, who are more interested in promoting the socially hip stagnation of socialist equality than the dynamic economic growth of capitalism. If Colonel Hamilton were alive today, he would be more worthy of the Nobel Prize in economics than at least half of those prize winners living today.
Great Britain experienced quite similar results under the gold standard. In 1696, the Enlightenment philosopher John Locke was joined by the path-breaking scientist and physicist Isaac Newton in arguing against devaluation as Britain replaced, or “recoined,” its debased currency with new, unshaved, fully restored coins. By 1717, Newton was Master of the Royal Mint, and he fixed the value of the British pound at 3.89 pounds per ounce of gold. That exact historic value remained unchanged for more than 200 years, until 1931.
Forbes notes, “When it tied the pound to gold, Britain was a second-tier nation. Soon all of that would change.” A century later, “By the end of the Napoleonic Wars in 1815, Great Britain emerged indisputably as the world’s major power and global center of innovation.”
Economic Benefits of the Gold Standard
Fixing a nation’s currency to gold assures that the currency maintains a stable long term value, without inflation, or deflation. That enables a nation’s money to serve as a measure of value, like a ruler measures inches, or a clock measures time. Such a stable measure of value, in turn, means money can best perform its most essential function in facilitating transactions.
When money serves as a stable measure of value, it most clearly expresses the value of everything in terms of everything else. That best enables producers to determine whether their production is adding or wasting value as compared to the value of the inputs to that production. Or whether they should be producing something else instead that might create greater value. That information is essential for an economy to maximize output and economic growth over time.
When a farmer trades his crop for such stable money, he immediately knows what that crop is worth. And he knows that he can keep that value of his production in the currency because it will hold its value over time, until he is ready to buy something with it. That stability of the reward for production undisturbed by monetary fluctuations adds further to the incentive for such production.
Similarly, with a stable value for money, investors know the money they will receive back from their investment will be worth the same as the money they put in it, undepreciated by inflation. That encourages greater savings, investment and capital formation from within the country. And it encourages investment and capital to flow into the country from abroad. This maximizes overall investment, production and economic growth.
Nixon Takes America Off the Gold Standard
On August 15, 1971, President Nixon took America, and the world, off the gold standard completely, leaving a world of unanchored fiat currencies, by terminating the postwar Bretton Woods monetary regime. Nixon and his advisors mistakenly believed that this would help the economy by promoting American exports, which Forbes recognizes as 18th century mercantilist thinking.
But it was a decisive turn for the worse for the American economy, and the entire global economy. Since that time, real annual U.S. economic growth has averaged 3%, down 25% from the prior gold standard long term trend. Forbes explains, “If America had grown for all of its history at the lower post-Bretton Woods rate, its economy [today] would be about one quarter of the size of China’s. The United States would have ended up much smaller, less affluent, and less powerful.”
Moreover, “Since 1971, the dollar’s purchasing power has declined by more than 80%,” with about a third of that (26%) since 2000. Real incomes have been stagnant, or even declined. “[A] man in his thirties or forties who earned $54,163 in 1972 today earns around $45,224 in inflation adjusted dollars—a 17% cut in pay.” Unemployment has been significantly higher on average. Globally, “After the 1970s, world economic growth has been a full percentage point lower; inflation 1.5% higher.”
Forbes observes, “The correlation between unstable money and an unstable global economy would seem obvious.” Indeed, the termination of any link between the dollar and gold immediately inaugurated worsening boom and bust cycles of inflation and recession in the 1970s, with inflation soaring into double digits for several years. Inflation peaked at a cumulative 25% over just two years in 1979 and 1980.
It took the worst recession since the Great Depression in 1981-1982 to tame that inflation, with double digit interest rates for years, and unemployment peaking at 10.8%. The Reagan/Volcker/Greenspan strong dollar monetary policies effectively restored a discretionary link to gold, with gold stabilizing around $300 to $350 for 20 years. That kept close control over inflation.
But this discretionary standard broke down as 2000 approached. The Fed loosened money and reduced interest rates over the Y2K scare, contributing to the tech stock bubble. Much worse, the Bush Administration supported a weak dollar monetary policy again on the mercantilist/Keynesian confusion that would help the economy by promoting exports. That included more loose money and 2½ years of negative real interest rates which served to pump up the housing bubble and lead, along with Clinton’s wild overregulation (in the name of affordable housing), to the 2008 financial crisis and recession.
Restoring a Dollar Link to Gold for the 21st Century
The best thing about Steve Forbes’ new book, Money, is that it discusses exactly the specific reforms that should be adopted today to establish a modern, 21st century link to gold for the dollar. That new system would not require the federal government to hold any gold stockpiles, and the money supply would not be limited to the availability of any quantity of gold.
Federal law would fix the dollar’s value in gold at a specified market price. That price would be set by some index to recent market prices for gold, perhaps the average gold price for the last 5 to 10 years, marked up by 10% as a hedge against causing deflation in the process. Federal law would mandate that the Fed conduct its monetary policy to ensure a stable value of the dollar at that market price.
The Fed would enforce that price through its open market operations buying and selling U.S. government bonds. If the price of gold began wandering in the market above the specified market price, that would signal the threat of inflation, and the Fed would begin tightening monetary policy by selling bonds to the market in return for cash withdrawn from the market. That reduced money supply would hold down price increases in the market, including for gold. The Fed would continue this policy, until the market price for gold returned to its specified target value.
If the price of gold began wandering in the market below the specified market price, that would signal the threat of deflation. The Fed would then begin loosening monetary policy by printing cash to buy U.S. government bonds in the market. That would increase the money supply, which would tend to increase prices in the marketplace, including for gold. The Fed would continue this policy until the market price for gold returned to its specified target value. The Fed would be required by the federal law to take such actions to prevent the price of gold from varying from the target price by more than 1%, which was the range permitted under the Bretton Woods system for currencies to fluctuate against the then gold backed dollar.
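The two symmetric rules just described reduce to a simple feedback loop: compare the market price of gold to the legal target, and tighten or loosen only when the price drifts outside the 1% band. A minimal sketch of that decision rule, assuming a hypothetical $1,300/oz target purely for illustration (the function and its names are mine, not part of Forbes’s proposal):

```python
def fed_action(market_price: float, target_price: float, band: float = 0.01) -> str:
    """Open-market response under the proposed gold-price target rule.

    Gold above the target signals inflation, so sell bonds (tighten);
    gold below it signals deflation, so buy bonds (loosen); within the
    1% band (the old Bretton Woods tolerance), do nothing.
    """
    deviation = (market_price - target_price) / target_price
    if deviation > band:
        return "sell bonds"   # withdraw cash, shrinking the money supply
    if deviation < -band:
        return "buy bonds"    # print cash, expanding the money supply
    return "hold"

# Hypothetical $1,300/oz target, purely for illustration
print(fed_action(1320.0, 1300.0))  # sell bonds: gold more than 1% above target
print(fed_action(1295.0, 1300.0))  # hold: within the 1% band
```

The point of the sketch is that the rule is mechanical: no discretion is involved beyond executing the trades the gold price dictates.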
The federal law would provide that this new monetary policy would become effective at a specific date set in the future, perhaps 12 months away, to enable the private economy to plan for and adjust to the new policy. The law should grant the President or some other federal official the power to adjust the target price for gold to reflect more recent market prices as the implementation date approaches. Those more recent market prices would better reflect what the target gold price should be when the dollar is based on this new link to gold. As a lesson learned from experience with President Obama, the law should also specify that any member of Congress would have standing to sue the President or other designated official if he or she did not carry out the law regarding this later market based adjustment as provided, and that federal courts would have the power to enforce relief. For example, not following more recent market prices in adjusting the target price would be a violation of the law.
This would effectively mean that the Fed would no longer have any power to pursue discretionary monetary policies to try to guide the economy in one direction or another. The new federal law would bar the Fed from attempting to manipulate interest rates, for example. The Fed would no longer have the power to set the federal funds rate, which is the rate banks pay to one another to borrow reserves. The Fed would continue to have the power to act as a lender of last resort to deal with financial panics that might temporarily threaten an otherwise sound bank. So the Fed could continue to set the “discount rate” that it would charge for such short term, lender of last resort borrowing. But even that would be required to be set above market rates, so that the Fed would not become a cheap source of funds for banks to borrow to lend out.
Along with a federal balanced budget amendment to the Constitution, this would effectively make Keynesian economics illegal. That would be highly desirable, because Keynesian economics is proven not to work, and Keynesian advocates are so oblivious to reasoned discussion on the point.
As a safeguard to help ensure that the Fed did follow its responsibilities under this new law, the law should specify that anyone could turn dollars into the Fed, and get gold at the legally specified target price. If the Fed was following the law, it could always buy gold in the market to pay for such a redemption in return for the target price for gold. If the Fed was not following the law, then it would likely not be able to finance such mandatory redemptions. The new federal gold law should again specify that any member of Congress would have automatic standing to sue the Fed to enforce the law.
Another safeguard would involve removing all barriers to the rise of private, competing, alternative currencies, to challenge the Fed to enforce and follow the law. That would mean no taxes, including capital gains taxes, could be assessed on sales of gold and silver. If the Fed did not follow the law, then these competing currencies could displace the dollar.
Such a new gold link to the dollar would be the last, missing component of any comprehensive strategy to restore traditional, world-leading American prosperity. Such a strategy would also include personal and corporate tax reform to lower tax rates, deregulation of unnecessary regulatory costs and barriers, reduced federal spending to balance the budget and reduce the national debt as a percent of GDP, and free trade. Those policies could be expected to restore long term U.S. economic growth to 4% a year, which would leapfrog the American economy another generation ahead of the rest of the world.
So much blood and treasure was wasted during the long occupation in Iraq that there was a sigh of relief across America when the troops finally left. Yet the end of the American presence has resulted in chaos. Islamist extremists in recent days have been making gains against the Iraqi military, seizing several towns, including the city of Mosul. The sheer rapidity of the collapse of law and order in Iraq led to a lot of hand-wringing in the White House. President Obama finally decided to send a few hundred troops to bolster the beleaguered regime of Prime Minister Nouri al-Maliki. This choice will only serve to further diminish the status of the United States in the region.
There is a better course of action: let Iraq break up. For nearly a decade the United States has been trying to keep the three distinct ethno-national groups in Iraq cooperating. This policy has failed disastrously, with Shi’ites and Sunnis still at each other’s throats and the Kurds finding their semi-autonomy threatened by the central government in Baghdad. The only way to salvage something from the wreckage of Iraq is to effectively strip it for parts.
With Iraq broken up along largely ethnic lines, the Shia region would be relatively safe, as the Sunni minority would not risk the ire of Iran, which has always seen itself as a guarantor of the Shia population. The problem of Islamic extremism could then be dealt with on a more targeted basis. It would also lend greater clarity to the growing cross-border threat in Syria.
What should America do? First, Obama should send representatives to the Kurdish regional government in Northern Iraq. To date, the United States has kept a lid on the clear desire of the Kurds to declare independence. Now they should help smooth the process out. Kurdistan would be a relatively free, stable, and potentially prosperous ally for the United States in a region that has soured of late to the Stars and Stripes.
Guaranteeing Kurdish political autonomy could also be made cost-neutral. The significant oil wealth recently discovered in Kurdish territory could easily fund an American military presence, and such an arrangement could be worked out: Kurdish leaders have proven very pragmatic in their outlook and could easily be prevailed upon to support a US military presence in the region to guarantee their security.
The result of this policy maneuver would be an Iraq region no longer riven by so much ethnic conflict, though the threat of Islamist extremism would remain serious. It is a big ask for America to spend even more to prop up a teetering regime, and Americans have largely lost the stomach for prolonged conflicts. And rightly so. If the situation is to be salvaged, it must be faced in the knowledge that the sort of full-scale, boots-on-the-ground operation necessary to even temporarily resuscitate the central government and secure its borders is not going to happen.
The way forward is to take the least costly steps that will guarantee a modicum of stability, support peaceful and friendly governments, and secure the safety of Americans at home and abroad. A broken up Iraq and a free Kurdistan now seems to be the way to make the best of a dire situation.
It is time for the Obama administration to change its rhetoric from condemning pro-independence actions in Kurdistan to a policy of facilitating peaceful secession.
It’s vacation time for the nation’s school kids and, while they play, states are beginning to push back against the latest effort of the federal government to exert total control over the nation’s schools: Common Core, whose curriculum standards and content have rapidly revealed it to be a nightmare.
As I frequently note, the word “education” does not appear in the U.S. Constitution because the Founding Fathers knew full well that education was the job of localities and states to ensure quality and the opportunity that it provides everyone willing to learn the basics and beyond. From its earliest days, Americans would create a town, build a church, and follow up with a school. Until liberals complained about it, school days began with a prayer.
Liberals know that whoever controls schools controls the future. Dictatorships of all descriptions in particular place heavy emphasis on raising new generations with the kind of indoctrination that only the early years in school can impart. It should come as no surprise that the last failed liberal President, Jimmy Carter, ushered in the creation of the U.S. Department of Education.
Still largely unknown to the general public is the control of the Department by the teachers unions, the National Education Association and the American Federation of Teachers, and their support of the Democratic Party. This accounts for much of the well-documented decline of education in America. The unions’ chief concern is higher pay and benefits for teachers, not the welfare of the children in their care. Their focus is on politics, not teaching.
In March, the Cato Institute’s Center for Educational Freedom issued a new study, “Academic Performance and Spending over the Past 40 years,” which revealed that “the average state has seen a three percent decline in academic performance despite a more than doubling in inflation-adjusted per-pupil spending.” Sometimes the spending increases are astonishing, as in the case of New York State, where spending rose by 115%. California and Florida are not far behind, with increases of 80%.
Common Core has rapidly become a political hot potato as parents have let their state governors and legislators know how bad it is. Writing in the Heartland Institute’s May edition of its newsletter, School Reform News, Joy Pullmann, its managing editor, reported that Indiana Gov. Mike Pence was the first to sign a bill in March rejecting Common Core national standards, “but the parents and curriculum experts whose criticism led to the change also criticized the first draft of replacement standards for looking very similar to the Common Core mandates it is meant to replace.”
In a Heartland booklet, “The Common Core: A Bad Choice for America”, Pullmann notes that “States may not change Common Core standards, must adopt all of them at once, and may only add up to an additional 15 percent of requirements. The standards themselves have no clear governance, meaning there is no procedure for states to follow to make changes they feel are necessary. It is highly unlikely individual states would control or greatly influence any such process.”
At the very heart of the debate concerning Common Core is the notion that every single school in America should teach the exact same thing in the exact same way. That’s not how real education works and any teacher will tell you that different students learn at different rates and some require some extra help. Schools free of such one-size-fits-all thinking educated generations of Americans who made the nation the greatest economic power in the world.
Thus far, in addition to Indiana, state legislatures in Oklahoma, South Carolina, and Missouri have approved measures to exit Common Core’s national standards. Louisiana’s Gov. Bobby Jindal in mid-June said “We want out of Common Core” and is taking steps to reject it.
Common Core is the fulfillment of liberals’ dream of education. It was developed in 2009 by the National Governors Association and the Council of Chief State School Officers. It was quickly incentivized by the Obama administration with $4.35 billion in Race to the Top competitive grants and waivers from the federal No Child Left Behind law for states that signed on. Minus the states that have rejected Common Core, there are still 42 that have adopted it.
Ron Paul, commenting on the Oklahoma opt-out, said “Common Core is the latest attempt to bribe states, with money taken from the American people, into adopting a curriculum developed by federal bureaucrats and education “experts.” In exchange for federal funds, states must change their curriculum by, for example, replacing traditional mathematics with ‘reform math.’ Reform math turns real mathematics on its head by focusing on ‘abstract thinking’ instead of traditional concepts like addition and subtraction. Schools must also replace classic works of literature with ‘informational’ texts, such as studies by the Federal Reserve Bank of San Francisco. Those poor kids!”
Common Core’s curriculum standards are testimony to why abandoning local control over a community’s or city’s educational program is a very bad idea, and to why, once again, the federal government has demonstrated that it makes worse virtually any program that should be left to the states.
The 2009 “Stimulus” bill contained $7.2 billion for local government broadband — the federal government giving city, county and municipal governments money to get into the Internet Service Provider (ISP) business.
Everyone in Utah may be charged $20 a month to bail out UTOPIA, the woefully misnamed, decade-long government broadband disaster. The government broadband project iProvo lost tens of millions of dollars; then Google swooped in and purchased everything for one dollar.
Shocker: Google loves government broadband.
Government broadband is so terrible, in fact, that twenty states have passed laws limiting it.
Meanwhile, these local governments have been just as awful as stewards for their residents when it comes to private broadband. You know, the kind that actually works, the competition with government broadband.
Local governments shake down the living daylights out of any wired company that comes asking to provide service, making it nearly impossible for them to do so. The result: many areas suffer a dearth of hardline options.
Government is (yet again) the problem. The answer to government isn’t more government. Unfortunately, no one has told this to Federal Communications Commission (FCC) Chairman Tom Wheeler.
Decrying this government-created lack of options, Wheeler has declared he will issue another Obama Administration fiat, steamrolling the laws of the twenty states and ramping up federal government spending on local government broadband.
Does the federal government have the authority to do this? Of course not.
And why not start with the thirty states that don’t have these laws? You know, be a little less dictatorial about it?
We answer all of this — and much more — in the accompanying video.
You’ll find the answers … disquieting.
The dearth of transplantable organs remains a serious problem in the United States and in much of the world. There are 123,000 Americans currently waiting for an organ, and 18 of them die every day because demand continues to exceed supply. The problem has drawn the attention of many activists and policymakers, but sometimes the proposed solutions have proven more unpleasant than the problem. Chief among these unsavory solutions is the policy of opt-out organ donation.
Opt-out organ donation operates on the principle of presumed consent. This means that the government assumes that an individual is willing to have their organs harvested upon their death unless that individual has explicitly opted out of being a donor. Advocates for this system argue that this would greatly increase the number of organs available for transplantation and would save many lives.
The advocates for opt-out organ donation ignore something very important in their rush to claim dominion over the bodies of the dead: ordinary people’s views of the human body. To most Americans, the inanimate human body is more than a mere container of usable tissues. Even absent the spark of life, a body is usually seen as still being part of the deceased person.
This is not so much a religious or even spiritual sentiment, but a deeply human one. We attach significance to the body, whether it is a shell or all that remains of a person who was. We see it often as something worthy of respect.
This is why the body is not only essential to many funerary rituals, but is also a critical part of many people’s personal mourning and remembrance. It is why, in the wake of natural disaster, a huge amount of effort is put into the recovery of bodies that could have no medical use. It is why soldiers risk their lives to recover the remains of their fallen comrades. In essence, there is a personhood that we acknowledge by convention and sentiment even in the case of the dead.
Why is this perception of the body antagonistic to opt-out organ donation? Because it gives the presumption of ownership and control to the government.
Defenders of an opt-out policy might retort that because no one is obliged to donate their organs and can tick the box to remove themselves from the list, the self-ownership of the individual is not compromised. That reasoning is deeply flawed because it ignores the fundamental quality of the very idea of presumed consent. By presuming consent, the government essentially says that it owns your remains unless you go through a process that explicitly tells it otherwise. That completely turns on its head the idea of self-ownership as a baseline assumption.
Self-ownership, the underlying right of an individual to be independent of external domination, is nullified when an individual has to sign a petition to prevent the state from harvesting their organs. What an opt-out system does is change the relationship between the individual and the state in such a way that the state has a much greater presumptive power over the individual’s very humanity.
Furthermore, there is an unpleasant smell of utilitarianism about opt-out programs. They seem to relegate choice to a secondary concern behind the overall welfare of the polity. When the state begins making such a calculus about the disposition of its citizens, it does not take long for it to view them as means rather than ends. For citizens to be truly free, they must not simply be agents of the state apparatus. There must always be some distinction between individuals and their societies.
There are other ways to increase organ donations. Donor drives are just one example. Whatever encouragements they offer, the burden must be on the state to encourage people to make the decision to donate their organs, not to just assume people have already consented.
Google recently paid $555 million for Dropcam, a company that makes inexpensive, easy-to-install, WiFi video-streaming cameras that connect to cloud-based networks for convenient monitoring, setup, and retrieval.
Please don’t miss this graphic – here – of how the Dropcam acquisition fits into Google’s plans for a new ubiquitous physical surveillance network that will complement and leverage its existing virtual surveillance network.
Dropcam fills a big missing part of Google’s vision – literally to see, hear and track everything – in order to fulfill Google’s mission “to organize the world’s information.”
Most Rapid and Complete Vertical-Integration
What is remarkable here is that in only about six months Google has bought six key companies (Boston Dynamics, Nest, DeepMind, Titan Aerospace, SkyBox, & Dropcam) that comprise many of the key building blocks necessary to create a ubiquitous surveillance network that can physically track most everyone and everything from the sky and on the ground.
Effectively Google is taking its dominant ad-driven surveillance model to the next level. Obviously it is not content with dominating just the virtual world of data and monetization of software products and services. Apparently, Google has ambitions to leverage its virtual dominance to dominate large swaths of the physical economy as well: e.g. wearables, devices, aerial mapping, robots, cars, energy management, smart home services, Internet access, etc.
Importantly, physical surveillance, involving hardware and people, is much more difficult-to-scale, costly and people-intensive than Google’s virtual surveillance via cookies and other easy-to-scale software tracking technologies.
Evidently, no other company/entity is looking at the 21st century world/economy as holistically as Google’s apparent vision of fully integrating virtual and physical surveillance networks.
One could argue that these strategic acquisitions over the last-half year could be more cumulatively transformative of Google’s strategic direction, business mix and capabilities long term than any other half-year in Google’s storied history.
Simply put, just as the Google+ effort seamlessly integrated dozens of online products and services into a unified offering, expect Google to embark on another integration effort that quietly and seamlessly knits these many new physical assets into a unified physical surveillance network. Once that is complete, expect Google’s dominance to be much greater than it is now, because it is vertically integrating much faster and more completely than any other entity, by far.
Accelerating & Compounding Privacy/Wiretapping Problems
The privacy problems with physical surveillance in the real world are dramatically greater than in the largely-privacy-free virtual world.
For example, consider the two big privacy problems Google got into when it effectively wiretapped both Gmail and home WiFi via Street View. For Gmail, a Federal Judge has ruled that Google’s installation of a physical “Content One Box” to scan Gmails to create advertising profiles was effectively illegal interception or “wiretapping.” For Street View, a Federal Appeals Court has also ruled that Google’s Street View interception of home WiFi signals was effectively wiretapping because the signals were judged to be private and not public.
The super big problem here for Google is that in at least two of its highest-profile, longstanding services, Google did not believe it needed either to disclose what it was doing with others’ communications, or to ask anyone for permission to do what it was doing with their private information.
If surveillance innovation-without-permission is the norm at Google, and Google continues to maintain the legal position that people “have no expectation of privacy,” Google’s physical surveillance using Dropcam, and other physical surveillance technologies, for Google’s business purposes, could be at risk of being ruled illegal wiretapping as well.
A Profound Business Conflict-of-Interest
In conclusion, the acquisition of Dropcam potentially provides Google’s engineers and advertising business model with arguably some of the most private, intimate, and valuable personal information available: a continuous inside look into someone’s inner sanctum, where the public and competitors could never go or see. The temptation for Google to use and leverage this valuable private information will be enormous.
With Nest, but even more so with Dropcam, Google has created a profoundly serious business conflict-of-interest by putting a paid-privacy-based-service inside a privacy-hostile advertising business model thirsting for access to the most valuable private info.
If there is one thing we’ve learned about Google, from its world’s-worst privacy rap sheet and its latest ambitions for a ubiquitous physical surveillance network, it is that Google has very serious problems respecting boundaries and asking for permission to use others’ private data.
George Orwell, in his classic dystopian novel “1984,” envisioned a surveillance technology called the telescreen that is eerily similar to Google-Dropcam’s capabilities today. It appears Google’s latest acquisition spree to assemble a ubiquitous physical surveillance network enables Google to become the 21st century’s Big Brother Inc.
Forewarned is forearmed.

[Originally published at www.precursorblog.com]
Much attention has been given to the increase in transit use in America. In context, however, the gains have been small and highly concentrated (see: No Fundamental Shift to Transit, Not Even a Shift). Much of the gain has been in the urban cores, which house only 14 percent of metropolitan area population. Virtually all of the urban core gain (99 percent) has been in the six metropolitan areas with transit legacy cities (New York, Chicago, Philadelphia, San Francisco, Boston, and Washington).
In recent articles, I have detailed a finer grained, more representative picture of urban cores, suburbs and exurbs than is possible with conventional jurisdictional (core city versus suburban) analysis. The articles published so far are indicated in the “City Sector Articles Note,” below.
Transit Commuting in the Urban Core
As is so often the case with transit statistics, recent urban core trends are largely a New York story. New York accounted for nearly 80 percent of the increase in urban core transit commuting. New York and the other five metropolitan areas with “transit legacy cities” represented more than 99 percent of the increase in urban core transit commuting (Figure 1). This is not surprising, because the urban cores of these metropolitan areas developed during the heyday of transit dominance, before broad automobile availability. Indeed, urban core transit commuting became even more concentrated over the past decade: the legacy city metropolitan areas’ 99 percent of new transit commuting (600,000 one-way trips) was well above their 88 percent share of urban core transit commuting in 2000.
New York’s transit commute share was 49.7 percent in 2010, well above the 27.6 percent posted by the other five metropolitan areas with transit legacy cities. The urban cores of the remaining 45 major metropolitan areas (those over 1,000,000 population) had a much lower combined transit work trip market share, at 12.8 percent.
The suburban and exurban areas, with 86 percent of the major metropolitan area population, had much lower transit commute shares. The Earlier Suburban areas (generally median house construction dates of 1946 to 1979, with significant automobile orientation) had a transit market share of 5.7 percent, the Later Suburban areas 2.3 percent and the Exurban areas 1.4 percent (Figure 2).
The 2000s were indeed a relatively good decade for transit, after nearly 50 years in which its ridership (passenger miles) dropped by nearly three-quarters to its 1992 nadir. Since that time, transit has recovered 20 percent of its loss. Transit commuting has always been strongest in urban cores, because of the intense concentration of destinations in the larger downtown areas (central business districts) that can be effectively served by transit, unlike the dispersed patterns that exist in the much larger parts of metropolitan areas that are suburban or exurban. Transit’s share of work trips by urban core residents rose a full 10 percent (three percentage points), from 29.7 percent to 32.7 percent (Figure 3).
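The “full 10 percent” gain is a relative change over the 2000 base, not a percentage-point change. A short sketch (the helper function is illustrative; the shares are those cited above) makes the distinction explicit:

```python
def relative_change(old_share, new_share):
    """Relative change between two mode shares, as a percent of the old share."""
    return (new_share - old_share) / old_share * 100

# Urban core transit share of work trips (percent), 2000 vs. 2010
old, new = 29.7, 32.7

points = new - old                     # change in percentage points
relative = relative_change(old, new)   # change relative to the 2000 base

print(f"{points:.1f} percentage points")   # 3.0 percentage points
print(f"{relative:.1f} percent increase")  # 10.1 percent increase
```

The same arithmetic applies to the walking and cycling figures in the next section, where a 1.1 percentage-point gain on a 9.2 percent base is reported as a 12 percent increase.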
There were also transit commuting gains in the suburbs and exurbs. However, similar gains over the next quarter century would still leave transit’s share below 5 percent in the suburbs and exurbs, because of its small base of ridership in these areas.
Walking and Cycling
The share of commuters walking and cycling (referred to as “active transportation” in the Queen’s University research on Canada’s metropolitan areas) rose 12 percent in the urban core (from 9.2 percent to 10.3 percent), even more than transit. This is considerably higher than in suburban and exurban areas, where walking and cycling remained at a 1.9 percent market share from 2000 to 2010.
Working at Home
Working at home (including telecommuting) continues to grow faster than any other work access mode, though, like transit, from a small base. It experienced strong increases in each of the four metropolitan sectors, rising a full percentage point or more in each. At the beginning of the decade, working at home accounted for fewer work commutes than walking and cycling; by 2010 it was nearly 30 percent larger.
Working at home’s largest gain was in the Earlier Suburban areas, with a nearly 500,000-person increase. Unlike transit, working at home does not require concentrated destinations, effectively accessing employment throughout the metropolitan area, the nation, and the world. As a result, working at home’s growth is fairly constant across the urban core, suburbs, and exurbs (Figure 4). Working at home has a number of advantages: it (1) eliminates the work trip, freeing additional leisure or work time for the employee, (2) eliminates greenhouse gas emissions from the work trip, (3) expands the geographical reach and the efficiency of the labor market (important because larger labor markets tend to have greater economic growth and job creation), and (4) does all this without requiring government expenditure.
Despite empty promises about transit’s potential, driving remains the only mode of transport capable of comprehensively serving the modern metropolitan area. Driving alone has continued its domination, rising from 73.4 percent to 73.5 percent of major metropolitan area commuting and accounting for three-quarters of new work trips. In the past decade, driving alone added 6.1 million commuters, nearly equal to the total of 6.3 million major metropolitan area transit commuters and more than the 3.5 million working at home. To be sure, driving alone added commuters in the urban core but lost share to transit, dropping from 45.2 percent to 43.4 percent. In suburban and exurban areas, driving alone continued to increase, from 78.2 percent to 78.5 percent of all commuting.
Density of Cars
The urban cores have by far the highest car densities, despite their strong transit market shares. With 4,200 household vehicles available per square mile (1,600 per square kilometer), the concentration of cars in urban cores was nearly three times that of the Earlier Suburban areas (1,550 per square mile, or 600 per square kilometer) and more than four times that of the Later Suburban areas (950 per square mile, or 370 per square kilometer). Exurban areas, with their largely rural densities, had a car density of 100 per square mile (40 per square kilometer).
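The per-square-mile and per-square-kilometer figures above are related by the area conversion 1 square mile ≈ 2.59 square kilometers. A quick check of the cited densities (the conversion constant and rounding are mine):

```python
SQ_KM_PER_SQ_MILE = 2.58999  # 1 square mile is about 2.59 square kilometers

def per_sq_km(per_sq_mile):
    """Convert a density per square mile to a density per square kilometer."""
    return per_sq_mile / SQ_KM_PER_SQ_MILE

# Household vehicles per square mile, by city sector (figures cited above)
for sector, density in [("Urban core", 4200),
                        ("Earlier Suburban", 1550),
                        ("Exurban", 100)]:
    print(f"{sector}: about {per_sq_km(density):,.0f} per square kilometer")
```

The results (about 1,622, 598, and 39 per square kilometer) round to the 1,600, 600, and 40 reported in the text.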
Work Trip Travel Times
Despite largely anecdotal stories about the super-long commutes of those living in suburbs and exurbs, the longest work trip travel times were in the urban cores, at 31.8 minutes one-way. The shortest travel times were in the Earlier Suburbs (26.3 minutes), with slightly longer times in the Later Suburbs (27.7 minutes). Exurban travel times were 29.2 minutes. Work trip travel times declined slightly between 2000 and 2010, except in exurban areas, where they stayed the same. The shorter travel times are to be expected with the continuing evolution from monocentric to polycentric and even “non-centric” employment patterns and a stagnant job market (Figure 5).
Contrasting Transportation in the City Sectors
The examination of metropolitan transportation data by city sector highlights the huge differences that exist between urban cores and the much more extensive suburbs and exurbs. Overall, the transit market share in the urban core approaches nine times the share in the suburbs and exurbs. The walking and cycling commute share in the urban core is more than five times that of the suburbs and exurbs. Moreover, the trends of the past 10 years indicate virtually no retrenchment in automobile orientation, as major metropolitan areas rose from 84 percent suburban and exurban in 2000 to 86 percent in 2010. This is despite unprecedented increases in gasoline prices and the disruption of the housing market during the worst economic downturn since the Great Depression.
[Originally published at New Geography]