Somewhat Reasonable

The Policy and Commentary Blog of The Heartland Institute

Missouri Parents Fight for School Transfer Law

October 26, 2014, 9:56 AM

The ongoing struggle between parents and the Missouri government over the state’s school transfer law is another example of politics and bureaucracy winning out over parents, children, and their futures.

A judge ruled in late August in favor of parents and allowed students to return to the accredited districts to which they had legally transferred under a state law the previous year. This is a victory, but it has been largely limited to the families directly involved in the fight. The history of this conflict shows there is every reason to believe politicians and officials will continue to manipulate the school transfer law in order to stop it from functioning as it should.

The law says students attending school in an unaccredited district are allowed to transfer out to a nearby district that has accreditation.

School and state officials have been doing everything in their power to stop this from happening. The tactics have included requesting special, temporary accreditation of some sort and trying to claim the state has taken over and that should be considered accreditation. The education bureaucrats are trying to make students who successfully left the unaccredited districts return to those districts this school year.

Two thousand students transferred out of the unaccredited Normandy and Riverview Gardens districts last year. Officials have gone out of their way to fight as few as 10 families representing 17 students at a time to keep the transfer law from standing.

Missouri uses students’ performance, reported in the Annual Performance Report (APR), to grade the effectiveness of schools and determine accreditation status. The system takes into account academic achievement, achievement in subgroups such as students with limited proficiency in English, college and career readiness or high school readiness, attendance rate, and graduation rate. In 2014, Normandy scored 7.1 percent on the APR. Riverview Gardens scored 45.4 percent. The graduation rate in Normandy is just 45.5 percent, and in Riverview Gardens it is 64 percent.

Alleged attempts to improve or fix the transfer law have turned out to be nothing more than efforts to stop transfers and prevent the law from working. Some Democrats support overhauling the transfer law in hopes of keeping students, and their state education funds, trapped in the unaccredited districts. Some Republicans have supported the overhaul because affluent residents do not want low-income students transferring into their schools.

Both sides have tried to gin up sympathy for the education establishment by noting the sending school districts lose money when students transfer. But the system should be designed to benefit the students, not the districts.

The Missouri Supreme Court has already upheld the transfer law. In August another judge decided in favor of the parents, who are trying only to provide the best education their children are allowed under the letter of the law.

But in districts such as Francis Howell, only families named in the original lawsuit are allowed to send their children back to the school districts to which they transferred. The districts have refused to accept the court’s decision as a precedent and will not allow all the transfer students to return to the accredited districts. A class-action lawsuit is in the works to force them to do so. Until now, parents have been individually filing petitions with the courts.

Among those fighting to be allowed to transfer are the families of 300 students who want to return to the accredited Ferguson-Florissant School District. Despite the recent events and unrest in Ferguson, these families are doing everything in their power to guarantee their children go to the district’s schools rather than be forced back to Normandy. If that doesn’t make state and school officials examine the quality of education being provided by the Normandy district, what will?

 

Categories: On the Blog

Fear the Day Government’s Great Fiction Lies Exposed

October 26, 2014, 9:39 AM

“Government is the great fiction through which everyone endeavors to live at the expense of everyone else,” wrote the celebrated French legislator, economist, and political theorist Frederic Bastiat 165 years ago. With recent reports out of the Census Bureau indicating nearly half of all Americans are receiving some form of direct government subsidy – Social Security, Medicare, Medicaid, food stamps, unemployment benefits, housing assistance, veterans’ benefits, etc. – can there be any doubt he was right?

Among the Census Bureau’s findings: More than 100 million Americans (more than one-third of the population) were receiving “means-tested” welfare assistance at the end of 2012, including 51 million on food stamps and 83 million on Medicaid. Many households received both. If we include Social Security, Medicare benefits, and veterans’ benefits, which do not depend on means testing for eligibility, nearly half of all households are receiving money from the other half.

That’s really what all this comes down to: some Americans taking from others. There is no doubt some Social Security recipients are already beginning to sputter with fury: “I paid into that!”

Yes, you did, and your payments – even with supposed investments – don’t come close to covering what you’re taking out of it. One of the great fictions of Social Security (and Medicare, which is part of Social Security) is that the government takes money from us while we work so that it will be there for us when we retire. In fact, no money is set aside. It’s all spent to pay for benefits or siphoned away to finance other government projects in years when tax revenues fall short of benefit payments.

If our tax dollars were really set aside for our retirement years, the government should have no problem letting Americans opt out of Social Security, right? The government wouldn’t need other people’s money to fund our benefits. But suggest an opt-out to someone in Congress and see what response you get.

As for the means-tested welfare programs, astonishingly, the number of welfare recipients has climbed since 2009, when the recession supposedly ended. The economy is growing and unemployment is falling, at least according to the Obama administration. Yet the government’s own records show government dependency is climbing.

In 2013, according to the Treasury Department’s Bureau of the Fiscal Service, the federal government paid more than $2 trillion in social benefits, nearly 70 percent of which went toward Social Security and Medicare. This is out of federal spending totaling $3.4 trillion. Far more money is spent on social programs than on everything else the federal government funds, including the military, education, agriculture, and transportation systems.

During the George W. Bush presidency, from 2001 to 2009, the federal debt climbed from $5.7 trillion to $10.4 trillion. Since 2009, trillions more have been added, and it’s now nearly $18 trillion. If the government’s promises are being properly funded, the debt would not be soaring.

President Lyndon Johnson launched the “War on Poverty” 50 years ago. Have we won the war? Are we about to win the war? Is there any end to the war in sight?

“Government is the great fiction through which everyone endeavors to live at the expense of everyone else.” The War on Poverty promoted the fiction, with new chapters added regularly since then, including those added by supposedly stingy Republicans. The Medicare drug program during Republican George W. Bush’s reign was the single largest entitlement expansion since the 1960s, and it was done without money being designated to fund it.

Fear the day when reality shatters the fiction. The longer the fiction lasts, the more shattering the reality will be.

Categories: On the Blog

Will the FCC Break the Internet? – My Daily Caller Op-ed

October 25, 2014, 9:42 AM

Actions speak louder than words.

The world is watching to see where the FCC’s actions will lead international telecommunications regulators going forward.

Will FCC leadership reinforce the successful Internet policy status quo?

Or will the FCC reverse course and risk breaking the global Internet by leading international telecommunications regulators to price-regulate their sovereign parts of the global Internet to restore the national postal and telecom utilities of the 20th century?

Currently the FCC is considering reversing the legal status of American Internet services from lightly-regulated information services to utility-regulated “telecommunications” services in response to a 2014 appeals court decision that limited a portion of the FCC’s net neutrality regulatory authority.

Neither the FCC nor the Internet operates in a vacuum. Most everything is now interconnected.

The big point here is if the FCC unilaterally changes the legal status of American Internet service to utility-regulated “telecommunications,” it could lead to big negative global repercussions that could seriously undermine U.S. trade and foreign policy interests going forward.

Strong Clinton Administration policy leadership was critical to enabling the current global free flow of information that we now know as the Internet.

In the 1990s, America successfully persuaded the world to not subject the Internet to “telecommunications” utility regulation via treaties and agreements overseen by the United Nations’ International Telecommunications Union (ITU).

That’s because ITU agreement ITU-T D.50 recognizes the sovereign right of each state to regulate “telecommunications” as that state determines.

Thus, if the FCC puts domestic politics first in “telecommunications” regulation, every other country can too.

So how could the wrong kind of FCC leadership break the Internet?

Today the Internet is unique because it is global with no borders. The Internet’s free flow of virtual information is not subject to the normal sovereign border inspection or tariffs that physical international travel, delivery or trade must endure.

In stark contrast, the raison d’être of the ITU’s “telecommunications” utility-regulation regime is to create and enforce sovereign borders and tariffs.

Thus over time, redefining the Internet to be common “telecommunications” easily could devolve the Internet back to the 1990s telephone and postal national utility model, yielding a de facto broken and Balkanized splinter-net.

Trade Policy

What’s the risk to U.S. trade?

The current Internet status quo is as near to perfect free trade for American interests as America could aspire to.

There is almost unfettered free flow of information from the U.S. to the world, subject to no national customs border inspection, transit accountability, or import tariffs.

In addition, America’s Internet and big data companies have benefited richly from the lax U.S.-EU data protection safe harbor that allows U.S. companies to annually self-certify, with no meaningful system of accountability, that they comply with EU data protection law.

But apparently this as-good-as-it-gets Internet free-trade dynamic is not good enough for Silicon Valley companies.

They now want the FCC to officially subsidize their massive Internet infrastructure-use via a clever re-branding of net neutrality to mean “no-fast-lane” and “no-paid-prioritization” allowed.

Specifically, Silicon Valley companies are heavily lobbying the FCC for a permanent, FCC-set zero price for their downstream traffic to consumers and businesses.

How could these FCC subsidies cause trade policy problems?

The ITU’s “telecommunications” settlements regime, “sender-party-pays,” is just like the sender of a letter or package paying for a stamp or postage for delivery domestically or internationally.

Today Silicon Valley companies “export” vastly more volume of Internet traffic to the rest of the world (in videos streamed, content displayed, and services provided) than other countries digitally export to the U.S. via the Internet.

While the FCC may imagine that it is in its political interests to subsidize Silicon Valley as a “national champion” as part of an FCC industrial policy, the political interests of foreign regulators are not America’s.

Any potential FCC revival of the 1990s ITU “telecommunications” international settlement regime puts Silicon Valley companies like Google-YouTube, Netflix, Facebook, Amazon, etc., at great risk of having to pay many billions of dollars net to foreign governments to reach their foreign consumers and businesses.

And foreign governments could charge that U.S. governmental infrastructure-use subsidies for Silicon Valley constitute an unfair protectionist trade advantage.

The digital section of US-EU trade negotiations over the Transatlantic Trade and Investment Partnership (TTIP) already faces enough trade problems given the EU’s opening positions to end the US-EU data protection safe harbor and its high priority to create a Single European Digital Market.

U.S. trade negotiators certainly don’t need the FCC effectively commandeering U.S. digital trade policy by unilaterally redefining un-tariffed Internet trade to be tariffed “telecommunications” trade.

Foreign Policy

What’s the risk here to U.S. foreign policy?

First, American foreign policy has promoted freedom of speech and no censorship as important to democracy, trade and civil society.

However, in the post-Snowden context, the world fears widespread NSA deep-packet-inspection of Internet traffic.

Thus it would not be helpful to U.S. interests for the FCC to redefine the Internet to be “telecommunications” trade because that could invite autocratic governments around the world to deploy their own deep-packet-inspection at their borders for the purposes of censorship, under the political cover of an FCC-legitimized “sender-party-pays” Internet “telecommunications” trade regime.

Second, another foreign policy problem with the FCC asserting utility regulation authority over the American Internet would be the de facto FCC abandonment of the multistakeholder process of Internet governance.

Just this month, U.S. Secretary of Commerce Penny Pritzker at an ICANN Internet governance forum promised to “not allow the global Internet to be co-opted by any person, entity, or nation seeking to substitute their parochial worldview for the collective wisdom of this community.”

Certainly it is not helpful to this particular U.S. foreign policy for the American FCC to be the most visible parochial entity “seeking to substitute their parochial worldview for the collective wisdom of this community.”

Third, and most important, is the foreign policy risk of unwittingly playing into the hands of China’s and Russia’s geopolitical machinations to “de-Americanize” the Internet.

Anyone who pays attention to world affairs knows that China and Russia are aggressively extending their geopolitical spheres of influence at America’s expense.

They know China’s cyber-forces have massively infiltrated nearly all major American tech companies and stolen an incalculable amount of American trade secrets and intellectual property.

They also know that the U.S. government suspects that Russian-backed cyber-forces may be responsible for many of the biggest cyber-attacks on U.S. retail companies that have made away with tens of millions of Americans’ credit card numbers.

In addition to these Chinese and Russian covert efforts, China and Russia are overtly trying to have the International Telecommunications Union replace the U.S.-backed ICANN and the international multistakeholder community in governing the Internet.

The next Secretary General of the UN International Telecommunications Union is expected to be China’s Houlin Zhao, who is currently Deputy Secretary General of the ITU.

To bring this geopolitical point home, there are two things that China and Russia hope the U.S. will do to unwittingly advance their plans to “de-Americanize” the Internet and weaken America’s economic and technological leadership.

First, they want the U.S. to surrender control of the Internet’s “root zone file” – the Internet’s global address book that enables anyone to connect to anything on the Internet – to the multistakeholder community, so the ITU can then eventually take it over.

China, Russia and their many autocratic allies around the world know that if the Internet’s address book does not remain on U.S. soil enjoying American sovereign protection, the current global “root-zone-file” could be broken up into sovereign root-zone-files under other sovereign countries’ control, thus breaking the global Internet.

Second, China and Russia could only dream that America’s FCC would redefine the Internet to be “telecommunications” because that would give them perfect political cover to effectively take control of Internet governance via the ITU.

Thus the open question for the FCC and the U.S. government: is the potential risk of partial gaps in the FCC’s domestic net neutrality authority more important to address than the very real risk of unwittingly abetting foreign interests bent on breaking up the global Internet?

The FCC is not “independent” of the United States government or free to set its own trade or foreign policy.

On international matters, the FCC knows it must tread softly and carry no stick.

The FCC also appreciates that the Internet has major geopolitical, trade, and economic import, and that the Internet has become a new tacit cyber-battleground with China and Russia, which seek to “de-Americanize” the Internet.

Clearly, the FCC is at a crossroads.

Will the Wheeler-FCC advance the successful Internet status quo?

Or will the Wheeler-FCC reverse course for parochial reasons and lead the international telecom regulator community down a utility-regulation “telecommunications” path that risks the sovereign break-up of the global Internet?

History will be the judge of the FCC’s actions, not its words.

***

FCC Open Internet Order Series

Part 1: The Many Vulnerabilities of an Open Internet [9-24-09]

Part 2: Why FCC proposed net neutrality regs unconstitutional, NPR Online Op-ed [9-24-09]

Part 3: Takeaways from FCC’s Proposed Open Internet Regs [10-22-09]

Part 4: How FCC Regulation Would Change the Internet [10-30-09]

Part 5: Is FCC Declaring ‘Open Season’ on Internet Freedom? [11-17-09]

Part 6: Critical Gaps in FCC’s Proposed Open Internet Regulations [11-30-09]

Part 7: Takeaways from the FCC’s Open Internet Further Inquiry [9-2-10]

Part 8: An FCC “Data-Driven” Double Standard? [10-27-10]

Part 9: Election Takeaways for the FCC [11-3-10]

Part 10: Irony of Little Openness in FCC Open Internet Reg-making [11-19-10]

Part 11: FCC Regulating Internet to Prevent Companies from Regulating Internet [11-22-10]

Part 12: Where is the FCC’s Legitimacy? [11-22-10]

Part 13: Will FCC Preserve or Change the Internet? [12-17-10]

Part 14: FCC Internet Price Regulation & Micro-management? [12-20-10]

Part 15: FCC Open Internet Decision Take-aways [12-21-10]

Part 16: FCC Defines Broadband Service as “BIAS”-ed [12-22-10]

Part 17: Why FCC’s Net Regs Need Administration/Congressional Regulatory Review [1-3-11]

Part 18: Welcome to the FCC-Centric Internet [1-25-11]

Part 19: FCC’s Net Regs in Conflict with President’s Pledges [1-26-11]

Part 20: Will FCC Respect President’s Call for “Least Burdensome” Regulation? [2-3-11]

Part 21: FCC’s In Search of Relevance in 706 Report [5-23-11]

Part 22: The FCC’s public wireless network blocks lawful Internet traffic [6-13-11]

Part 23: Why FCC Net Neutrality Regs Are So Vulnerable [9-8-11]

Part 24: Why Verizon Wins Appeal of FCC’s Net Regs [9-30-11]

Part 25: Supreme Court likely to leash FCC to the law [10-10-12]

Part 26: What Court Data Roaming Decision Means for FCC Open Internet Order [12-4-12]

Part 27: Oops! Crawford’s Model Broadband Nation, Korea, Opposes Net Neutrality [2-26-13]

Part 28: Little Impact on FCC Open Internet Order from SCOTUS Chevron Decision [5-21-13]

Part 29: More Legal Trouble for FCC’s Open Internet Order & Net Neutrality [6-2-13]

Part 30: U.S. Competition Beats EU Regulation in Broadband Race [6-21-13]

Part 31: Defending Google Fiber’s Reasonable Network Management [7-30-13]

Part 32: Capricious Net Neutrality Charges [8-7-13]

Part 33: Why FCC won’t pass Appeals Court’s oral exam [9-2-13]

Part 34: 5 BIG Implications from Court Signals on Net Neutrality – A Special Report [9-13-13]

Part 35: Dial-up Rules for the Broadband Age? My Daily Caller Op-ed Rebutting Marvin Ammori’s [11-6-13]

Part 36: Nattering Net Neutrality Nonsense Over AT&T’s Sponsored Data Offering [1-6-14]

Part 37: Is Net Neutrality Trying to Mutate into an Economic Entitlement? [1-12-14]

Part 38: Why Professor Crawford Has Title II Reclassification All Wrong [1-16-14]

Part 39: Title II Reclassification Would Violate President’s Executive Order [1-22-14]

Part 40: The Narrowing Net Neutrality Dispute [2-24-14]

Part 41: FCC’s Open Internet Order Do-over – Key Going Forward Takeaways [3-5-14]

Part 42: Net Neutrality is about Consumer Benefit not Corporate Welfare for Netflix [3-21-14]

Part 43: The Multi-speed Internet is Getting More Faster Speeds [4-28-14]

Part 44: Reality Check on the Electoral Politics of Net Neutrality [5-2-14]

Part 45: The “Aristechracy” Demands Consumers Subsidize Their Net Neutrality Free Lunch [5-8-14]

Part 46: Read AT&T’s Filing that Totally Debunks Title II Reclassification [5-9-14]

Part 47: Statement on FCC Open Internet NPRM [5-15-14]

Part 48: Net Neutrality Rhetoric: “Believe it or not!” [5-16-14]

Part 49: Top Ten Reasons Broadband Internet is not a Public Utility [5-20-14]

Part 50: Top Ten Reasons to Oppose Broadband Utility Regulation [5-28-14]

Part 51: Google’s Title II Broadband Utility Regulation Risks [6-3-14]

Part 52:  Exposing Netflix’ Biggest Net Neutrality Deceptions [6-5-14]

Part 53: Silicon Valley Naïve on Broadband Regulation (3 min video) [6-15-14]

Part 54: FCC’s Netflix Internet Peering Inquiry – Top Ten Questions [6-17-14]

Part 55: Interconnection is Different for Internet than Railroads or Electricity [6-26-14]

Part 56: Top Ten Failures of FCC Title II Utility Regulation [7-7-14]

Part 57: NetCompetition Statement & Comments on FCC Open Internet Order Remand [7-11-14]

Part 58: MD Rules Uber is a Common Carrier – Will FCC Agree? [8-6-14]

Part 59: Internet Peering Doesn’t Need Fixing – NetComp CommActUpdate Submission [8-11-14]

Part 60: Why is Silicon Valley Rebranding/Redefining Net Neutrality?  [9-2-14]

Part 61: the FCC’s Redefinition of Broadband Competition [9-4-14]

Part 62: NetCompetition Comments to FCC Opposing Title II Utility Regulation of Broadband [9-9-14]

Part 63: De-competition De-competition De-competition [9-14-14]

Part 64: The Forgotten Consumer in the Fast Lane Net Neutrality Debate [9-18-14]

Part 65: FTC Implicitly Urges FCC to Not Reclassify Broadband as a Utility [9-23-14]

Part 66: Evaluating the Title II Rainbow of Proposals for the FCC to Go Nuclear [9-29-14]

Part 67: Why Waxman’s FCC Internet Utility Regulation Plan Would Be Unlawful [10-5-14]

Part 68: Silicon Valley’s Biggest Internet Mistake [10-15-14]

[Originally published at The Daily Caller]

Categories: On the Blog

Predictions of Doom Caused by Tax Relief Measures Ring Hollow

October 24, 2014, 10:23 AM

Many years ago, the Chinese philosopher Lao Tzu opined, “Those who have knowledge, don’t predict. Those who predict, don’t have knowledge.”

He was most likely observing human behavior in the aggregate, but the local government officials who fought tax relief measures that reduced the state’s Local Government Fund (LGF) subsidies would be wise to heed Lao Tzu’s words in the future.

In 2011, numerous local-government special-interest groups and elected officials fought against Gov. John Kasich’s proposed reduction to the Local Government Fund, a pool of taxpayers’ money collected by the state government and redistributed to local governments’ general revenue funds.

The LGF, created by Gov. George White, was born with great fanfare and pomp. Writing of his plan to enact a 3 percent sales tax in Ohio, White proclaimed forcing taxpayers to subsidize communities in which they did not reside would save the state—and the hundreds of municipalities within it—“from bankruptcy and chaos.” Not one to set a low bar for expectations for “spreading the wealth around,” White announced his plan would allow him a peaceful slumber: “I know now that the poor will be fed and clothed and our children given the opportunity of a free education which is the birthright of every American school child and that the safety and health of our people will be guaranteed.”

In 2011, when Kasich and legislators began cutting the LGF, numerous municipalities began ringing the alarm, forecasting the return of White’s predicted “bankruptcy and chaos” should their access to other people’s money be restricted at all. Circleville Mayor Chuck Taylor told the Columbus Dispatch his city’s budget already had been “cut to the bone,” complaining, “It’s going to be devastating to us, to be honest.”

Ohio Sen. Capri Cafaro (D-Hubbard) bemoaned the 5.3 percent cut in subsidies, warning, “the state is creating a fiscal crisis for local governments that will likely lead to tax increases, reduced services and additional layoffs.”

After local governments have spent three years without their fire-hose access to money extracted from other communities’ workers, the predicted apocalypse has failed to appear.

Recent studies of Ohio municipalities’ financial status have confirmed the world has not ended. A groundbreaking, comprehensive database compiled by ace Gannett Media reporters Chrissie Thompson, Jessie Balmert, and Jona Ison showed “counties and cities are largely weathering cuts in state money.” In fact, they note municipalities are actually exceeding state minimums for “rainy-day funds.” Clearly, the sun is shining on local governments’ budget sheets.

A parallel study, conducted by the Buckeye Institute for Public Policy Solutions, found local government tax revenues have increased by $310 million since the state slowed its subsidization of local governments. Confirming Gannett Media’s deep-dive, the Buckeye Institute study found 90 percent of all county governments are running budget surpluses, saving unassigned general revenue funds for possible leaner times ahead.

A general fear of the future—the “undiscovered country,” as Shakespeare called it—is understandable, but the hysteria over LGF subsidy cuts and the resulting relief for the state’s taxpayers has clearly proven unwarranted. Should future sessions of the General Assembly decide on further relief of the taxpayer burden, the predictable prognostications of peril from our elected officials and their surrounding nebula of pro-taxation policy advisors will have less credibility than ever.

Categories: On the Blog

FCC’s Secret Meetings Raise Significant Process Concerns

October 23, 2014, 2:49 PM

A little-noticed article in the Wall Street Journal over Labor Day Weekend concerning the proposed Comcast-Time Warner Cable merger caught my eye, not only because the article obviously concerns an important matter of communications policy, but also because it raises questions regarding a matter of proper administrative agency process.

In the online version, the article is titled, “Comcast Targeted by Entertainment Giants.” This presages the article’s focus on the substantive communications policy matter. Along with my colleague, Seth Cooper, I filed public comments in the FCC’s proceeding that set forth our views concerning the proper way for the FCC to consider the merger proposal. You can read our comments, and I don’t intend to discuss the substance of the merger proposal here.

Instead, what I want to focus on is the matter of proper agency process. The article’s subtitle says a lot about my process concern: “FCC Encourages Media Companies to Provide Confidential Complaints on Time Warner Cable Purchase.” According to the WSJ, the FCC “is encouraging those big companies to offer feedback confidentially, people familiar with the matter say.”

In my decades-long experience with FCC matters, it is fairly unusual, if not unprecedented, for the FCC to take the initiative in encouraging confidential complaints in the context of an on-the-record merger review proceeding. The fact that it is doing so here caught my “administrative law” eye. (As a former Chair of the American Bar Association’s Section of Administrative Law and Regulatory Practice, a current member of the Administrative Conference of the United States, and a current Fellow at the National Academy of Public Administration, I do have such an “administrative law” eye. But, of course, I am speaking here only for myself.)

The theory spun out in the WSJ article is that the so-called “Entertainment Giants” may be too intimidated to put whatever concerns they may have about the merger on the public record. Unless these companies are able to meet with Commission officials on a confidential basis, so the story goes, they may not present their concerns at all because they fear that they may be subject to retribution by Comcast.

I can follow the theory, but nevertheless I do question the use of secret meetings in the context of the FCC’s transaction review proceedings. The practice of conducting off-the-record meetings raises questions of fundamental fairness that go to the integrity of the agency’s decision-making process. This is because no one – including Comcast and Time Warner Cable, the parties most directly affected – is in a position to rebut claims made by the parties during the confidential meetings.

In administrative law terms, the FCC’s merger review proceeding – a proceeding in which the FCC is considering applications to approve the transfer of specific spectrum licenses and other specific authorizations – is an adjudicatory proceeding affecting the legal rights of the parties to the applications. In most cases, adjudicatory proceedings are “restricted” proceedings. This means that ex parte, or off-the-record, contacts between interested parties and Commission decision-making officials are not allowed. In restricted proceedings, all communications between interested parties and FCC officials must be on-the-record.

But in certain adjudicatory proceedings that may have significant public policy implications beyond the rights of the immediately affected parties, the FCC may invoke what it calls a “permit-but-disclose” process. The agency typically designates major merger reviews “permit-but-disclose” proceedings under Section 1.1206(b) of its rules, and it did so in a public notice in this case. As the name implies, in a “permit-but-disclose” proceeding, an interested party may make an ex parte presentation to Commission decision-making personnel, as long as the person promptly places in the public record the substance of the presentation.

The “permit-but-disclose” process allows interested parties to present their views to Commission officials considering the transaction, while ensuring, at the same time, that the substance of those views is placed in the record so that other interested parties, including the applicants seeking approval of the transaction, have notice of the presentation and an opportunity to respond.

If the Wall Street Journal reporting is accurate, and in fact the FCC is deviating from the “permit-but-disclose” practice in the case of the Comcast-TWC merger proceeding, then I have concerns. Providing fair notice and an opportunity to respond are fundamental elements of due process, even in a constitutional sense. A “permit-but-non-disclose” process, which by definition lacks fair notice and an opportunity to respond, is problematic from the perspective of proper conduct of an adjudicatory proceeding.

Now, I understand that perhaps in this instance the FCC may be invoking a further exception to the restricted proceeding requirements that otherwise apply to adjudicatory matters. Section 1.1204(a)(9) of the Commission’s rules provides that the Commission may allow a secret presentation to be made “to protect an individual from the possibility of reprisal, or [if] there is a reasonable expectation that disclosure would endanger the life or physical safety of an individual.” I understand that this provision may have a role to play as a “safety valve” in very rare situations, including when life or limb may be threatened.

Despite some of the exaggerated and unhelpful heated rhetoric bandied about regarding so-called “media giants” – whether they be cable operators like Comcast and Time Warner Cable on the one hand or content programmers on the other – no one seriously entertains the notion that anyone’s life or physical safety is threatened by on-the-record participation in the merger proceeding. So, perhaps agency officials are reading “reprisal” in the sense of an interested party’s possible fears that it might not be treated as well as it otherwise would like in a business negotiation if it expresses concerns about the proposed merger.

Well, of course. It is understandable that one business “giant” (or even little giant) might prefer not to tick off another by expressing concerns in a public proceeding. But this worry, such as it is, must be balanced by concerns about maintaining the integrity of the agency’s administrative process. I don’t know what is said in the secret meetings – well, that’s obvious – but, without knowing more, my sense is that here the balance tips in favor of putting the substance of the claims on the public record. After all, remedies are available if anticompetitive retaliatory conduct is proven, and they will remain available whether or not the Comcast-TWC merger is approved.

Finally, I understand that the Department of Justice, in investigating proposed mergers, conducts secret meetings just like the FCC apparently is conducting in this instance. I don’t know for sure, but I suspect that DOJ is conducting confidential meetings with some of the very same parties with whom FCC officials are meeting. To some extent this just serves to highlight the duplication of effort, in many instances unnecessary duplication of effort, when both DOJ and the FCC investigate the same merger.

But in a more fundamental sense, DOJ’s conduct of confidential meetings just serves to highlight my concern about the FCC’s process. DOJ, an executive branch antitrust enforcement agency, presumably is investigating whatever competitive concerns it may have about the proposed merger, including those brought to its attention by competitors of Comcast and TWC and those who deal with them. But, ultimately, if DOJ concludes the merger presents competitive concerns, it must file a complaint in court seeking to block or condition it. This would begin an on-the-record process in federal court that would be conducted in full public view.

In the case of the FCC, ultimately it will adopt a public order regarding the applications seeking transfer of specific licenses. But the substance of the secret meetings will never be put on the public record before this official action is taken. Comcast and Time Warner Cable most likely won’t even know who met with whom, and they won’t have an opportunity to respond.

There may be more than I know as to why the FCC is proceeding in the unusual fashion it is. But based on what I know, I think this is a problematic way for the Commission, acting in its quasi-judicial capacity, to proceed in an adjudicatory proceeding.

Alexander Bickel, the prominent constitutional law scholar, wrote in his 1975 book, The Morality of Consent, that “the highest form of morality almost always is the morality of process.” I share Professor Bickel’s view regarding the importance of process.

In this case, converting a “permit-but-disclose” proceeding into a “permit-but-non-disclose” one raises significant process concerns.

[Originally published at The Free State Foundation]

Categories: On the Blog

Wisconsin Wind Health Hazard

October 23, 2014, 11:06 AM

Wind farms in particular and wind power in general create a number of problems.

We at The Heartland Institute have written extensively concerning the death toll wind farm operations inflict on birds and bats.

Anyone who travels across this country or lives in the vicinity of wind farms can describe the increasing industrial blight wind farms impose on previously unmarred vistas and formerly wild, undeveloped locales. Wind farms, sprawling across thousands of miles, have an unmatched footprint on the basis of the land required per unit of energy produced.

Wind power is expensive and unreliable.

All of these points are becoming evident, not just to us policy wonks who study the issue, but increasingly to the public at large.

Now, issues in Wisconsin highlight another possible problem caused by wind farms, perhaps the most damning of all: more and more people who reside near wind farms are claiming the turbines’ operations are making them sick. If, in fact, industrial wind turbines do cause human health problems, it could result in significant restrictions on their placement and operations — making them even less popular and less profitable (absent large subsidies) than they already are. Lawsuits could also be in the offing.

Wisconsin’s Brown County Health Board has gone on record this week declaring the Shirley Wind Project, owned by Duke Energy, a “human health hazard.” The Board’s action puts Duke on the defensive to prove the farm is not a health risk.

It is the duty of health departments to collect information regarding human health within their respective counties. If other county or state health departments take the Brown County Health Board’s declaration seriously, they may start investigating the issues in their areas. This could bring on an onslaught of health complaints, which would become a problem for wind farm operators around the nation.

For more on the Brown County/Shirley Wind Farm situation see:

http://www.rightwisconsin.com/perspectives/A-Game-Changer-for-the-Wind-Energy-Debate-out-of-Brown-County-279728222.html

http://www.fdlreporter.com/story/opinion/2014/10/19/wind-turbines-rotten-cuts-ways/17454793/?from=global&sessionKey=&autologin=

http://www.jrn.com/nbc26/news/Health-Board-Says-the-Shirley-Wind-Project-is-a-Health-Hazard-279626362.html

http://edgarcountywatchdogs.com/2014/10/duke-energys-shirley-wind-farm-declared-health-hazard/

One wonders where the mainstream media coverage is on this question of human health risk. Major media players cover every oil spill, every instance of pollution from farming operations, and the slightest possible health risk from chemicals, yet when the issue is a possible health risk from the environmental left’s sacred cow, wind energy, there is a deafening silence from national media outlets.

Categories: On the Blog

Housing Affordability in China

October 23, 2014, 10:48 AM

Finally, there is credible housing affordability data from China. For years, analysts have produced “back of the envelope” anecdotal calculations that have often been as inconsistent as they have been wrong. The Economist has compiled an index of housing affordability in 40 cities (the China Index of Housing Affordability), which uses an “average multiple” (average house price divided by average household income). This is in contrast to the “median multiple,” which is the median house price divided by the median household income (used in the Demographia International Housing Affordability Survey and other affordability indexes). The Demographia Survey rates affordability in 9 geographies, including Hong Kong (a special administrative region of China). The average multiple for a metropolitan market is generally similar to the median multiple.
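
To make the two ratios concrete, here is a minimal sketch in Python using purely hypothetical prices and incomes (not figures from The Economist or the Demographia Survey). It shows how the average multiple and the median multiple are computed, and why a handful of very expensive houses can pull the average multiple above the median multiple, even though the two are generally similar for a metropolitan market.

    # Minimal sketch with hypothetical numbers, not data from The Economist
    # or the Demographia Survey.
    from statistics import mean, median

    house_prices = [600_000, 700_000, 800_000, 2_500_000]    # hypothetical city sample
    household_incomes = [90_000, 100_000, 110_000, 150_000]  # hypothetical city sample

    average_multiple = mean(house_prices) / mean(household_incomes)     # The Economist's ratio
    median_multiple = median(house_prices) / median(household_incomes)  # Demographia's ratio

    print(round(average_multiple, 1))  # 10.2 -- pulled up by the single very expensive house
    print(round(median_multiple, 1))   # 7.1  -- largely unaffected by the outlier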

The Economist Data and Methodology

The Economist develops its ratio from central government data on house sales and incomes in individual cities. Like the Demographia Survey, The Economist provides estimates for housing affordability from the perspective of the average urban household, as opposed to the “ex-pat” or “luxury” markets that are typically reported by real estate commentators. The Economist also estimates its price to income ratio using an average house size of 100 square meters (approximately 1,075 square feet). This is larger than the average new house size in the United Kingdom, but smaller than those in the United States, Australia, Canada and New Zealand.

With an overall average multiple of 8.8, China’s housing is less affordable (Figure 1) than all of the nine geographies rated in the Demographia Survey, except for Hong Kong (14.9). Even so, China’s housing affordability has improved from a national average multiple of 11.7 in April of 2010.

Affordability by City

It appears that if The Economist had included Hong Kong in its China ranking, it would have been ranked the most unaffordable in the country. Hong Kong houses are much smaller than the Chinese average, at 45 square meters (480 square feet). This would have given Hong Kong, with an unadjusted multiple of 14.9, a house size adjusted multiple of more than 30.
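
The house-size adjustment described above is simple proportional scaling to the 100-square-meter reference size The Economist uses for the Chinese cities. A minimal sketch of that arithmetic, using only the figures quoted in this post:

    # Scale Hong Kong's unadjusted multiple to the 100-square-meter reference
    # house size used for the Chinese cities (figures as quoted in the text).
    reference_size_sqm = 100     # average house size The Economist assumes for China
    hong_kong_size_sqm = 45      # typical Hong Kong house size cited above
    hong_kong_unadjusted = 14.9  # Hong Kong's unadjusted average multiple

    hong_kong_adjusted = hong_kong_unadjusted * (reference_size_sqm / hong_kong_size_sqm)
    print(round(hong_kong_adjusted, 1))  # about 33.1, i.e. "more than 30"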

For years, there have been press reports of astronomic price to income multiples in China. The Economist data indicates that in some cities (Shenzhen, Beijing, Hangzhou and Wenzhou) this has indeed been true. But incomes have risen faster than house prices in recent years, and average multiples above 20 are, for now, a thing of the past.

Shenzhen, the “instant” megacity next to Hong Kong, is ranked as the least affordable with an average multiple of 19.6. The Economist indicates that this may be the result of demand from Hong Kong residents. Shenzhen had reached an average multiple of nearly 25 in 2010. An even higher average multiple was recorded in Beijing, which reached 27 in 2010. Beijing house prices have fallen substantially, however, dropping to 16.6 in 2014, the second most unaffordable in China.

China’s other megacities (over 10 million population) have lower average multiples than Shenzhen and Beijing. Shanghai has an average multiple of 12.8 and Guangzhou has an average multiple of 11.4. Tianjin, approximately 100 miles (140 kilometers) from Beijing and China’s newest megacity, has an average multiple of 11.2.

China’s most affordable city is Hohhot, capital of Inner Mongolia (Nei Mongol), with an average multiple of 4.9. Generally, interior cities had better housing affordability than those along the east coast. For example, Changsha (capital of Hunan) has an average multiple of 5.9, Kunming 6.6, while the two leading cities of China’s Red Basin, Chongqing and Chengdu, were somewhat higher (7.1 and 7.4).

Comparison to Other Demographia Cities

Yet the multiples for many Chinese cities are no worse than highly unaffordable cities in Australia, New Zealand, Canada, the United States, and the United Kingdom.

Outside Hong Kong, the other most expensive cities in the Demographia Survey would rank in the second 10 of Chinese cities. Vancouver, with a median multiple of 10.3, is more expensive than all but 12 of the 40 cities rated in China. San Francisco, with a median multiple of 9.3, would rank 15th. Sydney, with a median multiple of 9.0, would rank in a 16th tie with Dalian. San Jose, at 8.7, would rank in a 19th place tie for unaffordability with Wuhan and Ningbo.

A sampling of cities from China and the Demographia Survey is illustrated in Figure 2.

Toward an Affordable China

One of rapidly urbanizing China’s biggest challenges is to improve housing affordability. This is an imperative, with easing of the hukou internal resident permit system and the one-child policy. United Nations projections indicate that China’s urban areas will add another third to their population in the next 25 years, an increase of more than 250 million. China is better housed today than perhaps at any time in its history. But it needs to be still better housed, as internal migrants become permanent urban residents and as rural citizens move to the cities for better lives.

Wendell Cox is principal of Demographia, an international public policy and demographics firm. He is co-author of the “Demographia International Housing Affordability Survey” and author of “Demographia World Urban Areas” and “War on the Dream: How Anti-Sprawl Policy Threatens the Quality of Life.” He was appointed to three terms on the Los Angeles County Transportation Commission, where he served with the leading city and county leadership as the only non-elected member. He was appointed to the Amtrak Reform Council to fill the unexpired term of Governor Christine Todd Whitman and has served as a visiting professor at the Conservatoire National des Arts et Metiers, a national university in Paris.

Photo: Jinan

[Originally published at New Geography]

Categories: On the Blog

New CBS Streaming Service Reflects Rapidly Changing TV Market

October 23, 2014, 10:14 AM

Hot on the heels of the announcement of a new streaming service from cable channel HBO (reported here last week), broadcast TV giant CBS has begun a standalone streaming service to deliver CBS programming.

Rather than distribute CBS’s live broadcast feed, the service offers shows from the network’s extensive programming library, including full seasons of current daytime and primetime programs. Current primetime shows will not be made available until the day after they air on the broadcast network. The network’s sports programming remains tied to broadcast and cable/satellite delivery as well.

Those concessions will allow the network’s local broadcast affiliates and cable and satellite providers to breathe easier for a while, but probably not for long.

CBS All Access costs subscribers $5.99 per month, and some programming includes commercials. It launched last Thursday, so those interested in giving it a try can do so right away.

Whether there will be a big market for the service remains to be seen, of course, as the amount of programming choices continues its rapid increase on both cable/satellite television and via the internet. CBS, however, has been the top-rated TV network for most years of the past couple of decades, and its library includes past fan favorites such as Star Trek, Twin Peaks, and Cheers. Cord-cutters who have already watched nearly all the Netflix-available programs and movies they’re interested in might consider switching.

In addition, given CBS’s deep pockets, it might not be long before CBS All Access starts premiering programs of its own, as Netflix and Amazon.com have done with much success.

The one certainty is that the market for TV programming is fracturing rapidly, which will ultimately force down the prices to consumers while increasing choice. From a consumer perspective, that is an ideal outcome.

 

[First published at The American Culture]

Categories: On the Blog

Heartland Daily Podcast: Jennifer Lynch – FBI’s New Massive NGI Database

October 23, 2014, 9:36 AM

Electronic Frontier Foundation senior staff attorney and digital surveillance expert Jennifer Lynch joins The Heartland Institute’s Budget and Tax News managing editor, Jesse Hathaway, to discuss the Federal Bureau of Investigation’s (FBI) new massive electronic surveillance and investigation database, the Next Generation Identification system (NGI).

Lynch explains how the NGI may infringe upon American citizens’ right to peaceably assemble in political protests, as well as how other surveillance and database technologies employed by the government threaten our privacy.

Categories: On the Blog

No Ebola Panic Despite Media Hysteria

October 23, 2014, 7:20 AM

One man has died of Ebola in the U.S., and he came here from Liberia. Two of the nurses who tended him are in intensive care and likely to survive. A third was thought to be infected, but wasn’t. That news has been sufficient to keep most Americans calm as the media has done its best to exploit Ebola-related news.

The public absorbed the facts and came to their own conclusion.

An October 8 Pew Research survey found that “Most are confident in Government’s ability to prevent major Ebola outbreak in U.S.” That reflects the way we have all been conditioned to look to the federal government to solve our problems, but the public mood had not changed by October 20 when a Rasmussen Reports analysis of a survey concluded that “Americans are keeping their cool about Ebola, but some acknowledge that they have changed travel plans because of the outbreak of the deadly virus in the United States.”

Wrong. There has been no “outbreak.” One dead Liberian and two nurses is not an outbreak.

Fully 66% of the Rasmussen respondents said that Ebola is a serious public health problem, including 29% who deemed it very serious, but few believe it is an active public health threat here in the U.S.

All this was occurring as spokesmen for the Centers for Disease Control tried to both warn and reassure Americans, managing only to evoke a measure of derision. President Obama also sought to reassure Americans, but fewer and fewer believe anything he has to say these days.

Then he appointed an “Ebola czar” who had no medical or healthcare background whatever to qualify for the job. Add in Obama’s failure to institute a travel ban, and the likelihood is that Democratic candidates will pay a price for this on November 4.

I suspect the President’s advisors are telling him the Ebola problem has been a blessing because the media will not be reporting any of the stories that could harm Democratic candidates. Because the nation’s voters are evenly divided between liberal and conservative points of view, independent voters will be the deciding factor, and they are independent because they pay more attention to events and the news.

One of the stories being held back from the news is the outcome of the U.S. Army investigation of Sgt. Bowe Bergdahl, who was traded by Obama for five top Taliban leaders to secure his release. Members of his unit unanimously say he deserted them, and if that is the Army’s conclusion, it makes the swap look dubious, if not treasonable.

The news after the midterm elections will be filled with reports of employers cutting healthcare insurance for both full- and part-time employees. Wal-Mart has already announced this for its part-timers. There is already news that ObamaCare, the Patient Protection and Affordable Care Act, is proving to be very expensive for those who signed up. This includes news about its higher deductibles and premiums.

Robert E. Moffit, a senior fellow in The Heritage Foundation’s Center for Health Policy Studies, recently reported that “Thanks to ObamaCare, Health Costs Soared this Year,” noting that “On November 15, open enrollment in the ObamaCare exchanges begins again.” Among the lessons learned from Year One of ObamaCare is that “Health costs jumped—big time.” Compared with employer-based coverage, where the average deductible is a little over $1,000, the average exchange deductible was more than double that, at over $2,000.

Obama promised that the typical family premium would be lowered by $2,500, but it has actually increased, and ObamaCare has reduced competition in most health-insurance markets. We do not know how many Americans are actually insured. Despite predictions of millions who would be insured, the administration “now concedes that there are 700,000 fewer persons in the exchanges.”

The claim was that ObamaCare would reduce U.S. health spending, but a recent Health and Human Services report—delayed as long as possible—found that its Accountable Care Organization element has increased costs. States are dropping out of ObamaCare exchanges as a result.

The Obama administration has been very quiet about his intention to bypass Congress to impose an amnesty program for the eleven million or more illegal aliens in the U.S. Most polls demonstrate widespread opposition to amnesty. Obama is expected to try to institute one anyway.

Lastly, unless the Islamic State shows up at the gates of Baghdad and takes control, there is likely to be little news from an Iraq that exists now in name only.

The results of Obama’s six years in office have been a disaster in many ways and the outcome of the midterm elections will have a dramatic effect on Obama’s ability to continue his destruction of the U.S. economy and other policies.

Essentially, a majority of Americans, including many of his former supporters, have concluded that there is no Ebola crisis and that Obama’s time in office has been the very opposite of what he promised. The change they want is to see an end to Obama’s term in office. A start in that direction is the November 4 midterm elections.

Categories: On the Blog

Untangling a Big Government Mess

October 22, 2014, 3:07 PM

Changing our country and its laws back to a manageable and sane state is more complicated than the average small-government advocate may think. One cannot simply look at the situation with a black-and-white, right-and-wrong mindset. A longer-term strategy must be established.

This article is the result of a conversation I was engaged in earlier this week. I was speaking with a like-minded individual about the minimum wage. While we are both against the idea of a minimum wage in principle, I was playing the “devil’s advocate” role. My stance was that there are so many laws and regulations on the books distorting and harming the economy that a minimum wage is necessary to prevent even lower wages.

Before I go any further, I want to state that I understand the consequences of a minimum wage and the effects it has on those who are unable to find a job.

The minimum wage is a solution the government created to deal with the side effects of failed economic policies. Like in many cases, the government chooses to treat the symptoms while leaving the underlying conditions unaltered. When capitalism is transformed by government into a crony-capitalism hybrid, the natural laws of supply and demand are not allowed to operate correctly. What we are left with is a flawed system with underutilized resources, including labor.

This is what brings me to my main point. Untangling the mess that we are in will take careful and thought-out steps. Our situation is like a stereotypical tangled ball of Christmas lights. You can’t just start pulling at strings; you have to pull the right strings first. The laws that we speak out against routinely do not exist in a vacuum; they have effects and consequences in other areas.

Here’s an example. Most, if not all, small-government advocates prefer smaller taxes. However, if a law passed eliminating all taxes only for the top 1%, most, if not all, would be against that change. Even though this appeals to the principle of lower taxes, most would concede that a more balanced approach is necessary.

While I do not advocate a raise in the minimum wage, I do believe other actions need to take place first prior to the potential elimination of said wage. Actions that level the playing field between small and big business, actions that create a friendlier environment to hire more people, etc., need to occur first. One major step in the right direction would be the simplification of the tax code. Closing loopholes exploited by big business and reducing regulations that hurt small business would create a more uniform market. Eliminating or reducing payroll taxes would also benefit employees by decreasing the cost of a new hire. These steps are just a few moves that could occur quickly with little or no unintended consequences.

If the economy were allowed to function in true free-market fashion, resources would be used as efficiently as possible. Eventually, full employment would be attained and a natural rise in wages would follow.

The laws that distort the free market and society have been built up since the founding of our nation. Unwinding the mess cannot be done haphazardly. Pulling the wrong string first may cause a knot that can’t be untied, causing the whole mess to crumble down on the average citizen’s back. Moving toward a society with a far more limited government requires careful planning and strategy.

Categories: On the Blog

The “Malaise” Has Returned

October 22, 2014, 2:39 PM

The joke is that Jimmy Carter is happy that Barack Obama has replaced him as the worst President of the modern era.  It is a supreme irony that Obama’s campaign theme was “Hope and Change” when Americans have lost a great deal of hope about their personal futures and the only change they want is to see Obama gone from office.

Elected by a narrow margin in 1976, Carter managed in his one term to see his approval ratings fall to twenty-five percent by June 1979. The lesson Americans have to learn over and over again is that liberal policies and programs don’t work.

In six years, the kind of dependence on government to take care of people from cradle to grave that Obama has fostered has left the nation with 92 million people who are unemployed or have stopped looking for a job and has put 45 million on food stamps, and there is still talk of a “minimum wage” in the interest of “fairness” that simply kills jobs, especially those that used to be filled by young people just entering the workplace. The worst part of Obama’s presidency is the lies he tells in the apparent belief that most Americans are so stupid they won’t see through them.

On July 15, 1979, in an effort to encourage a greater sense of confidence, Jimmy Carter delivered a speech that became known as the “malaise” speech, though it did not include that word. What it did, however, was double down on all the bad policies Carter had pursued and blame Americans for not accepting them. By then the economy was in decline, gasoline prices and interest rates had climbed to record levels, and the voters were understandably pessimistic. A few months later, Iranians took U.S. diplomats hostage, and they would not be released until Ronald Reagan took the oath of office.

Carter’s speech began by asking, “Why have we not been able to get together as a nation to resolve our serious energy problem?” Quite literally, there was no need then or now for an energy problem because, as recently noted by the Energy Information Administration, the United States has enough coal to last more than 200 years! With the development of hydraulic fracturing (fracking), we now have access to more oil than exists in Saudi Arabia.

Obama literally came into office saying he intended to wage a war on coal, and he has, using the Environmental Protection Agency to institute regulations that have led to the closing of mines and the shutdown of coal-fired plants, which used to produce 50% of the nation’s electricity and now produce just 40%. He has resisted allowing drilling for oil in the huge reserves off our east and west coasts. He has refused to permit the construction of the Keystone XL pipeline. These policies have led to the loss of thousands of jobs in the years since the 2008 financial crisis.

In his speech, Carter said, “The erosion of our confidence in the future is threatening to destroy the social and the political fabric of America.” We would do well to remember that we have been through periods like this before and corrected course.

In 1980 Ronald Reagan would be elected to replace Carter and America prospered through his two terms, returning to being a major superpower, economically and militarily. That’s what conservatism produces.

Carter, however, blamed Americans for the problems of his times. “Two-thirds of our people do not even vote. The productivity of American workers is actually dropping, and the willingness of Americans to save for the future has fallen below that of all other people in the Western world.”

One of Obama’s earliest acts was to visit foreign nations and blame America for many of the world’s problems. Militarily, he pulled our troops out of Iraq and intends to do the same in Afghanistan. He has cut the military budget to the bone and has now defined its mission as addressing “climate change,” not the enemies of our nation.

Obama spent his entire first term blaming George W. Bush for every problem that he did nothing to correct. Indeed, Obama has never seen himself as the real problem, finding anyone else to blame.

Those Americans watching Carter deliver his speech must surely have cringed as he announced that he intended to set import quotas on foreign energy resources. He said he wanted Congress to impose a “windfall profits” tax on the very energy firms he was counting on to get us out of the doldrums and dependency that were causing the problem. He wanted the utility companies to “cut their massive use of oil by fifty percent within a decade.” He wanted them to switch to coal, and now we live in a nation whose President doesn’t want our utilities to use coal. Why? Despite massive evidence to the contrary, he has advocated “renewable” energy, wind and solar, neither of which can ever meet the nation’s needs.

“In closing, let me say this: I will do my best, but I will not do it alone. Let your voice be heard,” said Carter.

In the 1980 election the voters’ voice was heard. Carter was gone and Reagan was our President. With him came his infectious patriotism and optimism. By late 1983 his economic program had ended the recession he inherited from Carter. A similar program would have put an end to what is now routinely called Obama’s Great Recession.

We are at a point not dissimilar to the days of Jimmy Carter, but with an even greater sense of dissatisfaction with and distrust of Barack Obama.

I reach back in our recent history to remind you that on November 4th, in our midterm elections, and again in the 2016 presidential election, we can repeat history by ridding the nation of those members of Congress who voted for ObamaCare and have supported President Obama. We must wait to see who the GOP will offer as a presidential candidate, but we have time for that.

We have time to “hope” for a better future and we have the means to make the “change” to achieve it.

© Alan Caruba, 2014

[Originally published at Warning Signs]

Categories: On the Blog

Who Says You Need a Doctor to Fight Ebola?

October 22, 2014, 1:10 PM

Perhaps it’s not surprising coming from our first Community Organizer president that the trait the administration claims is most needed in an “Ebola czar” — not that it’s been shown that such a position needs to be created in the first place — is, as Dr. Anthony Fauci of the National Institutes of Health put it, “somebody who’s a good organizer.”

It’s been proven that rabble-rousing on the South Side of Chicago does not qualify one to lead anything more significant than a golf foursome (though you have to give Obama credit for spending his time doing what he’s best at, showing a clear understanding of the principle of comparative advantage).

Similarly, one wonders just what the newly named czar, Ron Klain, has “organized” that should give the American people confidence that the most incompetent administration in modern U.S. history is doing what needs to be done to keep citizens safe from a virus that the media is turning into the biggest medical scare since the Spanish Flu.

To wit, Ron Klain — no doubt a very smart man and talented lawyer, including having graduated magna cum laude from Harvard Law School and clerking for a Supreme Court justice — is best known for organizing and advising Democratic politicians from Ed Markey to Bill Clinton to Al Gore to Gen. Wesley Clark to John Kerry, and most recently serving as Chief of Staff to Vice President Joe Biden (proving that intelligence cannot be gained by proximity) before taking a job in the private sector.

Other notes of interest about Mr. Klain include his membership in the Algore Cult of Global Warming and that he was a lobbyist for Fannie Mae, helping a firm that required tens of billions of dollars in taxpayer bailouts with “regulatory issues,” according to the Washington Post. He has publicly supported the ill-conceived “Buffett Rule” — calling for higher taxes on the wealthy — although within an analysis that at least recognizes that the “middle class” is as skeptical of Democrats as it is of Republicans.

Clearly the man is indeed a qualified organizer — of the office workings and spin machines of liberals.

But unless you count a couple of health advice-related portfolio companies of Case Holdings, where he serves as president, Mr. Klain’s medical experience seems limited to visits to his own GP.  A quick note to Mr. Case: Klain sure couldn’t sniff out the impending disaster of Solyndra despite being warned of it, so you might want to keep an eye on him if he’s involved with keeping your money safe.

Most of all, the selection of Ron Klain underscores the blindness of President Obama to what the American public really thinks of his Keystone Cops approximation of an executive branch.

Petty (and not so petty) dictators throughout history have valued loyalty over competence or honesty. That characteristic, perhaps more than any other, defines the composition of the Obama cabinet and their underlings such as Lois Lerner and Susan Rice — to whom, strangely, we are told that Ron Klain will report. I expect Mr. Klain to be far more competent than either Lerner or Rice (or Eric Holder or Janet Napolitano or Kathleen Sebelius, just to name a few), but every bit as loyal — which is really the point.

But what purpose does even the existence of this position serve?

Are there not literally dozens of people within the Departments of Health and Human Services or Homeland Security or even Defense or State, within the Centers for Disease Control or the National Institute of Allergy and Infectious Diseases or within management of major teaching hospitals, who could combine managerial skill with medical or epidemiological experience in a way that would be both useful and credible?

(My two cents: I recommend Dr. Alex Rowe of the CDC who has led important work in “improving health care provider performance in low- and middle-income countries” in Africa. Alex (whom I’ve known since high school) is just one example of a person whose experience seems vastly more relevant than Mr. Klain’s medically empty résumé, though I’m far from certain that his wife would appreciate my suggesting him for the job.)

And should not an “Ebola czar” be reporting to the HHS Secretary (and perhaps secondarily to the Secretary of Homeland Security) rather than to counter-terrorism adviser Lisa Monaco and national security adviser Susan Rice, a political hack whose only redeeming quality is that she’s willing to lie to the public (not just about Benghazi) and take the political heat for the boss? If anything, this chain of command is a recipe for complete dysfunction in the American response to Ebola, not just because the public doesn’t trust Susan Rice but also because the actual work needs to be done well outside of her and Ms. Monaco’s nominal areas of expertise and control.

The obvious conclusion is that the purpose of the naming of Ron Klain to the post of Ebola czar is political: to demonstrate that the administration is doing something about an issue over which the media is shamefully whipping the public into a frenzy. It is the health care equivalent — not least in the modest immediate risk to the homeland being amplified by horrific images from far-away places — of Obama’s air campaign against ISIS. (Or, if you work for the administration where the word “Syria” appears to be verboten, ISIL.)

And just as the air campaign is not a serious military strategy, the Ebola czar-naming is not a serious medical strategy. Even more ridiculous, it’s already failing as a political CYA.

Ron Klain did not attend a high-level meeting last Friday to discuss the federal response to Ebola. He also missed a Saturday evening meeting on the same subject. But with President Obama having spent Saturday afternoon golfing (you must see that link), Mr. Klain can be forgiven for believing that all this “Cancel my fundraisers!” and “It’s our top priority!” rhetoric is just for the masses.

So let’s see if we can summarize this: Motivated by a fear of even greater public perception of presidential incompetence and aloofness as we head into an election that should result in the trouncing of congressional Democrats, the president has appointed a politically loyal “organizer” with no medical experience whatsoever to a medically focused position that should not exist.

This in order to make the public feel better about a virus that, although often fatal when contracted, would be unlikely to pose a substantial threat to the U.S. if the government would take the obvious step of stopping residents of a few West African countries from entering the United States (apparently there’s no “red line” there either) until the outbreak can be controlled where it currently exists, rather than trying to hire airport Ebola screeners for $19/hour.

For any other administration, this confused mess would be considered a blunder. For President Obama, it’s quite literally par for the course.

[Originally published at The American Spectator]

Categories: On the Blog

Radio Rewind: National Football League Sacks Taxpayers

October 22, 2014, 12:53 PM

Recently, I joined Columbus radio talker Chuck Douglas — host of Saturday-morning talk show On Point with Chuck Douglas — to discuss how the National Football League (NFL) receives special tax carve-outs and exemptions, as well as millions of dollars in taxpayer subsidies for stadium construction and maintenance.

Despite volumes of academic evidence to the contrary, supporters of sports subsidies often claim that the National Football League requires taxpayer support to remain solvent, and that public financing of privately-owned stadiums will lead to increased economic revenue. Neither claim, unfortunately, is supported by the evidence.

In fact, the academic evidence shows that subsidization actually “reduces the level of real per capita income in metropolitan areas” by diverting public money that would otherwise be spent on core government functions toward socializing team owners’ risks while they are allowed to pocket the rewards.

Take a listen!

Categories: On the Blog

Thinking the Unthinkable – Part II

October 22, 2014, 11:50 AM

In Scott Cleland’s recent piece titled, “Silicon Valley’s Biggest Internet Mistake,” he makes an important, too-little-addressed point: Were the FCC to classify Internet service as a “telecommunications” service under Title II of the Communications Act, this drastic step likely would have significant adverse international ramifications.

In a September 29 paper titled, “Thinking the Unthinkable: Imposing a ‘Utility Model’ on Internet Providers,” I explained, from a purely domestic policy perspective, why FCC imposition of the Title II common carrier utility model on broadband Internet providers should be “unthinkable.” The adverse international consequences provide another reason.

As Scott explains in his commentary:

Legally, “telecommunications” is what international treaties and agreements regulate like a utility, under the Constitution of the United Nations’ International Telecommunications Union (ITU). Specifically, ITU agreement: ITU-T D.50, recognizes the sovereign right of each State to regulate “telecommunications” as that State determines. Apparently, Silicon Valley interests are blind to the many risks of “telecommunications” regulation to their foreign businesses….[T]he FCC reclassifying the American Internet as “telecommunications” predictably would invite most every other country to reclassify their Internet traffic as “telecommunications” too, so that they could impose lucrative price tariffs on Silicon Valley’s dominant share of Internet traffic into their countries.

This is not an unjustified concern. Indeed, there is rising apprehension in many quarters about the designs many foreign countries harbor to exert more government control over Internet traffic within their own borders and, through international organizations, throughout the world. Especially at a time when the U.S. has embarked on a process intended to lead to a new governance structure for ICANN, the FCC – and the entire U.S. government – ought to be concerned about actions here at home that are likely to be construed by foreign governments as authorizing more government interference in Internet operations.

In fact, this very concern regarding the international ramifications resulting from FCC adoption of net neutrality regulations was expressed by Ambassador Philip Verveer in May 2010 in his capacity as the State Department’s Coordinator for International Communications and Information Policy. Of course, Philip Verveer now serves as Senior Counselor to FCC Chairman Tom Wheeler.

Answering a question at a Media Institute luncheon as the FCC was considering the then-pending net neutrality rulemaking, according to the report in Broadcasting & Cable, Ambassador Verveer said this:

“I can tell you from my travels around the world and my discussions with figures in various governments around the world there is a very significant preoccupation with respect to what we are proposing with respect to broadband and especially with respect to the net neutrality.”

Most significantly, Ambassador Verveer went on to say that the net neutrality proceeding “is one that could be employed by regimes that don’t agree with our perspectives about essentially avoiding regulation of the Internet and trying to be sure not to do anything to damage its dynamism and its organic development. It could be employed as a pretext or as an excuse for undertaking public policy activities that we would disagree with pretty profoundly.”

Of course, many others were saying much the same at the time, but Ambassador Verveer was subjected to a harsh attack by Public Knowledge’s Harold Feld for deviating from what Mr. Feld considered to be the established Democratic party line. He wondered how someone as experienced as Mr. Verveer “manage[d] to get so off message at precisely the wrong time.”

I happen to think that Mr. Verveer’s job was not primarily to stay “on message,” but rather to serve the American people by explaining the risks of adopting an ill-advised policy. I have known Phil Verveer since we served together at the FCC in the late 70s and early 80s, and I have a high regard for his qualifications and his dedication to public service. At the time of Mr. Feld’s attack, I defended him. And shortly thereafter, in the context of responding to another of Mr. Feld’s blogs, this time urging FCC Chairman Julius Genachowski to act quickly to adopt net neutrality regulations “to fire up the base before the election,” I called Mr. Verveer a “stellar public servant.”

Nothing has changed my view that Phil Verveer is a stellar public servant. But I do wish he would avail himself of the opportunity once again to explain that the concerns he expressed in 2010, when he was responsible for coordinating international communications policy on behalf of the U.S., are still valid today. Regardless of whatever good intentions may be expressed, if the U.S. government adopts new net neutrality mandates, especially in conjunction with classifying Internet providers as “telecommunications” carriers, other countries may well use such action as an excuse or pretext for, in Ambassador Verveer’s words, “undertaking public policy activities that we would disagree with pretty profoundly.”

In other words, despite any protestations to the contrary uttered by U.S. officials, the FCC’s actions regulating Internet providers will speak louder than its words. Other countries, with obvious designs on exerting more control over Internet communications, and over international entities that play a role in managing Internet communications, will seize upon the FCC’s action as a justification.

Scott Cleland is right that this would not be good for Silicon Valley.

I would go further: When then-FCC Chairman Bill Kennard in 1999 rejected dumping what he called the telephone world’s “whole morass of regulation” on the then-emerging cable broadband systems, he concluded, “That is not good for America.”

Dumping the telephone world’s “whole morass of regulation” on broadband Internet providers still would not be good for America today. Indeed, it ought to be unthinkable.

[Originally published at The Free State Foundation]

Categories: On the Blog

‘This is not an Education Problem. This is a Government Problem’

October 22, 2014, 11:15 AM

The world needs more teachers like Susan Bowles. The kindergarten teacher at Lawton Chiles Elementary School in Gainesville risked her job to stand up for what she believes in.

What she believes is that conducting standardized testing three times a year, some of it required to be computerized, is simply not in the best interests of the kindergarten students she teaches.

Despite the risk of losing her job after 26 years of teaching, Bowles felt compelled to speak out.

And something amazing happened. Instead of her being fired or reprimanded, the policy was changed. The community rallied around Bowles after she took a stand. Now, students in grades K–2 will not be required to take the FAIR tests Bowles refused to administer.

In the letter Bowles wrote to parents, she explained that even though she would be in breach of contract, she couldn’t in good conscience give the test to her students. The FAIR testing would have meant kindergarten students being tested on a computer using a mouse, Bowles said.

Although many of her students are well-versed in using tablets or smartphones, most have never used a desktop computer. Once an answer is clicked, even if a student accidentally clicks the wrong place, there is no way to go back and correct it. This means the data that would have been collected would not have been accurate.

“While we were told it takes about 35 minutes to administer, we are finding that in actuality it is taking between 35-60 minutes per child,” Bowles wrote. “This assessment is given one-on-one. It is recommended that both teacher and child wear headphones during the test. Someone has forgotten there are other five-year-olds in our care.”

The problem is not with the people she works for, Bowles said. “This is not an education problem. This is a government problem,” she wrote.

Bowles was not directly named in the letter to parents from officials changing the testing policy, but the letter does mention the recent attention surrounding the issue.

Bowles was brave in facing down the school administration, state and local officials, and teachers unions who continually protect the status quo and each other. She stood up by herself with no way of knowing what the consequences would be.

Bowles told me she feels lucky to have had the opportunity to speak her mind, because her husband was supportive and her children are grown. After hearing the policy had changed, Bowles said, she “hugged, laughed, cried, and did a happy dance” with other teachers who had been waiting outside her classroom because they had already heard the news.

“I was surprised and pleased that they actually backtracked on the FAIR, suspending it for one year,” said Bowles, noting tension over standardized testing has increased because of Common Core. “Of course, the fear is it will be back next year with a few tweaks.

“This fight should continue — not just regarding the excessive testing that takes away from our children’s learning, but also for the standards that have been adopted that are not developmentally sound, at least for elementary students,” said Bowles. “I can speak for the elementary grades that any developmental psychologist or early childhood educator would tell you that these standards are inappropriate.”

Two bills have been introduced recently to decrease the federal footprint on standardized testing. Education Secretary Arne Duncan has spoken about the possibility of over-testing.

The hope is that these changes aren’t just lip service. Parents, teachers, and legislators will have to continue to fight for students and against the education establishment. The contrasting approaches of the federal government and Susan Bowles regarding how children should be educated suggest we all should support more local control rather than failing federal mandates.

Heather Kays (hkays@heartland.org) is a research fellow with The Heartland Institute and is managing editor of School Reform News.


[Originally published at The Tampa Tribune]

Categories: On the Blog

Roy Spencer: Why 2014 Won’t Be the Warmest Year on Record

October 22, 2014, 11:14 AM

Dr. Roy Spencer of the University of Alabama in Huntsville, a frequent presenter at Heartland’s climate conferences, is an invaluable and prominent voice among scientists who are rightly skeptical of the hypothesis of man-caused climate change. In a post today, Dr. Spencer throws a lot of cold water on the idea that 2014 is shaping up to be the “warmest year on record.”

Much is being made of the “global” surface thermometer data, which three-quarters the way through 2014 is now suggesting the global average this year will be the warmest in the modern instrumental record. I claim 2014 won’t be the warmest global-average year on record … if for no other reason than this: thermometers cannot measure global averages — only satellites can. The satellite instruments measure nearly every cubic kilometer – hell, every cubic inch — of the lower atmosphere on a daily basis. You can travel hundreds if not thousands of kilometers without finding a thermometer nearby.

(And even if 2014 or 2015 turns out to be the warmest, this is not a cause for concern…more about that later).

The two main research groups tracking global lower-tropospheric temperatures (our UAH group, and the Remote Sensing Systems [RSS] group) show 2014 lagging significantly behind 2010 and especially 1998.

With only 3 months left in the year, there is no realistic way for 2014 to set a record in the satellite data.

Granted, the satellites are less good at sampling right near the poles, but compared to the very sparse data from the thermometer network we are in fat city coverage-wise with the satellite data.

In my opinion, though, a bigger problem than the spotty sampling of the thermometer data is the endless adjustment game applied to the thermometer data. The thermometer network is made up of a patchwork of non-research quality instruments that were never made to monitor long-term temperature changes to tenths or hundredths of a degree, and the huge data voids around the world are either ignored or in-filled with fictitious data.

Furthermore, land-based thermometers are placed where people live, and people build stuff, often replacing cooling vegetation with manmade structures that cause an artificial warming (urban heat island, UHI) effect right around the thermometer. The data adjustment processes in place cannot reliably remove the UHI effect because it can’t be distinguished from real global warming.

Satellite microwave radiometers, however, are equipped with laboratory-calibrated platinum resistance thermometers, which have demonstrated stability to thousandths of a degree over many years, and which are used to continuously calibrate the satellite instruments once every 8 seconds. The satellite measurements still have residual calibration effects that must be adjusted for, but these are usually on the order of hundredths of a degree, rather than tenths or whole degrees in the case of ground-based thermometers.

And, it is of continuing amusement to us that the global warming skeptic community now tracks the RSS satellite product rather than our UAH dataset. RSS was originally supposed to provide a quality check on our product (a worthy and necessary goal) and was heralded by the global warming alarmist community. But since RSS shows a slight cooling trend since the 1998 super El Nino, and the UAH dataset doesn’t, it is more referenced by the skeptic community now. Too funny.

You’re going to want to read the whole thing.

Categories: On the Blog

Council’s New Bill to Boost Smoking

October 22, 2014, 9:49 AM

New York City was the first major city to ban vaping, the use of e-cigarettes, wherever cigarette smoking is banned. Cities around the country have followed suit.

As a result, smokers who are trying to quit are forced to take their e-cigarettes outside together with the smokers. Now the council may go further and ban the sale of flavored e-cigarettes.

Councilman Costa Constantinides’ bill, introduced this month, would be a blow to smokers who want a less harmful alternative that actually tastes good.

Flavors are critically important because they make e-cigarettes attractive to smokers who are trying to quit. Smokers who’ve switched from smoking to vaping regularly report that they enjoy the various flavors of e-cigarettes, often more than the flavor of burning tobacco.

In fact, in a survey published last November in the International Journal of Environmental Research and Public Health, participants reported that e-cig flavors, including fruit flavors, were “very important” to them in their effort to quit or reduce smoking.

No, the ban wouldn’t immediately send folks back to smoking. They’d just buy their e-cigs online and outside of the city. So the initial impact would be to hurt legitimate businesses that are trying to offer their customers an appealing and less harmful alternative to cigarettes.

The real harm of the bill comes later, when other legislatures follow our example. I’ve testified against bans on public vaping at city council meetings around the country.

Almost every time, the supporters cite the impressive fact that New York City passed a similar law. In fact, it’s probably their best argument. The same would happen if Constantinides’ poorly considered bill becomes law.

As is often the case, those calling for a ban to restrict the choices of adults tell us that their goal is to protect children.

“These flavors are direct marketing to children,” Constantinides said when introducing the bill. “They appeal to children, and we’re taking them out of that market.”

That’s absurd. Legislatures around the country, including the City Council, have already banned the sales of all e-cigs to minors. The Food and Drug Administration’s proposed rule does the same.

The council and the Health Department should get back to basics and make sure stores don’t sell e-cigarettes, flavored or not, to kids.

Perhaps while they are at it, they might be able to stop the greater threat, sales of actual cigarettes to kids. It’s all too common here, for all the Health Department’s aggression in other areas.

Flavor-ban proponents also argue that until all the science is in, it’s better to be safe than sorry. But that “precautionary principle” approach doesn’t apply so simply here.

The World Health Organization, the FDA and scientists at leading anti-smoking groups such as the Legacy Foundation have all recognized that while e-cigarettes have risks, they also have potential to help people reduce their harm dramatically by switching from smoking.

Specifically, the Legacy Foundation’s David Abrams lauds e-cigarettes as a “disruptive technology,” telling the Washingtonian, “I think we’re missing the biggest public-health opportunity in a century if we get [the regulations] wrong.”

He adds, “We’ve got to thread this needle just right. We’ve got to both protect kids and non-users and use it as a way to make obsolete the much more lethal cigarette.”

The FDA has proposed rules to govern e-cigarettes, but hasn’t opted to ban flavors. Instead, the agency is looking into not only the science of e-cigs, but how products such as flavored cigarettes are being used.

It won’t have to look far: Vapers report that the appeal of flavors has made the much more lethal cigarette obsolete, at least to them.

Councilman Constantinides’ proposed ban would not only “thread this needle” the wrong way, it would stick smokers not in the finger but in the lungs, by suggesting flavored e-cigarettes present a risk in the same category as smoking.


[Originally published at Pundicity]

Categories: On the Blog

Crony Socialists Looking to Ban Online Gambling Don’t Seem to Realize It’s a WORLD WIDE Web

October 22, 2014, 9:35 AM

I used to bet football games, and play occasional hands of cards (I’m mostly retired).  I did so mostly online – via a company based in Costa Rica.

Did I know the owner of the Costa Rican company?  No.  Did I know anyone in Costa Rica?  No.  Did I have any particular affinity for that nation?  No.  To quote Joe Walsh (the James Gang, solo and Eagles musician, not the former Congressman):

Ain’t never been there – they tell me it’s nice.

The Costa Rican company got my money because the World Wide Web allowed me to shop large swaths of the planet for the business I liked best.  Free trade is a fabulous thing.  The Internet is a fabulous thing.  The latter makes the former even easier – which is a fabulous thing.

Unless you have an old school brick-and-mortar business model – and insist on not changing as the world does. For most stubborn businesses – that spells doom.  If you’re an industry titan and can afford to pony up to politicians – you look to get the government to protect you from Reality.  It’s called Crony Socialism (it’s not Crony Capitalism – because it has nothing to do with capitalism).

And with this particular federal government – with a $4 trillion-a-year budget, countless huge regulatory hammers poised over every nook and cranny of the economy and politicians looking to deal – very bad things can happen.

It’s usually Democrats – the Huge Government Party – who use the Leviathan as a government-money-and-regulation fundraising weapon.  For instance – remember the “green” “energy” (which isn’t green and provides nearly no energy) portion of the abysmally failed 2009 “Stimulus?”  80% of DOE Green Energy Loans Went to Barack Obama Donors. It did nothing for energy or the economy.  But it did a lot for Obama Backers. Sadly, Republicans – allegedly Less Government’s representatives – are not immune to the pernicious infection that is Crony Socialism.

Meet casino magnate Sheldon Adelson.  Who has generated a personal net worth of $37 billion in large part with his brick-and-mortar gambling houses.  Who sees U.S. online gambling as a threat to his empire – but doesn’t seem to see that the Internet extends WAY beyond our borders.

Who has been a huge Republican donor – he gave $49.8 million to center-right Super PACs just in 2012.  And who is now looking for Crony Socialist legislative payback – someone to save him and his older-school businesses from the Reality that is the WORLD WIDE Web. Sadly, some Elephants are willing to oblige. The bill to ban most online gambling for money was introduced by Sen. Lindsey Graham (R-S.C.) and Rep. Jason Chaffetz (R-Utah) earlier this year.

How exactly will said bill ban online gambling? By reworking a half-century-plus-old law – to apply it to the 21st Century Internet economy. Behold the 1961 Federal Wire Act. Here’s Senator Graham on video: “Uhhh, yeah. We’re going back to the wiring act. The Wire Act prevented transfer of money for online gambling.”

Stop right there.  How did a 1961 law prevent online…anything?  There was no online.  The Senator continues: “Even if you wanted to do online gambling you would have to have regulations, right?” Stop right there.  The Senator’s bill doesn’t regulate online gambling – it bans it.  And my personal online Costa Rican experience was just outstanding – without regulations.  It was most likely much more outstanding precisely because there were no regulations.

Senator Graham seems a bit confused.  What could be contributing to his un-clarity? After Casino Mogul Throws Him Big Money Fundraiser, Graham Seeks Internet Gambling Ban. Big-time contributions.  The solution to Crony Socialism is Less Government.  If the Leviathan wasn’t quite so enormous, the cronyism from which we constantly suffer wouldn’t be quite so prevalent.

The less power and money government has to throw around – the less money will be campaign-contributed to swing government the contributors’ way.

It’s sad – and angering-ly pathetic – when allegedly Less Government Republicans look to expand the Leviathan further still to get some of that Crony Socialism money for themselves.

[Originally posted on Red State]

Categories: On the Blog

Aging Metropolitan Areas Ranked

October 22, 2014, 8:21 AM

America is getting older, as medical science prolongs life expectancy and the fertility rate hovers at or even below the replacement rate. One metric for gauging the nation’s aging is the median age – the age at which one half the population is younger and the other half is older. In 2000, the median age in the United States was 35.3. By 2013, the median age had increased to 37.5.
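For readers who want to see the mechanics of the metric, here is a minimal Python sketch, under illustrative assumptions, of how a median age can be estimated from a bucketed age distribution by interpolating within the bracket that contains the population midpoint. The median_age function, the age brackets, and the counts are hypothetical examples for illustration, not Census code or data.

# Minimal sketch: estimate a median age from bucketed population counts.
# The age brackets and counts below are illustrative only, not Census data.
def median_age(brackets):
    """brackets: list of (lower_age, upper_age, count) tuples, sorted by age."""
    total = sum(count for _, _, count in brackets)
    midpoint = total / 2.0
    running = 0
    for lower, upper, count in brackets:
        if running + count >= midpoint:
            # Interpolate linearly within the bracket holding the midpoint.
            fraction = (midpoint - running) / count
            return lower + fraction * (upper - lower)
        running += count
    raise ValueError("empty distribution")

# Hypothetical age distribution for an imaginary metropolitan area.
example = [(0, 5, 70), (5, 15, 130), (15, 25, 140), (25, 35, 150),
           (35, 45, 140), (45, 55, 130), (55, 65, 110), (65, 85, 130)]
print(round(median_age(example), 1))  # prints 35.7 for this made-up distribution

Ranking metropolitan areas by aging, as in the figures discussed below, then amounts to computing this number for 2000 and 2013 and sorting by the difference.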

But the nation’s aging is by no means uniform. Each of the nation’s 52 major metropolitan areas (those with more than 1 million residents) got older between 2000 and 2013. However, the differences were substantial, ranging from just 0.4 years to 4.6 years, more than ten times as much.

Metropolitan Areas Aging the Least

Nine of the 10 metropolitan areas that have aged the least attracted more residents from other parts of the country than they lost between 2000 and 2013 (net domestic migration data from the Census Bureau annual population estimates). This is consistent with data showing that younger households move more. The Current Population Survey indicates that households with householders less than 35 years of age are more than 2.5 times as likely to move between states as those led by householders from 35 to 64. The real competition for migrants between metropolitan areas is for younger households.

Oklahoma City aged the slowest of any major metropolitan area between 2000 and 2013. In 2000, the median age was 34.2 years, which increased to 34.6 in 2013, an increase of only 0.4 years. Orlando, the second-slowest-aging metropolitan area, added nearly three times as much to its median age, which is now 36.7 years, up 1.1 years from 2000. Indianapolis and San Antonio tied for third with increases of 1.3 years, while Kansas City and Washington tied for fifth, at 1.4 years. The top ten was rounded out by Nashville, Houston, Tampa-St. Petersburg, and Riverside-San Bernardino (Table 1; complete data is at demographia.com).

Metropolitan Areas Aging the Most

Detroit aged faster than any other major metropolitan area, as its median age rose from 35.4 to 40.0 years, an increase of 4.6 years. Cleveland aged 4.0 years, Los Angeles 3.5 years, Rochester 3.4 years, and Providence 3.3 years. The bottom ten also included Salt Lake City, Hartford, Cincinnati, San Jose, Pittsburgh, and Buffalo (the latter three in a tie, which makes the bottom 10 a bottom 11). Eight of the bottom 11 cities are in the Northeast and Midwest. The three others, Los Angeles, Salt Lake City, and San Jose, have median ages below the national average, but Los Angeles and San Jose will be older than average within a decade if current trends continue. All 11 cities experienced domestic migration losses between 2000 and 2013.

Youngest Metropolitan Areas

The youngest major metropolitan area is Salt Lake City (31.8 years), which held that title in both 2000 and 2013. Salt Lake City, however, was among the fastest-aging cities, as noted above. Riverside-San Bernardino is in hot pursuit, having gained 1.3 years on Salt Lake City, with a 2013 median age of 33.3. Riverside-San Bernardino has aged less because it has become a refuge for many younger households from nearby Los Angeles, where house prices are stratospheric compared to household incomes. Texas holds the next four positions, with Austin at 33.5, Houston at 33.6, and Dallas-Fort Worth and San Antonio tied for fifth at 34.2 (Table 2).

Oldest Metropolitan Areas

All of the five oldest cities had median ages of 40 years or more. The oldest metropolitan area in 2013 was Pittsburgh (42.8), followed by Tampa-St. Petersburg (41.9), Cleveland (41.3), Buffalo (40.8), and Hartford (40.5). As in the case of the fastest-aging cities, the oldest cities are mainly from the slower-growing Northeast and Midwest.

Sustaining Urban Economies by Attracting Younger Households

Unless there is a substantial increase in birth rates (which no one expects), metropolitan areas will continue to age. But, as before, some will age more rapidly than others.

Part of the reason some slow-growth or declining “Rust Belt” Northeastern and Midwestern cities have aged faster is strong outward domestic migration, with its large share of younger households. However, metropolitan areas with high household incomes have also experienced strong net domestic out-migration. This is evident in Boston, New York, San Francisco, and San Jose.

Other cities, like Houston, Dallas-Fort Worth, or Indianapolis, can be more attractive because their lower cost of living can leave households with more discretionary income and a better quality of life. Much of this cost-of-living advantage is the result of lower house prices relative to incomes.

This association between better housing affordability and greater net domestic migration is identified in research by Harvard economists Peter Ganong and Daniel Shoag. Nobel Laureate Paul Krugman also observed the connection between net domestic migration and housing affordability in a recent New York Times column entitled “Wrong Way America.”

There are other reasons that people move. However, the common thread among movers is aspiration — seeking better lives. People move to places they can afford and where they can hope for a higher standard of living. Metropolitan areas that meet the needs of younger households, with a better quality of life and job-creating, sustainable economies, are likely to age more slowly.


[Originally published at Huffington Post]

Categories: On the Blog