Somewhat Reasonable

The Policy and Commentary Blog of The Heartland Institute

Stop the Waste of Tax Dollars by the EPA

May 01, 2014, 8:37 PM

The Environmental Protection Agency sent out this news release April 30 asking for public comments supporting its efforts to promote renewable energy – solar and wind – as replacements for fossil fuels in electricity production. (The release is also pasted below.) This is in support of EPA programs to stop fossil fuel use by declaring carbon dioxide from burning fossil fuels a catastrophic air pollutant.

I can assure you this news release was sent to assigned representatives of the Sierra Club, Natural Resources Defense Council, and similar groups, which will likely generate hundreds of thousands of comments supporting this program. EPA used this approach to obtain one million letters of support for its ruling to stop the use of coal for generating electricity in future power plants.

The EPA was charged back in 1970 with seeing that the nation had clean water and air. It is not an agency meant to set energy policy. Give it a dose of its own medicine by submitting comments denouncing its actions to cleanenergy@epa.gov. Share this blog post with friends who have similar thoughts.

————

CONTACT:
Enesta Jones (News Media Only)
jones.enesta@epa.gov
202-564-7873
202-564-4355

FOR IMMEDIATE RELEASE
April 30, 2014

EPA Solicits Public Comments on Action Plan for RE-Powering America’s Land

WASHINGTON – The U.S. Environmental Protection Agency (EPA) is seeking public comments on the draft action plan for its RE-Powering America’s Land Initiative. The plan guides EPA’s efforts over the next two years to encourage renewable energy development on current and formerly contaminated lands, landfills and mine sites when such development is aligned with the community’s vision for the site. The cleanup of contaminated land and the production of renewable energy will provide long-term improvements to air quality in communities, while protecting public health.

In 2010, the RE-Powering America’s Land Initiative published its first management plan to provide a useful framework to engage stakeholders on the potential to site renewable energy on contaminated lands and track progress. This second action plan, Action Plan 2.0, identifies activities planned for the next two years.

The agency will solicit comments for 30 days. Comments on the proposed plan are due by Friday, May 30. To submit a comment, please send it to cleanenergy@epa.gov

A copy of the draft Action Plan 2.0: http://www.epa.gov/oswercpa/action_plan.htm

Categories: On the Blog

Fake Journalist Lee Fang Foiled Again

April 30, 2014, 5:11 PM

Topless Facebook model Lee Fang

Earlier this month, sometime Democratic Party activist and topless Facebook model Lee Fang (rhymes with “bong”), claimed to be a reporter and approached Heartland Institute President Joseph Bast after a speech about climate change in Colorado. He proceeded to ask a series of questions about an essay titled Five Lies About Tobacco that Joe wrote back in 1998. Joe stands by it, and rightly maintains it gets better with age.

Fang, who was wearing a shirt while interviewing Bast, has now released a video of his attempted ambush at something called “Republic Report.” I’m not linking to it, because Fang is a fake journalist.

From a July 31, 2012 piece at the Washington Free Beacon by Adam Kredo:

One of the Nation magazine’s newest hires, Lee Fang, is a far-left Democratic operative with a history of publishing error-riddled and misleading reports.

As Fang begins his stint at the historic left-wing publication, questions still surround a series of half-baked reports that were filed during his time as a writer for the Center For American Progress Action Fund’s Think Progress blog, and for the Republic Report. Both organizations advocate for greater transparency in politics, even as neither fully discloses its donors….

As an investigative blogger for CAPAF’s Think Progress blog, Fang took a hit for secretly coordinating his coverage with Democrat-aligned special interest groups. Fang continued his questionable reporting methods after joining the Republic Report alongside convicted felon Jack Abramoff soon after his release from prison.

Fang is laughably incompetent, producing the dumbest story about Koch Industries ever written. (And that’s a fierce competition!) When still at Think Progress, Fang  wrote “How Koch Industries Manipulates the Oil Market for Profit.” You can read the story for yourself, but you really only need to read one of the funniest breakdowns of journalistic incompetence I’ve ever read: “Contango Confusion” by Powerline’s John Hinderaker.

Unfortunately, young Mr. Fang has neither the business experience nor the intelligence to understand the issues about which he writes. The result is that nearly every sentence is a howler. Among other things, while a contango market is the main subject of Fang’s post, he doesn’t know what the phrase means.

Fang labels Koch Industries as “oil speculators.” But, explains Hinderaker, Koch Industries is not really in the oil speculation business.

“Koch buys and sells physical oil; it transports oil; it refines oil. It does some unhedged trading too, but in that field it is a minor player compared to, say, Goldman Sachs. If Think Progress wants to attack petroleum speculators, Goldman Sachs should be in the dock–except that Goldman Sachs is a top contributor to Barack Obama and the Democratic Party.”

Koch has little business in the extraction process. Instead, Koch focuses on shipping crude oil, refining it, distributing it to retailers — then speculating on the future price. With control of every part of the market, Koch is able to bet on future prices with superior information.

Huh? Koch sells oil to retailers, and “then” speculates on the future price? Isn’t that a little late? One wonders whether these people even read what they write before publishing it.

And what is this about “control of every part of the market”? Fang just made that up. The oil business is highly competitive, and Koch Industries is, in international terms, a small player. Let’s take refining: the U.S. Energy Information Administration publishes data on America’s biggest refineries. Koch owns three of the 141 largest refineries in the United States; its biggest weighs in at number twelve. So how, exactly, does Koch “control every part of the market”?

Young Mr. Fang continues:

In 2008, Koch called attention to itself for “contango” oil market manipulation. A commodity market is said to be in contango when future prices are expected to rise, that is, when demand is expected to outstrip supply.

This is incorrect. “Contango” is not “market manipulation.” On the contrary, it is the natural state of most markets. It simply means that at a given time, the price of a forward or futures contract is trading above the present spot price. This is what you would expect, given the time value of money. Occasionally, for various reasons, this usual condition may not hold; then we have what is called “backwardation.” A contango market has nothing to do with any expectation; rather, if the futures price is higher than the spot price, as is normally the case, it is a contango market. It is quite remarkable, really, that anyone would try to write a blog post about a contango market without even knowing what the term means.

Hinderaker goes on and on about Fang’s economic incompetence, and it’s worth reading every word of that post – and discounting every word Fang has ever written.

Categories: On the Blog

VIDEO: The Organic Food Certification Racket

April 30, 2014, 1:13 PM

Mischa Popoff is one of the most formidable opponents of the organic food industry, as this video of his speech to the Far West Agribusiness Association clearly shows. The simple message he is carrying to the masses is that the organic industry, particularly the organic certification industry, is a big racket.

There was once a time, decades ago, when organic farming was about increasing consumer choice. The organic farmers chose to compete in the free market and sought to carve a niche in the marketplace for higher-end, organic produce.

But the farmers were hijacked by hardline activists. “Activists took over from the farmers…their first stop was with Big Government in the 1990s under the Clinton Administration,” Popoff explains. “This is the same activism that led to ethanol subsidies, windmills, and solar power.”

Those activists were, and still are, unconcerned with science, sustainability, or actually feeding the hungry multitude. For them it’s all about pushing their radical agenda. And because they cannot do this in the world of public information, they had to turn to sowing disinformation and to pushing their agenda in government. “You can do anything in Washington,” says Popoff. “The science, the public backing for it, none of that matters. Just go to Washington and talk to the right people.”

The radical activists have transformed the organic food movement from a pro-choice, free market enterprise, into a state-run racket. As Popoff says, “It’s no longer about choice.”

The organic industry has become an extorter of government rents at the expense of the taxpayers. This is particularly true of organic processors, which, like organic farms, must be certified. Popoff shows that the ludicrously large number of these processors in North America far exceeds what the continent’s entire organic production could supply. But surely, for these processors to stay in business, they need to fill their capacity. How do they do this?

The answer is simple. They import “organic” produce from the rest of the world. There is nothing inherently evil about importing foodstuffs, organic or otherwise. But when the government subsidizes those businesses with taxpayer money, then there is reason for outrage. That is exactly what happens with these processors, which grow fat on subsidies at the expense of us, the people.

The sham that is the organic food industry needs to be fought at every level. It is critical that people stand up to the racketeers and deny them the ability to continue their corrupt shenanigans.

Categories: On the Blog

The Right Needs a New Message on Income Inequality

April 30, 2014, 10:06 AM

Few French economists have achieved the kind of adulation Thomas Piketty has experienced recently from the media and the left. Within the context of the American political scene, Piketty’s dour predictions for the future of capitalism and his call for a “utopian” global wealth tax fit perfectly with the left’s frame of an inequality message.

This political frame glosses over much of the subtlety of Mr. Piketty’s book, but it plays to the strengths of the old-school class-warfare terminology used by Paul Krugman and others. It’s a message that ABC News’s George Stephanopoulos suggested this weekend is catching conservatives “flat-footed.”

But the left faces several challenges to its message on income inequality: First, the Democratic Party largely owns the current regulatory system and the relationship between government and Wall Street (not just with Dodd-Frank but with a host of policies under President Barack Obama); second, the left offers few politically realistic answers to the inequality challenge it frames (proposals to raise the minimum wage and enact crushing, anti-growth taxes on high earners and inheritances are provocative but unlikely to go anywhere); third, their most public champion after the coming election cycle is not the populist Elizabeth Warren but Hillary Clinton, a presidential candidate as closely tied to America’s 1% as, well, Mitt Romney.

And when it comes to measuring the health of America’s economy, the truth is that income inequality is simply not a significant problem.

Several recent studies have shown that U.S. economic mobility is very good: Most Americans will move up and down the income ladder over the course of their lives, reflecting little to none of the class stratification and inheritance concerns warned about by inequality mavens.

But as a political matter, it’s certainly possible that Republicans could be flat-footed in responding to these charges.

What the right should learn from the Piketty pother is that it needs an updated economic message that speaks to the challenges of the times. For decades, the GOP economic agenda has amounted to “lower taxes” – but for many Americans, high taxes are less of a concern than anti-growth economic policies. Those on the right should be prepared to make the case that the warped relationship between Wall Street and Washington needs to be fixed, that socialized risks and privatized profits are fundamentally unfair, and that Mr. Piketty’s equality-focused policy solutions, and those of the left, would hurt income mobility and systematically destroy wealth and growth.

There is an enormous opportunity here for a message of free-market fairness — if the right has the wherewithal to seize it. Otherwise, Republicans could slide into the trap of debating new entitlements and renewed redistribution, the kind where the left will always outbid them. As Friedrich Hayek wrote, “There is all the difference in the world between treating people equally and attempting to make them equal.” The former, not cheaper versions of the latter, represents the way forward for the right.

[Originally published by the Wall Street Journal]

Categories: On the Blog

The Lower Cost of a Truly Limited Government

April 30, 2014, 10:00 AM

A demonstration of just how far the United States has moved from its original founding principles is seen in the fact that in all the jousting over ObamaCare, the general rise in “entitlement” spending, and the burden of government regulation over American enterprises, there is one question that seems rarely to be asked: What should be the size and scope of government, and what would it cost if government were cut down more to the size delineated in the original Constitution?

Whenever, occasionally, this question is asked, the answer seems to be very far from what the Founding Fathers had in mind, if one is in any way familiar with their conception of limited government and individual liberty.

Thinking the Clinton Years were “Limited Government”

For example, in 2012 two books appeared by “conservative,” free market-oriented economists who were clearly trying to influence the terms of the political debate during that presidential election year.

They were, “First Principles: Five Keys to Restoring America’s Prosperity” by John B. Taylor and “Why Capitalism?” by Allan H. Meltzer. John Taylor is a professor of economics at Stanford University and a highly regarded monetary theorist who is generally critical of Keynesian “activist” fiscal and central banking policy. He served in President George W. Bush’s Council of Economic Advisors and as Treasury Undersecretary for International Affairs.

Allan Meltzer is a professor of political economy at Carnegie Mellon University, and an internationally acclaimed expert on the history of the Federal Reserve, as well as one of America’s leading monetary theorists who, also, has long been critical of “easy money” policies by America’s central bank.

They both believe strongly in the value and importance of a competitive economic system that fosters entrepreneurship, innovation, and rising standards of living. They also don’t think that government has the knowledge, wisdom or ability to direct a complex market order.

Their books contain insightful and wise analysis of how and why America has gotten into its present dismal situation. Plus, as “insiders” in the halls of Washington, D.C. at one time or another, they have many interesting examples of the manipulation, corruption and inefficiencies to be found in American politics.

So what, in their views, should the current size of government be cut back to? Since they were both writing “political” books in an election year, perhaps they were fearful of seeming to be too “radical” in a more limited government direction. But the fact is that for both of them it seems that getting government cut down to its dimensions during the 1980s and 1990s would more or less set everything straight.

For many of us who lived in the 1990s during the presidencies of the older Bush and then Bill Clinton, however, government taxing, spending, and regulating all seemed to be far too high and extensive, given an older conception of what a free America could and should be like.

In other words, two prominent and respected market-oriented economists made the case for using the interventionist welfare state of merely twenty years ago as the benchmark for a freer America.

A More Reasonable Benchmark for Judging “Big Government”

So from a more clearly classical liberal perspective, what might be used as a more reasonable standard or benchmark of a limited government and freer market society in judging the size and scope of government today?

The first edition of The World Almanac was published in 1868. The entire federal government fit on one page in that first edition; half of that page was taken up with listing the names of the U.S. ambassadors to foreign countries.

The executive branch of the federal government included only seven departments: Treasury, State, War, Navy, Interior, Attorney General, and the Postmaster General. And this was after the significant growth in the federal government during the recently fought American Civil War (1861-1865)!

Today, the listing of all departments, bureaus and agencies of the federal government takes up at least seven or eight pages of very small print in The World Almanac. The administrative units of the federal government that regulate, control, supervise, plan and oversee virtually every aspect of American life now number in the hundreds. The United States government has sometimes been portrayed as a giant octopus whose numerous tentacles are wrapped around everything that any American does in the market, social, or personal spheres of life.

The Cost of a More Constitutionally Downsized Government

Let us suppose that government were to be “downsized” to what it was in 1868, as listed in that first edition of The World Almanac. What would be the cost of government and the tax burden on the American citizenry? In making such an estimate, let us recall that the departments of war and the navy are now part of one Department of Defense. Let us also presume that the government’s post office monopoly is abolished and all forms of mail delivery are fully left to private competitive enterprise, so there is no longer a Postmaster General.

That would leave five executive level departments comprising the entire federal government: defense, justice, interior, treasury, and state. In terms of their combined expenditures in 2013, together the cost of the federal government would come to about $900 billion, if that was all for which Washington was responsible.

Of course, this presumes that the current activities of these departments would not themselves be radically reduced to be more consistent with the “original intent” of the Founding Fathers in the Constitution; if they were, the cost would be lower still.

If we add the interest paid on the national debt in 2013 ($233 billion) to the cost of this smaller government, the total then would be $1.1 trillion, or less than one-third of what the federal government actually spent in fiscal year 2013.

Washington spent over $28,200 per American household in 2013. If government were to be reduced to its 1868 size, that dollar spending per household would shrink to $9,800.

Approximately 140 million Americans filed tax returns in 2012. Median household income in 2013 was about $51,000. If the current progressive income tax were transformed into a flat income tax with a rate of about 16 percent, then the average tax burden per taxpayer would be around $8,000. (Repeal of the federal income tax might also be introduced at some point, of course!)
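
For readers who want to check the arithmetic, here is a minimal sketch using only the figures quoted above; the dollar amounts and counts are the article’s, and only the rounding is mine.

```python
# A quick check of the arithmetic above, using only figures quoted in the article.

downsized_spending = 1.1e12   # five executive departments plus 2013 interest on the debt
taxpayers = 140e6             # approximate number of returns filed in 2012
median_income = 51_000        # approximate 2013 median household income
flat_rate = 0.16              # the hypothetical flat income tax rate discussed above

cost_per_filer = downsized_spending / taxpayers      # roughly $7,860 per taxpayer
flat_tax_on_median = flat_rate * median_income       # roughly $8,160 on a median income

print(f"Cost of the downsized government per filer: ${cost_per_filer:,.0f}")
print(f"16 percent flat tax on the median income:   ${flat_tax_on_median:,.0f}")
```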

Plus, there would be no budget deficit, and no other taxes of any sort would have to be imposed to cover the costs of running this very much smaller federal government.

Limited Government Means Ending the Welfare State

Of course, the first reaction to these numbers is, no doubt: But what happens to all that other government spending, especially the “entitlement” programs (Social Security and Medicare, in particular) that in the 2013 fiscal year ate up about 50 percent of the government’s nearly $3.5 trillion of total spending?

Clearly, these programs would have to be phased out, privatized, “denationalized,” removed from the controlling and redistributing hands of government. Even if there were the political will to move in that direction, it would no doubt take some time to remove them from the functions of a truly constitutionally limited federal government.

But the point of this seemingly unrealistic exercise is precisely to mark off that point on the political horizon that should be the goal towards which friends of freedom should want to see America move.

This may seem terribly “fantastic” to many in America today. How could government ever be, well, that small?

In fact it was, and only one hundred years ago. In 1913, the year before the First World War began and the year the federal income tax amendment to the Constitution was ratified, all levels of government – federal, state, and local – taxed less than 8 percent of the country’s Gross Domestic Product. In other words, 92 percent of all income received remained in the hands of those who had earned it in the private sector marketplace.

There was no welfare state, no “entitlement” programs. Americans took it for granted that those who had fallen on “hard times,” and who were in “need” and “deserving” of such assistance, would receive it through private philanthropy and voluntary community charity. A careful reading of the history of that earlier time shows that the private benevolence of free individuals worked very well.

Freedom Means the “Let-Alone Principle”

It is also worth recalling that there was a time when most Americans did not think that government was supposed to be responsible for them. The vast majority of Americans viewed themselves as self–governing and self-responsible individuals.

Simon Newcomb (1835-1909) was a prominent American astronomer and noted free market economist who taught at Johns Hopkins University in the second half of the nineteenth century. In 1870, he published an article in which he summarized what he took to be the common-sense ideal of what he called, “The Let-Alone Principle.” Newcomb said:

“That each individual member of society should be left free to seek his own good in the way he may deem best, and required only not to interfere with the equal rights of his fellowmen . . .

“The let-alone principle may be regarded either as a declaration of rights or as a maxim of political policy. In the first case, the principle declares that society has not the right to prevent an individual who is capable of taking care of himself from seeking his own good in the way he deems best, so long as he does not infringe on the rights of his fellowmen.

“In the second case, the principle forms the basis of a certain theory of governmental policy, according to which the political system is most conducive to the public good in which the rightful liberty of the individual is least abridged . . .

“It needs only a consideration of first principles to make it plain that the main object of government is the protection of minorities, especially those most powerless minorities, individuals . . . It makes little difference to the minority or to any particular individual whether [his] rights are disregarded by a despot, a highwayman, or a majority of his fellow-citizens, wielding the powers of government . . .

“The real point in dispute between the friends and the opponents of free government and individual liberty is simply this: Is man a being to be taken care of, or is he able when protected in his rights to take care of himself better than any governing power – congress, king, or parliament – can take care of him? The advocates of universal freedom claim that, if each individual is protected in the enjoyment of his individual rights as a responsible member of the community, he can take care of himself, and manage his own affairs and his share of the public affairs better than any other one else can do these for him.”

Simon Newcomb added that government “interference is so apt to lead to unforeseen complications, – that the best course for a government to follow is, to adhere to the let-alone policy as a matter of principle.”

Losing Sight of the Value of Self-Responsible Freedom

The danger, therefore, was drifting into the false and dangerous belief that individuals could not and should not be self-responsible, and that government could and should take paternalistic responsibility for people, instead.

Another prominent American free market economist who expressed this in the late nineteenth century was J. Laurence Laughlin (1850-1933), who founded the economics department at the University of Chicago. In 1887, Laughlin warned:

“Socialism, or the reliance on the state for help, stands in antagonism to self-help, or the activity of the individual. That body of people is certainly the strongest and the happiest in which each person is thinking for himself, is independent, self-respecting, self-confident, self-controlled, and self-mastered. When a man does a thing for himself he values it infinitely more than if it is done for him, and he is a better man for having done it . . .

“If, on the other hand, men constantly hear it said that they are oppressed and down-trodden, deprived of their own, ground down by the rich, and that the state will set all things right for them in time, what other effect can that teaching have on the character and energy of the ignorant than the complete destruction of all self-help? They think that they can have commodities that they have not helped to produce. They begin to believe that two and two make five . . .

“The danger of enervating results flowing from dependence on the state for help should cause us to restrict the interference of legislation as far as is possible, and should be permitted only when there is an absolute necessity, and even then it should be undertaken with hesitation.”

Laughlin added, “The right policy is a matter of supreme importance, and we should not like to see in our country the system of interference as exhibited in the paternal theory of government existing in France and Germany.”

Unfortunately, America did import the theory and policy of political paternalism from the collectivist trends then growing stronger in the late nineteenth and early twentieth centuries in Europe. They became the basis and rationale for a far bigger government in the United States beginning in the Progressive Era in the early decades of the twentieth century and accelerating in the New Deal days of the Roosevelt administration in the 1930s. They have continued ever since, up to our own time, under both Democrats and Republicans.

But the ideological wind is out of the sails of the interventionist welfare state. It continues to exist in America and indeed around the world not because most people really believe that government can solve all their ills and make a paradise on earth, but more out of pure political inertia due to a lack of rightly reasoned principles for a rebirth of a philosophy of individual rights that would logically lead to and necessitate a truly limited government.

Our task, however daunting it may seem at times, is to offer a new vision of a free society grounded in the concept of individual rights that can once again capture the excitement and confidence of our fellow citizens. When that is accomplished the size and cost of government, over time, will be reduced accordingly. And Americans will live in and value a far freer and more prosperous country.

[Originally published at EpicTimes]

Categories: On the Blog

Time for Organic Activists to Stop Spreading Lies

April 30, 2014, 9:38 AM

Wouldn’t making it in America be easy if you could just pass laws to put your competition out of business? That’s precisely what’s being attempted by anti-GMO organic activists across America today. Rather than win one consumer at a time in the market, attempts are being made to either label foods containing genetically-modified ingredients like a pack of cigarettes, or to simply ban them outright.

For the campaign to ban GMOs outright, we turn to Dr. Lanita Witt, an organic farmer in Oregon. And for the campaign to label GMOs – in spite of the complete lack of evidence that they cause any harm to humans, animals or the environment – we turn to Senator David Zuckerman, an organic farmer and state legislator from Vermont.

Activists like Witt and Zuckerman never tire of pretending that genetically-modified organisms (GMOs) pose a threat to organic farms and the very health of the American public, citing “alarming impacts on industrial agriculture” along with concern “about the long-term health of our nation’s soils, water, flora and fauna.”

But, stop and think. If there were any chance whatsoever that GMO crops might put organic farmers like Witt and Zuckerman at risk, why didn’t organic stakeholders like Witt and Zuckerman say so in their standards for organic production? And why has there never been a single organic farmer who was de-certified, let alone faced disciplinary action, for alleged “contamination” of his crops by GMOs?

The USDA National Organic Program (NOP) makes no mention whatsoever of GMOs contaminating or in any way undermining the organic integrity of organic crops. Full stop. Either people like Dr. Witt and Sen. Zuckerman are ignorant of the actual rules of organic production in America, or they are willfully ignoring federal laws on organic production that were written, edited and finalized by American organic stakeholders during the Clinton Administration.

There is no basis to Witt’s claim that GMO crops “put our family farmers at risk,” or that they endanger, as Zuckerman claims, the “health of our nation’s soils, water, flora and fauna.” In fact, such statements could very well be interpreted as defamatory, given that they are based neither upon science nor, as mentioned, the very laws for organic production that organic stakeholders like Witt and Zuckerman helped write! Such statements are, at the very least, a form of false advertising for the tax-subsidized American organic movement.

The organic industry has grown exponentially over the very same time as the use of GMO crops on American farms has grown. So why lie and pretend GMOs pose some sort of risk? Clearly if there was any threat posed by GMOs to organic farming in America, the American organic industry wouldn’t today be worth more than all of Major League Baseball combined. If anything, it would appear that the existence of GMOs is good for the organic industry.

As Zuckerman himself admits, campaigns to force the labelling of GMO foods, alongside attempts to ban them outright, are “all, for lack of a better word, organic.” Ha ha — how droll, Mr. Zuckerman. But in all seriousness, is this really what being organic in America has come to mean? Attacking technologies that you disagree with?

The organic industry is really just a federal marketing system, as Clinton’s Secretary of Agriculture Dan Glickman stressed: “Let me be clear about one thing. The organic label is a marketing tool. It is not a statement about food safety. Nor is ‘organic’ a value judgment about nutrition or quality.”

On behalf of the hundreds-of-thousands of American farmers who choose to grow GMO crops, Dr. Witt and Sen. Zuckerman should stop spreading fear over this perfectly-safe and highly-beneficial form of agricultural technology.

Instead of attacking their competition with misguided and decidedly unscientific political gambits, Witt and Zuckerman should quietly return to tending to their organic crops, and stand on their own merit. Who knows? They might even enjoy not being so darn negative all the time.


[Originally published at the Daily Caller]

Categories: On the Blog

Ben Carson: I Never Thought I Would Live Past Age 25

April 30, 2014, 1:29 AM

Dr. Benjamin Carson was the latest guest of The New York Group, which is run by Mallory Factor, a friend of The Heartland Institute via his “Shadowbosses” presentation and the Heartland Daily Podcast. If you haven’t bookmarked the New York Group site, do so. They have great guests who provide excellent conversations about proper and practical ways to think about our liberty.

In the latest discussion — which you can see below — Dr. Ben Carson talked in his opening remarks about how he hated being poor, how he rejected that mindset, how reading as a child pulled him away from poverty, how he earned his way to Johns Hopkins, how he innovated in surgery, the immoral selfishness of a $17 trillion national debt, how he refuses to “submit to the PC police,” how pop culture affects the lives of Americans in a negative way, and many more topics.

After his opening remarks, Carson took questions from a panel of intellectuals.

David Webb: Asked about education and Carson’s education scholarships. Carson talked about the scholarship programs he gives to underprivileged kids, and why conservatives have to put “wheels” behind such privately funded programs.

Peggy Noonan: Asked what is the most misunderstood aspect about conservatism. Carson’s answer: The great compassion of conservatives — and, yes, even during the “robber baron” era.

James Taranto: Asked whether the Supreme Court was right to affirm a right to bear arms in DC, Chicago, and other urban settings. Carson says he has “evolved” on this over the years. People need to defend themselves. Full stop.

David Webb: Asked about Carson’s rough upbringing in Detroit. Carson answered: “God has a sense of humor” and turned the knives he was threatened with in his youth into the scalpels of his life as a surgeon.

Audience question: What about pop culture? Carson said: I’ve spoken to the makers of pop culture and hip-hop. What if you told young ladies that getting pregnant early ends their education, and that’s a terrible thing? Put that message in your music.

There’s much more, including Carson’s urging for folks to visit: carsonscholars.org and SaveOurHealthcare.org

Watch  below:

Categories: On the Blog

Repeal Jones Act Before Exporting Oil

April 29, 2014, 10:17 PM

For the past 40 years, in response to the OPEC embargo of 1973, crude petroleum exports from the U.S. have been severely restricted. Back then, we referred to oil as “liquid gold” and felt we needed to hoard our limited supplies. But because of the “shale revolution,” U.S. oil output is at its highest level in more than 25 years. In 2013 alone, production jumped by more than 1 million barrels a day, and output is projected to jump another 1 million in 2014.

This newfound abundance has come primarily from the application of horizontal drilling and hydraulic fracturing in the many shale plays currently under development, most notably the Eagle Ford in South Texas and the Bakken in North Dakota. In response, recent months have witnessed a virtual explosion of debate and commentary about the United States getting into the business of exporting crude oil.

Some politicians and pundits claim that exporting oil will divert us from the path toward “energy independence.” Others argue that exporting oil will weaken our energy security since we’re still a net importer. Still others claim that keeping domestic oil at home will help lower gasoline and diesel prices. These arguments are baseless.

What’s more, it’s hard to envision a political scenario that would result in our inability to import oil. Over the past year, we’ve seen political unrest in Iraq, Libya, Bahrain, Syria and other petroleum exporting countries, but there has been little change in oil prices.

But removing the ban on oil exports will be a pyrrhic victory for consumers unless the Jones Act is repealed as well. A section of The Merchant Marine Act of 1920, the Jones Act requires that any ship carrying goods or commodities in U.S. waters between U.S. ports be built, registered, owned and crewed by American citizens or permanent residents. Though originally intended to improve the nation’s maritime security, today the Jones Act is simply a form of protectionism for America’s shipping industry and seafaring unions.

More important, the Jones Act distorts the allocation of America’s crude oil resources, increases our dependence on imports, and drives up energy prices for businesses and households in the Northeast. For example, today we have a glut of “light sweet crude” being produced in the Eagle Ford play of South Texas because Gulf Coast refineries are geared principally to processing heavy grades of crude oil. Unfortunately, because there are no crude oil pipelines connecting the Gulf Coast to the Northeast, where most refineries are designed to process sweet crude, those facilities must rely on imported oil that is typically more expensive. East Coast refineries are also receiving light crude from the Bakken by rail tank car, a costly and risky way to move oil over long distances.

In theory, crude oil could be shipped by Jones Act tankers from the Gulf to the Northeast. But the cost would be prohibitive, about $4 per barrel. By contrast, shipping oil by foreign-flagged carriers would cost only about $1.20 per barrel.
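
To put those per-barrel rates in perspective, here is an illustrative calculation. The $4.00 and $1.20 figures are from the article; the 300,000-barrel cargo size is an assumption of mine, roughly in line with a medium-range product tanker, used only for illustration.

```python
# Illustrative cost gap between Jones Act and foreign-flagged shipping.
# Per-barrel rates come from the article; the cargo size is an assumption.

jones_act_cost_per_bbl = 4.00      # Gulf Coast to Northeast, U.S.-flagged tanker
foreign_flag_cost_per_bbl = 1.20   # same route, foreign-flagged tanker
assumed_cargo_bbl = 300_000        # hypothetical cargo size for illustration

premium_per_bbl = jones_act_cost_per_bbl - foreign_flag_cost_per_bbl
premium_per_voyage = premium_per_bbl * assumed_cargo_bbl

print(f"Jones Act premium: ${premium_per_bbl:.2f} per barrel, "
      f"about ${premium_per_voyage:,.0f} on a {assumed_cargo_bbl:,}-barrel cargo")
```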

Repealing the Jones Act would generate a broad range of economic benefits, not only for residents of the Northeast but for all Americans. Refineries on the East Coast would have access to cheaper domestically produced crude oil, which would lower the cost of gasoline, diesel and fuel oil for households and businesses. The resulting drop in imported oil would enhance U.S. energy security while at the same time improving our balance of payments.

Boosting the demand for domestic oil will also help sustain the energy boom that has created hundreds of thousands of jobs in recent years against the backdrop of a less-than-robust economic recovery from the Great Recession. Should repeal of the Jones Act be accompanied by, or followed by, the removal of restrictions on U.S. crude oil exports, the positive economic impacts would be magnified.

The U.S. has become a global energy powerhouse. Let’s start acting like one by removing anti-competitive and anti-growth relics like the Jones Act and the ban on crude oil exports.

[First published at the San Antonio Express-News.]

Categories: On the Blog

Heartland Institute Experts React to Supreme Court Ruling on EPA Air Pollution Rule

April 29, 2014, 9:43 PM

The U.S. Supreme Court on Tuesday ruled 6–2 to affirm the Environmental Protection Agency’s ability via the so-called “Cross-State Air Pollution Rule” to regulate power-plant emissions when those emissions have the potential to hurt downwind air-quality. The ruling in EPA v. EME Homer City Generation will affect about 1,000 power plants in 28 states.

The following statements from environment policy and legal experts at The Heartland Institute – a free-market think tank – may be used for attribution. For more comments, refer to the contact information below. To book a Heartland guest on your program, please contact Director of Communications Jim Lakely at jlakely@heartland.org.

—–

“Through geographical luck, emissions from East Coast states like Connecticut and Massachusetts drift over the Atlantic Ocean where there are no states demanding abatement or compensation. Then, hypocritically, these same East Coast states complain about emissions crossing into their state borders from upwind states. EPA is all too happy to take advantage of these hypocritical complaints as an excuse to expand the agency’s power through new rules and restrictions.

“It is a shame that the U.S. Supreme Court continues to empower EPA to issue nonsensical interpretations of statutes with the primary goal of amassing more money and power.”

James M. Taylor
Senior Fellow for Environmental Policy
The Heartland Institute
jtaylor@heartland.org

—–

“The Supreme Court’s ruling is very unfortunate. The justices are not scientists at any level and cannot imagine how totally unscientific, and in fact erroneous, are the EPA standards required for air quality. If those standards held a modicum of reasonableness from a scientific standpoint, the court’s decision would not be terrible. The dispersion of chemicals in the air is such that a concentration that could be harmful to human health a few hundred yards away is clearly innocuous a few miles away.

“The justices optimistically believe that EPA knows what it is doing. Actually, the EPA does know what it is doing — which is to do the bidding of environmental extremists who wish at every level to stifle economic progress in the name of public health.

“It is a sad state of affairs that the extreme alarmists, without a scientific leg to stand on, are winning for now. Hopefully, the day will come when an administration will fill EPA with scientists instead of anti-progress greens.”

Jay Lehr
Science Director
The Heartland Institute
jlehr@heartland.org

—–

“Clean air and clean water are, of course, good things, but so are constitutional government and the rule of law. Agree or disagree with today’s decision, it is probably best encapsulated by some thoughts from the first and last paragraphs of Justice Scalia’s dissent:

“ ‘Too many important decisions of the Federal Government are made nowadays by unelected agency officials exercising broad lawmaking authority, rather than by the people’s representatives in Congress. With the statute involved in the present cases, however, Congress did it right. … EPA’s utterly fanciful ‘from each according to its ability’ construction sacrifices democratically adopted text to bureaucratically favored policy. Addressing the problem of interstate pollution in the manner Congress has prescribed … is a complex and difficult enterprise. But ‘[r]egardless of how serious the problem an administrative agency seeks to address, . . . it may not exercise its authority’ in a manner that is inconsistent with the administrative structure that Congress enacted into law.’ Brown & Williamson, 529 U. S., at 125 (quoting ETSI Pipeline Project v. Missouri, 484 U. S. 495, 517 (1988)).’ ”

David L. Applegate
Policy Advisor, Legal Affairs
The Heartland Institute
media@heartland.org

—–

“April seems to be the month in which the Supreme Court devotes itself to decisions that have no basis in real science and can do maximum damage to the economy. Invariably, the cases are brought by the Environmental Protection Agency and are decided in its favor.

“In April 2007, the court decided that carbon dioxide, the second most essential gas for all life on the planet, was a ‘pollutant,’ the definition the EPA had applied to it in order to regulate it. Now comes word that the court has concluded that the EPA may regulate power-plant emissions that blow across state lines as per a 2011 regulation, the Cross-State Air Pollution Rule.

“Not content to put 150 or more coal-fired power plants out of commission, the court’s ruling now gives the EPA authority to do the same thing to about a thousand power plants in the eastern half of the U.S. that will have to adopt new pollution controls or reduce operations.

“In effect, the court has just agreed to a regulation that represents a major increase in the cost of electricity in 28 states. The EPA’s claim that this will save lives it attributes to the alleged pollution is as bogus as all the rest of its justifications, the purpose of which is to undermine the nation’s economy in every way it can.”

Alan Caruba
Founder, The National Anxiety Center
Policy Advisor, The Heartland Institute
acaruba@aol.com

—–

The Heartland Institute is a 30-year-old national nonprofit organization headquartered in Chicago, Illinois. Its mission is to discover, develop, and promote free-market solutions to social and economic problems. For more information, visit our Web site or call 312/377-4000.

Categories: On the Blog

Two Cheers for Illinois Taxpayers!

April 29, 2014, 1:55 PM

Monday, April 28, 2014, purportedly marked Tax Freedom Day for Illinois taxpayers.

Coming thirteen days after state and federal income tax returns were initially due, Tax Freedom Day, according to the Illinois Policy Institute’s Senior Budget and Tax Policy Analyst Benjamin VanMetre,  marks the point in the year when Illinoisans have worked long and hard enough in the aggregate to cover their share of state, federal and local taxes “and can start keeping their hard-earned money.”  About a third of Illinois residents’ efforts this year – 118 days’ worth out of the calendar year’s 365, in other words – went just to paying taxes.

Like all statistics, though, even if technically accurate, this one is misleading.

Due to differences in income, home ownership, and spending habits, no two Illinois taxpayers likely pay the same percentage of their income in taxes.  (Put otherwise, no “average taxpayer” actually exists.)  Low-income renters pay relatively (and nominally) less in income and property taxes and possibly relatively more (but nominally less) in sales taxes.  High-income homeowners likely pay both relatively and absolutely more in income and property taxes and possibly relatively less (but nominally more) in sales taxes.  They aren’t likely simply to average out.  Many people pay a whole lot more, and thus effectively longer.

As in the federal system, a relatively smaller number of higher-income working individuals pays a disproportionate share of total taxes.  These taxes fund not only reasonably necessary government services but also wealth-transfer programs including pensions to former state officials, teachers, and other public employees who once arguably provided government services but no longer do so.  A relatively larger number of lower-income taxpayers pays a smaller percentage of total taxes and, in some cases, receives net payments from the government, therefore being effectively taxed at a negative tax rate.

Putting aside for a moment how their money is spent, things are about to get worse for Illinois taxpayers.  The – ahem! cough!  cough! – “temporary” 67% increase in the state’s flat rate income tax of a couple of years ago from 3% to 5% of adjusted gross income is about to take one of two directions:  a drop to a flat 3.75% effective January 1, 2015 – still 25% higher than its 3% predecessor – or, more likely, a change to a “progressive” income tax system for persons who earn income in Illinois.

The “progressive” tax increase being sold as a solution to Illinois’ increasingly desperate financial straits is likely only to exacerbate them.  If enacted, it would decrease the Illinois income tax rate from the scheduled reduced 3.75% to 2.9% of adjusted gross income only for those making the pathetically small amount of $12,500 per year or less; increase the rate to 4.9% for those making up to $180,000 per year; and hike the rate on those above $180,000 to 6.9%.

Why 2.9% and 4.9% instead of 3% and 5%, respectively?  Because that way the politicians in Springfield can cynically claim they’ve “reduced” tax rates for a majority of Illinois taxpayers even though everyone making over $12,500 per year in Illinois will be paying more than they were three years ago.  (Those in the 4.9% bracket will pay 63.33% more than they did before the “temporary” tax increase, and the fortunate few earning over $180,000 will pay nearly two-and-a-third times as much.)
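
For those who want to verify the comparison, a short sketch of the rate arithmetic follows; the rates are those cited above, and the percentages simply follow from them.

```python
# Comparing the proposed Illinois brackets to the original 3% flat rate.

original_rate = 0.03    # Illinois flat rate before the "temporary" 2011 increase
middle_rate = 0.049     # proposed rate for incomes up to $180,000
top_rate = 0.069        # proposed rate for incomes above $180,000

print(f"Middle bracket vs. 3%: {middle_rate / original_rate - 1:.2%} higher")       # 63.33%
print(f"Top bracket vs. 3%:    {top_rate / original_rate:.2f}x the original rate")  # 2.30x
```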

Fortunately, this proposal would require an amendment to the Illinois constitution, and is drawing fire even from some perhaps unlikely quarters like the teachers’ unions.   Still, don’t be surprised to see some sort of personal income tax increase in Illinois come 2015, even as the state continues to refuse to rein in wasteful spending.

On second thought, make that only one cheer for Illinois taxpayers.

Categories: On the Blog

A Key Ingredient in the Left’s Wins: Persistence

April 29, 2014, 11:32 AM

In the late, great Harold Ramis’ cinematic classic “Animal House,” perpetual Faber College student John Blutarsky succinctly summed up the Left’s approach to policy.

“Over? Did you say ‘over?’ Nothing is over until we decide it is!  Was it over when the Germans bombed Pearl Harbor? Hell no!…

“It ain’t over now, ’cause when the going gets tough, the tough get going. Who’s with me? Let’s go! Come on!….”

Note the Left’s characteristic (historical) accuracy.

Bluto fictitiously rode this philosophy to the Senate.  Several Leftists have in real life taken it all the way to the White House.  Too many to count have infested Congress and elected offices all over the country.

The Left has made an art form out of the maxim “If at first you don’t succeed – try, try again.”  They perpetually push terrible, government-expanding policies – and no number of failures deters them from pushing until they win.

It is largely why they fight so hard to protect power grabs already won – no matter how huge the failures.

Trustees Say Long-Run Medicare, Social Security Deficit is $66 Trillion

Reform Or Go Broke: Medicaid, Medicare Must Change

In so many instances, they fought so hard and waited so long to impose them.

Their incessant, infernal screeching is largely for effect – the Left is actually very patient in waiting to attain new government power grabs.  A quintessential example – the abomination that is ObamaCare.

The Affordable Care Act – ObamaCare – was signed into law in 2010.  Were President Barack Obama and this modern cadre of Leftists the first ever to want a top-down government health care takeover?  Of course not.

The Bill Clinton Administration floated HillaryCare in 1993.  President Harry Truman proposed full-on federal government medicine in 1945.  And piecemeal government-medicine expansion began all the way back in the mid-19th Century.

The going got tough – and the Left kept going.  Do Establishment Republicans show this sort of stamina and intestinal fortitude?  Not so much.

McMorris Rodgers Says ACA Likely to Stay

“McMorris Rodgers…part of the Republican leadership in the House…said the framework established by the law likely will persist and reforms should take place within its structure.”

The ObamaCare framework will certainly persist if the leadership of the Party that is supposed to be for repeal – publicly gives up on repealing it.

Why offer the exact same “Mend it – don’t end it” message as the desperate-to-get-out-from-under-ObamaCare Democrats?  How does that differentiate you from them – and help you defeat them in, say, November 2014?

Especially when the law has never, ever enjoyed public support.

After Four Years, ObamaCare Is Still Unpopular

ObamaCare’s Unpopularity Rises Among Uninsured

Oh, Look, ObamaCare’s Unpopular Again

Republicans have tried without success for repeal of a terrible, majority-opposed law for…four years.  So sure, just throw in the towel.

The Left pushed for full-on government medicine for (at least) seven decades.  And they are consistently persistent.

It’s Groundhog Day – Again: Government Taking Third Stab at Net Neutrality Power Grab

This is the (Left’s) third attempt at this particular power grab.  It’s becoming fetishistic.

There was 2007.

Court Backs Comcast Over FCC on ‘Net Neutrality’

And there was 2010.

Verizon Wins Net Neutrality Court Ruling Against FCC

“Yet here we remain – stuck in government overreach Groundhog Day.”

Already twice bitten – they are not shy.  Onward they push.

“The price of freedom is eternal vigilance.”

The cost of a lack of vigilance is soaring and searing – and we’ve been paying it in skyrocketing increments for decades.

1913 federal budget: $715 million.

2013 federal budget: $3.45 trillion.

We here in Reality must be at least as vigilant and persistent in reducing government as the Left is at expanding it.

We have a lot of undoing to do.

Categories: On the Blog

The Multi-Speed Internet is Getting More Faster Speeds

April 29, 2014, 10:49 AM

The Internet has long had multiple speeds. And it constantly gets faster speeds via technological and commercial innovation, competition, and investment.

The Internet also has long met people’s diverse needs, wants and means for speed, with different technologies, pricing, and content delivery methods, and it will continue to do so.

Net neutrality activists’ latest rhetoric opposing the FCC’s court-required update of its Open Internet rules, which implies that there haven’t been “slow and fast lanes” on the Internet before, is obviously factually wrong and misleading, both for consumers receiving content and for entities sending content.

Many in the media have fallen for this mass “fast lane” deception without thinking or questioning it.

First, isn’t it odd that those who routinely complain that the Internet is not fast enough oppose genuine FCC efforts to make the Internet faster?

Moreover, isn’t it ironic that the net neutrality activists — who have long criticized the FCC for the U.S. falling behind in the world in broadband speeds, and long advocated for municipalities to create giga-bit fast lanes for some communities — vehemently oppose FCC efforts to create “faster lane” Internet for those entities that need it and are willing to pay for it?

Do net neutrality activists really want a faster Internet, or do they want a utopian Internet speed limit designed to enforce Internet speed equality by preventing anyone from going faster on the Internet than anyone else?

Second, Internet consumers have long had their choice of multiple Internet speeds.

Different technologies inherently offer different speeds. Fiber is faster than cable, cable faster than DSL and DSL faster than dial-up. Wireless speeds are very different depending on the protocol, LTE, 4G, 3G, 2G, etc., and depending on the amount of spectrum available and the number of people using a given cell at any given time.

Since people naturally have diverse wants, needs and means, the Internet marketplace has long offered consumers different market prices for different Internet access speeds, different amounts of usage, and different discounts for buying more than one service in a bundle.

The Internet also offers consumers different speeds for free-of-charge Internet access, depending on what public or private institution’s free WiFi one wants to use and how many others avail themselves of that free-of-charge Internet access at any given time.

Consumers know there is not one Internet speed. They know there are many speeds depending on whether one wants to pay nothing, something, or varying options of paying more.

Third, Internet content providers also have long had their choice of multiple speeds. Since the late 1990s there has been a vibrant and diverse marketplace of content delivery networks (CDNs) that content providers of almost all sizes can pay to ensure faster Internet delivery of their content.

This is generally accomplished by geographically-locating server farms closer to consumers in order to ensure their content can arrive faster and avoid congestion. But CDN competition continues to encourage a wide variety of innovations to ensure faster Internet delivery.

Importantly, the largest corporate users of downstream bandwidth, Netflix and Google-YouTube, which together consume half of North America’s downstream bandwidth per Sandvine, have long paid others to ensure faster Internet service for their customers than they would otherwise get without that additional payment for speed and performance.

Finally, net neutrality activists’ obsession with imposing an Internet speed limit to guarantee equality of transmission completely ignores that content providers must have other technical attributes besides speed to compete and best serve their customer bases over the Internet.

Cloud companies’ customers tolerate near no downtime, so they may need and want to pay for specialized higher quality-of-service than Internet “best efforts” can deliver. Video streamers and video conference call providers may need guaranteed faster speeds and specialized quality-of-service to prevent jitter or buffering problems.

Gaming companies and Voice-over-Internet (VoIP) providers may need prioritized, specialized services to ensure real-time quality-of-service without latency. Health care providers may need specialized services for a variety of very high-bandwidth needs, such as sending and receiving massive files like MRIs immediately and with no downtime.
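
To make the idea of prioritized service concrete, here is a minimal, hypothetical sketch of priority queueing (the class and packet names are invented): latency-sensitive traffic is transmitted ahead of bulk traffic waiting on the same link, which is the general technique behind real-time quality-of-service guarantees.

    # Minimal, hypothetical sketch of priority queueing: real-time packets
    # (VoIP, gaming) are dequeued before bulk packets (file downloads).
    import heapq

    REALTIME, BULK = 0, 1              # lower number = higher priority

    class PriorityLink:
        def __init__(self):
            self._queue = []
            self._seq = 0              # tie-breaker keeps FIFO order within a class

        def enqueue(self, packet, priority):
            heapq.heappush(self._queue, (priority, self._seq, packet))
            self._seq += 1

        def transmit_next(self):
            """Send the highest-priority packet currently waiting on the link."""
            _, _, packet = heapq.heappop(self._queue)
            return packet

    link = PriorityLink()
    link.enqueue("video-chunk-1", BULK)
    link.enqueue("voip-frame-1", REALTIME)
    link.enqueue("video-chunk-2", BULK)
    print(link.transmit_next())        # "voip-frame-1" goes out ahead of the bulk chunks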

In sum, the Internet has long had multiple speeds, faster and slower Internet lanes, to naturally meet the diverse needs, wants and means of those who use the Internet.

Imagine how silly it would sound if activists claimed it was unfair and anti-competitive to offer faster delivery of mail and packages, and that next-day or priority-expedited delivery by the U.S. Postal Service, FedEx, UPS, or DHL should be banned by government because the next innovative company in a garage could not afford it, and its freedom of speech would therefore be infringed.

The dispute here is not over freedom of speech; it’s over competing definitions of “free.” Net neutrality activists define “free” as no-cost use of the Internet, while others define “free” as the freedom to access the legal Internet content of their choice.

Internet speed limits, designed to force a zero-price on all downstream traffic via FCC price regulation, are unnecessary and antithetical to a faster Internet.

[Originally published at Precursor Blog]

Categories: On the Blog

Largest World Cities: 2014

April 29, 2014, 9:01 AM

The recently released 10th edition of Demographia World Urban Areas provides estimated population, land area and population density for the 922 identified urban areas with more than 500,000 population. With a total population of 1.92 billion residents, these cities comprise approximately 51 percent of the world urban population. The world’s largest cities are increasingly concentrated in Asia, where 56 percent are located. North America ranks second to Asia, with only 14 percent of the largest cities (Figure 1). Only three high-income world cities are ranked in the top ten (Tokyo, Seoul and New York), and at present growth rates, Tokyo will be the lone high-income representative by the mid-2020s.

Demographia World Urban Areas is the only regularly published compendium of urban population, land area and density data for cities of more than 500,000 population. Moreover, the populations are matched to the urban land areas where sufficient data is available from national census authorities.

The City

The term “city” has two principal meanings. One is the “built-up urban area,” which is the city in its physical form, encompassing virtually all of the land area encircled by rural land or bodies of water. Demographia World Urban Areas reports on cities as built-up urban areas, using the following definition (Note 1).

An urban area is a continuously built up land mass of urban development that is within a labor market (metropolitan area or metropolitan region). As a part of a labor market, an urban area cannot cross customs controlled boundaries unless the virtually free movement of labor is permitted. An urban area contains no rural land (all land in the world is either urban or rural).

The other principal definition is the labor market, or metropolitan area, which is the city as a functional (economic) entity. The metropolitan area includes economically connected rural land outside the built-up urban area (and may include smaller urban areas). The third use, to denote a municipal corporation (such as the city of New York or the city of Toronto), does not correspond to the city as a built-up urban area or metropolitan area. This can, and all too often does, cause confusion among analysts and reporters who sometimes compare municipalities to metropolitan areas or to built-up urban areas.

A Not Particularly Dense Urban World

Much has been made of the fact that more than one-half of humanity lives in urban areas, for the first time in history. Yet much of that urbanization is not of the high densities associated with cities like Dhaka, New York, or even Atlanta.

The half of the world’s urban population not included in Demographia World Urban Areas lives in cities ranging in population from the hundreds to the hundreds of thousands (see: What is a Half-Urban World). In the high-income world, residents of large urban areas principally live at relatively low densities, with automobile-oriented suburbanization accounting for much of the urbanization in Western Europe, North America, Japan and Australasia. This point was well illustrated in research by David L. A. Gordon et al. at Queen’s University (Kingston, Ontario), released last year, which concluded that the metropolitan areas of Canada are approximately 80 percent suburban.

Population

There are now 29 megacities, with the addition in the last year of London. London might be thought of as having been a megacity for decades; however, the imposition of its greenbelt forced virtually all growth since 1939 to exurban areas that are not part of the urban area, keeping its population below the 10 million threshold until this year (Demographia World Urban Areas Table 1).

The largest 10 contain the same cities as last year, though there have been ranking changes. Tokyo, with 37.6 million residents, continues its half-century domination, though its margin over growing developing-world cities is narrowing, especially Jakarta. Manila became the fifth largest urban area in the world, displacing Shanghai, while Mexico City moved up to 9th, displacing Sao Paulo (Figure 2).

Land Area

Often seen as the epitome of urban density, the urban area of New York continues to cover, by far, the most land area of any city in the world. Its land area of nearly 4,500 square miles (11,600 square kilometers) is one-third higher than Tokyo’s 3,300 square miles (8,500 square kilometers). Los Angeles, often thought of as defining low-density territorial expansion, ranks only fifth, following Chicago and Atlanta, with their substantially smaller populations (Figure 3). Perhaps more surprising is the fact that Boston has the sixth largest land area of any city in the world. Boston’s strong downtown (central business district) and relatively dense core can result in a misleading perception of high urban density. In fact, Boston’s post-World War II suburbanization is at urban densities little different from those of Atlanta, which is the world’s least dense built-up urban area with more than 3 million population. Now, 29 cities cover land areas of more than 1,000 square miles or 2,500 square kilometers (Demographia World Urban Areas Table 3).

Urban Density

All but two of the 10 densest cities are on the Indian subcontinent. Dhaka continues to lead in density, with 114,000 residents per square mile (44,000 per square kilometer). Hyderabad (Pakistan, not India) ranks a close second. Mumbai and nearby Kalyan (Maharashtra) are the third and fourth densest cities. Hong Kong and Macau are the only cities outside the subcontinent ranking among the densest ten (Figure 4). Despite China’s reputation for high urban densities, its highest-ranking city (Hengyang, Hunan) is only 39th (Demographia World Urban Areas Table 4).
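
For readers checking the units, the square-mile and square-kilometer figures quoted throughout differ only by a fixed conversion factor; a quick back-of-the-envelope sketch, using the figures reported above:

    # Quick unit check for the figures above (1 square mile = 2.589988 square kilometers).
    SQ_KM_PER_SQ_MI = 2.589988

    def density_per_sq_km(per_sq_mi):
        return per_sq_mi / SQ_KM_PER_SQ_MI

    def area_sq_km(sq_mi):
        return sq_mi * SQ_KM_PER_SQ_MI

    print(round(density_per_sq_km(114_000)))  # Dhaka: ~44,000 per sq km, as reported
    print(round(area_sq_km(4_500)))           # New York: ~11,650 sq km, close to the 11,600 quoted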

Smaller Urban Areas

Demographia World Urban Areas Table 2 includes more than 700 additional cities with fewer than 500,000 residents, mainly in the high-income world. Unlike the main listing of urban areas over 500,000 population, the smaller cities are not a representative sample and are shown for information only.

Density by Geography

Demographia World Urban Areas also provides an average built-up urban area density for a number of geographical areas. Africa and Asia have the highest average city densities, at 18,000 per square mile (7,000 per square kilometer), followed by South America. Europe is in the middle, while North America and Oceania have the lowest average city densities (Figure 5).

Some geographies, however, had much higher average urban densities. Bangladesh was highest, at 86,800 per square mile (33,000 per square kilometer), nearly five times the Asian average. Other geographies above 30,000 per square mile (11,500 per square kilometer) included Pakistan, the Democratic Republic of the Congo, the Philippines, India and Colombia, the only representative from the Western Hemisphere (Demographia World Urban Areas Table 5).

———————————————–

Note 1: Urban areas are also called “population centres” (Canada), “built-up urban areas” (United Kingdom), “urbanized areas” (United States), “unités urbaines” (France) and “urban centres” (Australia). The “urban areas” of New Zealand include rural areas, as do many of the areas designated “urban” in the People’s Republic of China, and, as a result, they do not meet the definition of urban areas above.

Note 2: Demographia World Urban Areas is a continuing project. Revisions are made as more accurate satellite photographs and population estimates become available. As a result, the data in Demographia World Urban Areas is not intended for comparison to prior years, but is intended to be the latest data based upon the best data sources available at publication.

[Originally published at New Geography]

Categories: On the Blog

Something Rotten in the State of Kentucky

April 28, 2014, 11:28 AM

Last week the presidential hopes of Senator Rand Paul took a serious blow. The Kentucky House of Representatives allowed a bill that would have permitted candidates to run for more than one elected office at a time to die without a vote. The bill could be revisited in the next legislative session, which begins in January 2015, but the House does not appear eager to pass it at all. And even if it did, Senator Paul would already be months behind other Republican contenders for the presidency in starting on the campaign trail.

The ability to run for multiple offices hardly seems like a serious issue. After all, nowhere in the bill does it suggest that, if elected to both offices, an individual would actually serve in both. All it means is that there might need to be a special election, or perhaps an appointment by the governor.

The decision to block Senator Paul’s dual run is pure tribal politics. The Democrat-controlled House is happy to cause problems for a senator who is popular among his constituents and is one of the chief contenders for his party’s presidential nomination. Forcing a choice allows them to score a victory whatever the choice may be. If he chooses to eschew the presidency and run for the Senate, the House will have scuppered a potentially powerful presidential contender. If he chooses to run for the White House, the Democrats can run a candidate for the Senate unburdened by the challenge of facing an incumbent.

There is certainly no lack of precedent for permitting candidates to run for multiple offices, from both sides of the political aisle. In 2000, Joseph Lieberman won his Senate seat in Connecticut while losing his run for vice president alongside Al Gore. Similarly, in 2012, Paul Ryan won his House seat while being defeated beside Mitt Romney.

Forcing the choice makes for bad outcomes. It deprives the public of valuable potential choices and can cost political movements dearly. Barry Goldwater ran for president in 1964, but could not also run for his Senate seat. When he lost the presidential race, he was out of the Senate and the country was deprived of one of its most competent legislators.

The choice to run for president is a monumental one. It demands a huge sacrifice of time, resources, and privacy. Making that choice even harder means the public suffers from a greater lack of good candidates to choose between.

Categories: On the Blog

Should California Dictate US Energy Policies?

April 28, 2014, 10:42 AM

California loves to be seen as the trendsetter on energy and environmental policies. But can we really afford to adopt its laws and regulations in the rest of America? Heck, can the once-Golden State afford them itself? The path to hell is paved with good intentions, counter-productive policies, and hypocrisy.

The official national unemployment rate is stuck at 6.7%, with much higher rates for blacks and Hispanics, and a labor force participation rate that remains the lowest in 35 years. Measured by gross national product, our economy is growing at an abysmal 1.5% or even 1.0% annual rate.

Meanwhile, California’s jobless rate is higher than in all but three other states, at 8.1%, with far worse rates, as high as 15%, for blacks, Hispanics and inland communities. First the good news, then the insanity.

Citigroup’s Energy 2020: North America report estimates that the United States, Canada and Mexico could make North America almost energy independent in six years, simply by tapping their vast recoverable oil and gas reserves. Doing so would help lower energy and consumer prices, insulate the three nations from volatile or blackmailing foreign suppliers, and spur job creation based on reliable, affordable energy, says the U.S. Energy Information Administration.

Driving this revolution is horizontal drilling and hydraulic fracturing. According to Citigroup, IHS Global Insights, the EIA and other analysts, “fracking” technology contributed 2.1 million jobs and $285 billion to the US economy in 2013, while adding $62 billion to local, state and federal treasuries! Compare that to mandates and subsidies required for expensive, unreliable, job-killing wind, solar and biofuel energy.

Fracking also slashed America’s oil imports from 60% of its total petroleum needs in 2005 to just 28% in 2013. It slashed our import bill by some $100 billion annually.

By 2020 the government share of this boom is expected to rise to $111 billion. By 2035, U.S. oil and natural gas operations could inject over $5 trillion in cumulative capital expenditures into the economy, while contributing $300 billion a year to GDP and generating over $2.5 trillion in cumulative additional government revenues.

A Yale University study calculates that the drop in natural gas prices (from $8 per thousand cubic feet or million Btu in 2008, and much more on the spot market, to $4.00 or so now) is saving businesses and families over $125 billion a year in the cost of heating, electricity and raw-material feedstocks.

The only thing standing in the way of a US employment boom and economic and industrial renaissance, says Citigroup, is politics: continued or even more oppressive anti-hydrocarbon policies and regulations.

Here’s the insanity. Fully 96% of this nation’s oil and gas production increase took place on state and private lands. Production fell significantly on federal lands under President Obama’s watch, with the Interior Department leasing only 2% of federal offshore lands and 6% of its onshore domain for petroleum, then slow-walking drilling permits, according to the Institute for Energy Research.

The President continues to stall on the Keystone pipeline, while threatening layers of expensive carbon dioxide and other regulations, to prevent what he insists is “dangerous manmade climate change.” His EPA just adopted California’s expensive all-pain-no-gain rules for sulfur in gasoline, and the Administration and environmentalists constantly look to the West Coast for policy guidance.

Governor Jerry Brown says 30 million vehicles in California translate into “a lot of oil” and “the time for no more oil drilling” will be when its residents “can get around without using any gasoline.” However, that rational message has not reached the state’s legislators, environmental activists or urban elites.

California’s oil production represents just 38% of its needs – and is falling steadily, even though the state has enormous onshore and offshore oil and natural gas deposits, accessible via conventional and hydraulic fracturing technologies. The state imports 12% of its oil from Alaska and another 50% from foreign nations, much of it from Canada, notes Sacramento-area energy consultant Tom Tanton.

Of course, California’s ruling elites are also opposed to drilling and fracking – and leading Democrats are campaigning hard to impose a temporary or permanent ban, on the ludicrous grounds that fracking causes birth defects, groundwater contamination and even earthquakes.

The state’s record is far worse when it comes to electricity. The Do-As-I-Say State imports about 29% of its total electricity from out of state: from the Palo Verde nuclear power plant near Phoenix, coal-fired generators in the Four Corners area, and hydroelectric dams in the Southwest and Pacific Northwest, Tanton explains.

Another 50% of its electricity is generated using natural gas, most of which is also imported: the Greener-Than-Thou State relies heavily on gas piped in from Canada, the Rockies and the American Southwest to power its gas-fired turbines. Those turbines and out-of-state sources also back up its forests of unreliable, bird-killing wind turbines.

That’s certainly one way to preen and strut about your environmental consciousness. Leach off your neighbors, and let them do the hard work and emit the CO2.

These imported fuels power the state’s profitable and liberal Silicon Valley and entertainment industries – as well as the heavily subsidized electric and hybrid vehicles that wealthy elites so love for their pseudo-ecological benefits, $7,500 tax credits, and automatic entry into fast-moving HOV lanes.

Meanwhile, California’s poor white, black, Hispanic and other families get to pay $4.23 per gallon for regular gasoline, the second highest price in America – and 16.2 cents per kWh for residential electricity, roughly double the rate in most states and behind only New York, New England, Alaska and Hawaii.

However, the state’s eco-centric ruling classes are not yet satisfied. Having already hammered large industrial facilities with costly carbon dioxide cap-and-trade regulations, thereby driving more jobs out of the state, on January 1, 2015 they will impose cap-and-trade rules on gasoline and diesel fuels. That will instantly add more than 12 cents per gallon, with the price escalating over the coming years.

Regulators are also ginning up tough new “low-carbon fuel standards,” requiring that California’s transportation fuels reduce their “carbon intensity” or “life-cycle” CO2 emissions by 10% below 2010 levels. This will be accomplished by forcing refiners and retailers to provide more corn-based ethanol, biodiesel and still-nonexistent cellulosic biofuel.

These fuels are much more expensive than even cap-tax-and-traded gasoline – which means the poor families that liberals care so deeply about will be forced to pay still more to drive their cars and trucks.

In fact, Charles River Associates estimates that the LCFS will raise the cost of gasoline and diesel by up to 170% (!) over the next ten years, on top of all the other price hikes.

In the meantime, China, India, Brazil, Indonesia, Germany and a hundred other countries are burning more coal, driving more cars and emitting vastly more carbon dioxide. So the alleged benefits to atmospheric CO2 levels are illusory, fabricated and fraudulent.

Of course, commuters who cannot afford these soaring prices can always park their cars and add a few hours to their daily treks, by taking multiple buses to work, school and other activities.

There’s more, naturally. A lot more. But I’m out of space and floundering amid all the lunacy.

Can we really afford to inflict California’s insane policies on the rest of America? In fact, how long can the Left Coast afford to let its ruling classes inflict those policies on its own citizens?

 

[Originally published at Townhall.com]

Categories: On the Blog

Time Magazine Gets it Wrong on the Suburbs

April 28, 2014, 10:34 AM

Time Magazine’s Sam Frizell imagines that the American Dream has changed, in an article entitled “The New American Dream is Living in a City, Not Owning a House in the Suburbs.” Frizell further imagines that “Americans are abandoning their white-picket fences, two-car garages, and neighborhood cookouts in favor of a penthouse view downtown and shorter walk to work.” The available population data shows no such trend.

Frizell’s evidence is the weak showing in single-family house building permits last month and a stronger showing in multi-family construction.

This is just the latest instance of the “flocking to the city” mantra that is routinely mouthed without any actual evidence (see: Flocking Elsewhere: The Downtown Growth Story). The latest Census Bureau estimates show that net domestic migration continues to be negative in the core counties (which include the core cities) of the major metropolitan areas (those with more than 1,000,000 residents). The county level is the lowest geographical level for which data is available.

At the same time, there is net domestic inward migration to the suburban counties. Moreover, much of the net domestic migration to metropolitan areas has been to the South and Mountain West, where core cities typically include considerable development that is suburban in nature (such as in Austin, Houston and Phoenix). As the tepid “recovery” has proceeded, net domestic migration to suburban counties has strengthened (see: Special Report: 2013 Metropolitan Area Population Estimates), as indicated in the Figure.

There is no question that core cities are doing better than before. It helps that core city crime is down and that the South Bronx doesn’t look like Berlin in 1945 anymore. For decades, many people inclined toward a more urban core lifestyle were deterred by environments that were unsafe, to say the least. A principal driving force of the recovery has been millennials in urban core areas. Yet even this phenomenon is subject to over-hype: two-thirds of people between the ages of 20 and 30 live in the suburbs, not the core cities, according to American Community Survey data.

To his credit, Frizell notes that the spurt in multi-family construction is “not aspirational,” citing the role of the Great Recession in making it more difficult for people to buy houses. As I pointed out in No Fundamental Shift to Transit: Not Even a Shift, 2013 was the sixth year in a row in which total employment, as reported by the Bureau of Labor Statistics, was below the peak year of 2007. This is an ignominious development seen only once before in the last 100 years (during the Great Depression).

In short, urban cores are in recovery. But that does not mean (or require) that suburbs are in decline.

[Originally published at New Geography]

Categories: On the Blog

The Other Side of the Global Warming Story

April 28, 2014, 10:19 AM

[This article was originally published in the Alumni magazine of the Hotchkiss School] Many readers will surely agree with me that Hotchkiss launched us into successful and fulfilling careers; mine, as a scientist, physicist and engineer, continues actively 60 years past graduation. Among my specialties has been the study of climate, present, past and future, ever since the 1970s, when the popular press was warning of an advancing ice age, and on into the 90s, when global warming took over with unsubstantiated anecdotal evidence and mathematical models: models that cannot calculate past temperatures even when all the variables are known, nor the future, whether a year or ten down the road. They continue to predict doom a century from now if mankind does not alter its proclivity to lighten man’s burden with inexpensive fossil fuel.

The pages of this amazing magazine have promoted this unwarranted fear and unmitigated arrogance about man’s impact on his climate. Happily, they have offered me 750 words to try to set the record straight, or at least level the playing field, with some irrefutable scientific facts.

1 – While temperatures have fluctuated over the past 5,000 years, today’s Earth temperature is below the average for those 5,000 years.

2 – Temperature fluctuations during the current 300-year recovery from the Little Ice Age, which ended around the time Washington’s soldiers were freezing at Valley Forge, correlate almost perfectly with changes in our sun’s activity level.

3 – The National Aeronautics and Space Administration (NASA) has determined that while the Earth was warming in the past century, so were Mars, Pluto, Jupiter and the largest moon of Neptune.

4 – We know that 200 million years ago, when the dinosaurs walked the Earth, the average carbon dioxide concentration in the atmosphere was 1,800 parts per million, more than four times higher than today’s.

5 – If greenhouse gases were responsible for increases in global temperature, then atmospheric physics says the higher levels of our atmosphere should show greater warming than the lower levels. This was not found to be true during the 1978 to 1998 period of 0.3 degrees Celsius warming.

6 – 900,000 years of ice core temperature and carbon dioxide records show that CO2 increases follow rather than lead increases in Earth’s temperature. This is logical because the oceans are the primary source of CO2, and they hold more CO2 when cool than when warm, so warming causes the oceans to release more CO2.

7 – The effect of additional CO2 in the atmosphere is limited because it absorbs only certain wavelengths of radiant energy. As the radiation in that particular wavelength band is used up, the amount left for absorption by more of the gas is reduced.

8 – While we hear much about one melting glacier or another, a recent study of 246 glaciers around the world indicated a balance between those losing ice, those gaining ice, and those remaining in equilibrium.

9 – It is amusing that the polar bear has become the symbol of global warming while its North American population has increased from 5,000 in 1960 to more than 25,000 today.

Although the court of public opinion already ranks climate change as a very low economic priority, the media continues to uncritically accept and vigorously promote shrill global warming alarmism. The United States government budgets $6 billion a year for climate research, supporting a growing industry of scientists and university labs that specialize in the subject. It all adds up to a significant institutionalization of the impulse to treat carbon as a problem.

Climate change is not a scientific problem that found political support. It is about eco-activists and politicians who found a scientific issue they feel can leverage them into power and control. The environment is a great way to advance a political agenda that favors central planning and an intrusive government. What better way to control someone’s property than to subordinate private property rights to environmental concerns?

While the most extreme environmental zealots may be relatively few in number, they have managed to gain undue influence by exploiting the gullibility of many ordinary and scientifically untrained people, willing to believe that the planet needs saving from man’s excesses.  Perhaps it is a psychological throwback to those earlier civilizations that offered human sacrifices to the gods, to assuage their sins and spare them from punishment in the form of drought, flood, famine or disease. There are certainly many parallels between modern environmentalism and religion.

Categories: On the Blog

Response to a Critic of Climate Realists and the NIPCC Reports

April 28, 2014, 12:02 AM

Keith Peterman is a professor at York College and the proprietor of the Global Hot Topic blog. Judging from the photo on his college website, he seems like a fun and inspiring teacher.

The other day, Peterman decided to weigh in against letter-writer Missy Updyke at the York Daily News. Updyke committed a global warming party foul by citing the work of the Nongovernmental International Panel on Climate Change (NIPCC) to counter the idea that the United Nations IPCC deserves to be treated as Moses holding stone tablets when it comes to what is happening to the climate.

It doesn’t, and kudos to Updyke for citing the definitive scientific organization countering the politicized science of the IPCC.

As is typical with the MSM, climate-alarmist Peterman was given many more words to counter climate-realist Updyke’s letter (801 words for him; 330 for her). You can read Peterman’s letter here. Below is the rebuttal we posted beneath his very long counter-“argument”:

This essay contains a nice catalogue of myths, making it a useful vehicle for setting the record straight.

* Contrary to the bogus “97 percent agree” meme, the latest serious and scientific examination of the views of climate scientists shows no “consensus” that human activity is causing catastrophic, runaway global warming.

* It is unfortunate that Peterman did not take better advantage of his attendance at ICCC-3. We have 18 presentations from that conference on record, which include examinations of the actual scientific data. In fact, the eight conferences (with a ninth slated for July) have featured hundreds of such presentations. Many side sessions don’t draw standing ovations, but probably deserve them.

* None of the NIPCC reports — ZERO — have been funded with corporate money. They are funded by family foundations that have no interest in the energy sector. The NIPCC reports are rigorous, drawing from the peer-reviewed literature, and they go through another peer-review process before being published. There are now some 4,000 pages in the Climate Change Reconsidered series, with tens of thousands of citations to the peer-reviewed literature. Judge it for yourself.

* Ahh … the “Koch Brothers” meme — so precious to so many on the left and those who buy into the debating tactic of demonization. The Heartland Institute is not funded by the Koch Brothers. The Charles G. Koch Foundation donated $25,000 in 2012 to support what Heartland was already doing on health policy. That was the first Koch-connected donation to Heartland in a decade, and is completely irrelevant to this discussion. It’s a shame Peterman spent so many precious words on that false point.

Besides, Heartland has policies in place that prevent funders from interfering with our research — just as the science journal “Nature,” the National Academy of Sciences, and other entities do. Those entities accept great sums of money from the fossil fuel industry. Does Peterman question their reports? The scientists who participate in the IPCC reports also receive BILLIONS of dollars from governments around the world. Heartland getting $25,000 from the “Koch Brothers” is irrelevant in this debate, but those billions might not be.

* Also: The UN, the sponsor of both the WMO and UNEP, is quintessentially a political body best known for its corruption, not its scientific rigor. We thank Missy Updyke for hitting some of those points.

* Heartland is PROUD to be able to pay scientists to help write and review its reports — the same way “Nature,” the NAS, and the IPCC pay professionals to compile their reports. If you don’t have scientists on your staff, how can you review and publish scientific research?

* If you want some balance from the politicized “science” of the IPCC reports — information that adheres to the scientific method of skepticism, and doesn’t put its conclusions before the observations — visit Climate Change Reconsidered.

* We also invite Peterman and anyone else interested in learning what the data — not computer models, but actual data — says about our climate to attend the 9th International Conference on Climate Change. The scientists who attend and present are open to all challenges, from Peterman and anyone else. If Peterman contacts me, I’ll waive the very modest attendance fee.

Y’all should come to Vegas for Heartland’s climate conference, too. I hear there’s lots to do there.

Categories: On the Blog

The Least Efficient Part of Government

April 27, 2014, 12:54 PM

Spectrum management is the least efficient part of the federal government.

That’s a big national problem because radio spectrum is the essential fuel of the mobile revolution of smartphones, tablets, video streaming and the Internet of Things.

The worst resource management in the federal government unnecessarily burdens one of the most modern, dynamic and innovative parts of the American economy – mobile.

This is not a partisan issue; it’s a good-government issue that badly needs fixing – fast. The status quo is indefensible.

Fortunately, Congress is listening.

The House Energy and Commerce Committee has asked all interested parties for input into how to best modernize U.S. spectrum policy as part of the committee’s broader legislative effort to update the Communications Act next Congress. Comments are due this week.

The inability of the federal government’s spectrum system to make available sufficient federal spectrum for auction, just to keep pace with skyrocketing mobile usage, threatens to become an unnecessary growth cap on the increasingly mobile-driven 21st century economy.

How did the government’s spectrum management system become the anachronism it is today?

It was designed for a bygone era – the analog 1900s, not the mobile broadband 21st century.

The obsolete 1992 law in question simply codified a 1978 executive order that established the National Telecommunications and Information Administration (NTIA), which only partially covered spectrum.

To put that in perspective, that means the federal government’s basic spectrum management approach predates the first commercialization of cell phones in the U.S. in 1982 – by four years.

At that time, almost all radio spectrum was used or controlled by government agencies, with very little spectrum used directly by consumers or the private sector. That means spectrum management was, and largely remains today, organized for the convenience of government agencies – not for the benefit of American consumers or the private sector.

The year after that 1992 law fossilized spectrum policy in the analog 1900s, Congress passed a provision in the 1993 Reconciliation Act that authorized spectrum auctions for about 120 MHz of spectrum for personal communications services. To put that in perspective, that auction commercialized roughly 4 percent of the most commercially valuable wireless spectrum, the range between 400 MHz and 3 GHz.

That PCS auction catapulted wireless competition, consumer use, and growth forward. How much? The number of American wireless subscribers exploded roughly 3,000 percent, from 11 million connections then to 335 million today.
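
The arithmetic behind those two figures is easy to verify; a back-of-the-envelope sketch using the numbers quoted above (the exact band share depends on how the 400 MHz to 3 GHz range is counted):

    # Back-of-the-envelope check of the spectrum share and subscriber growth cited above.
    pcs_auction_mhz = 120
    band_mhz = 3_000 - 400                              # the 400 MHz to 3 GHz range
    print(round(100 * pcs_auction_mhz / band_mhz, 1))   # ~4.6%, roughly the "4 percent" cited

    subscribers_then, subscribers_now = 11e6, 335e6
    growth_pct = 100 * (subscribers_now - subscribers_then) / subscribers_then
    print(round(growth_pct))                            # ~2,945%, roughly the 3,000 percent cited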

Currently only about 15 percent of the nation’s choice spectrum suitable for commercial broadband is available to the private sector.

Unfortunately, in the past few years the government agencies that control the lion’s share of choice spectrum have largely refused to clear more spectrum for private auction. Their offer is to let the private sector “share” some of their spectrum scraps as second-class spectrum citizens.

So why is spectrum management the least efficient part of government?

First, the current system can take 9 to 12 years to make spectrum available for auction. Contrast that with the speed of change in the private sector, where, less than seven years after the invention of the smartphone, about 200 million Americans use one of these new spectrum-hungry devices.

Second, there is no national policy that recognizes spectrum is a scarce valuable resource that must be put to its highest and best use for the nation.

Third, in the current ad hoc spectrum system, literally no one has the authority to ensure spectrum is efficiently and effectively utilized by the government.

Fourth, America’s spectrum system scandalously has none of the normal or standard government accountability processes – i.e., decision-making responsibility, budget, accounting, audit, etc. – that are required for every other valuable government resource like land, buildings, vehicles, personnel, money, telecom services, oil, etc.

Finally, it is obvious that no one is minding the government’s spectrum store because no one in government is requiring government agencies to share spectrum better among themselves.

The current claim of a “transformative” spectrum policy that promotes government-private sector sharing of spectrum is less a policy improvement than a policy surrender to unaccountable government agencies that are unwilling to use their spectrum more efficiently so Americans may benefit.

America currently has a scandalously dysfunctional spectrum system that remains organized for the convenience of government agencies’ 1900s needs, not the 21st century needs of consumers, taxpayers and businesses, who need the spectrum as much or more than the government agencies do.

The bipartisan solution is simple – good government.

Like any other trillion-dollar resource, spectrum needs someone responsible and accountable for ensuring it is put to its highest and best use in both government and the private sector, and for ensuring it is not vulnerable to waste, fraud and abuse, as it is now.

Congress needs to update U.S. spectrum law to protect our nation’s mobile communications future. Without more spectrum available to the private sector, the mobile revolution of smartphones, tablets, video streaming and the Internet of Things is at increasing risk of hitting an unnecessary wall in long-term mobile growth.

Congress must legislate to ensure that someone minds the nation’s trillion-dollar spectrum store.

 

[Originally published at Daily Caller]

Categories: On the Blog

It’s Groundhog Day: Government Taking Third Stab at Net Neutrality Power Grab

April 26, 2014, 12:40 PM

This is at once obnoxious and pathetic.

Wheeler Tees Up Net Neutrality for May Meeting

FCC Chairman Tom Wheeler said Wednesday that he would be circulating a draft of the FCC’s new network neutrality rules to the other commissioners Thursday (April 24).

This is the FCC’s third attempt at this particular power grab. It’s becoming fetishistic.

There was 2007.

Court Backs Comcast Over FCC on ‘Net Neutrality’

And there was 2010.

Verizon Wins Net Neutrality Court Ruling Against FCC

Yet here we remain – stuck in government overreach Groundhog Day.

We haven’t yet seen the Net Neutrality power grab order – but the fact that they’re trying again at all is at once obnoxious and pathetic.

Not yet having seen the order hasn’t stopped the Left from going apoplectic.  Because the Left never allows the facts to get in the way of a good beating.

The FCC Tosses Net Neutrality Out the Window

So This is How Net Neutrality Dies, Under a Democratic President

Thanks to the FCC Net Neutrality is Dead

Obama’s Second FCC Chairman Fails on Net Neutrality

Stop the FCC from Breaking the Internet

Not at all over the top.

To all of which I say:

It’s disappointing – but not surprising – that the Obama Administration is yet again going to impose the Internet power grab that is Network Neutrality.

The federal government is twice bitten – and not shy. Having twice had its Net Neutrality impositions unanimously rejected by the courts, the FCC remains undaunted in its desire to over-regulate the Web.

And it’s always a pleasure to watch the Leftist likes of Free Press and Public Knowledge preemptively freak out – over an order they haven’t even yet seen.

They are, as always, the perfect tool: the Leftist screeching visual aids that allow an overreaching government to pretend “See, we’re finding middle ground – both sides are angry with us” as it grabs even more Leviathan-sized handfuls of the once-private sector.

Don’t drive angry.

[Originally published at PJ Tatler]

Categories: On the Blog