At a luncheon at The Heartland Institute yesterday, FreedomWorks President Matt Kibbe talked about his latest book, Don’t Hurt People and Don’t Take Their Stuff: A Libertarian Manifesto. In the book, Kibbe sets out to explain libertarianism to people who are unfamiliar with it but curious about it.
“I wanted to translate the ideas of liberty to connect to the people,” Kibbe said. “To the people that look at the Democratic Party and think I’m not one of them and look at the Republican Party and think I sometimes agree with them but I’m not one of them either.”
According to Kibbe, modern efforts to self-educate on liberty increase the need for published works like his latest book and much of what Heartland produces. However, finding a comprehensive work on libertarianism is not always that easy — or at least, not always easy to read.
Before delving into his book, Kibbe explained his own journey with liberty, beginning at age 13. From searching through used bookstores for books by Ayn Rand and Adam Smith — then finding the Rand-inspired rock band Rush — Kibbe found his way to the economics department at Grove City College in Pennsylvania, where he discovered he was not the only one who had been inspired by these thinkers. “Today, it’s so easy to find those books and ideas … you just Google it,” Kibbe said.
In Don’t Hurt People and Don’t Take Their Stuff, Kibbe condenses the ideas of Adam Smith, Friedrich Hayek, and Ludwig von Mises to efficiently explain the libertarian movement to the public. Kibbe specifically summarized Smith’s Theory of Moral Sentiments into two basic ideas, which became the title of his book: Don’t hurt people and don’t take their stuff. Aside from serving as a catchy title, these ideas are also the first two of the six rules of liberty Kibbe further outlines.
Kibbe discussed how today people do not steal directly from one another, but instead elect politicians as a means of outsourcing the stealing to a third party. “The government is transferring wealth from the politically unconnected to the politically connected in Washington,” Kibbe said. This serves not only as an example of government encroachment but also as a violation of the basic rule of man: treat everybody like everybody else.
According to Kibbe, the rules of liberty stem from this basic rule, and the freedoms enumerated in the Constitution apply to everyone, no matter their race, religion or socio-economic status. Individuals have to protect their liberty by taking responsibility and working to stand up to the government.
“The government goes to those who show up. If we don’t show up, the power goes to those who do, who may corrupt the power. We have to show up.”
(Kibbe is also the author of Hostile Takeover: Resisting Centralized Government’s Stranglehold on America and co-author of Give Us Liberty: A Tea-Party Manifesto.)
Once upon a time, oh, say 20 years ago, the talk was that the Pacific would be in a constant state of El Nino. Though this was an admission that the antics of the tropical Pacific control a large part of the global temperature, the idea of the El Nino and a forever warming planet was a global warming proponent’s dream come true. Because they ignored climate cycles and did not understand what Weatherbell.com meteorologist Joe D’Aleo, who also runs the climate blog “ICECAP,” showed plainly – that in the colder cycles of the Pacific, the La Ninas outdo El Ninos and vice versa – they assumed this would continue forever.
As the earth adjusted to the warmth supplied by this natural cycle, the warmth that was occurring, combined with the Atlantic cycle’s change to its warm phase, led to a marked decrease in Arctic sea ice. It reached a crescendo in 2007, the year of the death spiral, along with forecasts of no summer ice in 2014. Through it all, our side of the AGW argument said this is a natural phenomenon, and once the AMO flipped, the summer sea ice, which is the most obvious talking point for those advocating the Arctic death spiral, would come back. As always, the Southern Hemisphere ice, because it was above normal, was ignored.
So here we are, with the summer of 2014 approaching. Much is being made of the coming El Nino, including, for the fifth time since 1997, the dream of many of a “Super Nino” to get the badly busting global temp forecast back on track. We believe strongly this is a classic Multivariate ENSO Index (MEI) bounce-back event that spikes quickly then retreats, as we are back in the period that favors this. We can plainly see this cycle by looking at the MEI chart below.
The theory is not rocket science. It simply says the strongest events are after prolonged warm run ups, which happen when the Pacific in the overall sense is cool. You can plainly see the cyclical nature of the overall MEI and the spikes that occur, both when it’s been warm and cold. As I have said a thousand times, the explanation for the behavior of the oceans lies with Dr. William Gray’s ideas.
But here we are with the talk of a Super Nino, yet the far bigger event climate-wise is the increasingly positive summer sea ice anomaly being forecast that’s getting more impressive by the week. When combined with the major positive anomaly in the Southern Hemisphere, this offers a chance, in the summer of Al Gore’s no Arctic ice cap, for a record high global ice anomaly.
Heck of a way to run a global warm up, eh? There’s a chance of a record high global ice anomaly because of an above average summer sea ice anomaly in the north and what appears to be a Southern Hemisphere sea ice extent that is heading for a record high itself. As of this writing, the Southern Hemisphere looks like this:
The north, as you can see, is below average, and you see the two summer sea ice minimums that led to the hysteria. But while they were happening there was robust sea ice in the south (and I am all for thinking globally).
Average all this out, and here’s what you get.
Again, this is not rocket science. Given where we are globally now, with the Arctic still below average, a forecast for the winter around Antarctica as depicted on the graph below would mean the anomaly would likely remain well above average through their winter.
Should the northern ice cap expand to above average, the global average would have to go up, perhaps breaking the record. And you have to love all this, as it would occur in the summer touted by Al Gore to perhaps see the Arctic ice cap disappear. Ouch, that is going to leave a mark. If only someone would actually watch it!
The reason for the increase in the Arctic ice is that the north Atlantic, at least for the time being, has cooled. Most of the reason for the decrease in ice is not the warmth of the winters but the fact that the warm cycle in the north Atlantic attacks the ice cap at the warm time of the year – both with warmer air temperatures and the warmer current below! But what happens when that changes for good? There were times in the 1950s when Arctic sea ice was very low, and though I have no satellite measurements, we do have panic reports like this one from 1957.
20 years ago the idea of a constant El Nino warming the planet was a big deal, which is why we see the current fervor about the threat of a Super Nino. But the other, greater story is this canary in the coal mine: that the AMO will flip to cold for good by 2020, as Dr. Gray has opined, because of the cyclical nature of the oceans. This means that the darling of the warming crowd a few years ago – sea ice – will be the lipstick on a pig it always has been.
Think about it.
Super Ninos galore – NOT.
Ice caps decreasing. How did that work out given the Southern Hemisphere?
And now this?
Yet what are we hearing about? A likely overhyped event to get attention and whip up fervor, while the event that actually means something is ignored.
Now let me ask you this question. If a world with below-average global sea ice was supposed to mean all this warmth, what should the natural progression of thought be, for the same people who pushed that missive, as to what two above-normal ice caps mean?
A question they probably do not want to answer.
Joe Bastardi is chief forecaster at WeatherBELL Analytics, a meteorological consulting firm.
© Copyright 2014 The Patriot Post
[Originally published at The Patriot Post]
The Left is now making an all-out push to have the Federal Communications Commission (FCC) “Reclassify” the Internet – so as to then impose the utterly unpopular Network Neutrality.
“Reclassification” means unilaterally shoving the Web out from under the existing light-touch Title I rules the 1996 Telecommunications Act placed upon it – which have allowed it to blossom into the free speech-free market Xanadu we all know and love.
And then slamming it into the Title II heavy-regulatory uber-structure that has for the last seventy-plus years crushed with regs and taxes landline telephones – that well-known bastion of technological and economic innovation.
Does the FCC have the authority to Reclassify? Of course not. But this is the Barack Obama Administration - when has that ever stopped them?
The 1996 Act gave the Web the freedom every person and entity needs to thrive – and it has thrived beyond any and everyone’s wildest dreams.
Because it left the Left bereft of a regulatory hook – by which they can reel it in sport-fish-style – they now want the government to seize the Web, enmeshing it in a landline regulatory nightmare mess.
The people calling for this are ridiculously Leftist. They bear a striking resemblance to – and the heinous patchouli aroma of – the Occupy Wall Street radicals who in 2012 illegally befouled public places all across our nation.
Only these people are far more organized – and thus far more dangerous.
Call this iteration #OccupyTheInternet. Led by the tiny band of Merry Media Marxists known as Free Press.
They say nothing short of Title II classification for Internet access will do.
This is not Free Press’ first attempt at ridiculous Government Internet Power Grab Street Theatre. The last time involved spatulas.
Which was even more absurd than it sounds.
Who is Free Press? For what do they stand? Meet Free Press co-founder and current Board member Robert McChesney.
In addition to teaching college (Heaven help us) and having co-founded Free Press, he was the editor (2000-2004) and is a current board member of Monthly Review, which he himself describes as “one of the most important Marxist publications in the world, let alone the United States.”
McChesney describes their Internet objective thus:
“(T)he ultimate goal is to get rid of the media capitalists in the phone and cable companies and to divest them from control.”
How very Hugo Chavez of them.
These clowns are of course getting support from the Congressional Communist Progressive Caucus.
The leaders of the Congressional Progressive Caucus are drafting a letter asking the FCC to reclassify broadband as a telecommunications service, a move that would give the agency more flexibility on net neutrality but may be legally or politically difficult.
Reps. Raul Grijalva and Keith Ellison plan to send the letter to the agency next week, and plan to send a dear colleague letter to fellow lawmakers in hopes of garnering more signatories. Their backing of reclassification is significant, since it endorses an alternative to Chairman Tom Wheeler’s proposal in addition to just criticizing the plan.
Good old Representative Grijalva.
Raul Grijalva’s first documented ties to the Communist Party USA date from 1993, when then-Pima County Board of Supervisors member Grijalva penned an article on NAFTA for the Party’s People’s Weekly World (now People’s World)’s November 13 issue.
These Marxists and their publications.
How do those of us over here in Reality view this government takeover of the Internet? Not quite as highly. Back in 2010 the following Democrat-festooned assemblage united against the same FCC power grab:
299 members of Congress - a large bipartisan majority - have asked you to not reclassify the Internet, and wait for Congress to first write law. (And that was BEFORE the 2010 Republican wave election.)
More than 150 organizations, state legislators and bloggers have asked you to not reclassify the Internet, and wait for Congress to first write law.
Seventeen minority groups – that are almost always in Democrat lockstep – have asked you to not reclassify the Internet, and wait for Congress to first write law.
And many additional normally Democrat paragons have also asked you to not reclassify the Internet, and wait for Congress to first write law. Including:
- Large unions: AFL-CIO, Communications Workers of America (CWA), International Brotherhood of Electrical Workers (IBEW).
- Racial grievance groups: League of United Latin American Citizens (LULAC), Minority Media and Telecom Council (MMTC), National Association for the Advancement of Colored People (NAACP) and the Urban League.
- Anti-free market environmentalist group the Sierra Club….
Even (former) Massachusetts Democrat Senator (and now Secretary of State) John Kerry…at one point said you do not have the authority.
I would imagine very little has changed for these folks – since nothing about the Left’s Internet assault has.
Whom should the FCC heed: a massive, overwhelming bipartisan swath of Washington and the nation – or a tiny, uber-radical, Communist-riddled cadre of government expansionists?
Our first glimpse at an answer is Thursday’s Net Neutrality vote.
[Originally published at Human Events]
The White House has released its latest National Climate Assessment. An 829-page report and 127-page “summary” were quickly followed by press releases, television appearances, interviews and photo ops with tornado victims – all to underscore President Obama’s central claims:
Human-induced climate change, “once considered an issue for the distant future, has moved firmly into the present.” It is “affecting Americans right now,” disrupting their lives. The effects of climate change “are already being felt in every corner of the United States.” Corn producers in Iowa, oyster growers in Washington, maple syrup producers in Vermont, and crop-growth cycles in Great Plains states “are all observing climate-related changes that are outside of recent experience.” Extreme weather events “have become more frequent and/or intense.”
It’s pretty scary sounding. It has to be. First, it is designed to distract us from topics that the President and Democrats do not want to talk about: ObamaCare, the IRS scandals, Benghazi, a host of foreign policy failures, still horrid jobless and workforce participation rates, and an abysmal 0.1% first quarter GDP growth rate that hearkens back to the Great Depression.
Second, fear-inducing “climate disruption” claims are needed to justify job-killing, economy-choking policies like the endless delays on the Keystone XL pipeline; still more wind, solar and ethanol mandates, tax breaks and subsidies; and regulatory compliance costs that have reached $1.9 trillion per year – nearly one-eighth of the entire US economy.
Third, scary hyperventilating serves to obscure important realities about Earth’s weather and climate, and even in the NCA report itself. Although atmospheric carbon dioxide levels have been rising steadily for decades, contrary to White House claims average planetary temperatures have not budged for 17 years.
No Category 3-5 hurricane has made landfall in the United States since 2005, the longest such period since at least 1900. Even with the recent Midwestern twisters, US tornado frequency remains very low, and property damage and loss of life from tornadoes have decreased over the past six decades.
Sea levels are rising at a mere seven inches per century. Antarctic sea ice recently reached a new record high. A new report says natural forces could account for as much as half of Arctic warming, and warming and cooling periods have alternated for centuries in the Arctic. Even in early May this year, some 30% of Lake Superior was still ice-covered, which appears to be unprecedented in historical records. Topping it off, a warmer planet and rising CO2 levels improve forest, grassland and crop growth, greening the planet.
Press releases on the NCA report say global temperatures, heat waves, sea levels, storms, droughts and other events are “forecast” or “projected” to increase dangerously over the next century. However, the palm reading was done by computer models – which are based on the false assumption that carbon dioxide now drives climate change, and that powerful natural forces no longer play a role. The models have never been able to predict global temperatures accurately, and the divergence between model predictions and actual measured temperatures gets worse with every passing year. The models cannot even “hindcast” temperatures over the past quarter century, without using fudge factors and other clever tricks.
Moreover, much of the White House and media spin contradicts what the NCA report actually says. For example, it concludes that “there has been no universal trend in the overall extent of drought across the continental U.S. since 1900.” Other trends in severe storms, it states, “are uncertain.”
Climate change, Johnstown Floods, Dust Bowls, extreme weather events and forest fires have been part of Earth and human history forever – and no amount of White House spin can alter that fact. To suggest that any changes in weather or climate – or any temporary increases in extreme weather events – are due to humans is patently absurd. To ignore positive trends and the 17-year absence of warming is abominable.
Fourth, sticking to the “manmade climate disaster” script is essential to protect the turf, reputations, funding and power of climate alarmists and government bureaucrats. The federal government doles out some $2.6 billion annually in grants for climate research – but only for work that reflects White House perspectives. Billions more support subsidies and loans for renewable energy programs that represent major revenue streams for companies large and small, and part of that money ends up in campaign war chests for (mostly Democrat) legislators who support the climate regulatory-industrial complex.
None of them is likely to admit any doubts, alter any claims or policies, or reduce their increasingly vitriolic attacks on skeptics of “dangerous manmade global warming.” They do not want to risk being exposed as false prophets and charlatans, or worse. Follow the money.
Last, and most important, climate disruption claims drive a regulatory agenda that few Americans support. Presidential candidate Obama said his goal was “fundamentally transforming” the United States and ensuring that electricity rates “necessarily skyrocket.” On climate change, President Obama has made it clear that he “can’t wait for an increasingly dysfunctional Congress to do its job. Where they won’t act, I will.” His Environmental Protection Agency, Department of the Interior, Department of Energy and other officials have steadfastly implemented his anti-hydrocarbon policies.
Chief Obama science advisor John Holdren famously said: “A massive campaign must be launched to … de-develop the United States … bringing our economic system (especially patterns of consumption) into line with the realities of ecology and the global resource situation.… [Economists] must design a stable, low-consumption economy in which there is a much more equitable [re]distribution of wealth.”
(The President also wants to ensure that neither a Keystone pipeline approval nor a toned-down climate agenda scuttles billionaire Tom Steyer’s $100-million contribution to Democrat congressional candidates.)
This agenda translates into greater government control over energy production and use, job creation and economic growth, and people’s lives, livelihoods, living standards, liberties, health and welfare. It means fewer opportunities and lower standards of living for poor and middle class working Americans. It means greater power and control for politicians, bureaucrats, activists and judges – but with little or no accountability for mistakes made, damage done or penalties deliberately exacted on innocent people.
A strong economy, modern technologies, and abundant, reliable, affordable energy are absolutely essential if we are to adapt to future climate changes, whatever their cause – and survive the heat waves, cold winters, floods, droughts and vicious weather events that will most certainly continue coming.
The Obama agenda will reduce our capacity to adapt, survive and thrive. It will leave more millions jobless, and reduce the ability of families to heat and cool their homes properly, assure nutritious meals, pay their rent or mortgage, and pursue their American dreams.
America’s minority and blue collar families will suffer – while Washington, DC power brokers and lobbyists will continue to enjoy standards of living, housing booms and luxury cars unknown in the nation’s heartland. Think Hunger Games or the Politburo and nomenklatura of Soviet Russia.
Worst, it will all be for nothing, even if carbon dioxide does exert a stronger influence on Earth’s climate than actual evidence suggests. While the United States slashes its hydrocarbon use, job creation, economic growth and international competitiveness, China, India, Brazil, Indonesia – and Spain, Germany, France and Great Britain – are all increasing their coal use … and CO2 emissions.
President Obama and White House advisor John Podesta are convinced that Congress and the American people have no power or ability to derail the Administration’s determination to unilaterally impose costly policies to combat “dangerous manmade climate disruption” – and that the courts will do nothing to curb their executive orders, regulatory fiats and economic disruption.
If they are right, we are in for some very rough times – and it becomes even more critical that voters learn the facts and eject Harry Reid and his Senate majority, to restore some semblance of checks and balances.
Paul Driessen is senior policy analyst for the Committee For A Constructive Tomorrow (www.CFACT.org) and author of Eco-Imperialism: Green power – Black death.
[Originally published at Canada Free Press]
The fortunes of U.S. core cities (municipalities) have varied greatly in the period of automobile domination that accelerated strongly at the end of World War II. This is illustrated by examining trends among the three categories of “historical core municipalities” (Figure 1). Since that time, nearly all metropolitan area (the functional or economic definition of the city) growth has been suburban, outside core municipality limits or in the outer rings of existing core municipalities.
Approximately 26 percent of major metropolitan area population is located in the core municipalities. Yet many of these municipalities include large areas of automobile orientation that are anything but urban core in their urban form. Most housing is single-detached, as opposed to the much higher share of multi-family in the urban cores, and transit use is just a fraction of that in the urban cores.
Even counting their essentially suburban populations, today’s core municipalities represent, with a few exceptions, a minority of their metropolitan area population. The exceptions (San Antonio, Jacksonville, Louisville, and San Jose) are all highly suburbanized and have annexed land area at a substantially greater rate than they have increased their population.
According to the 2010 census, using the 2013 geographic definitions, core cities accounted for from five percent of the metropolitan area population in Riverside-San Bernardino to 62 percent in San Antonio (Figure 2).
These kinds of differences are not limited to the United States. For example, the city (municipality) of Melbourne, Australia has little more than two percent of the Melbourne metropolitan area population. Indeed, the city of Melbourne is only the 23rd largest municipality in the Melbourne metropolitan area and has a population smaller than a single city council district in Columbus, Ohio.
These virtually random variations in core city sizes lead to misleading characterizations. For example, locals sometimes point out that San Antonio is the 6th largest city in the United States. True, San Antonio is the 6th largest municipality in the United States, but the genuine, classically defined city – the broader metropolitan area that is the urban organism – ranked only 26th in size in 2010. The suburbs and exurbs, as defined by municipal jurisdictions, are smaller than average in San Antonio, but the city itself stretches into a suburban landscape more than 15 miles (24 kilometers) beyond its 1950 borders.
Core municipality mayors have been known to travel around the world as representatives of their metropolitan areas. In some cases core municipality mayors represent constituencies encompassing the entire metropolitan area (such as Auckland or, soon, major metropolitan Honolulu). Others have comparatively small constituencies. For example, the mayor of Paris presides over only 18 percent of the metropolitan area population, the mayor of Atlanta 8 percent, the mayor of Manila 6 percent, Melbourne 2 percent and Perth, Australia just 0.5 percent (Figure 3).
Core Municipalities in the United States
A remnant of U.S. core urbanization is evident within the city limits of municipalities that were already largely developed in 1940 and have not materially expanded their boundaries. These are the “Pre-World War II & Non-Suburban” category of core municipalities. Between 1950 and 2010 these core municipalities lost a quarter of their population, dropping from 24.5 million residents to 19.3 million (Figure 4). All but Miami lost population. Despite improved downtown population fortunes, the last decade saw a small further decline of 0.2 percent overall. Only two legacy cities, New York and San Francisco, now exceed their peak populations of the mid-20th Century.
Again, this is the typical pattern internationally. Throughout the high-income world, the urban cores that have not expanded their boundaries and had little greenfield space for suburban development have been declining in population for years. My review of 74 high-income world core municipalities that were fully developed in the 1950s and have not annexed materially showed that only one (Vancouver) had increased in population by 2000. Since that time, a few that had experienced more modest declines have recovered to record levels, such as Munich and Stockholm. Most others, such as London, Paris, Milan, Copenhagen and Zurich, remain below their peak populations.
In the United States, most of the strong growth has taken place in the “Pre-World War II & Suburban” classification, doubling from 10.1 million residents to 20.4 million since 1950. These include core cities with strong pre-war cores, but which have either annexed large areas or already contained large swaths of rural territory at that time (like Los Angeles, with its San Fernando Valley, which was largely agricultural) that later became heavily populated.
Many of these core cities experienced population declines within their 1950 boundaries (such as Portland, Seattle and Nashville between 1950 and 1990). Los Angeles, however, has been the exception. The more highly developed central area (as defined by the city Planning Department) within the city limits has increased in population by one-third since 1950. The continuing suburbanization of the city of Los Angeles, however, is indicated by the fact that the central area’s share of city population has fallen from 68 percent to 47 percent.
The “Post-World War II & Suburban” core cities are much smaller and their metropolitan areas are nearly all suburban. These include metropolitan areas like Phoenix and San Jose. The population of these metropolitan areas has increased more than sevenfold, from 700,000 to 5.2 million.
Land Area: The differences between the three historical core municipality classifications are most evident in land area. Among the “Pre-World War II & Non-Suburban” cores, land areas were almost unchanged from 1950, with much of the difference reflected in Chicago’s O’Hare International Airport annexation. In contrast, the “Pre-World War II & Suburban” cores more than tripled in size, adding an area larger than Connecticut to their city limits. The percentage increase was even larger in the “Post-World War II & Suburban” cores which covered 10 times as much land in 2010 as in 1950 (Figure 5).
Population Density: Over the 60-year period, the population density of the “Pre-World War II & Non-Suburban” cores dropped from 15,300 per square mile to 11,400 (5,900 per square kilometer to 4,400). The “Pre-World War II & Suburban” and “Post-World War II & Suburban” cores started with much lower densities and then fell farther. The core city densities in these municipalities are approximately one-half the population densities of Los Angeles suburbs (Figure 6).
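The square-mile and square-kilometer figures above are straightforward unit conversions; as a quick sanity check (a sketch added here for the reader, not part of the original article), one square mile is about 2.59 square kilometers:

```python
# Sanity-check the density figures quoted above by converting
# persons per square mile to persons per square kilometer.
SQ_KM_PER_SQ_MI = 2.589988  # 1 square mile ≈ 2.589988 km²

def per_sq_mi_to_per_sq_km(density_per_sq_mi: float) -> float:
    """Convert a density from persons/mi² to persons/km²."""
    return density_per_sq_mi / SQ_KM_PER_SQ_MI

# 1950 and 2010 densities for the “Pre-World War II & Non-Suburban” cores,
# rounded to the nearest hundred as in the text.
print(round(per_sq_mi_to_per_sq_km(15_300), -2))  # → 5900.0, matching the cited 5,900/km²
print(round(per_sq_mi_to_per_sq_km(11_400), -2))  # → 4400.0, matching the cited 4,400/km²
```

Both cited metric figures check out against the square-mile numbers to the rounding used in the article.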
The Need for Caution
All of this indicates the importance of caution with respect to core versus suburban and exurban comparisons. For example, Atlanta, which represents only 8 percent of the urban organism (metropolitan area) in which it is located, is not comparable to San Antonio, with its 62 percent of the metropolitan population. These distinctions are important when we talk about different regions.
Wendell Cox is principal of Demographia, an international public policy and demographics firm. He is co-author of the “Demographia International Housing Affordability Survey” and author of “Demographia World Urban Areas” and “War on the Dream: How Anti-Sprawl Policy Threatens the Quality of Life.” He was appointed to three terms on the Los Angeles County Transportation Commission, where he served with the leading city and county leadership as the only non-elected member. He was appointed to the Amtrak Reform Council to fill the unexpired term of Governor Christine Todd Whitman and has served as a visiting professor at the Conservatoire National des Arts et Metiers, a national university in Paris.
Chicago photo by Bigstock.
[Originally published at New Geography]
I confess that I am more than a bit mystified at the way FCC Chairman Tom Wheeler and his Democrat colleagues, seemingly, are moving ever closer in the direction of embracing a Title II reclassification of Internet access services. No matter how loud the banging of pots and pans outside the FCC’s headquarters, it would be terribly unsound as a matter of policy to subject Internet services to the same Title II public utility regulatory regime that applied to last century’s POTS (“plain old telephone”) service.
The irony of the Free Press organization urging protesters to bring pots to the FCC to make noise in the cause of imposing on today’s Internet providers the same public utility regulation that applied to Ma Bell’s POTS-era service seems to have escaped the protesters.
But put aside my mystification as to why Chairman Wheeler and his Democrat colleagues would want to align themselves with such a backwards-looking policy.
What also mystifies me is how little discussion there has been concerning the likelihood that a Title II reclassification would be sustained in court. As I said in my May 9 blog, “Pots and Pans and the Neutrality Mess,” the “FCC’s legal case would be fairly problematic.”
Here is the way I explained why this is so:
“While it is true enough that, under established administrative law principles, an agency may change its mind, it nevertheless must provide a well-reasoned explanation for such a change. Pointing to the number of protesters banging on pots and pans outside the FCC’s doors is not likely to suffice. Neither is pointing to the agency’s disappointment at already having been twice rebuffed by the DC Circuit under alternative theories.
The main reason the FCC’s case for sustaining a Title II challenge would be problematic is this: In defending its decision to classify Internet service providers as information service providers – thereby removing them from the ambit of Title II regulation – the Commission argued that, from a consumer’s perspective, the transmission component of an information service is integral to, and inseparable from, the overall service offering. This functional analysis of ISPs’ service offerings was the principal basis upon which the Supreme Court upheld the FCC classification determination in 2005 in its landmark Brand X decision.
I don’t think the integrated, inseparable nature of ISPs’ service offerings, from a functional standpoint, and from a consumer’s perspective, has changed since the Brand X decision, so it won’t be easy for the Commission to argue that it is changing its mind about the proper classification based on changed consumer perceptions of the service offerings’ functionality. And to the extent that the Brand X Court cited favorably to the FCC’s claims concerning the then-emerging marketplace competition and the dynamism in the broadband marketplace, those factors, if anything, today argue even more strongly for a non-Title II common carrier classification.
I understand the role that so-called Chevron deference can play in upholding agency decisions. Indeed, it played an important role in the Court’s decision in Brand X. But invoking Chevron deference won’t relieve the FCC of the need to provide persuasive reasoning in support of an abrupt about-face on a point the agency litigated – successfully – all the way up to the Supreme Court.”
As I’ve been puzzling over the lack of comment concerning the lawfulness of a potential FCC switcheroo regarding Title II, I reviewed once again the FCC General Counsel’s Memorandum dated May 6, 2010, in which Austin Schlick, the then-GC, set out to bolster the case for a Title II reclassification of Internet services should the Commission choose to adopt that course. Of course, the then-commissioners did not choose the Title II route.
Nevertheless, given its clear intent to bolster the legal justification for a Title II reclassification, the General Counsel’s memorandum is instructive. As I acknowledged in my blog last Friday, Mr. Schlick rightly observes that the FCC may well receive substantial Chevron deference for a reclassification determination and that an agency is entitled to change its mind if it offers persuasive reasoning for doing so.
I agree with these points of administrative law. But I think if Mr. Schlick’s memo is read closely, it indicates that it will not be so easy for the Commission to supply such persuasive reasoning. This is because, as Mr. Schlick readily acknowledges, in his opinion for the Supreme Court in Brand X, Justice Thomas declared: “The entire question is whether the products here are functionally integrated (like the components of a car) or functionally separate (like pets and leashes). That question turns not on the language of the Act, but on the factual particulars of how Internet technology works and how it is provided, questions Chevron leaves to the Commission to resolve in the first instance….”
Having already resolved in the first instance the question of “the factual particulars of how Internet technology works and how it is provided,” it won’t necessarily be so easy for the Commission now to do an about-face. For as Mr. Schlick went on to say, an agency reassessment of the classification issue would have to include:
“[A] fresh look at the technical characteristics and market factors that led Justice Scalia to believe there is a divisible telecommunications service within broadband Internet access. The factual inquiry would include, for instance, examination of how broadband access providers market their services, how consumers perceive those services, and whether component features of broadband Internet access such as email and security functions are today inextricably intertwined with the transmission component. If, after studying such issues, the Commission reasonably identified a separate transmission component within broadband Internet access service, which is (or should be) offered to the public, then the consensus policy framework for broadband access would rest on both the Commission’s direct authority under Title II and its ancillary authority arising from the newly recognized direct authority.”
In other words, as Mr. Schlick understood, it won’t suffice for the Commission simply to bemoan the fact that the D.C. Circuit twice has held that the agency lacked authority for its earlier forays into net neutrality regulation. Instead, the Commission will need to show, as a factual matter, from a functional standpoint and from the consumer’s perspective, why its earlier technical analysis concerning the integrated nature of Internet service – that is, the inseparability of the transmission and information services components – is no longer “operative.”
Mr. Schlick quotes heavily from Justice Scalia’s dissenting analysis to bolster his case. But Justice Scalia’s analysis was accepted by only two other Justices. He was on the losing side of a 6-3 decision.
I am not saying that the Commission could not prevail if it ever decides to go the Title II route – as unwise as such a decision would be. But I am not aware that the functional nature of Internet access services has changed since the Commission initially classified Internet access as an information service. Nor am I aware that consumers perceive the way these services are offered, from a functional standpoint, any differently today than they did at the time of the agency’s initial classification determination.
That being so, I remain mystified at how little discussion there has been concerning the lawfulness, or not, of a potential Title II reclassification.
[Originally published at The Free State Foundation]
This year marks one hundred years since the beginning of the First World War in the summer of 1914. The Great War, as it used to be called, brought great devastation in its wake. Millions of human lives were lost on the battlefields of Europe; vast amounts of accumulated wealth were consumed to cover the costs of combat; and battles and bombs left a large amount of physical capital in ruins. But the “war to end war,” as it was called, also resulted in another weapon of economic mass destruction – an orgy of paper-money inflations.
One of these tragic episodes that is worth recalling and learning from was the disintegration of the Austro-Hungarian Empire and the accompanying Great Austrian Inflation in the immediate postwar period in the early 1920s.

The Habsburg Monarchy and the Coming of World War I
In the summer of 1914, as clouds of war were forming, Franz Joseph (1830–1916) was completing the 66th year of his reign on the Habsburg throne. During most of his rule Austria-Hungary had basked in the nineteenth-century glow of the classical-liberal epoch. The constitution of 1867, which formally created the Austro-Hungarian “Dual Monarchy,” ensured that every subject in Franz Joseph’s domain had all the essential personal, political, and economic liberties of a free society.
The Empire encompassed a territory of 415,000 square miles and a total population of over 50 million. The largest linguistic groups in the Empire were the German-speaking and Hungarian populations, each numbering about 10 million. The remaining 30 million were Czechs, Slovaks, Poles, Romanians, Ruthenians, Croats, Serbs, Slovenes, Italians, and a variety of smaller groups of the Balkan region.

Austria-Hungary’s Wartime Inflation and Postwar Political Disintegration
Like all the other European belligerent nations, the Austro-Hungarian government immediately turned to the printing press to cover the rising costs of its military expenditures in the First World War. At the end of July 1914, just after the war had formally broken out, currency in circulation totaled 3.4 billion Austrian crowns. By the end of 1916 it had increased to over 11 billion crowns. And at the end of October 1918, shortly before the end of the war in early November 1918, the currency had expanded to a total of 33.5 billion crowns. From the beginning to the close of the war the Austro-Hungarian money supply in circulation had expanded by 977 percent. A cost-of-living index that had stood at 100 in July 1914 had risen to 1,640 by November 1918.
But the worst of the inflationary and economic disaster was about to begin. Various national groups began breaking away from the Empire, with declarations of independence by Czechoslovakia and Hungary, and the Balkan territories of Slovenia, Croatia, and Bosnia being absorbed into a new Serb-dominated Yugoslavia. The Romanians annexed Transylvania; the region of Galicia became part of a newly independent Poland; and the Italians laid claim to the southern Tyrol.
The last of the Habsburg emperors, Karl, abdicated on November 11, 1918, and a provisional government of the Social Democrats and the Christian Socials declared German-Austria a republic on November 12. Reduced to 32,370 square miles and 6.5 million people—one-third of whom resided in the city of Vienna—the new, smaller Republic of Austria now found itself cut off from the other regions of the former empire as the surrounding successor states (as they were called) imposed high tariff barriers and other trade restrictions on the Austrian Republic. In addition, border wars broke out between the Austrians and the neighboring Czech and Yugoslavian armies.

Postwar Austria and Socialist Redistributive Policies
Within Austria the various regions imposed internal trade and tariff barriers on other parts of the country, including Vienna. The rural regions hoarded food and fuel supplies, with black marketeers the primary providers of many of the essentials for the citizens of Vienna. Thousands of Viennese would regularly trudge out to the Vienna Woods, chop down the trees, and carry cords of firewood back into the city to keep their homes and apartments warm in the winters of 1919, 1920, and 1921. Hundreds of starving children were seen every day begging for food at the entrances of Vienna’s hotels and restaurants.
The primary reason for the regional protectionism and economic hardship was the policies of the new Austrian government. The Social Democrats imposed artificially low price controls on agricultural products and tried to forcibly requisition food for the cities from the countryside. The rural population resisted the food-requisitioning police units sent from Vienna, sometimes taking up arms to oppose the confiscation of their harvests.
The only thing that prevented even greater starvation was the effectiveness of a huge black market that got around the network of price controls and the provincial government restrictions that attempted to prevent the exporting of food to Vienna. Housewives in Vienna would refer to “my smuggler,” meaning the regular black-market provider of the essentials of life—at prices, of course, far above the artificial prices set by the socialist government in the capital.

The Costs of Austrian Socialism and Hyperinflation
By 1921 over half the Austrian government’s budget deficit was attributable to food subsidies for city residents and the salaries of a bloated bureaucracy to manage an expanding welfare state. The Social Democrats also regulated industry and commerce, and imposed higher and higher taxes on the business sector and the shrinking middle class. One newspaper in the early 1920s called Social Democratic fiscal policy in Vienna the “success of the tax vampires.”
The Austrian government paid for its welfare state subsidies and expenditures through the monetary printing press. Between March and December 1919 the supply of new Austrian crowns increased from 831.6 million to 12.1 billion. By December 1920 it increased to 30.6 billion; by December 1921, 174.1 billion; by December 1922, it was 4 trillion; and by the end of 1923, it had increased to 7.1 trillion crowns. Between 1919 and 1923, Austria’s money supply had increased by 14,250 percent.
Prices rose dramatically during this period. The cost-of-living index, which had risen to 1,640 by November 1918, had gone up to 4,922 by January 1920; by January 1921 it had increased to 9,956; in January 1922 it stood at 83,000; and by January 1923 it had shot up to 1,183,600. The hypothetical consumer basket of goods that had cost 100 crowns in 1914 cost over one million crowns less than nine years later.
The foreign-exchange value of the Austrian crown also reflected the catastrophic depreciation. In January 1919 one dollar could buy 16.1 crowns on the Vienna foreign-exchange market; by May 1923, one dollar traded for 70,800 crowns.
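The sheer scale of those quoted figures is easy to lose in the commas. As a rough check, a few lines of plain arithmetic on the numbers cited above (the cost-of-living index and the crown-dollar exchange rate; no new data is introduced) show the implied multiples:

```python
# A sketch: arithmetic on the figures quoted in the text, nothing more.

# Cost-of-living index: 100 in July 1914, 1,183,600 by January 1923.
index_1914 = 100
index_jan_1923 = 1_183_600
price_multiple = index_jan_1923 / index_1914
print(f"Prices rose roughly {price_multiple:,.0f}-fold")  # ~11,836-fold

# So the hypothetical 100-crown basket of 1914 indeed cost over a
# million crowns less than nine years later.
basket_cost_1923 = 100 * price_multiple
print(f"1914's 100-crown basket: about {basket_cost_1923:,.0f} crowns")

# Crowns per U.S. dollar: 16.1 in January 1919, 70,800 by May 1923.
fx_jan_1919 = 16.1
fx_may_1923 = 70_800
fx_multiple = fx_may_1923 / fx_jan_1919
print(f"The crown fell roughly {fx_multiple:,.0f}-fold against the dollar")
```

The two multiples differ because the exchange-rate series starts in 1919, after the wartime inflation had already run its course, while the price index is anchored in mid-1914.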
At first the black marketeers in Vienna would accept the depreciating Austrian crown as payment for smuggled goods from the rural areas. But by the autumn of 1923, they would sell only for other commodities considered of higher and more tradable value than the increasingly worthless paper money. A gold watch bought four sacks of potatoes; fifty cigars of a superior quality purchased four pounds of pork or ten pounds of lard.
During the worst of the inflation, the Austrian central bank’s printing presses were working night and day churning out vast quantities of the currency. At the 1925 meeting of the German “Verein für Sozialpolitik” (the Society for Social Policy), Austrian economist Ludwig von Mises told the audience:
“Three years ago a colleague from Germany, who is in this hall today, visited Vienna and participated in a discussion with some Viennese economists . . . Later, as we went home through the still of the night, we heard in the Herrengasse [a main street in the center of Vienna] the heavy drone of the Austro-Hungarian Bank’s printing presses that were running incessantly, day and night, to produce new banknotes. Throughout the land, a large number of industrial enterprises were idle; others were working part-time; only the printing presses stamping out banknotes were operating at full speed.”

Ludwig von Mises and Ending the Austrian Inflation
Finally, in late 1922 and early 1923, the Great Austrian Inflation was brought to a halt. This was due in great measure to the efforts of Ludwig von Mises, then a senior economic analyst at the Vienna Chamber of Commerce. He worked tirelessly to persuade those in political power that the food subsidies had to end. Finally, in 1922, he was able to arrange for several prominent business associations and the association of labor unions in Vienna to call for the elimination of the government’s costly food subsidies at the controlled prices.
Because inflation often raised the prices of goods before money wages could catch up, Mises proposed, and had accepted, a wage-indexation scheme linked to the value of gold, so that money wages on average would rise at the same rate as the general level of prices. This relieved the government of pressure to compensate for the rising cost of living with expensive food subsidies, which could be funded by no means other than further and further increases in the paper money supply.
Then Mises succeeded in persuading the Austrian Chancellor, Ignaz Seipel, that continuation of the inflation would lead to the economic and political ruin of the country. Mises warned Seipel that with an end to the inflation there would be a “stabilization crisis,” during which the Austrian economy would have to go through an adjustment period. The market would have to rebalance itself due to the distortions and misdirection of labor, capital, and resources that the inflation had brought about.
Seipel accepted the fact that the readjustment consequences were necessary if the worse disaster of a total collapse of the Austrian monetary system was to be avoided. The Austrian government appealed for help to the League of Nations, which arranged a loan to cover a part of the state’s expenditures. But the strings attached to the loan required an end to food subsidies and a 70,000-man cut in the Austrian bureaucracy to reduce government spending.
At the same time, the Austrian National Bank was reorganized, with the bylaws partly written by Ludwig von Mises. A gold standard was reestablished in 1925; a new Austrian shilling was issued in place of the depreciated crown; and restrictions were placed on the government’s ability to resort to the printing press again.

Austria’s Short-Lived Stability before Depression and Nazi Annexation
Unfortunately, Austria’s economic recovery was short-lived. In the second half of the 1920s, the Austrian government again increased expenditures, borrowed money to cover its deficits and raised taxes on the business sector and higher income individuals. This resulted in economic stagnation.
In 1931, Ludwig von Mises co-authored a report for the Austrian government that showed that fiscal policy had resulted in capital consumption. Business taxes, social insurance taxes and workers’ wages had increased so much between 1925 and 1929 relative to the rise in selling prices for manufactured goods that many enterprises had not had enough after-tax revenues to replace physical capital used up in production. Misguided Austrian fiscal policy had resulted in a partial “eating of the seed corn.”
With the coming of the Great Depression in the early 1930s Austria suffered a new financial crisis due to banking mismanagement. An attempted “bailout” to save some of Vienna’s leading banks created even more fiscal havoc with the Austrian government’s budget and a partial moratorium on payment of Austria’s international debt. Loans arranged through the League of Nations provided temporary stopgap remedies to the fiscal crisis.
But overshadowing even all of the economic chaos was a political crisis in 1933. A procedural voting dispute in the Austrian Parliament led the Austrian Chancellor, Engelbert Dollfuss, to suspend the country’s constitution and impose a one-party fascist-type dictatorship. In 1934, Austrian Nazis, inspired by Hitler’s coming to power in Germany a year earlier, murdered Dollfuss in a failed coup attempt.
Four years later, in March 1938, Hitler ordered the invasion of Austria, and the country was annexed into Nazi Germany. Austria’s previous monetary and fiscal mismanagement soon paled in comparison to its fall into the abyss of Nazi totalitarianism and then the destruction of World War II.
For those who say that such things as hyperinflation, economic chaos, capital consumption, and political tyranny “can’t happen here,” it is worth remembering that a hundred years ago, in 1914, few in prewar Vienna could have imagined that it would happen there.
[Originally published at EpicTimes]