On the Blog

All States, and the Feds, Should Emulate Minnesota’s Civil Forfeiture Reform

Somewhat Reasonable - October 07, 2014, 2:43 PM

Imagine police seize your money, your car, even your house. Imagine this happens without you being convicted of a crime or even charged with one. Imagine being told you must sue the government to get back your property and prove you did nothing wrong, and the government can do nothing – nothing – and still keep the property.

This happens thousands of times a year across the country. But it will soon happen less often in Minnesota, which has taken a small but important step toward ending one of the most abusive law enforcement practices in the nation. It’s a step the federal and other state governments should take to protect citizens from abusive police and prosecutors and restore a fundamental principle of life in these United States: that we are presumed innocent until proven guilty.

States and local governments have stolen billions of dollars of property from people who have never been convicted of a crime or charged with one. They’ve done it under a practice called “civil forfeiture.” It’s an outgrowth of the nation’s “war on drugs,” which has been raging and failing since President Richard Nixon launched it more than 40 years ago.

Civil forfeiture defenders say it’s another way to get at criminals—usually drug users or sellers—while helping to fund law enforcement. Under civil forfeiture, police and prosecutors may seize property, sell it, and use the proceeds to pad their budgets. To get back their property, forfeiture victims must spend thousands of dollars in legal fees to sue. In many instances, the legal costs would exceed the value of the property.

Politicians eager to look tough on crime decided to structure civil forfeiture so police and prosecutors may take property on the mere suspicion it could be linked to a drug crime or certain other nefarious activities. Police and prosecutors don’t have to prove anything. All they have to do is claim they “suspect” the person losing the property might have been planning to use it in an illegal way or might have used or obtained it illegally. Their “suspicions” often are so flimsy no arrest or criminal charge is made. They just take the property.

The final straw for Minnesota legislators came after the Minneapolis Star-Tribune broke a scandal in the state’s Metro Gang Strike Force by reporting on the brutality of its raids and the apparent police thefts of cash and other property, including at least 13 seized cars that went “missing.” A state court later ordered $840,000 in seized property returned to forfeiture victims, and the strike force was disbanded.

Effective August 1, a bill signed into law by Gov. Mark Dayton will require people in Minnesota to be convicted of a drug crime before their property can be seized through forfeiture.

Civil forfeiture for other reasons is still possible, but reining in property seizures under the pretext of drug activity is a good start.

Through civil forfeiture our local, state, and federal governments effectively declare we are presumed guilty until proven innocent. In other words, it is a system of tyranny by police and prosecutors. Americans should never accept tyranny, no matter what excuses people in government give for imposing it.

North Carolina is the only state with no civil forfeiture. Let us hope Minnesota and the rest of the states and the federal government get to where North Carolina is and ban all civil forfeitures.

[Originally published at Inside Sources]

Below is a video from John Oliver's HBO show "Last Week Tonight" discussing civil forfeiture laws.

 

Categories: On the Blog

‘If You Like Your Internet, You Can Keep Your Internet’ and Other Government Lies

Somewhat Reasonable - October 07, 2014, 12:49 PM

Conservatives and Libertarians have little faith or trust in government. We know the institution is inherently flawed – and self-serving.

Government violates the Wallet Rule. Which is:

You go out on a Friday night with your wallet.  You go out the following Friday night with my wallet. On which night are you going to have more fun?

Government is always working with our wallet – theirs is empty until they first fleece ours. They will thus never spend our money as prudently, wisely or well as do we.

Government is just another organism.  Like any other, its first priority is self-preservation – its second self-expansion. And worse than just about any other – it will do whatever it takes to accomplish these priorities.

Including lie its collective face off.

The Barack Obama Administration is the most government-expansive administration in our nation’s history. To that end, it has used any means necessary – including lying its collective face off. For instance:

If you like your health care plan/doctor – you can keep your health care plan/doctor.

This Administration’s obsessive government expansion occurs in the face of it being just like any other administration and government entity – incessantly, serially incompetent at doing just about anything.

All of which has led people – well beyond conservative and libertarian circles – here:

CNN Poll: Trust in Government at All-Time Low

Gallup Poll: Trust In Government Problem-Solving Reaches New Low

Which brings us to the current debate over the government grabbing huge new authority over the Internet.

President Obama’s Federal Communications Commission (FCC) – and its Obama-appointed Chairman Tom Wheeler – is contemplating fundamentally transforming how the government regulates the Web.  It’s called Title II Reclassification.

Title II is the uber-regulatory superstructure with which we have strangled landline phones – you know, that bastion of technological and economic innovation. Which do you find more impressive – your desktop dialer or your iPhone?

Title II regulations date back to the 1930s – so you know they’ll be a perfect fit for the ultra-modern, incredibly dynamic, expanding-like-the-universe World Wide Web.

This would be the most detrimental of all Information Superhighway road blocks. Rather than the omni-directional, on-the-fly innovation that now constantly occurs, Title II is a Mother-May-I-Innovate, top-down traffic congest-er. Imagine taking a 16-lane Autobahn down to just a grass shoulder.

But fret not, the regulators tell us.  They will wield just some – and not all – of their massive new powers.  They will practice “forbearance.”

“(F)orbearance” refers to  a special magic power that Congress gave the FCC…which gives the FCC the power to say “you know that specific provision of law that Congress passed? We decide it really doesn’t make sense for us to enforce it in some particular case, so we will “forbear” (hence the term ‘forbearance’) from enforcing it.”

Can we trust government to – forever and for always – leave regulatory powers on the table unused?

Can we trust this Administration – the most government-expansive ever – to do so?

Can we trust this particular FCC?

Coalition Warns FCC Chairman about FCC’s Increasing Politicization

In a letter sent today to Federal Communications Commission Chairman Tom Wheeler, a coalition of groups expressed concerns over the agency’s  loss of objectivity and impartiality in recent proceedings, especially the FCC’s ongoing Open Internet rulemaking.

The letter urges the Commission to keep partisan politics out of its decision-making process, to avoid spinning media coverage, and to focus on substance, not the total number of comments filed in controversial proceedings.

This FCC?

‘Most Transparent Ever?’ Behold the FCC’s Secret, Crony Socialist Meetings

This FCC?

FCC Chairman Won’t Allow His IG To Hire Any Criminal Investigators

We certainly cannot.  In what can we trust?

Aesop Knew: Regulators Regulate – It’s Their Nature

When Bureaucrats Determine Their Own Limits – There Are No Limits

You bet.

So when the government tells us – as it ramps up new, massive government power grabs – “If you like your Internet – you can keep your Internet?”

Don’t you believe it.

[Originally published at Human Events]

 

Categories: On the Blog

Carbon Footprints: Good, Bad and Ugly

Somewhat Reasonable - October 06, 2014, 10:52 PM

Australians are supposed to feel guilty because some bureaucrat in the climate industry has calculated that we have a very high per capita “carbon footprint.” By “carbon footprint,” they mean the amount of carbon dioxide gas produced by whatever we do. Every human activity contributes to our carbon footprint – even just lying on the beach breathing gently produces carbon dioxide.

Producing carbon dioxide is not bad – it is an essential gas in the cycle of life, and beneficial for all life. There is no proof whatsoever that human emissions cause dangerous global warming. Moreover, it is not per capita emissions that could affect the climate – it is total emissions, and on that measure Australia’s small contribution is largely irrelevant. This is just another PR weapon in the extreme green alarmist arsenal.

Even if carbon footprints were important, not all footprints are environmentally equal – some are good, some are bad and some are just plain ugly.

“Good” carbon footprints are the result of producing unsubsidised things for the benefit of others. An example is a grazier in outback Australia whose family lives frugally and works hard but has a high carbon footprint producing wool, mutton and beef from sustainable native grasslands, using quad bikes, diesel pumps, electricity, tractors, trucks, trains, planes and ships to supply distant consumers. Many productive Australians with good carbon footprints produce food and fibres, seafood and timber, minerals and energy for grateful consumers all over the world. Activities like this create a large “per capita carbon footprint” for Australia. That so few people can produce so much is an achievement to be proud of.

A “bad” carbon footprint is produced when government subsidies, grants, hand-outs, tax breaks or mandates keep unproductive or unsustainable activities alive, leaving their footprint, but producing little useful in return. The prime examples are subsidised green energy and the government climate industry, but there are examples in all nationalised or subsidised industries and activities. (Russia and East Germany easily met their initial Kyoto targets by closing decrepit Soviet-era nationalised industries.)

An “ugly” carbon footprint is produced by green hypocrites who preach barefoot frugalism to us peasants while they live an opulent lifestyle. Examples are the mansions, yachts and jet-setting of prominent green extremists such as Al Gore and Leonardo DiCaprio.

The ultimate ugly carbon hypocrites are those who organise and attend the regular meetings, conferences and street protests, drawing thousands of globe-trotting alarmists and “environmentalists” from all over the world by plane, yacht, car, bus, train and taxi to eat, drink, chant and dance while they protest about over-population, excessive consumption and heavy carbon footprints of “all those other people”.

Maybe they should lead by example and stop travelling, eating, drinking and breathing.

Categories: On the Blog

Celebrating The Work Of Nobel Prize Winning Economist, F.A. Hayek – A Man Who Has Made the 21st Century a Freer and More Prosperous Time

Somewhat Reasonable - October 06, 2014, 2:29 PM

Forty years ago, on October 9, 1974, the Nobel Prize committee announced that the co-recipient of that year’s award for economics was the Austrian economist, Friedrich A. Hayek. Never was there a more deserving recognition for one of the truly great free market thinkers of modern times.

The Nobel committee recognized his contributions, including “pioneering work in the theory of money and economic fluctuations and for [his] penetrating analysis of the interdependence of economic, social and institutional phenomena.”

Over a scholarly and academic career that spanned seven decades, Hayek was one of the leading challengers of Keynesian economics, a profound critic of socialist central planning, and a defender of the open, competitive free society.

The awarding of the Nobel Prize for Economics in 1974 represented capstone recognition to an intellectual life devoted to understanding the workings and superiority of social systems grounded in the idea and ideals of human freedom and voluntary association.

 

“Austrian” Influences on Hayek

Friedrich August von Hayek was born on May 8, 1899 in Vienna, Austria. He briefly served in the Austrian Army on the Italian front during World War I. Shortly after returning from the battlefield in 1918 he entered the University of Vienna and earned two doctorates, one in jurisprudence in 1921 and the other in political science in 1923. While at the university, he studied with one of the founders of the Austrian school of economics, Friedrich von Wieser.

But perhaps the most important intellectual influence on his life began in 1921, when he met Ludwig von Mises while working for the Austrian Reparations Commission. It is not meant to detract from Hayek’s own contributions to suggest that many areas in which he later made his profoundly important mark were initially stimulated by the writings of Mises. This is most certainly true of Hayek’s work in monetary and business-cycle theory, his criticisms of socialism and the interventionist state, and in some of his writings on the methodology of the social sciences.

In 1923 and 1924, Hayek visited New York to learn about the state of economics in the United States. After he returned to Austria, Mises helped arrange the founding of the Austrian Institute for Business Cycle Research, with Hayek as the first director.

Though Hayek initially operated the institute with almost no staff and only a modest budget primarily funded by the Rockefeller Foundation, it was soon recognized as a leading center for the study of economic trends and forecasting in central Europe. Hayek and the Institute were frequently asked to prepare studies on economic conditions in Austria and central Europe for the League of Nations.

 

Hayek as Opponent of Keynesian Economics

In early 1931, Hayek traveled to Great Britain to deliver a series of lectures at the London School of Economics. The lectures created such a sensation that he was invited to permanently join the faculty of the LSE. In the early fall of 1931 these lectures appeared in book form under the title Prices and Production. So widely influential did this book and his other writings become at the time that through a good part of the 1930s, Hayek was the third-most frequently cited economist in the English-language economics journals. (John Maynard Keynes and his Cambridge University colleague Dennis Robertson came in first and second.)

This began his decade-long challenge to Keynes’ emerging “new economics” of macroeconomics and its rationale for activist government manipulation through monetary and fiscal policy.

In 1931–1932, Hayek wrote a lengthy two-part review of Keynes’s Treatise on Money for the British journal Economica. It was considered a devastating critique of Keynes’ work, one that forced Keynes to rethink his ideas and go back to the drawing board.

At the same time, the Great Depression of the early 1930s served as the backdrop against which Hayek explained his own theory and criticized Keynes.

 

Monetary Mismanagement and the Great Depression

In Prices and Production (1931) and Monetary Theory and the Trade Cycle (1933) Hayek argued that in the 1920s the American Federal Reserve System had followed a monetary policy geared toward stabilizing the general price level. But that decade had been one of major technological innovations and increases in productivity. If the Federal Reserve had not increased the money supply, the prices for goods and services would have gently fallen to reflect the increased ability of the American economy to produce greater quantities of output at lower costs of production.

Instead, the Federal Reserve increased the money supply just sufficiently to prevent prices from falling and to create the illusion of economic stability under an apparently stable price level. But the only way the Fed could succeed in this task was to increase reserves to the banking system, which then served as additional funds lent primarily for investment purposes to the business community.

To attract borrowers to take these funds off the market, interest rates had to be lowered. Beneath the calm surface of a stable price level, interest rates had been artificially pushed below real market-clearing levels. That generated a misdirection of labor and investment resources into long-term capital projects that eventually would be revealed as unsustainable because there was not enough real savings available to complete and maintain them.

The break finally came in 1928 and 1929, when the Fed became concerned that prices in general were finally beginning to rise. The Fed stopped increasing the money supply, investment profitability became uncertain, and the stock market crashed in October 1929.

Hayek argued that the economic downturn that then began was the inevitable consequence of the investment distortions caused by the earlier monetary inflation. A return to economic balance required the writing down of unprofitable capital investments, a downward adjustment of wages and prices, and a reallocation of labor and other resources to uses reflecting actual supply and demand in the market.

But the political and ideological climate of the 1930s was one increasingly dominated by collectivist and interventionist ideas. Governments in Europe as well as the United States did everything in their power to resist these required market adjustments. Business interests as well as trade unions called for protection from foreign competition, as well as government support of various types to keep prices and wages at their artificial inflationary levels. International trade collapsed, industrial output fell dramatically, and unemployment increased and became permanent for many of those now out of work.

Throughout the 1930s Keynes presented arguments to justify activist monetary and fiscal policies to try to overcome the imbalances the earlier monetary manipulation and interventions had created. This culminated in Keynes’ 1936 book, The General Theory of Employment, Interest and Money, which soon became the bible of a new macroeconomics that claimed that capitalism was inherently unstable and could only be saved through government “aggregate demand management.”

Hayek and other critics of Keynesian economics were rapidly swept away in the euphoric belief that government had the ability to demand-manage a return to full employment.

 

Hayek as Critic of Socialist Central Planning

But while seemingly “defeated” in the area of macroeconomics, Hayek realized that what was at stake was the wider question of whether in fact government had the wisdom and ability to successfully plan and guide an economy. This also led him to ask profoundly important questions about how markets successfully function and what institutions are essential for economic coordination to be possible in a complex system of division of labor.

In 1935, Hayek edited a collection of essays titled Collectivist Economic Planning, which included a translation of Ludwig von Mises’ famous 1920 article, “Economic Calculation in the Socialist Commonwealth” on why a socialist planned economy was functionally unworkable. For the volume, Hayek wrote an introduction summarizing the history of the question of whether socialist central planning could work and a concluding chapter on “the present state of the debate” in which he challenged many of the newer arguments in support of planning.

This was followed by a series of articles over the next several years on the same theme: “Economics and Knowledge” (1937), “Socialist Calculation: The Competitive ‘Solution’” (1940), “The Use of Knowledge in Society” (1945), and “The Meaning of Competition” (1946). Along with other writings, they were published in a volume entitled, Individualism and Economic Order (1948).

 

Divided Knowledge and Market Prices

In this work Hayek emphasized that the division of labor has a counterpart: the division of knowledge. Each individual comes to possess specialized and local knowledge in his corner of the division of labor that he alone may fully understand and appreciate how to use. Yet if all of these bits of specialized knowledge are to serve everyone in society, some method must exist to coordinate the activities of all these interdependent participants in the market.

The market’s solution to this problem, Hayek argued, was the competitive price system. Prices not only served as an incentive to stimulate work and effort, they also informed individuals about opportunities worth pursuing. Hayek clearly and concisely explained this in “The Use of Knowledge in Society”:

“We must look at the price system as such a mechanism for communicating information if we want to understand its real function . . . The most significant fact about this system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to be able to take the right action.”

In elaborating his point, Hayek wrote that “The marvel is that in a case like that of a scarcity of one raw material, without an order being issued, without more than perhaps a handful of people knowing the cause, tens of thousands of people whose identity could not be ascertained by months of investigation, are made to use the material or its products more sparingly.”

Hayek added: “I am convinced that if it [the price system] were the result of deliberate human design, and if the people guided by the price changes understood that their decisions have significance far beyond their immediate aim, this mechanism would have been acclaimed as one of the greatest triumphs of the human mind.”

It was in this period, as well, that Hayek applied his thinking about central planning to current politics. In 1944 he published what became his most famous book, The Road to Serfdom, in which he warned of the danger of tyranny that inevitably results from government control of economic decision-making through central planning. His message was clear: Nazism and fascism were not the only threats to liberty. The little book was condensed in Reader’s Digest and read by millions, and resulted in Hayek going on a nationwide lecture tour in the United States that was a resounding success.

In 1949 Hayek moved to the United States and took a position at the University of Chicago in 1950 as professor of social and moral science. He remained there until 1962, when he returned to Europe, where he held positions at various times at the University of Freiburg in West Germany and the University of Salzburg in Austria.

 

The Spontaneous Order of Human Society

The realization that something so significant—the price system—was undesigned and not intended to serve the purpose it serves so well became the centerpiece of Hayek’s writings for the rest of his life. He developed the idea in several directions in another series of works, including, The Counter-Revolution of Science (1952); The Constitution of Liberty (1960); Law, Legislation and Liberty in three volumes (1973–1979); in various essays collected in Studies in Philosophy, Politics and Economics (1967) and New Studies in Philosophy, Politics, Economics and the History of Ideas (1978); and in his final work, The Fatal Conceit: The Errors of Socialism (1988).

His underlying theme was that most institutions in society and the rules of interpersonal conduct are, as the eighteenth-century Scottish philosopher Adam Ferguson expressed it, “the result of human action, but not the execution of any human design.” In developing this idea, Hayek consciously took up the task of extending and improving the notion of the “invisible hand” as first formulated by Adam Smith in The Wealth of Nations and refined in the nineteenth century by Carl Menger, the founder of the Austrian school of economics.

Hayek argued that many forms of social interaction are coordinated through institutions that at one level are unplanned and are part of a wider “spontaneous order.” To a large extent, he explained, language, customs, traditions, rules of conduct, and exchange relationships have all evolved and developed without any conscious design guiding them. Yet without such unplanned rules and institutions, society would have found it impossible to progress beyond a rather primitive level.

Another way of expressing this is that, in Hayek’s view, the unique characteristic of an advanced civilization is that no one mind (or group of minds) controls or directs it. In a small tribal society all members often share basically one scale of values and preferences; the chief or leader can know the potentialities of each member and can assign roles and duties so that the tribe’s physical and mental means can be applied more or less successfully to the common hierarchy of ends.

However, once the group passes beyond a simple level of development, any further social progress will require radical revision of the social rules and order: the complexity of social and economic activity will make it impossible for any individual to master the information necessary to coordinate the members of the group. Nor will the members continue to agree on preferences and values; their actions and interests will become more diverse.

An advanced society, therefore, must always be a “planless” society, that is, a society in which no one overall “plan” is superimposed over the actions and plans of the individuals making up the society. Instead, civilization is by necessity a “spontaneous order,” in which the participants use their own special knowledge and pursue their own individually chosen plans without a higher will or mind guiding them.

 

The Fallacy of Social Justice

The very complexity that makes it impossible to know all the information required to guide society, Hayek reasoned, makes it equally impossible to judge the “justice” or “worthiness” of an individual’s total actions. As a result, the popular call for “social,” or “distributive,” justice is inapplicable in a free society. Social justice requires not merely that individuals receive what is rightly theirs in general terms, but that individuals and groups also receive some stipulated distributional share of the society’s total output or wealth.

However, Hayek showed that in the market economy, distributions of income are not based on some standard of “deservedness,” but rather on the degree to which the individual has directly or indirectly satisfied consumer demand within the general rules of individual rights and property.

To attempt to distribute income shares by “deservedness” would require the government to establish some overarching standard for disbursing “social justice,” and would necessitate an economic system in which that government had the authority and the power to investigate, measure, and judge each person’s “right” to a share of the society’s wealth.

Hayek suggested that such a system would involve a return to the mentality and the rules of a tribal society: government would have to impose a single hierarchy of ends and would decide what each member should have and what should be expected from him in return. It would mean the end of the free and open society.

 

Hayek’s Appeal to Intellectual Humility

At the Nobel Prize ceremonies held in December 1974, at which the recipients received their awards, Hayek delivered a brief banquet dinner address in which he said that he wondered if there should be a Nobel Prize in a field like economics because the media often expects the award winner to deliver omniscient-like remarks on all the social and economic problems of the world.

The usefulness of Hayek receiving that Nobel Prize was that it enabled him to present a more formal lecture at the Nobel ceremonies on what he called “The Pretense of Knowledge,” a reminder to economists and policy-makers that we all know far too little to presume we can successfully plan and regulate the world through any political authority.

Thanks to his ideas, the 21st century can be a freer and more prosperous place in which to live, if only we take to heart his appeal to intellectual humility, allow each of us the liberty and latitude to plan our own lives with our individual limited knowledge, and rely upon the open market to coordinate all that we do through the competitive price system.

[Originally published at EpicTimes]

Categories: On the Blog

John Fund Criticizes AG Eric Holder on New Book Tour

Somewhat Reasonable - October 06, 2014, 12:47 PM

Part 1, published yesterday at Illinois Review, Thursday, October 1, recounted the stellar and above-board behavior of Attorney General Edwin Meese when serving President Reagan, as remembered by Joseph Morris, an honored member of the Chicago Federalist Society, who held the position of Reagan’s Assistant Attorney General during part of Meese’s tenure as Attorney General. Introductory comments by John Fund were included as a teaser to whet the appetite for further Fund reflections and trustworthy opinions in what is now Part 2.

John Fund reflected on how it was only six years ago that Obama was “a citizen of the world.” His candidacy was like the third coming. However, now the tide seems to be turning. Fund’s observation was linked to CBS’s “60 Minutes” TV program that he had viewed the night before, Sunday, 9/28. Fund drew a blank when he inquired whether anyone in attendance had viewed the program.

On the broadcast, Obama was asked by Steve Kroft why the U.S. had not anticipated the Islamic State’s threat. Although Obama acknowledged there had been an underestimation of what had been taking place in Syria, he proceeded to place blame on his Director of National Intelligence, James Clapper, and others in the intel community.

Shortly after the interview Ron Fournier, a supporter of President Obama, tweeted John Fund:

“I, me, my. It’s their fault.” “I, me, my. It’s their fault.” “I, me, my. It’s their fault.” “I, me, my. It’s their fault.” “I, me, my. It’s their fault.”

To further note:  Americans did turn away from watching Obama talk about the nation’s new war on ISIS on CBS’ “60 Minutes.”  The sympathetic news media mentioned the massive collapse in ratings for the program, but placed blame on the absence of a preceding football game.  It was not surprising that not one individual present to hear John Fund speak had watched Obama on “60 Minutes.”

Fund related an anecdote about James O’Keefe, an American conservative activist who gained national attention for his release of video recordings of workers at ACORN offices in 2009. Entering a polling place in Washington, D.C., dressed as a scruffy-looking individual, O’Keefe, seeing people in line, walked up and asked if Eric Holder was on the voter roll.

A quick check showed Holder as a registered voter. Without further communication, O’Keefe was handed Holder’s ballot. Deciding to play along, O’Keefe asked if he had to show his ID and even offered to go out to his car to fetch it, only to be told that poll workers weren’t permitted to ask for IDs. O’Keefe didn’t vote in Holder’s name, but Holder, upon hearing about O’Keefe’s stunt, later responded that “You don’t need an ID to come to the Justice Department.” He further explained that no one had to show an ID to go up to see him in his office whenever they wished to.

The Obama legacy of Al Sharpton

Another disappointing aspect of the Obama legacy was Holder, Obama, and Jarrett building up Al Sharpton as the most important civil rights leader in America today. As the new black leader and one whom Obama leans on for advice, Al Sharpton sits in on meetings at the White House; Sharpton’s telephone number is on Eric Holder’s speed dial; Sharpton vacations near where Valerie Jarrett spends her time at Martha’s Vineyard; and Sharpton, now that Holder has tendered his resignation, is engaged in conversation as to Holder’s replacement. All this, and Sharpton claims to have no income, claims to borrow all his suits from friends, and has never apologized for anything shady or dishonest he has ever done, e.g., the Tawana Brawley case of 1987 and the ‘Jena 6’ protest held in Jena, Louisiana, in 2007.

Al Sharpton’s behavior has left this nation divided, politically disconnected, and cynical. More than 50 years after the March on Washington led by Martin Luther King, race relations remain poor due to hucksters like Al Sharpton and Jesse Jackson. Some liberals are finally getting it, including Margaret Carlson, who said when appearing in August of last year on PBS’s Inside Washington: “We’ve gone from Martin Luther King to the Reverend Al Sharpton, and as a leader . . . it’s very dispiriting.” Rather than having the welcome mat rolled out for Al Sharpton at the White House, John Fund suggested that Sharpton should be run out of town. According to Fund, only a small segment of blacks support Sharpton.

Questions entertained by John Fund

  • In referring jokingly to Valerie Jarrett as President Jarrett, John Fund revealed that Jarrett has a 24-hour Secret Service detail, something Obama’s Chief of Staff doesn’t even receive.  “But after all, the President does play a lot of golf and is often delayed.”  Fund left it at that.
  • Regarding the possibility of finding out more about the scandal involving Lois Lerner: the Justice Department, having started an investigation in May of 2013, seems to have no intention of following through. The same holds true for the host of other scandals associated with the Obama administration.
  • In response to a question about Holder’s replacement, Fund thought the individual would be cut from the same cloth as Holder.  After all, President Obama plans to issue a slew of executive orders after the November election. He will need someone to defend Obama’s elastic view of executive power divorced from the Constitution.

Related as fact by John Fund was how Tom Perez, as Secretary of Labor, makes use of a separate private computer in his agency which operates as a shadow government. Other officials have been caught using private alias email accounts to hide their correspondence. Two top EPA officials used alias email accounts when corresponding with environmental groups: EPA Region 8 Administrator James Martin, as well as EPA Administrator Lisa Jackson, who has since resigned.

The vast majority of the federal government’s inspectors general were appointed by Democrats.  It is telling that forty-seven out of seventy-two signed a letter in August of this year stating that they can no longer do their jobs effectively because of their inability to get the documents they need to conduct their individual investigations.

The Inspector General Act of 1978, as amended, establishes the responsibilities and duties of an IG. The Act has been amended over the years to increase the number of agencies with statutory IGs; a 1988 amendment established IGs in smaller, independent agencies, and there are now 72 statutory IGs.

The primary job of an inspector general is to detect and prevent fraud, waste, abuse, and violations of law, and to promote economy, efficiency and effectiveness in the operations of the federal government.  Given the dissatisfaction of two-thirds of this nation’s inspectors general, it does not bode well for this nation, nor does it create the environment necessary to rebuild the trust in government that is so lacking among the American people.

What happened to the promise candidate Obama made to the American people that his government would be the most open and transparent ever, if he were elected? This promise has fallen by the wayside. Perhaps it was never meant to be.

[Originally published at Illinois Review]

 

Categories: On the Blog

Municipalities GON Wild!

Somewhat Reasonable - October 06, 2014, 12:36 PM

Recently two cities, Chattanooga, Tennessee, and Wilson, North Carolina, petitioned the federal government, via the FCC, complaining that state laws constrain them from the municipal provision of broadband services, that is, from building a government owned network (GON). In other words, these municipalities want to expend resources to build and operate broadband systems without following any of the regulations that govern private sector providers. To overcome the states’ rightful authority, the city governments have proposed that the FCC preempt state law and empower municipalities in ways that upset the political structure of the U.S.

While models of municipality creation vary widely around the world, in the United States how they are created is fairly clear. The U.S. Constitution empowers states as the primary political entity. The federal government itself is also a creation of the states, and of the people, with the Constitution placing restraints on government broadly, at the agreement of the states. States are also empowered to create subdivisions, generically referred to as municipalities, as they see fit. Ultimately, then, responsibility for the municipalities generally falls to the states.

FCC intervention into this relationship between states and municipalities would have profound negative effects as was explained in the FCC filing by Madery Bridge. Municipalities, untethered from responsibility to the state, could partake in risky schemes of tax funded adventurism placing the entire state and all its citizens at risk. And government owned networks have proven risky indeed.

For years, municipalities around the country have tried, and ultimately failed, either to set up their own communications networks or to partner with private companies to get into the business of broadband. The list of failures is long and keeps growing, and includes Utah’s UTOPIA, Burlington, Vermont, Chicago, Seattle, Tacoma, Minnesota’s FiberNet, the Northern Florida Broadband Authority, Philadelphia and Orlando. To be clear, this is a partial list and does not include the many systems that will not disclose whether they are already being bailed out with taxpayers’ money. The reasons for the failures are numerous, but they typically result in taxpayer funds being wasted. Some would nit-pick the details of the failures, but the fact remains that taxpayer money was put at risk, often without approval of taxpayers, and most often squandered.

Even still, some municipalities want to plow forward, heedless of the lessons, believing that they are somehow different. As mentioned, some have been frustrated by state laws in at least twenty states that were designed to prevent fiscal folly on behalf of the localities, laws that shield all citizens of the state from financial risk. Adopting the failed model of municipal provision of communications services is the wrong idea, as many municipalities across the country can attest.

Municipalities face many risks in building and operating broadband networks. As has been seen in the routine failures, governments chronically underestimate the cost of building out and maintaining networks, and chronically overestimate adoption rates.

Technology infrastructure investment, like most infrastructure investment, is not for the faint of heart or the partially committed. Municipalities and states across the country are constantly challenged by maintaining the relatively static infrastructure that they have already taken on, such as streets, sidewalks, bridges and buildings.

Technology is vastly more challenging. One must jump in with both feet, constantly updating the technology and business models. As online services grow more sophisticated, customers have become accustomed to regular upgrades, challenging the ability of governments to keep up with demand. Those challenges are multiplied a hundredfold when the complications of delivering video and voice are added. Video services alone are in a constant state of upgrade, whether in providing more channels, more programming, or services that allow customers to customize their own video experience, such as video on demand.

Of course, the greater the variety of complicated technology and services offered, the more expensive building and operating the system becomes. In turn, even more taxpayer money is placed at risk, because when these systems fail it is not private investors who lose money but taxpayers across the state, often without any say in the matter, the vast majority of whom received absolutely no benefit. When local and state coffers are depleted because of these sorts of risky government bets, the cry is for more tax revenue or for an outright bailout.

In general, technological innovation continues to far outpace the speed of government, which simply cannot compete with the market. So, in the case where a municipal system is competing against a private system, about the time the municipal system is up and running, private networks will offer something better, cheaper, and faster. Even in cases where there is no private sector competition, government operated networks will never keep pace with public expectations. Broadband systems are not like a public water utility, where the same pipes are used for one hundred years to deliver the same product in the same way.

The challenges government owned networks pose for the preservation of free speech are also daunting. The theoretical became real in San Francisco, a city that often brags of its rich tradition of civil liberties. There, a municipal communications system was purposely shut down to prevent people from engaging in specific, legal communications. In a chilling statement, officials pointedly said, “Cellphone users may not have liked being incommunicado, but BART officials told the SF Appeal, an online paper, that it was well within its rights. After all, since it pays for the cell service underground, it can cut it off.”

Whether San Francisco should be paying for municipal communications systems at all is a question for the city and state. The more pressing concern is the freedom of speech problems that arise when a municipality owns a communications system.

Importantly, it is rarely the case that government is trying to serve someone with no Internet access option. Rather, the most common motivation for beginning the government owned network fantasy is that economic development groups are swayed by traveling consultants. Their siren song is too hard for some communities to resist, and repeated past failures tend not to be mentioned.

Such localities could better use their time and resources by providing clearer and more rapid approval decisions for wireless facilities, as wireless rapidly has become, and continues to be, the favored method of accessing broadband. Policymakers should sponsor initiatives to encourage broadband deployment into unserved areas using incentives for private sector companies that risk their own capital.

Where state officials of any sort are calling for FCC action, their arguments are merely an attempt to end-run the state’s political process and the will of the people. They seek to create public policy where they were not able to do so within their own state through proper channels. This is policymaking by the ruling class rather than by the will of the people. State policies should be determined through state legislation or at least through state rulemaking.

Allowing the states to continue to experiment with how broadband will be delivered to the greatest number of their residents is absolutely the right policy to pursue. The FCC should stand on the side of greater creativity and innovation, and the law, and not intervene in state law.

Categories: On the Blog

Pence 2.0: New, But Not Exactly Improved

Somewhat Reasonable - October 06, 2014, 11:40 AM

Many commentators on the right have Indiana Gov. Mike Pence on their 2016 presidential nominee shortlist. After all, they say, Pence has earned it.

Pence spent 12 years in Congress building his conservative street cred. He railed against Bush-era policies such as the bailouts and No Child Left Behind, and he launched a quixotic bid to replace John Boehner as minority leader in 2006.

It’s difficult for social conservatives or tax-cutting supply-siders not to love Mike Pence. Only such a self-proclaimed “happy warrior for conservatism” could buck his own party, become the third-highest-ranking Republican in the House, and set fundraising records while becoming the odds-on favorite to replace party darling Mitch Daniels as governor of the Hoosier State.

When asked about his White House ambitions, Pence brushes off the question and says he’s focused on Indiana. But his record says otherwise. The person now occupying the governor’s mansion in Indianapolis doesn’t look like the Mike Pence conservatives originally came to trust. Over the past 18 months, a Pence 2.0 has emerged—one who embraces big-government solutions while claiming to fight against them.

Consider, for example, the Common Core education standards, against which parents around the nation have protested as a nationalization of curriculum with politicized, dumbed-down requirements. In April, Pence penned a triumphant piece in the Indianapolis Star touting Indiana’s first-in-the-nation exit from the controversial program to impose national academic standards.

What happened next came courtesy of Pence 2.0. Indiana didn’t return to its rigorous pre-Common Core standards, described by The Heritage Foundation as “state-driven and, most importantly, supported by teachers and parents.”

Instead, Pence 2.0 implemented Common Core by another name.

Writing at National Review, Stanley Kurtz says Indiana’s new standards are “nothing but a slightly mangled and rebranded version of what they supposedly replace.” Some education experts actually proclaimed the new standards to be even worse than Common Core. Indiana voters responded by ousting two Pence-backed Common Core supporters in the GOP legislative primary.

Unfortunately, Pence 2.0 is just getting started. Faced with a decision about Obamacare, the once-staunch fiscal conservative who led the charge against the law is currently in talks with the federal government to implement Obamacare’s massive Medicaid expansion.

Pence’s proposal, the “Healthy Indiana Plan 2.0,” would use Obamacare dollars to give Medicaid benefits to more than 375,000 able-bodied adults—more than three-quarters of whom have no children. In doing so, Pence 2.0 affirms there’s no principle limiting who should be eligible for government-funded welfare.

But to hear Pence 2.0 tell it, you would think his move to expand Medicaid is positively Reaganesque. In fact, he invoked the Gipper repeatedly while unveiling the plan at the American Enterprise Institute.

A growing number of health policy experts aren’t buying Pence’s revisionism. Writers at Forbes debunked Pence’s argument that his plan is a block grant, noting,

“By definition, Medicaid block grants give states a fixed, lump sum of federal dollars in exchange for broad autonomy in providing Medicaid benefits. Pence’s plan features neither of these elements. Under Pence’s ObamaCare expansion, Indiana will draw down increasing amounts of ObamaCare in exchange for adding more people to the Medicaid rolls.”

The Heritage Foundation calls the plan “disappointing.” And over at The Federalist, Dean Clancy writes, “Two camps are emerging among GOP governors: those who oppose the Obamacare expansion, and those who pretend to. Pence has now officially joined the pretender camp.”

People often say politicians’ campaign promises disappear when they start to govern. That’s certainly the case with Mike Pence 2.0, who has been rebranding big-government policies as conservative and apparently hoping voters won’t notice. Here’s hoping they do.

John Nothdurft (jnothdurft@heartland.org) is director of government relations for The Heartland Institute.

Categories: On the Blog

Are We Destined To Surf The UN-Net?

Somewhat Reasonable - October 05, 2014, 11:06 AM

Earlier this year, the Obama administration asked ICANN (Internet Corporation for Assigned Names and Numbers) to create a means of overseeing the Internet after U.S. governance ends, which is scheduled to happen in another year. The administration decided not to maintain the current U.S. minimum-oversight role.

This decision led to a debate as to whether transferring control of the Internet root zone functions from the U.S. Department of Commerce to some yet-to-be-determined multi-stakeholder organization is a good thing, especially since some governments want to undermine the permissionless, free-speech Internet built under U.S. oversight.

ICANN is a non-profit organization created to manage, according to its bylaws, “the coordination of maintenance and methodology of several databases of unique identifiers related to the namespaces of the Internet, and ensuring the network’s stable and secure operation…including policy development for internationalization of the DNS system, introduction of new generic top-level domains (TLDs), and the operation of root name servers.”

And as described in the memorandum of understanding with the U.S. government,  “ICANN’s primary principles of operation have been described as helping preserve the operational stability of the Internet; to promote competition; to achieve broad representation of the global Internet community; and to develop policies appropriate to its mission through bottom-up, consensus-based processes.”

But things began changing rapidly shortly after a Commerce Department official downplayed the threat of top-down control of the Internet by authoritarian governments in the absence of U.S. oversight.

ICANN has advisory committees that advise the board on the needs of various stakeholders. One of these is the Governmental Advisory Committee, made up of representatives of national governments from around the world. Right now all they do is offer advice; their recommendations can be ignored by a majority vote of the board.

But last month ICANN proposed changing its rules to say that government opinion must be followed unless two-thirds of the board objects. Not wasting a moment, Iran, a virtual standard-bearer of repressive authoritarian government, has proposed that government opinion be a mandate if a simple majority agrees. The Obama administration’s view is not just being ignored, but essentially mocked, as authoritarian governments move to make ICANN another puppet of government.

This is not the UN taking control of the Internet yet. There is, however, intense international pressure to have a UN-type organization take control of Internet governance, fundamentally changing ICANN into an international governmental regulatory agency. There is every reason to believe that the core ICANN functions, which the International Telecommunication Union (a small U.N. agency) is already coveting and which many countries are already lobbying to have turned over to U.N. control, will eventually be consumed. Such a change would absorb into the U.N. yet another independent multi-stakeholder organization set up to perform technical functions of interest to the global community, ending a long history of independence for such bodies.

ICANN has agreed to provide a brief comment period before moving to make the change.  But realistically, this fight is just beginning, and the debate will likely go on for years—an ongoing battle between those who favor freedom for individuals to direct their own destiny and those who stand against such liberty.

The U.S. will need to hang tough, stay committed to its fundamental principles, recognize real threats instead of ignoring them, and work to convince the members of the international Internet community that they must stand steadfast against a bloc of repressive regimes. Another option is to sacrifice the open Internet, our freedoms, and the Internet industry. A third option – disconnecting from the global Internet – may prove to be the only way if the U.S. fails.

[Originally published at Institute for Policy Innovation]

Categories: On the Blog

A Rare Debate on the “Settled Science” of Climate Change

Somewhat Reasonable - October 04, 2014, 10:31 AM

CHICAGO, October 2, 2014 — In 1997 during the Kyoto Protocol Treaty negotiations in Japan, Dr. Robert Watson, then Chairman of the Intergovernmental Panel on Climate Change, was asked about scientists who challenge United Nations conclusions that global warming was man-made. He answered, “The science is settled…we’re not going to reopen it here.” Thus began one of the greatest propaganda lines in support of the theory of human-caused global warming.

On June 19 this year, the University of Northern Iowa held a debate on climate change titled, “Climate Instability: Interpretations of Scientific Evidence.” Dr. Jerry Schnoor of the University of Iowa presented an effective case for the theory of man-made warming and I presented the case for climate change driven by natural causes. The video contains 30 minutes of presentation by each side and then 30 minutes of questions and rebuttal, presented to a small audience of faculty and students.

Formal debates on the theory of human-caused warming are somewhat rare in our society today. Former Vice President Al Gore stated on the CBS Early Show on May 31, 2006:

…the debate among the scientists is over. There is no more debate. We face a planetary emergency. There is no more scientific debate among serious people who’ve looked at the science…Well, I guess in some quarters, there’s still a debate over whether the moon landing was staged in a movie lot in Arizona, or whether the earth is flat instead of round.

EPA Administrator Lisa Jackson declared to Congress in 2010, “The science behind climate change is settled, and human activity is responsible for global warming.” Even President Obama in his 2014 State of the Union address said, “But the debate is settled. Climate change is a fact.”

The Los Angeles Times announced last year that they will not print opinions that challenge the concept that humans are the cause of climate change. The BBC has taken a similar position. Many of our universities will not allow an open debate on climate change. The Department of Meteorology and Climate Science at San Jose State University posted an image last year of two professors holding a match to my book.

In contrast to the “no debate” positions of our political leaders, news media, and many universities, the event at the University of Northern Iowa was a breath of fresh air. Thanks to Dr. Catherine Zeman and the Center for Energy and Environmental Education at UNI for their sponsorship of an open debate on the “settled science” of climate change.

Steve Goreham is Executive Director of the Climate Science Coalition of America and author of the book The Mad, Mad, Mad World of Climatism: Mankind and Climate Change Mania.

[Originally published at Communities Digital News]

Categories: On the Blog

Sean Parnell: The Self-Pay Patient

Somewhat Reasonable - October 03, 2014, 2:54 PM

On August 6, 2014, Sean Parnell did a presentation about his new book, The Self-Pay Patient: Affordable Healthcare Choices in the Age of Obamacare as a part of The Heartland Institute’s Author Series. During the presentation, Parnell explained why he wrote the book, what it means to be a self-pay patient, why one might want to be a self-pay patient, and what the book means for the free-market healthcare movement.

Being a self-pay patient means obtaining healthcare without depending on the government. A self-pay patient may want to seek care without the use of traditional insurance, paying in cash or using an insurance plan with a high deductible. A self-pay patient basically wants to operate in more of a free market with regard to healthcare.

So, why become a self-pay patient? Parnell gives three main reasons. First, many do not want the government involved in their healthcare: they don’t want their medical information in a national database, and they don’t want the government telling them what type of insurance they need or what it must cover. Another reason is to simply eliminate the bureaucracy and the associated headache of dealing with a third party. The last main reason Parnell discusses is that being a self-pay patient is less expensive.

The lower cost of being a self-pay patient is the main focus of Parnell’s book and presentation. He gives many examples and resources showing how this approach can save money. For example, Parnell refers to a man who saved thousands of dollars by traveling to a doctor in Oregon after comparing prices for a procedure. He also mentions a website that compiles price lists for those who are looking for cheaper options. These examples are more in line with the real free market that Parnell desires.

Patients are not the only ones who may want to operate this way. Parnell describes doctors and clinics that also cut out the middleman. These clinics would rather accept only cash, check, or card than deal with insurance and the government. In many cases, they handle only a limited range of procedures.

But what about dealing with major issues? Parnell describes various mechanisms a person could use to ensure coverage when dealing with a catastrophic event. One example is a “healthcare sharing ministry,” a group whose members share medical expenses. While it operates in a manner similar to a traditional insurance company, the savings can be substantial.

When listening to Parnell, it is obvious that he has a wealth of information regarding the healthcare system. The solutions that he provides are a breath of fresh air for those who are concerned with the continuing implementation of Obamacare. While we may still be far from a free market in healthcare, becoming a self-pay patient may get us a little closer to that goal.

Categories: On the Blog

Traffic Congestion in the World: 10 Worst and Best Cities

Somewhat Reasonable - October 03, 2014, 2:50 PM

The continuing improvement in international traffic congestion data makes comparisons between cities around the world far easier. The annual reports (2013) by Tom Tom have been expanded to include China, adding the world’s second largest economy to the previously produced array of reports on the Americas, Europe, South Africa, and Australia/New Zealand. A total of 160 cities are now rated in these Tom Tom Traffic Index reports. This provides an opportunity to list the 10 most congested and 10 least congested cities among the rated cities.

Tom Tom provides all-day congestion indexes and indexes for peak hours (the heaviest morning and evening traffic hours). The traffic indexes rate congestion by the additional time necessary to make a trip compared with the same trip under free flow conditions. For example, an index of 10 indicates that a 30 minute trip would take 10 percent longer, or 33 minutes. An index of 50 means that a 30 minute trip will, on average, take 45 minutes.
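A minimal sketch of that arithmetic, with the function name and sample values of my own choosing rather than anything published by Tom Tom:

```python
def congested_trip_minutes(free_flow_minutes, congestion_index):
    """Trip time under congestion, given a Tom Tom-style index.

    An index of 10 means trips take 10 percent longer than free flow.
    """
    return free_flow_minutes * (1 + congestion_index / 100.0)

print(congested_trip_minutes(30, 10))   # 33.0 minutes
print(congested_trip_minutes(30, 50))   # 45.0 minutes
print(congested_trip_minutes(30, 126))  # 67.8 minutes -- roughly Moscow's peak hour figure below
```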

Congestion in Peak Hours: 10 Most Congested Cities

This article constructs an average peak hour index, using the morning and evening peak period Tom Tom Traffic Indexes for the 125 rated metropolitan areas with principal urban areas of more than 1,000,000 residents. The peak hour index is used because peak hour congestion is generally of more public policy concern than all day congestion. This congestion occurs because of the concentration of work trips in relatively short periods of time. Work trips are by no means the majority of trips, but it can be argued that they cause the most congestion. Many cities have relatively little off-peak traffic congestion.
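The averaging step itself is simple arithmetic; here is an illustrative sketch of it (the city names and index values are hypothetical, not actual Tom Tom figures):

```python
# Average the morning and evening peak indexes into a single peak hour index,
# then rank cities from most to least congested. Values are illustrative only.
peak_indexes = {
    "City A": (130, 122),  # (morning peak index, evening peak index)
    "City B": (100, 116),
    "City C": (70, 80),
}

peak_hour_index = {city: (am + pm) / 2 for city, (am, pm) in peak_indexes.items()}

for city, idx in sorted(peak_hour_index.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{city}: {idx:.1f}")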

The two most congested cities are in Eastern Europe, Moscow and Istanbul (which stretches across the Bosporus into Asia). Four of the most congested cities are in China, three in Latin America (including all that are rated) and one is in Western Europe (Figure 1).

Moscow is the most congested city, with a peak hour index of 126. This means the average trip that takes 30 minutes in free flow conditions will take 68 minutes during peak hours. Moscow has a limited freeway system, but its ambitious plans could relieve congestion. The city has undertaken a huge geographical expansion program, with the intention of relocating many jobs to locations outside the primary ring road. This dispersion of employment, if supported by sufficient road infrastructure, could lead to improved traffic conditions.

Istanbul is the second most congested city with a peak hour traffic index of 108. The average free flow 30 minute trip would take 62 minutes during peak hours.

Rio de Janeiro is the third most congested city, with a peak hour traffic index of 99.5. The average free flow 30 minute trip takes 60 minutes due to congestion during peak hours.

Tianjin, which will achieve megacity status in 2015, and which is adjacent to Beijing, is the fourth most congested city, with an index of 91. In Tianjin, the peak hour congestion extends a free flow 30 minute trip to 57 minutes.

Mexico City is the fifth most congested city, with a peak hour traffic index of 88.5. The average free flow 30 minute trip takes 57 minutes due to congestion.

Hangzhou (capital of Zhejiang, China), which is adjacent to Shanghai, has the sixth worst traffic congestion, with a peak period traffic index of 87. The average 30 minute trip in free flow takes 56 minutes during peak hours.

Sao Paulo has the seventh worst traffic congestion, with a peak hour index of 80.5. The average 30 minute trip in free flow takes 54 minutes during peak periods. Sao Paulo’s intense traffic congestion has long been exacerbated by truck traffic routed along the “Marginale” near the center of the city. A ring road is now mostly complete, but the section most critical to relieving truck-related congestion is yet to be opened.

Chongqing has the eighth worst traffic congestion, with a peak hour index of 78.5. As a result, a trip that would take 30 minutes in free flow conditions takes 54 minutes during peak hours.

Beijing has the ninth worst traffic congestion, with a peak hour index of 76.5. As a result, a trip that should take 30 minutes in free flow is likely to take 53 minutes during peak hours. In spite of recent reports of its intense traffic congestion, Beijing rates better than some other cities. There are likely two reasons for this. With its seventh ring road now planned, Beijing has a top-flight freeway system. Its traffic is also aided by its dispersion of employment: the lower density, government-oriented employment core is flanked on both sides by major business centers (“edge cities”) on the Second and Third Ring Roads. This disperses traffic.

Brussels has the 10th worst peak hour traffic congestion, with an index of 75. A trip that would take 30 minutes at free flow takes 53 minutes in peak hour congestion.

Seven of the 10 most congested cities are megacities (urban areas with populations over 10 million). The exceptions are Hangzhou, Chongqing and Brussels. Brussels has by far the smallest population, at only 2.1 million residents, little more than one-third the size of the second smallest city, Hangzhou.

Most Congested Cities in the US and Canada

The most congested US and Canadian cities rank far down the list. Los Angeles ties with Paris, Marseille, and Ningbo (China), with a peak hour congestion index of 65. It may be surprising that Los Angeles does not rank much higher. Los Angeles has been the most congested city in the United States since displacing Houston in the 1980s. The intensity of Los Angeles traffic congestion is driven by the highest urban area density in the United States and by important gaps left where planned freeways were canceled. Nonetheless, Los Angeles is aided by a strong dispersion of employment, which helps make its overall work trip travel times the lowest among world megacities for which data is available. Part of the Los Angeles advantage is its high automobile usage, which shortens travel times relative to megacities with much larger transit market shares (such as Tokyo, New York, London, and Paris).

Vancouver is Canada’s most congested city, with a peak period index of 62.5, giving it the 27th worst traffic congestion, in a tie with Stockholm. Vancouver had exceeded Los Angeles in traffic congestion in the 2013 mid-year Tom Tom Traffic Index report.

Least Congested Cities

All but one of the 10 least congested large cities in the Tom Tom report are in the United States. The least congested is Kansas City, with a peak period index of 19.5, indicating that a 30 minute trip in free flow is likely to take 36 minutes due to congestion. Kansas City has one of the most comprehensive freeway systems in the United States and has a highly dispersed employment base. US cities also occupy the second through the sixth least congested positions (Cleveland, Indianapolis, Memphis, Louisville and St. Louis). Spain’s Valencia is the seventh least congested city, while the eighth through 10th positions are taken by Salt Lake City, Las Vegas and Detroit.

Cities Not Rated

There are a number of other highly congested cities that are not yet included in international traffic congestion ratings. Data in the 1999 publication Cities and Automobile Dependence: A Sourcebook indicated that the greatest density of traffic among rated cities was in Seoul, Bangkok, and Hong Kong, while Singapore, Kuala Lumpur, Jakarta, Tokyo, Surabaya (Indonesia), Zürich, and Munich also had intense traffic congestion. Later data would doubtless add Manila to the list. The cities of the Indian subcontinent also experience extreme, but as yet unrated, traffic congestion. It is hoped that traffic indexes will soon be available for these and other international cities.

Determinants of Traffic Congestion

An examination (regression analysis) of the peak period traffic indexes indicates an association between higher urban area population densities and greater traffic congestion, with a coefficient of determination (R²) of 0.48, which is significant at the one percent level of confidence (Figure 2). This is consistent with other research equating lower densities with faster travel times and increasing automobile use in response to higher densities.
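For readers curious how such a coefficient of determination is computed, here is a minimal sketch. It uses only the regional averages from the table below to illustrate the calculation; it does not reproduce the 0.48 figure, which comes from the 125 individual metropolitan areas.

```python
import numpy as np

# Regional averages from the accompanying table, used only to illustrate the
# computation of a linear fit and R^2.
density = np.array([4600, 5000, 15700, 11800, 19600, 3100, 8700, 8300])   # persons per square mile
congestion = np.array([49.2, 49.4, 64.9, 80.8, 89.5, 37.1, 47.4, 52.4])   # peak hour congestion index

slope, intercept = np.polyfit(density, congestion, 1)          # ordinary least squares, degree 1
r_squared = np.corrcoef(density, congestion)[0, 1] ** 2        # square of Pearson correlation

print(f"peak index ~= {intercept:.1f} + {slope:.4f} * density (persons per square mile)")
print(f"R^2 = {r_squared:.2f}")
```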

At the regional level, a similar association is apparent. The United States, with the lowest urban population densities, has the least traffic congestion. Latin America, Eastern Europe, and China, with higher urban densities, have worse traffic congestion. Density does not explain all the differences, however, especially among geographies outside the United States. Despite its high density, China’s traffic congestion is less intense than that of Eastern European and Latin American cities. It seems likely that this is, at least in part, due to the better matching of roadway supply with demand in China, with its extensive urban freeway systems. Further, the cities of China often have a more polycentric employment distribution (Table).

Traffic Congestion & Urban Population Density

Region                     Peak Hour Congestion   Density (Per Square Mile)   Density (Per KM2)
Australia & New Zealand    49.2                   4,600                       1,800
Canada                     49.4                   5,000                       1,900
China                      64.9                   15,700                      6,100
Eastern Europe             80.8                   11,800                      4,500
Latin America              89.5                   19,600                      7,600
United States              37.1                   3,100                       1,200
Western Europe             47.4                   8,700                       3,400
South Africa               52.4                   8,300                       3,200

Peak Hour Congestion: Average of Tom Tom Peak Hour Congestion Indexes 2013
Population Densities: Demographia World Urban Areas

 

Both of these factors, high-capacity roadways and the dispersion of population as well as jobs, are also important contributors to the lower congestion levels in the United States.

Wendell Cox is principal of Demographia, an international public policy and demographics firm. He is co-author of the “Demographia International Housing Affordability Survey” and author of “Demographia World Urban Areas” and “War on the Dream: How Anti-Sprawl Policy Threatens the Quality of Life.” He was appointed to three terms on the Los Angeles County Transportation Commission, where he served with the leading city and county leadership as the only non-elected member. He was appointed to the Amtrak Reform Council to fill the unexpired term of Governor Christine Todd Whitman and has served as a visiting professor at the Conservatoire National des Arts et Metiers, a national university in Paris.

Photo: On the Moscow MKAD Ring Road

 

[Originally published at New Geography]

Categories: On the Blog

Stay Away From Muni Broadband

Somewhat Reasonable - October 03, 2014, 1:14 PM

In this 3-minute video, titled “Stay away from Muni Broadband,” Scott Cleland lays out multiple reasons why we should be wary of municipal broadband.

First of all, it is preposterous to think of the government as just another competitor. The government does not operate under the same rules as its private sector counterparts; in fact, the government makes the rules. It sets taxes, regulates industry, and can erect barriers to entry. By using these powers, the government holds a large advantage over its competition. As Cleland states, “you can’t fight city hall.”

Competing with the government can easily become an unfair fight. If the government begins to lose, it can use its powers to gain an edge. By rewriting regulations or fees, the government can injure its competition, an advantage its private-sector rivals do not have. These advantages also come at the expense of the taxpayer.

At best, municipal broadband is a waste of taxpayer money. As Cleland states, “In the past, these municipal broadband things have been boondoggles and have cost local and state coffers millions upon millions upon tens-of-millions of bonds that then were essentially bankrupt.” It is also a gamble: the government spends money trying to compete with the private sector even though it has no experience in the field.

The last argument Cleland gives against municipal broadband is the possibility of government snooping. With the government running the internet service, the opportunity opens up for it to check your email or monitor your internet searches.

This short video by Cleland does a great job of explaining some of the arguments against municipal broadband. The government should stick to tasks we deem necessary while leaving the internet in the hands of the private sector.

Categories: On the Blog

Texas Textbooks Need Science Reality Check

Somewhat Reasonable - October 03, 2014, 11:02 AM

Dressed as T-Rex, Sandra Calderon talks with Nick Savelli prior to a State Board of Education public hearing on proposed new science textbooks, Tuesday, Sept. 17, 2013, in Austin, Texas. A new law is in place that gives school districts the freedom to choose their own instructional materials, including software, electronic readers, or textbooks, with or without board approval. (AP Photo/Eric Gay)

Would you want the textbooks at your child’s school to teach your kids the Soviet Union still exists and is the greatest danger your child will face in their lifetime? Would you want the computers at your child’s school to use dial-up internet or run on Windows 98? Of course not, because these resources are out of date, and parents want their kids to get the best education possible.

To ensure the quality of the education provided to students, the Texas State Board of Education has begun the process of updating its textbooks to reflect the latest information and advancements in history and science, because part of giving kids the best education possible means giving them access to the best resources available.

We no longer teach kids about the current status of the Berlin Wall, so why would we teach our kids about climate change by using climate models and textbooks that are similarly out-of-date and out-of-touch with reality?

Many people don’t know mean global temperatures have not risen significantly for the last 17 years, meaning no significant global warming has occurred since you first bought Windows 98. Some peer-reviewed scientific studies suggest the current period of no global warming has lasted as long as 20 years. The scientific analyses and climate models that predicted drastic global warming over this period were simply wrong, so it makes sense to reexamine the issue in light of new evidence and teach our children accordingly.

This common sense approach to science and education has somehow managed to ruffle the feathers of left-leaning special-interest groups who want to protect the status quo. These groups repeat the tired and thoroughly debunked claims that 97 percent of climate scientists believe climate change is real, man-made, and dangerous. But citing cherry-picked survey data and flawed studies does not help children understand the science of the climate, nor does using climate models that have been so inaccurate in predicting temperatures for the past two decades. The reason these models are such poor predictors of global temperatures is that they assume we have complete knowledge of the climate system, or at least enough to predict the future accurately. But as we’ve seen, that isn’t true.

Assuming we have complete knowledge of the global climate system is like assuming we know everything there is to know about the creatures in the oceans or the ecosystems of the rainforests. We just don’t have all that information. Scientists are constantly discovering new species of animals we never knew existed, such as the Yin-Yang Frog from Vietnam and the subterranean blind fish, which lives in an underground river that runs for 4.3 miles through limestone caves.

Appeals to a “scientific consensus” are problematic, too, because scientists don’t always know what they think they know. For example, scientists have rediscovered several species of animals they had long considered extinct, such as the Coelacanth, a species of fish thought to have disappeared 65 million years ago, until it was rediscovered in 1938. Another species, the Bermuda petrel, was believed to have been extinct since the 1620s, but it was rediscovered in 1951, falsifying more than 331 years of “scientific consensus.” Although perhaps not as cute as the New Caledonian crested gecko, all these animals demonstrate science is never settled, and we must adjust our opinions to reflect new evidence as it becomes available. That is the scientific way.

No one is saying humans have zero effect on the climate, but there is legitimate disagreement over how much. Considering CO2 emissions have increased dramatically but temperatures have remained steady or fallen slightly in the past 20 years, it is reasonable to argue natural forces have an impact on the climate that is equal to or significantly greater than that of humans.

With the U.S. falling behind the rest of the world in science education, we should applaud, not condemn, the Texas Board of Education for trying to teach kids how to think instead of what to think – especially when the “consensus” is still running on Windows 98.

 

[Originally published at The Houston Chronicle]

Categories: On the Blog

John Fund Contrasts AGS Holder and Meese at Federalist Society Luncheon

Somewhat Reasonable - October 03, 2014, 10:52 AM

Part 1: A former assistant to Attorney General Meese under Reagan compares the tenure of AG Holder to that of Meese at a Chicago Federalist Society event featuring John Fund.

As president of the Chicago Lawyers’ Chapter of the Federalist Society, founded in 1982 as a group of conservatives and libertarians interested in the current state of the legal order, Laura Kotelman welcomed those who had come to have “Lunch with Author John Fund” on Monday, September 29 at the Union League Club, 65 West Jackson, Chicago, IL. John Fund is a National Affairs columnist for National Review magazine and on-air analyst on the Fox News Channel. He is considered a notable expert on American politics and the nexus between politics and economics and legal issues. Previously Fund served as a columnist and editorial board member for The Wall Street Journal.

While John Fund was in Chicago speaking to the Chicago Lawyers’ Chapter of the Federalist Society, co-author Hans von Spakovsky was at his own venue in Toledo, Ohio, doing the same to promote their book, Obama’s Enforcer: Eric Holder’s Justice Department, which catalogues the abuses of power at the Department of Justice under Attorney General Holder. It sets forth how Attorney General Eric Holder, Jr. has politicized the Justice Department and put the interests of left-wing ideology and his political party ahead of the fair and impartial administration of justice.

Remarks made by Federalist Society member Joseph A. Morris, prior to his introduction of John Fund, provided a perfect segue to what Fund later shared about Eric Holder as President Obama’s Attorney General. Morris, a former Assistant Attorney General and Director of the Office of Liaison Services at the U.S. Department of Justice under Ronald Reagan, was eminently qualified to paint an accurate profile of Edwin Meese III, who served as U.S. Attorney General under President Reagan. The two men were indeed worlds apart in how they conducted the office of Attorney General.

It was out of great respect for Joseph Morris by members of The Chicago Lawyers’ Chapter of the Federalist Society that Laura Kotelman introduced Morris as “our home town hero.”  Joseph Morris is a partner with Morris & De La Rosa in Chicago.

Joseph Morris comments on Edwin Meese as Reagan’s Attorney General

Joseph Morris directed those in attendance to an Opinion piece that appeared on the morning of the Fund event (9/29) in the Wall Street Journal, “Holder’s Legacy of Racial Politics,” by Edwin Meese III and Kenneth Blackwell, former Ohio Secretary of State. The article relates how Eric Holder battled against state voter-ID laws despite all the evidence of their fairness and popularity.  According to Morris, the only reason for opposing sensible voter-ID laws is a “desire for votes.”

Joe Morris, in reflecting upon Edwin Meese III, spoke of Meese as Governor Reagan’s legal advisor and head of Reagan’s campaign committee in 1980. When Reagan took office, Meese went along with Reagan as one of his three staff assistants. Howard Baker later became Chief of Staff in Reagan’s second administration. With the surfacing of the Iran Contra scandal in Reagan’s 2nd administration, Edwin Meese, having been appointed Attorney General well before the scandal broke, was assigned by President Reagan to investigate the matter. Unlike Messrs. Obama and Holder, Meese saw the job of the Attorney General as one to pursue the truth, not to cover up an internal administration scandal.

It was during Reagan’s second term with the emergence of the Iran Contra scandal that Joe Morris, serving under Reagan at the time as both Chief of Staff and General Counsel of USIA (United States Information Agency), was asked to assist the Reagan White House.  Morris recalls receiving two envelopes from the White House asking that he and his entire staff at USIA assist in the Iran Contra investigation by preserving all the facts (documents, dates, etc.). The instructions to be forthcoming about preserving records and being cooperative in the Iran-Contra investigation were received not only by Joseph Morris at USIA, but were also sent by President Reagan and Attorney General Meese to every other relevant agency of the U.S. Government.

Edwin Meese, as Attorney General, told Morris that he wanted the investigation to be taken seriously.  It was through Morris’ involvement in the Iran Contra Scandal, while performing his dual roles at the USIA, that he was brought into Reagan’s Justice Department as Assistant Attorney General under Edwin Meese.  Because of this relationship with Edwin Meese, Morris was able to present an accurate account of the way Meese conducted himself in his role of Attorney General under Reagan. Meese, in his role as Attorney General, sat in on the meetings of the NSC (National Security Council) responsible for coordinating policy on national security issues.  It didn’t take long for Meese to observe that as the only lawyer among the participants, he alone was able to advise in a way that was consistent with the Constitution.

Four principles championed by Attorney General Meese

Joseph Morris set out these four principles followed by Attorney General Meese under the Reagan administration:

  • Rule of Law must always follow the truth, wherever it goes, letting the facts speak for themselves.
  • The structure of the government (system of procedure) was revamped so staff members could be brought together in an open channel of communication.
  • No stranger to controversy, Edwin Meese did not shrink from what he considered his responsibility.  On December 4, 1986, Attorney General Edwin Meese III requested that an independent counsel be appointed to investigate Iran-Contra matters. On December 19, the three judges on the appointing panel named Lawrence Walsh, a former judge and deputy attorney general in the Eisenhower Administration, to the post.
  • Fighting a battle of ideas, Meese was willing to debate the “originalist” perspective of the Constitution.  In 1985, Attorney General Edwin Meese III delivered a series of speeches challenging the then-dominant view of constitutional jurisprudence and calling for judges to embrace a “jurisprudence of original intention.” There ensued a vigorous debate in the academy, as well as in the popular press, and in Congress itself over the prospect of an “originalist” interpretation of the Constitution.

John Fund speaks

In introducing John Fund, Joseph Morris spoke of Fund as a hard worker and a close student of the Department of Justice for thirty years, with a particular interest in the soft underbelly of the election system. Morris recalled how John Fund would call him, asking to have lunch to talk about Chicago politics. John Fund would, without fail, have with him a list of well-thought-out questions to ask, such as: “Could this Blagojevich person really become mayor?” Later on: “What about Rahm Emanuel running for mayor in Chicago with all his ties to Obama?”

The above reference made by Morris to Emanuel’s mayoral candidacy became the focus of John Fund’s opening remarks. Fund related how Rahm Emanuel was one of only a few individuals who had ever apologized to him over something he had written. What prompted Emanuel’s apology was a debate with Fund at Northwestern University in Evanston, IL, at which Emanuel called Fund names that could only be described as over-the-top.

Expressing his delight to be back in Chicago again, while his co-author was in Toledo, Ohio, John Fund felt he had drawn the better half of the straw. There followed a pithy comment by Fund about the resignation the week before (Thursday, September 25) of Eric Holder as Attorney General due to a conflict of forces. Fund suggested that Holder’s new job title be “Permanent Witness.”

Part 2: John Fund’s knowledge and wit will be shared as he elaborates on the way Eric Holder viewed his position as Attorney General, reflected in his behavior while serving President Obama. Additional thoughts on the direction of this nation will also be covered.

[Originally published at Illinois Review]

 

Categories: On the Blog

Seniors Dispersing Away From Urban Cores

Somewhat Reasonable - October 03, 2014, 10:28 AM

Senior citizens (age 65 and over) are dispersing throughout major metropolitan areas, and specifically away from the urban cores. This is the opposite of the trend suggested by some planners and media sources who claim that seniors are moving to the urban cores. For example, one headline, “Millions of Seniors Moving Back to Big Cities,” sits atop a story with no data and anecdotes that are at least as much suburban (Auburn Hills, in the Detroit area) and college town (Oxford, Mississippi and Lawrence, Kansas) as they are big city. Another article, “Why Seniors are Moving to the Urban Core and Why It’s Good for Everyone,” is also anecdote-based, and gave prominence to a solitary housing development in downtown Phoenix (more about Phoenix below).

Senior Metropolitan Growth Trails National

Between 2000 and 2010, the nation’s senior population increased by approximately 5.4 million, an increase of 15 percent. Major metropolitan areas accounted for approximately 50 percent of the increase (2.7 million) and also saw their senior population increase 15 percent. By contrast, these same metropolitan areas accounted for 60 percent of overall growth between 2000 and 2010, indicating that most senior growth is in smaller metropolitan areas and rural areas.

Senior Metropolitan Population Dispersing

The number of senior citizens living in suburbs and exurbs of major metropolitan areas (over 1,000,000 population) increased between 2000 and 2010, according to census data. The senior increases were strongly skewed away from the urban cores. Suburbs and exurbs gained 2.82 million senior residents over the period, while functional urban cores lost 112,000. The later suburbs added 1.64 million seniors. The second largest increase was in exurban areas, with a gain of 0.88 million seniors. The earlier suburbs (generally inner suburbs) added just under 300,000 seniors (Figure 1).

During that period, the share of senior citizens living in the later suburbs increased 35 percent. The senior citizen population share in the exurbs rose nearly 15 percent. By contrast, the share of seniors living in the functional urban cores declined 17 percent. Their share in the earlier suburbs declined 11 percent.

This is based on an analysis of small area data for major metropolitan areas using the City Sector Model.

City Sector Model analysis avoids the exaggeration of urban core data that necessarily occurs from reliance on the municipal boundaries of core cities (which are themselves nearly 60 percent suburban or exurban, ranging from as little as three percent to virtually 100 percent). It also avoids the use of the newer “principal cities” designation of larger employment centers within metropolitan areas, nearly all of which are suburbs, but are inappropriately joined with core municipalities in some analyses. The City Sector Model small area analysis method is described in greater detail in the Note below.

Pervasive Suburban and Exurban Senior Gains

The gains in functional suburban and exurban senior population were pervasive. Among the 52 major metropolitan areas, there were gains in 50. In two areas (New Orleans and Pittsburgh), there were losses. However, in each of these cases there was an even greater senior loss in the functional urban core. In no case did urban cores gain more or lose fewer seniors than the suburbs and exurbs. Eight of the functional urban cores experienced gains in senior population, while 44 experienced losses (Figure 2).

Largest Urban Cores

The major metropolitan areas with the largest urban cores (more than 20 percent of the population in the functional urban cores) would tend to be the most attractive to seniors seeking an urban core lifestyle. But they still saw their seniors heading to the suburbs and exurbs (Figure 3). Senior populations declined in the functional urban cores of all but two of these nine areas, New York and San Francisco. Even in those two metropolitan areas, however, the increases in suburban and exurban senior populations overwhelmed the increases in the urban cores. All nine of these major metropolitan areas experienced increases in their suburban and exurban senior populations.

Moreover, the Phoenix anecdote cited above is at odds with the reality that the later suburbs and exurbs gained 165,000 seniors between 2000 and 2010, while the earlier suburbs lost 7,000 seniors. (No part of Phoenix has sufficient density or transit market share to be classified as functional urban core.)

Consistency of Seniors Trend with Other Metropolitan Indicators

As has been indicated in previous articles, there continues to be a trend toward dispersal and decentralization in US major metropolitan areas. There was an overall population dispersion from 1990 to 2000 and from 2000 to 2010, continuing trends that have been evident since World War II and even before, as pre-automobile era urban cores have lost their dominance. Jobs continued to follow the suburbanization and exurbanization of the population over the past decade as cities became less monocentric, less polycentric, and more “non-centric.” As a result, work trip travel times are generally shorter for residents where population densities are lower. Baby boomers and Millennials have been shown to be dispersing as well, despite anecdotes to the contrary (Figure 4). The same applies to seniors.

Note: The City Sector Model allows a more representative functional analysis of urban core, suburban and exurban areas, by the use of smaller areas, rather than municipal boundaries. The more than 30,000 zip code tabulation areas (ZCTA) of major metropolitan areas and the rest of the nation are categorized by functional characteristics, including urban form, density and travel behavior. There are four functional classifications, the urban core, earlier suburban areas, later suburban areas and exurban areas. The urban cores have higher densities, older housing and substantially greater reliance on transit, similar to the urban cores that preceded the great automobile oriented suburbanization that followed World War II. Exurban areas are beyond the built up urban areas. The suburban areas constitute the balance of the major metropolitan areas. Earlier suburbs include areas with a median house construction date before 1980. Later suburban areas have later median house construction dates.

Urban cores are defined as areas (ZCTAs) that have high population densities (7,500 or more per square mile or 2,900 per square kilometer or more) and high transit, walking and cycling work trip market shares (20 percent or more). Urban cores also include non-exurban sectors with median house construction dates of 1945 or before. All of these areas are defined at the zip code tabulation area (ZCTA) level.
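A minimal sketch of how these rules could be applied to a single ZCTA record; the function and field names are my own, and this is an illustrative reading of the definitions above rather than the City Sector Model’s actual implementation:

```python
def classify_zcta(density_per_sq_mile, transit_walk_bike_share,
                  median_house_year, in_built_up_area):
    """Assign a ZCTA to a City Sector Model-style category.

    Thresholds follow the definitions in the note above; edge cases may be
    handled differently in the actual model.
    """
    if in_built_up_area and (
        (density_per_sq_mile >= 7500 and transit_walk_bike_share >= 0.20)
        or median_house_year <= 1945
    ):
        return "urban core"
    if not in_built_up_area:
        return "exurb"
    if median_house_year < 1980:
        return "earlier suburb"
    return "later suburb"

print(classify_zcta(9200, 0.25, 1938, True))   # urban core
print(classify_zcta(3500, 0.05, 1972, True))   # earlier suburb
print(classify_zcta(2100, 0.03, 1995, True))   # later suburb
print(classify_zcta(400, 0.01, 1988, False))   # exurb
```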

—-

Wendell Cox is principal of Demographia, an international public policy and demographics firm. He is co-author of the “Demographia International Housing Affordability Survey” and author of “Demographia World Urban Areas” and “War on the Dream: How Anti-Sprawl Policy Threatens the Quality of Life.” He was appointed to three terms on the Los Angeles County Transportation Commission, where he served with the leading city and county leadership as the only non-elected member. He was appointed to the Amtrak Reform Council to fill the unexpired term of Governor Christine Todd Whitman and has served as a visiting professor at the Conservatoire National des Arts et Metiers, a national university in Paris.

Photo: Later Suburbs of Cincinnati (where most senior growth occurred from 2000 to 2010). By Author

 

[Originally published at New Geography]

Categories: On the Blog

A Victory for Our Energy Independence: Cove Point Gas Plant Is Approved

Somewhat Reasonable - October 02, 2014, 4:25 PM

Victories are hard won and often seem few and far between on the free market energy and environment front. We spend much of our time bemoaning and battling bad policies, regulations, laws, and court decisions. We take as victories, celebrate, and publicize simply defeating (often temporarily) bad policies.

Such victories, temporary or not, merit celebration, but they are simply holding actions. True environmental and energy victories come when free choice and markets expand, bringing increased well-being in the U.S., abroad, or both. We won just such a victory on September 29, when the federal government approved the expansion and modification of the Cove Point natural gas liquefaction facility.

For a number of years natural gas was increasingly in short supply in the United States. We imported natural gas through a number of terminals along the coasts to feed our growing demand for this flexible resource. However, as high prices encouraged outside-the-box thinking and technological advancement, the fracking revolution broke loose, natural gas became abundant again, and prices fell.

In the past couple of years, the average price of natural gas has been so low that some companies are capping existing wells and others are flaring natural gas associated with oil production because prices are too low to justify producing and storing it. At the same time, our allies in Europe and those facing energy scarcity in the developing world need new sources of natural gas so they aren’t held hostage to an openly hostile regime in Moscow (Europe) or to their own bad geological luck (many developing countries).

That’s where Cove Point and other liquefaction plants come in. The U.S. can create jobs here and add to GDP and government revenues, while also benefiting people and the environment in other parts of the world, by exporting our increasingly abundant supply of natural gas. It’s a win-win that only environmental extremists could object to — and they do!

Environmentalists fought Cove Point on the grounds that it will encourage increased fracking and natural gas use. Remember, these are the same environmentalists who were lauding natural gas a decade ago, when gas was expensive and they saw it as a transition fuel from coal to renewable electric power systems. Once gas became relatively cheap, and it became clear that rather than a transition fuel it would become a dominant source of energy and a driver of continued economic growth, environmentalists wanted to put on the brakes. Gas is an enemy of their steady-state, zero-growth worldview.

Boo Hoo!

When fully operational in 2017, the $3.8 billion project will add about 75 jobs to the approximately 100 already at the site. Cove Point will also contribute an estimated $40 million a year in additional tax revenue to Calvert County. Upon completion, Cove Point could move enough natural gas daily to meet the household needs of 860,000 homes for four days. Cove Point’s parent company, Dominion, already has buyers lined up for its production, including a Japanese gas utility, a Japanese trading company, and the U.S. unit of one of India’s largest gas distributors.

Cove Point is an important victory in the cause of decreasing fuel scarcity, but it is just a step. The Obama administration has been exceedingly slow in approving such plants. Cove Point is only the fourth natural gas export facility to be approved by FERC, with 14 more awaiting approval (some of them for quite some time). FERC should move with greater urgency to approve the rest of the plants and not hamper them with overly burdensome conditions that would slow or even prevent them from being built by raising costs too much.

Categories: On the Blog

Beyond Polycentricity: 2000s Job Growth (Continues to) Follow Population

Somewhat Reasonable - October 02, 2014, 3:01 PM

The United States lost jobs between 2000 and 2010, the first loss between census years that has been recorded in the nation’s history. The decline was attributable to two economic shocks, the contraction following the 9/11 attacks and the Great Recession, the worst financial crisis since the Great Depression. Yet, even in this moribund job market, employment continued to disperse in the nation’s major metropolitan areas.

This is the conclusion of a small area analysis (zip code tabulation areas) of data from County Business Patterns, from the Census Bureau, which captures nearly all private sector employment and between 85 and 90 percent of all employment (Note 1).

The small area analysis avoids the exaggeration of urban core data that necessarily occurs from reliance on the municipal boundaries of core cities (which are themselves nearly 60 percent suburban or exurban, ranging from as little as three percent to virtually 100 percent). This “City Sector Model” small area analysis method is described in greater detail in Note 2.

Distribution of Employment in Major Metropolitan Areas

County Business Pattern data indicates that employment dropped approximately 1,070,000 in the 52 major metropolitan areas (those with more than 1,000,000 population) between 2000 and 2010. The inner city sectors (the functional urban cores and earlier suburbs) were hard-hit. Together the inner sectors, the functional urban cores and the earlier suburbs, lost 3.74 million jobs. The outer sectors, the later suburbs and the exurbs, gained 2.67 million jobs (Figure 1).

There were job losses of more than 300,000 in the functional urban cores, and even larger losses (3.2 million) in the earlier suburbs. The functional urban cores are defined by the higher population densities that predominated before 1940 and a much higher dependence on transit, walking and cycling for work trips. The earlier suburbs have median house construction dates before 1980.

The share of major metropolitan area employment in the functional urban cores dropped from 16.4 percent in 2000 to 16.2 percent in 2010. This compares to the 8 percent of major metropolitan employment located in downtown (central business district) areas. The notion that metropolitan areas are dominated by their downtowns is challenged by the fact that 84 percent of jobs are outside the functional urban cores.

The largest share of major metropolitan area employment is clustered in the earlier suburbs, those with median house construction dates from 1946 to 1979. In 2010, 46.8 percent of jobs were in the earlier suburbs, a decline from 51.4 percent in 2000.

These losses in employment shares in the two inner city sectors were balanced somewhat by increases in the outer sectors, the later suburbs (with median house construction dates of 1980 or later) and the exurbs, which are generally outside built-up urban areas. The increase was strongest in the later suburbs, where employment increased by 2.6 million. The share of employment in the later suburbs rose from 21.6 percent to 25.5 percent. There was also a 600,000 increase in exurban employment, and the exurban share of employment rose from 10.6 percent to 11.5 percent (Figure 2).

The Balance of Residents and Jobs

There is a surprisingly strong balance between population and employment within the city sectors, which belies the popular perception among some press outlets and even some urban experts that as people live farther from the urban core, they have to commute farther. In fact, 92 percent of employees do not commute to downtown, and as distances increase, the share of employees traveling to work downtown falls off substantially. As an example, only three percent of working residents in suburban Hunterdon County, New Jersey (in the New York metropolitan area), work in the central business district, Manhattan, while 80 percent work in relatively nearby areas of the outer combined metropolitan area.

It is to be expected that the functional urban core would have a larger share of employment than population. However, the difference is not great, with 16.2 percent of employment in the functional urban core and 14.4 percent of the population. The earlier suburbs have by far the largest share of the population, at 42.0 percent. They also have the largest share of employment, at 46.8 percent. The later suburbs have 26.8 percent of the population, slightly more than their 25.5 percent employment share. The largest difference, as would be expected, is in the exurbs, with 16.8 percent of major metropolitan area residents and 11.5 percent of employment (Figure 3). It is notable, however, that the difference between the shares of population and employment is less than 15 percent in the three built-up urban sectors (urban core, earlier suburbs, and later suburbs), though the difference is greater in the exurbs.

How Employment Followed Population in the 2000s

The outward shifts of population and employment track one another across the city sectors. In the earlier suburbs, where population and employment are greatest, the population share declined 4.3 percentage points, while the employment share declined a near lockstep 4.6 percentage points. The later suburbs had a 4.5 percentage point increase in population share, followed closely by a 3.9 percentage point increase in employment share. In the exurbs, a 1.5 percentage point increase in the population share was accompanied by a 0.9 percentage point increase in the employment share. The connection is less clear in the functional urban core, where a 1.6 percentage point drop in the population share was associated with a 0.2 percentage point reduction in the employment share (Figure 4).
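As a small worked check on these figures, the sketch below computes the percentage point shifts from sector shares. The 2000 shares are back-calculated from the 2010 figures and the shifts reported above, so treat them as approximate.

```python
# Shares of major metropolitan population and employment by city sector.
# 2000 values are inferred from the 2010 shares and the reported shifts.
shares = {
    #                  population      employment
    # sector           (2000, 2010)    (2000, 2010)
    "urban core":      ((16.0, 14.4), (16.4, 16.2)),
    "earlier suburbs": ((46.3, 42.0), (51.4, 46.8)),
    "later suburbs":   ((22.3, 26.8), (21.6, 25.5)),
    "exurbs":          ((15.3, 16.8), (10.6, 11.5)),
}

for sector, ((pop00, pop10), (emp00, emp10)) in shares.items():
    pop_shift = pop10 - pop00
    emp_shift = emp10 - emp00
    print(f"{sector:16s} population {pop_shift:+.1f} pts, employment {emp_shift:+.1f} pts")
```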

The similarity in population and employment shifts across the city sectors is an indication that employment growth has been geographically tracking population growth for decades, as cities have evolved from monocentricity to polycentricity and beyond.

“Job Following” by Relative Urban Core Size

Similar results are obtained when cities are categorized by the population of their urban cores relative to the total city population. Each category indicates an outward shift from the functional urban cores and earlier suburbs to the later suburbs and exurbs, in both the population share and the employment share. However, the shift is less pronounced in the cities with larger relative urban cores, which tend to be in the older urban regions (Figure 5). Of the 18 cities with functional urban cores amounting to more than 10 percent of the metropolitan area, 16 are in the Northeast (including the Northeastern corridor cities of Washington and Baltimore) and the Midwest, where strong population growth ended long ago.

As usual, New York is in a category by itself, with a functional urban core containing more than 50 percent of its population. New York experienced an outward shift of 1.1 percentage points in its population and a 0.4 percentage point outward shift in its employment (the total shift in share, from the urban core and earlier suburbs to the later suburbs and exurbs).

Generally speaking, the stronger the functional urban core, the less the movement of jobs and people from the center. The actual percentages of functional urban core population by city are shown in From Jurisdictional to Functional Analysis of Urban Cores and Suburbs (Figure 6).

On average, there was a shift of nearly five percentage points from the inner sectors (functional urban cores and earlier suburbs) to the outer sectors (later suburbs and exurbs).

Commute Times: Less Outside the Urban Cores

The earlier suburbs generally lie between the functional urban cores and the later suburbs geographically. As a result, jobs there are particularly accessible to residents from all over the metropolitan area. A further consequence is that commute times are shortest (26.3 minutes) in the earlier suburbs, where approximately half of the people also live. Commute times are a bit higher in the later suburbs (27.7 minutes). The exurbs have the third longest commutes, at 29.2 minutes. Finally, commute times are longest in the functional urban cores (31.8 minutes), both because traffic congestion is greater (to be expected, not least because of their higher densities) and because more people take transit, which is slower (Figure 7).

The dispersed and well coordinated location of jobs and residences is one reason United States metropolitan areas have shorter commute times and less traffic congestion than their international competitors in Europe, Australia, and Canada. All this is testimony to the effectiveness with which people, and the businesses established to serve them, have produced effective labor markets that are the most affluent in the world and in which the transaction-related impacts of work trip travel time are lower than elsewhere.

Beyond Polycentricity

These are not new concepts, despite the continuing tendency to imagine the city as a monocentric organism where everyone works in downtown skyscrapers and lives in suburban dormitories. The lower density US city has not descended into the suburban gridlock that some planners have so stridently predicted. Indeed, traffic congestion is considerably less intense in US cities than in the other parts of the high income world for which there is data.

A quarter century ago, University of Southern California economists Peter Gordon and Harry Richardson wrote that “the co-location of firms and households at decentralized locations has reduced, not lengthened commuting times and distances. Decentralization reduces pressures on the CBD, relieves congestion and avoids ‘gridlock.'” In 1996 they described Los Angeles as “beyond polycentricity.” Both of these observations fit well as a description of trends in the 2000s. Most US major metropolitan areas are now “beyond polycentricity,” not just Los Angeles.

Wendell Cox is principal of Demographia, an international public policy and demographics firm. He is co-author of the “Demographia International Housing Affordability Survey” and author of “Demographia World Urban Areas” and “War on the Dream: How Anti-Sprawl Policy Threatens the Quality of Life.” He was appointed to three terms on the Los Angeles County Transportation Commission, where he served with the leading city and county leadership as the only non-elected member. He was appointed to the Amtrak Reform Council to fill the unexpired term of Governor Christine Todd Whitman and has served as a visiting professor at the Conservatoire National des Arts et Metiers, a national university in Paris.

——

Note 1: The Census Bureau describes “County Business Pattern” data as follows: “Statistics are available on business establishments at the U.S. level and by State, County, Metropolitan area, and ZIP code levels. Data for Puerto Rico and the Island Areas are available at the State and county equivalent levels. County Business Patterns (CBP) covers most NAICS industries excluding crop and animal production; rail transportation; National Postal Service; pension, health, welfare, and vacation funds; trusts, estates, and agency accounts; private households; and public administration. CBP also excludes most establishments reporting government employees.”

Note 2: The City Sector Model allows a more representative functional analysis of urban core, suburban and exurban areas, by the use of smaller areas, rather than municipal boundaries. The more than 30,000 zip code tabulation areas (ZCTA) of major metropolitan areas and the rest of the nation are categorized by functional characteristics, including urban form, density and travel behavior. There are four functional classifications, the urban core, earlier suburban areas, later suburban areas and exurban areas. The urban cores have higher densities, older housing and substantially greater reliance on transit, similar to the urban cores that preceded the great automobile oriented suburbanization that followed World War II. Exurban areas are beyond the built up urban areas. The suburban areas constitute the balance of the major metropolitan areas. Earlier suburbs include areas with a median house construction date before 1980. Later suburban areas have later median house construction dates.

Urban cores are defined as areas (ZCTAs) that have high population densities (7,500 or more per square mile or 2,900 per square kilometer or more) and high transit, walking and cycling work trip market shares (20 percent or more). Urban cores also include non-exurban sectors with median house construction dates of 1945 or before. All of these areas are defined at the zip code tabulation area (ZCTA) level.

—–

Photo: Beyond Polycentric Houston (by author)

 

[Originally published at New Geography]

Categories: On the Blog

Education Incentives can Help End Low Expectations

Somewhat Reasonable - October 02, 2014, 1:58 PM


Behavioral psychologists and economists long have considered incentives to be a normal part of human nature, but applying them to education still stokes controversy.

For example, some people recoil at the idea of paying kids and their teachers for high scores on advanced-placement tests that get students college credit in high school, as some schools in Northern Virginia are doing.

It sounds so … mercenary. Exchanging money for good performance? Handing out filthy lucre to reward a personally fulfilling and enriching achievement? Why, it almost sounds like the Grammys, or the World Series, or even a job. Nobody except the most Puritan-minded thinks any of these occupations or rewards is anything but a celebration of excellence, or at the very least a job well done. Adults can accept money as a reward for high performance. There’s no reason children cannot do the same — except prejudice.

For several generations now, Americans have underestimated their children. Laws mostly bar children from taking even a small-time job until age 16. Kids can hardly ride their own bikes down to the park or corner store any more. These low expectations are endemic in education, research confirms. It starts with the teachers.

University of Missouri economist Cory Koedel has found education students get the highest grades but the easiest work of all college majors. A 2013 study by the Thomas B. Fordham Institute found teachers typically assign books at their students’ reading level, not their grade level. This means teachers frequently assign too-easy books, a problem that compounds as children move up grades. If fourth-grader Suzy gets third-grade rather than fourth-grade books to read, and so on up through the grades, she is likely to remain behind in reading for the rest of her life.

Washington, D.C., mother Mary Riner became disgusted with the low expectations at her daughter’s supposedly well-performing grade school. Fifth-grade Latin homework, for example, didn’t involve memorizing vocabulary or practicing verb tenses, but coloring Latin words. Yes, coloring — with a crayon. Riner responded by helping start a truly demanding school, called BASIS DC.

Low expectations don’t occur in a vacuum. They result from a set of expectations in our society, and they reinforce and verify those expectations as a form of self-fulfilling prophecy. A smart use of incentives offers one way to address this problem.

In their new book “Rewards: How to Use Rewards to Help Children Learn—and Why Teachers Don’t Use Them Well,” authors Herbert Walberg and Joseph Bast illustrate how positive reinforcement can help lift expectations and thus raise student performance.

They discuss how the attitudes of many in the education establishment are a barrier to putting to work the science that shows kids respond to incentives just like adults. They also explain that rewards are about far more than money — good teachers use simple rewards, such as stickers or praise, to help instill in children the longer-lasting internal rewards of satisfaction in learning and pride in a job well done.

Perhaps the biggest shocker may be the realization that incentives always will be embedded in education, regardless of whether people acknowledge their existence. If teachers reinforce learning with encouragement, recognition and grades, that’s an incentive. If teachers give students too-easy work because they expect every real academic challenge to raise complaints, that creates a very different set of incentives for both teachers and students.

Incentives will always exist in education. The question is, will educators harness this power for the students’ good?

Joy Pullmann is managing editor of School Reform News and an education research fellow at The Heartland Institute.

[Originally published at WatchDog]

 

Categories: On the Blog


New Commuting Data Shows Gain by Individual Modes

Somewhat Reasonable - October 01, 2014, 2:12 PM

The newly released American Community Survey data for 2013 indicates little change in commuting patterns since 2010, a result to be expected over a period as short as three years. Among the 52 major metropolitan areas (over 1 million population), driving alone increased to 73.6% of commuting (a share of all travel modes plus working at home). The mode with the largest drop was carpooling, whose share of commuting fell from 9.6% in 2010 to 9.0% in 2013; doubtless most of the carpool losses represented gains in driving alone and transit. Transit's market share in the major metropolitan areas grew from 7.9% in 2010 to 8.1% in 2013, and working at home rose by a similar amount, from 4.4% to 4.6% (Figure 1). Bicycling increased from 0.6% to 0.7%, while walking remained constant at 2.8%.
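For readers who want to tabulate the changes themselves, the short Python sketch below simply restates the shares quoted above for the modes where both the 2010 and 2013 figures are given and prints the percentage-point change for each. It is an illustration only, not part of the ACS release or the author's analysis, and the dictionary names are placeholders.

```python
# Minimal sketch (illustrative only): major-metro commute shares quoted in
# the article for the modes with both 2010 and 2013 values, and the
# percentage-point change for each.
shares_2010 = {"carpool": 9.6, "transit": 7.9, "work_at_home": 4.4,
               "walk": 2.8, "bicycle": 0.6}
shares_2013 = {"carpool": 9.0, "transit": 8.1, "work_at_home": 4.6,
               "walk": 2.8, "bicycle": 0.7}

for mode, new_share in shares_2013.items():
    change = new_share - shares_2010[mode]
    print(f"{mode:>12}: {shares_2010[mode]:4.1f}% -> {new_share:4.1f}% "
          f"({change:+.1f} points)")
```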

Transit: Historical Context

Transit has always received considerable media attention in commuting analyses. Part of this is because of the comparative labor efficiency (not necessarily cost efficiency) of transit in high-volume corridors leading to the nation's largest downtown areas. Part of the attention is also due to the "positive spin" that has accompanied transit ridership press releases. An American Public Transportation Association press release earlier in the year, which claimed record ridership, evoked a surprisingly strong response from some quarters. For example, academics David King, Michael Manville, and Michael Smart wrote in the Washington Post: "We are strong supporters of public transportation, but misguided optimism about transit's resurgence helps neither transit users nor the larger traveling public." They concluded that transit trips per capita had actually declined over the past five years.

Nonetheless, transit remains well below its historic norms. The first census commuting data, collected in 1960, indicated a 12.6% national market share for transit across the entire U.S. population. By 1990, transit's national market share had dropped to 5.1%. After falling further to 4.6% in 2000, transit recovered to 5.2% in 2012. Clearly, though, the historical decline of transit's market share has at least been halted (Figure 2).

Even so, in a rapidly expanding market, many more people have begun driving alone than using transit. More than 47 million more commuters drive alone today than in 1980, while transit gained about 1.4 million commuters over the same period.

The largest decline occurred before 1960. Transit's work trip market share was probably much higher in 1940, just before World War II and the great automobile-oriented suburbanization, but the necessary data was not collected in that census. In 1940, overall urban transit travel (passenger miles all day, not just commutes) is estimated to have been twice that of 1960 and nearly 10 times that of today.

Transit’s 2010-2013 Trend

To a remarkable extent, transit continues to be a "New York story." Approximately 40% of all transit commuting is in the New York metropolitan area, and New York's 2.9 million transit commuters are nearly six times the number in second-place Chicago. Transit accounts for 30.9% of commuting in New York. San Francisco ranks second at 16.1% and Washington third at 14.2%. Only three other major metropolitan areas, Boston (12.8%), Chicago (11.8%), and Philadelphia (10.0%), have transit commute shares of 10% or more.

From 2010 to 2013, transit added approximately 375,000 new commuters nationwide, and approximately 40% of that increase occurred in the New York metropolitan area. This was part of the predictable concentration of ridership gains (80%) in the transit legacy metropolitan areas, the six with transit market shares of 10% or more. Combined, these areas added roughly 300,000 transit commuters, 89% of them on the large rail systems that feed the nation's largest downtown areas.
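A quick back-of-the-envelope check, sketched below, shows how these approximate figures fit together. The variable names and the rounding are illustrative and are not drawn from the underlying ACS tables.

```python
# Illustrative arithmetic only: splitting the roughly 375,000 new transit
# commuters between the six legacy metropolitan areas and the rest of the
# 52 major metropolitan areas, using the approximate shares cited above.
total_gain = 375_000            # new transit commuters, 2010-2013
legacy_share = 0.80             # share of the gain in the six legacy metros
rail_share_of_legacy = 0.89     # share of the legacy gain on rail modes

legacy_gain = total_gain * legacy_share                  # about 300,000
legacy_rail_gain = legacy_gain * rail_share_of_legacy    # about 267,000
other_gain = total_gain - legacy_gain                    # about 75,000

print(f"Legacy metro gain:   {legacy_gain:>9,.0f}")
print(f"  of which on rail:  {legacy_rail_gain:>9,.0f}")
print(f"All other metros:    {other_gain:>9,.0f}")
```

The roughly 75,000 remaining commuters correspond to the gain outside the legacy areas discussed below.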

Perhaps surprisingly, Seattle broke into the top five, edging out the legacy metropolitan areas of Philadelphia and Washington (Figure 3). Seattle has newer light rail and commuter rail systems. Even so, the bulk of the gain in Seattle was not on rail: approximately 80% of its transit commuter growth was on non-rail modes. Seattle has three major public bus systems, a ferry system, and the newer private Microsoft bus system that serves its employment centers throughout the metropolitan area. All of the new transit commuters in eighth-ranked Miami were on non-rail modes, despite its large and relatively new rail system. New rail city Phoenix (10th) also experienced the bulk of its new commuting on non-rail modes (93%), while rail accounted for most of the gain in San Jose (9th), at 58% of the total. The transit market shares in Miami, San Jose, and Phoenix are all below the national average of 5.2%.

Outside the six transit legacy metropolitan areas, gains were far more modest, at approximately 75,000. Seattle, Miami, San Jose, and Phoenix accounted for nearly 60,000 of this gain, leaving only 15,000 for the other 42 major metropolitan areas, including Los Angeles, which lost approximately 5,000 transit commuters. Los Angeles now has a transit work trip market share of 5.8%, below the 5.9% of 1980, when the Los Angeles County Transportation Commission approved the funding for its rail system (the result of my amendment; see "Transit in Los Angeles"). Los Angeles is falling far short of Matt Yglesias's characterization of it as the "next great mass-transit city."

Since 2000, the national trend has been similar. Nearly 80% of the increase in transit commuting has been in the transit legacy metropolitan areas, where transit’s share has risen from 17% to 20%. These areas accounted for only 23% of the major metropolitan area growth since 2000. By contrast, 77% of the major metropolitan area growth has been in the 46 other metropolitan areas, where transit’s share of commuting has remained at 3.2% since 2000. There are limits to how far the legacy metropolitan areas can drive up transit’s national market share.

Prospects for Commuting

At a broader level, the new data shows the continuing trend toward individual mode commuting, as opposed to shared modes. Between 2010 and 2013, personal modes (driving alone, bicycles, walking and working at home) increased from 82.3% to 82.7% of all commuting. Shared modes (carpools and transit) declined from 17.7% of commuting to 17.3%. These data exclude the “other modes” category (1.2% of commuting) because it includes both personal and shared commuting. None of this should be surprising, since one of the best ways to improve productivity, both personal and in the economy, is to minimize travel time for necessary activities throughout the metropolitan area (labor market).
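Because the personal-versus-shared comparison excludes the "other modes" category, the quoted group shares differ slightly from a simple sum of the individual mode shares cited earlier. The minimal sketch below shows the re-basing, assuming "other modes" was roughly 1.2% of commuting in both years; it is an illustration of the arithmetic, not the author's worked calculation.

```python
# Illustrative re-basing: carpool and transit shares summed and re-expressed
# on a base that excludes the "other modes" category, assumed here to be
# about 1.2% of commuting in both 2010 and 2013.
def shared_mode_share(carpool_pct, transit_pct, other_pct=1.2):
    """Carpool + transit share of commuting, excluding 'other modes'."""
    return (carpool_pct + transit_pct) / (100.0 - other_pct) * 100.0

print(round(shared_mode_share(9.6, 7.9), 1))  # 2010: roughly 17.7
print(round(shared_mode_share(9.0, 8.1), 1))  # 2013: roughly 17.3
```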

Wendell Cox is principal of Demographia, an international public policy and demographics firm. He is co-author of the “Demographia International Housing Affordability Survey” and author of “Demographia World Urban Areas” and “War on the Dream: How Anti-Sprawl Policy Threatens the Quality of Life.” He was appointed to three terms on the Los Angeles County Transportation Commission, where he served with the leading city and county leadership as the only non-elected member. He was appointed to the Amtrak Reform Council to fill the unexpired term of Governor Christine Todd Whitman and has served as a visiting professor at the Conservatoire National des Arts et Metiers, a national university in Paris.

Photograph: DART light rail train in downtown Dallas (by author)

 

[Originally published at New Geography]

Categories: On the Blog