Somewhat Reasonable

The Policy and Commentary Blog of The Heartland Institute

Traffic Congestion in the World: 10 Worst and Best Cities

October 03, 2014, 1:50 PM

The continuing improvement in international traffic congestion data makes comparisons between cities around the world far easier. The annual reports (2013) by Tom Tom have been expanded to include China, adding the world’s second largest economy to the previously produced array of reports on the Americas, Europe, South Africa and Australia/New Zealand. A total of 160 cities are now rated in these Tom Tom Traffic Index reports. This provides an opportunity to compile lists of the 10 most congested and 10 least congested cities in the world among those rated.

Tom Tom provides all-day congestion indexes and indexes for the peak hours (the heaviest morning and evening traffic hours). The traffic indexes rate congestion based on the additional time necessary to make a trip compared to the same trip under free flow conditions. For example, an index of 10 indicates that a 30 minute trip would take 10 percent longer, or 33 minutes. An index of 50 means that a 30 minute trip will, on average, take 45 minutes.
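
The arithmetic behind these indexes is simple enough to express in a few lines. The Python sketch below is purely illustrative (it is not Tom Tom’s code, and the function name is ours): it converts a free flow trip time and a congestion index into the expected peak hour trip time.

```python
# Illustrative sketch of the index arithmetic described above (not Tom Tom code).
# An index of I means a trip takes (1 + I/100) times its free flow duration.

def congested_trip_minutes(free_flow_minutes, congestion_index):
    """Expected trip time, in minutes, for a Tom Tom-style congestion index."""
    return free_flow_minutes * (1 + congestion_index / 100.0)

# Examples from the text: an index of 50 turns a 30 minute trip into 45 minutes;
# Moscow's peak hour index of 126 (below) stretches it to roughly 68 minutes.
print(round(congested_trip_minutes(30, 50)))   # 45
print(round(congested_trip_minutes(30, 126)))  # 68
```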

Congestion in Peak Hours: 10 Most Congested Cities

This article constructs an average peak hour index, using the morning and evening peak period Tom Tom Traffic Indexes for the 125 rated metropolitan areas with principal urban areas of more than 1,000,000 residents. The peak hour index is used because peak hour congestion is generally of more public policy concern than all day congestion. This congestion occurs because of the concentration of work trips in relatively short periods of time. Work trips are by no means the majority of trips, but it can be argued that they cause the most congestion. Many cities have relatively little off-peak traffic congestion.

The two most congested cities are in Eastern Europe, Moscow and Istanbul (which stretches across the Bosporus into Asia). Four of the most congested cities are in China, three in Latin America (including all that are rated) and one is in Western Europe (Figure 1).

Moscow is the most congested city, with a peak hour index of 126. This means that the average 30 minute trip in free flow conditions will take 68 minutes during peak hours. Moscow has a limited freeway system, but its ambitious plans could relieve congestion. The city has undertaken a huge geographical expansion program, with the intention of relocating many jobs to outside the primary ring road. This dispersion of employment, if supported by sufficient road infrastructure, could lead to improved traffic conditions.

Istanbul is the second most congested city with a peak hour traffic index of 108. The average free flow 30 minute trip would take 62 minutes during peak hours.

Rio de Janeiro is the third most congested city, with a peak hour traffic index of 99.5. The average free flow 30 minute trip takes 60 minutes due to congestion during peak hours.

Tianjin, which will achieve megacity status in 2015, and which is adjacent to Beijing, is the fourth most congested city, with an index of 91. In Tianjin, the peak hour congestion extends a free flow 30 minute trip to 57 minutes.

Mexico City is the fifth most congested city, with a peak hour traffic index of 88.5. The average free flow 30 minute trip takes 57 minutes due to congestion.

Hangzhou (capital of Zhejiang, China), which is adjacent to Shanghai, has the sixth worst traffic congestion, with a peak period traffic index of 87. The average 30 minute trip in free flow takes 56 minutes during peak hours.

Sao Paulo has the seventh worst traffic congestion, with a peak hour index of 80.5. The average 30 minute trip in free flow takes 54 minutes during peak periods. Sao Paulo’s intense traffic congestion has long been exacerbated by truck traffic routed along the “Marginale” near the center of the city. A ring road is now mostly complete, but the section most critical to relieving truck-related traffic congestion has yet to be opened.

Chongqing has the eighth worst traffic congestion, with a peak hour index of 78.5. As a result, a trip that would take 30 minutes in free flow conditions takes 54 minutes during peak hours.

Beijing has the ninth worst traffic congestion, with a peak hour index of 76.5. As a result, a trip that should take 30 minutes in free flow is likely to take 53 minutes during peak hours. In spite of recent reports of its intense traffic congestion, Beijing rates better than some other cities. There are likely two causes for this. With its seventh ring road now planned, Beijing has a top-flight freeway system. Its traffic is also aided by its dispersion of employment: the lower density, government-oriented employment core is flanked on both sides by major business centers (“edge cities”) on the Second and Third Ring Roads. This disperses traffic.

Brussels has the 10th worst peak hour traffic congestion, with an index of 75. A trip that would take 30 minutes at free flow takes 53 minutes in peak hour congestion.

Seven of the 10 most congested cities are megacities (urban areas with populations over 10 million). The exceptions are Hangzhou, Chongqing and Brussels. Brussels has by far the smallest population, at only 2.1 million residents, little more than one-third the size of the second smallest city, Hangzhou.

Most Congested Cities in the US and Canada

The most congested US and Canadian cities rank far down the list. Los Angeles ranks in a tie with Paris, Marseille and Ningbo (China), at a peak hour congestion index of 65. It may be surprising that Los Angeles does not rank much higher. Los Angeles has been the most congested city in the United States since displacing Houston in the 1980s. The intensity of Los Angeles traffic congestion is driven by the highest urban area density in the United States and by important gaps left in the planned freeway system when segments were canceled. Nonetheless, Los Angeles is aided by a strong dispersion of employment, which helps to make its overall work trip travel times the lowest among world megacities for which data is available. Part of the Los Angeles advantage is its high automobile usage, which shortens travel times relative to megacities with much larger transit market shares (such as Tokyo, New York, London and Paris).

Vancouver is Canada’s most congested city, with a peak period index of 62.5, and has the 27th worst traffic congestion, in a tie with Stockholm. Vancouver had exceeded Los Angeles in traffic congestion in the 2013 mid-year Tom Tom Traffic Index report.

Least Congested Cities

All but one of the 10 least congested large cities in the Tom Tom report are in the United States. The least congested is Kansas City, with a peak period index of 19.5, indicating that a 30 minute trip in free flow is likely to take 36 minutes due to congestion. Kansas City has one of the most comprehensive freeway systems in the United States and has a highly dispersed employment base. US cities also occupy the second through the sixth least congested positions (Cleveland, Indianapolis, Memphis, Louisville and St. Louis). Spain’s Valencia is the seventh least congested city, while the eighth through 10th positions are taken by Salt Lake City, Las Vegas and Detroit.

Cities Not Rated

There are a number of other highly congested cities that are not yet included in international traffic congestion ratings. Data in the 1999 publication Cities and Automobile Dependence: A Sourcebook indicated that the greatest density of traffic among rated cities was in Seoul, Bangkok and Hong Kong, while Singapore, Kuala Lumpur, Jakarta, Tokyo, Surabaya (Indonesia), Zürich and Munich also had intense traffic congestion. Later data would doubtless add Manila to the list. The cities of the Indian subcontinent also experience extreme, but as yet unrated, traffic congestion. It is hoped that traffic indexes will soon be available for these and other international cities.

Determinants of Traffic Congestion

An examination (regression analysis) of the peak period traffic indexes indicates an association between higher urban area population densities and greater traffic congestion, with a coefficient of determination (R²) of 0.48, which is significant at the one percent level of confidence (Figure 2). This is consistent with other research equating lower densities with faster travel times, and with increasing automobile use in response to higher densities.
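
As a rough illustration of the calculation behind that statistic, the sketch below fits a simple least squares line and computes R². The data arrays hold only the eight regional averages from the table later in this article, used here solely to show the mechanics; the reported R² of 0.48 comes from the full set of rated cities, not from this code.

```python
# Illustrative regression sketch: peak hour congestion versus urban density.
# The values below are regional averages from the table in this article, used
# only to demonstrate the calculation; the article's R^2 of 0.48 was computed
# from the individual rated cities, not from these eight points.
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    predicted = slope * x + intercept
    ss_res = np.sum((y - predicted) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

density_per_sq_mile = np.array(
    [3100, 4600, 5000, 8300, 8700, 11800, 15700, 19600], dtype=float)
peak_hour_index = np.array(
    [37.1, 49.2, 49.4, 52.4, 47.4, 80.8, 64.9, 89.5])

print(round(r_squared(density_per_sq_mile, peak_hour_index), 2))
```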

At the regional level, a similar association is apparent. The United States, with the lowest urban population densities, has the least traffic congestion. Latin America, Eastern Europe and China, with higher urban densities, have worse traffic congestion. Density does not explain all the differences, however, especially among geographies outside the United States. Despite its high density, China’s traffic congestion is less intense than that of Eastern European and Latin American cities. It seems likely that this is, at least in part, due to the better matching of roadway supply with demand in China, with its extensive urban freeway systems. Further, the cities of China often have a more polycentric employment distribution (Table).

Traffic Congestion & Urban Population Density

Region                     Peak Hour Congestion   Urban Population Density
                                                  Per Square Mile    Per KM2
Australia & New Zealand    49.2                    4,600              1,800
Canada                     49.4                    5,000              1,900
China                      64.9                   15,700              6,100
Eastern Europe             80.8                   11,800              4,500
Latin America              89.5                   19,600              7,600
United States              37.1                    3,100              1,200
Western Europe             47.4                    8,700              3,400
South Africa               52.4                    8,300              3,200

Peak Hour Congestion: Average of Tom Tom Peak Hour Congestion Indexes 2013
Population Densities: Demographia World Urban Areas

 

Both of these factors, high capacity roadways and the dispersion of population as well as jobs, are also important contributors to the lower congestion levels in the United States.

Wendell Cox is principal of Demographia, an international public policy and demographics firm. He is co-author of the “Demographia International Housing Affordability Survey” and author of “Demographia World Urban Areas” and “War on the Dream: How Anti-Sprawl Policy Threatens the Quality of Life.” He was appointed to three terms on the Los Angeles County Transportation Commission, where he served with the leading city and county leadership as the only non-elected member. He was appointed to the Amtrak Reform Council to fill the unexpired term of Governor Christine Todd Whitman and has served as a visiting professor at the Conservatoire National des Arts et Metiers, a national university in Paris.

Photo: On the Moscow MKAD Ring Road

 

[Originally published at New Geography]

Categories: On the Blog

Stay Away From Muni Broadband

October 03, 2014, 12:14 PM

In this 3-minute video, titled “Stay away from Muni Broadband,” Scott Cleland lays out multiple reasons why we should be wary of municipal broadband.

First of all, it is preposterous to think of the government as a competitor. The government does not operate under the same rules as its private sector counterparts. In fact, the government makes the rules. The government sets taxes, regulates industry, and has the ability to erect barriers to entry. By using these powers, the government has a large advantage over the competition. As Cleland states, “you can’t fight city hall.”

Competing with the government can easily become an unfair fight. If the government begins to lose, it can use its powers to gain an edge. By reconstructing regulations or fees, the government has the ability to injure its competition; this is an advantage its private-sector competitors do not have. Moreover, these governmental advantages come at the expense of the taxpayer.

At best, municipal broadband is a waste of taxpayer money. As Cleland states, “In the past, these municipal broadband things have been boondoggles and have cost local and state coffers millions upon millions upon tens-of-millions of bonds that then were essentially bankrupt.” It is also a gamble. The government intends to spend money attempting to compete with the private sector even though it has no experience in this field.

The last argument Cleland gives against municipal broadband is the possibility of government snooping. With the government running the internet, it would have the opportunity to check your email or monitor your internet searches.

This short video by Cleland does a great job of explaining some of the arguments against municipal broadband. The government should stick to tasks we deem necessary while leaving the internet in the hands of the private sector.

Categories: On the Blog

Texas Textbooks Need Science Reality Check

October 03, 2014, 10:02 AM

Dressed as T-Rex, Sandra Calderon talks with Nick Savelli prior to a State Board of Education public hearing on proposed new science textbooks, Tuesday, Sept. 17, 2013, in Austin, Texas. A new law is in place that gives school districts the freedom to choose their own instructional materials including software, electronic readers or textbooks with or without board approval. (AP Photo/Eric Gay)

Would you want the textbooks at your child’s school to teach your kids the Soviet Union still exists and is the greatest danger your child will face in their lifetime? Would you want the computers at your child’s school to use dial-up internet or run on Windows 98? Of course not, because these resources are out of date, and parents want their kids to get the best education possible.

To ensure the quality of the education provided to students, the Texas State Board of Education has begun the process of updating its textbooks to reflect the latest information and advancements in history and science, because part of giving kids the best education possible means giving them access to the best resources available.

We no longer teach kids about the current status of the Berlin Wall, so why would we teach our kids about climate change by using climate models and textbooks that are similarly out-of-date and out-of-touch with reality?

Many people don’t know that mean global temperatures have not risen significantly for the last 17 years, meaning no significant global warming has occurred since you first bought Windows 98. Some peer-reviewed, scientific studies suggest the current period with no global warming has been as long as 20 years. The scientific analysis and climate models that predicted drastic global warming over this period were simply wrong, so it makes sense to reexamine the issue in light of new evidence and teach our children accordingly.

This common sense approach to science and education has somehow managed to ruffle the feathers of left-leaning special-interest groups who want to protect the status quo. These groups repeat the tired and thoroughly debunked claims that 97 percent of climate scientists believe climate change is real, man-made, and dangerous. But citing cherry-picked survey data and flawed studies does not help children understand the science of the climate, nor does using climate models that have been so inaccurate in predicting temperatures for the past two decades. The reason these models are such poor predictors of global temperatures is that they assume we have complete knowledge of the climate system, or at least enough to predict the future accurately. But as we’ve seen, that isn’t true.

Assuming we have complete knowledge of the global climate system is like assuming we know everything there is to know about the creatures in the oceans or the ecosystems of the rainforests. We just don’t have all that information. Scientists are constantly discovering new species of animals we never knew existed, such as the Yin-Yang Frog from Vietnam and the subterranean blind fish, which lives in an underground river that runs for 4.3 miles through limestone caves.

Arguing the claim of “scientific consensus” is problematic, too, because scientists don’t always know what they think they know. For example, scientists have rediscovered several species of animals they had long considered extinct, such as the Coelacanth, a species of fish thought to have disappeared 65 million years ago, until they were rediscovered in 1938. Another species, the Bermuda petrel, was believed to be extinct since the 1620s, but it was rediscovered in 1951, falsifying more than 331 years of “scientific consensus.” Although perhaps not as cute as the New Caledonian crested gecko, all these animals demonstrate science is never settled, and we must adjust our opinions to reflect new evidence as it becomes available. That is the scientific way.

No one is saying humans have zero effect on the climate, but there is legitimate disagreement over how much. Considering CO2 emissions have increased dramatically but temperatures have remained steady or fallen slightly in the past 20 years, it is reasonable to argue natural forces have an impact on the climate that is equal to or significantly greater than that of humans.

With the U.S. falling behind the rest of the world in science education, we should applaud, not condemn, the Texas Board of Education for trying to teach kids how to think instead of what to think – especially when the “consensus” is still running on Windows 98.

 

[Originally published at The Houston Chronicle]

Categories: On the Blog

John Fund Contrasts AGS Holder and Meese at Federalist Society Luncheon

October 03, 2014, 9:52 AM

Part 1: A former assistant to Attorney General Meese under Reagan compares the tenure of AG Holder to that of Meese at a Chicago Federalist Society event featuring John Fund.

As president of the Chicago Lawyers’ Chapter of the Federalist Society, founded in 1982 as a group of conservatives and libertarians interested in the current state of the legal order, Laura Kotelman welcomed those who had come to have “Lunch with Author John Fund” on Monday, September 29 at the Union League Club, 65 West Jackson, Chicago, IL. John Fund is a National Affairs columnist for National Review magazine and on-air analyst on the Fox News Channel. He is considered a notable expert on American politics and the nexus between politics and economics and legal issues. Previously Fund served as a columnist and editorial board member for The Wall Street Journal.

While John Fund was in Chicago speaking to the Chicago Lawyers’ Chapter of the Federalist Society, co-author Hans von Spakovsky was at his own venue in Toledo, Ohio, doing the same to promote their book, Obama’s Enforcer: Eric Holder’s Justice Department, which catalogues the abuses of power at the Department of Justice under Attorney General Holder. Set forth is how Attorney General Eric Holder, Jr. has politicized the Justice Department and put the interests of left-wing ideology and his political party ahead of the fair and impartial administration of justice.

Remarks made by Federalist Society member Joseph A. Morris, prior to his introduction of John Fund, provided a perfect segue to what Fund later shared about Eric Holder as President Obama’s Attorney General. Morris, a former Assistant Attorney General and Director of the Office of Liaison Services at the U.S. Department of Justice under Ronald Reagan, was eminently qualified to paint an accurate profile of Edwin Meese III, who served as U.S. Attorney General under President Reagan. In conducting the office of Attorney General, Edwin Meese III under Reagan and Eric Holder under Obama were indeed worlds apart.

It was out of great respect for Joseph Morris by members of The Chicago Lawyers’ Chapter of the Federalist Society that Laura Kotelman introduced Morris as “our home town hero.”  Joseph Morris is a partner with Morris & De La Rosa in Chicago.

Joseph Morris comments on Edwin Meese as Reagan’s Attorney General

Joseph Morris directed those in attendance to an Opinion piece that appeared on the morning of the Fund event (9/29) in the Wall Street Journal, “Holder’s Legacy of Racial Politics,” by Edwin Meese III and Kenneth Blackwell, former Ohio Secretary of State. The article relates how Eric Holder battled against state voter-ID laws despite all the evidence of their fairness and popularity.  According to Morris, the only reason for opposing sensible voter-ID laws is a “desire for votes.”

Joe Morris, in reflecting upon Edwin Meese III, spoke of Meese as Governor Reagan’s legal advisor and head of Reagan’s campaign committee in 1980. When Reagan took office, Meese went along with Reagan as one of his three staff assistants. Howard Baker later became Chief of Staff in Reagan’s second administration. With the surfacing of the Iran Contra scandal in Reagan’s 2nd administration, Edwin Meese, having been appointed Attorney General well before the scandal broke, was assigned by President Reagan to investigate the matter. Unlike Messrs. Obama and Holder, Meese saw the job of the Attorney General as one to pursue the truth, not to cover up an internal administration scandal.

It was during Reagan’s second term with the emergence of the Iran Contra scandal that Joe Morris, serving under Reagan at the time as both Chief of Staff and General Counsel of USIA (United States Information Agency), was asked to assist the Reagan White House.  Morris recalls receiving two envelopes from the White House asking that he and his entire staff at USIA assist in the Iran Contra investigation by preserving all the facts (documents, dates, etc.). The instructions to be forthcoming about preserving records and being cooperative in the Iran-Contra investigation were received not only by Joseph Morris at USIA, but were also sent by President Reagan and Attorney General Meese to every other relevant agency of the U.S. Government.

Edwin Meese, as Attorney General, told Morris that he wanted the investigation to be taken seriously.  It was through Morris’ involvement in the Iran Contra Scandal, while performing his dual roles at the USIA, that he was brought into Reagan’s Justice Department as Assistant Attorney General under Edwin Meese.  Because of this relationship with Edwin Meese, Morris was able to present an accurate account of the way Meese conducted himself in his role of Attorney General under Reagan. Meese, in his role as Attorney General, sat in on the meetings of the NSC (National Security Council) responsible for coordinating policy on national security issues.  It didn’t take long for Meese to observe that as the only lawyer among the participants, he alone was able to advise in a way that was consistent with the Constitution.

Four principles championed by Attorney General Meese

Joseph Morris set out these four principles followed by Attorney General Meese under the Reagan administration:

  • Rule of Law must always follow the truth, wherever it goes, letting the facts speak for themselves.
  • The structure of the government (system of procedure) was revamped so staff members could be brought together in an open channel of communication.
  • No stranger to controversy, Edwin Meese did not shrink from what he considered his responsibility.  On December 4, 1986, Attorney General Edwin Meese III requested that an independent counsel be appointed to investigate Iran-Contra matters. On December 19, the three judges on the appointing panel named Lawrence Walsh, a former judge and deputy attorney general in the Eisenhower Administration, to the post.
  • Fighting a battle of ideas, Meese was willing to debate the “originalist” perspective of the Constitution.  In 1985, Attorney General Edwin Meese III delivered a series of speeches challenging the then-dominant view of constitutional jurisprudence and calling for judges to embrace a “jurisprudence of original intention.” There ensued a vigorous debate in the academy, as well as in the popular press, and in Congress itself over the prospect of an “originalist” interpretation of the Constitution.

John Fund speaks

In introducing John Fund, Joseph Morris spoke of Fund as being a hard worker and a close student of the Department of Justice for thirty years, with a particular interest in the soft underbelly of the election system. Morris recalled how John Fund would call him, asking to have lunch to talk about Chicago politics. John Fund would, without fail, have with him a list of well thought out questions to ask, such as: “Could this Blagojevich person really become mayor?” Later on: “What about Rahm Emanuel running for mayor in Chicago with all his ties to Obama?”

The above reference by Morris to Emanuel’s mayoral candidacy became the focus of John Fund’s opening remarks. Fund related how Rahm Emanuel was one of only a few individuals who had ever apologized to him over something he had written. What prompted Emanuel’s apology was a debate with Fund at Northwestern University in Evanston, IL, at which Emanuel called Fund names that could only be described as over-the-top.

Expressing his delight to be back in Chicago again, while his co-author was in Toledo, Ohio, John Fund felt he had drawn the better half of the straw. There followed a pithy comment by Fund about the resignation the week before (Thursday, September 25) of Eric Holder as Attorney General due to a conflict of forces. Fund suggested that Holder’s new job title be “Permanent Witness.”

Part 2: John Fund’s knowledge and wit will be shared as he elaborates on the way Eric Holder viewed his position as Attorney General, as reflected in his behavior while serving President Obama. Additional thoughts on the direction of this nation will also be covered.

[Originally published at Illinois Review]

 

Categories: On the Blog

Seniors Dispersing Away From Urban Cores

October 03, 2014, 9:28 AM

Senior citizens (age 65 and over) are dispersing throughout major metropolitan areas, and specifically away from the urban cores. This is the opposite of the trend suggested by some planners and media sources who claim that seniors are moving to the urban cores. For example, one headline, “Millions of Seniors Moving Back to Big Cities,” sits atop a story with no data and with anecdotes that are at least as much suburban (Auburn Hills, in the Detroit area) and college town (Oxford, Mississippi and Lawrence, Kansas) as they are big city. Another article, “Why Seniors are Moving to the Urban Core and Why It’s Good for Everyone,” is also anecdote based, and gave prominence to a solitary housing development in downtown Phoenix (more about Phoenix below).

Senior Metropolitan Growth Trails National

Between 2000 and 2010, the nation’s senior population increased by approximately 5.4 million, an increase of 15 percent. Major metropolitan areas accounted for approximately 50 percent of the increase (2.7 million) and also saw their senior population increase 15 percent. By contrast, these same metropolitan areas accounted for 60 percent of overall growth between 2000 and 2010, indicating that senior growth is disproportionately occurring in smaller metropolitan areas and rural areas.

Senior Metropolitan Population Dispersing

The number of senior citizens living in suburbs and exurbs of major metropolitan areas (over 1,000,000 population) increased between 2000 and 2010, according to census data. The senior increases were strongly skewed away from the urban cores. Suburbs and exurbs gained 2.82 million senior residents over the period, while functional urban cores lost 112,000. The later suburbs added 1.64 million seniors. The second largest increase was in exurban areas, with a gain of 0.88 million seniors. The earlier suburbs (generally inner suburbs) added just under 300,000 seniors (Figure 1).

During that period, the share of senior citizens living in the later suburbs increased 35 percent. The senior citizen population share in the exurbs rose nearly 15 percent. By contrast, the share of seniors living in the functional urban cores declined 17 percent. Their share in the earlier suburbs declined 11 percent.

This is based on an analysis of small area data for major metropolitan areas using the City Sector Model.

City Sector Model analysis avoids the exaggeration of urban core data that necessarily occurs from reliance on the municipal boundaries of core cities (which are themselves nearly 60 percent suburban or exurban, ranging from as little as three percent to virtually 100 percent). It also avoids the use of the newer “principal cities” designation of larger employment centers within metropolitan areas, nearly all of which are suburbs but are inappropriately joined with core municipalities in some analyses. The “City Sector Model” small area analysis method is described in greater detail in the Note below.

Pervasive Suburban and Exurban Senior Gains

The gains in functional suburban and exurban senior population were pervasive. Among the 52 major metropolitan areas, there were gains in 50. In two areas (New Orleans and Pittsburgh), there were losses. However, in each of these cases there was an even greater senior loss in the functional urban cores. In no case did urban cores gain more or lose fewer seniors than the suburbs and exurbs. Eight of the functional urban cores experienced gains in senior population, while 44 experienced losses (Figure 2).

Largest Urban Cores

The major metropolitan areas with the largest urban cores (more than 20 percent of the population in the functional urban cores) would tend to be the most attractive to seniors seeking an urban core lifestyle. But they still saw their seniors heading to the suburbs and exurbs (Figure 3). Senior populations declined in the functional urban cores of all but two of these nine areas, New York and San Francisco. However, in both of these metropolitan areas, the increases in suburban and exurban senior populations overwhelmed the increases in the urban cores. All nine of these major metropolitan areas experienced increases in their suburban and exurban senior populations.

Moreover, the Phoenix anecdote cited above is at odds with the reality that the later suburbs and exurbs gained 165,000 seniors between 2000 and 2010, while the earlier suburbs lost 7,000 seniors. (No part of Phoenix has sufficient density or transit market share to be classified as functional urban core.)

Consistency of Seniors Trend with Other Metropolitan Indicators

As has been indicated in previous articles, there continues to be a trend toward dispersal and decentralization in US major metropolitan areas. There was an overall population dispersion from 1990 to 2000 and from 2000 to 2010, continuing trends that have been evident since World War II and even before, as pre-automobile era urban cores have lost their dominance. Jobs continued to follow the suburbanization and exurbanization of the population over the past decade as cities became less monocentric, less polycentric and more “non-centric.” As a result, work trip travel times are generally shorter for residents where population densities are lower. Baby boomers and Millennials have been shown to be dispersing as well, despite anecdotes to the contrary (Figure 4). The same applies to seniors.

Note: The City Sector Model allows a more representative functional analysis of urban core, suburban and exurban areas, by the use of smaller areas, rather than municipal boundaries. The more than 30,000 zip code tabulation areas (ZCTA) of major metropolitan areas and the rest of the nation are categorized by functional characteristics, including urban form, density and travel behavior. There are four functional classifications, the urban core, earlier suburban areas, later suburban areas and exurban areas. The urban cores have higher densities, older housing and substantially greater reliance on transit, similar to the urban cores that preceded the great automobile oriented suburbanization that followed World War II. Exurban areas are beyond the built up urban areas. The suburban areas constitute the balance of the major metropolitan areas. Earlier suburbs include areas with a median house construction date before 1980. Later suburban areas have later median house construction dates.

Urban cores are defined as areas (ZCTAs) that have high population densities (7,500 or more per square mile or 2,900 per square kilometer or more) and high transit, walking and cycling work trip market shares (20 percent or more). Urban cores also include non-exurban sectors with median house construction dates of 1945 or before. All of these areas are defined at the zip code tabulation area (ZCTA) level.
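
For readers who want the classification rules in more explicit form, here is a minimal sketch of the decision logic described in this note. The function and field names are illustrative assumptions of ours; the actual model is applied to ZCTA-level census data.

```python
# Hedged sketch of the City Sector Model rules summarized above (names assumed).

def classify_zcta(density_per_sq_mile, transit_walk_cycle_share,
                  median_house_year, within_built_up_area):
    """Assign a ZCTA to one of the four functional city sectors."""
    if not within_built_up_area:
        return "exurb"            # beyond the built-up urban area
    if (density_per_sq_mile >= 7500 and transit_walk_cycle_share >= 0.20) \
            or median_house_year <= 1945:
        return "urban core"       # high density and transit share, or pre-1946 housing
    if median_house_year < 1980:
        return "earlier suburb"   # median house built before 1980
    return "later suburb"         # median house built 1980 or later

# Example: a dense, transit-oriented area inside the built-up footprint
print(classify_zcta(9000, 0.25, 1950, True))  # "urban core"
```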

—-

Wendell Cox is principal of Demographia, an international public policy and demographics firm. He is co-author of the “Demographia International Housing Affordability Survey” and author of “Demographia World Urban Areas” and “War on the Dream: How Anti-Sprawl Policy Threatens the Quality of Life.” He was appointed to three terms on the Los Angeles County Transportation Commission, where he served with the leading city and county leadership as the only non-elected member. He was appointed to the Amtrak Reform Council to fill the unexpired term of Governor Christine Todd Whitman and has served as a visiting professor at the Conservatoire National des Arts et Metiers, a national university in Paris.

Photo: Later Suburbs of Cincinnati (where most senior growth occurred from 2000 to 2010). By Author

 

[Originally published at New Geography]

Categories: On the Blog

A Victory for Our Energy Independence: Cove Point Gas Plant Is Approved

October 02, 2014, 3:25 PM

Victories are hard won and often seem few and far between on the free market energy and environment front. We spend much of our time bemoaning and battling bad policies, regulations, laws and court decisions. We treat as victories, celebrate and publicize simply defeating (often temporarily) bad policies.

Such victories, temporary or not, while they merit celebration, are simply holding actions. True environmental and energy victories come when free choice and markets expand, bringing increased well-being in the U.S., abroad or both. We won just such a victory on September 29, when the federal government approved the expansion and modification of the Cove Point natural gas liquefaction facility.

For a number of years natural gas was increasingly in short supply in the United States. We imported natural gas to feed our growing demand for this flexible resource through a number of liquefied natural gas import terminals along the coasts. However, as high prices encouraged outside-the-box thinking and technological advancement, the fracking revolution broke loose, natural gas became abundant again and prices fell.

In the past couple of years, the average price of natural gas has been so low that some companies are capping existing wells and others are flaring natural gas associated with oil production because prices are too low to justify producing and storing it. At the same time, our allies in Europe and those facing energy scarcity in the developing world need new sources of natural gas so they aren’t held hostage to an openly hostile regime in Moscow (Europe) or to their own bad luck, geologically speaking (many developing countries).

That’s where Cove Point and other liquefaction plants come in. The U.S. can create jobs here and add to GDP and government revenues, while also benefiting people and the environment in other parts of the world by exporting our increasingly abundant supply of natural gas. It’s a win-win that only environmental extremists can object to — and they do!

Environmentalists fought Cove Point on the grounds that it will encourage increased fracking and natural gas use. Remember, these are the same environmentalists who were lauding natural gas a decade ago, when gas was expensive and they saw it as a transition fuel from coal to renewable electric power systems. Once gas became relatively cheap, and it became clear that rather than a transition fuel it would become a dominant source of energy and a driver of continued economic growth, environmentalists wanted to put on the brakes. Gas is an enemy of their steady-state, zero-growth world view.

Boo Hoo!

When fully operational in 2017, the $3.8 billion project would add about 75 jobs to the approximately 100 already at the site. Cove Point will also contribute an estimated $40 million a year in additional tax revenue to Calvert County. Upon completion, Cove Point could move enough natural gas daily to meet the household needs of 860,000 homes for four days. Cove Point’s parent company, Dominion, already has buyers lined up for its production, including a Japanese gas utility, a Japanese trading company and the U.S. unit of one of India’s largest gas distributors.

Cove Point is an important victory in the cause of decreasing fuel scarcity, but it is just a step. The Obama administration has been exceedingly slow in approving such plants. Cove Point is the fourth natural gas export facility to be approved by FERC, with 14 more awaiting approval (and they have been for some time). FERC should move with greater urgency to approve the rest of the plants and not hamper them with overly burdensome conditions that would slow their construction or even prevent it (by raising costs too much).

Categories: On the Blog

Beyond Polycentricity: 2000s Job Growth (Continues to) Follow Population

October 02, 2014, 2:01 PM

The United States lost jobs between 2000 and 2010, the first loss between census years that has been recorded in the nation’s history. The decline was attributable to two economic shocks, the contraction following the 9/11 attacks and the Great Recession, the worst financial crisis since the Great Depression. Yet, even in this moribund job market, employment continued to disperse in the nation’s major metropolitan areas.

This is the conclusion of a small area analysis (zip code tabulation areas) of data from County Business Patterns, from the Census Bureau, which captures nearly all private sector employment and between 85 and 90 percent of all employment (Note 1).

The small area analysis avoids the exaggeration of urban core data that necessarily occurs from reliance on the municipal boundaries of core cities (which are themselves nearly 60 percent suburban or exurban, ranging from as little as three percent to virtually 100 percent). This “City Sector Model” small area analysis method is described in greater detail in Note 2.

Distribution of Employment in Major Metropolitan Areas

County Business Pattern data indicate that employment dropped approximately 1,070,000 in the 52 major metropolitan areas (those with more than 1,000,000 population) between 2000 and 2010. The inner city sectors, the functional urban cores and the earlier suburbs, were hard-hit, together losing 3.74 million jobs. The outer sectors, the later suburbs and the exurbs, gained 2.67 million jobs (Figure 1).

There were job losses of more than 300,000 in the functional urban cores, and even larger losses (3.2 million) in the earlier suburbs. The functional urban cores are defined by the higher population densities that predominated before 1940 and a much higher dependence on transit, walking and cycling for work trips. The earlier suburbs have median house construction dates before 1980.

The share of major metropolitan area employment in the functional urban cores dropped from 16.4 percent in 2000 to 16.2 percent in 2010. This compares to the 8 percent of major metropolitan employment that is in downtown (central business district) areas. The notion, however, that metropolitan areas are dominated by their downtowns is challenged by the fact that 84 percent of jobs are outside the functional urban cores.

The largest share of major metropolitan area employment is clustered in the earlier suburbs, those with median house construction dates from 1946 to 1979. In 2010, 46.8 percent of the jobs were in the earlier suburbs, a decline from 51.4 percent in 2000.

These losses in employment shares in the two inner city sectors were balanced somewhat by increases in the outer sectors, the later suburbs (with median house construction dates of 1980 or later) and the exurbs, which are generally outside built-up urban areas. The increase was strongest in the later suburbs, where employment increased by 2.6 million. The share of employment in the later suburbs rose to 25.5 percent from 21.6 percent. There was also a 600,000 increase in exurban employment. The exurban share of employment rose to 11.5 percent from 10.6 percent (Figure 2).

The Balance of Residents and Jobs

There is a surprisingly strong balance between population and employment within the city sectors, which belies the popular perception among some press outlets and even some urban experts that as people move farther away from the urban core, they have to commute farther. In fact, 92 percent of employees do not commute to downtown, and as distances increase, the share of employees traveling to work downtown falls off substantially. As an example, only three percent of working residents in suburban Hunterdon County, New Jersey (in the New York metropolitan area), work in the central business district, Manhattan, while 80 percent work in relatively nearby areas of the outer combined metropolitan area.

It is to be expected that the functional urban core would have a larger share of employment than population. However, the difference is not great, with 16.2 percent of employment in the functional urban core and 14.4 percent of the population. The earlier suburbs have by far the largest share of the population, at 42.0 percent. They also have the largest share of employment, at 46.8 percent. The later suburbs have 26.8 percent of the population, slightly more than their 25.5 percent employment share. The largest difference, as would be expected, is in the exurbs, with 16.8 percent of major metropolitan area residents and 11.5 percent of employment (Figure 3). It is notable, however, that the difference between the shares of population and employment varies less than 15 percent in the three built-up urban area sectors (urban core, earlier suburbs and later suburbs), though the difference was greater in the exurbs.

How Employment Followed Population in the 2000s

The outward shifts of population and employment track each other closely across the city sectors. In the earlier suburbs, where population and employment are greatest, the population share declined 4.3 percentage points, while the employment share declined a near lockstep 4.6 percentage points. The later suburbs had a 4.5 percentage point increase in population share, followed closely by a near lockstep 3.9 percentage point increase in employment share. In the exurbs, a 1.5 percentage point increase in the population share was accompanied by a 0.9 percentage point increase in the employment share. The connection is less clear in the functional urban core, where a 1.6 percentage point drop in the population share was associated with a 0.2 percentage point reduction in the employment share (Figure 4).

The similarity in population and employment shares across the city sectors is an indication that employment growth has been geographically tracking population growth for decades, as cities have evolved from monocentricity to polycentricity and beyond.

“Job Following” by Relative Urban Core Size

Similar results are obtained when cities are categorized by the population of their urban cores relative to the total city population. Each category indicates an outward shift from the functional urban cores and earlier suburbs to the later suburbs and exurbs, in both the population share and the employment share. However, the shift is less pronounced in the cities with larger relative urban cores, which tend to be in the older urban regions (Figure 5). Of the 18 cities with functional urban cores amounting to more than 10 percent of the metropolitan area population, 16 are in the Northeast (including the Northeastern corridor cities of Washington and Baltimore) and the Midwest, where strong population growth ended long ago.

As usual, New York is in a category by itself, with a functional urban core containing more than 50 percent of its population. New York experienced an outward shift of 1.1 percentage points in its population and a 0.4 percentage point outward shift in its employment (the total shift in share, from the urban core and earlier suburbs to the later suburbs and exurbs, expressed in percentage points).

Generally speaking, the stronger the functional urban core, the less the movement of jobs and people from the center. The actual percentages of functional urban core population by city are shown in From Jurisdictional to Functional Analysis of Urban Cores and Suburbs (Figure 6).

On average, there was a shift of nearly five percent from the inner sectors (functional urban cores and earlier suburbs) to the outer sectors (later suburbs and exurbs).

Commute Times: Less Outside the Urban Cores

The earlier suburbs are generally between the functional urban cores and the later suburbs geographically. As a result, jobs are particularly accessible to residents from all over the metropolitan area. A further consequence is that commute times are shortest (26.3 minutes) in the earlier suburbs, where approximately half of the people also live. Commute times are a bit higher in the later suburbs (27.7 minutes). The exurbs have the third longest commutes, at 29.2 minutes. Finally, commute times are longest in the functional urban cores (31.8 minutes), both because traffic congestion is greater (to be expected, not least because of their higher densities) and because more people take transit, which is slower (Figure 7).

The dispersed and well coordinated location of jobs and residences is one reason that United States metropolitan areas have shorter commute times and less traffic congestion than their international competitors in Europe, Australia, and Canada. All this is testimony to the effectiveness with which people, and the businesses established to serve them, have produced effective labor markets, the most affluent in the world, in which the transaction-related impacts of work trip travel time are less than elsewhere.

Beyond Polycentricity

These are not new concepts, despite the continuing tendency to imagine the city as a monocentric organism where everyone works in downtown skyscrapers and lives in suburban dormitories. The lower density US city has not descended into the suburban gridlock that some planners have so stridently predicted. Indeed, traffic congestion is considerably less intense in US cities than it is in the other parts of the high income world for which there is data.

A quarter century ago, University of Southern California economists Peter Gordon and Harry Richardson wrote that “the co-location of firms and households at decentralized locations has reduced, not lengthened commuting times and distances. Decentralization reduces pressures on the CBD, relieves congestion and avoids ‘gridlock.'” In 1996 they described Los Angeles as “beyond polycentricity.” Both of these observations fit well as a description of trends in the 2000s. Most US major metropolitan areas are now “beyond polycentricity,” not just Los Angeles.

Wendell Cox is principal of Demographia, an international public policy and demographics firm. He is co-author of the “Demographia International Housing Affordability Survey” and author of “Demographia World Urban Areas” and “War on the Dream: How Anti-Sprawl Policy Threatens the Quality of Life.” He was appointed to three terms on the Los Angeles County Transportation Commission, where he served with the leading city and county leadership as the only non-elected member. He was appointed to the Amtrak Reform Council to fill the unexpired term of Governor Christine Todd Whitman and has served as a visiting professor at the Conservatoire National des Arts et Metiers, a national university in Paris.

——

Note 1: The Census Bureau describes “County Business Pattern” data as follows: “Statistics are available on business establishments at the U.S. level and by State, County, Metropolitan area, and ZIP code levels. Data for Puerto Rico and the Island Areas are available at the State and county equivalent levels. County Business Patterns (CBP) covers most NAICS industries excluding crop and animal production; rail transportation; National Postal Service; pension, health, welfare, and vacation funds; trusts, estates, and agency accounts; private households; and public administration. CBP also excludes most establishments reporting government employees.”

Note 2: The City Sector Model allows a more representative functional analysis of urban core, suburban and exurban areas, by the use of smaller areas, rather than municipal boundaries. The more than 30,000 zip code tabulation areas (ZCTA) of major metropolitan areas and the rest of the nation are categorized by functional characteristics, including urban form, density and travel behavior. There are four functional classifications, the urban core, earlier suburban areas, later suburban areas and exurban areas. The urban cores have higher densities, older housing and substantially greater reliance on transit, similar to the urban cores that preceded the great automobile oriented suburbanization that followed World War II. Exurban areas are beyond the built up urban areas. The suburban areas constitute the balance of the major metropolitan areas. Earlier suburbs include areas with a median house construction date before 1980. Later suburban areas have later median house construction dates.

Urban cores are defined as areas (ZCTAs) that have high population densities (7,500 or more per square mile or 2,900 per square kilometer or more) and high transit, walking and cycling work trip market shares (20 percent or more). Urban cores also include non-exurban sectors with median house construction dates of 1945 or before. All of these areas are defined at the zip code tabulation area (ZCTA) level.

—–

Photo: Beyond Polycentric Houston (by author)

 

[Originally published at New Geography]

Categories: On the Blog

Education Incentives Can Help End Low Expectations

October 02, 2014, 12:58 PM


Behavioral psychologists and economists long have considered incentives to be a normal part of human nature, but applying them to education still stokes controversy.

For example, some people recoil at the idea of paying kids and their teachers for high scores on advanced-placement tests that get students college credit in high school, as some schools in Northern Virginia are doing.

It sounds so … mercenary. Exchanging money for good performance? Handing out filthy lucre to reward a personally fulfilling and enriching achievement? Why, it almost sounds like the Grammys, or the World Series, or even a job. Nobody except the most Puritan-minded thinks any of these occupations or rewards is anything but a celebration of excellence, or at the very least a job well done. Adults can accept money as a reward for high performance. There’s no reason children cannot do the same — except prejudice.

For several generations now, Americans have underestimated their children. Laws mostly bar children from taking even a small-time job until age 16. Kids can hardly ride their own bikes down to the park or corner store any more.

These low expectations are endemic in education, research confirms. It starts with the teachers. University of Missouri economist Cory Koedel has found education students get the highest grades but the easiest work of all college majors. A 2013 study by the Thomas B. Fordham Institute found teachers typically assign books at their students’ reading level, not their grade level. This means teachers frequently assign too-easy books, a problem that compounds as children move up grades. If fourth-grader Suzy gets third-grade rather than fourth-grade books to read, and so on up through the grades, she is likely to remain behind in reading for the rest of her life.

Washington, D.C., mother Mary Riner became disgusted with the low expectations at her daughter’s supposedly well-performing grade school. Fifth-grade Latin homework, for example, didn’t involve memorizing vocabulary or practicing verb tenses, but coloring Latin words. Yes, coloring — with a crayon. Riner responded by helping start a truly demanding school, called BASIS DC.

Low expectations don’t occur in a vacuum. They result from a set of expectations in our society, and they reinforce and verify those expectations as a form of self-fulfilling prophecy. A smart use of incentives offers one way to address this problem.

In their new book “Rewards: How to Use Rewards to Help Children Learn—and Why Teachers Don’t Use Them Well,” authors Herbert Walberg and Joseph Bast illustrate how positive reinforcement can help lift expectations and thus raise student performance.

They discuss how the attitudes of many in the education establishment are a barrier to putting to work the science that shows kids respond to incentives just like adults. They also explain that rewards are about far more than money — good teachers use simple rewards, such as stickers or praise, to help instill in children the longer-lasting internal rewards of satisfaction in learning and pride in a job well done.

Perhaps the biggest shocker may be the realization that incentives always will be embedded in education, regardless of whether people acknowledge their existence. If teachers reinforce learning with encouragement, recognition and grades, that’s an incentive. If teachers give students too-easy work because they expect every real academic challenge to raise complaints, that creates a very different set of incentives for both teachers and students.

Incentives will always exist in education. The question is, will educators harness this power for the students’ good?

Joy Pullmann is managing editor of School Reform News and an education research fellow at The Heartland Institute.

[Originally published at WatchDog]

 

Categories: On the Blog

Education Incentives Can Help End Low Expectations

October 02, 2014, 8:56 AM

Behavioral psychologists and economists have considered incentives to be a normal part of human nature for decades, if not centuries, but applying them to education still stokes controversy. For example, some people recoil at the idea of paying kids and their teachers for high scores on Advanced Placement tests that get students college credit in high school, as some schools in Northern Virginia are doing,

It sounds so … mercenary. Exchanging money for good performance? Handing out filthy lucre to reward a personally fulfilling and enriching achievement? Why, it almost sounds like the Grammys, or the World Series, or even a job. Nobody except the most Puritan-minded thinks any of these occupations or rewards is anything but a celebration of excellence, or at the very least a job well done. Adults can accept money as a reward for high performance. There’s no reason children cannot do the same—except prejudice.

For several generations now, Americans in general have underestimated their children. Laws mostly bar children from taking even a small-time job until age 16. Kids can hardly ride their own bikes down to the park or corner store any more.

These low expectations are endemic in education, research confirms. It starts with the teachers. University of Missouri economist Cory Koedel has found education students get the highest grades but the easiest work of all college majors. A 2013 study by the Thomas B. Fordham institute found teachers typically assign books at their students’ reading level, not their grade level. This means teachers frequently assign too-easy books, a problem that compounds as children move up grades. If fourth-grader Suzy gets third-grade rather than fourth-grade books to read, and so on up through the grades, she is likely to remain behind in reading for the rest of her life.

Washington, DC mother Mary Riner became disgusted with the low expectations at her daughter’s supposedly well-performing grade school. Fifth-grade Latin homework, for example, didn’t involve memorizing vocabulary or practicing verb tenses, but coloring Latin words. Yes, coloring—with a crayon. Riner responded by helping start a truly demanding school, called BASIS DC.

Low expectations don’t occur in a vacuum—they result from a set of expectations in our society, and they reinforce and verify those expectations as a form of self-fulfilling prophecy. A smart use of incentives offers one way to address this problem.

In their new book Rewards: How to Use Rewards to Help Children Learn—and Why Teachers Don’t Use Them Well, authors Herbert Walberg and Joseph Bast illustrate how positive reinforcement can help lift expectations and thus raise student performance. They discuss how the attitudes of many in the education establishment are a barrier to putting to work the science that shows kids respond to incentives just like adults. They also explain that rewards are about far more than money—good teachers use simple rewards, such as stickers or praise, to help instill in children the longer-lasting internal rewards of satisfaction in learning and pride in a job well done.

Perhaps the biggest shocker is the realization that incentives will always be embedded in education, regardless of whether people acknowledge their existence. If teachers reinforce learning with encouragement, recognition, and grades, that’s an incentive. If teachers give students too-easy work because they expect every real academic challenge to raise complaints, that creates a very different set of incentives for both teachers and students.

Incentives will always exist in education. The question is, will educators harness this power for the students’ good?

Joy Pullmann is managing editor of School Reform News and an education research fellow at The Heartland Institute.

Categories: On the Blog

New Commuting Data Shows Gain by Individual Modes

October 01, 2014, 1:12 PM

The newly released American Community Survey data for 2013 indicates little change in commuting patterns since 2010, a result that is to be expected in a period as short as three years. Among the 52 major metropolitan areas (over 1 million population), driving alone increased to 73.6% of commuting (including all travel modes and working at home). The one mode that experienced the largest drop was carpools, where the share of commuting dropped from 9.6% in 2010 to 9.0% in 2013. Doubtless most of the carpool losses represented gains in driving alone and transit. Transit grew, increasing from a market share of 7.9% in 2010 to 8.1% in 2013 in major metropolitan areas; similarly working at home increased from 4.4% to 4.6%, an increase similar to that of transit (Figure 1). Bicycles increased from 0.6% to 0.7%, while walking remained constant at 2.8%.

Transit: Historical Context

Transit has always received considerable media attention in commuting analyses. Part of this is because of the comparative labor efficiency (not necessarily cost efficiency) of transit in high-volume corridors leading to the nation’s largest downtown areas. Part of the attention is also due to the “positive spin” that has accompanied transit ridership press releases. An American Public Transportation Association press release earlier in the year, which claimed record ridership, evoked a surprisingly strong response from some quarters. For example, academics David King, Michael Manville and Michael Smart wrote in the Washington Post: “We are strong supporters of public transportation, but misguided optimism about transit’s resurgence helps neither transit users nor the larger traveling public.” They concluded that transit trips per capita had actually declined in the past 5 years.

Nonetheless, transit remains well below its historic norms. The first commute data was collected in the 1960 census, which indicated a 12.6% national market share for transit for the entire U.S. population. By 1990, transit’s national market share had dropped to 5.1%. After dropping to 4.6% in 2000, transit recovered to 5.2% in 2012. But clearly the historical decline of transit’s market share has at least been halted (Figure 2).

Even so, in a rapidly expanding market, many more people have begun driving alone than using transit. More than 47 million more commuters drive alone today than in 1980, while transit gained about 1.4 million commuters over the same period.

The largest decline occurred before 1960. Transit’s work trip market share was probably much higher in 1940, but the necessary data was not collected in that year’s census, taken just before World War II and the great automobile-oriented suburbanization. In 1940, overall urban transit travel (passenger miles all day, not just commutes) is estimated to have been twice that of 1960 and nearly 10 times that of today.

Transit’s 2010-2013 Trend

To a remarkable extent, transit continues to be a “New York story.” Approximately 40% of all transit commuting is in the New York metropolitan area. New York’s 2.9 million transit commuters are nearly six times the number in second-place Chicago. Transit accounts for 30.9% of commuting in New York. San Francisco ranks second at 16.1% and Washington third at 14.2%. Only three other cities, Boston (12.8%), Chicago (11.8%), and Philadelphia (10.0%), have transit commute shares of 10% or more.

From 2010 to 2013, transit added approximately 375,000 new commuters. Approximately 40% of the entire nation’s transit commuting increase occurred in the New York metropolitan area. This was part of the predictable concentration (80%) of ridership gains in the transit legacy metropolitan areas, the six with transit market shares of 10% or more. Combined, these metropolitan areas added 300,000 transit commuters, 89% of them on the large rail systems that feed the nation’s largest downtown areas.

Perhaps surprisingly, Seattle broke into the top five (Figure 3), edging out legacy metropolitan areas Philadelphia and Washington. Seattle has a newer light rail and commuter rail system. Even so, the bulk of the gain in Seattle was not on the rail system: approximately 80% of its transit commuter growth was on non-rail modes. Seattle has three major public bus systems, a ferry system and the newer Microsoft private bus system that serves its employment centers throughout the metropolitan area. All of the new transit commuters in eighth-ranked Miami were on non-rail modes, despite its large and relatively new rail system. New rail city Phoenix (10th) also experienced the bulk of its new commuting on non-rail modes (93%). Rail accounted for most of the gain in San Jose (9th), with 58% of the total. The transit market shares in Miami, San Jose and Phoenix are all below the national average of 5.2%.

Outside the six transit legacy metropolitan areas, gains were far more modest, at approximately 75,000. Seattle, Miami, San Jose, and Phoenix accounted for nearly 60,000 of this gain, leaving only 15,000 for the other 42 major metropolitan areas, including Los Angeles, which lost about 5,000 transit commuters. Los Angeles now has a transit work trip market share of 5.8%, below the 5.9% in 1980 when the Los Angeles County Transportation Commission approved the funding for its rail system (the result of my amendment, see “Transit in Los Angeles“). Los Angeles is falling far short of Matt Yglesias’s characterization of it as the “next great mass-transit city.”

Since 2000, the national trend has been similar. Nearly 80% of the increase in transit commuting has been in the transit legacy metropolitan areas, where transit’s share has risen from 17% to 20%. These areas accounted for only 23% of the major metropolitan area growth since 2000. By contrast, 77% of the major metropolitan area growth has been in the 46 other metropolitan areas, where transit’s share of commuting has remained at 3.2% since 2000. There are limits to how far the legacy metropolitan areas can drive up transit’s national market share.

Prospects for Commuting

At a broader level, the new data shows the continuing trend toward individual mode commuting, as opposed to shared modes. Between 2010 and 2013, personal modes (driving alone, bicycles, walking and working at home) increased from 82.3% to 82.7% of all commuting. Shared modes (carpools and transit) declined from 17.7% of commuting to 17.3%. These data exclude the “other modes” category (1.2% of commuting) because it includes both personal and shared commuting. None of this should be surprising, since one of the best ways to improve productivity, both personal and in the economy, is to minimize travel time for necessary activities throughout the metropolitan area (labor market).
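The personal/shared split in the preceding paragraph can be reproduced from the 2013 mode shares quoted at the beginning of this article. Below is a minimal sketch in Python (for illustration only), assuming the 1.2% “other” category is excluded and the remaining shares are rescaled to total 100%, as the text describes.

# Reproduce the 2013 personal vs. shared commuting split cited above.
# Mode shares are the rounded figures quoted earlier in this article;
# the "other" category (1.2%) is excluded and the remainder rescaled.
shares_2013 = {
    "drive_alone": 73.6,
    "carpool": 9.0,
    "transit": 8.1,
    "work_at_home": 4.6,
    "walk": 2.8,
    "bicycle": 0.7,
}
personal_modes = {"drive_alone", "work_at_home", "walk", "bicycle"}
shared_modes = {"carpool", "transit"}

base = sum(shares_2013.values())  # 98.8% once "other" is excluded
personal = sum(shares_2013[m] for m in personal_modes) / base * 100
shared = sum(shares_2013[m] for m in shared_modes) / base * 100
print(f"Personal modes: {personal:.1f}%")  # ~82.7%
print(f"Shared modes:   {shared:.1f}%")    # ~17.3%

The rescaling is why the two reported categories sum to exactly 100 percent even though the underlying mode shares, with “other” excluded, sum to 98.8 percent.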

Wendell Cox is principal of Demographia, an international public policy and demographics firm. He is co-author of the “Demographia International Housing Affordability Survey” and author of “Demographia World Urban Areas” and “War on the Dream: How Anti-Sprawl Policy Threatens the Quality of Life.” He was appointed to three terms on the Los Angeles County Transportation Commission, where he served with the leading city and county leadership as the only non-elected member. He was appointed to the Amtrak Reform Council to fill the unexpired term of Governor Christine Todd Whitman and has served as a visiting professor at the Conservatoire National des Arts et Metiers, a national university in Paris.

Photograph: DART light rail train in downtown Dallas (by author)

 

[Originally published at New Geography]

Categories: On the Blog

Does the GOP Need to Start Listening to Millennials?

September 30, 2014, 2:59 PM

Recently on his show, former Gov. Mike Huckabee ran a segment called, “Does the GOP need to start listening to millennials?” The answer to this question is not only a “yes,” it’s a “you should have started listening several years ago.”

The segment consisted of Huckabee speaking with three college students about what they stand for and what makes them active in politics. The students and Heartland Senior Fellow Benjamin Domenech gave good, thoughtful answers; however, they barely scratched the surface of the growing liberty movement and its potential impact.

This movement started to form into a cohesive group back during the primaries of the 2008 presidential election. The messages of freedom and peace espoused by Ron Paul resonated with many young people and spread wildly on the internet. While the candidate was mostly ignored by the mainstream media, his supporters did all in their power to counteract this apparent blackout in coverage.

A true grassroots campaign took shape. “Who is Ron Paul?” signs started showing up along many streets. Internet polls were targeted to show Ron Paul as the favorite among candidates. Then the money started to roll in. Supporters of Paul embraced the “Money Bomb,” a one-day fundraising blitz. On November 5, 2007, the campaign raised $4.3 million in a single day. Then on December 16, 2007, Paul’s campaign made headlines by raising nearly $6 million in one day, breaking the previous record.

The campaign showed it was a force; it showed supporters they could organize and cause ripples at a national level. Although Paul is no longer running, his supporters are still actively engaged in politics and policy discussions. In the years since the 2008 primaries, the Libertarian Party has seen an explosion in membership. The presence of the liberty movement on the internet is also unavoidable.

There are multiple reasons the freedom message connected with youth. Millennials, like myself, have grown up in a broken system. The national debt was already at appalling levels; the economy collapsed and has been limping on ever since; unemployment, especially among the youth, remains high; and student debt is piling up. What is most concerning is that the government seems unable to do anything about it. In fact, it is becoming more apparent that the government is the main source of these problems. This is why the liberty movement has grown so large recently; it is refreshing to think there may be a solution that takes the weight of such a large and bloated government off the backs of these disenfranchised individuals.

This is where the GOP comes in. The Republican Party has an opportunity to pick up these politically active individuals. They will have to work for it though. While generally the Republican Party advocates for smaller government, they have to take it a few steps further to win over the libertarian-minded youth. These millennials see no justification for the government to be involved in personal matters, bedroom issues, or other unnecessary prohibitions.

If the Republican Party embraces the idea of a smaller government with less intrusion into individual privacy, they’ll have a chance of absorbing much of this liberty movement. This alteration will turn the appearance of the GOP from a party of “old white men” to one that accepts and encompasses a wide range of people who believe freedom is the answer to societal problems.

Categories: On the Blog

Sea Level Rise Issues in Florida Elections

September 30, 2014, 2:03 PM

The September 24, 2014 New York Times (NYT) carried an article by reporter Gail Collins, “Florida Goes Down the Drain—The Politics of Climate Change.” A more inflammatory title for the same article appeared in the September 27, 2014 Atlanta Journal-Constitution: “Florida soggier as GOP ignores climate change.” Reading the articles shows the obvious intent to inject climate change into the November Florida elections—in particular the Governor’s race between incumbent Republican Governor Rick Scott and Democrat candidate Charles Crist. Ms. Collins portrays Governor Scott as uninformed about climate change issues with regard to sea level rise.

This article may be of concern to Republicans campaigning in the 2014 and 2016 elections.

MISSTATEMENTS OF CLIMATE SCIENCE FACTS

Ms. Collins’ article is a fabrication and a testament that many reporters have no concern about the truth with regard to climate science. No one except climate alarmists raises concerns about the words “climate change.” Climate change has taken place continuously since the planet was created 4.5 billion years ago. These two words always go together and are just as normal as “the sun always rises in the East.” The controversy is about the causes of global warming, which Democrats blame on carbon dioxide from burning fossil fuels. This description of global warming is labeled catastrophic anthropogenic (man-made) global warming (CAGW).

Atmospheric carbon dioxide has been increasing since 1950, and this is thought by many to be due to humans burning fossil fuels—coal, oil, and natural gas. The increase has been from 310 parts per million (ppm) in 1950 to 400 ppm in 2014. From 1950 to about 1975 there was no global warming, and there was some concern about an impending ice age, shown by articles in the June 24, 1974 Time Magazine and the April 28, 1975 Newsweek. Even the 1971 writings of Dr. John Holdren, President Obama’s science advisor, “warned of a coming ice age.” From 1975 to 1998, global temperatures slowly increased, and since 1998 there has been no increase. Annual increases in atmospheric carbon dioxide from 1998 to the present, about 2 ppm per year, are the highest in millennia. Using global warming alarmists’ logic, you could say carbon dioxide increases prevent global warming.

Environmentalists within the Democrat Party like Al Gore and Tim Wirth subscribed to CAGW in the 1980s and gained further support after the United Nations formed the United Nations Intergovernmental Panel on Climate Change (UNIPCC), which has produced a series of five Assessment Reports since 1990, with the most recent in 2014. These documents are accepted without question. CAGW proponents argue increased atmospheric carbon dioxide has caused increased heat waves, record high temperatures, flooding, drought, wildfires, reduced snowfall, tornadoes, hurricanes, sea level rise, Arctic ice melting, etc.

To counteract omissions, half-truths, and false statements in these reports, the Nongovernmental International Panel on Climate Change (NIPCC) was formed in 2003. Since 2009, NIPCC has released six reports that give authoritative, easily read information about vast amounts of experimental data showing the negligible influence of carbon dioxide from burning fossil fuels on climate, the benefits of increased atmospheric carbon dioxide, the financial losses from mitigation, and the proper role of adapting to climate change. NIPCC is supported by three non-profit organizations: the Center for the Study of Carbon Dioxide and Global Change, the Science and Environmental Policy Project, and The Heartland Institute.

A host of data exists to show all the catastrophic events allegedly caused by CAGW occurred in the past when atmospheric carbon dioxide levels were lower and constant. For many weather events, rates of occurrence have recently declined. The U.S. government provides data on the various climate events CAGW proponents claim are increasing: heat waves, record high temperatures, flooding, drought, wildfires, reduced snowfall, tornadoes, hurricanes, sea level rise, and Arctic ice melting. Inspection of the data shows CAGW claims are false or exaggerated. Another omitted fact is that Antarctic sea ice in September 2014 is at the highest level since satellite measurements started in 1979.

The lack of global warming over the past 16 years, when atmospheric carbon dioxide levels increased at the highest rate in thousands of years, is conveniently ignored in the UNIPCC Summary for Policymakers reports. Reflecting the consternation among climate alarmists, 52 explanations have been produced to date for the pause in global warming.

In addition, the lack of global temperature increases since 1998 has embarrassed climate alarmists, so they changed the cause for concern from “global warming” to “climate change.” Do they expect a planet with no climate change, where temperatures remain constant forever? Is this a condition that once existed on Earth? Climate alarmists are deceitful and dishonest in characterizing carbon dioxide increases as causing “climate change” instead of “global warming.”

EXAGGERATION OF FLORIDA SEA LEVEL RISE

The NYT article exaggerates sea level rise in Florida by charging that Miami Beach may be under water in the near future. The National Oceanic and Atmospheric Administration posts its Tides and Currents database of world-wide tidal gauge measurements of sea level change. The table below shows the average sea level rise over the past century for Miami and adjacent areas.

SEA LEVEL RISE IN FLORIDA

LOCATION              DATA YEARS      AVERAGE SEA LEVEL RISE
Miami Beach           1931-1981       2.39 mm/year
Daytona Beach         1925-1983       2.32 mm/year
Jacksonville, FL      1928-2006       2.40 mm/year
Vaca Key              1971-2006       2.78 mm/year
Key West              1913-2006       2.24 mm/year

With current trends of sea level rise, the Miami Beach sea level may rise about 240 mm in 100 years, or roughly 9.5 inches. Gail Collins also claimed far higher sea level rises, writing that “the group had pulled out their maps and projections — including the one that shows much of Miami-Dade County underwater by 2048.” This nonsense is based on projections from computer models that have been shown unable to predict current global temperature changes or local climate changes. Thus, they are worthless for making energy policy decisions. A May 2014 report, “Tide gauge location and measurement of global sea level rise,” shows sea level rise is local, globally averages about 1 mm per year, and has not accelerated over the past 50 years. Another report, “Secular and Current Sea Level Rise,” shows UNIPCC computer projections have no credibility.
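For readers who want to check the arithmetic, here is a minimal sketch in Python (for illustration only) that projects the tide-gauge trends from the table above over a century, assuming each station’s linear trend simply continues; the Miami Beach figure comes out to roughly 240 mm, or about 9.4 inches, matching the projection above to within rounding.

# Project 100-year sea level rise from the NOAA tide-gauge trends in the
# table above, assuming each station's linear trend simply continues.
MM_PER_INCH = 25.4

trends_mm_per_year = {
    "Miami Beach": 2.39,
    "Daytona Beach": 2.32,
    "Jacksonville, FL": 2.40,
    "Vaca Key": 2.78,
    "Key West": 2.24,
}

for station, rate in trends_mm_per_year.items():
    rise_mm = rate * 100                 # millimeters over 100 years
    rise_inches = rise_mm / MM_PER_INCH  # converted to inches
    print(f"{station:17s} {rise_mm:5.0f} mm  ({rise_inches:.1f} inches) per century")

The extrapolation assumes no acceleration in the trend, consistent with the May 2014 report cited above.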

Miami Beach and adjacent areas have an unusual problem of street flooding during spring and fall high tides because of back flows in their storm drains. The Palm Beach Post reported, “Much of Miami Beach’s drainage system dates back to the 1940s and there is limited data about how many outfalls were designed to remain above high tide or for how long.  But an analysis performed by Coastal Systems International, another contractor assisting in the project, showed the ends of the drain pipes are spending more time submerged, with the mean high water elevation creeping up by about 1.68 inches over the last 14 years.” The flooding is a problem of design, not of increasing carbon dioxide due to the use of fossil fuels. The drainage system needs to be redesigned to prevent back flows and improve drainage. Adding higher energy costs driven by CAGW fears only exacerbates the financial burden on Florida beachfront residents.

History has shown us sea levels rise and fall with time. Fifteen thousand years ago, when the upper part of North America was covered with ice, sea levels were 400 feet lower than at present. Data from Europe indicates sea levels around one thousand years ago were “higher” than at present. Climate alarmists are using natural sea level changes to provoke fear among the population and build support for economy-killing policies of abolishing the use of the United States’ abundant, economical, and geographically distributed fossil fuels of coal, oil, and natural gas.

CONSEQUENCES OF ACTIONS

The Cornwall Alliance, a Christian public policy organization, published a September 17, 2014 policy statement, “Protect the Poor: Ten Reasons to Oppose Harmful Climate Change Policies,” that succinctly points out the reasons against, and the consequences of, enacting the global warming policies advocated by Gail Collins in her NYT article. With present technology, renewable energy sources such as solar, wind, ethanol from corn, and other biofuels are not practical and economical in many instances (here, here, here, here, here, and here). Abandoning fossil fuel use will condemn developing nations to perpetual poverty and developed nations like the United States to a less healthy and enjoyable society. Remember, sub-Saharan Africa has the lowest penetration of electricity of any large area on earth, with 550 million people lacking access to electricity. This is the reason plagues like Ebola can take root.

Categories: On the Blog

Thinking the Unthinkable: Imposing the “Utility Model” on Internet Providers

September 30, 2014, 9:38 AM

Back in 1997, then-FCC Chairman Reed Hundt titled a speech, “Thinking About Why Some Communications Mergers Are Unthinkable.” In his address, Mr. Hundt explained why, in his view, it was “unthinkable” to contemplate a merger between AT&T and one of the Bell Operating Companies. A principal reason had to do with what Mr. Hundt claimed would be the “resulting concentration” of “the long distance market.”

Well, this thinking about the unthinkable was not very prescient regarding the development of what, even then, was a rapidly changing marketplace. There is no longer any meaningful “long distance market.” Long distance is long gone.

But the regulatory immodesty that leads FCC commissioners, even well-meaning ones, to think that they can predict – and then manage for the benefit of consumers – increasingly fast-paced technological and marketplace changes is not, like long distance, long gone. Indeed, I fear that, right now, such immodesty is at a dangerously high point.

So much so that in recent days I have found myself “thinking the unthinkable.” It now looks possible that FCC Chairman Tom Wheeler and his two Democrat colleagues, Mignon Clyburn and Jessica Rosenworcel, might actually vote to classify broadband Internet service providers (ISPs) as common carriers under Title II of the Communications Act. This means regulating Internet providers under a public utility-type regime that was applied in the last century to the monopolistic Ma Bell – even though the Internet service provider market is now effectively competitive.

It means regulating Internet providers under a regime like the one applied to electric utilities. Susan Crawford, one of the leading advocates of Title II regulation, explicitly equates the provision of electricity service and Internet service and advocates regulating them the same way. On page 265 of her book, Captive Audience, she concludes that “America needs a utility model” for Internet providers. Professor Crawford’s thinking is fully in line with that of other Title II advocates.

Well, I think it is unthinkable that Chairman Wheeler and his two Democrat colleagues might adopt a utility model for broadband. Sure, I understand that there are various theories going around that, after imposing Title II regulation, the Commission could then decide to forbear from actually applying some of the Title II common carrier requirements, such as requiring advance agency permission before ISPs construct new networks, or imposing agency-prescribed regulatory accounting requirements and equipment depreciation schedules on ISPs, or prescribing the value of the providers’ property. But the Commission is not even proposing at this time to exercise such forbearance authority. And, in any event, it has exercised forbearance authority only sparingly, and then only very slowly, since the Telecommunications Act of 1996 granted the agency such authority. And through its precedents the Commission has established high hurdles to granting forbearance.

More to the point, while a few of the Title II advocates suggest the FCC could forbear from applying all but Title II’s Section 202 nondiscrimination prohibition, this is a distinct minority view. Most do not advocate forbearing from Section 201’s rate regulation provision. After all, the “utility model” advocated by Professor Crawford and others has rate regulation at its very core. Many of these Title II advocates’ complaints about Internet provider practices, including those of wireless Internet providers, concern what they claim are “unreasonable” data tiers or limits, and they routinely seek to have the FCC compel the production of information concerning demand and usage levels, service provider costs, and service revenues. This is the very type of information central to traditional utility rate cases.

In a recent letter to Verizon Wireless concerning the way Verizon administers its unlimited data plan, FCC Chairman Wheeler questioned whether the provider was trying to “enhance its revenue streams.” Frankly, I don’t believe America’s Internet providers could have invested over $1.3 trillion since 1996 – and $75 billion just in 2013 – if they didn’t have an eye on their revenue streams. But what is most important to appreciate is that FCC inquiries regarding Internet provider revenue streams, usage levels, data tier modeling, and cost of providing service presage rate regulation under Title II.

To me, it is unthinkable that the FCC would now consider going backwards by imposing Title II common carrier regulation on broadband Internet providers. In 2002, the Commission declared “broadband services should exist in a minimal regulatory environment that promotes investment and innovation in a competitive market.” In classifying cable broadband, and then wireline broadband, as information services rather than services subject to Title II regulation, the Commission emphasized it wanted to create a rational framework “for the competing services that are provided via different technologies and network architectures.” It recognized, in 2002, that Internet access already was “evolving over multiple electronic platforms, including wireline, cable, terrestrial wireless and satellite.”

Of course, since the FCC adopted a “minimal regulatory environment” for broadband in 2002 – and then successfully defended its decision all the way to the Supreme Court in the Brand X decision – the broadband Internet market, in fact, has become increasingly competitive, with facilities-based competition evolving over multiple platforms as the Commission envisioned. Now, I understand that Professor Crawford wrote an entire book, Captive Audience, in an attempt to demonstrate that cable operators have a “monopoly” in the provision of Internet service because, in her view, only they can provide the speed of 100 Mbps that she claims qualifies as high-speed (or “high-enough” speed) broadband.

Recently, Chairman Wheeler gave a speech in somewhat the same vein. He acknowledged that 80% of American households have access to a broadband connection that delivers a speed of 25 Mbps or better, and that a majority of households have access to a speed of 100 Mbps. Then, remarkably, he suggested it is “unacceptable” that 40% presently do not have access to 100 Mbps.

Of course, we all want to see deliverable speeds continue to improve as they steadily have improved over the past decade. But it is wrong – and it leads to the wrong policy prescriptions – to suggest that the “market” is uncompetitive by defining market parameters in a Crawford-like way that necessarily excludes alternative service providers that satisfy consumer demand at prices consumers are willing to pay. In his speech, Chairman Wheeler did something like this by concluding that wireless is just not a “full substitute” for fixed broadband – this despite accumulating evidence to the contrary. Indeed, three of the four major wireless providers in the U.S. already offer average actual speeds of over 30 Mbps, and 91.6% of the U.S. population has access to three or more wireless Internet providers. But if “full substitute” is taken to mean that, in every case and at all times, wireless will satisfy the demands of all consumers, then this is just a mistaken attempt at unsupportable market definition narrowing.

It is wrong to ignore the remarkable progress in broadband that American consumers have enjoyed since 2002 when the Commission adopted the minimal regulatory broadband regime, which has, for the most part, prevailed since then. It is wrong to suggest market definitions that do not comport with the way consumers see the available choices for services they demand.

Indeed, perhaps recognizing, at least sub silentio, that claims that the broadband market is uncompetitive are wrong, the FCC is proposing to impose new net neutrality regulations without requiring any showing of market failure or consumer harm resulting from existing Internet provider practices.

Even though I once thought the notion of imposing the Title II “utility model” on Internet providers was unthinkable, most unfortunately, it is now thinkable. And, even though Chairman Wheeler and his two Democrat colleagues will say they are acting in the name of consumers, and in conformance with the wishes of “consumer advocates,” I am convinced such action will harm consumers and diminish overall consumer welfare.

In order to avoid the unthinkable, it will be necessary for Chairman Wheeler very shortly to begin to mount a vigorous principled defense of his proposal to adopt a “commercial reasonableness” standard for assessing the lawfulness of Internet provider practices. As I have stated here many times, in light of the lack of evidence of present market failure or consumer harm, the preferred course at this time is for the Commission not to adopt any new net neutrality regulations. (The transparency regulation remains in effect, and it is a useful consumer protection measure.) But assuming there is a Commission majority for adopting additional regulatory mandates, from a consumer welfare standpoint, the “commercial reasonableness” proposal under Section 706 is superior to adoption of the Title II utility model.

As for consumer welfare, which ought to be the Commission’s lodestar, I want to end on this point, one I made in remarks during the FCC’s first Open Internet Roundtable and in this Free State Foundation Perspectives, “Net Neutrality v. Consumers.” The most vocal Title II advocates, including those in the Roundtable in which I participated, Public Knowledge’s Michael Weinberg and Stanford’s Professor Barbara van Schewick, insist that new so-called “zero-rating” wireless plans, such as those introduced by Sprint and T-Mobile, must be considered discriminatory and, therefore, unlawful under the net neutrality regime they advocate. Essentially, these plans, in one way or another, limit consumers’ access to the entire Internet in exchange for offering a lower price for access, or they prefer some sites over others for purposes of avoiding data charges. You can read the details of the plans in my “Net Neutrality v. Consumers” piece. I agree with the Title II advocates that these plans are based on a form of “discrimination,” as they use the term, because the plans do not treat all bits in a completely “neutral” fashion. So they claim such plans are inconsistent with an “Open Internet.”

I maintain that plans such as those offered by T-Mobile and Sprint are attractive to consumers, especially low-income and minority consumers. Indeed, I am confident that if consumers are asked, “If an Open Internet is interpreted to mean that plans like T-Mobile’s and Sprint’s must be withdrawn, do you favor an Open Internet?” the vast majority of consumers would say “no.” This is a much different, but much more meaningful – and much more honest – question to ask than “Do you favor an Open Internet?”

As far as I know, unlike the Title II advocates, neither Chairman Wheeler nor Commissioners Clyburn and Rosenworcel have yet taken the position that, in their view, “zero-rating” plans like T-Mobile’s and Sprint’s harm consumers. But as they seemingly go further down the road toward adopting the utility model for the Internet, including for wireless Internet providers, they should ask themselves, and then tell the rest of us, whether they agree that those plans, and similar ones, should be banned as discriminatory and inconsistent with an Open Internet. Because if they do think so, then I don’t think they will find themselves on the side of the majority of consumers.

So, it comes to this: At least under the multi-factored “commercial reasonableness” standard, properly implemented, there would be an opportunity to defend, in a principled way, innovative, consumer-friendly plans. But the Title II advocates will settle for nothing less than rigid interpretations that outlaw any differential treatment of data, regardless of consumer benefits.

If the unthinkable of regulating broadband under the “utility model” is not going to become the reality, it is time for Chairman Wheeler, along with all those on the side of consumers, to make clear the stakes. In 1999, FCC Chairman William Kennard firmly rejected the notion of dumping the “whole morass of regulation” of the utility model on the cable pipe. He concluded: “This is not good for America.”

Given that competition in the broadband Internet marketplace is indisputably more robust today than in 1999, what would not have been good for America in 1999 would certainly not be good for America in 2014.

* Randolph J. May is President of the Free State Foundation, an independent, nonpartisan, free market-oriented think tank located in Rockville, Maryland.

A PDF of this Perspectives may be accessed here.

[Originally published at Free State Foundation]

Categories: On the Blog

Penguins Pray for Global Warming

September 29, 2014, 2:47 PM

For decades, climate alarmists have been attempting to trigger global cooling by killing industry with carbon taxes and absorbing solar energy with windmills, solar panels and wood-fired power stations.

It seems they may have succeeded as Antarctica now has an expanding fringe of sea ice which recently reached an all-time high. 

Where are the environmentalists raising alarms about pooped penguins at the edge of survival, trudging extra mile after mile across the frozen Antarctic sea ice to reach open water? 

And who is preparing to protect the native penguins of Chile when feral Emperor penguins start walking onto Cape Horn? 

Seems like the Greens have goofed again?

Categories: On the Blog

Evaluating the Title II Rainbow of Proposals for the FCC to Go Nuclear

September 29, 2014, 2:40 PM

While proposing to follow the D.C. Circuit Court’s roadmap in Verizon v. FCC to create a legal FCC regulatory framework for the Internet Age under the FCC’s 706 authorities, the FCC also invited proposals to potentially subject broadband to Title II common carrier utility regulation.

The FCC’s invitation has prompted a “rainbow of policy and legal proposals” that would explore “new ideas for protecting and promoting the open Internet” by imposing Title II telecommunications regulation on America’s Internet infrastructure.

This analysis will review and debunk what the FCC has suggested are the main Title II proposals at this time from: Public Knowledge, AOL, Rep. Eshoo/Silicon Valley, Professor Tim Wu, and Mozilla.

By way of background, the Title II policy debate is about who makes Internet infrastructure decisions, the businesses which have long done so, or government regulators; and the debate is also about who pays for it.

In a nutshell, Title II is legacy 1934 telephone monopoly utility regulation where Federal and State regulators, not business owners, make every substantive decision for the business: i.e. rates, terms, conditions, profit, quality, technology, services, devices, and what can be built when and where.

Hence Title II is oft-considered the regulatory “nuclear option” because its purpose is to destroy the current user-centric, Title I information services regulatory foundation, on which most everything about the American Internet today is built, in order to replace it with an obsolete Title II FCC-centric/empowering regulatory regime from 1934.

Title II is also considered the “nuclear option” because it would destroy much of the value of the nation’s $1.2 trillion in private investment in America’s wireline, wireless and satellite Internet infrastructure, and because the “fallout” from Title II reclassification could make the sector “radioactive” to future private investment.

Importantly, the common purpose behind all the Title II proposals is giving the FCC the regulatory power to impose a “zero-price” for downstream traffic via common carrier price regulation. Title II advocates all agree that only the user should have to pay for bandwidth, and that large edge companies should not have to pay anything for their outsized bandwidth consumption of video streaming because the user requested the video streams.

Behind the smokescreen of the “fast lane-slow lane” FCC debate is a largely hidden policy debate over who pays to fund America’s Internet infrastructure – everyone who benefits from it or only Internet consumers. Title II advocates want Internet consumers to subsidize Internet content producers without being transparent to consumers.

The “rainbow” of Title II “nuclear option” proposals range from maximal 1934 telephone monopoly utility regulation, to a variety of different tactical, partial or targeted Title II regulation.

The targeted Title II advocates imagine they can detonate a regulatory “tactical nuke” that can maximally-regulate ISPs with no fallout risk or harm to edge content producers, consumers, or the virtuous innovation cycle, just like policy makers imagined that they could use a “tactical nuke” in the Cold War that would only kill people, not destroy buildings, and not trigger the mutual-assured-destruction of nuclear war.

Full “Nuclear Option” Proposal from Public Knowledge: Public Knowledge’s FCC filing proposes full reclassification of broadband access, wireline and wireless, as a Title II common carrier service, without a presumption of forbearance.

Its argument is that the FCC got broadband regulation totally wrong and never should have classified broadband access as an information service, because the Internet needs Government public utility regulation, not private-sector facilities-based broadband competition, to well serve Internet users.

Simply, Public Knowledge is calling for the FCC to preemptively blow up America’s entire Internet regulatory foundation, for both the FCC and the FTC, in order to save the Internet from potential harms.

AOL’s Title II “Nuclear Option” Plus Section 706: AOL proposes that the FCC “use the entire legal arsenal available to it” to prevent ISPs from negotiating commercially reasonable payments from large edge content producers, like Netflix has done.

Tactical Title II “Nuclear Option” Proposals:

Rep. Eshoo’s Silicon Valley “Light Touch” Title II Section 202 “Nuclear Option” Proposal: Sensitive to the risk of “radioactive” fallout to Silicon Valley from a full Title II nuclear option, Rep. Eshoo has advocated a “creative pathway” of reclassifying broadband as a 100% common carrier telecommunications service, but then forbearing over time from 99% of it, leaving just the Section 202 nondiscrimination requirement that Silicon Valley most wants.

Apparently Silicon Valley imagines America’s legal code to be a binary series of ones and zeroes where the FCC can reprogram its Title II nuclear device to precisely delete legal code that one doesn’t want.

Obviously Silicon Valley does not understand that it is the regulatory classification of service that is actually “binary,” because a service legally must be either a “one” information service or a “zero” telecommunications service; it can’t be both. And Silicon Valley does not appreciate that FCC forbearance, or “deleting,” is not like efficiently pushing a “delete” key, but is among the most inefficient, convoluted, and uncertain of FCC tools and processes.

Wu-Narenchania’s “Magical” Title II “Nuclear Option” Proposal: Professor Tim Wu, who coined the term “net neutrality,” told FCC staff in the FCC Chairman’s Office, “We have the magical formula and it’ll solve all your problems,” per the Washington Post. His proposal would be a “surgical” “nuclear strike,” treating Internet backbone downstream/”sender-side” transit to a consumer as a Title II common carrier telecommunications service while treating upstream/”receiver-side” transit to an edge provider as a Title I information service.

The hubris of this particular “nuclear option” is imagining that the FCC can differentiate and nano-regulate commingled best-efforts Internet packet delivery of quadrillions of bits annually by direction. In other words, packet traffic flowing in one direction, say east, would be regulated like a utility, while packet traffic flowing in the other direction, say west, would not.

Ironically, this effort to avoid discrimination and achieve Professor Wu’s zero-price for downstream traffic would do so by discriminating against receiver-side-consumers by forcing them to subsidize FCC-favored, sender-side-edge-producers of content, which would enjoy zero-price delivery of packets.

In the past, the FCC’s implicit subsidy programs have had businesses subsidizing consumers’ services, especially those of rural or disadvantaged consumers. Perversely, all of the Title II nuclear options discussed here are designed to set a zero-price for all downstream traffic, so consumers are forced to subsidize big Silicon Valley content producers like Netflix, Google-YouTube, Amazon, Yahoo, etc.

Amazingly, Professor Wu proposes to impose the Title II “nuclear option” to the Internet backbone, which has never been subject to Title II regulation since the Federal government privatized the Internet backbone two decades ago. Price regulating one direction of this complex omni-directional network of networks could risk screwing up the Internet backbone market with collateral casualties for the whole Internet ecosystem.

Professor Wu’s fantastical proposal is the equivalent of a surgeon imagining he could safely operate on a person’s spinal cord to fix only neural signals coming from the brain to the body and not those going to the brain from the body – with no risk at all to the patient or liability for the FCC “hospital” authorizing the procedure!

Mozilla’s Title II “Nuclear Option” Proposal: When the FCC gives edge content producers an opportunity to ask for whatever they want to take from other people, under the political cover of promoting “innovation,” they naturally get greedy and feel entitled.

No kidding, Mozilla’s proposal actually asks the FCC “to create a new type of service, one that has never before been classified” … for “remote edge providers” … that “works a little like a doorman in a high-end condominium.” … “It would clearly wall off the Internet from the access service” with “protective rules.”

Ironically those interests who have long opposed “walled gardens” for Internet content as antithetical to a free and open Internet, now are proposing being “walled off” from any obligation to pay their fair share of the cost of Internet infrastructure that their special highest-traffic services cause for everyone else in the ecosystem. Even more ironically, Mozilla believes its “high-end” walled garden is entitled to a “doorman.”

Particularly problematic in Mozilla’s petition for “walled garden” special treatment for “remote edge providers,” paid for by consumers, is the absence of any disclosure that most of Mozilla’s revenues for the last several years have come from Google.

Google has paid Mozilla ~$300m a year for the last three years to make Google the default search engine on Mozilla’s Firefox browser. At a minimum it is in the public interest for Mozilla and the FCC to be fully transparent that Mozilla has a very large financial conflict-of-interest in this debate.

The cumulative hypocrisy of Mozilla’s Title II proposal is legion.

In sum, the Title II policy debate is about who makes Internet infrastructure decisions, the businesses which have long done so, or government regulators, and also who pays for it.

It is all about whether or not the FCC will destroy everything about the Internet that was built upon the fundamental assumption of user-centric, light-touch information services regulation, by reclassifying broadband as an FCC-centric common carrier utility telecommunications service in order to impose maximal regulation.

Title II is called the “nuclear option” for a reason – its broad and lasting destructiveness.

Is Congress paying attention?

All of these destructive Title II “nuclear option” proposals are de facto legislative proposals that should be submitted to Congress for its consideration.

It would be wise for the FCC to be very respectful of Congress’ constitutional prerogatives here, given that the FCC is a creature of Congress, not a sovereign power in and of itself as Title II advocates imply.

***

FCC Open Internet Order Series

 

Part 1: The Many Vulnerabilities of an Open Internet [9-24-09]

Part 2: Why FCC proposed net neutrality regs unconstitutional, NPR Online Op-ed [9-24-09]

Part 3: Takeaways from FCC’s Proposed Open Internet Regs [10-22-09]

Part 4: How FCC Regulation Would Change the Internet [10-30-09]

Part 5: Is FCC Declaring ‘Open Season’ on Internet Freedom? [11-17-09]

Part 6: Critical Gaps in FCC’s Proposed Open Internet Regulations [11-30-09]

Part 7: Takeaways from the FCC’s Open Internet Further Inquiry [9-2-10]

Part 8: An FCC “Data-Driven” Double Standard? [10-27-10]

Part 9: Election Takeaways for the FCC [11-3-10]

Part 10: Irony of Little Openness in FCC Open Internet Reg-making [11-19-10]

Part 11: FCC Regulating Internet to Prevent Companies from Regulating Internet [11-22-10]

Part 12: Where is the FCC’s Legitimacy? [11-22-10]

Part 13: Will FCC Preserve or Change the Internet? [12-17-10]

Part 14: FCC Internet Price Regulation & Micro-management? [12-20-10]

Part 15: FCC Open Internet Decision Take-aways [12-21-10]

Part 16: FCC Defines Broadband Service as “BIAS”-ed [12-22-10]

Part 17: Why FCC’s Net Regs Need Administration/Congressional Regulatory Review [1-3-11]

Part 18: Welcome to the FCC-Centric Internet [1-25-11]

Part 19: FCC’s Net Regs in Conflict with President’s Pledges [1-26-11]

Part 20: Will FCC Respect President’s Call for “Least Burdensome” Regulation? [2-3-11]

Part 21: FCC’s In Search of Relevance in 706 Report [5-23-11]

Part 22: The FCC’s public wireless network blocks lawful Internet traffic [6-13-11]

Part 23: Why FCC Net Neutrality Regs Are So Vulnerable [9-8-11]

Part 24: Why Verizon Wins Appeal of FCC’s Net Regs [9-30-11]

Part 25: Supreme Court likely to leash FCC to the law [10-10-12]

Part 26: What Court Data Roaming Decision Means for FCC Open Internet Order [12-4-12]

Part 27: Oops! Crawford’s Model Broadband Nation, Korea, Opposes Net Neutrality [2-26-13]

Part 28: Little Impact on FCC Open Internet Order from SCOTUS Chevron Decision [5-21-13]

Part 29: More Legal Trouble for FCC’s Open Internet Order & Net Neutrality [6-2-13]

Part 30: U.S. Competition Beats EU Regulation in Broadband Race [6-21-13]

Part 31: Defending Google Fiber’s Reasonable Network Management [7-30-13]

Part 32: Capricious Net Neutrality Charges [8-7-13]

Part 33: Why FCC won’t pass Appeals Court’s oral exam [9-2-13]

Part 34: 5 BIG Implications from Court Signals on Net Neutrality – A Special Report [9-13-13]

Part 35: Dial-up Rules for the Broadband Age? My Daily Caller Op-ed Rebutting Marvin Ammori’s [11-6-13]

Part 36: Nattering Net Neutrality Nonsense Over AT&T’s Sponsored Data Offering [1-6-14]

Part 37: Is Net Neutrality Trying to Mutate into an Economic Entitlement? [1-12-14]

Part 38: Why Professor Crawford Has Title II Reclassification All Wrong [1-16-14]

Part 39: Title II Reclassification Would Violate President’s Executive Order [1-22-14]

Part 40: The Narrowing Net Neutrality Dispute [2-24-14]

Part 41: FCC’s Open Internet Order Do-over – Key Going Forward Takeaways [3-5-14]

Part 42: Net Neutrality is about Consumer Benefit not Corporate Welfare for Netflix [3-21-14]

Part 43: The Multi-speed Internet is Getting More Faster Speeds [4-28-14]

Part 44: Reality Check on the Electoral Politics of Net Neutrality [5-2-14]

Part 45: The “Aristechracy” Demands Consumers Subsidize Their Net Neutrality Free Lunch [5-8-14]

Part 46: Read AT&T’s Filing that Totally Debunks Title II Reclassification [5-9-14]

Part 47: Statement on FCC Open Internet NPRM [5-15-14]

Part 48: Net Neutrality Rhetoric: “Believe it or not!” [5-16-14]

Part 49: Top Ten Reasons Broadband Internet is not a Public Utility [5-20-14]

Part 50: Top Ten Reasons to Oppose Broadband Utility Regulation [5-28-14]

Part 51: Google’s Title II Broadband Utility Regulation Risks [6-3-14]

Part 52:  Exposing Netflix’ Biggest Net Neutrality Deceptions [6-5-14]

Part 53: Silicon Valley Naïve on Broadband Regulation (3 min video) [6-15-14]

Part 54: FCC’s Netflix Internet Peering Inquiry – Top Ten Questions [6-17-14]

Part 55: Interconnection is Different for Internet than Railroads or Electricity [6-26-14]

Part 56: Top Ten Failures of FCC Title II Utility Regulation [7-7-14]

Part 57: NetCompetition Statement & Comments on FCC Open Internet Order Remand [7-11-14]

Part 58: MD Rules Uber is a Common Carrier – Will FCC Agree? [8-6-14]

Part 59: Internet Peering Doesn’t Need Fixing – NetComp CommActUpdate Submission [8-11-14]

Part 60: Why is Silicon Valley Rebranding/Redefining Net Neutrality?  [9-2-14]

Part 61: the FCC’s Redefinition of Broadband Competition [9-4-14]

Part 62: NetCompetition Comments to FCC Opposing Title II Utility Regulation of Broadband [9-9-14]

Part 63: De-competition De-competition De-competition [9-14-14]

Part 64: The Forgotten Consumer in the Fast Lane Net Neutrality Debate [9-18-14]

Part 65: FTC Implicitly Urges FCC to Not Reclassify Broadband as a Utility [9-23-14]

[Originally published at PrecursorBlog]

Categories: On the Blog

People’s Climate March Wants to Change the System, Not the Weather

September 29, 2014, 2:28 PM

“Extremist voices and groups have hijacked Islam and misappropriated the right to speak on its behalf,” Iyad Ameen Madani, secretary general of the Organization of Islamic Cooperation, told the 25th Session of the Arab Summit earlier this year.

Surely sincere lovers of nature can similarly see that extremists have hijacked the environmental movement, as evidenced by the People’s Climate March last week in New York City and the subsequent UN Climate Summit.

The People’s Climate March had little to do with the climate. The eco-extremists want to “change the system.”

While reported numbers vary, hundreds of thousands of people clogged (and littered) the streets of New York City, with solidarity events held elsewhere around the globe. The parade had grand marshals such as actors Leonardo DiCaprio and Mark Ruffalo, and politicos such as Al Gore and Robert Kennedy, Jr.

It also had an assortment of anti-Americans and anti-capitalists. Human Events described the menagerie this way: “If you’re in favor of totalitarian power, sympathetic to America’s enemies, dubious about representative democracy, hostile to free markets, or you just get turned on by fantasizing about violent revolution, there was a place for you at this march.”

Marchers carried a banner stating: “Capitalism is the disease, socialism is the cure.” Other signs read: “Capitalism is killing the planet. Fight for a socialist future.”

Hydraulic fracturing, uniquely responsible for U.S. carbon dioxide emissions dropping to their lowest level in 20 years, came under special attack: “Make fracking a crime.” Marchers held signs saying: “Fracking = Climate Change. Ban fracking now.”

Speaking of crimes, Robert Kennedy, Jr., in an interview at the Climate March, told Climate Depot’s Marc Morano that he wishes there were a law to punish global warming skeptics. Interviews with marchers revealed sentiments ranging from “corporations have to be reined in” to the notion that the marchers are “building a revolution for a whole new society—a new socialist society.”

A man in a cow costume carried a sign reading: “I fart. Therefore, I am the problem.” Bob Linden, host of the nationally-syndicated program “Go Vegan,” stated: “[I]f 50 to 85 percent of us switch to veganism by 2020, scientists tell us we can save the planet from climate change.”

Unfortunately, you won’t see any of this in the mainstream media. The New York Times slide show of the event features a pictorial display of flower wreaths, children, and happy dancers.

In a piece titled: “Rockets Red Glare Distract Nation from UN Climate Summit and Import of Global Climate Protests,” the Huffington Post laments that “the critically important UN Climate Summit in New York has had to compete on mainstream media with the far more dramatic war coverage.” It continues that “the climate’s fate is far more important to the world even than the desperately needed military campaign by the U.S. and its allies to eradicate barbaric ISIL terrorists from Syria and Iraq.”

The new war in Iraq and Syria, waged by Islamic extremists, centers on hate for all things Western and a desire to change systems of government to an Islamic caliphate. The People’s Climate March also centers on hate and a desire to change the government.

One description of the March said: “These people are defined by what they hate, and a big part of what they hate is capitalism.”

During a panel discussion held in conjunction with the March, a questioner wondered aloud to Naomi Klein, author of This Changes Everything: Capitalism vs. The Climate: “Even if the climate change issue did not exist, you would be calling for the same structural changes.” Her answer: “Yeah.”

Every Muslim isn’t a terrorist and every person who cares about the planet isn’t an eco-extremist. But just as ISIS changed America’s view, the Climate March made clear that extremist voices have hijacked the environmental movement.

National Geographic summed up the March this way: “Despite all the enthusiasm displayed in New York and elsewhere on a muggy September Sunday, public opinion polls consistently show that climate change does not rank as a high priority for most Americans.”

Americans are smarter than the collection of anti-capitalist satellite groups think. They’ve seen through the rhetoric and realize, as the Climate March made clear, that it is not about climate change, it is about system change.

The author of Energy Freedom, Marita Noon serves as the executive director for Energy Makes America Great Inc. and the companion educational organization, the Citizens’ Alliance for Responsible Energy (CARE).

[Originally published at Breitbart]

Categories: On the Blog

America’s Densest Cities

September 29, 2014, 1:47 PM

There is a general perception that the densest US cities are in the Northeast, where downtowns tend to be bigger and inner city densities are higher. However, cities have become much larger geographically, and also include the automobile oriented lower density suburbs that have developed since World War II. In fact, most of the densest major urban areas are in the West.

Since 1950, each decennial census of the United States has defined urban areas, or, areas of continuous urbanization. Urban areas include core cities (municipalities, such as the city of New York or the city of Boston) as well as adjacent suburbs.

Urban areas do not correspond to city limits or jurisdiction borders. They are composed of small census districts that average fewer than 50 residents and can cross state lines. Metropolitan areas, which are often wrongly used interchangeably with urban areas, are based on county boundaries and always contain rural areas. So, metropolitan area densities are a useless statistic for urban density analysis.

New York and Los Angeles

This article ranks the densities of the largest urban areas (cities) in the nation’s 52 metropolitan areas with more than 1,000,000 population.

With the largest population, New York is America’s ultimate city. More than 18 million people live in the urban area. The New York urban area covers more land area than any other urban area in the world. It stretches far beyond City Hall in Manhattan, 50 miles west to Hackettstown, New Jersey, 90 miles east to Sag Harbor on Long Island, 55 miles north to Dutchess County, New York, and 80 miles south to Ocean County, New Jersey. The New York urban area is geographically bigger than Delaware and Rhode Island combined. Nonetheless, New York has fallen behind Los Angeles, San Francisco, and San Jose in urban density.

Despite its international reputation for endless urban sprawl, the densest major city is Los Angeles. Los Angeles covers one-half the land area of New York, with two-thirds the population (12.2 million). With an area of 1,736 square miles, Los Angeles has an urban density of 6,999 per square mile. The urban core of Los Angeles is much less dense than New York, but the suburbs (where most people live) are twice as dense.

Six urban areas are geographically larger than Los Angeles: New York, Atlanta (2,645 square miles), Chicago (2,443), Boston (1,873), Philadelphia (1,981), and Dallas-Fort Worth (1,779). Among these, Boston has a reputation for its high-density urban core. But because of its very low density suburbs, Boston is less than one-third as dense as Los Angeles and less dense than cities perceived to have lower density, such as Phoenix and Houston. (For complete information on urban area, core municipality, and suburban densities, see here.)
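To make the density arithmetic behind these rankings concrete, here is a minimal Python sketch. It uses the rounded figures cited above (12.2 million residents and 1,736 square miles for Los Angeles), so the result is approximate; the article’s 6,999 per square mile reflects the unrounded Census count.

    # Urban density = population / land area, in residents per square mile.
    # Inputs are the rounded figures cited in this article, so the output is approximate.

    def urban_density(population: int, land_area_sq_mi: float) -> float:
        """Residents per square mile of urbanized land."""
        return population / land_area_sq_mi

    la = urban_density(12_200_000, 1_736)
    print(f"Los Angeles: ~{la:,.0f} residents per square mile")
    # Prints ~7,028 with the rounded inputs; the unrounded Census figures give ~6,999.
    # The same formula shows why a geographically larger urban area is not necessarily
    # denser: Atlanta spreads its population over 2,645 square miles versus 1,736 for LA.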

Balance of the Top Ten

San Francisco is the second densest city, at 6,266 per square mile. San Francisco has a dense urban core like New York, but more of San Francisco looks like Los Angeles than like New York. Its suburbs are 50 percent denser than those of New York.

San Jose ranks third, with an urban density of 5,820. Yet, even with virtually no pre-automobile urban core, San Jose is more dense than New York. This is because its all-suburban urban form is dense enough to erase the effect of New York’s hyper dense urban core.

Las Vegas ranks fifth (after New York), with a density of 4,525. Las Vegas was too small to be a metropolitan area in 1950, and like San Jose is composed of virtually all post-war suburban development.

Miami ranks sixth, at 4,238, with a higher core density and higher density suburbs.

The next three positions are occupied by #7 San Diego (4,037), #8 Salt Lake City (3,675) and #9 Sacramento (3,660). In each case, these cities have denser suburbs than average, which is the principal reason for their strong rankings.

New Orleans is the 10th densest city, which represents a substantial decline from 2000. Before Hurricane Katrina (2005), New Orleans ranked fifth in the nation, at 5,096. Its 2010 density (3,579) was a full 30 percent lower than in 2000.
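As a quick check on the “full 30 percent” figure, the same kind of back-of-the-envelope calculation can be run on the two densities quoted above:

    # New Orleans urban density, residents per square mile, as quoted above.
    density_2000 = 5_096
    density_2010 = 3_579

    decline = (density_2000 - density_2010) / density_2000
    print(f"Decline from 2000 to 2010: {decline:.0%}")  # prints 30%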

The Bottom Ten

The bottom ten includes seven southern cities, two from the Northeast and one from the Midwest. Their densities range from 1,414 in Birmingham to 2,031 in Grand Rapids. The bottom ten also includes Charlotte, Atlanta, Raleigh, Nashville, Hartford, Pittsburgh, Richmond, and Jacksonville.

Interestingly, the Hartford metropolitan area has the highest gross domestic product (GDP) per capita in the world, according to data in the Brookings Global Metro Monitor, which is counter to the perception that associates stronger economic performance with higher urban densities.

Density Goes West

Thus, only one Northeastern city (New York) ranks in the top ten, and none are from the Midwest. Seven of the top ten are in the West, with five in California and two in the Intermountain West. The other two are from the South. However, two western cities with among the strongest urban containment (densification) policies, Seattle and Portland, are not among the top ten in density.

Since World War II, nearly all of the nation’s urban growth has been in suburban areas. Most of this growth has occurred in the West and South, rather than in the Northeast and the Midwest (North Central). The growing western suburbs developed at higher densities. The combination of these factors accounts for the higher urban densities in the West (Table 2). The effect of the much higher densities of urban cores in the Northeast is offset by the denser suburbs of the West. Indeed, the suburbs of the West are denser than all but seven of the 52 major urban areas.

[Originally published at Huffington Post]

 

Categories: On the Blog

Repeal The Oil Export Ban

September 29, 2014, 8:36 AM

Thanks mainly to the shale revolution, oil production in the U.S. hit a 28-year high last month while imports were at their lowest levels since 1995. Consequently, prices have fallen 15% since June, and Saudi Arabia has cut production by 400,000 barrels a day — providing further evidence that OPEC no longer has the power to set prices.

Against these developments, the current ban on exporting American oil is nonsensical.

Even the liberal Brookings Institution, in a recent study, concludes that it’s time to remove the ban, arguing that the more we export, the greater the expected decline in gasoline prices, perhaps as much as 12 cents per gallon. “As counter-intuitive as it may seem, lifting the ban actually lowers gasoline prices by increasing the total amount of crude supply.”

In addition, the study identifies a number of other economic benefits from exporting oil, including higher GDP and lower unemployment.

Some politicians and pundits claim that exporting oil will divert us from the path toward “energy independence.” Others argue that exporting oil will weaken our energy security since we’re still a net importer. Still others claim that keeping domestic oil at home will help lower gasoline and diesel prices.

All of these arguments are baseless. Currently, we lead the world in the output of natural gas, nuclear power and renewables. We’re still No. 3 in oil production, but the International Energy Agency projects that within a few years America will reclaim the No. 1 ranking. In short, we’re already energy independent.

As for energy security, it’s hard to envision a political scenario that would result in our inability to import oil.

Over the past two years we’ve seen political unrest in Iraq, Libya, Bahrain, Syria and other petroleum exporting countries, but oil prices have actually fallen. Most of the oil we import today comes from friendly nations like Canada and Mexico. OPEC now accounts for less than 10 percent of U.S. consumption, with half coming from Saudi Arabia to supply its huge refinery in Port Arthur.

What’s more, we still have the Strategic Petroleum Reserve in the unlikely event of a political conflict that could disrupt global oil movements.

And because the price of oil is determined (more or less) by global supply and demand, keeping U.S. oil in the U.S. will not confer any benefits to consumers. On the other hand, exporting some of our oil can help sustain the energy boom that has created hundreds of thousands of jobs in recent years against the backdrop of a less-than-robust economic recovery from the Great Recession.

Obviously, changing the laws that banned the export of crude oil in the aftermath of the mid-1970s energy crisis will not be easy, for two reasons.

First, politicians, the media and the public must recognize that oil is simply a commodity. Just as we export rice and wheat at the same time we import rice and wheat, there’s no reason we shouldn’t do the same with oil.

Second, most mainstream environmental groups oppose oil exports for the same reason they oppose natural gas exports, offshore drilling and the Keystone XL pipeline. To them, any of these developments will bring about more fossil fuel production and more fossil fuel consumption. That’s bad for the planet, end of story. But they exist outside of reality.

If it makes economic and logistical sense to export some grades of oil, such as light sweet crude where we have a supply glut from the Eagle Ford shale in Texas, we should do so. And if it makes economic and logistical sense to import oil, such as diluted bitumen from the Alberta Oil Sands to feed into Gulf Coast refineries that are designed to process heavy crude, we should do so as well.

America is an energy-rich country, the richest in the world. We need to stop acting as though we’re energy poor.

• Weinstein is associate director of the Maguire Energy Institute and an adjunct professor of business economics in the Cox School of Business at Southern Methodist University.

[Originally published at Investors.com]

Categories: On the Blog

Notes From TPPF’s Climate and Energy Summit

September 29, 2014, 12:26 AM

I spent last Thursday and Friday at the Texas Public Policy Foundation’s Energy & Climate Policy Summit.  The location was great and the people intelligent.

I must admit, I’ve become a bit jaded over the years, having attended so many energy and/or climate conferences (as a speaker, moderator, and attendee), at most of which I learn little that I didn’t already know.  The speakers were known quantities, and they were preaching to the choir. This conference was an exception: though the choir was in attendance and I knew (or knew of) many of the speakers, I learned something new from almost every one of them.  It was a very informative conference, providing me with a lot of speakers for future podcasts and papers to be covered in The Heartland Institute’s various publications.

We opened with a luncheon at which scientist, author, and now Lord Matt Ridley spoke.  He is a powerful speaker, and he discussed the critical role of coal and other fossil fuels in historical economic progress and prosperity.

The first panel discussed the state of climate change science.  Climate scientists Roy Spencer and Judith Curry (who has a new paper out) both hammered on the climate models and their overstatement of climate sensitivity.  A scientist from NASA, Hal Dorian, then presented a new model developed by retired NASA scientists that shows climate change forecasts to be alarmist in the extreme. To his credit, Zong-Liang Yang played Daniel in the lions’ den by defending the global warming orthodoxy as maintained by the IPCC.

The second panel addressed the nightmare of current and future climate regulation.  It went through the history of how the Supreme Court landed us in the position of having to defend against EPA climate regulations, despite the fact that, as Marlo Lewis pointed out, Congress declined to pass climate-related bills 692 times between the 101st and 111th Congresses.

Mike Nasi gave a frightening presentation on the costs of climate regulations in terms of the energy sources they would shut down.  His talk ended optimistically, however, as he has come to the same conclusion I reached long ago.  In the end, reality will stymie these climate schemes because Americans want their energy, or more accurately, their cars and lights and air conditioners and refrigerators, and they want them to run on command, or nearly constantly, as need be.  Thus, when power gets scarce (climate regulations can do a lot of damage before the lights start to flicker), citizens will demand change — or make change themselves by removing politicians.

Other sessions covered the history, politics, and economics of climate change; the failures of alternatives to fossil fuels; and, refreshingly for me as an ethicist, a serious discussion of the moral case for fossil fuels and against energy poverty (a theme that carried over into the closing lunch).  All too often, climate and energy conferences focus on science disputes or cost calculations and ignore the very real pain that climate policies cause, and the immorality of climate prescriptions for energy poverty and centralized control of the economy.

This last session included four great presentations, including a talk by TPPF’s own Kathleen Hartnett White and, most interestingly, a speech by Caleb Rossiter, a liberal/progressive who has lost friends and a job because he rejects climate alarmism.  He’s a brave and honest man.

The featured speaker at the Thursday night dinner was Texas Governor Rick Perry.  He detailed Texas’s trials and travails with the EPA, and how and why our state (yes, I’m a proud 5th generation Texan) has led the way on energy production and in the fight against overweening federal interference with state affairs.

If this conference were a movie, I’d give it two thumbs up and four stars.

Categories: On the Blog

5 Reasons Leonardo DiCaprio Can Stop Worrying About the Climate

September 28, 2014, 3:21 PM

Leonardo DiCaprio last weekend participated in the “People’s Climate March” in New York City and followed it up with an address to the United Nations. He’s got that kind of access now that he’s been appointed the United Nations’ latest “Messenger for Peace.”

If you have not seen Leo’s speech, it’s quite a remarkable dramatic performance – and we should expect nothing less from the star of “Critters 3” and other fine films. Catch the video and run-down of Leo’s speech at Newsbusters. The bearded actor (and we can only hope that face sweater is for an upcoming role, and not a style choice) offered a new wrinkle outside the usual list of doom and gloom.

“To be clear, this is not about just telling people to change their light bulbs or to buy a hybrid car,” DiCaprio said. “This disaster has grown beyond the choices that individuals make. This is now about our industries and governments around the world taking decisive, large-scale action.”

That is an awfully convenient stance for a guy who owns at least four homes, took a private jet to New York, arrived at the rally in a limo and likes to party on an eight-story, 500-foot-long yacht that he rents from … wait for it … an oil-soaked Arab billionaire. I kid you not.

Leo may be part of the 0.01 percent, but he’s also in the minority of only 20 percent of Americans who think “the debate is over” about human-caused global warming. Most Americans rightly find themselves in the global warming “skeptic” camp — despite decades of propaganda by the media, public schools, and Hollywood actors like Leo saying human activity has caused a climate crisis.

To ease Leo’s mind, here are five reasons why he and the rest of us need not be so worried about the climate – let alone take “decisive, large-scale action” that will make life miserable for the other 99.98 percent.

  • Global Warming Stopped in 1997

Global temperatures rose through most of the 20th century, about 0.9 degrees Celsius. But for nearly the last 18 years, global surface temperatures have flatlined. In fact, some satellite measurements have even indicated a slight cooling trend. This has happened despite humans spewing out more than 100 billion tons of carbon dioxide into the atmosphere since 2000. To put that figure in perspective, humans have emitted roughly 400 billion tons of CO2 into the atmosphere since 1750. So a quarter of all human emissions since the start of the Industrial Revolution have occurred this century. And yet … no warming since “Titanic” came out. This is good news, Leo! You can take your private jet from LA to New York even if all you want is a slice of pizza.
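The “quarter of all human emissions” point follows directly from the two cumulative totals cited above; here is a quick Python check using the article’s own rounded numbers:

    # Cumulative CO2 emission totals as cited in this article (billions of tons, rounded).
    emitted_since_1750 = 400   # total human emissions since the Industrial Revolution
    emitted_since_2000 = 100   # emissions during the flat-temperature period described above

    share_this_century = emitted_since_2000 / emitted_since_1750
    print(f"Share of all human emissions occurring since 2000: {share_this_century:.0%}")
    # Prints 25% -- the "quarter" referred to above.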

  • Extreme Weather Events Have Actually Decreased

Al Gore promised in his Oscar-winning 2006 film “An Inconvenient Truth” that the earth was going to experience a sharp increase in severe weather because of man-caused global warming. So … have we experienced more frequent and violent tornadoes in the U.S.? Nope. Indeed, the number of powerful tornadoes has declined since the 1970s peak. Each passing day also sets a new record for the longest stretch without a Category 3 or stronger hurricane hitting the U.S. And how about wildfires? Those are burning bigger and hotter every year, consuming more and more acreage, right? Again, no.

Remember these facts the next time severe weather does strike somewhere. That’s not happening because man angered the weather gods. It’s happening because it happens … and less frequently now than is normal.

  • Sea-Level Rise is Not Accelerating

The seas began to rise at the beginning of the end of the last Ice Age about 20,000 years ago. At its peak – when the many-miles-thick glaciers that covered a lot of the Northern Hemisphere were melting – sea-level rise was about 10 mm per year. Since we don’t have nearly as much ice to melt today (thank goodness), sea-level rise is not going to exceed that pace – let alone make coastal cities uninhabitable, as Gore and DiCaprio often say. Indeed, the pace of sea-level rise has been about 1 mm per year for most of the last century, and it is not accelerating.
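To put the two quoted rates side by side, here is a minimal sketch converting millimeters per year into rise per century (the rates are the ones cited above; the conversion is simple multiplication):

    # Convert a sea-level rise rate (mm per year) into total rise over a century.
    MM_PER_INCH = 25.4

    def rise_per_century(rate_mm_per_year: float) -> tuple[float, float]:
        """Return (millimeters, inches) of rise over 100 years at a constant rate."""
        mm = rate_mm_per_year * 100
        return mm, mm / MM_PER_INCH

    for label, rate in [("Peak of the last deglaciation", 10.0), ("Most of the last century", 1.0)]:
        mm, inches = rise_per_century(rate)
        print(f"{label}: {rate} mm/yr -> {mm:.0f} mm (~{inches:.0f} in) per century")
    # ~1,000 mm (~39 in) per century at the deglaciation peak versus
    # ~100 mm (~4 in) per century at the rate quoted for the last century.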

Fun fact: Sea levels were actually higher than today in recorded history.

  • The Ice Caps Are Fine

The amount of ice on the earth’s poles has decreased dramatically since the last ice age ended, but not all that much since. Humans were not able to measure polar ice levels from space until the 1970s, when the first satellites to observe the poles went into orbit. So when you hear that 2012 marked the lowest level of Arctic ice “ever recorded,” that actually means “since ‘Dirty Harry’ was in theaters.”

In fact, 2014 is turning out to be a “recovery summer” for Arctic ice, up 43 percent from the recorded low of 2012. Gore, you might remember, predicted this would be the year we’d see our first ice-free Arctic. But Gore gets a lot of things wrong, so that’s hardly surprising.

Meanwhile, down at the South Pole, there’s even better news for ice fans. Antarctic polar ice extent keeps setting new records. Will someone think of the penguins! Now they have to walk farther to reach the water to feed.

  • Carbon Dioxide Is Not a Pollutant, but is Good for Plants and Animals

This is a radical, but correct supposition: Carbon dioxide is good for the planet.

First of all, CO2 is not a pollutant. What you see in most of those photos of smokestacks in magazines and newspapers is steam released after most of the harmful particulates have been filtered out. Carbon dioxide, which is in that steam, is harmless to humans — but it is what plants need to keep greening the planet.

As we all learned in 8th grade, CO2 is also what makes plants grow. Back in the early ’90s, Sting (who was at the NYC climate march with Leo) got some attention for singing about how we need to save the forests. Satellite data shows that the earth has actually been increasing the density of its forest cover for the last 30 years, and counting. Nice work, Sting!

The CO2 level in the atmosphere today is about 400 ppm, give or take. That puts the earth in the “safe zone” for keeping agriculture thriving. At 150 ppm, plants start dying. Human CO2 emissions are actually helping plant life and agriculture thrive — a great boon to humanity, especially in the developing world. Dr. Patrick Moore, one of the founding members of Greenpeace, explains this fact well (especially for laymen) during this presentation at The Heartland Institute’s latest climate conference.

One more fact about CO2 levels: They were higher than 2000 ppm during past ice ages.

I don’t expect to see any of these inconvenient facts in Leo’s latest horror flick “Carbon,” to be ignored by the masses in a theater or website near you. He’s in the make-believe business. The rest of us should stay grounded in reality.

[First published at Hollywood in Toto.]

Categories: On the Blog