On the Blog

EPA Pesticide Bans Threaten You and the Economy

Somewhat Reasonable - August 22, 2014, 9:46 AM

When Rachel Carson’s book “Silent Spring” was published, filled with totally false claims about DDT, the Environmental Protection Agency reviewed it and concluded she had used manipulated data. The agency concluded that DDT should not be banned, but its first administrator, William Ruckelshaus, overruled it and imposed a ban.

Ruckelshaus was a lawyer, not a scientist. He was also politically connected enough to hold a variety of government positions. He got the nod for the EPA job from John Mitchell, Nixon’s Attorney General who later went to jail for his participation in the Watergate cover-up.

Wikipedia says, “With the formation of EPA, authority over pesticides was transferred to it from the Department of Agriculture. The fledgling EPA’s first order of business was whether to issue a ban of DDT. Judge Edmund Sweeney was appointed to examine the case and held testimony hearings for seven months. His conclusion was that DDT “is not a carcinogenic hazard to man” and that “there is a present need for the essential uses of DDT”. However, Ruckelshaus (who had not attended the hearings or read the report himself) overruled Sweeney’s decision and issued the ban nevertheless, claiming that DDT was a ‘potential human carcinogen.’” In 2008, having returned to the practice of law, he endorsed Barack Obama.

I cite this history from the 1970s because most people believe that the EPA operates on the basis of science and, from the beginning, that could hardly have been less true. It has evolved over the years into a totally rogue government agency issuing thousands of regulations with the intent to control virtually every aspect of life in America, from agriculture to manufacturing, and, in the case of pesticides, seeking to ban them all, always claiming that it is protecting public health.

Not killing pests such as insects and rodents is a great way to put everyone’s health in jeopardy. In May, New York City announced a new war against rats and will spend $600,000 to hire new inspectors to deal with an increased population. Lyme disease and West Nile fever are just two of the diseases that require serious insect pest control. A wide variety of pests spread many diseases, from salmonella to hantavirus. Termites do billions of dollars in property damage every year.

Thanks to the EPA ban on DDT and the nations that followed the U.S. action, an estimated 60 million people have died from malaria since 1970, because DDT was and is the most effective way to control the mosquitoes that spread the disease, particularly in Africa. In the West, malaria had been eliminated before the ban thanks to the use of DDT.

In the 1980s I worked with the company that produced an extraordinary pesticide, Ficam, which was applied with nothing more than water. Although it had gone through the costly process of securing EPA registration, the agency told the manufacturer it would have to do so again. Because the cost of re-registration could not be justified, the product was taken off the market in the USA, but it continues to be used successfully for malaria control in more than sixteen nations in Sub-Saharan Africa and against the spread of Chagas, a tropical parasitic disease, in Latin American nations. Ficam can be used to control a wide variety of insect pests. But not in the USA.

In 2000, the EPA, during the Clinton-Gore administration, announced “a major step to improve safety for all Americans from the health risks posed by pesticides. We are eliminating virtually all home and garden use of Dursban—the most used household pesticide in the United States.” It was widely used because it did a great job of controlling a wide variety of insect pests, but the EPA preferred the pests to the human species it allegedly was “protecting.” The ban was directed against chlorpyrifos, which the EPA noted was “the most commonly used pesticide in homes, buildings, and schools.” It was used in some 800 pest control products.

Recently I have been receiving notices from Friends of the Earth (FOE) announcing “a new effort to help save bees.” “We need to ban bee-killing pesticides now!” says one of their emails, claiming that “A growing body of science shows that neonicotinoid (neonic) pesticides are a key contributor to bee declines.” This is an outright lie. As always, FOE’s claims are accompanied by a request for a donation.

Dr. Henry I. Miller, a physician and molecular biologist, a fellow at Stanford University’s Hoover Institution, was a founding director of the FDA’s Office of Biotechnology. Recently he disputed the White House’s creation of a Pollinator Health Task Force and a directive to the EPA to “assess the effect of pesticides, including neonicotinoids, on bee and other pollinator health and take action, as appropriate.” This is the next step—a totally political one—that would deny farmers the use of one of the most important pesticides for protecting crops. “This would have disastrous effects on modern farming and food prices,” warns Dr. Miller.

“Crafted to target pests that destroy crops, while minimizing toxicity to other species, neonics,” said Dr. Miller, “are much safer for humans and other vertebrates than previous pesticides…there is only circumstantial or flawed experimental evidence of harm to bees by neonics.”

“The reality is that honeybee populations are not in decline,” noted Dr. Miller, citing U.N. Food and Agriculture Organization statistics. If anything is affecting bee populations worldwide, it is the increasingly cold weather that has been occurring for the past 17 years as the result of a natural cooling cycle driven by reduced solar radiation. The other threat to bees is Varroa mites and the “lethal viruses they vector into bee colonies.”

“A ban on neonics would not benefit bees, because they are not the chief source of bee health problems today.”

But the Friends of the Earth, who are no friends of the humans who live on it, want to ban neonics, and it is clear that the White House and the EPA are gearing up to do just that. A ban would, for example, induce a major reduction in crops such as Florida’s citrus, an industry plagued by the Asian citrus psyllid, an insect that spreads a devastating disease of citrus trees. Other food crops are similarly affected by insect pests, and the end result of a ban would be severe damage to the U.S. economy.

In every way possible the environmentalists—Greens—continue to attack the nation’s and the world’s food supply and the result of that will kill off a lot of humans. The EPA’s pesticide bans are not about protecting health. They are an insidious way of increasing sickness from an ancient enemy of mankind, insect and rodent pests.

© Alan Caruba, 2014

Categories: On the Blog

The New York Times Has Zero Idea How the Internet Works – Or Is Lying Its Masthead Off

Somewhat Reasonable - August 22, 2014, 9:34 AM

It takes a special man to cram so much wrong into a mere 342 words.  Or an Old Grey Lady.

The New York Times’ utterly ridiculous Editorial Board recently, as one, addressed Title II Internet regulatory reclassification and Network Neutrality – and it did so in utterly ridiculous fashion.

They either have absolutely no idea what any of this is – or they are lying through their printing presses.

The Times calls for the federal government to illegally commandeer control of the entirety of the World Wide Web – so as to then impose Net Neutrality.  Guess with whom they are in agreement?

The hardcore Media Marxist Left wants President (Barack) Obama’s Federal Communications Commission to unilaterally change – for the worse – how the government regulates the Internet. Which would be an egregious violation of existing law – the 1996 Telecommunications Act.

This law classified the Internet as Title I – a very light-touch regulatory regime. As happens when the government largely leaves something alone, the Internet has become a free speech, free market Xanadu. Arguably no endeavor in human history has grown so big, so well, so fast.

If ever there was an example of “if it ain’t broke, don’t fix it” – this is it. Yet the perpetually broken government is listening to these Leftist loons – and considering the move to Title II.

Title II is the uber-regulatory superstructure with which we have strangled landline phones – you know, that bastion of technological and economic innovation. Which do you find more impressive – your desktop dialer or your iPhone?

Title II regulations date back to the 1930s – so you know they’ll be a perfect fit for the ultra-modern, incredibly dynamic, expanding-like-the-universe World Wide Web.

This would be the most detrimental of all Information Superhighway road blocks. Rather than the omni-directional, on-the-fly innovation that now constantly occurs, Title II is a Mother-May-I-Innovate, top-down traffic congest-er.

Imagine taking a 16-lane Autobahn down to just a grass shoulder.

The Times’ editorial wrongness begins with its title.

President Obama: No Internet Fast Lanes

There will be no “fast lanes.”  There will be what there have been since just about the Internet’s inception – innovative ways to make uber-bandwidth hogs, like video merchants, easier to deliver.  Which keeps the traffic for everyone flowing smoothly.

The Web would have long ago ground to a halt had not these innovations been developed and continuously enhanced.  The bandwidth hogs are looking to have the government mandate that the Internet Service Providers (ISPs) build, maintain and grow them – and give the hogs free, unlimited access.

Guess who would then get to pick up that gi-normous, ever-growing, ongoing tab?  Hint: You saw him or her this morning brushing your teeth.

The Times then gets the first half of their very first sentence wrong.

The Federal Communications Commission, which could soon allow phone and cable companies to block or interfere with Internet content,….

Actually, ISPs have always and forever been able to do that.  But they haven’t.  Why?  Because they are in the customer service business – if they intentionally fail to service their customers, they will no longer have customers.

It’s called the free market, Times.  You should look into it – instead of looking to end it.

And there are already existing laws and an existing government entity – the Federal Trade Commission (FTC) – to address this if it ever does happen.  Which it won’t.

More Times inanity:

The F.C.C. is trying to decide whether telecommunications companies should be able to strike deals with powerful firms like Netflix and Amazon for faster delivery of videos and other data to consumers.

As Amazon grew and their package tally exponentially increased, it didn’t demand the various delivery services keep their shipping rates exactly the same.  That would be absurd.

And I’m sure the government-run Postal Service would have been very accommodating of that request.

Small and young businesses will not be able to compete against established companies if they have to pay fees to telephone and cable companies to get content to users in a timely manner.

Small and young businesses don’t and won’t have to pay – because they are but a blip on the Internet radar screen. Netflix and Google’s YouTube are at peak times more than half of all U.S. Internet traffic; they should pay a little something, you know, for the effort.

If you leave the grocery store with twenty steaks, you pay more than if you walk around a little and leave with nothing – is that so complicated?

Tom Wheeler, the chairman of the F.C.C. who was appointed by Mr. Obama, has proposed troubling rules that would allow cable and phone firms to enter into specials with companies like Facebook and Google as long as the contracts are “commercially reasonable.”

Apparently it is, in fact, too complicated for the Times. Which wants to mandate grocery stores allow someone to fill up an eighteen-wheeler with steaks – and pay the store nothing. Which isn’t “commercially reasonable” – it is supermarket death by government.

Which is exactly what the Media Marxists want for the Internet.

“(T)he ultimate goal is to get rid of the media capitalists in the phone and cable companies and to divest them from control.”

How very Hugo Chavez of them.  And, of course, the New York Times.

[Originally published at NewsBusters]


Don’t Tax the Internet into Oblivion

Somewhat Reasonable - August 21, 2014, 9:56 AM


Most of us who have grown up with the Internet have watched it do amazing work. Companies that are idea-driven and whose sources of income come from “users” have even started going public with major IPOs. Whatever has been going on seems to be working. Freedom surges online; anyone can start a Facebook page, a Tumblr blog, a PayPal account, a Twitter account, or a YouTube account and gain millions of followers with as little capital as a phone or a PC. Even the much-ridiculed Justin Bieber was discovered by Usher on YouTube, and the young star is now worth $200 million—all in just seven short years.

For much of the time that this extreme growth occurred, the government was careful not to hamper innovation. The recent debate has been about the Internet Tax Freedom Act (ITFA), which is set to expire on November 1st. First enacted in 1998, it has been renewed three times, preventing state and local taxation of Internet access and electronic commerce. It also allows the seven states that were taxing Internet access prior to the law to be grandfathered in so they can continue to do so.

It seems at first glance that the federal government should not be telling states and municipalities what they can and can’t tax, but undue taxes naturally hinder the flow of the marketplace. Looking at current cell phone taxes provides some insight into how Internet access taxes could look if the law expires. The national average cell phone tax rate is over 16.3%! My $80/month “unlimited everything” plan from Sprint sounds good in a Spotify ad or on a billboard, but it’s deceptive. It’s actually closer to $100 with all the extra taxes. A similar phenomenon could happen and put accessing the Internet out of reach for people who can barely afford it now.
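The arithmetic is easy to check. A minimal sketch, using the 16.3% average rate and the $80 plan price from this article (any flat per-line surcharges a real carrier adds on top are an extra assumption):

```python
# Estimate the real monthly cost of a phone plan once taxes are added.
# The 16.3% average tax rate and the $80/month plan come from the article;
# flat per-line fees vary by carrier and are left as a separate input.

def taxed_cost(base_price, tax_rate, flat_fees=0.0):
    """Return the out-the-door monthly cost."""
    return base_price * (1 + tax_rate) + flat_fees

advertised = 80.00  # "unlimited everything" sticker price
with_taxes = taxed_cost(advertised, 0.163)
print(f"${with_taxes:.2f}")  # $93.04 before any flat surcharges
```

The average rate alone pushes the bill past $93; carrier-specific flat fees make up the rest of the gap toward $100.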

A bipartisan group in the Senate has been working on a proposal called the Marketplace and Internet Tax Fairness Act, which would ban Internet access taxes for another 10 years, but allow states to collect sales taxes from out-of-state companies, and still allow grandfathered states to charge Internet access taxes. It is unclear whether or not sales taxes from Internet purchases would also be up for expiration after 10 years like the Internet access tax ban would be, but it seems unlikely.

Since the Internet itself has no one “location,” it would be difficult to create a simple set of tax rules for items bought and sold. Rather than make it complex and add to the mix of confusing tax policies that already dominate American life, we should continue to shop and sell unhindered by government interference. Because the Internet can facilitate private marketplace transactions easily and efficiently, profits can be spent on investments that can lead humanity further into the future. Imagine if the government taxed the Internet. It could tax individual apps, how much data you use, the words you type in an email or a blog… Considering there are state taxes that apply to altered bagels in New York, toilet-flushing in Maryland, and holiday decorations in Texas, it isn’t far-fetched that certain states would come up with odd ways to suck as much money as they could out of the Internet. And for counties and cities that have home rule, and thus their own taxes, multiple taxes by multiple bodies of government could begin appearing.

This proposal is ludicrous, especially since we still operate under federalism. If the Internet is inherently borderless and knows no location, then it should not have any state or local tax applied to its use. Imagine the arguments between states over who should be able to tax what. Should the states that house the servers for the seller’s website get any of the revenue? How about the state where the seller lives? What about the state that the seller’s items get shipped out of? These questions, along with many other variables, show why Internet sales taxes are not a workable option.

When taxes are first explained to us—probably when most of us are young—they are made to sound all well and good. “Taxes are used for roads and schools and to defend our country!” We repeatedly hear these statements throughout our lives. However, as we grow up, we realize how wasteful governments of all sizes are. The surplus of Social Security income is not kept in a nice “pot” for us all to go back and draw from (plus, it’s a forced retirement plan), and the tolls we pay in most states (with exorbitant fines if you don’t pay, often 1,000% or more of the original toll)—contrary to popular belief—do not all go to repairing and building highways. The documented waste, tax increases, and new tax proposals keep growing and growing while the good service we’re promised for our schools, our roads, and our defense continually seems neglected. Do not add Internet taxes of any kind—sales or access—to this list; they would only fuel the government fire of irresponsibility and waste for decades to come.


Debunking Consumerist’s Bogus Claim That Mobile Data Does Not Compete with Cable

Somewhat Reasonable - August 21, 2014, 6:13 AM

Pro-regulation interests often resort to highly misleading arguments to advance their cause. Fortunately that kind of deception ultimately exposes the weakness of their underlying argument and public policy position.

To promote Netflix’s “strong” version of net neutrality regulation and to oppose the Comcast-TWC acquisition, Consumerist just framed a very deceptive whopper of a competition argument: “Comcast says mobile data is competitive, but it costs $2k to stream Breaking Bad over LTE.”

Consumerist: “Since Netflix is the driver of so much internet traffic, and the center of so many of the conversations around home broadband, TV seemed to be the way to go. The question we decided to answer is: How much will you pay for the data it takes to watch the entire series run of Breaking Bad in one month?” Consumerist’s contrived calculation was $1,200 – $2,200 for a billing cycle.
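For context, the back-of-envelope arithmetic behind a number like that is easy to reproduce. A sketch under stated assumptions (the 62-episode series length is factual; the per-hour data rate and per-GB overage price are illustrative guesses, not Consumerist’s published inputs):

```python
# Rough cost of streaming the entire run of Breaking Bad over LTE at
# metered overage rates. 62 episodes is the actual series length; ~47
# minutes per episode, ~3 GB per hour of HD video, and $10-$15/GB
# overage pricing are illustrative assumptions.

EPISODES = 62
MINUTES_PER_EPISODE = 47
GB_PER_HOUR = 3.0

hours = EPISODES * MINUTES_PER_EPISODE / 60   # about 49 hours of video
total_gb = hours * GB_PER_HOUR                # about 146 GB of data

for per_gb in (10, 15):
    # lands roughly in the $1,200 - $2,200 range Consumerist reported
    print(f"${total_gb * per_gb:,.0f} at ${per_gb}/GB")
```

The point is not the precise figure but that metered LTE overage pricing, applied to a month of binge-watching, mechanically produces four-figure bills.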

Consumerist cynically uses a classic deceptive straw man argument, hoping that most people will not catch its bogus premise: that Comcast does not face broadband competition from four national mobile LTE OTT competitors because it is exceptionally more expensive to binge-watch premium video programming on mobile LTE plans than it is on cable.

Let’s deconstruct this clearly unreasonable straw man argument.

First, no good deed goes unpunished. Consumerist turns the great benefit of Comcast-TWC’s high-bandwidth/usage broadband offerings that enable binge-watching of premium programming, into a problem!

Second, one can easily buy all seasons of Breaking Bad on DVD at Best Buy and other retail outlets, or on iTunes and Amazon.

Third, one can binge-watch Breaking Bad on cable or DBS — with or without a DVR. These technologies are designed, in infrastructure and economic model, to enable mass binge-watching economically.

Fourth, almost no one uses mobile LTE the way Consumerist’s straw man assumes. That’s because people routinely use free WiFi to watch video on their LTE phones. Most LTE providers enable and encourage offloading high-bandwidth video content like Breaking Bad onto WiFi to manage data usage more economically. In addition, cable broadband providers offer their subscribers free WiFi in hundreds of thousands of spots in America, so they could binge-watch in those widely-available locations if they wanted to.

Fifth, in what common sense world is it bad for people to binge-watch video programming on technologies actually designed for viewing mass volumes of video by millions of people? Any engineer will tell Consumerist that distributing large volumes of video programming daily to many millions of people is highly economic and efficient via over-the-air broadcast, cable, or DBS technologies.

Sixth, Consumerist’s straw-man argument rests on an even more bogus straw-man core-assumption: competitors must have near identical offerings to consumers in order to be considered competitive substitutes. That is not a consumer-focused view of competition, because consumers know they have a diversity of wants, needs, demands, and means that need to be matched with a diversity of technologies, infrastructures, services, amounts, and prices. Diversity of choice, availability of innovative offerings, and robust investment are all hallmarks of dynamic market competition, and which are all generously present in America’s world-leading broadband and video distribution markets.

Lastly, Consumerist is implicitly promoting the Netflix argument for maximal broadband regulation and blocking the Comcast-TWC acquisition by assuming that the high-end markets that offer the highest prices for the most cutting-edge or market-leading services can be separated from the underlying basic mass market — for antitrust purposes. The fallacy here is similar to thinking that the market for luxury cars is separate from the market for non-luxury cars, when the distinction of which car or feature counts as a “luxury” is highly fluid, with easily-disputed market boundaries, because products and features in high-end markets change rapidly.

In sum, Netflix, Consumerist, and other broadband-regulation maximalists apparently need to resort to deceptive straw-man arguments to try to somehow justify their extreme position for maximal broadband regulation and blocking the Comcast-TWC merger.

Importantly, one of the things that makes the FCC an expert agency is that it can see through fallacious straw-man arguments, and FCC-reviewing courts certainly can as well.

[First published at the Precursor blog.]


Obama, ISIS, and Being on the Right Side of History Between Tee Times

Somewhat Reasonable - August 21, 2014, 12:44 AM

President Obama on Wednesday slightly delayed his afternoon tee time to speak about the monstrous beheading of American journalist James Foley by ISIS. It was an underwhelming address from the Leader of the Free World who finds the crown so heavy and bothersome that he puts it down aside the putting green.

In his address, Obama did well in the “sympathy-in-chief” role. I do believe that Obama is horrified and saddened, as all Americans are, about the tragic fate of James Foley. But Obama failed in his actual job — that of a leader who must express genuine and righteous anger about this act of barbarism against all people who cherish liberty.

Obama has displayed more passion and employed sharper rhetoric when talking about Republicans in Congress — who, last I heard, are not in the business of sawing off heads to make their point clear. Maybe we’ll get a better performance from our president if ISIS makes fun of the Obamacare website.

Read the whole transcript of Obama’s remarks here, but this is the excerpt that matters to me:

People like this ultimately fail. They fail because the future is won by those who build and not destroy. The world is shaped by people like Jim Foley and the overwhelming majority of humanity who are appalled by those who killed him.

Obama’s phrasing — “people like this ultimately fail” — is passive and weak. It’s akin to Obama’s frequent rhetorical tic about anyone in America who opposes his agenda being on the “wrong side of history.” It’s a throw-away line. It’s meaningless, especially from him. Our semi-retired president just doesn’t get it.

An ideology, a movement, or a nation ultimately fails because someone put them on the wrong side of history. The history-writers are the ideological victors — almost always via war. The people of those nations sacrificed many lives and much treasure to present the “right side of history,” to ensure that “people like this ultimately fail.” At the end of the 20th Century, a history in favor of liberty was written by the West, the inheritors of the Enlightenment. Despite Obama’s rhetoric, such results did not, and will not, happen passively — and certainly not because The One merely states it.

It took the West’s leadership and action to ensure the Nazis would “ultimately fail.” It took the West’s leadership and action to ensure Soviet Communism would “ultimately fail.” It was the West’s reluctance to seek total victory in Korea that allowed the Kim clan to write their own “right side of history,” which has starved and enslaved millions of innocents. The Democrats’ betrayal of their South Vietnamese allies allowed the murderous communists to write their own “right side of history.” So far, at least, many oppressors and murderers in the world — for decades — have not been counted among those who will “ultimately fail.”

And if the West doesn’t fully rise to the challenge of Islamic Fascism, centuries of Enlightenment progress for the betterment of liberty will be wiped away in a new history written by these Islamist Fascist monsters.


Don’t Fear the Data-Reaper?

Somewhat Reasonable - August 20, 2014, 3:47 PM

Two articles today show how the Internet economy tends to be like the overall economy but much, much faster. Innovation is faster, the rise of new companies is faster, and the maturing and death of those firms are likewise faster than in the industrial and service sectors that preceded the Internet economy and remain in place beside it.

The incredibly rapid rise of sales of smartphone apps, for example, has topped out and is likely headed for a precipitous decline, the Financial Times reports:

Almost a third of smartphone users do not download any apps for their devices in a typical month, according to a report by Deloitte that predicts the volume of app store sales is hitting a ceiling.

The average number of apps downloaded on a monthly basis has decreased considerably in 2014, the firm found in a survey of people in the UK. As smartphones saturate mobile markets in the US and Europe, developers must rely on customers continuing to download new apps for their businesses to grow.

A variety of problems—including concentration of sales among a few companies and the difficulty of marketing apps in an overcrowded market—indicate that the smartphone app sector has matured and is heading down:

The number of smartphone users who do not download any apps has reached 31 per cent, a steep increase from less than a fifth in a similar survey last year. For those that have, the mean number of apps downloaded has fallen to just 1.82, from 2.32 last year.

“Each additional new smartphone [owner] has less inclination to download apps, either out of apathy or, at a more global level, affordability,” Mr Lee said.

Identifying an example of a Web phenomenon that is currently on a rapid rise and hastening the pace of change in the web sector, Gene Marks notes in Forbes that a new web-based “sharing economy” is already having a huge effect and is poised to grow much bigger and have a significantly greater effect on our lives, for both good and ill:

[T]his sharing economy is going to change the world. It’s not cars or tasks or rooms or groceries. It’s data. Today’s cloud based software companies are building enormous troves of data. And the smarter ones are doing this because they see the future. And their future is sharing.

Your location is being tracked. Your purchase history is being stored. Your user profile has been collected. You are prompted to save your passwords. You are asked to confirm your personal details. You must submit an email address. You are required to provide your mother’s maiden name. Every hour more bits of data about you are being gathered, stored, categorized, and archived.

For the forward-thinking software service, it’s not just about user licenses. It’s not just about selling boxes or books or tablets or shoes. It’s not merely a mobile app that lets you buy concert tickets, listen to music, or take a note. It’s about the data that’s being collected. It’s why Facebook purchased WhatsApp and why Amazon recently announced it was going into the mobile payments business to compete with the likes of Square and PayPal. The current fees from these services are not what’s important. In the long term, the data is what’s important.

Noting that we are currently in only “the very early days of the data sharing economy,” Marks tells readers they shouldn’t worry about this accumulation of information about them:

We are entering a world where the trillions of terabytes of data collected by software companies will soon be put to beneficial use. This world will make them a lot of money. And in return, the world will be a better place for you and me.

Companies will use this information, Marks says, to make life easier for you, and the fact that they will make money from the process should not bother us, he argues. There is some merit to that. After all, that’s what businesses do: give you things you want in exchange for things they want (usually money); it’s a mutually beneficial process. As an example of how it works, Marks quotes David Barrett, CEO of Expensify (a mobile expense reporting service):

“When you forward a travel itinerary to us, we identify it as such and build a travel profile with flight status updates and other features. With this we know not just your past preferences, but your future plans.” . . .

“Imagine you add in another player to this scenario: the OpenTable API (Application Programming Interface),” Barrett continued. “With this we could say ‘hey there, it’s dinnertime and you’re in a town you don’t know. There’s a great Thai food restaurant next to your hotel and Bob, another one of our users who traveled to this town, ranked this Thai restaurant 5 stars saying ‘So hot I cried!’ Do you want me to make a reservation?’ Or perhaps with GrubHub, we just order your favorite dish and have it waiting for you at the hotel. Or add in the Uber API: ‘I see you just landed, would you like a ride to the hotel? Or maybe a detour to this great Thai restaurant first?’”

“Then imagine you added a simple star-rating and review system onto the expense when you submit it. Now we have a business-travel focused Yelp, except “authenticated” via the credit card purchase (e.g. no reviewing someplace you didn’t go) and “weighted” by how much you spent there (someone who spends $100 should be more trusted than someone who spent $10).”
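The “weighted” rating Barrett describes reduces to a simple computation. A minimal sketch (the function name and data shape are illustrative, not an actual Expensify API):

```python
# Spend-weighted average star rating, per Barrett's example: a reviewer
# who spent $100 counts ten times as much as one who spent $10, and every
# review is tied to a verified credit card purchase.

def weighted_rating(reviews):
    """reviews: list of (stars, dollars_spent) pairs from verified purchases."""
    total_spent = sum(spent for _, spent in reviews)
    if total_spent == 0:
        return None  # no verified spending, nothing to trust
    return sum(stars * spent for stars, spent in reviews) / total_spent

reviews = [(5, 100.0), (2, 10.0)]  # big spender loved it; small spender didn't
print(round(weighted_rating(reviews), 2))  # 4.73: the $100 diner dominates
```

Tying each review to a purchase amount is what “authenticates” and “weights” the rating in his scenario: a review with no verified spend simply carries no weight.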

Later in the article, Marks quotes Barrett extending the example further:

“Want to get really crazy? How about: ‘Hey, this is a bit weird, but there’s another business traveler nearby in town for a bit who loves Thai food as much as you do, and I see from her calendar that she’s free; would you like to meet up at Bob’s favorite Thai place for some curry? If this isn’t your thing, let me know and I’ll never mention it again.’”

As that somewhat tongue-in-cheek but ultimately serious and plausible example shows, the personal-data-sharing economy is beginning to hit its economic prime years, as the technology could indeed be powerfully useful to consumers. We’ve gotten used to amazon.com and other online retailers making recommendations based on our past buying and browsing habits, and there is nothing different in kind about the advanced sort of data-sharing Marks and Barrett describe.

There is a big difference, however, between this kind of economy and conventional economic transactions. In the latter, the exchange is a straightforward trade of goods and services for money, which is merely a means of purchasing the former. In a personal-data-sharing economy, the exchange is, on the surface, strictly of information: the consumer lets the business collect his data, and the business then sells that data to other businesses or uses it itself to generate more business from that consumer and others. The consumer then gets information—about restaurants, transportation options, and even potential companions—in return for letting their data be mined.

The exchange, however, seems to give a much greater amount of power to the business than to the consumer, as is evident in the example of the dating recommendation cited above. When strangers are able to make dating recommendations without even being asked, they clearly have a good deal more power than the individual consumer in such a transaction. Such power could, as Marks notes, do much good. I am far from convinced, however, that most independent-minded people would prefer to live in such a world.

It appears, of course, that we won’t have much choice in the matter. Nonetheless, the example of the smartphone apps, noted above, may provide some hope for those who prefer privacy over convenience. In addition, widespread problems tend to bring forth commercially available solutions, which seems likely to happen in regard to the personal-data-sharing economy. To paraphrase Mark Twain, the reports of the death of privacy may have been greatly exaggerated.

 

Categories: On the Blog

Digital Learning Makes Rewards Fun, Effective

Somewhat Reasonable - August 20, 2014, 9:26 AM

[NOTE: The following is excerpted from a chapter of the next Heartland Institute book titled Rewards: How to use rewards to help children learn — and why teachers don’t use them well.  Read the first part of this series here. This piece was first published at The American Thinker.]

Children today are much more comfortable using information technology than are those of previous generations.  Many grow up playing video games offering strong visual and audio stimulation, instant feedback on decisions, and nonfinancial rewards for achievement, such as winning competitions, accumulating points, and being able to move to the next level of a game.  The popularity of such games confirms what parents and good teachers know instinctively: children can acquire knowledge and learn new skills at seemingly phenomenal speeds when they are fully engaged in the learning experience.

Technology applied to learning, also known as digital learning or online adaptive instruction, has vast potential to transform schooling.  Either by itself or “blended” with traditional classroom teaching, digital learning is building a record of results substantially superior to traditional teaching and potentially far cheaper when used on a large scale.

Online adaptive instruction can provide in one package the goals, activities, tests, and incentives needed to accelerate student learning.  Students receive feedback as they move through a set of activities that the program customizes to their individual abilities.  Many programs utilize algorithms grounded in psychological research on common errors students have made in face-to-face settings.  Such research makes it possible to offer detailed cues for what to do next and prompt the user to move on to more difficult levels, or to repeat a lesson, perhaps from another perspective, when appropriate.
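The advance-or-repeat logic described above can be sketched very simply. The thresholds and action labels below are invented for illustration; real adaptive programs use far richer models of student errors:

```python
# Toy sketch of adaptive lesson progression: advance on a strong
# score, repeat on a middling one, re-teach from another angle on a
# weak one. The thresholds are illustrative assumptions.

def next_step(level, score, pass_mark=0.8, retry_mark=0.5):
    """Return (next_level, action) for a lesson score in [0, 1]."""
    if score >= pass_mark:
        return level + 1, "advance"           # move to harder material
    if score >= retry_mark:
        return level, "repeat"                # same lesson again
    return level, "repeat_alternate"          # another perspective

print(next_step(3, 0.9))  # (4, 'advance')
print(next_step(3, 0.6))  # (3, 'repeat')
print(next_step(3, 0.3))  # (3, 'repeat_alternate')
```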

While there are obstacles to the spread of digital learning, cost is not one of them.  The per-pupil costs of online schooling, which requires fewer teachers, have only recently been compared with those of traditional classroom instruction.  According to a study by the Thomas B. Fordham Institute, full online learning costs on average about $4,300 less per year than traditional schooling, while the blended model saves about $1,100 per student per year [1].  These cost savings are likely to increase over time as the technology improves and as educators gain experience in its use.  Requiring nine rather than twelve years of schooling would reduce costs substantially more.

Best Practices

Digital learning is spreading quickly as parents, students, and educators recognize its transformative potential.  Some obstacles need to be overcome, such as certification requirements that block entry into the teaching profession by talented and motivated individuals, seat-time and class-size requirements that make school schedules rigid and unable to accommodate computer lab sessions, and opposition from teachers’ unions [2].  A rapidly growing community of educators with experience using digital learning tools and literature describing best practices are available to reformers who want to accelerate this progress.

The Digital Learning Council, a nonprofit organization launched in 2010 to integrate current and future technological innovations into public education, has produced a series of publications (all of them available online) to help parents, educators, and policymakers find and use the best practices for digital learning.  The council has proposed “10 Elements of High Quality Digital Learning,” which it describes as “actions that need to be taken by lawmakers and policymakers to foster a high-quality, customized education for all students.  This includes technology-enhanced learning in traditional schools, online and virtual learning, and blended learning that combines online and onsite learning.”

In 2011, the American Legislative Exchange Council (ALEC), a respected membership organization for state legislators, adopted a model resolution endorsing the “ten elements” approach.  In 2012, ALEC created and endorsed model legislation, the Statewide Online Education Act, that provides a detailed template for states to follow to remove roadblocks to expanding digital learning.  The National Conference of State Legislatures (NCSL), another organization of state legislators, also has endorsed expanding the use of digital learning and provides case studies of its successful implementation [3].

The Clayton Christensen Institute for Disruptive Innovation, formerly the Innosight Institute, is another good source of best practices.  The nonprofit think-tank was founded by Harvard professor Clayton M. Christensen, author of the 2008 bestseller Disrupting Class: How Disruptive Innovation Will Change the Way the World Learns.  The organization conducts original research on the cutting edge of digital learning, consults with elected officials, and provides speakers for public events.  Researchers affiliated with the organization have created a “blended-learning taxonomy” that distinguishes among the various ways of blending digital learning with traditional schooling, such as Station Rotation, Lab Rotation, Flipped Classroom, Flex, A La Carte, Enriched Virtual, and Individual Rotation models.

Conclusion

Digital learning – the combination of online adaptive testing and instruction made possible by new technologies, software, and the internet – is beginning to transform K-12 education.  It accelerates learning for a number of reasons, but an important one is because it makes rewards for learning more accurate, timely, and attuned to the interests and abilities of students.  It promises to deliver the “creative destruction” required to substantially improve America’s failing elementary and high-school system.

ClassDojo, Goalbook, and Funnix are three examples of the rapidly growing number of software programs available to educators to bring digital learning into the classroom.  Rocketship Education, Khan Academy, Coursera, and Udacity illustrate the variety of new institutions that are using digital learning to transform traditional teaching methods.  Given the pace at which software is improving and institutions are evolving, these examples may seem out of date in a few years.

Research shows substantial positive achievement effects of online education in pre-internet days and larger effects in recent years.  More advanced technologies used on a much wider scale promise even larger achievement effects, lower costs, and a greater variety of incentives, curricula, and teaching methods from which parents, students, and educators can choose.  Obstacles in the path to increased use of digital learning can be removed by parents and policymakers working together to adopt the policies recommended by pioneering leaders in the field, the Digital Learning Council, and other groups supporting this disruptive innovation – which will likely lead to far more effective education.

Herbert J. Walberg and Joseph L. Bast are chairman and president, respectively, of The Heartland Institute and authors of Rewards: How to use rewards to help children learn – and why teachers don’t use them well (October 1, 2014; ISBN 978-1-934791-38-7).  This article is excerpted from Chapter 10, “Rewards and Digital Learning.”

[First published at the American Thinker.]

Notes

[1] Tamara Butler Battaglino, Matt Haldeman, and Eleanor Laurans, “The Costs of Online Learning,” in Chester E. Finn, Jr. and Daniela R. Fairchild, eds., Education Reform for the Digital Era (Washington, DC: Thomas B. Fordham Institute, 2012), pp. 55–76.

[2] Chester E. Finn, Jr. and Daniela R. Fairchild, “Overcoming the Obstacles to Digital Learning”; Paul T. Hill, “School Finance in the Digital-Learning Era”; and John E. Chubb, “Overcoming the Governance Challenge in K-12 Online Learning,” all in Chester E. Finn, Jr. and Daniela R. Fairchild, eds., ibid., pp. 1–11, 77–98, and 99–134.

[3] Sunny Deyé, “K-12 Online Learning Options,” National Conference of State Legislatures, Legisbriefs 21, no. 16 (April 2013).

Categories: On the Blog

The 20th Was the Failed Welfare State Century – The 21st Must Be About Less Government

Somewhat Reasonable - August 19, 2014, 1:47 PM

The Twentieth was the Century of the Welfare State.  Governments the world over built and then continuously grew their domestic aid money delivery apparatuses.  Tens of trillions of dollars were spent in attempts to raise poor people up and out.

It’s been disastrous.

European Union and the Failed Welfare State

Transforming the Developmental Welfare State in East Asia

More Evidence of the Social Welfare State as a Failure

Much of the planet was dominated by the Soviet Union – whose satellites and clients were just welfare states under the Red umbrella.  Communism is the Welfare State in full bloom – and to say it doesn’t work is the century’s greatest understatement.  One hundred million people died – and billions more lived nasty, brutish and short lives in abject desolation and destitution.

The United States has spent more than $7 trillion on welfare – just in the last fifty years.

The Continuing Failure of America’s Welfare State

Why the Welfare State Is Doomed to Fail

And President Barack Obama has failed utterly to learn the last century’s lessons.  In his mere eight years, he is set to more than double our disastrous outlay.

Obama to Spend $10.3 Trillion on Welfare

Some remain steadfastly impervious to facts.

With this unfathomable amount of money, our poverty level has remained virtually unchanged.  Oh – and we’re now almost $18 trillion in debt.

That’s just domestic.  Nations around the world – led by the U.S. – have spent trillions more making other nations around the world into Welfare States.  Which has for them been just as disastrous as it has been for us domestically.

The Continuing Failure of Foreign Aid

Exploring the Failure of Foreign Aid

Africa has largely become a Welfare Continent.  It has received tons of free coin – and has for the most part been mired in perpetual, dire malaise.

Why Foreign Aid Always Fails in Africa

Time for a Rethink: Why Development Aid for Africa Has Failed

Time for a rethink indeed – not just in Africa, but throughout the world.

U2 singer Bono – a decades-long global Welfare State promoter and believer – certainly thinks so.

“So some of Africa is rising – and some of Africa is stuck.  The question is whether the rising bit will pull the rest of Africa up – or whether the other Africa will weigh the continent down.  Which will it be? 

“The stakes here aren’t just about them.  Imagine for a second this last global recession – but without the economic growth of China and India.  Without the hundreds of millions of newly-minted middle class folks who now buy American and European goods.  Imagine that. Think about the last five years.

“Rock star preaches capitalism.  Shh…wow.  Sometimes I hear myself and I just can’t believe it.  But commerce is real.  What you’re about here – it’s real. 

“Aid is just a stop gap.  Commerce, entrepreneurial capitalism takes more people out of poverty than aid.  Of course we know that.”

I wish we all did.  Global Welfare Statesmen – take note of the Rocker.

Commerce and entrepreneurial capitalism practiced inter-nationally – between nations – is called free trade.  Utterly unfettered by the “assistance” of government.

You know who’s learning it?  More and more of Africa.

Africa Continues to Reap from US-Africa Trade Pact

“Trade lanes in Africa have increased significantly as a result of relieved trade barriers, which have had a positive impact on many local businesses. A key driver of this growth has been the African Growth and Opportunity Act (AGOA), which has stimulated trade and investment between Africa and the United States….

“Africa is the ‘last frontier’, the more we collectively focus on connecting it with the world, the more sustainable its economies will be and the more jobs we will create – creating a virtuous cycle of success.”

Rather than the 20th Century vicious cycle of Welfare State poverty.

Free trade allows peoples everywhere to lead exponentially better lives.

Governments need to do less – less free money-ing, taxing, tariff-ing, subsidizing and protectionism-ing.

And simply get out of the way – and let the magic of the marketplace clean up their messes.

[Originally published at PJ Media]

Categories: On the Blog

Federal Reserve Policies Cause Booms and Busts

Somewhat Reasonable - August 19, 2014, 1:33 PM

Since the economic crisis of 2008-2009, the Federal Reserve – America’s central bank – has expanded the money supply in the banking system by over $4 trillion, and has manipulated key interest rates to keep them so artificially low that when adjusted for price inflation, several of them have been actually negative. We should not be surprised if this is setting the stage for another serious economic crisis down the road.

Back on December 16, 2009, the Federal Reserve Open Market Committee announced that it was planning to maintain the Federal Funds rate – the rate of interest at which banks lend to each other for short periods of time – between zero and a quarter of a percentage point. The Committee said that it would keep interest rates “exceptionally low” for an “extended period of time,” a policy that has continued up to the present.

 

Federal Reserve Policy and Monetary Expansion

Beginning in late 2012, then-Fed Chairman Ben Bernanke announced that the Federal Reserve would continue buying U.S. government securities and mortgage-backed securities, but at an enlarged rate of $85 billion per month, a policy that continued until early 2014.  Since then, under the new Federal Reserve chair, Janet Yellen, the Federal Reserve has been “tapering” off its securities purchases; by July of 2014, they had been reduced to a “mere” $35 billion a month.

In her recent statements, Yellen has insisted that she and the other members of the Federal Reserve Board of Governors, who serve as America’s monetary central planners, are carefully watching macroeconomic indicators to know how to manage the money supply and interest rates to keep the slow general economic recovery going without fear of price inflation.

Some of the significant economic gyrations on the stock markets over the past couple of months have reflected concerns and uncertainties about whether the Fed’s flood of paper money and near zero or negative real interest rates might be coming to an end. In other words, borrowing money to undertake investment projects or to fund stock purchases might actually cost something, rather than seeming to be free.

When the media have not been distracted by the barrage of overseas crises, all of which seem to presume the need for America to play global policeman and financial paymaster to the world at U.S. taxpayers’ expense, the presumption among news pundits and too many economic policy analysts is that the Federal Reserve’s manipulation of interest rates is a good, desirable and necessary responsibility of the central bank.

As a result, virtually all commentaries about the Fed’s announced policies focus on whether it is too soon for the Federal Reserve to raise interest rates given the state of the economy, or whether the Fed should already be raising interest rates to prevent future price inflation.

What is being ignored is the more fundamental question of whether the Fed should be attempting to set or influence interest rates in the market. The presumption is that it is both legitimate and desirable for central banks to manipulate a market price, in this case the price of borrowing and lending. The only disagreements among the analysts and commentators are over whether the central banks should keep interest rates low or nudge them up and if so by how much.


Market-Based Interest Rates Have Work to Do

In the free market, interest rates perform the same functions as all other prices: to provide information to market participants; to serve as an incentive mechanism for buyers and sellers; and to bring market supply and demand into balance. Market prices convey information about what goods consumers want and what it would cost for producers to bring those goods to the market. Market prices serve as an incentive for producers to supply more of a good when the price goes up and to supply less when the price goes down; similarly, a lower or higher price influences consumers to buy more or less of a good. And, finally, the movement of a market price, by stimulating more or less demand and supply, tends to bring the two sides of the market into balance.

Market rates of interest balance the actions and decisions of borrowers (investors) and lenders (savers) just as the prices of shoes, hats, or bananas balance the activities of the suppliers and demanders of those goods. This assures, on the one hand, that resources that are not being used to produce consumer goods are available for future-oriented investment, and, on the other, that investment doesn’t outrun the saved resources available to support it.

Interest rates higher than those that would balance saving with investment stimulate more saving than investors are willing to borrow, and interest rates below that balancing point stimulate more borrowing than savers are willing to supply.

There is one crucial difference, however, between the price of any other good that is pushed below that balancing point and interest rates being set below that point. If the price of hats, for example, is below the balancing point, the result is a shortage; that is, suppliers offer fewer hats than the number consumers are willing to buy at that price. Some consumers, therefore, will have to leave the market disappointed, without a hat in hand.
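The shortage mechanism in the hat example can be made concrete with a toy linear model. The curves and numbers below are invented purely for illustration:

```python
# Toy linear supply and demand curves. A price held below the
# balancing point leaves quantity demanded above quantity supplied:
# a shortage. All numbers are illustrative assumptions.

def demand(price):
    return max(0.0, 100.0 - 10.0 * price)  # buyers want less as price rises

def supply(price):
    return max(0.0, 10.0 * price)          # sellers offer more as price rises

# At the balancing (equilibrium) price, the two sides match.
print(demand(5.0), supply(5.0))            # 50.0 50.0

# Hold the price below that point and demand outruns supply.
capped = 3.0
print(demand(capped) - supply(capped))     # 40.0 unit shortage
```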


Central Bank-Caused Imbalances and Distortions

In contrast, in the market for borrowing and lending the Federal Reserve pushes interest rates below the point at which the market would have set them by increasing the supply of money on the loan market. Even though savers are not willing to supply more of their income for investors to borrow, the central bank provides the required funds by creating them out of thin air and making them available to banks for loans to investors. Investment spending now exceeds the amount of savings available to support the projects undertaken.

Investors who borrow the newly created money spend it to hire or purchase more resources, and their extra spending eventually starts putting upward pressure on prices. At the same time, more resources and workers are attracted to these new investment projects and away from other market activities.

The twin result of the Federal Reserve’s increase in the money supply, which pushes interest rates below that market-balancing point, is an emerging price inflation and an initial investment boom, both of which are unsustainable in the long run. Price inflation is unsustainable because it inescapably reduces the value of the money in everyone’s pockets, and threatens over time to undermine trust in the monetary system.

The boom is unsustainable because the imbalance between savings and investment will eventually necessitate a market correction when it is discovered that the resources available are not enough to produce all the consumer goods people want to buy, as well as all the investment projects borrowers have begun.

Central Banks Produce Booms and Busts

The unsustainability of such a monetary-induced investment boom was shown, once again, to be true in the latest business cycle. Between 2003 and 2008, the Federal Reserve increased the money supply by at least 50 percent. Key interest rates, including the Federal Funds rate and the one-year Treasury yield, were either zero or negative for much of this time when adjusted for inflation. The rate on conventional mortgages, when inflation adjusted, was between two and four percent during this same period.
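“Zero or negative when adjusted for inflation” is just the Fisher approximation: the real interest rate is roughly the nominal rate minus the inflation rate. The figures below are illustrative inputs, not the historical data cited above:

```python
# Fisher approximation: real interest rate = nominal - inflation
# (in percentage points). The inputs below are illustrative only.

def real_rate(nominal, inflation):
    return nominal - inflation

print(real_rate(1.0, 3.0))  # -2.0: a 1% loan with 3% inflation costs less than nothing in real terms
print(real_rate(5.5, 3.0))  # 2.5: a mortgage rate above inflation stays positive
```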

It is no wonder that there emerged the now infamous housing, investment, and consumer credit bubbles that burst in 2008-2009. None of these would have been possible, or sustained for as long as they were, if not for the Fed’s flood of money creation and the resulting zero or negative lending rates when adjusted for inflation.

The monetary expansion and the artificially low interest rates generated wide imbalances between investment and housing borrowing on the one hand and low levels of real savings in the economy on the other. It was inevitable that the reality of scarcity would finally catch up with all these mismatches between market supplies and demands.

This was, of course, exacerbated by the Federal government’s housing market creations, Fannie Mae and Freddie Mac. They opened their financial spigots by buying up or guaranteeing ever more home mortgages issued to a growing number of non-creditworthy borrowers. But the financial institutions that issued and then marketed those dubious mortgages were, themselves, only responding to the perverse incentives that had been created by the Federal Reserve and by Fannie Mae and Freddie Mac.

Why not extend more and more loans to questionable homebuyers when the money to fund them was virtually interest-free thanks to the Federal Reserve? And why not package them together and pass them on to others, when Fannie Mae and Freddie Mac were subsidizing the risk on the basis of the “full faith and credit” of the United States government?


More Monetary Mischief in the Post-Bubble Era

What was the Federal Reserve’s response in the face of the busted bubbles its own policies helped to create? Between September 2008 and June 2014, the monetary base (currency in circulation plus reserves in the banking system) more than quadrupled, from $905 billion to more than $4 trillion. M-2 (currency in circulation plus demand deposits and a variety of savings and time deposits) grew by 35 percent during the same period.

Why haven’t banks lent out more of this huge amount of newly created money, generating a much higher degree of price inflation than has been observed so far? Partly it is because, after the wild bubble years, many financial institutions returned to more traditional creditworthiness benchmarks for extending loans to potential borrowers. This has slowed the approval rate for new loans.

But more importantly, the excess reserves not being lent out by banks are collecting interest from the Federal Reserve. With continuing market uncertainties about government policies concerning environmental regulations, national health care costs, the burden of the Federal debt and other unfunded government liabilities (Social Security and Medicare), as well as other possible political interferences in the marketplace, banks have found it more attractive to be paid interest by the Federal Reserve than to lend money to private borrowers. And considering how far Fed policies have pushed down key market lending rates, leaving those excess reserves idle at the Fed – under first Ben Bernanke and now Janet Yellen – has seemed the more profitable way of using all that lending power.

Even under the heavy-handed intervention of the government, markets are fundamentally resilient institutions that have the capacity to bounce back unless that governmental hand really chokes the competitive and profit-making life out of capitalism. Any real recovery in the private sector will result in increased demands to borrow that would be satisfied by all of that Fed-created funny money currently sitting idle. Once those hundreds of billions of dollars of excess reserves come flooding into the market, price inflation may not be far behind.


Central Banking as the Problem, Not the Solution

At the heart of the problem is the fact that the Federal Reserve’s manipulation of the money supply prevents interest rates from telling the truth: How much are people really choosing to save out of income, and therefore how much of the society’s resources – land, labor, capital – are really available to support sustainable investment activities in the longer run? What is the real cost of borrowing, independent of Fed distortions of interest rates, so businessmen could make realistic and fair estimates about which investment projects might be truly profitable, without the unnecessary risk of being drawn into unsustainable bubble ventures?

Unfortunately, as long as there are central banks, we will be the victims of the monetary central planners who have the monopoly power to control the amount of money and credit in the economy; manipulate interest rates by expanding or contracting bank reserves used for lending purposes; threaten the rollercoaster of business cycle booms and busts; and undermine the soundness of the monetary system through debasement of the currency and price inflation.

Interest rates, like market prices in general, cannot tell the truth about real supply and demand conditions when governments and their central banks prevent them from doing their job. All that governments produce from their interventions, regulations and manipulations is false signals and bad information. And all of us suffer from this abridgement of our right to freedom of speech to talk honestly to each other through the competitive communication of market prices and interest rates, without governments and central banks getting in the way.

 

[Originally published at EpicTimes]

Categories: On the Blog

Greens are the Enemies of Energy

Somewhat Reasonable - August 19, 2014, 1:28 PM

Here in America and elsewhere around the world, Greens continue to war against any energy other than the “renewable” kind, wind and solar, which is more costly and next to useless. Only coal, oil, natural gas, and nuclear keep the modern and developing world functioning and growing.

The most publicized aspect is Obama’s “War on Coal” and, thanks to the Environmental Protection Agency, it has been successful, shutting down several hundred coal-fired plants through costly regulations based on the utterly false claim that carbon dioxide emissions must be reduced to save the Earth from “global warming.”

The EPA is the government’s ultimate enemy of energy, though the Department of the Interior and other elements of the government participate in limiting access to our vast energy reserves and energy use nationwide. By government edict, the incandescent light bulb has been banned. How insane is that?

The Earth has been cooling for seventeen years at this point, but the Greens call this a “pause.” That pause is going to last for many more years and could even become a new ice age.

A study commissioned by the National Association of Manufacturers (NAM) on the impact of the proposed new EPA regulation of emissions found that, as CNSNews reported, it “could be the costliest federal rule by reducing the Gross National Product by $270 billion per year and $3.4 trillion from 2017 to 2040,” adding $2.2 trillion in compliance costs for the same period. Jay Timmons, CEO and president of NAM, said, “This regulation has the capacity to stop the manufacturing comeback in its tracks.”

As Thomas Pyle, the president of the Institute for Energy Research (IER), said in June, “President Obama is delivering on his promise to send electricity prices skyrocketing.” Noting a proposed EPA regulation that would shut more plants, he said, “With this new rule, Americans can expect to pay $200 more each year for their electricity.” Having failed to turn around the nation’s economy halfway into his second term, Obama is adding to the economic burdens of all Americans.

America could literally become energy independent given its vast reserves of energy sources. In the case of coal, the federal government owns 957 billion short tons of coal in the lower 48 states, of which about 550 billion short tons—about 57 percent—are available in the Powder River Basin. That coal is estimated to be worth $22.5 trillion to the U.S. economy, but as the IER notes, its value “remains unrealized due to government barriers on coal production.” It would last 250 years at today’s consumption rates – reserves greater than those of Russia or China. When you add in Alaska, the U.S. has enough coal to last 9,000 years at today’s consumption rates!

In 2013 the IER estimated the worth of the government’s oil and coal technically recoverable resources to the economy to be $128 trillion, about eight times our national debt at the time.

There isn’t a day that goes by that environmental groups such as Friends of the Earth, the Sierra Club, Greenpeace, the Natural Resources Defense Council, and the Union of Concerned Scientists, along with dozens of others, do not speak out against the extraction and use of all forms of energy, calling coal “dirty” and claiming Big Oil is the enemy.

In the 1970s and 1980s, the Greens held off attacking the nuclear industry because it does not produce “greenhouse gas” emissions. Mind you, these gases, primarily carbon dioxide, represent no threat of warming and, indeed, as the main “food” of all vegetation on Earth, more carbon dioxide would be a good thing, increasing crop yields and healthy forests.

Events such as the 1979 partial meltdown at Three Mile Island and the 1986 Chernobyl disaster raised understandable fears. The Greens began opposing nuclear energy, claiming that radiation would kill millions in the event of a meltdown. This simply is not true. Unlike France, which reprocesses spent nuclear fuel, the United States does not; President Carter’s decision not to allow reprocessing proved very detrimental, requiring repositories for large quantities of spent fuel.

To this day, one of the largest, Yucca Mountain Repository, authorized in 1987, is opposed by Greens. Even so, it was approved in 2002 by Congress, but the funding for its development was terminated by the Obama administration in 2011. Today there are only four new nuclear power plants under construction and, in time, all one hundred existing plants will likely be retired starting in the mid-2030s.

The Greens’ attack on coal is based on claims that air quality must be protected, but today’s air quality has been steadily improving for years and new technologies have reduced emissions without the need to impose impossible regulatory standards. As the American Petroleum Institute recently noted, “These standards are not justified from a health perspective because the science is simply not showing a need to reduce ozone levels.”

The new EPA standards are expected to be announced in December. We had better hope that the November midterm elections put enough new candidates into Congress to reject those standards, or the cost of living in America, the capacity to produce electricity, and the construction and expansion of our manufacturing sector will all worsen, putting America on a path to decline.

Categories: On the Blog

How to Talk About Climate Change So People Will Listen: A Skeptic’s View

Somewhat Reasonable - August 18, 2014, 2:06 PM

Joe Bast speaking at the Ninth International Conference on Climate Change in July 2014.

An essay in the current issue of The Atlantic purports to instruct readers on “How to Talk About Climate So People Will Listen.” The author, Charles C. Mann, is a long-time contributor to the magazine who writes about history, tourism, and energy issues. With this article, he tries to cut a path between the two warring tribes in the global warming debate, the Alarmists and the Skeptics.

He fails, rather spectacularly I think.

The first four paragraphs (out of 45) are good, as are a few paragraphs later on about enviro fruitcake Bill McKibben. But the rest of the article simply accepts the dubious and sometimes outrageous assertions and false narratives that gave rise to alarmism in the first place, the same ones skeptics delight in debunking. Surveys show most people know more about global warming than does Mann. If alarmists use this article as their guide to how to talk about the issue, skeptics once again will win most of the debates in bars and around grills this summer.

A Good Start

Mann starts out strong, reporting how the media turned an obscure modeling exercise about the melt rate of the western Antarctic ice shelf into hysterical headlines about coastal flooding. Had he waited a couple weeks, he could have written much the same about “Russian methane holes.” The lesson in both cases is that the mainstream media are utterly unreliable sources of information on the climate issue. They profit from exaggeration, rely on special interests for advertising revenue, and lack expertise to report on science matters.

Sadly, Mann doesn’t appear to have learned this lesson. In the rest of his article he treats mainstream media accounts of the climate debate as dispositive. The public understands this: Surveys show nearly half believe the media exaggerate the climate change problem.

Mann reports, in a single but very nice paragraph, the world’s enormous debt to fossil fuels. The Industrial Revolution, he says, was “driven by the explosive energy of coal, oil, and natural gas”; it “inaugurated an unprecedented three-century wave of prosperity.” One might quibble with his take on this: The improvement in the human condition started before 1800 and was the result of changes in institutions (the arrival of markets, private property, and limited government) and the embrace of new values (the Scottish Enlightenment) as well as the discovery of fossil fuels. Without the first two, the third would have done little more than heat some feudal castles and light some cobblestone streets.

An Important Step?

After this promising start, the errors come fast. “In an important step, the Obama administration announced in June its decision to cut power-plant emissions 30 percent by 2030.” There’s a lot wrong with that single sentence.

The Obama administration can’t cut power-plant emissions, except possibly by turning down the heat in the Oval Office in the winter and the air conditioning in the summer. It can only start rule-making processes that would make it illegal for coal-powered plants to continue to operate, and hope the courts and Congress don’t block or repeal the rules. That’s what it did. Time will tell if emissions fall as a result.

The baseline for the administration’s proposed cut of 30% of carbon dioxide emissions is 2005, nearly 10 years ago. Emissions have already fallen by about 15% since then (depending on who is measuring it), or half the goal. Is it unrealistic to expect a “business as usual” scenario would result in emissions in 2030 being 30 percent lower than they were in 2005?
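The baseline arithmetic in that paragraph is easy to check. A minimal sketch, using the article’s approximate figures (a 30 percent target below 2005 levels and a roughly 15 percent drop already achieved — these are the author’s round numbers, not official EPA data):

```python
# Rough check of the 2005-baseline arithmetic discussed above.
# Figures are the article's approximations, not official EPA data.
target_cut = 0.30    # proposed cut below 2005 emissions, by 2030
achieved_cut = 0.15  # approximate drop in emissions since 2005

remaining = target_cut - achieved_cut       # cut still needed
fraction_done = achieved_cut / target_cut   # share of goal already met

print(f"Remaining cut needed: {remaining:.0%} of 2005 emissions")
print(f"Share of goal already met: {fraction_done:.0%}")
```

On these numbers, half the proposed reduction had already happened before the rule was announced, which is the article’s point.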

Economists and demographers are converging on forecasts of continued “decarbonization” of the U.S. economy as electrification spreads, the service and digital sectors displace old-style manufacturing, economic growth slows, young people stay home or return home and stay longer than before, and an older population grows more sedentary. If so, how is the Obama administration’s proposal “an important step” to anywhere?

And just to pile on for a moment, even the Obama administration admits a reduction of 30% from 2005 levels by 2030 will have no detectable impact on global temperatures. Global warming alarmists admit this and call for reducing global emissions by 80% or more by 2050. Since there is no chance China, India, Canada, Australia, or Russia will reduce their emissions (voluntarily) between now and 2050, U.S. emissions would need to go to zero or even negative to meet that goal. (Negative? Yes… our economy would need to become a net “carbon sink,” sequestering more carbon dioxide than we emit.) How is Obama’s “business as usual” proposal an “important step” toward that goal?

Those Pesky Economists

Mann correctly scolds alarmists for “rhetorical overreach, moral miscalculation, shouting at cross-purposes…,” a “toxic blend” that damages their cause and fuels the skeptic backlash. But then he miscategorizes their opponents as economists, whom he calls “cheerleaders for industrial capitalism.” That line reveals how little Mann knows about public opinion or economics.

Surveys show two-thirds of the American people don’t think global warming is man-made or a serious problem. Are two thirds of the American people economists? Not the last time I checked.

In the national (and global) debate over global warming, economists aren’t prominent, despite some wishing it were otherwise. The skeptics’ strongest weapon isn’t economics, it’s common sense. Temperatures aren’t rising even though carbon dioxide levels are. Reducing our emissions won’t affect climate so long as other nations keep increasing theirs. Some continued warming would produce more benefits than harms. Future generations will be far wealthier than us despite a small increase in temperatures. Each of these common-sense (and true) observations is deadly to the alarmists’ cause.

Everybody knows we reap tremendous benefits from affordable fossil fuels today. You don’t need to be an economist to know that those benefits vastly exceed the benefits, two centuries from now, of slowing the advance of man-made climate change by one degree or two, assuming the alarmists’ dubious predictions are correct.

Mann’s appreciation for fossil fuels, so eloquently expressed in paragraph three, is missing now. He dismisses cost-benefit analysis as having “moral problems” due to the way it handles small risks and long time horizons. That will come as news to all the experts who made careers of conducting cost-benefit analyses on a wide range of programs and challenges. Why is global warming any different?

Politics and Environmental Protection

Mann says global warming legislation no longer wins congressional approval due to a polarization in views over the value of environmental protection that occurred in the 1970s and 1980s. In Mann’s telling of the story, concern for the environment began as a conservative movement, and then businesses “realized that environmental issues had a price tag. Increasingly, they balked. Reflexively, the anticorporate left pivoted; Earth Day, erstwhile snow job, became an opportunity to denounce capitalist greed.”

Some of us who were part of the environmental movement in the 1970s and 1980s saw something different taking place. The great environmental protection legislation of the 1970s passed with nearly unanimous support because the problems were real and begged for national solutions. After early major successes, an iron triangle of bureaucrats, grandstanding politicians, and yellow journalists started a drum-beat for pursuing ever-more stringent emission reductions regardless of their negligible benefits and soaring costs. The consensus that had produced lop-sided votes in favor of the Clean Air and Clean Water Acts disappeared, not because of some kind of “political stasis in the ‘90s,” but because the biggest environmental problems had been solved and further legislation wasn’t needed.

It was at this point, during the 1980s, that liberals (or “progressives”) saw the opportunity and the need to take over the environmental movement and use its members as shock troops in its war on “capitalism.” It was easy, since conservatives and libertarians were stepping down and moving on to organizations created to solve real problems. Many histories of the left’s takeover of the environmental movement have been written. A partial list appears in Jay Lehr’s recent Heartland Policy Brief on “Replacing the Environmental Protection Agency.”

Once in charge of the environmental movement, the left turned its erstwhile members into conscripts much like the others in its army: organized labor, feminists, African Americans, trial lawyers, and gays and lesbians. Donors to the environmental movement – solar and wind entrepreneurs, ethanol producers, lawyers, and billionaire financiers like Tom Steyer – are dunned for contributions to the Democratic Party and its affiliates. Propaganda replaces factual information, hysterical warnings of threats to rights and privileges lead to calls to action and “remember to vote on Tuesday.”

The politicization of the movement is made explicit by the League of Conservation Voters’ annual scorecards, which invariably reward Democrats and punish Republicans. The 2013 National Environmental Scorecard, which it says “represents the consensus of experts from about 20 respected environmental and conservation organizations,” includes this nice tribute to bipartisanship: “The Republican leadership of the U.S. House of Representatives continues to be controlled by Tea Party climate change deniers with an insatiable appetite for attacks on the environment and public health.”

More False Narratives

Mann says “a cap-and-trade mechanism… reduced acid rain at a fraction of the predicted cost; electric bills were barely affected.” Actually, research by energy economist Jim Johnston and others shows the cap-and-trade mechanism played only a minor role in reducing emissions. What drove the reductions while allowing prices to stay low was the opening of inexpensive low-sulfur coal mines in western states.

Mann says, “I remember winters as being colder in my childhood….” The 1970s saw some of the coldest winters in the twentieth century, so it’s no surprise many of us remember them that way. But the 1930s and 1940s were warmer than today … and human carbon dioxide emissions couldn’t have been responsible for that warm period. This past winter was the coldest, longest, and snowiest in my life (I live in Illinois and part-time in Wisconsin), and recent summers have been among the coolest I can recall. This morning it was 51 degrees when I walked to my train… on August 15. I don’t remember having to wear coats in August, do you?

Mann says “a few critics argue that for the past 17 years warming has mostly stopped. Still, most scientists believe that in the past century the Earth’s average temperature has gone up by about 1.5 degrees Fahrenheit.” This is wrong on a couple counts.

The United Nations’ Intergovernmental Panel on Climate Change (IPCC), which Mann and alarmists generally hold out as the gold standard of climate research, admitted there’s been no warming for the past 15 years in its “final draft” Summary for Policymakers, before politicians and environmental activists made them take it out. Is that “a few critics”? And skeptics don’t deny a warming of 1.5 degrees Fahrenheit occurred “in the past century.” Much of the increase occurred before it could have been attributed to the human presence. Why this peculiar and misleading phrasing?

Swallowing the Left’s Rhetoric

By now, most readers will have figured out that Mann isn’t the impartial observer of the global warming debate he pretends to be. I wasn’t surprised to read, “rising temperatures per se are not the primary concern,” which is the alarmists’ pat answer when confronted by the fact that global warming stopped 17 years ago. But here’s the problem with that: According to the National Oceanic and Atmospheric Administration (NOAA), the alarmists’ computer models “rule out” a zero trend for 15 years or more, meaning an observed absence of warming of this duration invalidates the models… and the alarmists’ theory.

(Here’s the source: National Oceanic and Atmospheric Administration (NOAA), 2009. Knight, J. et al., Comment in Peterson, T. C., and M. O. Baringer, Eds., “State of the Climate in 2008,” Bulletin of the American Meteorological Society, Vol. 90, p. S23.)

When data rise up and refute a theory, good scientists don’t reject the data, they reject the theory. Global warming alarmists just say “never mind” and move to the next bit of pseudoscience. Like this: “Note, too, that this policy comes with a public-health bonus: reining in coal pollution could ultimately avoid as many as 6,600 premature deaths and 150,000 children’s asthma attacks per year in the United States alone.”

Really, it doesn’t get much sillier than this. Carbon dioxide is a harmless, invisible, colorless gas. It doesn’t cause “premature deaths” or “asthma attacks.” Shutting down all the coal plants in the U.S. would reduce emissions of real pollutants, which is the basis for Mann’s claim, but those emissions already are too low to be associated with human health effects, and asthma attacks have been rising in frequency even as those emissions have dropped. The dramatically higher energy bills caused by shutting down coal plants, however, would cause more premature deaths, and since asthma is correlated with family income, would cause more asthma attacks.

It All Leads Up to This?

After a few paragraphs of criticism of easy-target Bill McKibben, presumably to throw skeptical readers off his alarmist scent, Mann delivers what those readers who haven’t given up already might think is the best talking point: “Let’s assume that rising carbon-dioxide levels will become a problem of some magnitude at some time and that we will want to do something practical about it.”

Yes, really, this is what 40 or so paragraphs have led up to: Let’s just assume it’s a big problem (or will be) and we should all just pitch in and try to solve it. This is where Uncle Jack leans over and says “Um, how about we not make a series of such dumb-ass assumptions and in the process save billions (even trillions) of dollars and millions (maybe billions) of human lives?”

This is the crux of the problem, both with Mann’s attempt to find a middle ground in the global warming debate and with the left’s obsession with the issue. Global warming alarmism rests on assumptions, not facts, logic, or reason. It’s got no game.

“Let’s just assume there’s a reason for government to take over a quarter of the nation’s economy and fix it, just like Obamacare will fix health care. Let’s simply assume the missing science exists, that the warming will be big enough to notice, that it will happen before mankind has found a substitute for fossil fuels or is colonizing other planets, and that the benefits of stopping or slowing climate change would be worth the expense.”

Anyone who stops and thinks about this, even for a moment, realizes it’s nonsense. Why would you make these assumptions? Why would you give up the benefits of affordable fossil fuels? “We may not be scientists,” says Uncle Jack, “but we’re not stupid.”

This is why alarmists always lose debates against skeptics. It’s why alarmists will look and act like fools this summer at countless cook-outs and family parties, while skeptics will sound thoughtful and reasonable. It’s not because, as Mann insists, people are too stupid to understand graphs. It’s because alarmists are wrong and skeptics are right. It’s just common sense.

And that, my friends, is how to talk about climate change so people will listen.

Categories: On the Blog

Greens are the Enemies of Energy

Somewhat Reasonable - August 18, 2014, 12:17 PM

Here in America and elsewhere around the world, Greens continue to war against any energy other than the “renewable” kind, wind and solar, which is more costly and next to useless. Only coal, oil, natural gas, and nuclear keep the modern and developing world functioning and growing.

The most publicized aspect is Obama’s “War on Coal” and, thanks to the Environmental Protection Agency, it has been successful, responsible for shutting down several hundred coal-fired plants by issuing costly regulations based on the utterly false claim that carbon dioxide emissions must be reduced to save the Earth from “global warming.”

The EPA is the government’s ultimate enemy of energy, though the Department of the Interior and other elements of the government participate in limiting access to our vast energy reserves and energy use nationwide. By government edict, the incandescent light bulb has been banned. How insane is that?

The Earth has been cooling for seventeen years at this point, but the Greens call this a “pause.” That pause is going to last for many more years and could even become a new ice age.

A study commissioned by the National Association of Manufacturers (NAM) on the impact of the proposed new EPA regulation of emissions found that, as CNSNews reported, it “could be the costliest federal rule by reducing the Gross National Product by $270 billion per year and $3.4 trillion from 2017 to 2040,” adding $2.2 trillion in compliance costs for the same period. Jay Timmons, CEO and president of NAM, said, “This regulation has the capacity to stop the manufacturing comeback in its tracks.”

As Thomas Pyle, the president of the Institute for Energy Research (IER), said in June, “President Obama is delivering on his promise to send electricity prices skyrocketing.” Noting a proposed EPA regulation that would shut more plants, he said, “With this new rule, Americans can expect to pay $200 more each year for their electricity.” Having failed to turn around the nation’s economy halfway into his second term, Obama is adding to the economic burdens of all Americans.

America could literally become energy independent given its vast reserves of energy sources. In the case of coal, the federal government owns 957 billion short tons of coal in the lower 48 states, of which about 550 billion short tons—about 57 percent—are available in the Powder River Basin. It is estimated to be worth $22.5 trillion to the U.S. economy, but as the IER notes, that value “remains unrealized due to government barriers on coal production.” That coal would last 250 years at current consumption rates, a reserve greater than Russia’s or China’s. When you add in Alaska, the U.S. has enough coal to last 9,000 years at today’s consumption rates!

In 2013 the IER estimated the worth of the government’s oil and coal technically recoverable resources to the economy to be $128 trillion, about eight times our national debt at the time.

There isn’t a day that goes by that environmental groups such as Friends of the Earth, the Sierra Club, Greenpeace, the Natural Resources Defense Council, and the Union of Concerned Scientists, along with dozens of others, do not speak out against the extraction and use of all forms of energy, calling coal “dirty” and claiming Big Oil is the enemy.

In the 1970s and 1980s, the Greens held off attacking the nuclear industry because it does not produce “greenhouse gas” emissions. Mind you, these gases, primarily carbon dioxide, represent no threat of warming and, indeed, as the main “food” of all vegetation on Earth, more carbon dioxide would be a good thing, increasing crop yields and healthy forests.

Events such as the 1979 partial meltdown at Three Mile Island and the 1986 Chernobyl disaster raised understandable fears. The Greens began opposing nuclear energy, claiming that radiation would kill millions in the event of a meltdown. This simply is not true. Unlike France, which reprocesses spent nuclear fuel, the United States does not; President Carter’s decision to forbid reprocessing proved very detrimental, requiring repositories for large quantities of spent fuel.

To this day, one of the largest, Yucca Mountain Repository, authorized in 1987, is opposed by Greens. Even so, it was approved in 2002 by Congress, but the funding for its development was terminated by the Obama administration in 2011. Today there are only four new nuclear power plants under construction and, in time, all one hundred existing plants will likely be retired starting in the mid-2030s.

The Greens’ attack on coal is based on claims that air quality must be protected, but today’s air quality has been steadily improving for years and new technologies have reduced emissions without the need to impose impossible regulatory standards. As the American Petroleum Institute recently noted, “These standards are not justified from a health perspective because the science is simply not showing a need to reduce ozone levels.”

The new EPA standards are expected to be announced in December. We had better hope that the November midterm elections put enough new candidates into Congress to reject those standards, or the cost of living in America, the capacity to produce electricity, and the construction and expansion of our manufacturing sector will all worsen, putting America on a path to decline.

© Alan Caruba, 2014

[Originally published at Warning Signs]

Categories: On the Blog

Academics Must Take Skeptics Seriously

Somewhat Reasonable - August 18, 2014, 10:28 AM

A review and comment on: Ferenc Jankó, Norbert Móricz, Judit Papp Vancsó, “Reviewing the climate change reviewers: Exploring controversy through report references and citations,” Geoforum, Volume 56, September 2014, pages 17–34.

An article published in the September, 2014 issue of Geoforum, a peer-reviewed academic journal published by Elsevier, reports 90.79% of source citations in Climate Change Reconsidered: The 2009 Report of the Nongovernmental International Panel on Climate Change (NIPCC) were to peer-reviewed journals, a higher percentage than was the case with the United Nations’ IPCC Third and Fourth Assessment Reports. The authors found “the scientific background of the NIPCC report is quite similar to the IPCC report,” and concluded, “when we take the contrarian arguments seriously, there is a chance to bring together the differing views and knowledge claims of the disputing ‘interpretive communities’ (Lahsen, 2013b).”

This is dramatic vindication for the lead authors (Craig Idso and S. Fred Singer), 35 contributors and reviewers, and coeditors (Diane Carol Bast and me) of the 2009 NIPCC report. On a shoestring budget and tight timeline, we produced a report that is just as credible as those produced by an international bureaucracy involving thousands of scientists, activists, and politicians, spending many millions of dollars, and taking several years to produce.

Since 2009, NIPCC has produced three more volumes – an interim report in 2011 containing chiefly reviews of new research, and two hefty volumes in 2013 and earlier this year focusing on the physical science and biological impacts of climate change. Those volumes are even more comprehensive and authoritative than the 2009 report.

The Geoforum article is not the first time NIPCC has been recognized as a major contributor to the global warming debate. The volumes have been cited more than 100 times in peer-reviewed journal articles and by a long list of prominent climate scientists. In 2013, the Information Center for Global Change Studies, a division of the Chinese Academy of Sciences, translated and published an abridged edition of the 2009 and 2011 NIPCC reports in a single volume, and the Chinese Academy of Sciences organized a NIPCC Workshop in Beijing to allow the NIPCC principal authors to present summaries of their conclusions.

When the New York Times, Wall Street Journal, and Washington Post reported on the release of the IPCC’s latest report, in late 2013, their news articles also commented on the latest NIPCC report, noting that NIPCC reached the opposite conclusions, indicating that a legitimate scientific debate over the causes and consequences of climate change continued.

The Geoforum article contains statements and information worth noting. Regarding the NIPCC report’s use of peer-reviewed literature, the authors say, “The peer-reviewed material was 90.5% of the IPCC report (and 84% of the IPCC TAR WGI Report – Bjurström and Polk, 2011a) and 90.79% of the material used by the NIPCC.” The authors write that they had “assumed that the reference list of the NIPCC report would differ markedly” from that of IPCC reports due to the alarmist bias of the editors of mainstream science journals and the “malpractice” revealed during the Climategate scandal. “In fact,” they write, “considering the most cited journals (Journal of Geophysical Research, Geophysical Research Letters, Journal of Climate, Nature, Science), it seems that the scientific background of the NIPCC report is quite similar to the IPCC report.”

The authors found the 2009 NIPCC report apparently has 1,466 references, of which 1,331 were peer-reviewed. We never counted them ourselves, so we thank them for this hard work.
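For what it’s worth, the 90.79 percent figure is exactly what those counts imply. A quick check, using the totals reported in the Geoforum article:

```python
# Peer-reviewed share implied by the Geoforum counts for the
# 2009 NIPCC report: 1,331 peer-reviewed references out of 1,466 total.
total_refs = 1466
peer_reviewed = 1331

share = peer_reviewed / total_refs * 100
print(f"Peer-reviewed share: {share:.2f}%")  # prints: Peer-reviewed share: 90.79%
```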

The penultimate paragraphs of the Geoforum article call out some findings, but are couched in language that obscures the points made above and reduces the findings to some rather arcane observations. Reviewing the same body of literature and coming to opposite conclusions is evidence that “the assessment process [is] flexible,” another way of saying disagreement can be honest and not due to fakery. Then the authors write,

What are the implications for science? There is a real concern that the controversy has so far had a negative effect on the reputation of science. From the perspective of an idealised public view of science (Lahsen, 2013a), such a polarised debate about ‘truths’ may be confusing. Thus, social science with science studies in the forefront has a mission to change this obsolete view of science. Saying ‘yes’ to our first question we might have a somewhat ‘naive’ implication for the IPCC; improving and widening the reviewing process may be a possible answer to the contrarian criticisms. But when we take the contrarian arguments seriously, there is a chance to bring together the differing views and knowledge claims of the disputing ‘interpretive communities’ (Lahsen, 2013b).

The final paragraph reads as follows:

More broadly, we should consider that both reports purport to be based on the ideal of pure, value-free science, where the prevailing scientific practices may not lead to the end of the debate because citations are not solid bricks on which to build statements, conclusions and political decisions later on (cf. Sarewitz, 2004). Scientific reports should be viewed not only as a second level of peer review and canonization of scientific facts but also as a means of politicization of science. Our paper’s final conclusion, claiming a more constructive and iterative science-policy relation, is well echoed in the literature (e.g. Demeritt, 2006; Pielke, 2007; Hulme, 2009; van der Sluijs et al., 2010b; Latour, 2011). However, there will be hope for better science for the public and for policy, for better constructions of the problem only when we fully understand the knowledge controversy around climate change.

This is a little perplexing until you realize they are assuming, but don’t say, that NIPCC is comparable to, and just as credible (or not) as, the IPCC report. Both studies, they say, demonstrate that survey reports like IPCC and NIPCC are not “pure, value-free science” nor are they sufficiently credible to serve as the basis for “statements, conclusions and political decisions later on.” Rather, such studies are “a second level of peer review and canonization of scientific facts but also … a means of politicization of science.”

I take this as an effort to poison a victory by global warming skeptics. NIPCC is just as good, just as credible or reliable, as the IPCC, and this message ought to be shouted from rooftops. But having achieved this despite lack of resources, editorial bias, and outright academic fraud, the significance of our victory is trivialized by saying it hardly matters because neither NIPCC nor IPCC is credible or reliable.

Such criticism of the IPCC is rare in the peer-reviewed literature, and if the price of getting “mainstream” academics to say it is to have our credibility disparaged as well, I suppose it is worth paying. Regardless, it is now clear that mainstream academics must take global warming skeptics seriously.

Categories: On the Blog

Boomers: Moving Further Out and Away

Somewhat Reasonable - August 15, 2014, 11:59 AM

There have been frequent press reports that baby boomers, those born between 1945 and 1964, are abandoning the suburbs and moving “back” to the urban cores (actually most suburban residents did not move from urban cores). Virtually without exception such stories are based on anecdotes, often gathered by reporters stationed in Manhattan, downtown San Francisco or Washington or elsewhere in urban cores around the nation. Clearly, the anecdotes about boomers who move to suburbs, exurbs, or to outside major metropolitan areas are not readily accessible (and perhaps not as interesting) to the downtown media.

Yet there is a wide gulf between the perceived reality of the media stories and what is actually occurring on the ground, as is indicated by comprehensive sources. The latest available small area data shows that baby boomers continue to leave the urban cores in large numbers. They have also left the earlier suburbs in such large numbers that their population gains in the later suburbs and exurbs have been insufficient to stem boomer movement out of the major metropolitan areas to smaller cities and rural areas.

These conclusions are drawn from an analysis of population at the zip code tabulation area (ZCTA) among those 35 to 54 years of age in 2000 and the same cohort in 2010 (then 45 to 64 years of age). This small area analysis avoids the exaggeration of urban core data that necessarily occurs from reliance on the municipal boundaries of core cities (which are themselves nearly 60 percent suburban or exurban, ranging from as little as three percent to virtually 100 percent). This is described in further detail in the “City Sector Model” note below.

Overall Trend

The national population of the baby boomer generation declined 1.82 million between 2000 and 2010, a 2.2 percent loss (the result of an inevitably increasing death rate from the aging of cohorts). A small increase of 350,000 (1.0 percent) outside the largest cities was more than offset by a 2.17 million loss in the major metropolitan areas (over 1 million population), a decline of 4.7 percent.
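Those figures reconcile. A sketch of the arithmetic, using the paragraph’s rounded numbers (in millions):

```python
# Reconcile the national boomer population change, 2000-2010 (millions).
# All figures are the article's rounded numbers.
major_metro_change = -2.17    # loss inside major metropolitan areas
outside_metro_change = 0.35   # gain outside major metropolitan areas

national_change = major_metro_change + outside_metro_change
print(f"Net national change: {national_change:.2f} million")  # -1.82 million

# The stated 2.2 percent national loss implies a 2000 boomer
# population of roughly abs(-1.82) / 0.022, i.e. about 83 million.
implied_2000_pop = abs(national_change) / 0.022
print(f"Implied 2000 boomer population: {implied_2000_pop:.0f} million")
```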

Boomers and the Urban Core

The largest percentage loss occurred in the functional urban cores, which experienced a decline of 1.15 million baby boomers, a reduction of 16.7 percent. The functional urban cores are defined by the higher population densities that predominated before 1940 and a much higher dependence on transit, walking and cycling for work trips (further details are provided in the “City Sector Model” note below). In 2000, baby boomers accounted for 14.9 percent of the major metropolitan area population, a figure that declined to 13.0 percent by 2010 (Figure 1).

The losses were pervasive. Among the 24 major metropolitan areas with functional urban core populations above 100,000, all experienced reductions in their baby boomer population shares. The average share reduction was approximately 12 percent.

Not surprisingly, the leading urban core magnets of New York and San Francisco did the best, losing 4.3 percent and 5.8 percent of their boomer population share between 2000 and 2010. Providence, Los Angeles, and Boston rounded out the best five.

Among the 24 metropolitan areas with the largest functional urban cores, Detroit experienced the largest proportional boomer loss, at 21.2 percent. Kansas City, Washington, and Minneapolis-St. Paul lost from 17 percent to 19 percent, proportionally, of their boomer urban core populations. Despite its reputation for core renewal, Portland experienced an approximate 15 percent proportional loss of its urban core boomers, along with Milwaukee and Cleveland (Figure 2).

Boomers and the Earlier Suburbs

The reduction in baby boomer population was even greater in the earlier suburban areas (those with median house construction dates of 1979 or earlier). The 2.33 million loss in the earlier suburbs was double that of the functional urban cores, but because this population is much larger, the proportional drop was a smaller 11.1 percent. Nonetheless, the earlier suburbs continue to house the largest share of major metropolitan boomers, though that share fell from 45.3 percent in 2000 to 42.2 percent in 2010.

Combined, the urban cores and earlier suburbs lost 3.48 million boomers between 2000 and 2010.

Boomers and the Later Suburbs and Exurbs

In contrast, the later suburban areas (median house construction date 1980 or later) added approximately 750,000 baby boomers, for an increase of 6.8 percent. The later suburbs also experienced an increase in their share of major metropolitan boomers, rising from 24.0 percent in 2000 to 26.9 percent in 2010.

The exurban gain was larger than that of the later suburbs in percentage terms (7.7 percent) but smaller in absolute terms (560,000). This was enough to increase the exurban share of boomers from 15.8 percent in 2000 to 17.9 percent in 2010. Indeed, the exurban areas of the 24 major metropolitan areas with urban cores over 100,000 population all did better in attracting or retaining boomer populations than both the urban cores and the earlier suburbs.

Overall there was a 5.0 percentage point transfer of boomer share from the functional urban cores and earlier suburbs to the later suburbs and exurbs, reflecting their more than 1.3 million gain between 2000 and 2010.
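For readers who want to verify the 5.0-point figure, the sector shares reported in this article sum to 100 percent in both years, so the transfer can be checked directly. This is a simple cross-check of the numbers above, nothing more:

```python
# Shares of major metropolitan boomers by sector, as reported above.
shares_2000 = {"urban core": 14.9, "earlier suburbs": 45.3,
               "later suburbs": 24.0, "exurbs": 15.8}
shares_2010 = {"urban core": 13.0, "earlier suburbs": 42.2,
               "later suburbs": 26.9, "exurbs": 17.9}

# Points lost by the cores and earlier suburbs...
loss = sum(shares_2000[k] - shares_2010[k]
           for k in ("urban core", "earlier suburbs"))
# ...equal the points gained by the later suburbs and exurbs.
gain = sum(shares_2010[k] - shares_2000[k]
           for k in ("later suburbs", "exurbs"))

print(round(loss, 1), round(gain, 1))  # 5.0 5.0
```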

Boomers and the Nation

Moreover, the data indicate that boomers are leaving the major metropolitan areas for smaller cities or even rural areas. In contrast with the 2.17 million major metropolitan area loss, areas outside the major metropolitan areas added 350,000 boomers between 2000 and 2010. In 2000, smaller cities and rural areas housed 44.4 percent of the boomer population; by 2010, that share had risen to 45.8 percent (Figure 3). By contrast, over the same period the major metropolitan areas increased their share of the total US population, from 54.5 percent in 2000 to 54.9 percent in 2010.

America’s downtowns (generally smaller areas than the larger urban cores) have done much better in recent years, as they have become safer and as a “100 year flood” of economic retrenchment has pushed many households toward renting rather than buying. Yet, overall, urban cores have done less well, with Census Bureau data showing that, between 2000 and 2010, population gains within two miles of the largest municipalities’ city halls were more than offset by losses in the two-to-five-mile radius. These losses are not limited to the overall population; they extend to share losses among Millennials and population losses among the boomers.

Wendell Cox is principal of Demographia, an international public policy and demographics firm. He is co-author of the “Demographia International Housing Affordability Survey” and author of “Demographia World Urban Areas” and “War on the Dream: How Anti-Sprawl Policy Threatens the Quality of Life.” He was appointed to three terms on the Los Angeles County Transportation Commission, where he served with the leading city and county leadership as the only non-elected member. He was appointed to the Amtrak Reform Council to fill the unexpired term of Governor Christine Todd Whitman and has served as a visiting professor at the Conservatoire National des Arts et Metiers, a national university in Paris.

———–

City Sector Model Note: The City Sector Model allows a more representative functional analysis of urban core, suburban and exurban areas by using smaller areas rather than municipal boundaries. The more than 30,000 zip code tabulation areas (ZCTA) of the major metropolitan areas and the rest of the nation are categorized by functional characteristics, including urban form, density and travel behavior. There are four functional classifications: the urban core, earlier suburban areas, later suburban areas and exurban areas. The urban cores have higher densities, older housing and substantially greater reliance on transit, similar to the urban cores that preceded the great automobile-oriented suburbanization that followed World War II. Exurban areas are beyond the built-up urban areas. The suburban areas constitute the balance of the major metropolitan areas: earlier suburbs are those with a median house construction date before 1980, while later suburban areas have later median house construction dates.

Urban cores are defined as areas (ZCTAs) that have high population densities (7,500 or more per square mile, or 2,900 or more per square kilometer) and high transit, walking and cycling work-trip market shares (20 percent or more). Urban cores also include non-exurban sectors with median house construction dates of 1945 or before. All of these areas are defined at the zip code tabulation area (ZCTA) level.
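The two urban-core rules above can be read as a simple classification test applied to each ZCTA. The sketch below is illustrative only; the function name and parameters are assumptions for exposition, not the City Sector Model's actual implementation:

```python
def is_urban_core(density_per_sq_mile: float,
                  transit_walk_cycle_share: float,
                  median_house_year: int,
                  exurban: bool) -> bool:
    """Apply the two stated urban-core rules to a single ZCTA."""
    # Rule 1: high density (7,500+ per square mile) AND a high transit,
    # walking and cycling work-trip market share (20 percent or more).
    if density_per_sq_mile >= 7500 and transit_walk_cycle_share >= 0.20:
        return True
    # Rule 2: non-exurban sectors with a median house construction
    # date of 1945 or before.
    if not exurban and median_house_year <= 1945:
        return True
    return False

# A dense, transit-heavy ZCTA qualifies under Rule 1; a 1990s-vintage,
# auto-oriented ZCTA qualifies under neither rule.
print(is_urban_core(8200, 0.25, 1965, False))  # True
print(is_urban_core(3100, 0.05, 1994, False))  # False
```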

[Originally published at New Geography]


Categories: On the Blog

On Presidential Leadership

Somewhat Reasonable - August 15, 2014, 11:22 AM

Leadership is the hallmark of all great presidents as characterized by the Schlesinger Poll, perhaps the most prestigious of all presidential surveys. As Barack Obama concludes the last two years of his presidency, the historians and political scientists who participate in such polls will begin to assess the administration’s successes and failures and whether that leadership quality has been clearly demonstrated. What, one may ask, are the distinguishing characteristics of a great leader? Probably the best litmus test of a strong leader is his performance during periods of crisis, such as wars or periods of major domestic upheaval. Great presidents such as Washington, Lincoln and FDR embraced these periods of challenge and rose to the occasion whereas the failed presidencies of Buchanan, Harding and Pierce exhibited an absence of leadership in times of crisis, relegating their names to the dustbin of history.

As historians attempt to rank President Obama, they will most likely focus on his foreign policy challenges, specifically those in the Middle East and North Africa, in order to assess his leadership skills during periods of crisis. Let’s take these geopolitical crises one by one.

In Iraq, Barack Obama inherited a vastly improved post-surge situation, only to disavow a sincere effort to secure a longer-lasting peace through a status-of-forces agreement. The President’s fulfillment of a campaign promise to withdraw all American troops degenerated into “the great bugout,” exemplifying his rejection of a perceived imperialist foreign policy. The U.S. forfeited an opportunity to create a bulwark in the Middle East against the rise of radical Islam, the heroic efforts of thousands of U.S. armed forces were squandered, and the ensuing void led directly to the re-emergence of al Qaeda in Iraq, known today as ISIS. A terrorist state now covers western Iraq and eastern Syria and threatens the region.

In Iran, Obama has likewise taken the low road, seeking to appease the mullahs, who openly employ taqiyya (“strategic lying to infidels,” as eloquently described by Andrew McCarthy in Spring Fever) to game the negotiation process while aggressively advancing their nuclear program and fomenting instability in the Middle East.

Meanwhile, Afghanistan threatens to become another disaster as the telegraphed departure of the last U.S. troops gives the green light to the Taliban’s efforts to reassert its power.

Finally, China and Russia are both actively pushing the limits of expansionism, the latter seeking to regain the glory of the old Soviet Union. Ex-KGB agent Vladimir Putin is brazenly schooling Barack Obama in the fine art of Machiavellian politics, abetting insurrection in Ukraine after having already annexed Crimea.

Who among our great presidents of the past would have accepted these examples of such naked aggression? In 1948, Harry Truman made a bold decision to confront the Soviet Union, demonstrating the leadership that helped earn him a “Near Great” ranking in the Schlesinger Poll. In urging Congress to adopt the Marshall Plan, Truman defined the challenge, concluding: “We must be prepared to pay the price for peace, or assuredly we shall pay the price of war” (David McCullough, Truman, 1992). In the many tests of Barack Obama’s foreign policy thus far, our President has been unwilling to pay the price for peace. Only the future will tell whether we will pay the price of war.

While this editorial has dealt exclusively with foreign policy, historians determining the leadership talents of President Obama will likewise judge domestic policy. The success of the Affordable Care Act is yet to be determined, though results so far appear questionable at best. The economic recovery still sputters as an anti-free-market philosophy espoused by the administration undermines private-sector entrepreneurship, notwithstanding the robust revolution in the energy sector. Otherwise, the administration seems more interested in tilting at the windmills of climate change and income inequality than in addressing more meaningful issues such as immigration policy and deplorable underlying unemployment trends.

How will the historians grade our President? In two short years the curtain will come down on the Obama administration, and historians will scrutinize how well he demonstrated leadership during his term in office. Will his abdication of that role, especially in foreign affairs, remind future historians of the failed Harding, Buchanan, and Pierce administrations? Only time will tell.

Categories: On the Blog

National Employee Freedom Week Educates Union Members On Their Rights

Somewhat Reasonable - August 14, 2014, 1:35 PM

Despite their somewhat pitiful record, I have remained a loyal fan of the Chicago Cubs. Many of my weekends are spent going to games sporting either my Rizzo or Castro jersey. A mile from Wrigley Field, my apartment houses Cubs memorabilia I have collected over the years. Sometimes disappointed by their poor play, I have never strayed. But what if I wanted to?

What if I decided I wanted to join Cardinals fandom but was given only a two-week window to leave the Cubs? What if, despite my lack of interest in the Cubs, I was forced to remain a fan and required to keep paying for games and memorabilia because I had missed my opt-out period? Absurd, of course. Yet something very like it exists.

It exists for union members.

Millions of workers currently reside in states or districts where they are required to pay dues or fees to a union as a condition of employment. Others are bound by narrow, poorly publicized “drop” periods that quietly extend their union membership, or, worst of all, are misled to believe joining a union is their only option.

During the week of August 10–16, The Heartland Institute partnered with nearly 80 other groups across 45 states to host National Employee Freedom Week (NEFW), which teaches current union members about their freedom to leave their union entirely and, depending on the state, to abstain from paying all or a portion of their union dues.

Union members usually can leave their union only during a narrow, two-week window at the end of a three- or four-year contract. Labor unions often hide these opt-out provisions deep inside wordy contracts. With only a very thin and under-advertised window for leaving a union, workers are stuck paying for representation they may consider unwarranted and for political contributions that oppose their interests and viewpoints. Other workers remain in unions believing membership will advance their careers, unaware that alternative professional organizations provide many of the same services, often with better benefits, at a fraction of the cost of union membership.

According to a Google Consumer Survey poll, 28 percent of union members would like to leave their union if they could do so without consequences. That is about 4 million of the nation’s 14.5 million union members who either don’t know how to leave or have been told they are not permitted to do so.
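The "about 4 million" figure follows directly from the poll numbers cited above:

```python
# 28 percent of the nation's 14.5 million union members.
union_members = 14_500_000
share_wanting_to_leave = 0.28
print(round(union_members * share_wanting_to_leave))  # 4060000, i.e. "about 4 million"
```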

Employees in right-to-work states have the freedom to leave their union with no loss of employment, earnings, benefits, or rank. Although workers have much less choice in forced-unionization states, in many cases these employees can elect to become agency-fee payers or to claim religious-objector status.

The majority of the public supports this right-to-work belief. According to another Google Consumer Survey poll released by NEFW, 83 percent of Americans think that employees should “have the right to decide, without force or penalty, whether to join or leave a labor union.” These polls show that union members want to opt out of their union and that the public agrees they should be allowed to.

Every year, countless employees across the country pay union dues without knowing about their right to opt out partially or completely. National Employee Freedom Week lets them know it’s possible and shows them how it’s done.

Being a union member, or even a Cubs fan, shouldn’t be a permanent commitment.

Categories: On the Blog

The Gases of Life are NOT Pollutants

Somewhat Reasonable - August 13, 2014, 2:14 PM

Our atmosphere contains the four gases of life – nitrogen, oxygen, water vapour and carbon dioxide.

Nitrogen is the most abundant gas-of-life in the atmosphere (78%). It is an essential building block of the amino acids present in all proteins. Nitrogen itself is a very stable, unreactive gas, but micro-organisms in the soil and some plants are able to extract it from the atmosphere, making it available to growing plants. Lightning also oxidises some atmospheric nitrogen.

Oxygen is the second most abundant gas-of-life in the atmosphere (21%). Every animal absorbs oxygen with every breath, using it to metabolise the food it eats. This process builds bodies and powers muscles. In the great oxygen cycle, plants release oxygen to the atmosphere as they convert carbon dioxide and water into plant matter, replenishing the oxygen that animals breathe.

Water vapour is the third most abundant gas-of-life in the atmosphere (varies up to 5%). Water vapour is part of the great water cycle where water is evaporated into the atmosphere from salty seas, surface water and plants. Rain and snow return it to the surface supply of fresh water. No animals, plants or sea creatures could exist without water.

Water vapour is the most effective “greenhouse gas” in the atmosphere with far more effect than carbon dioxide. It reduces incoming solar radiation by day, and reduces surface cooling at night. Water is also a global temperature stabiliser – heat is transferred from oceans and land as latent heat by evaporation, forming clouds that often cool the surface.

Finally, carbon dioxide is the least abundant gas-of-life in the atmosphere (0.04%) – just a mere trace, but a vitally important trace. No plants could exist on Earth without carbon dioxide, and no animals could exist without plants. Plants extract carbon dioxide from the atmosphere to form proteins, sugars and carbohydrates, and animals add essential minerals to turn these into protein, fat, sinews and bones. The carbon from carbon dioxide is the building block for all life on Earth, for every bit of organic material, and for every carbon fuel – oil, gas and coal.

Carbon dioxide is a “greenhouse gas” which tends to retain some surface warmth, but there is no evidence in ancient or modern temperature records to suggest that carbon dioxide is a dominant factor controlling global temperature.

None of these natural gases is toxic to life at any feasible atmospheric concentration, though each can become harmful at far higher levels. For carbon dioxide, toxic effects on humans do not even begin until concentrations reach roughly fifty times the present atmospheric level; our exhaled breath is about 100 times the current atmospheric concentration.
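Taking the article's stated atmospheric concentration of 0.04% (400 ppm), the two multiples work out as follows:

```python
# The multiples in the paragraph above, starting from the stated
# atmospheric CO2 concentration of 0.04 percent (400 ppm).
atmospheric_pct = 0.04
toxicity_onset_pct = atmospheric_pct * 50    # roughly where toxic effects begin
exhaled_breath_pct = atmospheric_pct * 100   # approximate CO2 share of exhaled air
print(toxicity_onset_pct, exhaled_breath_pct)  # 2.0 4.0
```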

No thinking person could class any of them as an atmospheric pollutant – they are all naturally occurring, non-toxic, essential “Gases of Life”. Together they make up 95% of the human body, which is effectively 68% carbon dioxide – if carbon dioxide is a pollutant, the human body is badly polluted.

Categories: On the Blog

Böhm-Bawerk: Austrian Economist Who Said “No” to Big Government

Somewhat Reasonable - August 13, 2014, 10:47 AM

We live at a time when politicians and bureaucrats know only one public policy: more and bigger government. Yet there was a time when even those who served in government defended limited, smaller government. One of the greatest of them, the Austrian economist Eugen von Böhm-Bawerk, died one hundred years ago, on August 27, 1914.

Böhm-Bawerk is most famous as one of the leading critics of Marxism and socialism in the years before the First World War. He is equally famous as one of the developers of “marginal utility” theory as the basis of showing the logic and workings of the competitive market price system.

But he also served three times as the finance minister of the old Austro-Hungarian Empire, during which he staunchly fought for lower government spending and taxing, balanced budgets, and a sound monetary system based on the gold standard.

Danger of Out-of-Control Government Spending

Even after Böhm-Bawerk had left public office, he continued to warn of the dangers of uncontrolled government spending and borrowing as the road to ruin for his native Austria-Hungary, in words that ring as true today as when he wrote them a century ago.

In January 1914, just a little more than a half a year before the start of the First World War, Böhm-Bawerk said in a series of articles in one of the most prominent Vienna newspapers that the Austrian government was following a policy of fiscal irresponsibility. During the preceding three years, government expenditures had increased by 60 percent, and for each of these years the government’s deficit had equaled approximately 15 percent of total spending.

The reason, Böhm-Bawerk said, was that the Austrian parliament and government were enveloped in a spider’s web of special-interest politics. Made up of a large number of different linguistic and national groups, the Austro-Hungarian Empire was being corrupted through abuse of the democratic process, with each interest group using the political system to gain privileges and favors at the expense of others.

Böhm-Bawerk explained:

“We have seen innumerable variations of the vexing game of trying to generate political contentment through material concessions. If formerly the Parliaments were the guardians of thrift, they are today far more like its sworn enemies.

“Nowadays the political and nationalist parties . . . are in the habit of cultivating a greed of all kinds of benefits for their co-nationals or constituencies that they regard as a veritable duty, and should the political situation be correspondingly favorable, that is to say correspondingly unfavorable for the Government, then political pressure will produce what is wanted. Often enough, though, because of the carefully calculated rivalry and jealousy between parties, what has been granted to one [group] has also to be conceded to others—from a single costly concession springs a whole bundle of costly concessions.”

He accused the Austrian government of having “squandered amidst our good fortune [of economic prosperity] everything, but everything, down to the last penny, that could be grabbed by tightening the tax-screw and anticipating future sources of income to the upper limit” by borrowing in the present at the expense of the future.

For some time, he said, “a very large number of our public authorities have been living beyond their means.” Such a fiscal policy, Böhm-Bawerk feared, was threatening the long-run financial stability and soundness of the entire country.

Eight months later, in August 1914, Austria-Hungary and the rest of Europe stumbled into the cataclysm that became World War I. And far more than merely the finances of the Austro-Hungarian Empire were in ruins when that war ended four years later, since the Empire itself disappeared from the map of Europe.

A Man of Honesty and Integrity

Eugen von Böhm-Bawerk was born on February 12, 1851 in Brno, capital of the Austrian province of Moravia (now the eastern portion of the Czech Republic). He died on August 27, 1914, at the age of 63, just as the First World War was beginning.

Ten years after Böhm-Bawerk’s death, one of his students, the Austrian economist Ludwig von Mises, wrote a memorial essay about his teacher. Mises said:

“Eugen von Böhm-Bawerk will remain unforgettable to all who have known him. The students who were fortunate enough to be members of his seminar [at the University of Vienna] will never lose what they have gained from the contact with this great mind. To the politicians who have come into contact with the statesman, his extreme honesty, selflessness and dedication to duty will forever remain a shining example.

“And no citizen of this country [Austria] should ever forget the last Austrian minister of finance who, in spite of all obstacles, was seriously trying to maintain order of the public finances and to prevent the approaching financial catastrophe. Even when all those who have been personally close to Böhm-Bawerk will have left this life, his scientific work will continue to live and bear fruit.”

Another of Böhm-Bawerk’s students, Joseph A. Schumpeter, spoke in the same glowing terms of his teacher, saying, “he was not only one of the most brilliant figures in the scientific life of his time, but also an example of that rarest of statesmen, a great minister of finance . . . As a public servant, he stood up to the most difficult and thankless task of politics, the task of defending sound financial principles.”

The scientific contributions to which both Mises and Schumpeter referred were Böhm-Bawerk’s writings on what has become known as the Austrian theory of capital and interest, and his equally insightful formulation of the Austrian theory of value and price.

The Austrian Theory of Subjective Value

The Austrian school of economics began in 1871 with the publication of Carl Menger’s Principles of Economics. In this work, Menger challenged the fundamental premises of the classical economists, from Adam Smith through David Ricardo to John Stuart Mill. Menger argued that the labor theory of value was flawed in presuming that the value of goods was determined by the relative quantities of labor that had been expended in their manufacture.

Instead, Menger formulated a subjective theory of value, reasoning that value originates in the mind of an evaluator. The value of means reflects the value of the ends they might enable the evaluator to obtain. Labor, therefore, like raw materials and other resources, derives value from the value of the goods it can produce. From this starting point Menger outlined a theory of the value of goods and factors of production, and a theory of the limits of exchange and the formation of prices.

Böhm-Bawerk and his future brother-in-law and also later-to-be-famous contributor to the Austrian school, Friedrich von Wieser, came across Menger’s book shortly after its publication. Both immediately saw the significance of the new subjective approach for the development of economic theory.

In the mid-1870s, Böhm-Bawerk entered the Austrian civil service, soon rising in rank in the Ministry of Finance working on reforming the Austrian tax system. But in 1880, with Menger’s assistance, Böhm-Bawerk was appointed a professor at the University of Innsbruck, a position he held until 1889.

Böhm-Bawerk’s Writings on Value and Price

During this period he wrote the two books that were to establish his reputation as one of the leading economists of his time, Capital and Interest, Vol. I: History and Critique of Interest Theories (1884) and Vol. II: Positive Theory of Capital (1889). A third volume, Further Essays on Capital and Interest, appeared in 1914 shortly before his death.

In the first volume of Capital and Interest, Böhm-Bawerk presented a wide and detailed critical study of theories of the origin of and basis for interest from the ancient world to his own time. But it was in the second work, in which he offered a Positive Theory of Capital, that Böhm-Bawerk’s major contribution to the body of Austrian economics may be found. In the middle of the volume is a 135-page digression in which he presents a refined statement of the Austrian subjective theory of value and price. He develops in meticulous detail the theory of marginal utility, showing the logic of how individuals come to evaluate and weigh alternatives among which they may choose and the process that leads to decisions to select certain preferred combinations guided by the marginal principle. And he shows how the same concept of marginal utility explains the origin and significance of cost and the assigned valuations to the factors of production.

In the section on price formation, Böhm-Bawerk develops a theory of how the subjective valuations of buyers and sellers create incentives for the parties on both sides of the market to initiate pricing bids and offers. He explains how the logic of price creation by the market participants also determines the range in which any market-clearing, or equilibrium, price must finally settle, given the maximum demand prices and the minimum supply prices, respectively, of the competing buyers and sellers.

Capital and Time Investment as the Sources of Prosperity

It is impossible to do full justice to Böhm-Bawerk’s theory of capital and interest. But in the barest of outlines, he argued that for man to attain his various desired ends he must discover the causal processes through which labor and resources at his disposal may be used for his purposes. Central to this discovery process is the insight that often the most effective path to a desired goal is through “roundabout” methods of production. A man will be able to catch more fish in a shorter amount of time if he first devotes the time to constructing a fishing net out of vines, hollowing out a tree trunk as a canoe, and carving a tree branch into a paddle.

Greater productivity will therefore often be forthcoming in the future if the individual is willing to undertake a certain “period of production,” during which resources and labor are set to work to manufacture the capital—the fishing net, canoe, and paddle—that is then employed to paddle out into the lagoon where larger and more fish may be available.

But the time required to undertake and implement these more roundabout methods of production involves a cost. The individual must be willing to forgo (often less productive) production activities in the more immediate future (wading into the lagoon using a tree branch as a spear) because that labor and those resources are tied up in a more time-consuming method of production, whose more productive results will only be forthcoming later.

Interest on a Loan Reflects the Value of Time

This led Böhm-Bawerk to his theory of interest. Obviously, individuals evaluating the production possibilities just discussed must weigh ends available sooner versus other (perhaps more productive) ends that might be obtainable later. As a rule, Böhm-Bawerk argued, individuals prefer goods sooner rather than later.

Each individual places a premium on goods available in the present and discounts to some degree goods that can only be achieved further in the future. Since individuals have different premiums and discounts (time-preferences), there are potential mutual gains from trade. That is the source of the rate of interest: it is the price of trading consumption and production goods across time.

Böhm-Bawerk Refutes Marx’s Critique of Capitalism

One of Böhm-Bawerk’s most important applications of his theory was the refutation of the Marxian exploitation theory that employers make profits by depriving workers of the full value of what their labor produces. He presented his critique of Marx’s theory in the first volume of Capital and Interest and in a long essay originally published in 1896 on the “Unresolved Contradictions in the Marxian Economic System.” In essence, Böhm-Bawerk argued that Marx had confused interest with profit. In the long run no profits can continue to be earned in a competitive market because entrepreneurs will bid up the prices of factors of production and compete down the prices of consumer goods.

But all production takes time. If that period is of any significant length, the workers must be able to sustain themselves until the product is ready for sale. If they are unwilling or unable to sustain themselves, someone else must advance the money (wages) to enable them to consume in the meantime.

This, Böhm-Bawerk explained, is what the capitalist does. He saves, forgoing consumption or other uses of his wealth, and those savings are the source of the workers’ wages during the production process. What Marx called the capitalists’ “exploitative profits” Böhm-Bawerk showed to be the implicit interest payment for advancing money to workers during the time-consuming, roundabout processes of production.

Defending Fiscal Restraint in the Austrian Finance Ministry

In 1889, Böhm-Bawerk was called back from the academic world to the Austrian Ministry of Finance, where he worked on reforming the systems of direct and indirect taxation. He was promoted to head of the tax department in 1891. A year later he was vice president of the national commission that proposed putting Austria-Hungary on a gold standard as a means of establishing a sound monetary system free from direct government manipulation of the monetary printing press.

Three times he served as minister of finance, briefly in 1895, again in 1896-1897, and then from 1900 to 1904. During the last four-year term Böhm-Bawerk demonstrated his commitment to fiscal conservatism, with government spending and taxing kept strictly under control.

However, Ernest von Koerber, the Austrian prime minister in whose government Böhm-Bawerk served, devised a grandiose and vastly expensive public works scheme in the name of economic development. An extensive network of railway lines and canals was to be constructed to connect various parts of the Austro-Hungarian Empire—subsidizing in the process a wide variety of special-interest groups in what today would be described as a “stimulus” program for supposed “job creation.”

Böhm-Bawerk tirelessly fought against what he considered fiscal extravagance that would require higher taxes and greater debt when there was no persuasive evidence that the industrial benefits would justify the expense. At Council of Ministers meetings Böhm-Bawerk even boldly argued against spending proposals presented by the Austrian Emperor, Franz Josef, who presided over the sessions.

When finally he resigned from the Ministry of Finance in October 1904, Böhm-Bawerk had succeeded in preventing most of Prime Minister Koerber’s giant spending project. But he chose to step down because of what he considered to be corrupt financial “irregularities” in the defense budget of the Austrian military.

However, Böhm-Bawerk’s 1914 articles on government finance indicate that the wave of government spending he had battled so hard against broke through once he was no longer there to fight it.

Political Control or Economic Law

A few months after his passing, in December 1914, his last essay appeared in print, a lengthy piece on “Control or Economic Law?” He explained that various interest groups in society, most especially trade unions, suffer from a false conception that through their use or the threat of force, they are able to raise wages permanently above the market’s estimate of the value of various types of labor.

Arbitrarily setting wages and prices higher than what employers and buyers think labor and goods are worth – such as with a government-mandated minimum wage law – merely prices some labor and goods out of the market.

Furthermore, when unions impose high nonmarket wages on the employers in an industry, the unions succeed only in temporarily eating into the employers’ profit margins and creating the incentive for those employers to leave that sector of the economy and take with them those workers’ jobs.

What makes the real wages of workers rise in the long run, Böhm-Bawerk argued, was capital formation and investment in those more roundabout methods of production that increase the productivity of workers and therefore make their labor services more valuable in the long run, while also increasing the quantity of goods and services they can buy with their market wages.

To the last, Eugen von Böhm-Bawerk defended reason and the logic of the market against the emotional appeals and faulty reasoning of those who wished to use power and the government to acquire from others what they could not obtain through free competition. His contributions to economic theory and economic policy mark him as one of the greatest economists of all time, and his conduct in the political arena marks him as a principled man of uncompromising integrity who unswervingly fought for the free market and limited government.

[Originally published at EpicTimes]

Categories: On the Blog

Will Radical Universities Dominated by Radical Professors Doom this Nation?

Somewhat Reasonable - August 12, 2014, 2:12 PM

By Nancy Thorner & Elizabeth Clarke

Co-author Elizabeth Clarke remembers attending a speech in Waukegan, IL with her late husband in the summer of 1967, at which Senator Everett Dirksen (Dirksen represented Illinois in the U.S. House of Representatives from 1933 to 1939 and in the U.S. Senate from 1951 until his death in 1969) spoke passionately against the then-pending Supreme Court case of Keyishian et al. v. Board of Regents, which ruled against loyalty oaths.

Dinesh D’Souza, star and director of the movie “America: Imagine the World without Her,” engaged in a recent one-on-one debate with Bill Ayers on Friday, January 31, at Dartmouth, in which Ayers began by celebrating what he considered to be great about America. Ayers made no reference to the Founding Fathers, nor did he mention Abraham Lincoln. Instead, Ayers spoke of a protest tradition in America, going back to the 19th-century socialists and continuing through the 20th-century progressives, right up to himself.

It is evident that Bill Ayers and Bernardine Dohrn, as unrepentant radicals of the sixties, figured out that by becoming professors they would be able to change the system and achieve their same revolutionary goals through the classroom, where they could shape young minds. Listen here to the Dinesh D’Souza/Bill Ayers debate at Dartmouth College.

Expansiveness of Ayers/Obama relationship

Regarding Ayers’ radical background and his connection to Obama, which Ayers denies and which Senator Obama dismissed when running for president in 2008 as “just a guy who lives in my neighborhood,” reporter Stanley Kurtz relates how Obama’s first run for the Illinois State Senate was launched at a 1995 gathering at the Ayers’ Chicago home.  Moreover, Bill Ayers has ties to at least ten people in the White House Administration.

Kurtz also links Obama in a big way to the Chicago Annenberg Challenge (CAC). The CAC was founded by Ayers in 1995, and Obama was appointed the first chairman of its board that same year. In archives housed in the Richard J. Daley Library at the University of Illinois at Chicago, Kurtz uncovered information showing how Barack Obama and Bill Ayers worked as a team to advance the CAC agenda, patterned after Bill Ayers’ radical educational philosophy, which “called for infusing students and their parents with a radical political commitment.” Achievement tests were downplayed in favor of activism.

According to a lengthy article titled “They’re All Together,” appearing in the September 2011 issue of The American Spectator and written by Alfred S. Regnery, a former publisher of the magazine who served in the Justice Department during the Reagan Administration, “they’re all together” refers to Mr. and Mrs. Bill Ayers and their friend, the president. Mr. Regnery’s article related four factors explaining the importance of Bill Ayers and Bernardine Dohrn’s long friendship with President Obama.

1. It is important, first, because Obama along with Ayers and Dohrn, went to great lengths to mislead voters during the fall of 2008.  They “just lived in the same neighborhood” and had little contact, they pretended.  On the contrary, The American Spectator’s investigation has concluded that Obama and his campaign staff, with the help of the mainstream media, lied outright about his relationship with Ayers.  It has also concluded that Ayers lied about it as well.

2. The relationship is important, second, because Ayers and Dohrn are not reformed former radicals who have abandoned their old habits. Indeed, they are unrepentant violent radicals. . . who may have adopted new tactics to upend the U.S. and what it stands for, but whose goals remain just what they were in 1970.

3. Furthermore, the relationship is important because of the policies and issues that both Ayers and, to a lesser degree, Dohrn worked on with Obama during the 20 years preceding his election to the presidency, and because of the extent to which these hard-core left-wingers influenced today’s president of the United States.

4. Finally, it is important because of Ayers’ relationship, through his powerful businessman father, with Chicago’s Daley family, who happen to be among the most ardent Obama supporters and promoters. It is interesting that Richard M. Daley, as Chicago’s state’s attorney, presided over the plea bargain of Bernardine Dohrn when she surfaced from the underground.

If you still doubt the close, 20-plus-year relationship between Bill Ayers, Bernardine Dohrn, and President Obama, read the riveting and all-telling comprehensive article titled “The Obama File, Bill Ayers.” Since the two men shared an office, Obama knew very well with whom he was associating. In 1989, Bernardine Dohrn and Michelle Obama were associates at the Chicago law firm of Sidley & Austin when Obama joined the firm as a summer intern. Claims by Ayers and Obama that they merely encountered each other occasionally in public or in the neighborhood are dead wrong. Ayers had a part in bringing the 24-year-old Obama to Chicago when Obama was hired by the Woods Fund in 1985 as an organizer on Chicago’s economically depressed South Side.

Is this nation already The United Socialist States of America?

The liberal college campus professors who helped reelect Barack Obama are now hard at work indoctrinating a new generation of students to turn them into radical leftists. Radical liberal professors such as Ayers (now retired) and Dohrn are entrenched at America’s universities, holding tenure, with scores of allies in the media and at academic institutions throughout the world. They despise America and are working to turn out yet another generation of American students who don’t understand why our country is unique and our freedom so precious.

An interesting perspective on this nation was presented by Jeffrey T. Kuhner in his March 2010 article, “The United Socialist State of America.” Kuhner believed four years ago that President Obama was close to completing his socialist revolution to transform America. In Kuhner’s words:

From his days as a student radical, Mr. Obama has been obsessed with smashing the traditional free-market system. Like most leftists, he thinks capitalism is the enemy. ‘He was a Marxist-socialist in college,’ according to John C. Drew, who knew Mr. Obama as a university student. In an interview, Drew told Kuhner that ‘He [Obama] kept talking about the need to overthrow capitalism in favor of a working-class revolution.’

It is not by chance that both Stanley Kurtz and Jeffrey Kuhner linked Obama with his longtime associates William Ayers and Bernardine Dohrn, who, along with Obama’s minister, Jeremiah Wright—all supporters of Marxist liberation—expressed deep hatred for the United States, further believing that “only fundamental, sweeping change can redeem America.”

The claim Kuhner made in 2010 that President Obama was giving birth to a new nation, the United Socialist States of America (USSA), is not far-fetched. Obama learned his lessons well as a student of activism in “Rules for Radicals” by neo-Trotskyite Saul Alinsky, lessons Obama applied when he worked with Bill Ayers as a Chicago community organizer in association with the Annenberg Challenge. Jeffrey Kuhner’s article is a must-read in its entirety.

Fast forward to today, when Obama continues to undermine the traditional system of checks and balances established by the Founding Fathers. Much quoted is what was said in 1887 by Alexander Tyler, a Scottish history professor at the University of Edinburgh, about the fall of the Athenian Republic some 2,000 years before:

A democracy will continue to exist up until the time that voters discover that they can vote themselves generous gifts from the public treasury. . . . From that moment on the majority always votes for the candidates who promise the most benefits from the public treasury. . . . Every democracy will finally collapse . . . always followed by a dictatorship. The average age of the world’s greatest civilizations has been about 200 years.

Common Core as the final nail in the coffin

As a final thought, radical progressive educators, enabled by massive funding from left-leaning Bill Gates, concocted Common Core. States bought into it in 2010, sight unseen, as Obama’s new and improved education program. Common Core History standards, which reflect the thinking of progressive educators like Bill Ayers, Bernardine Dohrn, and President Obama, are already being taught to our children. As parents, are you fine with standards that feature:

  • A relentlessly negative view of American history, which emphasizes every problem and failing of our ancestors while ignoring or minimizing their achievements.
  • Almost total silence about the Founding Fathers, including no mention of Jefferson, Franklin, Madison, and Adams, and almost none of the Declaration of Independence.
  • Omission of military history, battles, commanders, and heroes.
  • A biased and inaccurate view of many important facets of American history, including the motivations and actions of 17th-19th-century settlers, American involvement in World War II, and the conduct of and victory in the Cold War.

If allowed to continue unchecked in the public schools, the progressive Common Core standards will be the final nail in the coffin that will complete the transition from a once proud, prosperous and strong Republic to the United Socialist States of America, without a shot being fired, through the indoctrination of impressionable children with socialist ideas and doctrine.

This is not the time to sit back and do nothing. Time is running out. Take an active role in what your children are learning in school. Furthermore, become involved in promoting candidates who will, if elected, follow constitutional principles and fight for what is right and just.

******

Other articles by Nancy Thorner and Elizabeth Clarke on the radicalization of colleges, where radical professors indoctrinate young people with ideas and policies akin to socialism:

1.  Cloward and Piven’s Marxist-based radicalism alive today:  http://illinoisreview.typepad.com/illinoisreview/2014/07/thorner-clarke-cloward-and-pivens-marxist-based-radicalism-alive-today.html#more

2. Past time for Ayers to confess past terrorism acts and Obama ties: http://illinoisreview.typepad.com/illinoisreview/2014/08/thorner-clarke-past-time-for-ayers-to-confess-past-terrorism-acts-and-obama-ties.html#more

3. Terrorists Bill and Bernardine Ayers slip unchallenged into roles as distinguished professors: http://illinoisreview.typepad.com/illinoisreview/2014/08/thorner-clarke-terrorists-bill-and-bernadine-ayers-slip-unchallenged-into-roles-as-distinguished-pro.html#comments


[Originally published at Illinois Review]

Categories: On the Blog

Colorado Dems Frack Backtrack is all about November

Somewhat Reasonable - August 12, 2014, 1:55 PM

In June, in a sparsely populated county in northern New Mexico, a primary election surprisingly unseated an incumbent County Commissioner. No one seemed to notice. But, apparently, high-ranking Democrats to the north were paying attention.

The northern New Mexico county is Mora. The high-ranking Democrats: from Colorado. The election upset was about Mora County’s oil-and-gas drilling ban.

In April 2013, the Mora County Commission voted, 2 to 1, and passed the first-in-the-nation county-wide ban on all oil-and-gas drilling. It was spearheaded by Commission Chairman John Olivas—who also served as northern director for the New Mexico Wilderness Alliance. Since then, two lawsuits have been filed against the little county because of the anti-drilling ordinance.

A little more than a year after Olivas’ pet project, the Mora County Water Rights and Self-Governance Ordinance, was passed, he was ousted. Olivas didn’t just lose in the Democrat primary election; he was, according to the Albuquerque Journal, “soundly beaten” by George Trujillo—59.8% to 34.2%. Both Olivas and Trujillo acknowledged that the ban had an impact on the outcome, with Olivas saying: “In my opinion, it was a referendum on oil and gas.” Trujillo campaigned on a repeal of the ordinance (which, due to the language of the ordinance, will be difficult to do) and has said he is open to a limited amount of drilling in the eastern edge of the county.

Mora County’s ban on all drilling for hydrocarbons, not just fracking, was incited by an out-of-state group: the Pennsylvania-based Community Environmental Legal Defense Fund (CELDF), which has also been active in Colorado.

CELDF holds Democracy Schools around the country where attendees are taught the “secrets” of peoples’ movements focusing on the rights of communities, people, and the earth. In Mora, CELDF’s Democracy School was organized by Olivas’ mother—who, along with his friends, also chaired subcommittees believed to have been organized to monitor Olivas’ interests.

In Colorado, a Boulder-based Democrat Congressman and environmental activist, Jared Polis, has worked hard to collect thousands of signatures—spending, according to the Wall Street Journal (WSJ), “millions of dollars of his own cash to promote the measures”—to get two anti-oil-and-gas initiatives on November’s ballot. His blue-haired mother (No, I am not elder-bashing. She has it dyed blue and purple.) has campaigned with him.

Polis’ proposed initiative 89 would have given local governments control over environmental regulations under an “environmental bill of rights”—which mirrors language promoted by CELDF and used in Mora County. Polis also backed ballot measure 88 that would have limited where hydraulic fracturing could be conducted.

The presence of 88 and 89 on the ballot sparked two opposing measures: 121 and 137. Measure 121 would have blocked any oil-or-gas revenue from going to any local government that limits or bans that industry—an idea also proposed, but not passed, in the New Mexico legislature. Measure 137 would have required proponents of initiatives to submit fiscal impact estimates.

Much to the horror of environmental activists, the battle of ballot initiatives ended before anyone ever got to vote on them.

On Monday, August 4, Polis and Colorado Governor John Hickenlooper held a news conference where they pushed for a compromise to avoid a “messy ballot fight.” Instead, they are proposing an 18-member task force to issue recommendations to the Colorado Legislature next year on how to minimize conflicts between residents and the energy industry. Later in the day, an agreement was reached and both sides pulled the opposing measures.

Backers of proposed initiatives 88 and 89 are outraged. They feel Polis sold out.

Hickenlooper said the suggested restrictions, if passed, posed “a significant threat to Colorado’s economy”—which they would. However, given the history of the lowly New Mexico county commissioner, the compromise may be more about “a significant threat to Colorado’s” Democrat party.

A November 2013 Quinnipiac poll found that most Coloradans support fracking—only 34 percent oppose it. Noteworthy is the political divide: 80 percent of Republicans support fracking and only 9 percent oppose it, while 54 percent of Democrats oppose fracking and only 26 percent support it. But the numbers indicate that Republicans are most likely to come to the polls in November to ensure the economically advantageous activity is not curtailed—and this scares Democrats such as Hickenlooper and Senator Mark Udall, who are both up for reelection in November. Udall, according to the WSJ, “ran in 2008 as a full-throated green-energy champion.” His 2014 Republican opponent, Congressman Cory Gardner, points to the economic benefits of fracking, as seen in North Dakota and Texas.

Had the measures not been pulled, the WSJ reports: “the issue would have been at the center of the fall debate.”

In addition to driving Republicans to the polls, the anti-fracking measures didn’t have a high probability of survival. While Colorado communities have previously passed anti-drilling initiatives—Boulder, Broomfield, Fort Collins, Lafayette, and Longmont—the most recent attempt in Loveland failed after an organized industry effort to educate voters on the safe track record of fracking and its economic benefits. Additionally, in late July, a Boulder County District Court judge struck down Longmont’s fracking ban. The Denver Post reported: “Under Colorado law, cities cannot ban drilling entirely but can regulate aspects of it that don’t cause an ‘operational conflict’ with state law.”

In New Mexico, the lawsuits have not yet made their way into court, but it is expected that, as in Colorado, the courts will rule in favor of state statutes. Constitutionally protected private property rights should triumph.

Polis, who made his millions from the sale of the Blue Mountain Arts greeting card website, presented his initiatives as a “national referendum on fracking.” As the WSJ states: “In that sense he was right.” Colorado Democrats realize that allowing an anti-fracking fervor to drive an election is a dangerous decision. The Democrats’ support for banning fracking—while killing jobs, hurting the local and national economy, damaging America’s energy security, and threatening private property rights—should unseat two top Democrats by driving Republicans to the polls. And this could become the national referendum on fracking.

[Originally published at RedState]

Categories: On the Blog