On the Blog

Primary Musings from Oklahoma, Colorado, and Mississippi

Somewhat Reasonable - June 26, 2014, 11:27 AM

With a surprisingly wide margin of victory, Congressman James Lankford won the Oklahoma Republican U.S. Senate primary, defeating former Speaker of the State House of Representatives T.W. Shannon by 23 points and avoiding a runoff election. Lankford now becomes the prohibitive favorite to replace outgoing Senator Tom Coburn, who is retiring with two years remaining in his current term.

This was a very different race from the one taking place in Mississippi. Despite negative ads run against Lankford by conservative groups, the Oklahoma contest was not an example of an “establishment” Republican or RINO versus a Tea Party candidate. In short, both Lankford and Shannon are credible, likeable conservatives, both are qualified for higher elected office, and both are likely to be on the scene in the future—to Oklahoma’s credit.

A former Baptist minister (or is a Baptist minister, like a Marine, never “former”?), Lankford directed a large Christian youth camp for more than a decade before winning election to Congress in 2010 in the Tea Party tsunami.

T.W. Shannon, the first black Speaker of the House in Oklahoma and a member of the Chickasaw Nation, has worked for former Oklahoma Congressman J.C. Watts and current Rep. Tom Cole (who won his primary on Tuesday and will seek a 7th term in Congress). He is a business consultant with a law degree from Oklahoma City University.

Although it made life a little dull for reporters, the two candidates were exceptionally similar in their positions on the issues. This made the race about retail politics, about framing the opponent, and eventually about the perhaps-backfiring impact of out-of-state and PAC money spent trying to influence the race.

Shannon was boosted by a Tea Party blitz, drawing support from Senator Ted Cruz and from the Senate Conservatives Fund. FreedomWorks PAC also endorsed Shannon, calling him “a principled leader…He has blocked ObamaCare implementation in Oklahoma, signed a pledge to fight Common Core, founded the first States’ Rights Committee to protect Oklahomans from overreaching federal regulation, and consistently voted for lower taxes and more individual freedom.”

The Sunlight Foundation, a campaign finance watchdog group, argues that “dark money” was “the key factor driving Oklahoma’s Senate battle,” referencing especially a group called Oklahomans for a Conservative Future which spent $1.3 million, mostly attacking Congressman Lankford.

But primary voters are better informed than the electorate overall, so attacks against Lankford for “voting with liberals to raise the debt ceiling twice”—despite the fact that both Tom Coburn and Oklahoma’s other conservative Republican senator, James Inhofe, also voted for the debt ceiling measure—landed with a thud. Instead, it seems that Oklahomans took minor offense at being told what to do, particularly by groups that consistently support conservatives but whose mailing addresses are within spitting distance of Capitol Hill, making them little more than possibly well-intended interlopers.

This result was predicted five months ago by Congressman Tom Cole, who said in an interview with Roll Call that “Groups coming from outside the state, coming to try and set the agenda, sorry. You are welcome to come, but you ought to look at your track record.”

Oklahomans should hope that T.W. Shannon runs for office again in the future. That said, nothing in James Lankford’s two terms in Congress should have made him unappealing to Sooner voters. And they were not going to let negative ads, whether by outsiders or even Oklahomans, fool them.

A similar story played out in Colorado’s Republican primary for governor, in which former Congressman Bob Beauprez eked out a victory in a four-man field. The race ended up far more competitive than most elections with that many candidates: Beauprez received 30 percent in victory, beating former Congressman Tom Tancredo (26.5 percent) and Colorado Secretary of State Scott Gessler (23 percent), while former State Senate Minority Leader Mike Kopp came in fourth with nearly 20 percent. It was as tight a four-person race as I have seen, with Gessler and Kopp outperforming many people’s expectations.

Beauprez, who lost a prior race for governor by a wide margin to Democrat Bill Ritter in 2006, never abandoned a vision of himself returning to office, and what he perceived to be a weak field encouraged him to throw his hat into the ring. With Tuesday’s victory, Beauprez now faces the difficult challenge of defeating incumbent Democrat Governor John Hickenlooper, who remains a fairly popular figure in the state despite angering many Coloradans with attacks on gun rights, his refusal to execute a mass murderer, and his support for radical environmentalist plans to increase electricity costs (through increased renewable energy mandates) in rural Colorado.

The media has already reported the Colorado result as an “establishment” win. While Bob Beauprez is reasonably characterized as an Establishment candidate, the others were hardly Tea Party representatives.

Given his outspoken opposition to both illegal and legal immigration, Tancredo is a breed unto himself. To be fair, he has principled constitutional and libertarian leanings that I admire, and I belatedly endorsed him in 2010. But I believe that his reputation as a one-trick pony would not only have made him unelectable, but also would have poisoned the ticket for other Republicans, particularly Congressman Cory Gardner, whose race to unseat Senator Mark Udall is winnable.

There was very little public polling done in Colorado in recent months. Earlier in the campaign, the front-runner appeared to be Tancredo, who lost a three-way contest for governor in 2010 when he switched to the American Constitution Party after the Colorado GOP nominated an unelectable candidate in a bit of misdirected Tea Party mania. Why did Tancredo, whose name recognition is roughly equal to Beauprez’s (both of whom are better known than Messrs. Gessler and Kopp), lose his early lead? In part, similar to what happened in Oklahoma, because political ads backfired.

One of the first widely run ads in the campaign accused Tancredo of being “too conservative for Colorado” because of his strong opposition to Obamacare. This transparent ploy to make Tancredo more appealing to Republican primary voters by pretending to criticize him was paid for by a Democrat-affiliated 527 group called Protect Colorado Values. Clearly the Democrats perceived Tancredo’s potential negatives the same way I did, but their obvious involvement was a major miscalculation.

The same Democrats ran an ad accusing Bob Beauprez of supporting an individual health insurance mandate—which in fact he “reluctantly” did in 2007, though that never translated into support for Obamacare and he later changed his view. But despite Beauprez’s imperfect record (which is no worse than that of most other Republicans who served during the George W. Bush years), nobody who follows Colorado politics believes him to be anything but a solid conservative.

Again, primary voters, who tend to be better informed than the population overall, took umbrage at the transparent attempt at manipulation.

Republicans also ran unfair—and almost certainly ineffective, despite Tuesday’s results—ads against Tancredo, such as one supported by the popular former Senator Bill Armstrong that suggested Tancredo would legalize heroin and other hard drugs. In fact, Tancredo has taken a bold position for marijuana legalization and has said he would consider legalizing other drugs (mostly in the interest of reducing violence caused by gangs protecting drug profits), but the ad was so hyperbolic that its effect was likely minimal.

The outspoken social conservative Mike Kopp campaigned aggressively on opposition to marijuana legalization, but Colorado voters were not overwhelmed with a backward-looking message on an issue where the people have spoken.

Perhaps because of the memory of Republicans’ enormous mistake in 2010 of nominating an unelectable small businessman whose personal story was, to put it kindly, exaggerated, and perhaps because many voices (such as on my radio show) urged GOP primary voters to consider first and foremost the candidate most likely to win in November, Bob Beauprez came from behind to earn his second shot at the Governor’s Mansion. While I think it will be a serious challenge to beat John Hickenlooper in November, Beauprez’s victory is welcome news for Republican Senate candidate Cory Gardner and other Republicans down the ticket. My suggested motto, borrowing 1,500-year-old wisdom, for participants in Tuesday’s primary: First, do no harm. By selecting Beauprez, they’ve heeded that advice.

In an under-the-radar local election in Loveland, Colorado, voters rejected by 52 percent to 48 percent a moratorium on fracking, despite an onslaught of misleading ads from liberal opponents of energy development. Voters may have noticed that Weld County, which Loveland borders, produces most of the oil in Colorado and, according to a pro-energy development group, “had the largest percentage increase in employment in the US in 2013.” Fracking bans, many disguised as measures supporting “local control”—the backing of which by Democrats should make anyone suspicious since liberals always want political power to be as far from the people as possible—may be on many other ballots across the state in November. Thus, Tuesday’s result is a welcome potential harbinger of sanity when it comes to one of Colorado’s most important industries.

Just a few comments on Mississippi (which my colleague Matt Purple is covering here): Thad Cochran represents everything that is wrong with the Republican Party; if that weren’t already clear, the fact that John McCain campaigned for him should have been the final necessary proof.

Pork king Cochran won his race by using Mississippi’s unfortunate election rules—which allow Democrats to vote in the Republican primary if they haven’t already voted in the Democratic primary—to win support from the opposition party by unashamedly promising more federal spending for his state. A typically inept mainstream media analysis was provided by CNN’s Gloria Borger, who suggested that the GOP could learn something from Cochran’s winning coalition of establishment Republicans and Democrats, since many of those Democrats had never before in their lives voted for a Republican. The problem is that approximately none of those Democrats will ever again vote for a Republican. In the meantime, Cochran’s supporters unsubtly played up the worst (e.g. racist) stereotypes of Tea Party candidates.

Republicans like Thad Cochran are the raison d’être for the Tea Party and candidates like the unsuccessful Chris McDaniel. A Republican senator who wins a primary election on the strength of Democratic support by making promises that should come from Democrats and other proponents of redistribution, pork, wasteful spending, and fundamentally unlimited government is the very definition of been-there-too-long. (Cochran has been in Congress, including the House, for more than 40 years—and it shows.) The GOP and every Republican who supported Cochran should feel something between slight embarrassment and outright shame.

A final note: One has to wonder how Thomas Carey feels today. Carey was the third Republican candidate in the original Mississippi primary race. He had no business in the race and no chance to win. Yet his presence almost certainly cost McDaniel the outright win on June 3, forcing the run-off election and allowing Cochran the time to organize Democrats to hold on to the seat he uses to buy votes with our money. Mr. Carey owes the nation an apology.

 

[Originally published at The American Spectator]

Categories: On the Blog

There’s Only One Meaningful Metric That Will Determine Obamacare’s Future

Somewhat Reasonable - June 26, 2014, 10:11 AM

Since the end of the initial open enrollment period, there has been a marked rise in the frequency of a certain type of argument – an argument I hear with regularity inside the Acela corridor, but almost never outside of it. The argument goes something like this: regardless of the political toxicity of Obamacare, it is here to stay, and the law’s opponents and Congressional Republicans need to wake up to that fact, or else.

The “or else” could be anything, and is essentially interchangeable. The most common prediction is of electoral doom; less common are predictions of protests in the streets turning violent in defense of Medicaid benefits, of Republicans losing broad swathes of traditionally red states in this year’s Senate contests, or, most recently, of Republicans losing 90 percent of women voters in 2016. And yes, I’ve heard all of these and more in recent weeks.

This argument has a milder version, repeated in the more sensible press. These observers concede that yes, Obamacare is still very unpopular, and yes, premiums are still going up, and yes, it has signed up fewer of the uninsured than expected, and even the newly insured view it only barely favorably… but still, they insist, talk of repeal and replace is just politicians irresponsibly playing to the more radical elements of their conservative base. Forget the polls – Obamacare is here to stay.

I think this is a mistaken view of the political realities at play here. Perhaps it is driven by the drumbeat of “good news, everyone” put forward by supporters of the law. But in an era when wonks are so plentiful, data journalists fall fully ripened from the trees, and explainers flower with the glorious frequency of endless summer, it’s easy to lose sight of the simple factors that will determine whether a policy endures or is dramatically reformed.

It’s a mistake to assume there is a magic number – a share of the uninsured who gained coverage, a Medicaid signup statistic, or an average premium increase – that will mark the point where Obamacare is safe from Republican assault. Average American voters and policymakers are not watching these figures; they judge Obamacare’s performance primarily by how it affects their livelihoods, their costs, and their constituents. The opponents of the law are far louder and more motivated than its supporters. And that is very unlikely to change any time soon.

This is why I do not understand the assumptions of inevitability on the part of the law’s supporters. The Republican Party has put the repeal of President Obama’s signature law at the center of its agenda for years. It has taken repeal vote after repeal vote and made pledge after pledge. As a matter of partisan priority, there is nothing greater. And one more year of Obamacare will not change that.

Every single feasible candidate for the 2016 Republican nomination will loudly declare their support for repealing the law. Most will also offer a policy replacement, culled from the various technocratic and free market think tanks or from the legislation currently introduced in Congress. Whoever Republicans choose as their nominee, their favored replacement will become the de facto alternative Republican plan which party leaders and elected officials will all be expected to defend. And should the Republican candidate win, it is inconceivable that they will not have run on making the replacement of Obamacare a top priority for the first 100 days in office.

Republicans are not going to back off their efforts for repeal. It is a top priority for their national base, for their donors, and for their constituents. If Republicans have the Senate, it becomes that much easier – but even without it, the margin will be narrow, and the possibility for dealmaking outweighs the likelihood that every single Democratic Senator will toe the line and pass on the opportunity to help remake health policy as they see fit. And while the election of Hillary Clinton or another Democrat would prevent this circumstance and protect Obamacare from assault, assuming that such an election is inevitable is really what you’re saying when you say Obamacare is here to stay.

The political legacy of Obamacare and the 2012 election is a vindication of monopartisan governance. Great domestic policies are no longer achieved via bipartisan give and take or the leadership of careful compromisers – they are rammed through with the support of your party and your base when you have the power to do so. I fully expect to see Republicans attempt to do that should they retake the White House.

So what are we to do in the time until November 2016? Well, in the meantime, we can discuss the other factors and outcomes of this policy in the ways they impact America’s insurers, hospitals, drugmakers, and industries. But we should not lose sight of the fact that it is this political outcome, and this outcome alone, which will determine whether Obamacare survives or not. It’s just not that complicated.

Categories: On the Blog

Linking The Dollar To Gold: Completing The Recipe For Restoring An Economic Boom For America

Somewhat Reasonable - June 26, 2014, 9:55 AM

Alexander Hamilton was America’s first Secretary of the Treasury under President George Washington. When he entered office in 1789, America was an agricultural nation of just 4 million people, still broke from its financially costly victory over the British Empire in the Revolutionary War.

The states had accumulated relatively massive debts to finance that war, which mostly remained unpaid. The United States did not even have a national currency, with Spanish coins still in wide circulation and use. Steve Forbes explains in his recently published definitive work, Money: How the Destruction of the Dollar Threatens the Global Economy and What We Can Do About It, “America’s finances were in a state of disarray after the wild inflation resulting from massive money printing during the American Revolution.” As a result, “Hamilton faced the challenge of restoring the economy of the young republic that had been devastated by the Revolutionary War….”

Hamilton boosted America’s economy first by advancing legislation for the federal government to assume and pay off the debts of the states, establishing the foundation for America’s historic creditworthiness. That was recognized by America’s AAA credit rating for over 200 years, until 2011 when the relentless spending of the Obama Democrats led to the first credit downgrade of the nation in history.

But even more importantly for the nation’s long term economic growth and prosperity, Hamilton promoted the Coinage Act of 1792, which established the first U.S. Mint and fixed the value of the dollar at $19.39 per ounce of gold. That was devalued slightly in 1834 to $20.67, which prevailed for 100 years, until President Roosevelt adopted the only major U.S. devaluation in history during the Depression, to $35 an ounce. That price prevailed until President Nixon took America off the gold standard in 1971.

Forbes explained the results: “Overnight the economy sprang to life. Capital poured in from the Dutch and also America’s former enemies, the British. Barely a century after Hamilton’s reforms, the United States was the premier industrial power in the world, surpassing even Great Britain.” He added, “Hamilton’s system of banking and stable money quickly attracted and generated capital. It turned the American economy into the leading industrial power in the world.”

Forbes further explains that while America was under the gold standard, the economy boomed at an astounding 4% real rate of economic growth. At that rate, our economy, incomes and standard of living would double every 17 years. That was the foundation of the American dream and our historic, geometric explosion into the world’s leading “hyperpower.” Forbes adds that in the U.S., “Between 1870 and 1914, real wages more than doubled even though the country had millions of immigrants [greatly expanding the supply of labor]. Agricultural output tripled. Industrial production…surged a jaw-dropping 682%.”
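As a back-of-the-envelope check of that doubling figure (an illustrative calculation of mine, not one from Forbes’s book), a constant 4% real growth rate doubles output in roughly ln(2)/ln(1.04), or about 17.7 years:

    import math

    growth_rate = 0.04  # assumed constant 4% real annual growth
    doubling_time = math.log(2) / math.log(1 + growth_rate)
    print(f"Doubling time at {growth_rate:.0%} growth: {doubling_time:.1f} years")
    # Prints about 17.7 years, consistent with the roughly 17-year figure above.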

The question is why Hamilton understood economics so much better than the Ivy League poobahs of today, like Paul Krugman, who are more interested in promoting the socially hip stagnation of socialist equality than the dynamic economic growth of capitalism. If only Colonel Hamilton were alive today, he would be more worthy of the Nobel Prize in economics than at least half of those prize winners living today.

Great Britain experienced quite similar results under the gold standard. In 1696, the Enlightenment philosopher John Locke was joined by the path-breaking scientist and physicist Isaac Newton in arguing against devaluation as Britain replaced, or “recoined,” its debased currency with new, unshaved, fully restored coins. By 1717, Newton was Master of the Royal Mint, and he fixed the British pound to a value in gold of 3.89 pounds an ounce. That value remained unchanged for more than 200 years, until 1931.

Forbes notes, “When it tied the pound to gold, Britain was a second-tier nation. Soon all of that would change.” A century later, “By the end of the Napoleonic Wars in 1815, Great Britain emerged indisputably as the world’s major power and global center of innovation.”

Economic Benefits of the Gold Standard

Fixing a nation’s currency to gold assures that the currency maintains a stable long term value, without inflation or deflation. That enables a nation’s money to serve as a measure of value, like a ruler measures inches or a clock measures time. Such a stable measure of value, in turn, means money can best perform its most essential function of facilitating transactions.

When money serves as a stable measure of value, it most clearly expresses the value of everything in terms of everything else. That best enables producers to determine whether their production is adding or wasting value as compared to the value of the inputs to that production. Or whether they should be producing something else instead that might create greater value. That information is essential for an economy to maximize output and economic growth over time.

When a farmer trades his crop for such stable money, he immediately knows what that crop is worth. And he knows that he can keep that value of his production in the currency because it will hold its value over time, until he is ready to buy something with it. That stability of the reward for production undisturbed by monetary fluctuations adds further to the incentive for such production.

Similarly, with a stable value for money, investors know the money they will receive back from their investment will be worth the same as the money they put in it, undepreciated by inflation. That encourages greater savings, investment and capital formation from within the country. And it encourages investment and capital to flow into the country from abroad. This maximizes overall investment, production and economic growth.

Nixon Takes America Off the Gold Standard

On August 15, 1971, President Nixon took America, and the world, off the gold standard completely, leaving a world of unanchored fiat currencies, by terminating the postwar Bretton Woods monetary regime. Nixon and his advisors mistakenly believed that this would help the economy by promoting American exports, which Forbes recognizes as 18th century mercantilist thinking.

But it was a decisive turn for the worse for the American economy, and the entire global economy. Since that time, real annual U.S. economic growth has averaged 3%, down 25% from the prior gold standard long term trend. Forbes explains, “If America had grown for all of its history at the lower post-Bretton Woods rate, its economy [today] would be about one quarter of the size of China’s. The United States would have ended up much smaller, less affluent, and less powerful.”

Moreover, “Since 1971, the dollar’s purchasing power has declined by more than 80%,” with about a third of that (26%) since 2000. Real incomes have been stagnant, or even declined. “[A] man in his thirties or forties who earned $54,163 in 1972 today earns around $45,224 in inflation adjusted dollars—a 17% cut in pay.” Unemployment has been significantly higher on average. Globally, “After the 1970s, world economic growth has been a full percentage point lower; inflation 1.5% higher.”
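As a quick sanity check of the pay-cut figure quoted from Forbes (the dollar amounts are his; the arithmetic below is simply an illustrative recomputation):

    wage_1972 = 54_163   # inflation-adjusted earnings cited for 1972
    wage_today = 45_224  # inflation-adjusted earnings cited for today
    decline = (wage_1972 - wage_today) / wage_1972
    print(f"Real pay cut: {decline:.0%}")  # about 17%, matching the quoted figure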

Forbes observes, “The correlation between unstable money and an unstable global economy would seem obvious.” Indeed, the termination of any link between the dollar and gold immediately inaugurated worsening boom and bust cycles of inflation and recession in the 1970s, with inflation soaring into double digits for several years. Inflation peaked at 25% over just two years in 1979 and 1980.

It took the worst recession since the Great Depression in 1981-1982 to tame that inflation, with double digit interest rates for years, and unemployment peaking at 10.8%. The Reagan/Volcker/Greenspan strong dollar monetary policies effectively restored a discretionary link to gold, with gold stabilizing around $300 to $350 for 20 years. That kept close control over inflation.

But this discretionary standard broke down as 2000 approached. The Fed loosened money and reduced interest rates over the Y2K scare, contributing to the tech stock bubble. Much worse, the Bush Administration supported a weak dollar monetary policy again on the mercantilist/Keynesian confusion that would help the economy by promoting exports. That included more loose money and 2½ years of negative real interest rates which served to pump up the housing bubble and lead, along with Clinton’s wild overregulation (in the name of affordable housing), to the 2008 financial crisis and recession.

Restoring a Dollar Link to Gold for the 21st Century

The best thing about Steve Forbes’ new book, Money, is that it discusses exactly the specific reforms that should be adopted today to establish a modern, 21st century link to gold for the dollar. That new system would not require the federal government to hold any gold stockpiles, and the money supply would not be limited to the availability of any quantity of gold.

Federal law would fix the dollar’s value in gold at a specified market price. That price would be set by an index of recent market prices for gold, perhaps the average gold price over the last 5 to 10 years, marked up by 10% as a hedge against causing deflation in the process. Federal law would mandate that the Fed conduct its monetary policy to ensure a stable value of the dollar at that market price.

The Fed would enforce that price through its open market operations buying and selling U.S. government bonds. If the price of gold began wandering in the market above the specified market price, that would signal the threat of inflation, and the Fed would begin tightening monetary policy by selling bonds to the market in return for cash withdrawn from the market. That reduced money supply would hold down price increases in the market, including for gold. The Fed would continue this policy, until the market price for gold returned to its specified target value.

If the price of gold began wandering in the market below the specified market price, that would signal the threat of deflation. The Fed would then begin loosening monetary policy by printing cash to buy U.S. government bonds in the market. That would increase the money supply, which would tend to increase prices in the marketplace, including for gold. The Fed would continue this policy until the market price for gold returned to its specified target value. The Fed would be required by the federal law to take such actions to prevent the price of gold from varying from the target price by more than 1%, which was the range permitted under the Bretton Woods system for currencies to fluctuate against the then gold backed dollar.
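To make the mechanism concrete, here is a minimal sketch of the feedback rule just described. It is purely illustrative: the 1 percent band comes from the paragraph above, while the example target price and the simplified “sell bonds / buy bonds” responses are assumptions made for the sake of the sketch, not an actual Fed procedure or proposed statute.

    def fed_gold_rule(market_price: float, target_price: float, band: float = 0.01) -> str:
        """Return the open-market action implied by the gold price signal."""
        upper = target_price * (1 + band)
        lower = target_price * (1 - band)
        if market_price > upper:
            # Gold trading above target signals incipient inflation:
            # sell bonds, withdrawing cash and shrinking the money supply.
            return "tighten: sell government bonds"
        if market_price < lower:
            # Gold trading below target signals incipient deflation:
            # buy bonds with newly created cash, expanding the money supply.
            return "loosen: buy government bonds"
        return "hold: gold is within the permitted band"

    # Hypothetical $1,400/oz target, assumed purely for illustration:
    print(fed_gold_rule(1430.0, 1400.0))  # tighten: sell government bonds
    print(fed_gold_rule(1395.0, 1400.0))  # hold: gold is within the permitted band

The Fed would simply repeat this check and its response until the market price of gold returned to the target range.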

The federal law would provide that this new monetary policy would become effective at a specific date set in the future, perhaps 12 months away, to enable the private economy to plan for and adjust to the new policy. The law should grant the President or some other federal official the power to adjust the target price for gold to reflect more recent market prices as the implementation date approaches. Those more recent market prices would better reflect what the target gold price should be when the dollar is based on this new link to gold. As a lesson learned from experience with President Obama, the law should also specify that any member of Congress would have standing to sue the President or other designated official if he or she did not carry out the law regarding this later market-based adjustment as provided, and that federal courts would have the power to enforce relief. For example, not following more recent market prices in adjusting the target price would be a violation of the law.

This would effectively mean that the Fed would no longer have any power to pursue discretionary monetary policies to try to guide the economy in one direction or another. The new federal law would bar the Fed from attempting to manipulate interest rates, for example. The Fed would no longer have the power to set the federal funds rate, which is the rate banks pay to one another to borrow reserves. The Fed would continue to have the power to act as a lender of last resort to deal with financial panics that might temporarily threaten an otherwise sound bank. So the Fed could continue to set the “discount rate” that it would charge for such short term, lender of last resort borrowing. But even that would be required to be set above market rates, so that the Fed would not become a cheap source of funds for banks to borrow to lend out.

Along with a federal balanced budget amendment to the Constitution, this would effectively make Keynesian economics illegal. That would be highly desirable, because Keynesian economics is proven not to work, and Keynesian advocates are so oblivious to reasoned discussion on the point.

As a safeguard to help ensure that the Fed follows its responsibilities under this new law, the law should specify that anyone could present dollars to the Fed and receive gold at the legally specified target price. If the Fed were following the law, it could always buy gold in the market to cover such a redemption at the target price. If the Fed were not following the law, then it would likely not be able to finance such mandatory redemptions. The new federal gold law should again specify that any member of Congress would have automatic standing to sue the Fed to enforce the law.

Another safeguard would involve removing all barriers to the rise of private, competing, alternative currencies, to challenge the Fed to enforce and follow the law. That would mean no taxes, including capital gains taxes, could be assessed on sales of gold and silver. If the Fed did not follow the law, then these competing currencies could displace the dollar.

Such a new gold link to the dollar would be the last, missing component of any comprehensive strategy to restore traditional, world-leading American prosperity. Such a strategy would include as well personal and corporate tax reform to lower tax rates, deregulation of unnecessary regulatory costs and barriers, reduced federal spending to balance the budget and reduce the national debt as a percent of GDP, and free trade. Those policies could be expected to restore long term U.S. economic growth to 4% a year, which would leapfrog the American economy another generation ahead of the rest of the world.

 

Categories: On the Blog

Securing America’s Interests in the Middle East

Somewhat Reasonable - June 26, 2014, 8:32 AM

So much blood and treasure was wasted during the long occupation in Iraq that there was a sigh of relief across America when the troops finally left. Yet the end of the American presence has resulted in chaos. Islamist extremists in recent days have been making gains against the Iraqi military, seizing several towns, including the city of Mosul. The sheer rapidity of the collapse of law and order in Iraq led to a lot of hand-wringing in the White House. President Obama finally decided to send a few hundred troops to bolster the beleaguered regime of Prime Minister Nouri al-Maliki. This choice will only serve to further diminish the status of the United States in the region.

There is a better course of action: let Iraq break up. For nearly a decade the United States has been trying to keep the three distinct ethno-national groups in Iraq cooperating. This policy has failed disastrously, with Shi’ites and Sunnis still at each other’s throats and the Kurds finding their semi-autonomy threatened by the central government in Baghdad. The only way to salvage something from the wreckage of Iraq is to effectively strip it for parts.

If Iraq were broken up along largely ethnic lines, the Shia region would be relatively safe, as the Sunni minority would not risk the ire of Iran, which has always seen itself as a guarantor of the Shia population. The problem of Islamic extremism could then be dealt with on a more targeted basis. It would also lend greater clarity to the growing cross-border threat in Syria.

What should America do? First, Obama should send representatives to the Kurdish regional government in Northern Iraq. To date, the United States has kept a lid on the clear desire of the Kurds to declare independence. Now it should help smooth that process. Kurdistan would be a relatively free, stable, and potentially prosperous ally for the United States in a region that has soured of late on the Stars and Stripes.

Guaranteeing Kurdish political autonomy could also be made cost-neutral. The significant oil wealth recently discovered in Kurdish territory could easily fund the American military presence. This is an arrangement that could be worked out. The Kurdish leaders have proven very pragmatic in their outlook and could easily be prevailed upon to support a US military presence in the region to guarantee their security.

The result of this policy maneuver would be an Iraq region no longer riven by so much ethnic conflict, though the threat of Islamist extremism would remain serious. It is a big ask for America to spend even more to prop up a teetering regime, and Americans have largely lost the stomach for prolonged conflicts. And rightly so. If the situation is to be salvaged, it must be faced in the knowledge that the sort of full-scale, boots-on-the-ground operation necessary to even temporarily resuscitate the central government and secure its borders is not going to happen.

The way forward is to take the least costly steps that will guarantee a modicum of stability, support peaceful and friendly governments, and secure the safety of Americans at home and abroad. A broken up Iraq and a free Kurdistan now seems to be the way to make the best of a dire situation.

It is time for the Obama administration to change its rhetoric from condemning pro-independence actions in Kurdistan to a policy of facilitating peaceful secession.

Categories: On the Blog

States Push Back Against Common Core in Their Schools

Somewhat Reasonable - June 25, 2014, 3:32 PM

It’s vacation time for the nation’s school kids and, while they play, states are beginning to push back against the latest effort of the federal government to exert total control over the nation’s schools: Common Core, whose curriculum standards and content have rapidly revealed it to be a nightmare.

As I frequently note, the word “education” does not appear in the U.S. Constitution because the Founding Fathers knew full well that education was the job of localities and states, which could ensure quality and the opportunity it provides everyone willing to learn the basics and beyond. From its earliest days, Americans would create a town, build a church, and follow up with a school. Until liberals complained about it, school days began with a prayer.

Liberals know that whoever controls schools controls the future. Dictatorships of all descriptions place particularly heavy emphasis on raising new generations with the kind of indoctrination that only the early years in school can impart. It should come as no surprise that the last failed liberal President, Jimmy Carter, ushered in the creation of the U.S. Department of Education.

Still largely unknown to the general public is the control of the Department by the teachers unions, the National Education Association and the American Federation of Teachers, and their support of the Democratic Party. This accounts for much of the well documented decline of education in America. The unions’ chief concern is higher pay and benefits for teachers, not the welfare of the children in their care. Their focus is on politics, not teaching.

In March, the Cato Institute’s Center for Educational Freedom issued a new study on “Academic Performance and Spending over the Past 40 years,” which revealed that “the average state has seen a three percent decline in academic performance despite a more than doubling in inflation-adjusted per-pupil spending.” Sometimes the spending increases are astonishing, as in New York State, where spending rose by 115%. California and Florida are not far behind, with an increase of 80%.

Common Core has rapidly become a political hot potato as parents have let their state governors and legislators know how bad it is. Writing in the May edition of the Heartland Institute’s newsletter, School Reform News, its managing editor, Joy Pullmann, reported that Indiana Gov. Mike Pence was the first to sign a bill, in March, rejecting Common Core national standards, “but the parents and curriculum experts whose criticism led to the change also criticized the first draft of replacement standards for looking very similar to the Common Core mandates it is meant to replace.”

In a Heartland booklet, “The Common Core: A Bad Choice for America”, Pullmann notes that “States may not change Common Core standards, must adopt all of them at once, and may only add up to an additional 15 percent of requirements. The standards themselves have no clear governance, meaning there is no procedure for states to follow to make changes they feel are necessary. It is highly unlikely individual states would control or greatly influence any such process.”

At the very heart of the debate concerning Common Core is the notion that every single school in America should teach the exact same thing in the exact same way. That’s not how real education works, and any teacher will tell you that different students learn at different rates and that some require extra help. Schools free of such one-size-fits-all thinking educated generations of Americans who made the nation the greatest economic power in the world.

Thus far, in addition to Indiana, state legislatures in Oklahoma, South Carolina, and Missouri have approved measures to exit Common Core’s national standards. Louisiana’s Gov. Bobby Jindal in mid-June said “We want out of Common Core” and is taking steps to reject it.

Common Core is the fulfillment of liberals’ dream for education. It was developed in 2009 by the National Governors Association and the Council of Chief State School Officers. It was quickly incentivized by the Obama administration with $4.35 billion in Race to the Top competitive grants and waivers from the federal No Child Left Behind law for states that signed on. Minus the states that have rejected Common Core, there are still 42 that have adopted it.

Ron Paul, commenting on the Oklahoma opt-out, said “Common Core is the latest attempt to bribe states, with money taken from the American people, into adopting a curriculum developed by federal bureaucrats and education “experts.” In exchange for federal funds, states must change their curriculum by, for example, replacing traditional mathematics with ‘reform math.’ Reform math turns real mathematics on its head by focusing on ‘abstract thinking’ instead of traditional concepts like addition and subtraction. Schools must also replace classic works of literature with ‘informational’ texts, such as studies by the Federal Reserve Bank of San Francisco. Those poor kids!”

Common Core’s curriculum standards are testimony to why abandoning local control over a community’s or city’s educational program is a very bad idea, and to why, once again, the federal government makes worse virtually any program that should be left to the states.

Categories: On the Blog

The FCC Plan to Steamroll State Laws Against Government Broadband

Somewhat Reasonable - June 25, 2014, 1:47 PM

The 2009 “Stimulus” bill contained $7.2 billion for local government broadband — the federal government giving city, county and municipal governments money to get into the Internet Service Provider (ISP) business.

Shocker: government broadband is a disaster. It already was, for years before the 2009 stimulus, which just funded many, many more disasters.

Everyone in Utah may be charged $20 a month to bail out UTOPIA, their woefully misnamed, decade-long government broadband disaster. The government broadband network iProvo lost tens of millions; then Google swooped in and purchased everything for one dollar.

Shocker: Google loves government broadband.

Government broadband is so terrible, in fact, that twenty states have passed laws limiting it.

Meanwhile, these local governments have been just as awful as stewards for their residents when it comes to private broadband. You know, the kind that actually works, the competition with government broadband.

Local governments shake down the living daylights out of any wired company that comes asking to provide service — making it nearly impossible for them to do so. Which has resulted in many areas suffering a dearth of hardline options.

Government is (yet again) the problem. The answer to government isn’t more government. Unfortunately, no one has told this to Federal Communications Commission (FCC) Chairman Tom Wheeler.

Decrying this government-created lack of options, Wheeler has declared he will issue another Obama Administration fiat, steamrolling the laws of the twenty states and ramping up federal government spending on local government broadband.

Does the federal government have the authority to do this? Of course not.

And why not start with the thirty states that don’t have these laws? You know, be a little less dictatorial about it?

We answer all of this — and much more — in the accompanying video.

You’ll find the answers … disquieting.

Read more: http://dailycaller.com/2014/06/24/the-fccs-plan-to-steamroll-state-laws-against-government-broadband/

Categories: On the Blog

The Injustice of Opt-Out Organ Donation

Somewhat Reasonable - June 25, 2014, 10:48 AM

The dearth of transplantable organs remains a serious problem in the United States and in much of the world. Some 123,000 Americans are currently waiting for an organ, and 18 of them die every day because demand continues to exceed supply. The problem has drawn the attention of many activists and policymakers, but sometimes the proposed solutions have proven more unpleasant than the problem. Chief among these unsavory solutions is the policy of opt-out organ donation.

Opt-out organ donation operates on the principle of presumed consent. This means that the government assumes that an individual is willing to have their organs harvested upon their death unless that individual has explicitly opted out of being a donor. Advocates for this system argue that this would greatly increase the number of organs available for transplantation and would save many lives.

The advocates for opt-out organ donation ignore something very important in their rush to claim dominion over the bodies of the dead: ordinary people’s views of the human body. To most Americans, the inanimate human body is more than a mere container of usable tissues. Even absent the spark of life, a body is usually seen as still being part of the deceased person.

This is not so much a religious or even spiritual sentiment, but a deeply human one. We attach significance to the body, whether it is a shell or all that remains of a person who was. We see it often as something worthy of respect.

This is why the body is not only essential to many funerary rituals, but is also a critical part of many people’s personal mourning and remembrance. It is why, in the wake of natural disaster, a huge amount of effort is put into the recovery of bodies that could have no medical use. It is why soldiers risk their lives to recover the remains of their fallen comrades. In essence, there is a personhood that we acknowledge by convention and sentiment even in the case of the dead.

Why is this perception of the body antagonistic to opt-out organ donation? Because it gives the presumption of ownership and control to the government.

Defenders of an opt-out policy might retort that because no one is obliged to donate their organs and can tick the box to remove themselves from the list, the self-ownership of the individual is not compromised. That reasoning is deeply flawed because it ignores the fundamental quality of the very idea of presumed consent. By presuming consent, the government essentially says that it owns your remains unless you go through a process that explicitly tells them otherwise. That completely turns on its head the idea of self-ownership as a baseline assumption.

Self-ownership, the underlying right of an individual to be independent of external domination, is nullified when an individual has to sign a petition to prevent the state from harvesting their organs. What an opt-out system does is change the relationship of the individual and the state in such a way that the state has a much greater presumptive power over the individual’s very humanity.

Furthermore, there is an unpleasant smell of utilitarianism about opt-out programs. They relegate individual choice to a concern secondary to the overall welfare of the polity. When the state begins making such a calculus about the disposition of its citizens, it does not take long for it to view them as means rather than ends. For citizens to be truly free, they must not simply be agents of the state apparatus. There must always be some distinction between individuals and their societies.

There are other ways to increase organ donations. Donor drives are just one example. Whatever encouragements they offer, the burden must be on the state to encourage people to make the decision to donate their organs, not to just assume people have already consented.

Categories: On the Blog

Dropcam Key to Google’s New Ubiquitous Physical Surveillance Network

Somewhat Reasonable - June 25, 2014, 9:44 AM

Google recently bought Dropcam for $555m, a company which makes inexpensive, easy-to-install, WiFi-video-streaming cameras that connect to cloud-based networks for convenient monitoring, set-up and retrieval.


Please don’t miss this graphic – here – of how the Dropcam acquisition fits into Google’s plans for a new ubiquitous physical surveillance network that will complement and leverage its existing virtual surveillance network.

Dropcam fills a big missing part of Google’s vision – literally to see, hear and track everything – in order to fulfill Google’s mission “to organize the world’s information.”

Most Rapid and Complete Vertical-Integration

What is remarkable here is that in only about six months Google has bought six key companies (Boston Dynamics, Nest, DeepMind, Titan Aerospace, SkyBox, and Dropcam) that comprise many of the key building blocks necessary to create a ubiquitous surveillance network that can physically track most everyone and everything from the sky and on the ground.

Effectively Google is taking its dominant ad-driven surveillance model to the next level. Obviously it is not content with dominating just the virtual world of data and monetization of software products and services. Apparently, Google has ambitions to leverage its virtual dominance to dominate large swaths of the physical economy as well: e.g. wearables, devices, aerial mapping, robots, cars, energy management, smart home services, Internet access, etc.

Importantly, physical surveillance, involving hardware and people, is much more difficult-to-scale, costly and people-intensive than Google’s virtual surveillance via cookies and other easy-to-scale software tracking technologies.

Evidently, no other company/entity is looking at the 21st century world/economy as holistically as Google’s apparent vision of fully integrating virtual and physical surveillance networks.

One could argue that these strategic acquisitions over the last half-year could prove more cumulatively transformative of Google’s strategic direction, business mix, and long-term capabilities than those of any other half-year in Google’s storied history.

Simply put, just as the Google+ effort seamlessly integrated dozens of online products and services into a unified offering, expect Google to embark on another integration effort to quietly and seamlessly weave these many new physical assets into a unified physical surveillance network. Once that is complete, expect Google’s dominance to be much greater than it is now, because Google is vertically integrating much faster and more completely than any other entity — by far.

Accelerating & Compounding Privacy/Wiretapping Problems  

The privacy problems with physical surveillance in the real world are dramatically greater than in the largely-privacy-free virtual world.

For example, consider the two big privacy problems Google got into when it effectively wiretapped both Gmail and home WiFi via Street View. For Gmail, a Federal Judge has ruled that Google’s installation of a physical “Content One Box” to scan Gmails to create advertising profiles was effectively illegal interception or “wiretapping.” For Street View, a Federal Appeals Court also has ruled that Google’s Street View interception of home WiFi signals was effectively wiretapping because the signals were judged to be private and not public.

The super big problem here for Google is that in at least two of its highest-profile and longstanding services, Google did not believe it needed either to disclose what it was doing with others’ communications or to ask anyone for permission to do what it was doing with their private information.

If surveillance innovation-without-permission is the norm at Google, and Google continues to maintain the legal position that people “have no expectation of privacy,” Google’s physical surveillance using Dropcam, and other physical surveillance technologies, for Google’s business purposes, could be at risk of being ruled illegal wiretapping as well.

There are obvious potential privacy problems with Google owning Dropcam, which is presumably why Google announced first that Google-owned Nest (not Google itself) was the entity buying Dropcam, and second that Nest’s separate privacy policy would not allow the sharing of private Dropcam monitoring information with Google.

Ironically and tellingly, it took only a couple of days for Google to undermine the public assertion that Google could not access private Dropcam information under Nest’s privacy policy. Google just announced that Nest will allow Google and some App developers to have access to some of the private information that Nest (and now Dropcam) collects on its users. Apparently, the claimed privacy “Chinese Wall” policy may be more like a screen door in practice.

A Profound Business Conflict-of-Interest

In conclusion, the acquisition of Dropcam potentially provides Google’s engineers and advertising business model with arguably some of the most private, intimate, and valuable personal information available — a continuous, inside look into someone’s inner sanctum, where the public and competitors could never go or see. The temptation for Google to use and leverage this valuable private information will be enormous.

With Nest, but even more so with Dropcam, Google has created a profoundly serious business conflict-of-interest by putting a paid-privacy-based-service inside a privacy-hostile advertising business model thirsting for access to the most valuable private info.

If there is one thing that we’ve learned about Google — from its world’s-worst privacy rap sheet and its latest ambitions for a ubiquitous physical surveillance network — it is that Google has very serious problems respecting boundaries and asking for permission to use others’ private data.

George Orwell, in his classic dystopian novel “1984,” envisioned a surveillance technology called the telescreen that is eerily similar to Google-Dropcam’s capabilities today. It appears Google’s latest acquisition spree to assemble a ubiquitous physical surveillance network enables Google to be the 21st century’s Big Brother Inc.

Forewarned is forearmed.

Originally published at www.precursorblog.com.

 

Categories: On the Blog

New York, Legacy Cities Dominate Transit Urban Core Gains

Somewhat Reasonable - June 25, 2014, 9:18 AM

Much attention has been given to the increase in transit use in America. In context, the gains have been small, and very concentrated (see: No Fundamental Shift to Transit, Not Even a Shift). Much of the gain has been in the urban cores, which house only 14 percent of metropolitan area population. Virtually all of the urban core gain (99 percent) has been in the six metropolitan areas with transit legacy cities (New York, Chicago, Philadelphia, San Francisco, Boston, and Washington).

In recent articles, I have detailed a finer grained, more representative picture of urban cores, suburbs and exurbs than is possible with conventional jurisdictional (core city versus suburban) analysis. The articles published so far are indicated in the “City Sector Articles Note,” below.

Transit Commuting in the Urban Core

As is so often the case with transit statistics, recent urban core trends are largely a New York story. New York accounted for nearly 80 percent of the increase in urban core transit commuting. New York and the other five metropolitan areas with “transit legacy cities” represented more than 99 percent of the increase in urban core transit commuting (Figure 1). This is not surprising, because the urban cores of these metropolitan areas developed during the heyday of transit dominance, and before broad automobile availability. Indeed, urban core transit commuting became even more concentrated over the past decade: the 99 percent share of new transit commuting (600,000 one-way trips) captured by the legacy city metropolitan areas was well above their 88 percent share of urban core transit commuting in 2000.

New York’s transit commute share was 49.7 percent in 2010, well above the 27.6 percent posted by the other five metropolitan areas with transit legacy cities. The urban cores of the remaining 45 major metropolitan areas (those over 1,000,000 population) had a much lower combined transit work trip market share, at 12.8 percent.

The suburban and exurban areas, with 86 percent of the major metropolitan area population, had much lower transit commute shares. The Earlier Suburban areas (generally median house construction dates of 1946 to 1979, with significant automobile orientation) had a transit market share of 5.7 percent, the Later Suburban areas 2.3 percent and the Exurban areas 1.4 percent (Figure 2).

The 2000s were indeed a relatively good decade for transit, after nearly 50 years that saw its ridership (passenger miles) drop by nearly three-quarters to its 1992 nadir. Since that time, transit has recovered 20 percent of its loss. Transit commuting has always been strongest in urban cores, because of the intense concentration of destinations in the larger downtown areas (central business districts), which can be effectively served by transit, unlike the dispersed patterns that exist in the much larger suburban and exurban parts of metropolitan areas. Transit’s share of work trips by urban core residents rose a full 10 percent in relative terms, from 29.7 percent to 32.7 percent (Figure 3).

There were also transit commuting gains in the suburbs and exurbs. However, similar gains over the next quarter century would still leave transit’s share below 5 percent in the suburbs and exurbs, because of its small base of ridership in these areas.

Walking and Cycling

The share of commuters walking and cycling (referred to as “active transportation” in the Queen’s University research on Canada’s metropolitan areas) rose 12 percent in the urban core (from 9.2 percent to 10.3 percent), even more than transit. This is considerably higher than in suburban and exurban areas, where walking and cycling remained at a 1.9 percent market share from 2000 to 2010.
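A note on how these gains are measured: the percentages are relative changes in commute share, not percentage-point changes. A small illustrative calculation using the figures cited above:

    def relative_change(old_share: float, new_share: float) -> float:
        """Relative (not percentage-point) change in a commute mode's share."""
        return (new_share - old_share) / old_share

    # Urban core shares cited above, 2000 vs. 2010:
    print(f"Transit: {relative_change(29.7, 32.7):.0%}")              # about +10%
    print(f"Walking and cycling: {relative_change(9.2, 10.3):.0%}")   # about +12%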

Working at Home

Working at home (including telecommuting) continues to grow faster than any other work access mode, though like transit, from a small base. Working at home experienced strong increases in each of the four metropolitan sectors, rising a full percentage point or more in each. At the beginning of the decade, working at home accounted for fewer work commutes than walking and cycling; by 2010 it was nearly 30 percent larger.

The largest working at home gain was in the Earlier Suburban areas, with a nearly 500,000 person increase. Unlike transit, working at home does not require concentrated destinations, effectively accessing employment throughout the metropolitan area, the nation and the world. As a result, working at home’s growth is fairly constant across the urban core, suburbs and exurbs (Figure 4). Working at home has a number of advantages. For example, working at home (1) eliminates the work trip, freeing additional leisure or work time for the employee, (2) eliminates greenhouse gas emissions from the work trip, (3) expands the geographical area and the efficiency of the labor market (important because larger labor markets tend to have greater economic growth and job creation), and (4) does all of this without requiring government expenditure.

Driving Alone

Despite empty promises about transit’s potential, driving remains the only mode of transport capable of comprehensively serving the modern metropolitan area. Driving alone has continued its domination, rising from 73.4 percent to 73.5 percent of major metropolitan area commuting and accounting for three quarters of new work trips. In the past decade, driving alone added 6.1 million commuters, nearly equal to the total of 6.3 million major metropolitan area transit commuters and more than the working at home figure of 3.5 million. To be sure, driving alone added commuters in the urban core, but lost share to transit, dropping from 45.2 percent to 43.4 percent. In suburban and exurban areas, driving alone continued to increase, from 78.2 percent to 78.5 percent of all commuting.

Density of Cars

The urban cores have by far the highest car densities, despite their strong transit market shares. With 4,200 household vehicles available per square mile (1,600 per square kilometer), the concentration of cars in urban cores was nearly three times that of the Earlier Suburban areas (1,550 per square mile or 600 per square kilometer) and more than four times that of the Later Suburban areas (950 per square mile, or roughly 370 per square kilometer). Exurban areas, with their largely rural densities, had a car density of 100 per square mile (40 per square kilometer).
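As a consistency check on these figures, assuming the standard conversion of 1 square mile ≈ 2.59 square kilometers, the per-square-kilometer values follow directly from the per-square-mile values, which is also why the Later Suburban figure of 950 reads most naturally as vehicles per square mile:

\[
\frac{4{,}200}{2.59} \approx 1{,}600, \qquad
\frac{1{,}550}{2.59} \approx 600, \qquad
\frac{950}{2.59} \approx 370, \qquad
\frac{100}{2.59} \approx 40
\quad \text{(vehicles per square kilometer)}.
\]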

Work Trip Travel Times

Despite largely anecdotal stories about the super-long commutes of those living in suburbs and exurbs, the longest work trip travel times were in the urban cores, at 31.8 minutes one-way. The shortest travel times were in the Earlier Suburbs (26.3 minutes), with slightly longer times in the Later Suburbs (27.7 minutes). Exurban travel times were 29.2 minutes. Work trip travel times declined slightly between 2000 and 2010, except in exurban areas, where they stayed the same. The shorter travel times are to be expected with the continuing evolution from monocentric to polycentric and even “non-centric” employment patterns and a stagnant job market (Figure 5).

Contrasting Transportation in the City Sectors

The examination of metropolitan transportation data by city sector highlights the huge differences that exist between urban cores and the much more extensive suburbs and exurbs. Overall, the transit market share in the urban core approaches nine times the share in the suburbs and exurbs. The walking and cycling commute share in the urban core is more than five times that of the suburbs and exurbs. Moreover, the trends of the past 10 years indicate virtually no retrenchment in automobile orientation, as major metropolitan areas rose from 84 percent suburban and exurban in 2000 to 86 percent in 2010. This is despite unprecedented increases in gasoline prices and the disruption of the housing market during the worst economic downturn since the Great Depression.

[Originally published at New Geography]

Categories: On the Blog

Bilderberg: The Most Important Event You’ve Never Heard Of

Somewhat Reasonable - June 24, 2014, 2:36 PM

One of the world’s oldest and most important political conferences celebrated its 60th anniversary this month. The Bilderberg Group met in Copenhagen, Denmark from May 29th to June 1st to discuss matters of global import. Named after the Hotel Bilderberg where the first conference was held in 1954, Bilderberg has held meetings every year since then between many of the world’s top political, economic, and business leaders.

Yet, thanks to a culture of deep secrecy, very few people know much about Bilderberg or its objectives. This has led to a great deal of speculation, and quite a few conspiracy theories. It is important to separate myth from reality in order to understand Bilderberg because it is one of the western world’s most significant political meetings.

So what is Bilderberg, and what are they talking about this year? Here are the five things you need to know:

1. It’s the Talking Shop of the Global Elite

Bilderberg is, at its core, a gathering of the major players in the world of politics and business. The stated aim is to provide a mostly informal setting away from the prying eyes of the press to discuss some of the issues facing the world that demand international attention. It is a talking shop, a place where leaders can hob-nob and share ideas.

The conference is technically a private event, with invitations extended on an individual basis. In other words, attendees do not go to Bilderberg as representatives of their governments or businesses, but as private individuals. This procedure has raised some eyebrows, since it is deeply questionable whether our elected leaders should be attending conferences with their international counterparts without any sort of oversight. There is also the risk of business leaders lobbying politicians, given their privileged access during the conference.

2. This Year’s Agenda

Every year Bilderberg sets an agenda concerning issues of note. Frequent subjects have been global security and economic integration. According to public statements by the group’s steering committee, Bilderberg 2014 was focused on privacy and government transparency. This is of course a serious issue currently being faced across the world, with the internet and other forms of media making private citizens’ lives far more public, data more readily available, and the potential for abuse all the greater. Recent scandals, such as the Edward Snowden leaks, have likewise raised concerns over privacy and the need for international cooperation on enforcing standards.

3. This Year’s Guest List

This year, as every year, featured a star-studded guest list. From the United States, General Keith Alexander, former head of the N.S.A.; Marie-Josee Kravis of the New York Fed; and her billionaire husband Henry Kravis were among the guests. Christine Lagarde, head of the I.M.F., George Osborne, British Chancellor of the Exchequer, and many other major European leaders also attended. The sheer amount of wealth and power gathered in one place must necessarily be a cause for concern. Yet, no one seems to pay it much attention.

4. The Proceedings are Kept Totally Secret

A common question people ask when they hear about what Bilderberg is and the sorts of people who attend it is: “If it’s so important, why the hell haven’t I heard of it?” The answer is simple: the organizers and attendees work very hard to keep the event secret. In fact, the only reason we have a general agenda and guest list is because independent journalists have been scrutinizing the event for years.

It seems like a bit of a no-brainer that any gathering of the most important elected and appointed officials who govern the lives of the citizens of the Western world with top business and economic leaders would demand extreme scrutiny. Yet much of the mainstream media has for decades only casually observed the event.

It is that secrecy that is perhaps most worrying about the Bilderberg Conference. If our political and corporate leaders meet without any sort of oversight, how can we hold them accountable?

5. Despite the Conspiracy Theories, Bilderberg is not that Sinister

The wealth, power, and secrecy of Bilderberg blend into an irresistible cocktail for the conspiracy-minded. Some people have tried to claim that Bilderberg is some sort of shadow government that secretly runs the world. These rumors have little foundation outside the fevered imaginations of a few fringe observers. That does not mean there is no cause for concern.

Human beings are corruptible, politicians even more so. The presence of moneyed interests and powerful individuals all gathered together for a secret conference presents a potential temptation.

As private citizens we have very limited power, and thus must always be wary when those who would lead us choose to keep us in the dark.

 

[Originally published at IOnTheScene]

 

Categories: On the Blog

Uber-Left Free Press’ ’Net Neutrality’ Isn’t What Most Supporters Think It Is

Somewhat Reasonable - June 24, 2014, 1:56 PM

In the cinematic classic “The Princess Bride,” Inigo Montoya utters the now oft-repeated “You keep using that word. I do not think it means what you think it means.” Uber-Left government-media outfit Free Press is highly practiced in this disingenuous art. Their name is one shining example. It sounds good, but when you find out what they actually stand for – not so much. And they use “Net Neutrality” one way publicly to engender support for the already heinous policy – but their ultimate intent with it is something drastically different, and dramatically worse.

Free Press’ presented Net Neutrality persona sounds benign and innocuous.

When we log on to the Internet via our computer or smartphone, we take a lot for granted. We assume we’ll be able to access any website or use any application we want, whenever we want, at the fastest speed, whether it’s a corporate site or a friend’s blog. We assume we can use any service we like — watch online videos, update our Facebook status, read the news — any time we choose, on any device we choose. What makes all these assumptions possible is a principle called Net Neutrality.

But there are a lot of things Free Press isn’t telling you.

Net Neutrality is socialism for the Internet – it guarantees everyone equal amounts of nothing.  It is the government mandating that everything on the World Wide Web be delivered at the exact same speed.  As with all things government – the Veterans Administration debacle being the latest terrible example – that speed is S…L…O…W.

Net Neutrality is a “solution” desperately running around in search of a problem. All of the nightmare scenarios Free Press and their fellow proponents put forward are hypothetical – they aren’t actually occurring. And they haven’t occurred. And they won’t occur – because the free market dictates that they won’t. (Netflix is currently claiming Net Neutrality violations – but they too are mis-defining it, and have been proven to be faking the evidence.)

The Internet has since its commercial inception been a virtually regulation-free zone. As a result of this government-less-ness, the Web has exploded into the free speech-free market Xanadu we all know and love.

The government doesn’t have a regulatory hook in the Web – so it can’t begin reeling it in.  And that drives Free Press crazy.  So they weave their Net Neutrality fairy tale – we need the government to save us from…this unbelievably amazing Internet?  Really?

Free Press wants the government to reel in the private sector Web – because they don’t want there to be a private sector Web.  How do we know this?  Because Free Press’s co-founder said so.

Meet Robert McChesney – avowed Marxist and college professor (please pardon the redundancy).

Avowed Marxist?  McChesney writes for and was editor of Monthly Review – about which he wrote:

“Although Monthly Review has a current circulation of 8,500 – and has never seen its circulation rise much above the 12,000 mark – it is one of the most important Marxist publications in the world, let alone the United States.”

In Monthly Review and elsewhere, McChesney has written things like:

“There is no real answer but to remove brick by brick the capitalist system itself, rebuilding the entire society on socialist principles.”

And:

“Any serious effort to reform the media system would have to necessarily be part of a revolutionary program to overthrow the capitalist system itself.”

So it comes as no surprise that Free Press co-founder McChesney also says this:

“At the moment, the battle over network neutrality is not to completely eliminate the telephone and cable companies. We are not at that point yet. But the ultimate goal is to get rid of the media capitalists in the phone and cable companies and to divest them from control.”

How very Hugo Chavez of them.

So what McChesney and his Free Press want is to “remove brick by brick the capitalist (Internet) system itself” and “overthrow” “the media capitalists” and “divest them from control.”

Leaving us with government as our sole Internet Service Provider (ISP) – single-payer government Internet.  How’s that system working for veterans’ health care?

All of which is not exactly the innocuous Net Neutrality that Free Press has been selling, now is it?

Inigo Montoya – call your office.

Categories: On the Blog

Never-Ending Green Disasters

Somewhat Reasonable - June 24, 2014, 9:24 AM

Newton’s 3rd law of motion, if applied to bureaucracy, would state: “Whenever politicians attempt to force change on a market, the long-term results will be equal and opposite to those intended”.

This law explains the never-ending Green energy policy disasters. 

Greens have long pretended to be guardians of wild natural places, but their legislative promotion of ethanol biofuel has resulted in massive clearance of tropical forests for palm oil, sugar cane and soy beans. Their policies have also managed to convert cheap food into expensive motor fuel and to turn land devoted to bush, pastures or crops into mono-cultures of corn for bio-fuel. This has wasted water, increased world hunger and corrupted the political process for zero climate benefits.

Greens also pretend to be protectors of wildlife and habitat but their force-feeding of wind power has uglified wild places and disturbed peaceful neighbourhoods with noisy windmills and networks of access roads and transmission lines. These whirling bird-choppers kill thousands of raptors and bats without attracting the penalties that would be applied heavily to any other energy producers – all this damage to produce trivial amounts of intermittent, expensive and blackout-prone electricity supplies.

Greens have long waged a vicious war on coal, but their parallel war on nuclear power and the predictably intermittent performance of wind/solar energy has forced power generators to turn to hydro-carbon gases to back up green power. But Greens have also made war on shale-gas fracking – this has left countries like Germany with no option but to return to reliable, economical coal, or increase their usage of Russian gas and French nuclear power. Their war on coal has lifted world coal usage to a 44-year high.

Greens also say they support renewable energy, but they oppose any expansion of hydro-power, the best renewable energy option. For example, they scuppered the Gordon-below-Franklin hydro-electric project, which would have given Tasmania everlasting cheap green electricity. But they never mention their awkward secret – the Basslink under-sea cable goes to Loy Yang power station in Victoria and allows Tasmania to import coal-powered electricity from the mainland.

Robbie Burns warned us over 200 years ago:

“The best laid schemes of Mice and Men
Gang aft agley,
An’ lea’e us nought but grief an’ pain,
For promis’d joy!”

Categories: On the Blog

The Truth About Global Warming: Heartland’s 9th International Conference on Climate Change, July 7-9 in Las Vegas

Somewhat Reasonable - June 24, 2014, 8:50 AM

Come to fabulous Las Vegas July 7-9 to meet leading scientists from around the world who question whether “man-made global warming” will be harmful to plants, animals, or human welfare. Learn from top economists and policy experts about the real costs and futility of trying to stop global warming.

Meet the leaders of think tanks and grassroots organizations who are speaking out against global warming alarmism.

Don’t just wonder about global warming … understand it!

Read testimonials from previous happy attendees!

#ICCC9 takes place at the Mandalay Bay Resort and Casino. Rooms start at only $80 per night plus fees and taxes. Fly American or United and get a discount of up to 10%!

We are hosting the event in Las Vegas that week in partnership with our friends at FreedomFest, who are cosponsors of #ICCC9 and host their excellent annual conference July 9 – 12 at Planet Hollywood.

A preliminary schedule for the event is here. Speakers already confirmed include Fred Singer, Craig Idso, Willie Soon, Roy Spencer, Marc Morano, Christopher Monckton, Patrick Moore, and Anthony Watts. For more speakers and their bios, click here.

Register for the event here, or call 312/377-4000 and ask for Ms. McElrath or reach her via email at zmcelrath@heartland.org.

Exhibiting and sponsorship opportunities are available starting at only $150! Contact Taylor Smith at tsmith@heartland.org for information about promotional opportunities and prices.

Several prizes will be awarded to scholars, elected officials, and activists for outstanding contributions to the debate over global warming. To nominate someone or to suggest a prize, contact Robin Knox at rknox@heartland.org.

To watch videos from the previous eight International Conferences on Climate Change, click here. For more information about The Heartland Institute, visit our website.

Categories: On the Blog

Redskins Brouhaha Shows How Politics Is Ruining Sports Talk Radio

Somewhat Reasonable - June 23, 2014, 3:13 PM

One of the few simple joys I have in life, shared with Camille Paglia, is listening to sports radio. She describes it as one of the few arenas still safe for an old-fashioned sort of masculinity – I think of it more as a respite from reading and thinking about politics and policy, second only to leaning back in an easy chair with a good simple future-noir detective story about hunting Chinese Martians or a word that could end the world. There is a simple rhythm and cadence to good sports talk radio which allows for an undercurrent of wit and humor juxtaposed with statistical argumentation, hitting the high and the low.

Of course, in the ESPN age, the realm of sports is often invaded by politics. This is typically in the form of mild irritants, and the more sports-minded hosts will back away slowly from guests who suddenly feel the need to expound on their deeply held and often clumsily constructed theories about politics to troll their listeners. Some guests are serial offenders in this regard: Kevin Blackistone, for instance, has decried the playing of the national anthem at ballgames as jingoistic warmongering, and said the U.S. should boycott the Olympic Games over Israel’s actions toward the Gaza Flotilla. So you learn to avoid those segments and head over to the ones talking about whether the Vernon Davis holdout is justified and what roster moves need to be made if LeBron is going to stay in Miami.

So it is with great irritation that I have experienced the invasion of sports radio over the past few months by a voice I am more familiar with for its meandering conspiracy-theorizing over the rampant influences of the Brothers Koch: Harry Reid, whose funereal nagging about the name of the Washington Redskins has elevated this battle over political correctness from a low simmer to a hot summer topic. No one particularly cared about this fight when the Redskins were horrid (which has been pretty much every year since I was ten), but since they looked like they were getting good again a year ago, the fight is back in a big way, with all Democratic Senators (save Virginia’s Mark Warner and Tim Kaine) endorsing a name change.

Mostly, this is a sideline issue, as Redskins owner Daniel Snyder has reiterated that the team’s name will never change as long as he owns them, and as the franchise is one of the NFL’s most valuable and a gigantic money-printing machine, there seems to be no possibility of a financial incentive from advertisers or the NFL to make a change. What’s more, the poll data on Native Americans across the country shows overwhelming support for the name. There has never been a poll showing even a plurality of Native Americans in favor of a name change. Were it 90-10 in the other direction, I think the NFL would be more interested in the issue.

As a legal matter, this all changed yesterday with the ThinkProgress report that the U.S. Patent and Trademark Office’s Trademark Trial and Appeal Board had decided to cancel six federal trademark registrations of the franchise, on the reasoning that they were derogatory at the time of their registration in the 1960s. Now that the lawyers have explained what this means, it actually looks like the answer is: not a lot.

ESPN.com Sports Business reporter Darren Rovell wrote, “[w]ithout protection, any fan can produce and sell Washington Redskins gear without having to pay the league or the team for royalties and wouldn’t be in violation of any law for doing so.” That is simply not true. The decision by the TTAB does not require the Washington D.C.-based NFL team to change its name or stop using the “Redskins” marks, and it does not mean that the organization loses all legal rights in the marks. There are benefits to having a federal registration attached to an owned trademark, including but not limited to a legal presumption of ownership of the mark and the ability to bring an infringement action in federal court seeking statutory damages. Importantly, the lack of a federal registration does not equate to a free-for-all in which any individual can create merchandise bearing “Redskins” marks and sell it in commerce.

So there is no open season on Redskins merchandizing – and even if there were, it would serve to undercut only a small portion of the team’s revenue. The Redskins intend to appeal, as they have done before, and successfully. For the time being at least, the issue is no closer to a name change.

That being said, the trendlines of politics are such that I expect a name change to be inevitable in my lifetime because of where the team is located and the pressure exerted by our ruling elite. One of the big lessons of life in the Obama era is that it’s important to avoid the attention of the ruling class – lest you be audited, harassed, or generally become a hot topic of media conversation as a proxy for some other battle. There’s a reason this is happening to the Washington Redskins and not the Cleveland Indians or the Chicago Blackhawks or the Florida State Seminoles. If you live within the consciousness of a critical mass of people in power for whom all life is politicized, you will be made to bend to their will, by whatever means necessary. The last thing in the world you ought to want is for President Obama to be asked his opinion about your enterprise, and then have those around him work to make that opinion a reality.

That’s why it’s important to learn how not to be seen. We are a country now where perceptive people develop skills to go unnoticed by the imperial center. Survival now means avoiding having DC and its cohorts notice you at all costs. In this town, they understand that freedom of speech sounds like a good idea, after all, right up until the point where someone’s feelings are hurt. So in retrospect, if the Redskins wanted to remain the Redskins, they should have just left town. The Richmond Redskins would have done just fine. Either that or draft Michael Sam.

Honest opponents of the name would concede that it wasn’t a historical epithet; concede that the polling shows overwhelmingly that Native Americans don’t think it’s an epithet today; and concede that it’s not the same as the N-word and no one thinks it is, lest everyone with an R shirt be a giant racist. They would concede they’re just opposed to it because it’s the 15-minute PC hate. What Bob Costas, Keith Olbermann, and Mike Wise understand, as people who have personally experienced the hardships of abiding racism in their lives, is that the only way you can demonstrate you’re not a racist in the post-Obama era is to find new racists to attack. I don’t really mind it that much that these white liberal elitists want to demonstrate that they’re down with the struggle, but I really wish they didn’t have to ruin sports radio to do it.

 

[Originally published at The Federalist]

Categories: On the Blog

Executive fiats in the other Washington

Somewhat Reasonable - June 23, 2014, 1:52 PM

Two western state governors intend to impose low carbon fuel standards, by legislation or decree

 

Progressives believe in free speech, robust debate, sound science and economics, transparency, government by the people and especially compassion for the poor – except when they don’t. These days, their commitment to these principles seems to be at low ebb … in both Washingtons.

A perfect example is the Oregon and Washington governors’ determined effort to enact Low Carbon Fuel Standards – via deceptive tax-funded campaigns, tilted legislative processes and executive fiat.

The standards require that conventional vehicle fuels be blended with alternative manmade fuels said to have less carbon in their chemical makeup or across the life cycle of creating and using the fuels. They comport with political viewpoints that oppose hydrocarbon use, prefer mass transit, are enchanted by the idea of growing fuels instead of drilling and fracking for them, and/or are convinced that even slightly reduced carbon dioxide will help reduce or prevent “dangerous manmade climate change.”

LCFS fuels include ethanol, biodiesel and still essentially nonexistent cellulosic biofuels, but the concept of lower carbon and CO2 naturally extends to boosting the number of electric and hybrid vehicles.

Putting aside the swirling controversies over natural versus manmade climate change, its dangers to humans and wildlife, the phony 97% consensus, and the failure of climate models – addressed in Climate Change Reconsidered and at the Heartland Institute’s Climate Conference – the LCFS agenda itself is highly contentious, for economic, technological, environmental and especially political reasons.

California has long led the nation on climate and “green” energy initiatives, spending billions on subsidies, while relying heavily on other states for its energy needs. The programs have sent the cost of energy steadily upward, driven thousands of families and businesses out of the state, and made it the fourth worst jobless state in America. Governors Jerry Brown, John Kitzhaber and Jay Inslee (of California, Oregon and Washington, respectively) recently joined British Columbia Premier Christy Clark in signing an agreement that had been developed behind closed doors, to coordinate policies on climate change, low carbon fuel standards and greenhouse gas emission limits throughout the region.

California and BC have already implemented LCFS and other rules. Oregon has LCFS, but its law terminates the program at the end of 2015, unless the legislature extends it. As that seems unlikely, Mr. Kitzhaber has promised that he will use an executive order to impose an extension and “fully implement” the state’s Clean Fuels Program. “We have the opportunity to spark a homegrown clean fuels industry,” the governor said, and he is determined to use “every tool at my disposal” to make that happen. He is convinced it will create jobs, though experience elsewhere suggests the opposite is much more likely.

Mr. Inslee is equally committed to implementing a climate agenda, LCFS and “carbon market.” If the legislature won’t support his plans, he will use his executive authority, a state-wide ballot initiative or campaigns against recalcitrant legislators – utilizing support from coal and hedge fund billionaire Tom Steyer. Indeed, Inslee attended a closed-door fundraiser in Steyer’s home the very day he signed the climate agreement. The governor says he won’t proceed until a “rigorous analysis” of LCFS costs and technologies has been conducted, but he plans to sole-source that task to a liberal California company.

Their ultimate goal is simple. As Mother Jones magazine put it, “if Washington acts strongly on climate, the impact will extend far beyond Washington…. The more these Pacific coast states are unified, the more the United States and even the world will have to take notice.”

But to what end? In a world that is surging ahead economically, to lift billions out of abject poverty and disease – with over 80% of the energy provided by coal, oil and natural gas – few countries (or states) are likely to follow. They would be crazy to do so. Supposed environmental and climate benefits will therefore be few, whereas damage to economies, families and habitats will be extensive.

The Oregonian says the LCFS is “ultimately a complicated way of forcing people who use conventional fuels to subsidize those who use low-carbon fuels. It’s a hidden tax to support ‘green’ transportation. It will raise fuel prices … create a costly compliance burden … [and] harm Oregon’s competitiveness far more than it will help the environment. And that assumes it works as intended.” It will not and cannot.

The Charles River Associates economic forecasting firm calculates that LCFS laws will raise the cost of motor fuels by up to 170% over the next ten years – on top of all the other price hikes, such as minimum wage increases and the $1.86 trillion in annual federal regulatory compliance costs alone that businesses and families already have to pay. If these LCFS standards were applied nationally, CRA concluded, they would also destroy between 2.5 million and 4.5 million American jobs.

Ethanol gets 30% less mileage than gasoline, so motorists pay the same price per tank but can drive fewer miles. It collects water, clogs fuel lines, corrodes engine parts, and wreaks havoc on lawn mowers and other small engines. E15 fuel blends (15% ethanol) exacerbate these problems, and low-carbon mandates (“goals”) would likely require 20% ethanol and biodiesel blends, trucking and other groups point out.
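As a rough illustration, using only the 30 percent mileage figure above and assuming the fuel sells for the same price per gallon as gasoline, a fuel that delivers 30 percent fewer miles per gallon raises the fuel cost of every mile driven by roughly 43 percent; lower-percentage blends dilute the effect proportionally:

\[
\frac{\text{cost per mile (ethanol)}}{\text{cost per mile (gasoline)}}
= \frac{1}{1 - 0.30} \approx 1.43.
\]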

Those blends would void vehicle engine warranties and cause extensive damages and repair costs. The higher fuel costs would affect small business expansion, hiring, profitability and survival. The impact of lost jobs, repair costs, and soaring food and fuel bills will hit poor and minority families especially hard.

Some farmers make a lot of money off ethanol. However, beef, pork, chicken, egg and fish producers must pay more for feed, which means family food bills go up. Biofuel mandates also mean international aid agencies must pay more for corn and wheat, so more starving people remain malnourished longer.

Biofuels harm the environment. America has at least a century of petroleum right under our feet, right here in the United States, but “renewable” energy advocates don’t want us to lease, drill, frack or use that energy. However, the per-acre energy from biofuels is minuscule compared to what we get from oil and gas production. In fact, to grow corn for ethanol, we are already plowing an area bigger than Iowa – millions of acres that could be food crops or wildlife habitat. To meet the latest biodiesel mandate of 1.3 billion gallons, producers will have to extract oil from 430 million bushels of soybeans – which means converting countless more acres from food or habitat to energy.

Producing biofuels also requires massive quantities of pesticides, fertilizers, fossil fuels – and water. The US Department of Energy calculates that fracking requires 0.6 to 6.0 gallons of fresh or brackish water per million Btu of energy produced. By comparison, corn-based ethanol requires 2,500 to 29,000 gallons of fresh water per million Btu of energy – and biodiesel from soybeans consumes an astounding and unsustainable 14,000 to 75,000 gallons of fresh water per million Btu!
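Taking these ranges at face value, the gap can be expressed as simple ratios of water used per million Btu, comparing low end with low end and high end with high end:

\[
\text{corn ethanol vs. fracking: } \frac{2{,}500}{0.6} \approx 4{,}200\times \text{ to } \frac{29{,}000}{6.0} \approx 4{,}800\times;
\qquad
\text{soy biodiesel vs. fracking: } \frac{14{,}000}{0.6} \approx 23{,}000\times \text{ to } \frac{75{,}000}{6.0} \approx 12{,}500\times.
\]

By these figures, biofuels consume thousands of times more fresh water than fracking for the same energy output.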

Moreover, biofuels bring no net “carbon” benefits. In terms of carbon molecules consumed and carbon dioxide emitted over the entire planting, growing, harvesting, refining, shipping and fuel use cycle, ethanol, biodiesel and other “green” fuels are no better than conventional gasoline and diesel.

Put bluntly, giving politicians, bureaucrats and eco-activists power over our energy would be even worse than having them run our healthcare system and insurance websites. Spend enough billions (much of it  taxpayer money) on subsidies and propaganda campaigns – and you might convince a lot of people they should pay more at the pump and grocery store, and maybe lose their jobs, for illusory environmental benefits. But low-carbon mandates are a horrid idea that must be scrutinized in open, robust debate.

It’s time we stopped letting ideology trump science, economics and sanity. We certainly cannot afford to let despotic presidents and governors continue using executive orders to trample on our legislative processes, government by the people, constitutions, laws, freedoms, livelihoods and living standards.

Fiats are fun cars to drive. Executive fiats are dictatorial paths to bad public policy.

 

Paul Driessen is senior policy analyst for the Committee For A Constructive Tomorrow (www.CFACT.org) and author of Eco-Imperialism: Green power – Black death.

Categories: On the Blog

Common Core Violates Privacy of Students and Families

Somewhat Reasonable - June 23, 2014, 1:43 PM

The public, even parents of school-aged children, tends to trust those in authority to make good decisions and enact credible laws regarding our public education system, believing that any changes made would be in the public’s best interest. While that is largely true, citizens should remain vigilant and carefully examine any and all new laws and mandates. Complacency invites corruption. Our nation’s education system must always be one we can fully trust. Anything else is unacceptable.

The implementation of Common Core Standards, and its resulting curriculum, initiated a major shift in our nation’s education system, and the changes it requires have caused enormous controversy throughout America for numerous reasons that we have outlined in previous articles.

Let’s focus on the Data Mining element of Common Core. Now that the public has had a chance to “read the rules,” we discover that Common Core violates the privacy of students and their families through the gathering and sharing of personal information, and, worse yet, that this private information is being sent to and shared with the federal government.

Parents are particularly concerned about three major issues: 1. The ability of schools and government entities to keep personal data safe from “hackers”; 2. The reasons our federal government intervened, interfered with states’ rights, and required the gathering of personal data from students and their families; and 3. How parents can use legal means to avoid divulging intrusive private information to schools.

The Problem of Keeping Private Information Safe

We are living in an age in which most information is stored electronically. Electronic storage is popular due to its ease, convenience, and ability to hold so much data without requiring massive physical space. With these wonderful attributes, though, comes one unfortunate problem: the stored data is not as safe as we once believed. A new study indicated that almost half of all Americans’ private information has been compromised or revealed by hackers. Hackers have successfully infiltrated and gleaned information from sources that were once considered impossible to “hack,” such as chain stores like Target and even our government agencies. For that matter, our government has used sophisticated tech equipment to spy on other countries. Nobody is safe from prying technology today, and thus neither is any electronically stored information garnered through schools.

Therefore, parents should be exceedingly cautious about giving personal information to schools. Some have suggested Common Core itself could be considered one of the more dangerous domestic spying programs.  This came about when Bill Gates, one of the leaders and most avid promoters of Common Core, put millions of dollars of his own personal money into its development, implementation and advertising of the new national education program. Consider that much of the data mining will occur via Microsoft’s Cloud system.

Even the Department of Education is concerned with the issue of privacy, admitting that some of the data gathered may be “of a sensitive nature.”   This is indeed an understatement by the DOE as much of the data collected will be completely unrelated to education.  Data collected will not only include grades, test scores, name, date of birth and social security number, it will also include parents’ political affiliations, individual or familial mental or psychological problems, beliefs, religious practices, income and other incredibly sensitive, highly private information about the student and the student’s family.

There is also concern that private companies donate education apps to schools in exchange for children’s information, increasing the threat of children’s personal data being abused.

According to The New American, schools in Delaware, Colorado, Massachusetts, Kentucky, Illinois, Louisiana, Georgia and North Carolina have committed to “pilot testing” and information dissemination via sending students’ personal information to the InBloom database (a non-profit group funded by the Gates Foundation and supported by Amazon). Not yet known is whether parents know and/or approve of the dissemination of that personal information.

Reasons for the accumulation of student/family Data

We have all heard the quote: “Information is Power.” New York Times columnist Matthew Lesko expanded upon that theme with this statement: “Information is the currency of today’s world. Those who control information are the most powerful people on the planet – and the ones with the most bulging bank accounts.” Imagine the power of those who receive the collection of student data from nearly every student in America.

Common Core supporters will point out that there is nothing within the standards or rules which requires personal data be acquired, and that any data gathering is entirely up to the individual states. Ah, but it isn’t that simple, or even true! A little research shows that statement is highly disputed.

The federal government had been prohibited from gathering students’ specific data for a national database, but shortly after Obama became president, the Stimulus Bill provided a loophole. Money was given to each of the states to develop longitudinal data systems to catalog data generated by Common Core-aligned tests. Student information collected since 2009 was then authorized to be shared among federal agencies without the consent of parents.

The federal government encouraged states to participate in data collection initiatives such as the Data Quality Campaign, the Early Childhood Data Collaborative, and the National Student Clearinghouse, all of which helped to increase the collection and sharing of children’s formally protected data.

In addition, the National Education Data Model suggests that states increase their collection of information about students to over 400 data points on each one. That leaves little doubt that the scope of these data systems has been purposely expanded.

Beginning in the 2014-2015 school year, students under Common Core will begin taking state standardized tests, and student-specific data will be stored by the states in their newly created longitudinal data systems, designed to track student progress from kindergarten through 12th grade. That data will be dissected, supposedly for the purpose of improving education. However, as a nation, we must ask ourselves whether we want to respect individual rights of privacy or whether we want a more “collective” approach that claims an action is permissible if it benefits the majority. Consider whether such a benefit, gained at the expense of others, is moral. Leo Tolstoy said: “Wrong does not cease to be wrong because the majority share in it.”

What will be collected?  

The type of material being collected as a result of the current administration’s changes is so extensive that one could say almost everything will be included, some of it highly personal. Of course test scores will be collected, and be aware that Common Core encourages massive testing. What is strange, and should be a red flag to reasonable people, is why schools are also asking about students’ hobbies, psychological evaluations, medical records, religious affiliation, political affiliation, family income, behavioral problems, disciplinary history, career goals, addresses, and bus stop times and locations. It has even been suggested that schools use cameras and/or special equipment to judge facial expressions and a student’s posture in the classroom, supposedly for the purpose of assessing stress levels.

In 2012, a combination of 24 states and territories struck a deal to implement data mining in order to receive federal grants, allowing “Personally Identifiable Information” to be extracted from each student. The examples below are some of the more extreme instances of data mining, causing reasonable people to question why the government would venture into such an invasion of our privacy.

1. Political affiliations or beliefs of the student or parent;

2. Mental and psychological problems of the student or the student’s family;

3. Sex behavior or attitudes;

4. Illegal, anti-social, self-incriminating, and demeaning behavior;

5. Critical appraisals of other individuals with whom respondents have close family relationships;

6. Legally recognized privileged or analogous relationships, such as those of lawyers, physicians, and ministers;

7. Religious practices, affiliations, or beliefs of the student or the student’s parent; and

8. Details of Income.

The information will be sent to federal agencies that were put in place once the states accepted Common Core.

Local Control Compromised

When the federal government first interfered with the states’ responsibility to educate our children, a line was tragically crossed. Local control was compromised, as higher levels of government took on more responsibility and dictated more rules. While Common Core apologists try to minimize the problems their changes caused, discerning people know there has been a breach in America’s laws and traditions. Power transferred from local governing agencies to the federal government. Any advantage parents had for significant control over their children’s school or curriculum has been greatly reduced. It is easier to facilitate changes, act on complaints, and make specific adjustments when local government has the power to consider logical adjustments, rather than having to go to a state or federal level to be heard.

While Common Core supporters argued states still have the same control as always, many parents remained skeptical. It did not take long to discover just how much control the federal government now has. Our wise forefathers did not want the federal government in charge of the education of our children. Too much power! Remember the warning attributed to Lord Acton in the 1880s: “Power corrupts, and absolute power corrupts absolutely.” When we see that power has corrupted a local politician, it is fairly easy to remove and replace the person. That is not as easily discovered or accomplished when the official lives and works outside of our community.

What Parents Can do to Protect their Children from Data Mining

A California law firm, the Pacific Justice Institute, has developed a form parents can use to opt out of all statewide performance assessments, including academic achievement tests and Common Core assessments, as well as any questionnaire, survey, or evaluation containing personal questions about their child’s beliefs or practices regarding sex, family life, morality, politics, income, religion, and other highly personal matters.

Parents in other states can contact The Pacific Justice Institute for specific information, and to see if there is a similar agency in their state with a similar “opt out” form.

Conclusion

There was a time in our history in which schools needed the permission of parents for their children to go to school.  Decades later that was reversed and a law enacted that made it mandatory for all children to attend school.  Laws were eventually enacted giving schools more authority than the parents over their children’s schooling.  The current administration has taken federal control to a whole new level, which includes loss of local control and parents subjected to invasive data mining.  This did not make the front page of our newspapers.  In fact Common Core was a surprise to most teachers and local school boards, who scrambled to comply with the new law and education standards and curriculum.

Something as important as major changes in our nation’s education system deserved more input, more openness, public involvement, a public comment period, and certainly proof through trial programs that the new system is superior to the one it replaced.

Instead, our federal government and most every state government unleashed an unproven education program, making our nation’s children guinea pigs in an experiment that could prove disastrous. That is why concerned citizens throughout America are holding meetings and conferences to educate others about Common Core’s problems, to encourage state officials to enact legislation that would stop Common Core, or at the very least put a “hold” on the program until the new system can be proven to have merit, and to enact strong privacy laws that will protect both students and their families from invasive data mining.

Categories: On the Blog

Climate Change–Less of a Scientific Agenda and More of a Political Agenda

Somewhat Reasonable - June 23, 2014, 11:04 AM

Those who don’t believe in climate change are “a threat to the future,” says the Washington Post in a June 14 article on President Obama’s commencement address for the University of California-Irvine. Regarding the speech, the Associated Press reported: “President Obama said denying climate change is like arguing the moon is made of cheese.” He declared: “Scientists have long established that the world needs to fight climate change.”

The emphasis on a single government policy strays far from the flowery rhetoric found at the traditional graduation ceremony—especially in light of the timing. While the president was speaking, all of the progress made by America’s investment of blood and treasure in Iraq was under immediate threat. And, as I pointed out last week, what is taking place right now in Iraq has the potential for an imminent impact on our economic security. Instead of addressing that threat now, why is he talking about “a threat to the future” that might happen in the next 100 years?

The answer, I believe, is found later in his comments.

In his speech, Obama accused “some in Congress” of knowing that climate change is real, but refusing to admit it because they’ll “be run out of town by a radical fringe that thinks climate science is a liberal plot.”

Perhaps he’s read a new book by a climatologist with more than forty years of experience in the discipline: The Deliberate Corruption of Climate Science by Tim Ball, PhD—which convincingly lays out the case for believing that the current climate change narrative is “a liberal plot.” (Read a review from Principia Scientific International.) In the preface, Ball states: “I’ve watched my chosen profession—climatology—get hijacked and exploited in service of a political agenda.” He indirectly calls the actions of the president and his environmental allies “the greatest deception in history” and claims: “the extent of the damage has yet to be exposed and measured.”

It is not that Ball doesn’t believe in climate change. In fact, he does. He posits: “Climate change has happened, is happening and will always happen.” Taken literally, Obama’s cheese comment is accurate: no scientist, and no one in Congress, denies that the climate changes. However, what is in question is the global warming agenda that has been pushed for the past several decades, which claims that the globe is warming because of a human-caused escalation of CO2. When global warming alarmists use “climate change,” they mean human-caused climate change. Due to the lack of “warming,” they’ve changed the term from global warming to climate change.

Nor is he against the environment, or even environmentalism. He says: “Environmentalism was a necessary paradigm shift that took shape and gained acceptance in western society in the 1960s. The idea that we shouldn’t despoil our nest and must live within the limits of global resources is fundamental and self-evident. Every rational person embraces those concepts, but some took different approaches that brought us to where we are now.”

Ball continues: “Environmentalism made us aware we had to live within the limits of our home and its resources: we had a responsibility for good stewardship.” But, “the shift to environmentalism was hijacked for a political agenda.” He points out: “extremists demand a complete and unsustainable restructuring of world economies in the guise of environmentalism” and claims: “the world has never before suffered from deception on such a grand scale.”

Though it is difficult to comprehend that a deception on such a grand scale, as Ball projects, could occur, he cites history to explain how the scientific method was bypassed and perverted. “We don’t just suddenly arrive at situations unless it is pure catastrophe. There is always a history, and the current situation can be understood when it is placed in context.”

In The Deliberate Corruption of Climate Science, Ball takes the reader through history and paints a picture based on the work of thought leaders of their day such as Thomas Malthus, The Club of Rome, Paul Ehrlich, Maurice Strong, and John Holdren. Their collective ideas led to an anti-development mindset. As a result, Ball says: “Politics and emotion overtook science and logic.”

Having only been in this line of work for the past seven-and-a-half years, I was unfamiliar with the aforementioned figures. But Ball outlines their works. Two quotes, one from Ehrlich, author of the now fully discredited The Population Bomb, and the other from Strong, who established the United Nations Environment Program (the precursor to the Intergovernmental Panel on Climate Change), resulted in an epiphany for me. I now know that the two sides of the energy debate are arguing apples and oranges.

I’ve been fighting for cost-effective energy, jobs, and economic growth. I point out, as I do in a video clip on the home page of my website, that the countries with the best human health and the most physical wealth are those with the highest energy consumption. I state that abundant, available, and affordable energy is essential to a growing economy. I see that only economically strong countries can afford to care about the environment.

While the other side has an entirely different goal—and it’s not just about energy.

Ehrlich: “Actually, the problem in the world is there are too many rich people.” And: “We’ve already had too much economic growth in the United States. Economic growth in rich countries like ours is the disease not the cure.”

Strong: “Isn’t the only hope for the planet that the industrialized nations collapse? Isn’t it our responsibility to bring that about?” 

When the other side of the energy debate claimed that wind turbines and solar panels would create jobs and lower energy costs, despite overwhelming evidence to the contrary, I mistakenly assumed that we had similar goals but different paths toward achieving them. But it isn’t really about renewable energy, which explains why climate alarmists don’t cheer when China produces cheap solar panels that make solar energy more affordable for the average person, and instead demand tariffs that increase the cost of Chinese solar panels in the U.S.

Ball states: “In the political climate engendered by environmentalism and its exploitation, some demand a new world order and they believe this can be achieved by shutting down the industrialized nations.”

He cites Strong, a senior member of The Club of Rome, who in 1990 asked: “What if a small group of these world leaders were to conclude the principal risk to the earth comes from the actions of rich countries?” A year later, The Club of Rome released a report, The First Global Revolution, in which the authors state: “In searching for a common enemy against whom we can unite, we came up with the idea that pollution, the threat of global warming, water shortages, famine and the like, would fit the bill. …The real enemy then is humanity itself.”

Throughout the pages of The Deliberate Corruption of Climate Science, Ball goes on to show how in attempting to meet the challenge of collapsing an industrialized civilization, CO2 becomes the focus. “Foolishly we’ve developed global energy policies based on incorrect science promulgated by extremists.”

Ball concludes: “Because they applied politics to science they perverted the scientific method by proving their hypothesis to predetermine the result.” The results? “The sad truth is none of the energy and economic policies triggered by the demonization of CO2 were necessary.”

Obama said: “Scientists have long established that the world needs to fight climate change.” Yes, some have—many for reasons outlined in Ball’s easy-to-read new book. But, surely not all. Next month, hundreds of scientists, policy analysts, and thought leaders, who don’t agree with the president’s statement (including Ball and myself), will gather together for the Ninth International Conference on Climate Change. There, they won’t all agree on the reasons, but they’ll discuss and debate why each believes climate change is not a man-caused crisis. In real science, debate is welcome.

The computer models used to produce the scientific evidence and to provide legitimacy in support of the political agenda have a record of failed projections that would have doomed any other area of research and policy. Ball points out: “The error of their predictions didn’t stop extremists seeing the need for total control.”

The claim of consensus is continually touted and those who disagree are accused of thinking the moon is made of cheese. According to Ball: “Consensus is neither a scientific fact nor important in science, but it is very important in politics.”

Do you want to live in a world with “the best human health” or in one where “the real enemy is humanity itself?” Energy is at the center of this battle.

“It is time to expose their failures [and true motives] to the public before their work does too much more damage.”

Author’s Note: The title is taken from a 2011 quote from India’s Union Environment Minister Jairam Ramesh.

The author of Energy Freedom, Marita Noon serves as the executive director for Energy Makes America Great Inc. and the companion educational organization, the Citizens’ Alliance for Responsible Energy (CARE). Together they work to educate the public and influence policy makers regarding energy, its role in freedom, and the American way of life. Combining energy, news, politics, and the environment through public events, speaking engagements, and media, the organizations’ combined efforts serve as America’s voice for energy.

[Originally published at Red State]

 

Categories: On the Blog

The Rebirth of Austrian Economics

Somewhat Reasonable - June 23, 2014, 10:45 AM

Forty years ago, during the week of June 15-22, 1974, the Austrian School of Economics was reborn during a conference in the small New England town of South Royalton, Vermont. Why was this important? Because the economists of the Austrian School have developed the most persuasive understanding of why only economic freedom can give mankind both liberty and prosperity.

During the Great Depression of the 1930s, many economists and political policy-makers argued that capitalism was a “failure” and only wisely guided government intervention and regulation of the market place could bring stability and fairness to society.

The Domination of Big Government Ideas

For the next thirty years following the Second World War, Keynesian Economics dominated economic policy decision-making. Government, it was said, had to have the discretionary authority to manipulate spending and taxing as well as the monetary system to assure full employment and stable economic growth.

This was matched by a rarified mathematical formalism in the higher levels of economic theory in which the everyday individual was reduced to a mere passive variable in a series of equations, with the assistance of which it was presumed government could successfully micro-manage the market. Unless regulated and guided by the superior hands of the government policy-makers, society would fall into waste and inefficiencies due to people’s wrong choices and misplaced actions when left on their own.

The Beginning of Austrian Economics

Almost 145 years ago, Carl Menger founded the Austrian School of Economics. One of the pathfinders to break asunder the myth of the labor theory of value, which had dominated economics from the time of Adam Smith to that of Karl Marx, Menger developed the subjective theory of value. The value of a good, Menger explained, was not determined by the amount of labor devoted to making it; rather, the labor derived its value from the intensity of the desire for the product felt by the individual who would finally use or consume it. Since individuals valued things differently and by different scales of importance, there was no way to objectively determine the value any market-traded good might have other than relating it back to the personal (“subjective”) judgments of the individual valuator.

Menger was soon followed by two disciples who refined Austrian theory to such a point that it became a major force in the world of ideas. Friedrich von Wieser formulated the concept of opportunity cost, by which is meant that nothing is free. The fact that most of the means that we use to achieve our various ends are scarce (too limited in supply to enable us to attain all the goals for which those means might be used) means we always have to make trade-offs.

The cost of anything is the alternative goal, purpose or end for which some scarce means might have been used if we had not instead valued more highly some other use for which we ended up applying those limited means. The idea that government can give people a “free lunch” is fundamentally wrong; what the government gives to someone with one hand it must take from someone else with the other, because the available means are not enough to fully satisfy both uses at the same time.

Eugen von Böhm-Bawerk developed Menger’s theory of subjective value and applied it to the problem of savings, investment, and the creation of capital. Everything we do involves time. Whether we are boiling an egg, constructing a tunnel through a mountain, or planting a crop for food, all of our production activities take time.

This requires that individuals must save enough to free up the resources needed to build the capital goods and cover people’s living expenses until the production processes are completed at some point in the future when more and better goods and services will be forthcoming as the benefit from having waited for them.

Government taxation and regulation can undermine if not destroy the ability and motive of people to do the savings and investing that is essential if we are all to benefit from rising standards of living in the future.

Ludwig von Mises and the Case for the Free Market

In the twentieth century, Ludwig von Mises extended the Austrian approach. Mises applied Menger’s subjective value theory to the area of money and developed the “Austrian” theory of the business cycle. Government manipulation of money and credit in the banking system throws savings and investment out of balance, resulting in misdirected investment projects that are eventually found to be unsustainable, at which point the economy has to rebalance itself through a period of a corrective recession.

The only wise policy for government is to leave money and the banking system to the competitive forces of a free market to eliminate the inflationary booms and recessionary busts of the business cycle, so markets can effectively keep people’s saving and investing decisions in balance for well-coordinated economic stability and growth.

Mises also demonstrated in the early 1920s why the new experiment with socialist central planning in communist Russia would eventually fail. Rational and efficient economic decision-making requires market-generated money prices to determine and calculate the relative values of the finished goods that consumers might wish to buy in comparison with the costs of using the means of production – land, labor, and capital – in one production activity instead of another. On the basis of those prices, entrepreneurs can estimate the likely profits or losses from producing one product rather than some other.

Comprehensive socialism abolishes private property, bans market ownership and trading of goods and resources, and places all economic decision-making in the hands of a government central planning agency.

But without private property, there is nothing to buy and sell. With nothing to buy and sell there is no bargaining to determine possible terms-of-trade. With no agreed-upon terms-of-trade, there are no market prices.

Without market prices to tell market decision-makers the value of what consumers might want and the actual value of scarce resources in competing uses for their employment, there is no rational way for the socialist planner to efficiently and effectively know what to produce and at the lowest costs to maximize total desired production. Socialist central planning creates a society of “planned chaos.”

Based on his critique of the unworkability of socialist central planning, Ludwig von Mises developed a theory of how the competitive market process works, and of the important role of the entrepreneur in guiding production in the pursuit of profits and the avoidance of losses.

This also led Mises to a detailed critical analysis of how and why various forms of government regulation and intervention in the market economy can only distort and bring about imbalance in the market’s own coordination of multitudes of supplies and demands in the service of consumer desires. The only viable economic system for freedom and prosperity, Mises concluded, is laissez-faire capitalism.

F. A. Hayek and the Use of Knowledge in Society

Further developments in Austrian theory were the product of the versatile mind of Friedrich von Hayek, who won the Nobel Prize in Economics in 1974 a few months after this Austrian Economics conference in South Royalton, Vermont.

In the 1930s, Hayek refined Mises’ theory of money and the business cycle, and became the leading free market critic of John Maynard Keynes at the time when “Keynesian Economics” was just being developed. He insisted that government deficit spending and manipulation of spending in the economy would only slow down the normal market-generated recovery from a recession, and ran the danger of creating a future inflation that would be followed by another economic downturn.

Hayek, like Mises, was a leading critic of socialism. His core argument centered on the impossibility of even the wisest and most intelligent central planners ever having the ability to master, integrate and effectively use all the needed knowledge to successfully guide an entire economy from the offices of a government planning bureau.

The division of labor in society is matched by a division of knowledge in which each of us possesses only a limited and small amount of all the knowledge of the world in our individual minds. We all must admit and accept how ignorant any one of us is about all the forms of knowledge that exist in the world, and which must somehow be successfully brought to bear if all of us are to benefit from what one or a few people may know that we do not.

Hayek’s answer to this problem was to explain that market-generated prices serve as the communications device through which we can inform each other about our desires as consumers and our abilities as producers, while leaving us free to use the knowledge that each of us individually possesses as we find it most advantageous. Thus, freedom and prosperity are combined through the market system of prices and competition to find out who can do better in satisfying the wants of others in the pursuit of self-interested profit.

Austrian Voices at the South Royalton Conference

The Institute for Humane Studies (IHS) organized the South Royalton Austrian Economics conference, and brought to Vermont three of the leading Austrian economists of that time to deliver a series of unique and important lectures: Israel M. Kirzner, Ludwig M. Lachmann, and Murray N. Rothbard.

Israel Kirzner had studied under Mises at New York University, and in 1973 had written “Competition and Entrepreneurship,” the first of many books explaining the importance of the alert and creative market-based entrepreneur who brings about the balance and coordination of supplies with consumer demands through his pursuit of profit opportunities.

Murray Rothbard had already made an outstanding name for himself as an Austrian economist with his two-volume work, “Man, Economy and State” (1962), in which he developed the entire edifice of economic understanding following in the footsteps of Ludwig von Mises. His 1963 book, “America’s Great Depression,” demonstrated that the economic depression of the 1930s had its origin in bad Federal Reserve monetary policy in the 1920s, and was made far worse than it needed to be by the wrong-headed interventionist policies of the Hoover Administration in the early 1930s.

Ludwig Lachmann had studied with F. A. Hayek at the London School of Economics in the 1930s, and went on to challenge the Keynesian misconception that the economy should be viewed and treated as one single aggregate lump of economic output. He subtly showed that the market is an intricate web of multitudes of individual supplies and demands interconnected in ways that could have no harmonious order to them other than through the free competitive actions of people, themselves, in a dynamic world of unexpected change.

Austrian Economics as Good Economics

The first day of the conference was highlighted by an opening evening banquet. At the dinner, free market economist Henry Hazlitt (the author of “Economics in One Lesson”) reminisced about how he first met Ludwig von Mises in the 1940s. The noted anti-Keynesian economist W.H. Hutt talked about the contributions that Mises made to economics. And Murray Rothbard related some of the amusing anecdotes Mises would tell during the graduate seminars that Mises taught at New York University from 1945 until his retirement in 1969 at the age of 87.

Milton Friedman, who had a summer home in Vermont and who had been invited to the dinner, was asked to make a few comments. He admitted that Mises had made a number of notable contributions to economics, but that he was much too “extreme” in his views on economics and public policy. Besides which, Friedman added, there was no such thing as “Austrian economics,” only good economics and bad economics.

Clearly Friedman considered the attendees at that conference to be on a “fool’s errand” in focusing on something called “Austrian” economics. But those of us attending that week considered Austrian Economics to be good economics for understanding the nature and workings of the real world of the free marketplace.

Human Action and Man as Unique Chooser

Starting the next day, a week of rigorous and incisive lectures began dealing with every aspect of “Austrian” theory. Rothbard and Kirzner laid the foundation by explaining the implications of the Austrian theory of human action and choice. The study of economics, Rothbard pointed out, begins with the fundamental axiom that man acts, that conscious action is taken to achieve chosen goals. This also implies that all action is purposeful and rational from the point of view of the actor.

All action, besides which, occurs through time. Action is taken now with the expected attainment of some result in the future. It also means that man acts without omniscience, for if an individual knew what the future would be in all its rich detail, then his action to replace one state of affairs with another would be pointless. With a guaranteed and certain future, action becomes worthless, because nothing can be changed in that future and the idea of people making their free choices becomes meaningless.

The fact that action is purposeful, chosen, and personally subjective also means that any statistical or historical studies that attempt to measure or predict human activity must be seen as having limited usefulness. Kirzner used the example of a man from Mars looking down at the earth through a telescope. The Martian observes that out of a box every day comes an object that enters another rectangular box that then moves away through a maze of canals and intersections. The Martian notices that on certain days the object that comes from the first box moves rapidly to catch up to the second, rectangular box. He then draws up a statistical study showing that one out of ten times the object will move rapidly to reach the rectangular box and uses this for predictions of “earthly” activities.

What has been totally overlooked by this method is that the first box happens to be an apartment building out of which comes an individual who goes to the street corner to catch the morning bus to work. The fact that on occasion the individual in question oversleeps and has to rapidly chase after the bus, so as not to miss it, does in no way guarantee that he may not get a better alarm clock, go to sleep earlier, or in the future, oversleep even more often. Nor does one individual’s actions determine how another individual will act in the same circumstances. Thus, to base one’s understanding of man on statistics and historical studies alone is to ignore that human action is volitional, purposeful, and changeable, dependent on the goals and means of the acting individual.

The inability of the economics profession to grasp the mainsprings of human action has resulted from their adoption of economic models totally outside of reality. In the models put forth as explanations of market phenomena, equilibrium — that point at which all market activities come to rest and all market participants possess perfect knowledge with unchanging tastes and preferences — has become the cornerstone of most economic theory.

The Market Process and the Entrepreneur

Lachmann, in an illuminating lecture, explained that the market is not a series of equilibrium points on a curve, but rather, it’s a constant process kept moving because the underlying currents of human action never rest. Men, lacking omniscience, integrate within their plans the information provided by a constant stream of knowledge about changes in resource availabilities, the relevant actions of other men, and unexpected occurrences. But because each man’s perspective and interpretation of this stream of knowledge may be different from that of others, what seems relevant to one individual may be discarded as insignificant by another.

The unknowability of the future means that individuals draw conclusions based upon expectations of what will happen over time. Divergent expectations and unexpected change, therefore, result in potential inconsistency of interpersonal plans. When errors become visible to individuals, each market participant will learn different lessons from the revised, available information. And, thus, we are again faced with the possibility of inconsistency of different market plans.

But if the plans of market participants can never be expected to smoothly and automatically mesh, what forces in the market tend toward an equilibrating, or coordinating, of the actions of multitudes of human actors? At this point, Professor Kirzner’s follow-up lecture offered the clue. Acting man is not merely a blind “taker” of prices and resource offerings; rather, because of the fact that unexpected change occurs in an uncertain future, man is also “watchful.”

Alertness to previously unseen opportunities serves as the key to the equilibrating market forces. This human capacity for alertness, said Kirzner, is the entrepreneurial role. It is not merely the difficult task of knowing when to hire and where to place the worker. It’s a much more subtle and rarified knowledge; it’s the ability of knowing where to get knowledge, of picking up bits of information that others around you have passed up and seeing the value of it for bringing into consistency a human plan or plans that otherwise would have remained in disequilibrium. The chance to profit from information about market opportunities that others have failed to see acts as the incentive for people to keep their eyes open for inconsistencies and opportunities in human plans.

Production, Time and Money in the Market Process

Lachmann and Kirzner continued this train of thought the following day with lectures on the Austrian theory of capital. Capital is the intermediate product – often the tool or machine – used to produce a finished good for consumption. Yet the many attempts to measure and quantify “society’s” capital stock fall apart when we once again emphasize the nature of purposeful action. A particular good is seen as a “production good” useful for a particular purpose only within the context of a human plan. That object that may be seen as a capital good in one instance may become totally worthless or shift to a consumer good tomorrow, depending upon the changing subjective valuations and judgments of the individuals interacting in the market.

The elusiveness of market equilibrium often means, as Lachmann pointed out, that a tendency toward structural integration of interpersonal plans may exist, but combinations found not to fit within existing plans may end up being scrapped and are, therefore, no longer really “capital” in the eyes of the valuator. Kirzner continued the discussion, pointing out that capital is the complex of “half-baked cakes,” the interim form a resource takes in the process of a human plan leading to the final stage of producing a product to satisfy the wants of some consumers.

Rothbard delivered an interesting and comprehensive lecture on the Austrian theory of money. It was Ludwig von Mises, Rothbard pointed out, who first applied the principles of marginal utility to money, showing how money originated and how exchange values were established on the market. Professor Rothbard suggested three areas for possible future research: (1) how to separate the state from money; (2) the question of free banking vs. 100-percent-gold dollars; and (3) the defining of the supply of money.

He followed up with a lecture on “New Light on the Pre-History of the Austrian School,” and showed the development of marginal-utility theories through the Middle Ages in Spain and Italy.

The Central Error in Keynesian Economics

Lachmann finished his series of lectures with critiques of macroeconomics and its recent controversies.  He argued that the market is a complex and ever-changing network of multitudes of individual actions and reactions to what everyone else is attempting to do in the pursuit of their desired goals and ends.

The Keynesian attempt to reduce all the rich complexity of human activity to a few simple statistical aggregates for government manipulation and control not only misunderstood the real and true nature of a dynamic and competitive market system, but was likely to lead to government policy mishaps that would create far more instability and disorder than if the political authorities simply left the market alone.

Showing How Government Policy Goes Wrong

On the last day of the conference, Kirzner and Rothbard summed up the Austrian approach within a consideration of the “Philosophical and Ethical Implications of Austrian Economic Theory.” Kirzner restated the principle of “value-freedom” in economic analysis. As an economist, the Austrian theorist does not make judgments on ends chosen by people in the market. The economist’s task is to objectively analyze whether or not the means proposed to achieve a particular goal or end are the most appropriate or efficient for that purpose. The economist on his own cannot say or judge whether the goal or end being pursued by an individual, with whatever means chosen, is in itself “good” or “bad.”

While admitting this, Rothbard wondered if the economist could be totally value-free in all instances. What if a politician has as his goal the economic impoverishment of the nation so as to use demagoguery for gaining political power? Are we to tell him that this is a “good” means to achieve his end? Thus, Rothbard concluded, it may often be necessary to have certain value-laden principles to judge ends as well as means.

Conference Life in South Royalton

The evenings during the week were partly spent with the participants discussing the topics lectured about that day. But in addition, Murray Rothbard would “hold court” every night until the wee hours of the morning. He would tell funny stories, and relate an unending stream of hilarious anecdotes about famous people alive and dead. He amused his audience with a repertoire of “left-wing” and “right-wing” political songs that he knew in several languages. And he optimistically argued for the importance of Austrian Economics and a political philosophy of liberty if the human race was to free itself from the dangers of oppressive and harmful government.

The rustic appearance and the somewhat antiquated facilities and features of the town of South Royalton led Ludwig Lachmann to observe at the end of the week that he could now say that he knew what life had been like in the nineteenth century!

The slanted floor in the room I was staying in required me to spend the night holding on to the sides of the bed so I would not slide out the window behind the low headboard. And some strange mishap seemed to have occurred to one of the female attendees while she was alone taking a shower, though it was all too “shocking” for her to relate the details.

Another participant, who originally came from Yugoslavia, said that some things in the town seemed so “scary” at night that he admitted the following: “I lived under Nazi occupation and I endured life under communist rule in my native Yugoslavia. But last night was the first time in my life that I slept with the light on!”

The Catalyst for Austrian Economics Reborn

The organizers at the Institute for Humane Studies had sensed the rightness of the time for arranging such a conference as a catalyst for expanding interest in the Austrian School of Economics. And with that goal in mind it can only be said, forty years later, that it was a resounding success.

Between the South Royalton conference in June of 1974 and the awarding of the Nobel Prize in Economics to F.A. Hayek in October of that same year, the Austrian School began a brilliant renaissance that has once more made it one of the most important forces for sound ideas on economics and public policy making in the world today. This was assisted at first, also, by the publication of those lectures delivered at South Royalton in book form in 1976 under the title, “The Foundations of Modern Austrian Economics.”

After near oblivion in the decades immediately after the Second World War due to the dominance of Keynesian Economics, the Austrian School has been reborn. There are universities at which undergraduate and graduate students can take courses on Austrian economics with professors knowledgeable about and dedicated to the tradition that began with Carl Menger and then grew under the ideas of Ludwig von Mises and F. A. Hayek.

There are, now, at least three scholarly journals devoted to the further development of Austrian Economic ideas, plus online websites, blogs, and printed publications explaining and applying “Austrian” ideas to the contemporary policy problems of the day. In addition, well-known and respected publishing houses print both scholarly and popular books on Austrian Economics every year.

Even some prominent political figures have publicly advocated the implementation of free market-oriented policies on the basis of Austrian economic insights – including abolishing the Federal Reserve and moving money and banking into the arena of the competitive free market.

All of this has had a good part of its beginning with that conference on Austrian Economics forty years ago in a small, out-of-the-way New England town.

[Originally published at Epic Times]

 

Categories: On the Blog

Movement of the Permanent Internet Tax Moratorium

Somewhat Reasonable - June 22, 2014, 3:42 PM

This morning the House Judiciary Committee will undertake the markup of the Permanent Internet Tax Freedom Act.  The Act would protect consumers from the increased costs in accessing and using the Internet by permanently extending the moratorium on Internet access taxes, and would prevent multiple and discriminatory taxation of Internet sales.

The legislation already boasts deep bipartisan support, with 138 Republican and 76 Democrat co-sponsors. That’s 214 members of the House supporting it, and more are rumored to join soon, which would bring the total to more than 50 percent of the chamber. The Senate version of the bill has 50 co-sponsors. So there is already enough support for a permanent moratorium that doesn’t add extraneous elements that could cause the moratorium to fail.

The legislation also enjoys broad support of thought leaders and citizens, as was made clear in an April letter to Congress. But time to pass the measure is of the essence since the moratorium will expire on November 1 of this year. If allowed to expire, states would begin to collect taxes on Internet access, or apply other discriminatory taxes that may already be in place but which have been held at bay during the moratorium.

Scott Mackey, former chief economist for the National Conference of State Legislatures and currently a consultant to the wireless industry, has estimated that an average household’s taxes would increase by $50 to $75 a year if states decide to apply their sales or telecommunications taxes to Internet access. While that doesn’t seem like much, keep in mind that that’s about what a low-income family spends in a year on subsidized school lunches. Those who qualify for such programs are exactly those who will be most negatively affected by a lapsed moratorium.
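To see how an estimate in that range might be arrived at, here is a minimal arithmetic sketch; the monthly bill and combined tax rate below are illustrative assumptions, not figures taken from Mackey’s analysis.

# Rough illustration of the arithmetic behind a per-household estimate like this.
# Both inputs are assumptions for illustration only.
MONTHLY_INTERNET_BILL = 60.00  # assumed average household broadband bill, in dollars
COMBINED_TAX_RATE = 0.08       # assumed state/local sales or telecom tax rate

annual_tax = MONTHLY_INTERNET_BILL * COMBINED_TAX_RATE * 12
print(f"Estimated added tax per household: ${annual_tax:.2f} per year")
# With these assumptions the result is about $58 a year, inside the $50-$75
# range cited above; a higher telecom-style tax rate pushes it toward the top.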

Businesses also lose money when Congress doesn’t send a clear message. If Congress dallies—and history has proven that Congress rarely acts in time—telecommunications providers would need to prepare to collect the new taxes. That effort would be a waste of time and resources if Congress were to ride to the rescue at the last minute, a consequence of government’s cavalier attitude. Less economic growth and fewer jobs are the result.

Hopefully, the next step on the right path will be taken today, with the House Judiciary Committee deciding that the moratorium must continue and refraining from introducing other issues that would end its progress in the House.

[Originally published at The Institute for Policy Innovation]

Categories: On the Blog

Just What Is the Perfect Level of CO2?

Somewhat Reasonable - June 22, 2014, 10:06 AM

Ever been in an argument with an AGW proponent?

I have stopped trying to argue with someone who refuses to look at anything but that which supports his own position. It’s pointless. So in an effort to end a debate quickly, I now politely ask individuals to explain how CO2, given how small it is relative to everything around it, actually changes the entire system. That usually stops it with most of the crowd. Like many things I see with new age forecasters today, they will jump on one weather factor and not understand that its behavior is the result of everything around it.

The second thing I do is put the ball in their court. This requires knowing what went on historically with weather/climate. So I ask what the perfect number is for CO2 in the atmosphere. An example: Dr. Bill McKibben – one of the people who frequently amazes me, because his comments indicate he either does not know and understand what the weather has done before, or does and refuses to let that get in the way – runs a group called 350.org. He and his team want CO2 at 350 ppm (parts per million). So let’s just go back to 350 ppm and see what it was like.

First, here is CO2 on the “correct” scale, which is the percentage of the atmosphere. This is not what you commonly see, which is the amount of CO2 in parts per million, where CO2 is grossly over-represented. The scale should be from one to a million, not a tiny fraction of a million.

Now, by using the very tiny increment they do, and by not informing you that if you actually used the scale from one to a million this would hardly show up, they’re guilty of creative distortion of reality. After all, aren’t we measuring this against the entire atmosphere? Just think how absurd it would be if we measured against the entire system: ocean plus atmosphere. The oceans play a huge role in the climate. It’s the reason for Dr. William Gray’s spot-on assessment of this whole charade.
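For readers who want the scaling point in concrete numbers, here is a minimal sketch using only the roughly 400 ppm current level and the 350 ppm target mentioned in this piece; the zoomed axis range is an illustrative assumption.

# Express atmospheric CO2 on the two scales discussed above.
co2_ppm_now = 400      # approximate current concentration cited in the text
co2_ppm_target = 350   # the 350.org target

for label, ppm in [("now", co2_ppm_now), ("350.org target", co2_ppm_target)]:
    percent_of_atmosphere = ppm / 1_000_000 * 100
    print(f"{label}: {ppm} ppm = {percent_of_atmosphere:.3f}% of the atmosphere")

# On an axis running from 0 to 1,000,000 ppm (the whole atmosphere), the move
# from 350 to 400 ppm spans 0.005% of the axis; on an axis zoomed to roughly
# 300-420 ppm, the same move spans more than 40% of it.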

Anyway, on the graph below, the numbers on the left are in parts per million. We are near 400 ppm now, and the last time it was near 350 ppm was back around 1988.

Here are just a few samples of the weather that year.

Summer:

Average since then:

That was the summer all the hysteria began on the upcoming climate disaster. But what about precipitation?

Since then:

What about hurricanes? What did the ACE Index look like? Gee, about the same as now.

In fact, after the peak when the Pacific and Atlantic were warm in tandem, it looks like this recent downturn is lower than in the late ‘80s. This may be because whenever there is a “climatic shift” (in the late 1970s the shift was to warming because the PDO turned warm; it’s now the opposite), the atmosphere needs to adjust so that the processes that lead to above-normal activity can readjust.

What about ice caps? Look at the Arctic when the Atlantic was in its cold mode. 1988 had much higher anomalies than now.

But the Southern Hemisphere ice anomaly is much higher than it was then! In fact, it’s trying for a record!

In 1988, the Southern Hemisphere was as far below the averages (further, in fact; it dropped to -1.5) as the Northern Hemisphere is now, and the forecast continues to call for Arctic sea ice extent to rise above average at the late-summer minimum. This would be the first time this has happened since the Atlantic went into its warm mode.

Globally we’re well above average. Are we not supposed to consider the whole globe on this crucial matter? It was the ice caps – plural – that were supposed to melt. Could it be like almost everything in nature – a cyclical back and forth swing?

So far, the Arctic “warm season” has been colder than 1988 (last year was the coldest ever recorded).

Here it was in 1988:

The fact is, most of the “global” warming has occurred in the Arctic during the winter seasons, where temps 5-10 degrees Fahrenheit above normal are frigid anyway. Given the amount of water vapor in such low temperatures – water vapor being the number 1 greenhouse gas (100x CO2) – it’s a stretch to think this is affecting the entire global climate against anything that can be measured against normal stochastic and cyclical events.

Now you may say, “You are cherry picking.” I can cherry pick any time and find it worse. The fact that I can instantly bring up any time when the weather has been more extreme says that in the past, the weather has been more extreme! We can go on forever, believe me. Here is another sample: How is it that most of the states’ record high temperatures, and the greatest decade for record low temperatures, came in the 1930s, when CO2 was under 300 ppm?

We are not even close now. Anyone ever consider this? We have added considerably more weather stations, yet the state records set during a time with fewer stations than now have not been exceeded. And even though summers were hotter, winters were colder in terms of extremes.

Here’s a fact: CO2, like anything, has some effect on the weather and climate, probably relative to its relationship with water vapor, which is most likely influenced by the greatest store of heat (energy) in the system (and it’s also the greatest store of CO2) – the oceans. But can you measure it against the natural cyclical reactions driven by much greater forces and even stochastic events? Can you assign a value when every single point brought up by the AGW side can be easily countered by anyone who knows and understands what has happened in weather and climate in the past? How do you know? And given what is facing us today, is CO2’s value to the climate effectively rounded so close to zero that the whole issue is a red herring?

Look at this. The title says it all.

The answer is, you can’t.

Finally, from IPCC reviewer Dr. Vincent Gray:

Faith in things unseen defines something that is preached in religion. But with all the counter-evidence here, it seems like this worship of CO2 as the climate control knob is more religion than science. I don’t force my religion on another man; why is it these folks seem to be pushing theirs on us? And like so many other religions that believe they must convert all men to their belief, this too is a recipe for widespread misery and, in most cases, disaster.

So just what is the perfect level of CO2, and who among men thinks they are fit to decide that, given the overwhelming evidence that nature is in control?

Joe Bastardi is chief forecaster at WeatherBELL Analytics, a meteorological consulting firm.

© Copyright 2014 The Patriot Post

 

[Originally published at The Patriot Post]

Categories: On the Blog