On the Blog

Dropcam Key to Google’s New Ubiquitous Physical Surveillance Network

Somewhat Reasonable - June 25, 2014, 9:44 AM

Google recently bought Dropcam for $555 million. The company makes inexpensive, easy-to-install, WiFi video-streaming cameras that connect to cloud-based networks for convenient monitoring, set-up and retrieval.


Please don’t miss this graphic – here – of how the Dropcam acquisition fits into Google’s plans for a new ubiquitous physical surveillance network that will complement and leverage its existing virtual surveillance network.

Dropcam fills a big missing part of Google’s vision – literally to see, hear and track everything – in order to fulfill Google’s mission “to organize the world’s information.”

Most Rapid and Complete Vertical-Integration

What is remarkable here is that in only about six months Google has bought six key companies (Boston Dynamics, Nest, DeepMind, Titan Aerospace, Skybox, & Dropcam) that comprise many of the key building blocks necessary to create a ubiquitous surveillance network that can physically track most everyone and everything from the sky and on the ground.

Effectively Google is taking its dominant ad-driven surveillance model to the next level. Obviously it is not content with dominating just the virtual world of data and monetization of software products and services. Apparently, Google has ambitions to leverage its virtual dominance to dominate large swaths of the physical economy as well: e.g. wearables, devices, aerial mapping, robots, cars, energy management, smart home services, Internet access, etc.

Importantly, physical surveillance, involving hardware and people, is much more difficult-to-scale, costly and people-intensive than Google’s virtual surveillance via cookies and other easy-to-scale software tracking technologies.

Evidently, no other company or entity is looking at the 21st century world and economy as holistically as Google, whose apparent vision is to fully integrate virtual and physical surveillance networks.

One could argue that these strategic acquisitions over the last half-year could prove more cumulatively transformative of Google's strategic direction, business mix and long-term capabilities than those of any other half-year in Google's storied history.

Simply put, just as the Google+ effort seamlessly integrated dozens of online products and services into a unified offering, expect Google to embark on another effort to secretly and seamlessly integrate these many new physical assets into a unified physical surveillance network. Once complete, expect Google's dominance to be much greater than it is now, because it is vertically integrating much faster and more completely than any other entity — by far.

Accelerating & Compounding Privacy/Wiretapping Problems  

The privacy problems with physical surveillance in the real world are dramatically greater than in the largely-privacy-free virtual world.

For example, consider the two big privacy problems Google got into when it effectively wiretapped both Gmail and home WiFi via Street View. For Gmail, a Federal Judge has ruled that Google's installation of a physical "Content One Box" to scan Gmails to create advertising profiles was effectively illegal interception or "wiretapping." For Street View, a Federal Appeals Court also has ruled that Google's Street View interception of home WiFi signals was effectively wiretapping because the signals were judged to be private and not public.

The super big problem here for Google is that in at least two of Google's highest-profile and longstanding services, Google did not believe it needed either to disclose what it was doing with others' communications or to ask anyone for permission to do what it was doing with their private information.

If surveillance innovation-without-permission is the norm at Google, and Google continues to maintain the legal position that people “have no expectation of privacy,” Google’s physical surveillance using Dropcam, and other physical surveillance technologies, for Google’s business purposes, could be at risk of being ruled illegal wiretapping as well.

Google evidently recognized the obvious potential privacy problems with owning Dropcam: it announced first that Google-owned Nest (not Google itself) was the entity buying Dropcam, and second that Nest's separate privacy policy would not allow the sharing of private Dropcam monitoring information with Google.

Ironically and tellingly, it took only a couple of days for Google to undermine the public assertion that Google could not access private Dropcam information under Nest’s privacy policy. Google just announced that Nest will allow Google and some App developers to have access to some of the private information that Nest (and now Dropcam) collects on its users. Apparently, the claimed privacy “Chinese Wall” policy may be more like a screen door in practice.

A Profound Business Conflict-of-Interest

In conclusion, the acquisition of Dropcam potentially provides Google's engineers and advertising business model with arguably some of the most private, intimate, and valuable personal information available — a continuous inside look into someone's inner sanctum where the public and competitors could never go or see. The temptation for Google to use and leverage this valuable private information will be enormous.

With Nest, but even more so with Dropcam, Google has created a profoundly serious business conflict-of-interest by putting a paid-privacy-based-service inside a privacy-hostile advertising business model thirsting for access to the most valuable private info.

If there is one thing that we've learned about Google – from its world's-worst privacy rap sheet, and its latest ambitions for a ubiquitous physical surveillance network – it is that Google has very serious problems respecting boundaries and asking for permission to use others' private data.

George Orwell, in his classic dystopian novel "1984," envisioned a surveillance technology called the telescreen that is eerily similar to Google-Dropcam's capabilities today. It appears Google's latest acquisition spree to assemble a ubiquitous physical surveillance network enables Google to be the 21st century's Big Brother Inc.

Forewarned is forearmed.

Originally published at www.precursorblog.com.

 

Categories: On the Blog

New York, Legacy Cities Dominate Transit Urban Core Gains

Somewhat Reasonable - June 25, 2014, 9:18 AM

Much attention has been given the increase in transit use in America. In context, the gains have been small, and very concentrated (see: No Fundamental Shift to Transit, Not Even a Shift). Much of the gain has been in the urban cores, which house only 14 percent of metropolitan area population. Virtually all of the urban core gain (99 percent) has been in the six metropolitan areas with transit legacy cities (New York, Chicago, Philadelphia, San Francisco, Boston, and Washington).

In recent articles, I have detailed a finer grained, more representative picture of urban cores, suburbs and exurbs than is possible with conventional jurisdictional (core city versus suburban) analysis. The articles published so far are indicated in the “City Sector Articles Note,” below.

Transit Commuting in the Urban Core

As is so often the case with transit statistics, recent urban core trends are largely a New York story. New York accounted for nearly 80 percent of the increase in urban core transit commuting. New York and the other five metropolitan areas with "transit legacy cities" represented more than 99 percent of the increase in urban core transit commuting (Figure 1). This is not surprising, because the urban cores of these metropolitan areas developed during the heyday of transit dominance, and before broad automobile availability. Indeed, urban core transit commuting became even more concentrated over the past decade. The 99 percent of new transit commuting (600,000 one-way trips) in the legacy city metropolitan areas was well above their 88 percent share of urban core transit commuting in 2000.

New York’s transit commute share was 49.7 percent in 2010, well above the 27.6 percent posted by the other five metropolitan areas with transit legacy cities. The urban cores of the remaining 45 major metropolitan areas (those over 1,000,000 population) had a much lower combined transit work trip market share, at 12.8 percent.

The suburban and exurban areas, with 86 percent of the major metropolitan area population, had much lower transit commute shares. The Earlier Suburban areas (generally median house construction dates of 1946 to 1979, with significant automobile orientation) had a transit market share of 5.7 percent, the Later Suburban areas 2.3 percent and the Exurban areas 1.4 percent (Figure 2).

The 2000s were indeed a relatively good decade for transit, after nearly 50 years that saw its ridership (passenger miles) drop by nearly three-quarters to its 1992 nadir. Since that time, transit has recovered 20 percent of its loss. Transit commuting has always been strongest in urban cores, because of the intense concentration of destinations in the larger downtown areas (central business districts) that can be effectively served by transit, unlike the dispersed patterns that exist in the much larger parts of metropolitan areas that are suburban or exurban. Transit's share of work trips by urban core residents rose a full 10 percent relative to 2000, from 29.7 percent to 32.7 percent (Figure 3).

There were also transit commuting gains in the suburbs and exurbs. However, similar gains over the next quarter century would leave transit's share at below 5 percent in the suburbs and exurbs, because of its small base of ridership in these areas.

Walking and Cycling

The share of commuters walking and cycling (referred to as “active transportation” in the Queen’s University research on Canada’s metropolitan areas) rose 12 percent in the urban core (from 9.2 percent to 10.3 percent), even more than transit. This is considerably higher than in suburban and exurban areas, where walking and cycling remained at a 1.9 percent market share from 2000 to 2010.

Working at Home

Working at home (including telecommuting) continues to grow faster than any other work access mode, though, like transit, from a small base. Working at home experienced strong increases in each of the four metropolitan sectors, rising a full percentage point or more in each. At the beginning of the decade, working at home accounted for fewer work commutes than walking and cycling; by 2010 it was nearly 30 percent larger.

Working at home's largest gain was in the Earlier Suburban areas, with a nearly 500,000 person increase. Unlike transit, working at home does not require concentrated destinations, effectively accessing employment throughout the metropolitan area, the nation and the world. As a result, working at home's growth is fairly constant across the urban core, suburbs and exurbs (Figure 4). Working at home has a number of advantages. For example, working at home (1) eliminates the work trip, freeing additional leisure or work time for the employee, (2) eliminates greenhouse gas emissions from the work trip, (3) expands the geographical area and the efficiency of the labor market (important because larger labor markets tend to have greater economic growth and job creation), and (4) does all this without requiring government expenditure.

Driving Alone

Despite empty premises about transit's potential, driving remains the only mode of transport capable of comprehensively serving the modern metropolitan area. Driving alone has continued its domination, rising from 73.4 percent to 73.5 percent of major metropolitan area commuting and accounting for three quarters of new work trips. In the past decade, driving alone added 6.1 million commuters, nearly equal to the total of 6.3 million major metropolitan area transit commuters and more than the working at home figure of 3.5 million. To be sure, driving alone added commuters in the urban core, but lost share to transit, dropping from 45.2 percent to 43.4 percent. In suburban and exurban areas, driving alone continued to increase, from 78.2 percent to 78.5 percent of all commuting.

Density of Cars

The urban cores have by far the highest car densities, despite their strong transit market shares. With 4,200 household vehicles available per square mile (1,600 per square kilometer), the concentration of cars in urban cores was nearly three times that of the Earlier Suburban areas (1,550 per square mile or 600 per square kilometer) and five times that of the Later Suburban areas (950 per square mile, or about 370 per square kilometer). Exurban areas, with their largely rural densities, had a car density of 100 per square mile (40 per square kilometer).
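
As a rough back-of-the-envelope check on those figures, the short sketch below (Python, purely illustrative) converts the per-square-mile densities quoted above into per-square-kilometer values and computes each sector's ratio to the urban core. It assumes nothing beyond the numbers quoted in this article and the standard conversion factor of roughly 2.59 square kilometers per square mile.

```python
# Back-of-the-envelope check of the household-vehicle densities quoted above.
# Figures are household vehicles per square mile, as given in the article.
SQ_KM_PER_SQ_MILE = 2.58999  # standard area conversion factor

densities_per_sq_mile = {
    "Urban Core": 4200,
    "Earlier Suburban": 1550,
    "Later Suburban": 950,
    "Exurban": 100,
}

core = densities_per_sq_mile["Urban Core"]
for sector, per_sq_mile in densities_per_sq_mile.items():
    per_sq_km = per_sq_mile / SQ_KM_PER_SQ_MILE          # convert to per square kilometer
    ratio = core / per_sq_mile                           # how much denser the urban core is
    print(f"{sector}: {per_sq_mile:,} per sq mi "
          f"(~{per_sq_km:,.0f} per sq km); urban core is ~{ratio:.1f}x this")
```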

Work Trip Travel Times

Despite largely anecdotal stories about the super-long commutes of those living in suburbs and exurbs, the longest work trip travel times were in the urban cores, at 31.8 minutes one-way. The shortest travel times were in the Earlier Suburbs (26.3 minutes) and slightly longer in the Later Suburbs (27.7 minutes). Exurban travel times were 29.2 minutes. Work trip travel times declined slightly between 2000 and 2010, except in exurban areas, where they stayed the same. The shorter travel times are to be expected with the continuing evolution from monocentric to polycentric and even “non-centric” employment patterns and a stagnant job market (Figure 5).

Contrasting Transportation in the City Sectors

The examination of metropolitan transportation data by city sector highlights the huge differences that exist between urban cores and the much more extensive suburbs and exurbs. Overall, the transit market share in the urban core approaches nine times the share in the suburbs and exurbs. The walking and cycling commute share in the urban core is more than five times that of the suburbs and exurbs. Moreover, the trends of the past 10 years indicate virtually no retrenchment in automobile orientation, as major metropolitan areas rose from 84 percent suburban and exurban in 2000 to 86 percent in 2010. This is despite unprecedented increases in gasoline prices and the disruption of the housing market during the worst economic downturn since the Great Depression.

[Originally published at New Geography]

Categories: On the Blog

Bilderberg: The Most Important Event You’ve Never Heard Of

Somewhat Reasonable - June 24, 2014, 2:36 PM

One of the world’s oldest and most important political conferences celebrated its 60th anniversary this month. The Bilderberg Group met in Copenhagen, Denmark from May 29th to June 1st to discuss matters of global import. Named after the Hotel Bilderberg where the first conference was held in 1954, Bilderberg has held meetings every year since then between many of the world’s top political, economic, and business leaders.

Yet, thanks to a culture of deep secrecy, very few people know much about Bilderberg or its objectives. This has led to a great deal of speculation, and quite a few conspiracy theories. It is important to separate myth from reality in order to understand Bilderberg because it is one of the western world’s most significant political meetings.

So what is Bilderberg, and what are they talking about this year? Here are the five things you need to know:

1. It’s the Talking Shop of the Global Elite

Bilderberg is, at its core, a gathering of the major players in the world of politics and business. The stated aim is to provide a mostly informal setting away from the prying eyes of the press to discuss some of the issues facing the world that demand international attention. It is a talking shop, a place where leaders can hob-nob and share ideas.

The conference is technically a private event, with invitations extended on an individual basis. In other words, attendees do not go to Bilderberg as representatives of their governments or businesses, but as private individuals. This procedure has raised some eyebrows, since it is deeply questionable whether our elected leaders should be attending conferences with their international counterparts without any sort of oversight. There is also the risk of business leaders lobbying politicians, given their privileged access during the conference.

2. This Year’s Agenda

Every year Bilderberg sets an agenda concerning issues of note. Frequent subjects have been global security and economic integration. According to public statements by the group's steering committee, Bilderberg 2014 was focused on privacy and government transparency. This is of course a serious issue currently being faced across the world, with the internet and other forms of media making private citizens' lives far more public, data more readily available, and the potential for abuse all the greater. Recent scandals, such as the Edward Snowden leaks, have likewise raised concerns over privacy and the need for international cooperation on enforcing standards.

3. This Year’s Guest List

This year's conference, like every year's, featured a star-studded guest list. From the United States, General Keith Alexander, former head of the N.S.A., Marie-Josee Kravis of the New York Fed, and her billionaire husband Henry Kravis were among the guests. Christine Lagarde, head of the I.M.F., George Osborne, British Chancellor of the Exchequer, and many other major European leaders also attended. The sheer amount of wealth and power gathered in one place must necessarily be a cause for concern. Yet no one seems to pay it much attention.

4. The Proceedings are Kept Totally Secret

A common question people ask when they hear about what Bilderberg is and the sorts of people who attend it is: "If it's so important, why the hell haven't I heard of it?" The answer is simple: the organizers and attendees work very hard to keep the event secret. In fact, the only reason we have a general agenda and guest list is that independent journalists have been scrutinizing the event for years.

It seems like a bit of a no-brainer that any gathering of the most important elected and appointed officials who govern the lives of the citizens of the Western world, together with top business and economic leaders, would demand extreme scrutiny. Yet much of the mainstream media has for decades only casually observed the event.

It is that secrecy that is perhaps most worrying about the Bilderberg Conference. If our political and corporate leaders meet without any sort of oversight, how can we hold them accountable?

5. Despite the Conspiracy Theories, Bilderberg is not that Sinister

The wealth, power, and secrecy of Bilderberg blend into an irresistible cocktail for the conspiracy-minded. Some people have tried to claim that Bilderberg is some sort of shadow government that secretly runs the world. These rumors have little foundation outside the fevered imaginations of a few fringe observers. That does not mean there is no cause for concern.

Human beings are corruptible, politicians even more so. The presence of moneyed interests and powerful individuals all gathered together for a secret conference presents a potential temptation.

As private citizens we have very limited power, and thus must always be wary when those who would lead us choose to keep us in the dark.

 

[Originally published at IOnTheScene]

 

Categories: On the Blog

Uber-Left Free Press’ ’Net Neutrality’ Isn’t What Most Supporters Think It Is

Somewhat Reasonable - June 24, 2014, 1:56 PM

In the cinematic classic "The Princess Bride," Inigo Montoya utters the now oft-repeated "You keep using that word. I do not think it means what you think it means." Uber-Left government-media outfit Free Press is highly practiced in this disingenuous art. Their name is one shining example. It sounds good, but when you find out for what they actually stand – not so much. And they use "Net Neutrality" one way publicly to engender support for the already heinous policy – but their ultimate intent with it is something drastically different, and dramatically worse.

Free Press’ presented Net Neutrality persona sounds benign and innocuous.

When we log on to the Internet via our computer or smartphone, we take a lot for granted. We assume we’ll be able to access any website or use any application we want, whenever we want, at the fastest speed, whether it’s a corporate site or a friend’s blog. We assume we can use any service we like — watch online videos, update our Facebook status, read the news — any time we choose, on any device we choose. What makes all these assumptions possible is a principle called Net Neutrality.

But there are a lot of things Free Press isn’t telling you.

Net Neutrality is socialism for the Internet – it guarantees everyone equal amounts of nothing.  It is the government mandating that everything on the World Wide Web be delivered at the exact same speed.  As with all things government – the Veterans Administration debacle being the latest terrible example – that speed is S…L…O…W.

Net Neutrality is a "solution" desperately running around in search of a problem. All of the nightmare scenarios Free Press and their fellow proponents put forward are hypothetical – they aren't actually occurring. And they haven't occurred. And they won't occur – because the free market dictates that they won't. (Netflix is currently claiming Net Neutrality violations – but they too are mis-defining it, and have been proven to be faking the evidence.)

The Internet has since its commercial inception been a virtually regulation-free zone. As a result of this government-less-ness, the Web has exploded into the free speech-free market Xanadu we all know and love.

The government doesn’t have a regulatory hook in the Web – so it can’t begin reeling it in.  And that drives Free Press crazy.  So they weave their Net Neutrality fairy tale – we need the government to save us from…this unbelievably amazing Internet?  Really?

Free Press wants the government to reel in the private sector Web – because they don’t want there to be a private sector Web.  How do we know this?  Because Free Press’s co-founder said so.

Meet Robert McChesney – avowed Marxist and college professor (please pardon the redundancy).

Avowed Marxist?  McChesney writes for and was editor of Monthly Review – about which he wrote:

"Although Monthly Review has a current circulation of 8,500 – and has never seen its circulation rise much above the 12,000 mark – it is one of the most important Marxist publications in the world, let alone the United States."

In Monthly Review and elsewhere, McChesney has written things like:

“There is no real answer but to remove brick by brick the capitalist system itself, rebuilding the entire society on socialist principles.”

And:

“Any serious effort to reform the media system would have to necessarily be part of a revolutionary program to overthrow the capitalist system itself.”

So it comes as no surprise that Free Press co-founder McChesney also says this:

"At the moment, the battle over network neutrality is not to completely eliminate the telephone and cable companies. We are not at that point yet. But the ultimate goal is to get rid of the media capitalists in the phone and cable companies and to divest them from control."

How very Hugo Chavez of them.

So what McChesney and his Free Press want is to “remove brick by brick the capitalist (Internet) system itself” and “overthrow” “the media capitalists” and “divest them from control.”

Leaving us with government as our sole Internet Service Provider (ISP) – single-payer government Internet.  How’s that system working for veterans’ health care?

All of which is not exactly the innocuous Net Neutrality that Free Press has been selling, now is it?

Inigo Montoya – call your office

Categories: On the Blog

Never-Ending Green Disasters

Somewhat Reasonable - June 24, 2014, 9:24 AM

Newton’s 3rd law of motion, if applied to bureaucracy, would state: “Whenever politicians attempt to force change on a market, the long-term results will be equal and opposite to those intended”.

This law explains the never-ending Green energy policy disasters. 

Greens have long pretended to be guardians of wild natural places, but their legislative promotion of ethanol biofuel has resulted in massive clearance of tropical forests for palm oil, sugar cane and soy beans.  Their policies have also managed to convert cheap food into expensive motor fuel, and to degrade land devoted to bush, pastures or crops into mono-cultures of corn for bio-fuel. This has wasted water, increased world hunger and corrupted the political process, for zero climate benefits.

Greens also pretend to be protectors of wildlife and habitat but their force-feeding of wind power has uglified wild places and disturbed peaceful neighbourhoods with noisy windmills and networks of access roads and transmission lines. These whirling bird-choppers kill thousands of raptors and bats without attracting the penalties that would be applied heavily to any other energy producers – all this damage to produce trivial amounts of intermittent, expensive and blackout-prone electricity supplies.

Greens have long waged a vicious war on coal, but their parallel war on nuclear power, plus the predictably intermittent performance of wind/solar energy, has forced power generators to turn to hydro-carbon gases to back up green power. But Greens have also made war on shale-gas fracking – this has left countries like Germany with no option but to return to reliable, economical coal, or to increase their usage of Russian gas and French nuclear power. Their war on coal has lifted world coal usage to a 44-year high.

Greens also say they support renewable energy, but they oppose any expansion of hydro-power, the best renewable energy option. For example, they scuppered the Gordon-below-Franklin hydro-electric project, which would have given Tasmania everlasting cheap green electricity. But they never mention their awkward secret – the Basslink under-sea cable goes to Loy Yang power station in Victoria and allows Tasmania to import coal-powered electricity from the mainland.

Robbie Burns warned us over 200 years ago:

“The best laid schemes of Mice and Men
Gang aft agley,
An’ lea’e us nought but grief an’ pain,
For promis’d joy!”

Categories: On the Blog

The Truth About Global Warming: Heartland’s 9th International Conference on Climate Change, July 7-9 in Las Vegas

Somewhat Reasonable - June 24, 2014, 8:50 AM

Come to fabulous Las Vegas July 7-9 to meet leading scientists from around the world who question whether “man-made global warming” will be harmful to plants, animals, or human welfare. Learn from top economists and policy experts about the real costs and futility of trying to stop global warming.

Meet the leaders of think tanks and grassroots organizations who are speaking out against global warming alarmism.

Don’t just wonder about global warming … understand it!

Read testimonials from previous happy attendees!

#ICCC9 takes place at the Mandalay Bay Resort and Casino. Rooms start at only $80 per night plus fees and taxes. Fly American or United and get a discount of up to 10%!

We are hosting the event in Las Vegas that week in partnership with our friends at FreedomFest, who are cosponsors of #ICCC9 and host their excellent annual conference July 9 – 12 at Planet Hollywood.

A preliminary schedule for the event is here. Speakers already confirmed include Fred Singer, Craig Idso, Willie Soon, Roy Spencer, Marc Morano, Christopher Monckton, Patrick Moore, and Anthony Watts. For more speakers and their bios, click here.

Register for the event here, or call 312/377-4000 and ask for Ms. McElrath or reach her via email at zmcelrath@heartland.org.

Exhibiting and sponsorship opportunities are available starting at only $150! Contact Taylor Smith at tsmith@heartland.org for information about promotional opportunities and prices.

Several prizes will be awarded to scholars, elected officials, and activists for outstanding contributions to the debate over global warming. To nominate someone or to suggest a prize, contact Robin Knox at rknox@heartland.org.

To watch videos from the previous eight International Conferences on Climate Change, click here. For more information about The Heartland Institute, visit our website.

Categories: On the Blog

Redskins Brouhaha Shows How Politics Is Ruining Sports Talk Radio

Somewhat Reasonable - June 23, 2014, 3:13 PM

One of the few simple joys I have in life, shared with Camille Paglia, is listening to sports radio. She describes it as one of the few arenas still safe for an old-fashioned sort of masculinity – I think of it more as a respite from reading and thinking about politics and policy, second only to leaning back in an easy chair with a good simple future-noir detective story about hunting Chinese Martians or a word that could end the world. There is a simple rhythm and cadence to good sports talk radio which allows for an undercurrent of wit and humor juxtaposed with statistical argumentation, hitting the high and the low.

Of course, in the ESPN age, the realm of sports is often invaded by politics. This is typically in the form of mild irritants, and the more sports-minded hosts will back away slowly from guests who suddenly feel the need to expound on their deeply held and often clumsily constructed theories about politics to troll their listeners. Some guests are serial offenders in this regard: Kevin Blackistone, for instance, has decried the playing of the national anthem at ballgames as jingoistic warmongering, and said the U.S. should boycott the Olympic Games over Israel’s actions toward the Gaza Flotilla. So you learn to avoid those segments and head over to the ones talking about whether the Vernon Davis holdout is justified and what roster moves need to be made if LeBron is going to stay in Miami.

So it is with great irritation that I have experienced the invasion of sports radio over the past few months by a voice I am more familiar with for its meandering conspiracy-theorizing over the rampant influences of the Brothers Koch: Harry Reid, whose funereal nagging about the name of the Washington Redskins has elevated this battle over political correctness from a low simmer to a hot summer topic. No one particularly cared about this fight when the Redskins were horrid (which has been pretty much every year since I was ten), but since they looked like they were getting good again a year ago, the fight is back in a big way, with all Democratic Senators (save Virginia’s Mark Warner and Tim Kaine) endorsing a name change.

Mostly, this is a sideline issue, as Redskins owner Daniel Snyder has reiterated that the team’s name will never change as long as he owns them, and as the franchise is one of the NFL’s most valuable and a gigantic money-printing machine, there seems to be no possibility of a financial incentive from advertisers or the NFL to make a change. What’s more, the poll data on Native Americans across the country shows overwhelming support for the name. There has never been a poll showing even a plurality of Native Americans in favor of a name change. Were it 90-10 in the other direction, I think the NFL would be more interested in the issue.

As a legal matter, this all changed yesterday with the ThinkProgress report that the U.S. Patent and Trademark Office's Trademark Trial and Appeal Board had decided to cancel six federal trademark registrations of the franchise, under the reasoning that they were derogatory at the time of their registration in the 1960s. Now that the lawyers have explained what this means, it actually looks like the answer is: not a lot.

ESPN.com Sports Business reporter Darren Rovell wrote, "[w]ithout protection, any fan can produce and sell Washington Redskins gear without having to pay the league or the team for royalties and wouldn't be in violation of any law for doing so." That is simply not true. The decision by the TTAB does not require the Washington, D.C.-based NFL team to change its name or stop using the "Redskins" marks, and it does not mean that the organization loses all legal rights in the marks. There are benefits to having a federal registration attached to an owned trademark, including but not limited to a legal presumption of ownership of the mark and the ability to bring an infringement action in federal court seeking statutory damages. Importantly, the lack of a federal registration does not equate to anarchy in which any individual can create merchandise bearing "Redskins" marks and sell it in commerce.

So there is no open-season on Redskins merchandizing – and even if there were, it would serve to undercut only a small portion of the team’s revenue. The Redskins intend to appeal, as they have done before, and successfully. For the time being at least, the issue is no closer to a name change.

That being said, the trendlines of politics are such that I expect a name change to be inevitable in my lifetime because of where the team is located and the pressure exerted by our ruling elite. One of the big lessons of life in the Obama era is that it’s important to avoid the attention of the ruling class – lest you be audited, harassed, or generally become a hot topic of media conversation as a proxy for some other battle. There’s a reason this is happening to the Washington Redskins and not the Cleveland Indians or the Chicago Blackhawks or the Florida State Seminoles. If you live within the consciousness of a critical mass of people in power for whom all life is politicized, you will be made to bend to their will, by whatever means necessary. The last thing in the world you ought to want is for President Obama to be asked his opinion about your enterprise, and then have those around him work to make that opinion a reality.

That’s why it’s important to learn how not to be seen. We are a country now where perceptive people develop skills to go unnoticed by the imperial center. Survival now means avoiding having DC and its cohorts notice you at all costs. In this town, they understand that freedom of speech sounds like a good idea, after all, right up until the point where someone’s feelings are hurt. So in retrospect, if the Redskins wanted to remain the Redskins, they should have just left town. The Richmond Redskins would have done just fine. Either that or draft Michael Sam.

Honest opponents of the name would concede that it wasn't a historical epithet; concede that the polling shows overwhelmingly that Native Americans don't think it's an epithet today; concede that it's not the same as the N-word and no one thinks it is, lest everyone with an R shirt be a giant racist. They would concede they're just opposed to it because it's the 15-minute PC hate. What Bob Costas, Keith Olbermann, and Mike Wise understand, as people who have personally experienced the hardships of abiding racism in their lives, is that the only way you can demonstrate you're not a racist in the post-Obama era is to find new racists to attack. I don't really mind it that much that these white liberal elitists want to demonstrate that they're down with the struggle, but I really wish they didn't have to ruin sports radio to do it.

 

[Originally published at The Federalist]

Categories: On the Blog

Executive fiats in the other Washington

Somewhat Reasonable - June 23, 2014, 1:52 PM

Two western state governors intend to impose low carbon fuel standards, by legislation or decree

 

Progressives believe in free speech, robust debate, sound science and economics, transparency, government by the people and especially compassion for the poor – except when they don’t. These days, their commitment to these principles seems to be at low ebb … in both Washingtons.

A perfect example is the Oregon and Washington governors’ determined effort to enact Low Carbon Fuel Standards – via deceptive tax-funded campaigns, tilted legislative processes and executive fiat.

The standards require that conventional vehicle fuels be blended with alternative manmade fuels said to have less carbon in their chemical makeup or across the life cycle of creating and using the fuels. They comport with political viewpoints that oppose hydrocarbon use, prefer mass transit, are enchanted by the idea of growing fuels instead of drilling and fracking for them, and/or are convinced that even slightly reduced carbon dioxide will help reduce or prevent “dangerous manmade climate change.”

LCFS fuels include ethanol, biodiesel and still essentially nonexistent cellulosic biofuels, but the concept of lower carbon and CO2 naturally extends to boosting the number of electric and hybrid vehicles.

Putting aside the swirling controversies over natural versus manmade climate change, its dangers to humans and wildlife, the phony 97% consensus, and the failure of climate models – addressed in Climate Change Reconsidered and at the Heartland Institute’s Climate Conference – the LCFS agenda itself is highly contentious, for economic, technological, environmental and especially political reasons.

California has long led the nation on climate and “green” energy initiatives, spending billions on subsidies, while relying heavily on other states for its energy needs. The programs have sent the cost of energy steadily upward, driven thousands of families and businesses out of the state, and made it the fourth worst jobless state in America. Governors Jerry Brown, John Kitzhaber and Jay Inslee (of California, Oregon and Washington, respectively) recently joined British Columbia Premier Christy Clark in signing an agreement that had been developed behind closed doors, to coordinate policies on climate change, low carbon fuel standards and greenhouse gas emission limits throughout the region.

California and BC have already implemented LCFS and other rules. Oregon has LCFS, but its law terminates the program at the end of 2015, unless the legislature extends it. As that seems unlikely, Mr. Kitzhaber has promised that he will use an executive order to impose an extension and “fully implement” the state’s Clean Fuels Program. “We have the opportunity to spark a homegrown clean fuels industry,” the governor said, and he is determined to use “every tool at my disposal” to make that happen. He is convinced it will create jobs, though experience elsewhere suggests the opposite is much more likely.

Mr. Inslee is equally committed to implementing a climate agenda, LCFS and a "carbon market." If the legislature won't support his plans, he will use his executive authority, a state-wide ballot initiative or campaigns against recalcitrant legislators – utilizing support from coal and hedge fund billionaire Tom Steyer. Indeed, Inslee attended a closed-door fundraiser in Steyer's home the very day he signed the climate agreement. The governor says he won't proceed until a "rigorous analysis" of LCFS costs and technologies has been conducted, but he plans to sole-source that task to a liberal California company.

Their ultimate goal is simple. As Mother Jones magazine put it, “if Washington acts strongly on climate, the impact will extend far beyond Washington…. The more these Pacific coast states are unified, the more the United States and even the world will have to take notice.”

But to what end? In a world that is surging ahead economically, to lift billions out of abject poverty and disease – with over 80% of the energy provided by coal, oil and natural gas – few countries (or states) are likely to follow. They would be crazy to do so. Supposed environmental and climate benefits will therefore be few, whereas damage to economies, families and habitats will be extensive.

The Oregonian says the LCFS is “ultimately a complicated way of forcing people who use conventional fuels to subsidize those who use low-carbon fuels. It’s a hidden tax to support ‘green’ transportation. It will raise fuel prices … create a costly compliance burden … [and] harm Oregon’s competitiveness far more than it will help the environment. And that assumes it works as intended.” It will not and cannot.

The Charles River Associates economic forecasting firm calculates that LCFS laws will raise the cost of motor fuels by up to 170% over the next ten years – on top of all the other price hikes, like minimum wage increases and the $1.86 trillion in total annual federal (only) regulatory compliance costs that businesses and families already have to pay. If these LCFS standards were applied nationally, CRA concluded, they would also destroy between 2.5 million and 4.5 million American jobs.

Ethanol gets 30% less mileage than gasoline, so motorists pay the same price per tank but can drive fewer miles. It collects water, clogs fuel lines, corrodes engine parts, and wreaks havoc on lawn mowers and other small engines. E15 fuel blends (15% ethanol) exacerbate these problems, and low-carbon mandates (“goals”) would likely require 20% ethanol and biodiesel blends, trucking and other groups point out.

Those blends would void vehicle engine warranties and cause extensive damages and repair costs. The higher fuel costs would affect small business expansion, hiring, profitability and survival. The impact of lost jobs, repair costs, and soaring food and fuel bills will hit poor and minority families especially hard.

Some farmers make a lot of money off ethanol. However, beef, pork, chicken, egg and fish producers must pay more for feed, which means family food bills go up. Biofuel mandates also mean international aid agencies must pay more for corn and wheat, so more starving people remain malnourished longer.

Biofuels harm the environment. America has at least a century of petroleum right under our feet, but "renewable" energy advocates don't want us to lease, drill, frack or use that energy. However, the per-acre energy from biofuels is minuscule compared to what we get from oil and gas production. In fact, to grow corn for ethanol, we are already plowing an area bigger than Iowa – millions of acres that could be food crops or wildlife habitat. To meet the latest biodiesel mandate of 1.3 billion gallons, producers will have to extract oil from 430 million bushels of soybeans – which means converting countless more acres from food or habitat to energy.

Producing biofuels also requires massive quantities of pesticides, fertilizers, fossil fuels – and water. The US Department of Energy calculates that fracking requires 0.6 to 6.0 gallons of fresh or brackish water per million Btu of energy produced. By comparison, corn-based ethanol requires 2,500 to 29,000 gallons of fresh water per million Btu of energy – and biodiesel from soybeans consumes an astounding and unsustainable 14,000 to 75,000 gallons of fresh water per million Btu!
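
To make the scale of that comparison concrete, here is a minimal sketch (Python, purely illustrative) that works out the ratios implied by the ranges quoted above. It assumes nothing beyond the cited gallons-per-million-Btu figures; the "conservative" and "extreme" labels simply compare opposite ends of the two ranges.

```python
# Ratios implied by the water-intensity figures cited above,
# in gallons of water per million Btu (MMBtu) of energy produced.
FRACKING_GAL_PER_MMBTU = (0.6, 6.0)  # DOE range cited in the text

BIOFUELS_GAL_PER_MMBTU = {
    "Corn-based ethanol": (2_500, 29_000),
    "Soybean biodiesel": (14_000, 75_000),
}

frack_low, frack_high = FRACKING_GAL_PER_MMBTU
for fuel, (low, high) in BIOFUELS_GAL_PER_MMBTU.items():
    conservative = low / frack_high   # lowest biofuel figure vs. highest fracking figure
    extreme = high / frack_low        # highest biofuel figure vs. lowest fracking figure
    print(f"{fuel}: {low:,}-{high:,} gal/MMBtu, "
          f"roughly {conservative:,.0f}x to {extreme:,.0f}x the water used in fracking")
```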

Moreover, biofuels bring no net “carbon” benefits. In terms of carbon molecules consumed and carbon dioxide emitted over the entire planting, growing, harvesting, refining, shipping and fuel use cycle, ethanol, biodiesel and other “green” fuels are no better than conventional gasoline and diesel.

Put bluntly, giving politicians, bureaucrats and eco-activists power over our energy would be even worse than having them run our healthcare system and insurance websites. Spend enough billions (much of it  taxpayer money) on subsidies and propaganda campaigns – and you might convince a lot of people they should pay more at the pump and grocery store, and maybe lose their jobs, for illusory environmental benefits. But low-carbon mandates are a horrid idea that must be scrutinized in open, robust debate.

It’s time we stopped letting ideology trump science, economics and sanity. We certainly cannot afford to let despotic presidents and governors continue using executive orders to trample on our legislative processes, government by the people, constitutions, laws, freedoms, livelihoods and living standards.

Fiats are fun cars to drive. Executive fiats are dictatorial paths to bad public policy.

 

Paul Driessen is senior policy analyst for the Committee For A Constructive Tomorrow (www.CFACT.org) and author of Eco-Imperialism: Green power – Black death.

Categories: On the Blog

Common Core Violates Privacy of Students and Families

Somewhat Reasonable - June 23, 2014, 1:43 PM

The public, even parents of school-aged children, tend to trust those in authority to make good decisions and enact credible laws regarding our public education system, believing that any changes made would be in the public's best interest. While that is largely true, citizens should remain vigilant and carefully examine any and all new laws and mandates. Complacency invites corruption. Our nation's education system must always be one we can fully trust. Anything else is unacceptable.

The implementation of Common Core Standards, and its resulting curriculum, initiated a major shift in our nation’s education system, and the changes it requires have caused enormous controversy throughout America for numerous reasons that we have outlined in previous articles.

Let's focus on the Data Mining element of Common Core. Now that the public has had a chance to "read the rules," we discover Common Core violates the privacy of students and their families through the gathering and sharing of personal information and, worse yet, that the private information is being sent to and shared with the federal government.

Parents are particularly concerned about three major issues: 1. the ability of schools and government entities to keep personal data safe from "hackers"; 2. the reasons our federal government intervened, interfering with states' rights and requiring the gathering of personal data from students and their families; and 3. how parents can use legal means to avoid divulging intrusive private information to schools.

The Problem of Keeping Private Information Safe

We are living in an age in which most information is being stored electronically.  It is popular due to the ease, convenience, and ability to store so much data without requiring massive space to do so.  With these wonderful attributes though, there is one unfortunate problem.  The stored data is not as safe as we once had believed.   A new study indicated almost half of all Americans’ private information was compromised/revealed due to hackers.  Hackers have successfully infiltrated and gleaned information from sources that were once considered impossible to “hack”, such as chain stores like Target and even our government agencies.  For that matter, our government has used sophisticated tech equipment to spy on other countries.  Nobody is safe from prying technology today, and thus neither is any electronically stored information garnered through schools.

Therefore, parents should be exceedingly cautious about giving personal information to schools. Some have suggested Common Core itself could be considered one of the more dangerous domestic spying programs.  This concern arose when Bill Gates, one of the leaders and most avid promoters of Common Core, put millions of dollars of his own money into the development, implementation and advertising of the new national education program. Consider that much of the data mining will occur via Microsoft's Cloud system.

Even the Department of Education is concerned with the issue of privacy, admitting that some of the data gathered may be "of a sensitive nature."  This is indeed an understatement by the DOE, as much of the data collected will be completely unrelated to education.  The data collected will not only include grades, test scores, name, date of birth and Social Security number; it will also include parents' political affiliations, individual or familial mental or psychological problems, beliefs, religious practices, income and other incredibly sensitive, highly private information about the student and the student's family.

There is also concern that private companies donate education apps to schools in exchange for children’s information, increasing the threat of children’s personal data being abused.

According to The New American, schools in Delaware, Colorado, Massachusetts, Kentucky, Illinois, Louisiana, Georgia and North Carolina have committed to "pilot testing" and information dissemination by sending students' personal information to the inBloom database (a non-profit group funded by the Gates Foundation and supported by Amazon). Not yet known is whether parents know of and/or approve of the dissemination of that personal information.

Reasons for the Accumulation of Student/Family Data

We have all heard the quote: "Information is power."  New York Times columnist Matthew Lesko expanded upon that theme with this statement: "Information is the currency of today's world. Those who control information are the most powerful people on the planet – and the ones with the most bulging bank accounts."  Imagine the power of those who receive the collection of student data from most every student in America.

Common Core supporters will point out that there is nothing within the standards or rules which requires personal data to be acquired, and that any data gathering is entirely up to the individual states.  Ah, but it isn't that simple, or even true!  That claim is easily disputed with a little research.

The federal government had been prohibited from gathering students' specific data for a national database, but shortly after Obama became president, the Stimulus Bill provided a loophole.  Money was given to each of the states to develop longitudinal data systems to catalog data generated by Common Core-aligned tests.  Student information collected since 2009 was then authorized to be shared among federal agencies without the consent of parents.

The federal government encouraged states to participate in data collection initiatives such as the Data Quality Campaign, the Early Childhood Data Collaborative, and the National Student Clearinghouse, all of which helped to increase the collection and sharing of children’s formally protected data.

In addition, the National Education Data Model suggests that states increase their collection of information about students to over 400 data points on each one.  That leaves little doubt that the scope of these data systems has been purposely expanded.

Beginning in the 2014-2015 school year, students under Common Core will begin taking state standardized tests, and student-specific data will be stored by the states in their newly created longitudinal data systems, designed to track student progress from kindergarten through 12th grade.  That data will be dissected, supposedly for the purpose of improving education.  However, as a nation, we must ask ourselves whether we want to respect individual rights of privacy or whether we want a more "collective" approach that claims an action is permissible if it benefits the common majority.  Consider whether such a benefit comes at the expense of others.  Is it moral?  Leo Tolstoy said: "Wrong does not cease to be wrong because the majority share in it."

What will be collected?  

The type of material being collected due to the changes by the current administration is so extensive, one could say "almost everything will be included, some of which is highly personal."  Of course test scores will be collected, and be aware, Common Core encourages massive testing.  What is strange, and should be a red flag to reasonable people, is why schools are also asking about students' hobbies, psychological evaluations, medical records, religious affiliation, political affiliation, family income, behavioral problems, disciplinary history, career goals, addresses, and bus stop times and locations.  It was even suggested schools use cameras and/or special equipment to judge facial expressions and a student's posture in the classroom, supposedly for the purpose of assessing stress levels.

As noted above, much of the data collected will be completely unrelated to education.  In 2012, a combination of 24 states and territories struck a deal to implement data mining in order to receive federal grants, allowing "Personally Identifiable Information" to be extracted from each student.  Listed below are some of the more extreme examples of data mining, which cause reasonable people to question why the government would venture into such an invasion of our privacy.

1. Political affiliations or beliefs of the student or parent;

2. Mental and psychological problems of the student or the student’s family;

3. Sex behavior or attitudes;

4. Illegal, anti-social, self-incriminating, and demeaning behavior;

5. Critical appraisals of other individuals with whom respondents have close family relationships;

6. Legally recognized privileged or analogous relationships, such as those of lawyers, physicians, and ministers;

7. Religious practices, affiliations, or beliefs of the student or the student’s parent; and

8. Details of Income.

This information will be sent to federal agencies that were put in place once the states accepted Common Core.

Local Control Compromised

When the federal government first interfered with the states' responsibility to educate our children, a line was tragically crossed.  Local control was compromised, as higher levels of officials took more responsibility and dictated more rules from their level of government.  While Common Core apologists try to minimize the problems their changes caused, discerning people know there has been a breach in America's laws and traditions.  Power transferred from local governing agencies to the federal government.  Any advantage parents had for significant control over their children's school or curriculum has been greatly reduced.  It is easier to facilitate potential changes, act on complaints, and make specific adjustments when local government has the power to consider logical adjustments, rather than having to go to a state or federal level to be heard.

While Common Core supporters argued states still have the same control as always, many parents remained skeptical.  It did not take long to discover just how much control the federal government now has.  Our wise forefathers did not want the federal government in charge of the education of our children.  Too much power!  Remember the warning attributed to Lord Acton: "Power corrupts, and absolute power corrupts absolutely."  When we see that power has corrupted a local politician, it is fairly easy to remove and replace the person.  That is not as easily discovered or accomplished when the official lives and works outside of our community.

What Parents Can do to Protect their Children from Data Mining

A California law firm, the Pacific Justice Institute, has developed a form parents can use to opt out of all statewide performance assessments, including academic achievement tests and Common Core assessments, as well as any questionnaire, survey, or evaluation containing personal questions about their child's beliefs or practices in sex, family life, morality, politics, income, religion, and other highly personal matters.

Parents in other states can contact The Pacific Justice Institute for specific information, and to see if there is a similar agency in their state with a similar “opt out” form.

Conclusion

There was a time in our history when schools needed the permission of parents for children to attend school.  Decades later that was reversed, and a law enacted that made it mandatory for all children to attend school.  Laws were eventually enacted giving schools more authority than parents over their children's schooling.  The current administration has taken federal control to a whole new level, which includes loss of local control and parents being subjected to invasive data mining.  This did not make the front page of our newspapers.  In fact, Common Core was a surprise to most teachers and local school boards, who scrambled to comply with the new law and its education standards and curriculum.

Something as important as major changes in our nation’s education system deserved more input, more openness, public involvement, a public comment period, and certainly proof through trial programs that the new system is superior to the one it replaced.

Instead, our federal government and most every state government unleashed an unproven education program, turning our nation's children into guinea pigs in an experiment that could prove disastrous.  That is why concerned citizens throughout America are holding meetings and conferences to educate others about Common Core's problems, to encourage state officials to enact legislation that would stop Common Core, or at the very least put a "hold" on the program until the new system can be proven to have merit, and to enact strong privacy laws that will protect both students and their families from invasive data mining.

Categories: On the Blog

Climate Change–Less of a Scientific Agenda and More of a Political Agenda

Somewhat Reasonable - June 23, 2014, 11:04 AM

Those who don’t believe in climate change are “a threat to the future,” says the Washington Post in a June 14 article on President Obama’s commencement address for the University of California-Irvine. Regarding the speech, the Associated Press reported: “President Obama said denying climate change is like arguing the moon is made of cheese.” He declared: “Scientists have long established that the world needs to fight climate change.”

The emphasis on a single government policy strays far from the flowery rhetoric found at the traditional graduation ceremony—especially in light of the timing. While the president was speaking, all of the progress made by America’s investment of blood and treasure in Iraq was under immediate threat. And, as I pointed out last week, what is taking place right now in Iraq could have an imminent impact on our economic security. Instead of addressing the threat now, why is he talking about “a threat to the future” that might happen in the next 100 years?

The answer, I believe, is found later in his comments.

In his speech, Obama accused “some in Congress” of knowing that climate change is real, but refusing to admit it because they’ll “be run out of town by a radical fringe that thinks climate science is a liberal plot.”

Perhaps he’s read a new book by a climatologist with more than forty years of experience in the discipline: The Deliberate Corruption of Climate Science by Tim Ball, PhD—which convincingly lays out the case for believing that the current climate change narrative is “a liberal plot.” (Read a review from Principia Scientific International.) In the preface, Ball states: “I’ve watched my chosen profession—climatology—get hijacked and exploited in service of a political agenda.” He indirectly calls the actions of the president and his environmental allies “the greatest deception in history” and claims: “the extent of the damage has yet to be exposed and measured.”

It is not that Ball doesn’t believe in climate change. In fact, he does. He posits: “Climate change has happened, is happening and will always happen.” Taken literally, Obama’s cheese comment is accurate: no scientist, and no one in Congress, denies climate change. What is in question is the global warming agenda pushed for the past several decades, which claims the globe is warming because of a human-caused escalation of CO2. When global warming alarmists say “climate change,” they mean human-caused change. Due to the lack of “warming,” they’ve changed the term to climate change.

Nor is he against the environment, or even environmentalism. He says: “Environmentalism was a necessary paradigm shift that took shape and gained acceptance in western society in the 1960s. The idea that we shouldn’t despoil our nest and must live within the limits of global resources is fundamental and self-evident. Every rational person embraces those concepts, but some took different approaches that brought us to where we are now.”

Ball continues: “Environmentalism made us aware we had to live within the limits of our home and its resources: we had a responsibility for good stewardship.” But, “the shift to environmentalism was hijacked for a political agenda.” He points out: “extremists demand a complete and unsustainable restructuring of world economies in the guise of environmentalism” and claims: “the world has never before suffered from deception on such a grand scale.”

Though it is difficult to comprehend that a deception on such a grand scale, as Ball projects, could occur, he cites history to explain how the scientific method was bypassed and perverted. “We don’t just suddenly arrive at situations unless it is pure catastrophe. There is always a history, and the current situation can be understood when it is placed in context.”

In The Deliberate Corruption of Climate Science, Ball takes the reader through history and paints a picture based on the work of thought leaders of their day such as Thomas Malthus, The Club of Rome, Paul Ehrlich, Maurice Strong, and John Holdren. Their collective ideas led to an anti-development mindset. As a result, Ball says: “Politics and emotion overtook science and logic.”

Having been in this line of work for only the past seven-and-a-half years, I was unfamiliar with the aforementioned figures, but Ball outlines their works. Two quotes, one from Ehrlich, author of the now fully discredited The Population Bomb, and the other from Strong, who established the United Nations Environment Program (the precursor to the Intergovernmental Panel on Climate Change), resulted in an epiphany for me. I now know that the two sides of the energy debate are fighting apples and oranges.

I’ve been fighting for cost-effective energy, jobs, and economic growth. I point out, as I do in a video clip on the home page of my website, that the countries with the best human health and the most physical wealth are those with the highest energy consumption. I state that abundant, available, and affordable energy is essential to a growing economy. I see that only economically strong countries can afford to care about the environment.

The other side has an entirely different goal—and it’s not just about energy.

Ehrlich: “Actually, the problem in the world is there are too many rich people.” And: “We’ve already had too much economic growth in the United States. Economic growth in rich countries like ours is the disease not the cure.”

Strong: “Isn’t the only hope for the planet that the industrialized nations collapse? Isn’t it our responsibility to bring that about?” 

When the other side of the energy debate claimed that wind turbines and solar panels would create jobs and lower energy costs—despite overwhelming evidence to the contrary—I mistakenly assumed that we had similar goals but different paths toward achieving them. But it isn’t really about renewable energy, which explains why climate alarmists don’t cheer when China produces cheap solar panels that make solar energy more affordable for the average person, and instead demand tariffs that increase the cost of Chinese solar panels in the U.S.

Ball states: “In the political climate engendered by environmentalism and its exploitation, some demand a new world order and they believe this can be achieved by shutting down the industrialized nations.”

He cites Strong, a senior member of The Club of Rome, who in 1990 asked: “What if a small group of these world leaders were to conclude the principal risk to the earth comes from the actions of rich countries?” A year later, The Club of Rome released a report, The First Global Revolution, in which the authors state: “In searching for a common enemy against whom we can unite, we came up with the idea that pollution, the threat of global warming, water shortages, famine and the like, would fit the bill. …The real enemy then is humanity itself.”

Throughout the pages of The Deliberate Corruption of Climate Science, Ball goes on to show how in attempting to meet the challenge of collapsing an industrialized civilization, CO2 becomes the focus. “Foolishly we’ve developed global energy policies based on incorrect science promulgated by extremists.”

Ball concludes: “Because they applied politics to science they perverted the scientific method by proving their hypothesis to predetermine the result.” The results? “The sad truth is none of the energy and economic policies triggered by the demonization of CO2 were necessary.”

Obama said: “Scientists have long established that the world needs to fight climate change.” Yes, some have—many for reasons outlined in Ball’s easy-to-read new book. But, surely not all. Next month, hundreds of scientists, policy analysts, and thought leaders, who don’t agree with the president’s statement (including Ball and myself), will gather together for the Ninth International Conference on Climate Change. There, they won’t all agree on the reasons, but they’ll discuss and debate why each believes climate change is not a man-caused crisis. In real science, debate is welcome.

The computer models used to produce the scientific evidence and to provide legitimacy in support of the political agenda have a record of failed projections that would have doomed any other area of research and policy. Ball points out: “The error of their predictions didn’t stop extremists seeing the need for total control.”

The claim of consensus is continually touted and those who disagree are accused of thinking the moon is made of cheese. According to Ball: “Consensus is neither a scientific fact nor important in science, but it is very important in politics.”

Do you want to live in a world with “the best human health” or in one where “the real enemy is humanity itself?” Energy is at the center of this battle.

“It is time to expose their failures [and true motives] to the public before their work does too much more damage.”

Author’s Note: The title is taken from a 2011 quote from India’s Union Environment Minister Jairam Ramesh.

The author of Energy Freedom, Marita Noon, serves as the executive director for Energy Makes America Great Inc. and the companion educational organization, the Citizens’ Alliance for Responsible Energy (CARE). Together they work to educate the public and influence policy makers regarding energy, its role in freedom, and the American way of life. Combining energy, news, politics, and the environment through public events, speaking engagements, and media, the organizations’ combined efforts serve as America’s voice for energy.

[Originally published at Red State]

 

Categories: On the Blog

The Rebirth of Austrian Economics

Somewhat Reasonable - June 23, 2014, 10:45 AM

Forty years ago, during the week of June 15-22, 1974, the Austrian School of Economics was reborn during a conference in the small New England town of South Royalton, Vermont. Why was this important? Because the economists of the Austrian School have developed the most persuasive understanding of why only economic freedom can give mankind both liberty and prosperity.

During the Great Depression of the 1930s, many economists and political policy-makers argued that capitalism was a “failure” and only wisely guided government intervention and regulation of the market place could bring stability and fairness to society.

The Domination of Big Government Ideas

For the next thirty years following the Second World War, Keynesian Economics dominated economic policy decision-making. Government, it was said, had to have the discretionary authority to manipulate spending and taxing as well as the monetary system to assure full employment and stable economic growth.

This was matched by a rarified mathematical formalism in the higher levels of economic theory, in which the everyday individual was reduced to a mere passive variable in a series of equations, with the assistance of which it was presumed government could successfully micro-manage the market. The presumption was that, unless regulated and guided by the superior hands of government policy-makers, society would fall into waste and inefficiency due to people’s wrong choices and misplaced actions when left on their own.

The Beginning of Austrian Economics

Almost 145 years ago, Carl Menger founded the Austrian School of Economics. One of the pathfinders who broke asunder the myth of the labor theory of value, which had dominated economics from the time of Adam Smith to that of Karl Marx, Menger developed the subjective theory of value. The value of a good, Menger explained, was not determined by the amount of labor devoted to making it; rather, the labor was given value by the intensity of the want felt for the product by the individual who would finally use or consume it. Since individuals valued things differently and by different scales of importance, there was no way to objectively determine the value any market-traded good might have other than by relating it back to the personal (“subjective”) judgments of the individual valuator.

Menger was soon followed by two disciples who refined Austrian theory to such a point that it became a major force in the world of ideas. Friedrich von Wieser formulated the concept of opportunity cost, by which is meant that nothing is free. The fact that most of the means that we use to achieve our various ends are scarce (too limited in supply to enable us to attain all the goals for which those means might be used) means we always have to make trade-offs.

The cost of anything is the alternative goal, purpose, or end for which some scarce means might have been used had we not valued more highly the use to which we actually put those limited means. The idea that government can give people a “free lunch” is fundamentally wrong; what the government gives to someone with one hand it must take from someone else with the other, because the available means are not enough to fully satisfy both uses at the same time.

Eugen von Böhm-Bawerk developed Menger’s theory of subjective value and applied it to the problems of saving, investment, and the creation of capital. Everything we do involves time. Whether we are boiling an egg, constructing a tunnel through a mountain, or planting a crop for food, all of our production activities take time.

This requires that individuals save enough to free up the resources needed to build capital goods and to cover people’s living expenses until the production processes are completed. Only at that point in the future will more and better goods and services be forthcoming as the benefit of having waited for them.

Government taxation and regulation can undermine if not destroy the ability and motive of people to do the savings and investing that is essential if we are all to benefit from rising standards of living in the future.

Ludwig von Mises and the Case for the Free Market

In the twentieth century, Ludwig von Mises extended the Austrian approach. Mises applied Menger’s subjective value theory to the area of money and developed the “Austrian” theory of the business cycle. Government manipulation of money and credit in the banking system throws savings and investment out of balance, resulting in misdirected investment projects that are eventually found to be unsustainable, at which point the economy has to rebalance itself through a period of a corrective recession.

The only wise policy for government is to leave money and the banking system to the competitive forces of a free market to eliminate the inflationary booms and recessionary busts of the business cycle, so markets can effectively keep people’s saving and investing decisions in balance for well-coordinated economic stability and growth.

Mises also demonstrated in the early 1920s why the new experiment with socialist central planning in communist Russia would eventually fail. Rational and efficient economic decision-making requires market-generated money prices to determine and calculate the relative values of the finished goods that consumers might wish to buy in comparison with the costs of using the means of production – land, labor, and capital – in one alternative production activity instead of another, on the basis of which entrepreneurs can estimate likely profits or losses from producing one product rather than some other.

Comprehensive socialism abolishes private property, bans market ownership and trading of goods and resources, and places all economic decision-making in the hands of a government central planning agency.

But without private property, there is nothing to buy and sell. With nothing to buy and sell there is no bargaining to determine possible terms-of-trade. With no agreed-upon terms-of-trade, there are no market prices.

Without market prices to tell market decision-makers the value of what consumers might want and the actual value of scarce resources in competing uses for their employment, there is no rational way for the socialist planner to efficiently and effectively know what to produce and at the lowest costs to maximize total desired production. Socialist central planning creates a society of “planned chaos.”

Based on his critique of the unworkability of socialist central planning, Ludwig von Mises developed a theory of how the competitive market process works, and of the important role of the entrepreneur in guiding production through the pursuit of profits and the avoidance of losses.

This also led Mises to a detailed critical analysis of how and why various forms of government regulation and intervention in the market economy can only distort and bring about imbalance in the market’s own coordination of multitudes of supplies and demands in the service of consumer desires. The only viable economic system for freedom and prosperity, Mises concluded, is laissez-faire capitalism.

F. A. Hayek and the Use of Knowledge in Society

Further developments in Austrian theory were the product of the versatile mind of Friedrich von Hayek, who won the Nobel Prize in Economics in 1974 a few months after this Austrian Economics conference in South Royalton, Vermont.

In the 1930s, Hayek refined Mises’ theory of money and the business cycle, and became the leading free market critic of John Maynard Keynes at the time when “Keynesian Economics” was just being developed. He insisted that government deficit spending and manipulation of spending in the economy would only slow down the normal market-generated recovery from a recession, and ran the danger of creating a future inflation that would be followed by another economic downturn.

Hayek, like Mises, was a leading critic of socialism. His core argument centered on the impossibility of even the wisest and most intelligent central planners ever having the ability to master, integrate, and effectively use all the knowledge needed to successfully guide an entire economy from the offices of a government planning bureau.

The division of labor in society is matched by a division of knowledge, in which each of us possesses only a small and limited amount of all the knowledge in the world in our individual minds. We all must admit and accept how ignorant any one of us is about all the forms of knowledge that exist in the world, and which must somehow be successfully brought to bear if all of us are to benefit from what one or a few people may know that we do not.

Hayek’s answer to this problem was to explain that market-generated prices serve as the communications device through which we can inform each other about our desires as consumers and our abilities as producers, while leaving us free to use the knowledge each of us individually possesses as we find it most advantageous. Thus, freedom and prosperity are combined through the market system of prices and competition, which finds out who can do better at satisfying the wants of others in the pursuit of self-interested profit.

Austrian Voices at the South Royalton Conference

The Institute for Humane Studies (IHS) organized the South Royalton Austrian Economics conference, and brought to Vermont three of the leading Austrian economists of that time to deliver a series of unique and important lectures: Israel M. Kirzner, Ludwig M. Lachmann, and Murray N. Rothbard.

Israel Kirzner had studied under Mises at New York University, and in 1973 had written, “Competition and Entrepreneurship,” the first of many books explaining the importance of the alert and creative market-based entrepreneur who brings about the balance and coordination of supplies with our consumer demands through his pursuit of profit opportunities.

Murray Rothbard had already made an outstanding name for himself as an Austrian economist with his two-volume work, “Man, Economy and State” (1962), in which he developed the entire edifice of economic understanding following in the footsteps of Ludwig von Mises. His 1963 book, “America’s Great Depression,” demonstrated that the economic depression of the 1930s had its origin in bad Federal Reserve monetary policy in the 1920s, and was made far worse than it needed to be by the wrong-headed interventionist policies of the Hoover Administration in the early 1930s.

Ludwig Lachmann had studied with F. A. Hayek at the London School of Economics in the 1930s, and went on to challenge the Keynesian misconception that the economy should be viewed and treated as one single aggregate lump of economic output. He subtly showed that the market is an intricate web of multitudes of individual supplies and demands interconnected in ways that could have no harmonious order to them other than through the free competitive actions of people, themselves, in a dynamic world of unexpected change.

Austrian Economics as Good Economics

The first day of the conference was highlighted by an opening evening banquet. At the dinner, free market economist Henry Hazlitt (the author of “Economics in One Lesson”) reminisced about how he first met Ludwig von Mises in the 1940s. The noted anti-Keynesian economist W.H. Hutt talked about the contributions Mises made to economics. And Murray Rothbard related some of the amusing anecdotes Mises would tell during the graduate seminars he taught at New York University from 1945 until his retirement in 1969 at the age of 87.

Milton Friedman, who had a summer home in Vermont and who had been invited to the dinner, was asked to make a few comments. He admitted that Mises had made a number of notable contributions to economics, but that he was much too “extreme” in his views on economics and public policy. Besides which, Friedman added, there was no such thing as “Austrian economics,” only good economics and bad economics.

Clearly Friedman considered the attendees at that conference to be on a “fool’s errand” in focusing on something called “Austrian” economics. But those of us attending that week considered Austrian Economics to be good economics for understanding the nature and workings of the real world of the free marketplace.

Human Action and Man as Unique Chooser

Starting the next day, a week of rigorous and incisive lectures began dealing with every aspect of “Austrian” theory. Rothbard and Kirzner laid the foundation by explaining the implications of the Austrian theory of human action and choice. The study of economics, Rothbard pointed out, begins with the fundamental axiom that man acts, that conscious action is taken to achieve chosen goals. This also implies that all action is purposeful and rational from the point of view of the actor.

All action, besides which, occurs through time. Action is taken now with the expected attainment of some result in the future. It also means that man acts without omniscience, for if an individual knew what the future would be in all its rich detail, then his action to replace one state of affairs with another would be pointless. With a guaranteed and certain future, action becomes worthless, because nothing can be changed in that future and the idea of people making their free choices becomes meaningless.

The fact that action is purposeful, chosen, and personally subjective also means that any statistical or historical studies that attempt to measure or predict human activity must be seen as having limited usefulness. Kirzner used the example of a man from Mars looking down at the earth through a telescope. The Martian observes that out of a box every day comes an object that enters another rectangular box that then moves away through a maze of canals and intersections. The Martian notices that on certain days the object that comes from the first box moves rapidly to catch up to the second, rectangular box. He then draws up a statistical study showing that one out of ten times the object will move rapidly to reach the rectangular box and uses this for predictions of “earthly” activities.

What has been totally overlooked by this method is that the first box happens to be an apartment building out of which comes an individual who goes to the street corner to catch the morning bus to work. The fact that on occasion the individual in question oversleeps and has to rapidly chase after the bus, so as not to miss it, does in no way guarantee that he may not get a better alarm clock, go to sleep earlier, or in the future, oversleep even more often. Nor does one individual’s actions determine how another individual will act in the same circumstances. Thus, to base one’s understanding of man on statistics and historical studies alone is to ignore that human action is volitional, purposeful, and changeable, dependent on the goals and means of the acting individual.

The inability of the economics profession to grasp the mainsprings of human action has resulted from their adoption of economic models totally outside of reality. In the models put forth as explanations of market phenomena, equilibrium — that point at which all market activities come to rest and all market participants possess perfect knowledge with unchanging tastes and preferences — has become the cornerstone of most economic theory.

The Market Process and the Entrepreneur

Lachmann, in an illuminating lecture, explained that the market is not a series of equilibrium points on a curve, but rather, it’s a constant process kept moving because the underlying currents of human action never rest. Men, lacking omniscience, integrate within their plans the information provided by a constant stream of knowledge about changes in resource availabilities, the relevant actions of other men, and unexpected occurrences. But because each man’s perspective and interpretation of this stream of knowledge may be different from that of others, what seems relevant to one individual may be discarded as insignificant by another.

The unknowability of the future means that individuals draw conclusions based upon expectations of what will happen over time. Divergent expectations and unexpected change, therefore, result in potential inconsistency of interpersonal plans. When errors become visible to individuals, each market participant will learn different lessons from the revised, available information. And, thus, we are again faced with the possibility of inconsistency among different market plans.

But if the plans of market participants can never be expected to smoothly and automatically mesh, what forces in the market tend toward an equilibrating, or coordinating, of the actions of multitudes of human actors? At this point, Professor Kirzner’s follow-up lecture offered the clue. Acting man is not merely a blind “taker” of prices and resource offerings; rather, because of the fact that unexpected change occurs in an uncertain future, man is also “watchful.”

Alertness to previously unseen opportunities serves as the key to the equilibrating market forces. This human capacity for alertness, said Kirzner, is the entrepreneurial role. It is not merely the difficult task of knowing when to hire and where to place the worker. It’s a much more subtle and rarified knowledge; it’s the ability of knowing where to get knowledge, of picking up bits of information that others around you have passed up and seeing the value of it for bringing into consistency a human plan or plans that otherwise would have remained in disequilibrium. The chance to profit from information about market opportunities that others have failed to see acts as the incentive for people to keep their eyes open for inconsistencies and opportunities in human plans.

Production, Time and Money in the Market Process

Lachmann and Kirzner continued this train of thought the following day with lectures on the Austrian theory of capital. Capital is the intermediate product – often the tool or machine – used to produce a finished good for consumption. Yet the many attempts to measure and quantify “society’s” capital stock fall apart when we once again emphasize the nature of purposeful action. A particular good is seen as a “production good” useful for a particular purpose only within the context of a human plan. That object that may be seen as a capital good in one instance may become totally worthless or shift to a consumer good tomorrow, depending upon the changing subjective valuations and judgments of the individuals interacting in the market.

The elusiveness of market equilibrium often means, as well, that, as Lachmann pointed out, a tendency for structural integration of interpersonal plans may exist, yet some goods that are found not to fit within existing plans may end up being scrapped and are therefore no longer really “capital” in the eyes of the valuator. Kirzner continued the discussion by pointing out that capital is the complex of “half-baked cakes,” the interim form a resource takes in the process of a human plan leading to the final stage of producing a product to satisfy the wants of some consumers.

Rothbard delivered an interesting and comprehensive lecture on the Austrian theory of money. It was Ludwig von Mises, Rothbard pointed out, who first applied the principles of marginal utility to money, showing how money originated and how exchange values were established on the market. Professor Rothbard suggested three areas for possible future research: (1) how to separate the state from money; (2) the question of free banking vs. 100-percent-gold dollars; and (3) the defining of the supply of money.

He followed up with a lecture on “New Light on the Pre-History of the Austrian School,” and showed the development of marginal-utility theories through the Middle Ages in Spain and Italy.

The Central Error in Keynesian Economics

Lachmann finished his series of lectures with critiques of macroeconomics and its recent controversies.  He argued that the market is a complex and ever-changing network of multitudes of individual actions and reactions to what everyone else is attempting to do in the pursuit of their desired goals and ends.

The Keynesian attempt to reduce all the rich complexity of human activity to a few simple statistical aggregates for government manipulation and control not only misunderstood the real nature of a dynamic and competitive market system, but was likely to lead to government policy mishaps that would create far more instability and disorder than if the political authorities simply left the market alone.

Showing How Government Policy Goes Wrong

On the last day of the conference, Kirzner and Rothbard summed up the Austrian approach within a consideration of the “Philosophical and Ethical Implications of Austrian Economic Theory.” Kirzner restated the principle of “value-freedom,” in economic analysis. As an economist, the Austrian theorist does not make judgments on ends chosen by people in the market. The economist’s task is to objectively analyze whether or not the means proposed to achieve a particular goal or end are the most appropriate or efficient to that purpose. The economist on his own cannot say or judge whether the goal or end being pursued by an individual, with whatever means chosen, is in itself “good” or “bad.”

While admitting this, Rothbard wondered if the economist could be totally value-free in all instances. What if a politician has as his goal the economic impoverishment of the nation so as to use demagoguery for gaining political power? Are we to tell him that this is a “good” means to achieve his end? Thus, Rothbard concluded, it may often be necessary to have certain value-laden principles to judge ends as well as means.

Conference Life in South Royalton

The evenings during the week were partly spent with the participants discussing the topics lectured about that day. But in addition, Murray Rothbard would “hold court” every night until the wee hours of the morning. He would tell funny stories, and relate an unending stream of hilarious anecdotes about famous people alive and dead. He amused his audience with a repertoire of “left-wing” and “right-wing” political songs that he knew in several languages. And he optimistically argued for the importance of Austrian Economics and a political philosophy of liberty if the human race was to free itself from the dangers of oppressive and harmful government.

The rustic appearance and the somewhat antiquated facilities and features of the town of South Royalton led Ludwig Lachmann to observe at the end of the week that he could now say that he knew what life had been like in the nineteenth century!

The slanted floor in the room I was staying in required me to spend the night holding on to the sides of the bed so I would not slide out the window behind the low headboard. And some strange mishap apparently befell one of the female attendees while she was taking a shower alone, one so “shocking” that she could not bring herself to relate all the details.

Another participant, who originally came from Yugoslavia, said that some things in the town seemed so “scary” at night that he admitted the following: “I lived under Nazi occupation and I endured life under communist rule in my native Yugoslavia. But last night was the first time in my life that I slept with the light on!”

The Catalyst for Austrian Economics Reborn

The organizers at the Institute for Humane Studies had sensed the rightness of the time for arranging such a conference as a catalyst for expanding interest in the Austrian School of Economics. And with that goal in mind it can only be said, forty years later, that it was a resounding success.

Between the South Royalton conference in June of 1974 and the awarding of the Nobel Prize in Economics to F.A. Hayek in October of that same year, the Austrian School began a brilliant renaissance that has once more made it one of the most important forces for sound ideas on economics and public policy making in the world today. This was assisted at first, also, by the publication of those lectures delivered at South Royalton in book form in 1976 under the title, “The Foundations of Modern Austrian Economics.”

After near oblivion in the decades immediately after the Second World War due to the dominance of Keynesian Economics, the Austrian School has been reborn. There are universities at which undergraduate and graduate students can take courses on Austrian economics with professors knowledgeable about and dedicated to the tradition that began with Carl Menger and then grew under the ideas of Ludwig von Mises and F. A. Hayek.

There are, now, at least three scholarly journals devoted to the further development of Austrian Economic ideas, plus online websites, blogs, and printed publications explaining and applying “Austrian” ideas to the contemporary policy problems of the day. In addition, well-known and respected publishing houses print both scholarly and popular books on Austrian Economics every year.

Even some prominent political figures have publicly advocated the implementation of free market-oriented policies on the basis of Austrian economic insights – including abolishing the Federal Reserve and moving money and banking into the arena of the competitive free market.

A good part of all this had its beginning with that conference on Austrian Economics forty years ago in a small, out-of-the-way New England town.

[Originally published at Epic Times]

 

Categories: On the Blog

Movement of the Permanent Internet Tax Moratorium

Somewhat Reasonable - June 22, 2014, 3:42 PM

This morning the House Judiciary Committee will undertake the markup of the Permanent Internet Tax Freedom Act.  The Act would protect consumers from the increased costs in accessing and using the Internet by permanently extending the moratorium on Internet access taxes, and would prevent multiple and discriminatory taxation of Internet sales.

The legislation already boasts deep bipartisan support, with 138 Republican and 76 Democrat co-sponsors. That’s 214 members of the House supporting it, and rumors that more will join soon suggest the total could exceed 50 percent of the chamber. The Senate version of the bill has 50 co-sponsors. So there is already enough support for a permanent moratorium that doesn’t add extraneous elements that could cause the moratorium to fail.

The legislation also enjoys broad support of thought leaders and citizens, as was made clear in an April letter to Congress. But time to pass the measure is of the essence since the moratorium will expire on November 1 of this year. If allowed to expire, states would begin to collect taxes on Internet access, or apply other discriminatory taxes that may already be in place but which have been held at bay during the moratorium.

Scott Mackey, former chief economist for the National Conference of State Legislatures and currently a consultant to the wireless industry, has estimated that an average household’s taxes would increase by $50 to $75 a year if states decide to apply their sales or telecommunications taxes to Internet access. While that doesn’t seem like much, keep in mind that that’s about what a low-income family spends in a year on subsidized school lunches. Those who qualify for such programs are exactly those who will be most negatively affected by a lapsed moratorium.
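As a rough sanity check on that range, here is a minimal back-of-the-envelope sketch in Python. The monthly access bill and the tax rates below are illustrative assumptions of mine, not figures from Mackey's analysis; they simply show how an estimate in the $50 to $75 range can arise.

    # Back-of-the-envelope check of the $50-$75/year figure cited above.
    # The bill amount and tax rates are illustrative assumptions, not Mackey's inputs.

    MONTHLY_ACCESS_BILL = 60.00   # assumed average household Internet access bill, USD
    SALES_TAX_RATE = 0.07         # assumed typical combined state/local sales tax
    TELECOM_TAX_RATE = 0.10       # assumed typical telecommunications tax rate

    annual_spend = MONTHLY_ACCESS_BILL * 12          # $720 per year on access

    low_estimate = annual_spend * SALES_TAX_RATE     # about $50 per year
    high_estimate = annual_spend * TELECOM_TAX_RATE  # about $72 per year

    print(f"Annual access spend: ${annual_spend:.2f}")
    print(f"Taxed at a sales-tax rate:   ${low_estimate:.2f} per year")
    print(f"Taxed at a telecom-tax rate: ${high_estimate:.2f} per year")

Under those assumed inputs the tax bite lands squarely in the range Mackey describes; households with larger bills or higher local telecom tax rates would land above it.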

Businesses also lose money when Congress doesn’t send a clear message. If Congress dallies—and history has proven that Congress rarely acts in time—telecommunications providers would need to prepare to collect the new taxes. That effort would be a waste of time and resources if Congress were to ride to the rescue at the last minute, waste born of government’s cavalier attitude. The result is less economic growth and fewer jobs.

Hopefully, the next step on the right path will be taken today, with the House Judiciary Committee deciding that the moratorium must continue and refraining from introducing other issues that could end its progress in the House.

[Originally published at The Institute for Policy Innovation]

Categories: On the Blog

Just What Is the Perfect Level of CO2?

Somewhat Reasonable - June 22, 2014, 10:06 AM

Ever been in an argument with an AGW proponent?

I have stopped trying to argue with someone who refuses to look at anything but that which supports his own position. It’s pointless. So in an effort to end a debate quickly, I now politely ask individuals to explain how CO2, given how small it is relative to everything around it, actually changes the entire system. That usually stops it with most of the crowd. Like many of the new age forecasters I see today, they will jump on one weather factor and not understand that its behavior is a product of everything around it.

The second thing I do is put the ball in their court. This requires knowing what went on historically with weather and climate. So I ask what the perfect number is for CO2 in the atmosphere. An example: Bill McKibben – one of the people I am frequently amazed by, because his comments indicate he either does not know and understand what the weather has done before, or does and refuses to let that get in the way – runs a group called 350.org. He and his team want CO2 at 350 ppm (parts per million). So let’s just go to 350 ppm and see what it was like.

First, here is CO2 on the “correct” scale, which is the percentage of the atmosphere. This is not what you commonly see, which is the amount of CO2 in parts per million, where CO2 is grossly over-represented. The scale should be from one to a million, not a tiny fraction of a million.

Now, by using the very tiny increment they do, and by not informing you that if you actually used the scale from one to a million this would hardly show up, they’re guilty of creative distortion of reality. After all, aren’t we measuring this against the entire atmosphere? Just think how absurd it would be if we measured against the entire system: ocean plus atmosphere. The oceans play a huge role in the climate. It’s the reason for Dr. William Gray’s spot-on assessment of this whole charade.

Anyway, on the graph below, the numbers on the left are in parts per million. We are near 400 ppm now, and the last time it was near 350 ppm was back around 1988.
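To make the scale arithmetic concrete, here is a minimal sketch in Python converting those parts-per-million readings into shares of the whole atmosphere. The 350 and 400 ppm figures are the ones discussed in this piece; the conversion itself is simply ppm divided by one million.

    # Convert the CO2 readings discussed above from parts per million
    # to a percentage of the whole atmosphere (ppm / 1,000,000 * 100).

    def ppm_to_percent(ppm: float) -> float:
        """Express a parts-per-million concentration as a percentage of the whole."""
        return ppm / 1_000_000 * 100

    for label, ppm in [("around 1988", 350), ("today", 400)]:
        print(f"{label}: {ppm} ppm = {ppm_to_percent(ppm):.3f}% of the atmosphere")

    # Output:
    # around 1988: 350 ppm = 0.035% of the atmosphere
    # today: 400 ppm = 0.040% of the atmosphere

On a chart scaled from zero to one million parts (the full atmosphere), both values sit almost on the axis, which is the point being made about how the usual ppm-only charts present the change.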

Here are just a few samples of the weather that year.

Summer:

Average since then:

That was the summer all the hysteria began on the upcoming climate disaster. But what about precipitation?

Since then:

What about hurricanes? What did the ACE Index look like? Gee, about the same as now.

In fact, after the peak when the Pacific and Atlantic were warm in tandem, it looks like this recent downturn is lower than in the late ’80s. This may be because whenever there is a “climatic shift” (in the late 1970s the shift was to warming because the PDO turned warm; it’s now the opposite), the atmosphere needs to adjust so that the processes which lead to above-normal activity can readjust.

What about ice caps? Look at the Arctic when the Atlantic was in its cold mode. 1988 had much higher anomalies than now.

But the Southern Hemisphere ice anomaly is much higher than it was then! In fact, it’s trying for a record!

In 1988, the Southern Hemisphere was as far below its averages (farther, in fact; it dropped to -1.5) as the Northern Hemisphere is now, and the forecast continues to call for Arctic sea ice extent to rise above average against the late summer minimum. That would be the first time this has happened since the Atlantic went into its warm mode.

Globally we’re well above average. Are we not supposed to consider the whole globe on this crucial matter? It was the ice caps – plural – that were supposed to melt. Could it be like almost everything in nature – a cyclical back and forth swing?

So far, the Arctic “warm season” has been colder than 1988 (last year was the coldest ever recorded).

Here it was in 1988:

The fact is, most of the “global” warming has occurred in the Arctic during the winter seasons, where temps 5-10 degrees Fahrenheit above normal are frigid anyway. Given the amount of water vapor in such low temperatures – water vapor being the number 1 greenhouse gas (100x CO2) – it’s a stretch to think this is affecting the entire global climate against anything that can be measured against normal stochastic and cyclical events.

Now you may say, “You are cherry picking.” I can cherry pick any time and find it worse. The fact that I can instantly bring up any time when weather has been more extreme says that in the past, the weather has been more extreme! We can go on forever, believe me. Here is another sample: How is it that most of the states’ record high temperatures, and the greatest decade for record low temperatures, occurred in the 1930s, when CO2 was under 300 ppm?

We are not even close now. Has anyone ever considered this? We have added considerably more weather stations, yet the state records set during a time with fewer stations than now have not been exceeded. And even though summers then were hotter, the winter extremes were colder.

Here’s a fact: CO2, like anything, has some effect on the weather and climate, probably relative to its relationship with water vapor, which is most likely influenced by the greatest store of heat (energy) in the system (and also its greatest store of CO2) – the oceans. But can you measure that effect against the natural cyclical reactions driven by much greater forces, and even against stochastic events? Can you assign it a value when every single point brought up by the AGW side can be easily countered by anyone who knows and understands what has happened in weather and climate in the past? How do you know? And given what is facing us today, is CO2’s value to the climate effectively rounded so close to zero that the whole issue is a red herring?

Look at this. The title says it all.

The answer is, you can’t.

Finally, from IPCC reviewer Dr. Vincent Gray:

Faith in things unseen is something preached in religion. But with all the counter-evidence here, it seems this worship of CO2 as the climate control knob is more religion than science. I don’t force my religion on another man; why is it these folks seem to be pushing theirs on us? And like so many other religions that believe they must convert all men to their belief, this too is a recipe for widespread misery and, as in most such cases, disaster.

So just what is the perfect level of CO2, and who among men thinks they are fit to decide that, given the overwhelming evidence that nature is in control?

Joe Bastardi is chief forecaster at WeatherBELL Analytics, a meteorological consulting firm.

© Copyright 2014 The Patriot Post

 

[Originally published at The Patriot Post]

Categories: On the Blog

The Threat of Government Internet Monopoly

Somewhat Reasonable - June 21, 2014, 9:09 PM

In the past two decades the Internet has come to be a dominant part of people’s lives. For work, pleasure, communication, and countless other uses, the Internet is an indispensable tool to many individuals. Without it, much of the information-based civilization that has been built up would stop working the way we are accustomed to.

As the Internet has become more important, so too has access to the most cutting-edge systems providing high speed, security, and data storage facilities. Broadband provides the fastest access to the Internet, and it is now essential to the functioning of the American economy both globally and locally.

The Information Age

The increased importance of the Internet has spurred a significant debate over the nature of the right to access it. Is Internet access now a fundamental right because it is a critical tool for exercising other freedoms, such as freedom of expression? As yet there is no consensus on an answer. The United Nations special rapporteur on the freedom of expression has stated,

“Given that the Internet has become an indispensable tool for realizing a range of human rights, combating inequality, and accelerating development and human progress, ensuring universal access to the Internet should be a priority for all States.”

Many countries, including France, Costa Rica, Spain, and Greece, have legally enshrined the right to Internet access. Most countries have not yet followed suit, though vigorous debate flourishes in many polities, including the United States.

If Internet access is a human right, or even recognized simply as being important for everyone to have, then how should it be ensured that everyone has access? Some suggest that governments have a duty to provide service through monopolies run by state companies.

This pro-government view is wrong-headed in the extreme. The truth is that the private sector should be allowed to provide these services; it is always the private sector, absent state bureaucracy, that provides the superior service.

The Disaster of State Monopoly

The imposition of a powerful state firm dominating the broadband market would reduce the ability of private providers to compete. The greater resources of the state would give it the power to dictate the market, making it less attractive to private investment. Creating a monopolistic provider would be very dangerous considering that this is a sector upon which much of future national development relies.

Crowding out private firms will make them less inclined to invest in new technologies, while the state provider is unlikely to fill the gap, as state utilities traditionally rely upon their incumbency and size rather than seeking out novel services. An example of this is Eircom, which, when it was the Irish state utility, provided broadband of lower quality and at a higher price than most private providers.

The end result of state dominance and reduction of private competitors is a loss of innovation, a loss of price competition, and an erosion of customer service.

Troublesome Servant, Fearful Master

Monopoly, or near-monopoly, power over broadband is far too great a tool to give to governments. States have a long history of abusing rules to curtail access to information and to limit freedom of speech. Domination of broadband effectively gives the state complete control of what information citizens can or cannot consume online.

If governments are the sole gatekeepers of knowledge, people may well be kept from information deemed against the “public interest.” It is harder for opponents of government regulations to voice their opinions online when they have no viable alternative to the state-controlled network.

The Internet is a place of almost limitless expression and it has empowered more people to take action to change their societies. That great tool of the people must be protected from any and all threats, and most particularly the state that could so profit from the curtailment of Internet freedom.

Categories: On the Blog

The EPA is America’s Other Enemy

Somewhat Reasonable - June 21, 2014, 9:54 AM

While our attention is focused on events in the Middle East, a domestic enemy of the nation is doing everything in its power to kill the provision of electricity to the nation and, at the same time, to control every drop of water in the United States, an attack on its agricultural sector. That enemy is the Environmental Protection Agency.

Like the rest of the Obama administration, it has no regard for real science and continues to reinterpret the Clean Air and Clean Water Acts. It has an agenda that threatens every aspect of life in the nation.

As Craig Rucker, the Executive Director of the Committee for a Constructive Tomorrow (CFACT) recently warned, “True to her word,” EPA Administrator Gina McCarthy, “is busily grabbing powers for EPA that Congress specifically chose not to grant, and that the Supreme Court has denied on multiple occasions.”

“The federal bureaucracy under the Obama presidency has a voracious appetite for more power. It despises individual liberty and drags down the economy every chance it gets,” Rucker warns.

In addition to implementing President Obama’s “war on coal,” which is depriving the nation of coal-fired plants that provide electricity, the EPA has announced a proposed rule titled “Definition of ‘Waters of the United States’ Under the Clean Water Act,” redefining, as Ron Arnold of the Center for the Defense of Free Enterprise reported in the Washington Examiner, “nearly everything wet as ‘waters of the United States’ or WOTUS—and potentially subject us all to permits and fines.”

The President has made it clear that the rule of law has no importance to him and his administration and this is manifestly demonstrated by the actions of the EPA. “This abomination,” says Arnold, “is equivalent to invasion by hostile troops out to seize the jurisdictions of all 50 states. WOTUS gives untrustworthy federal bureaucrats custody of every  watershed, creates crushing new power to coerce all who keep America going and offers no benefit to the victimized and demoralized tax-paying public.”

In response to the EPA’s new power grab, more than 200 House members called on the Obama administration in May to drop its plans to expand the EPA’s jurisdiction over smaller bodies of water around the nation. A letter was sent to EPA Administrator McCarthy and Secretary of the Army John M. McHugh (whose department includes the Army Corps of Engineers) asking that the proposal be withdrawn.

“Under this plan, there’d be no body of water in America—including mud puddles and canals—that wouldn’t be at risk from job-destroying federal regulation,” said Rep. Doc Hastings (R-Wash.), chairman of the House Natural Resources Committee. “This dramatic expansion of federal government control will directly impact the livelihoods and viability of farmers and small businesses in rural America.”

Nearly thirty major trade associations have joined together to create the Waters Advocacy Coalition. They represent the nation’s construction, manufacturing, housing, real estate, mining, agricultural, and energy sectors. The coalition supports S. 2245, the “Preserve the Waters of the U.S. Act,” which would prevent the EPA and Corps of Engineers from issuing their “Final Guidance on Identifying Waters Protected by the Clean Water Act.”

What has this nation come to if the Senate has to try to pass an act intended to prevent the EPA from extending control over the nation’s waters beyond the Clean Water Act, which limits such control to “navigable waters”? You can’t navigate a ditch or a puddle!

There are acts that bar agencies such as the EPA from going beyond their designated powers: the Regulatory Flexibility Act and the Small Business Regulatory Enforcement Fairness Act. The coalition says that the EPA and the Corps “should not be allowed to use guidance to implement the largest expansion of Clean Water Act authority since it was enacted. Only Congress has the authority to make such a sweeping change.”

In two decisions, one in 2001 and another in 2006, the Supreme Court rejected regulation of “isolated waters” by the EPA.

It does not matter to the EPA or the Obama administration what the Supreme Court has ruled, or what Congress has enacted in the Clean Water Act and the Clean Air Act.

We are witnessing an EPA that is acting as a criminal enterprise, and it must be stopped before it inflicts so much damage on the nation that it destroys it.

© Alan Caruba, 2014

[Originally published at Warning Signs]

Categories: On the Blog

Heartland Daily Podcast: Which States are Friendliest to Small Business?

Somewhat Reasonable - June 20, 2014, 10:10 AM

Regulations have a way of growing like weeds: unless they are rooted out, they spread. Regulatory compliance has always been a headache for small business owners who do not enjoy the cozy relationships with big government that large corporations often develop. In fact, they are frequently ignored by legislators both in Washington and in the states.

John Lieber, chief economist of Thumbtack, recently joined our Steve Stanek on the Heartland Daily Podcast for a talk on the business climate in America today. Thumbtack is an online marketplace that brings together service providers and consumers who can negotiate and organize jobs.

Every year, Thumbtack conducts a survey of its customers to develop a “small business friendliness” index detailing the friendliness of each state government toward entrepreneurs and small businesspeople. This year, the survey included nearly 13,000 small business operators who were asked questions on 11 different metrics. The findings are very interesting.

The three friendliest states were found to be Utah, Idaho, and Texas. The three least friendly states were California, Rhode Island, and Illinois. While not an overly surprising result in itself, the breakdown of the metrics revealed some interesting findings about what makes a business climate unfriendly. It turned out that the main culprit was the complexity and difficulty of a state’s licensing regulations. In fact, those surveyed said this factor was twice as important as the level of taxation. Describing the previous surveys, Lieber said this focus on regulatory compliance has persistently been the most serious factor in small business owners’ assessments of their state’s business friendliness.

The big issues in the public consciousness tend to concern taxation. Yet it is not taxation that is really killing small businesses; it’s all the red tape. This is a very interesting finding, one that could have some real implications for policy-makers. For many people, regulation is not really something they think about. Politicians and voters have to be confronted with the true cost of compliance with overcomplicated and expansive regulatory regimes.

The interesting fact is that states looking to make themselves more attractive to businesses can do so without necessarily reducing taxes. What really attracts small businesses is “a tax code that is easy to understand and easy to comply with.” That is not too tall an order, and it should be something politicians across the political spectrum can get behind.

Listen to the podcast in the player above.

Subscribe to the Heartland Daily Podcast free at this link.

Categories: On the Blog

FCC’s Netflix Internet Peering Inquiry – Top Ten Questions – Part 17 Netflix Series

Somewhat Reasonable - June 20, 2014, 9:53 AM
  1. Does Netflix have any responsibility to help provide its users the streaming service that they paid Netflix for by connecting with ISPs in the high quality manner that most all other content delivery networks do? In other words, why is Netflix such an outlier here?
  2. More specifically, when Netflix customers pay Netflix for its video streaming service, does Netflix have any responsibility to its paying streaming customers to plan, arrange, and pay for widely-available, competitive, Internet paid-peering or content-delivery-network arrangements that are most likely to ensure the highest-quality Netflix customer-streaming-experience, or is it everyone else’s legal responsibility on the Internet, but Netflix’, to ensure quality streaming to Netflix’ customers?
  3. Why is it the financial responsibility of ISPs to automatically and immediately compensate for the streaming-quality implications of Netflix’ business decisions to serve its customers over the least-costly Internet access path for Netflix at any given time, when Netflix knows full well that its cost-cutting delivery strategy necessarily has negative streaming-quality implications for its paying customers?
  4. What law or court decision requires or obligates ISPs to overbuild their network infrastructure to handle whatever amount of industry-leading downstream traffic Netflix chooses to route wherever it wants to, without warning, and without any financial arrangement to pay for their extraordinary capacity surges?
  5. Is Netflix operating and negotiating in good faith, and in a commercially-reasonable way, with the ISPs about which it is complaining?
  6. Is it “commercially reasonable” to expect in a business negotiation that business A must pay all of business B’s business costs so that business B can profit at the direct expense of business A?
  7. Since Netflix appears to be involved one way or another in most of the peering disputes covered by the media, could Netflix (with the market power that comes with being the nation’s largest generator of downstream traffic, 34% per Sandvine) have any obligation under the FCC’s Section 706 authority to be as transparent in its network management decisions and delivery-quality-assurance choices as ISPs are?
  8. If only one side of a potential peering dispute, the ISP, were to have an FCC obligation to be publicly transparent, but not the Nation’s largest Internet delivery network, doesn’t that transparency imbalance perversely incent Netflix to arbitrage and game the PR situation because the public can’t know the whole story?
  9. Why does Netflix demand that ISPs pay the whole cost of delivering Netflix’ one-third of downstream Internet traffic, when Netflix has paid the U.S. Postal Service hundreds of millions of dollars to deliver its DVDs to many of the same customers?
  10. If Netflix and others can use unlimited amounts of bandwidth and not pay their fair share of the Internet’s infrastructure costs, what economic incentive would there be to upgrade the Internet’s infrastructure to keep pace with their exploding demand, if Internet infrastructure costs were to be completely divorced from Internet infrastructure prices?

***

Netflix Research Series

Part 1:  Level 3 & Net Neutrality – Ignorance Unleashed! [11-30-10]

Part 2:  Level 3-Netflix Expose their Hidden Agenda [12-3-10]

Part 3:  Sinking Level 3 Seeking FCC Internet Regulation Bailout [12-8-10]

Part 4:  Netflix’ Open Internet Entitlement Hubris [2-1-11]

Part 5:  Fact-Checking Netflix’ Net Neutrality WSJ Op-ed [7-8-11]

Part 6:  Netflix’ Glass House Temper Tantrum Over Broadband Usage Fees [7-26-11]

Part 7:  Netflix Crushes its Own Momentum [9-20-11]

Part 8:  Netflix the Unpredictable [10-10-11]

Part 9:  Is Netflix the AOL of Web Streaming? [3-9-12]

Part 10: Netflix’ Net Neutrality Corporate Welfare Plan [5-9-12]

Part 11: 5 BIG Implications from Court Signals on Net Neutrality – A Special Report [9-13-13]

Part 12: Video: Why FCC Title II Reclassification of Broadband is a Legal Non-Starter [9-22-13]

Part 13:  Is Net Neutrality Trying to Mutate into an Economic Entitlement? [1-12-14]

Part 14:  Exposing Netflix’ Extraordinary Net Neutrality Arbitrage [1-24-14]

Part 15: Net Neutrality is about Consumer Benefit Not Corporate Welfare for Netflix [3-21-14]

Part 16: Exposing Netflix’ Biggest Net Neutrality Deceptions [6-5-14]

[Originally published at Precursor Blog]

Categories: On the Blog

Climate is for Kids

Somewhat Reasonable - June 20, 2014, 9:08 AM

This YouTube video shows children being exploited for climate change activism in Canada. Also displayed are a number of YouTube videos from around the world showing the same exploitation taking place in other countries.

These movements may cause psychological damage to the young by giving them negative feelings about the future of the planet. Throughout history, humans have benefited from the planet’s gifts, in particular energy sources, which have uplifted each successive generation. There have been setbacks, such as WWI and WWII, but progress continued. The environmental movement may reverse this process.

Categories: On the Blog

Flag Desecration: Protecting Free Expression Even When We Hate It

Somewhat Reasonable - June 19, 2014, 1:21 PM

Reverence for our national flag has long been profound in the United States, far more so than in other countries. Veneration of the Stars and Stripes has evolved beyond mere respect for a symbol of national identity into an almost religious regard for it as an emblem of American values and the American way of life.

That general reverence has led, over the years, to many state legislatures and the federal Congress passing legislation banning the desecration or burning of the flag. Such legislation generally follows similar language, effectively banning the desecration of the national flag in protests or other acts of discontent. So far these bans have been struck down by the Supreme Court, which in 1989 held them contrary to the principle of free speech. The last attempt at the national level was made in 2006, and popular support for such a ban remains high.

Proponents of a ban argue that the special symbolic value of the flag to the American people is such that it must be protected by law, and that the right to free speech does not extend to the desecration of the emblem of the nation. Yet that argument seems to curtail a form of free speech that could undermine the ability of people to protest the policies of the government.

A Visceral Action

There can be no doubt that the act of flag desecration is powerful. It causes anger, sadness, even shame in many patriotic citizens who recognize what it stands for and the sacrifices many brave men and women have made to keep it flying high.

Yet it is that very visceral quality that makes flag desecration such a potent, and important, expression of free speech or protest: it is an expression to which many people will respond.

Jarring statements grab attention and can force it onto an issue. A conventional protest can be overlooked, but images of a flag being burned immediately draw media attention and spark commentary. While some of that commentary inevitably centers on the act of flag desecration itself, it also brings focus to the cause.

When protesters are called on to explain themselves, they get a chance to present their views and promote their cause to a much wider audience than they could have reached otherwise. For that reason, flag desecration can be very valuable for gaining attention and, if done thoughtfully, for generating meaningful discourse.

Patriotic Flag-Burning

Burning a flag may not be an act of “un-Americanism” at all, in the sense of opposing widely held principles emblematic of the United States. The flag can be burned as an act of patriotism. When individuals feel the state is acting contrary to the ideals of the nation, the ideals the flag represents, burning the flag can symbolize the state’s failure to adhere to the values it is meant to defend. The act of desecration thus connects the protestor’s cause to the very ideals of the nation, and shows that it is central to the discourse over what the nation’s values are and how they should be maintained, rather than merely the ancillary opinion of a few people that can simply be discarded.

It is also important that a free society be able to question its values and how they are realized. Banning something on the basis of the majority’s easily offended sensibilities is little more than a heckler’s charter. If views are banned simply because the majority disagrees with them, that is the tyranny of the strong over the weak. The very reason our government has checks and balances is to prevent such tyranny. This is exactly why the Supreme Court has stood against the federal and state laws banning desecration of the flag: its rulings protect the rights of citizens holding a minority opinion from a majority seeking to take those rights away.

The Right to Say What Others Despise

For society to be free and democratic it must have provision for the expression of views contrary to the mainstream, even views directly oppositional to it. This must extend to the means by which we convey such messages. Public disgust is certainly not justification enough to deny the right to expression.

The exercise of a right can justly be denied only when exercising it directly harms others. Some people may have a great sentimental attachment to the symbolic significance of the flag, but they should not expect the law to enforce their sentiments on everyone. The flag, like all symbols of beliefs and groups, is not inviolable, nor is anyone’s peace of mind or health so bound up with its wellbeing that desecrating or defacing it could cause any true harm.

Furthermore, the patriotism of individuals watching a flag burning is not affected by it. This view is supported by the Supreme Court’s opinion in Texas v. Johnson, which argued that there could be no better response to a flag burning, for someone opposed to the act, than waving one’s own flag or saluting and paying respect to the burning flag. People can thus show their opposition peacefully without infringing a protestor’s right to burn a flag.

Banning flag desecration on account of moral disgust, or of the threat to public order posed by angry counter-protestors, amounts to prohibiting an otherwise lawful act because others might commit crimes in response. Clearly, neither is a justification for a ban.

The strength of a free society lies in its ability to tolerate opposing views, even those that are antithetical to the constitutional or civil laws as they stand. The protections we enjoy and jealously guard for ourselves only have meaning if we extend them to all citizens.

Categories: On the Blog

Fixing Our Dictatorial EPA

Somewhat Reasonable - June 19, 2014, 11:46 AM

Last year, Congress enacted 72 new laws and federal agencies promulgated 3,659 new rules, imposing $1.86 trillion in annual regulatory compliance costs on American businesses and families. It’s hardly surprising that America’s economy shrank by 1% in the first quarter of 2014, our labor force participation rate is a miserable 63%, and real unemployment stands at 12-23% (and even worse for blacks and Hispanics).

It’s no wonder a recent Gallup poll found that 56% of respondents said the economy, unemployment and dissatisfaction with government are the most serious problems facing our nation – whereas only 3% said it is environmental issues, with climate change only a small segment of that.

So naturally, the Environmental Protection Agency issued another round of draconian restrictions on coal-fired power plants, once again targeting carbon dioxide emissions. EPA rules now effectively prevent the construction of new plants and require the closure of hundreds of older facilities. By 2030 the regulations will cost 224,000 jobs, force US consumers to pay $289 billion more for electricity, and lower disposable incomes for American households by $586 billion, the US Chamber of Commerce calculates.

The House of Representatives holds hearings and investigations, and drafts corrective legislation that the Harry Reid Senate immediately squelches. When questions or challenges arise, the courts defer to “agency discretion,” even when agencies ignore or rewrite statutory provisions. Our three co-equal branches of government have become an “Executive Branch trumps all” system – epitomized by EPA.

Some legal philosophers refer to this as “post-modernism.” President Obama’s constitutional law professor called it “the curvature of constitutional space.” A better term might be neo-colonialism – under which an uncompromising American ruler and his agents control citizens by executive fiat, to slash fossil fuel use, fundamentally transform our Constitution, economy and social structure, and redistribute wealth and political power to cronies, campaign contributors and voting blocs that keep them in power.

Even worse, in the case of climate change, this process is buttressed by secrecy, highly questionable research, contrived peer reviews, outright dishonesty, and an absence of accountability.

Fewer than half of Americans believe climate change is manmade or dangerous. Many know that China, Australia, Canada, India and even European countries are revising policies that have pummeled families, jobs, economies and industries with anti-hydrocarbon and renewable energy requirements. They understand that even eliminating coal and petroleum use in the United States will not lower atmospheric carbon dioxide levels or control a climate that has changed repeatedly throughout Earth’s history.

Mr. Obama and EPA chief Gina McCarthy are nevertheless determined to slash reliance on coal, even in 20 states that rely on this fuel for half to 95% of their electricity, potentially crippling their economies. The President has said electricity rates will “necessarily skyrocket,” coal companies will face bankruptcy, and if Congress does not act on climate change and cap-tax-and-trade, he will. Ms. McCarthy has similarly said she “didn’t go to Washington to sit around and wait for congressional action.”

However, they know “pollution” and “children’s health” resonate much better than “climate disruption” among voters. So now they mix their climate chaos rhetoric with assertions that shutting down coal-fired power plants will reduce asthma rates among children. It is a false, disingenuous argument.

Steadily improving air pollution controls have sent sulfur dioxide emissions from U.S. coal-fired power plants tumbling by more than 40% and particulate emissions (the alleged cause of asthma) by more than 90% since 1970, says air quality expert Joel Schwartz, even as coal use tripled. In fact, asthma rates have increased, while air pollution has declined – underscoring that asthma hospitalizations and outdoor air pollution are not related. The real causes of asthma are that young children live in tightly insulated homes, spend less time outdoors, don’t get exposed to enough allergens to reduce immune hyperactivity and allergic hypersensitivity, and get insufficient exercise to keep lungs robust, health experts explain.

But the American Lung Association backs up the White House and EPA claims – vigorously promoting the phony pollution/asthma link. However, EPA’s $24.7 million in grants to the ALA over the past 15 years should raise questions about the association’s credibility and integrity on climate and pollution.

EPA also channels vast sums to its “independent” Clean Air Scientific Advisory Committee, which likewise rubberstamps the agency’s pollution claims and regulations: $180.8 million to 15 CASAC members since 2000. Imagine the outrage and credibility gap if Big Oil gave that kind of money to scientists who question the “dangerous manmade climate change” mantra.

Moreover, even EPA’s illegal studies on humans have failed to show harmful effects from the pollution levels the agency intends to impose. Other EPA rules are based on epidemiological data that the agency now says it cannot find. (Perhaps they fell into the same black hole as Lois Lerner’s missing IRS emails.) EPA’s CO2 rulings are based on GIGO computer models fed simplistic assumptions about human impacts on Earth’s climate, and on cherry-picked analyses that are faulty and misleading.

In numerous instances, EPA’s actions completely ignore the harmful impacts that its regulations will have on the health and well-being of millions of Americans. EPA trumpets wildly exaggerated benefits its anti-fossil-fuel rules will supposedly bring but refuses to assess even obvious harm from unemployment, soaring energy costs and reduced family incomes. And now Mr. Obama wants another $2.5 billion for FY-2015 climate change models and “assessments” via EPA and the Global Change Research Program.

EPA’s actions routinely violate the Information Quality Act. The IQA is intended to ensure the quality, integrity, credibility and reliability of any science used by federal agencies to justify regulatory actions.  Office of Management and Budget guidelines require that agencies provide for full independent peer review of all “influential scientific information” used as the basis for regulations. The law and OMB guidelines also direct federal agencies to provide adequate administrative mechanisms for affected parties to review agency failures to respond to requests for correction or reconsideration of scientific information.

Those who control carbon control our lives, livelihoods, liberties, living standards and life spans. It is essential that EPA’s climate and pollution data and analyses reflect the utmost in integrity, reliability, transparency and accountability. A closed circle of EPA and IPCC reviewers – accompanied by a massive taxpayer-funded public relations and propaganda campaign – must no longer be allowed to rubberstamp junk science that is used to justify federal diktats. Governors, state and federal legislators, attorneys general, and citizen and scientific groups must take action:

  • File FOIA and IQA legal actions to gain access to all EPA and other government data, computer codes, climate models and studies used to justify pollution, climate and energy regulations;
  • Subject all such information to proper peer review by independent scientists, including the significant numbers of experts who are skeptical of alarmist pollution and climate change claims;
  • Demand that new members be appointed to CASAC and other peer review groups, and that they represent a broad spectrum of viewpoints, organizations and interests;
  • Scrutinize the $2.5 billion currently earmarked for the USGCRP and its programs, reduce the allocation to compel a slow-down in EPA’s excessive regulatory programs, and direct that a significant portion of that money support research into natural causes of climate change; and
  • Delay or suspend any implementation of EPA’s carbon dioxide and other regulations, until all questions are fully answered, and genuine evidence-based science is restored to the regulatory process – and used to evaluate the honesty and validity of studies used to justify the regulations.

Only in this manner can the United States expect to see a return to the essential separation of powers, checks and balances, economic and employment growth – and the quality, integrity, transparency and accountability that every American should expect in our government.

 

[Originally published at Townhall.com]

Categories: On the Blog