Many people probably do not know it, but American shipping is still governed by the Merchant Marine Act of 1920, better known as the Jones Act. As Keli’i Akina, president of the Grassroot Institute of Hawaii, asserted in a recent op-ed in the Honolulu Star-Advertiser, this antiquated law is economically debilitating:
The most economically debilitating plank of the Jones Act requires that ships carrying cargo between U.S. ports be built in the United States. This has created an artificial scarcity of ships largely due to the inefficiency and extraordinary cost of U.S. ship construction, driving up freight and charter rates and thus limiting domestic commerce.
As a consequence, U.S. shipbuilding yards today construct fewer than 1 percent of the world’s deep draft tonnage, and the ships produced for the commercial market come at a hefty price.
If you’ve ever wondered why some goods cost so much when shipped through U.S. ports, the reason is directly related to the Jones Act. The act’s stated purpose (Title 46 App., Chapter 24, § 861) reads:
It is necessary for the national defense and for the proper growth of its foreign and domestic commerce that the United States shall have a merchant marine of the best equipped and most suitable types of vessels sufficient to carry the greater portion of its commerce and serve as a naval or military auxiliary in time of war or national emergency, ultimately to be owned and operated privately by citizens of the United States; and it is declared to be the policy of the United States to do whatever may be necessary to develop and encourage the maintenance of such a merchant marine, and, insofar as may not be inconsistent with the express provisions of this Act, the Secretary of Transportation shall, in the disposition of vessels and shipping property as hereinafter provided, in the making of rules and regulations, and in the administration of the shipping laws keep always in view this purpose and object as the primary end to be attained.
Essentially, no foreign-built ship can carry goods between domestic ports. Each ship must be built in the U.S., owned by U.S. citizens, and crewed by American citizens and permanent residents. Talk about driving up the costs of shipping! As a result, prices for goods are higher than they otherwise would be because of this nearly century-old piece of legislation.
I am reminded of the fabulous HBO series The Wire, which in one season chronicled the decline of shipping in Baltimore as a result of inefficient union control and corruption. Not surprisingly, unions oppose replacing the act with a more free-market, consumer-friendly law, in order to protect jobs. As a matter of good public policy, and for the sake of consumers, the Jones Act should be repealed.
Few people likely know or care that yesterday was Constitution Day, and those who do have probably already been audited by the IRS or had their 501(c)(4) applications denied. But Tuesday, September 17, 2013, marked the 226th anniversary of the United States Constitution, one of the most important documents in the history of human freedom and the foundation of the last best hope on earth for government of the people, by the people, and for the people.
Thinkers as diverse as fiction writer Stephen King, Founding Father Benjamin Franklin, and the oft-misquoted Alexis de Tocqueville have ruminated on the origins and future of government, and all but the most radical anarchists recognize the need for at least some of it. How much government should exist remains, indeed, the essential struggle of our time.
Some of us, like Thomas Jefferson, believe that government is best which governs least, leaving individual citizens to pursue their own hopes and lives and dreams within a minimalist framework that protects individuals from coercion by others. Others, like Franklin D. Roosevelt and Woodrow Wilson, believe that educated or cultural elites should engineer a “socially just” society through rules, regulations, and programs, providing for most people’s material needs as a matter of “right” at the expense of others.
Some will always criticize the U.S. Constitution for providing too little government and others for providing too much, but little doubt exists about what the Constitution was actually designed to do. Because – in the immortal words of Lord Acton – power tends to corrupt and absolute power corrupts absolutely, our Constitution divides power between the states and the national government, and the power of the national government among three branches: the legislative, the executive, and the judicial, named in that order. Its genius is as simple as that – and all the rest is gravy.
But the devil is in the details and, like “free market” economics, the problem is not so much in the Constitution’s design as in its execution. Like a flag that’s been flown far too long, over the past 226 years our Constitution has become torn and battered, in some respects unrecognizable. Some spots shine brightly while others are faded; still others seem missing completely and, here and there, a patch has been added on.
Within fifteen years of the Constitution’s ratification, a judiciary designed to be the “least dangerous” of the three co-equal branches by virtue of its insulation from politics asserted its superiority over the other two branches in Marbury v. Madison (1803) and has since become overtly political – think Bush v. Gore, Roe v. Wade, or National Federation of Independent Business v. Sebelius (the case that gave us Obamacare).
A legislature designed with two houses specifically to help balance power between the states and the national government – in which the people directly elect the members of one house but not the other – has been turned on its head. Since the 17th Amendment, the Senate has become nominally an upper house with less turnover than the House of Lords, while, in the supposedly more populist House, members get to select their constituents by redrawing districts to suit the party that controls the state.
Meanwhile, a President whose very title – “Mr. President” – signifies that he (or, from the looks of it, soon a she) is but one of us has become increasingly imperial over time. Today the White House occupant announces that he’d prefer to wait for Congress but that in these “not normal” times he needs to do things himself – much like the devious “Big Jim” in Stephen King’s “Under the Dome.”
So where do we go from here? Do we resign ourselves to the inevitable, that the natural condition of humankind is not freedom but a short and brutish life in which the common folk are ruled by others who enrich themselves at our expense? Do we give up on the dreams of the Founders and their spiritual descendants, Abraham Lincoln and Martin Luther King, Jr., to bestow the blessings of liberty on all God’s children? Do we accept that the fundamental transformation of America has not only begun but is finished?
If not, how do we change things in a positive direction? In the wake of this year’s Constitution Day we could do a lot worse than to study Mark Levin’s latest book, The Liberty Amendments: Restoring the American Republic (Simon & Schuster 2013), available at Wal-Mart and wherever else fine books are sold. A trim 208 pages plus Appendix and footnotes, Mark’s book thoughtfully proposes eleven amendments – a new Bill of Rights – that would restore the Constitution to its original intent.
Odd though it seems that the Constitution would need amending to return it to its roots, Mr. Levin’s proposed amendments would do just that: establishing term limits for members of Congress and the Supreme Court; returning the selection of Senators to the states rather than the people; limiting spending, taxing, and the federal bureaucracy; promoting free enterprise and protecting private property; granting the states authority to amend the Constitution directly and to check the power of Congress; and, finally, protecting the vote by restricting the franchise to actual U.S. citizens.
“We live in perilous times,” he says, and “the challenges are daunting. … This is our generation’s burden. We have our work cut out for us. But there is a way forward. The Constitution.”
Truer words were never spoken.
Professor Susan Crawford’s Bloomberg op-ed, “New FCC Head Must Reclaim Authority over Telecom,” exposes a profound lack of substance: it is unable to identify any real market problem warranting FCC regulation.
Let’s review Professor Crawford’s litany of contrived policy problems.
First, she charges that ISPs are working “to ensure no regulator has any real authority over them.” No, ISPs are pointing out the unique excessiveness of having THREE government entities exercise authority over them on the same general matters. ISPs are not asking for any reduction in the authority of the DOJ or the FTC. Specifically, Verizon is asking the D.C. Circuit Court of Appeals to decide whether the FCC exceeded its legal authority in imposing prophylactic common-carrier-like regulation on companies that have not done anything wrong.
Second, she charges that in 2002, the FCC “gave up any authority to require that network access providers not discriminate…” No, the FCC decided that broadband was a competitive information service that should not be regulated like a monopoly telephone company. The FCC was simply extending its decades-old policy from the Computer Inquiries I, II, and III decisions, which rightly sought not to impose common carrier regulation on computer services in order to promote innovation. The FCC determined that broadband was a computer/Internet service, not just a telephone network.
Third, she demands that the new FCC Chairman mandate the nuclear option of Title II reclassification of broadband: “It’s imperative that Wheeler reclaim the FCC’s authority over telecommunications.” No again. Currently there is no problem stemming from the FCC’s lack of common carrier authority over broadband. However, if the FCC were to follow Professor Crawford’s recommendation, it would pull the rug out from under an industry that has invested, in good faith, almost a trillion dollars in modern and competitive broadband facilities under the legal, policy, and political precedents, assurances, and consensus that broadband would not be common carrier regulated. That unwarranted, unjust, and capricious action would cause monstrous problems.
Finally, she imagines that: “The U.S. lacks any plan to upgrade from cable to faster fiber-optic connections, and there is no competition among providers to drive technology upgrades… [or] to treat fairly any interconnecting networks...” Obviously Professor Crawford has not done her research and does not know that Comcast demonstrated a 3 Gigabit cable broadband network at the Cable Show, and that CableLabs has plans for more than 10 Gigabit cable broadband capability. She also does not understand that the existing, well-functioning Internet peering system has never labored under common carrier regulation. She somehow imagines that the very few companies like Netflix, which transport orders of magnitude more traffic than almost anyone else, are somehow being discriminated against if they negotiate an agreement to pay for some of the costs their massive traffic imposes on others.
In sum, Professor Crawford gamely tried to identify a real problem to justify common carrier regulation of the broadband Internet, but could not. That speaks volumes. If this is the best the Save the Internet movement’s torch-bearer can come up with after several years of trying, there is no there there, only smoke and mirrors.
[First Published by Precursor]
Ecotality declared Chapter 11 bankruptcy on Monday.
Compared to the deathwatches of other failed Recovery Act beneficiaries – like battery maker A123 Systems and electric automaker Fisker Automotive – Ecotality’s was short. A July 25th report issued by the Department of Energy’s Inspector General declared Ecotality’s EV Project largely a waste of time and misallocated money.
Then in mid-August Ecotality informed the Securities and Exchange Commission it was in deep financial trouble, with bankruptcy a possibility. A filing showed that the company was unable to obtain additional financing and the DOE had ceased payments to it for the EV Project until the agency could investigate further. DOE also warned Ecotality to not incur any new costs or obligations under the EV Project.
NLPC first raised questions about Ecotality’s viability and origins in October 2011.
Monday’s development is another black eye to President Obama’s green energy agenda, but we’ve come to learn that each flop is just another reason for his Energy Department to look on the bright side. DOE spokesman Bill Gibbons told the Washington Free Beacon’s Lachlan Markay in a statement yesterday that stimulus support for Ecotality was “meant to establish the seeds of infrastructure needed to support a growing market for advanced vehicles, [and] the company installed more than 12,500 charging stations in 18 US cities—or approximately 95 percent of their goal.”
The attitude echoes comments made by others from DOE after similar collapses. When Colorado-based Abound Solar declared Chapter 7 bankruptcy in June 2012, DOE deputy director of Public Affairs Damien LaVera wrote a lengthy article defending the agency’s “investments” in solar energy with the attitude of “Hey, they can’t all be winners!”
“Of the $400 million that Abound was originally approved for, the Department only lent the company less than $70 million,” LaVera wrote at the time. “Because of the strong protections we put in place for taxpayers, the Department has already protected more than 80 percent of the original loan amount. Once the bankruptcy liquidation is complete, the Department expects the total loss to the taxpayer to be between 10 and 15 percent of the original loan amount.”
Yes, great job DOE!
Then there was last week’s testimony by former Loan Programs Office director Jonathan Silver, at a House Oversight and Government Reform Committee hearing about secret email exchanges on private accounts. When challenged by Rep. Jim Jordan of Ohio about millions of dollars in squandered “investments” resulting from his agency’s poor judgment, Silver said the losses represented only three percent of the portfolio and one percent of the loan loss reserve set aside by Congress for the stimulus, which he said made the program a “success.”
While DOE grant evaluators may be slapping each other on their backs for their great accomplishments, and the superior judgment they think they’ve exercised on behalf of the taxpayers, those of us in the real world wonder if this interminable nightmare will ever end. Nissan North America also appears to be concerned. The all-electric Leaf – which is supposed to be manufactured in much greater quantities outside Nashville now, thanks to Nissan’s $1.4 billion taxpayer-guaranteed loan – is somewhat dependent on the chargers produced and deployed by Ecotality. The bankruptcy notice said Nissan loaned the company $1.25 million to continue operations until the process is completed.
Nissan has an interest in not seeing Ecotality’s thousands of “Blink” chargers become glorified lampposts. According to PlugInAmerica.com, at least 5,700 Leaf owners received free chargers through the EV Project, and many more own chargers that were heavily subsidized. In addition, Ecotality’s chargers were deployed throughout ten major metro areas, where they were supposed to create a system in which EV owners could conveniently find spots in their daily routines to repower while they shopped or worked. A large-scale uprooting of the chargers, much like the one retail chain Costco undertook a couple of years ago, would be an even greater disaster for DOE and for EV manufacturers like Nissan.
Bloomberg reported yesterday that Ecotality said it had installed more than 8,000 home chargers and 4,000 commercial chargers. The DOE Inspector General noted in his July report that the intent of the EV Project was to create a system of chargers that would alleviate owners’ “range anxiety,” meaning that they could drive and not worry whether or not they could make it to their next stop before running out of power. The report reasoned that the purpose of the EV Project was to “develop, implement, and study techniques for optimizing the effectiveness of infrastructure supporting widespread electric vehicle deployment,” an agenda established by President Obama as part of his plan to have one million electric cars on the road by 2015.
So the heavier deployment to homeowners, rather than to businesses and public locations, undermined that goal. Worse, the Inspector General criticized DOE for approving reimbursements that allowed Ecotality to count as its “match” the full monthly costs of the electric cars, chargers, and Internet service of EV owners who participated in the program – over $550 per month, according to the IG. Because of that generous accounting, Ecotality received taxpayer funds to offset costs that were not solely project-related.
“…the vehicles and Internet connections were purchased to satisfy personal needs of consumers, not solely for the project,” the IG reported.
Ecotality’s rollout of the chargers in this fashion was in part the result of weaker-than-projected (but not unexpected by those who truly understand the laws of economics) adoption of EVs. Now the next unintended consequence is that Nissan and other electric automakers such as General Motors (with the Chevy Volt) and Ford (with a $5.9 billion taxpayer loan for alternative vehicles production) are somewhat dependent on a system of chargers whose maintenance, software updates, and repair are now in doubt. Hence the $1.25 million Nissan loan while the car companies and the government figure out what to do next.
The scrutiny will come quickly about Ecotality’s crony capitalism and spending practices, as well as DOE’s foolish decision to award such a huge grant to an obviously incapable and inexperienced company. One example: the company paid steep rents (vendors are listed at Recovery.gov) – sometimes five figures monthly – in nearly every city where it ran the EV Project. With such poor adoption of EVs, it’s hard to see why Ecotality representatives or contractors couldn’t have worked out of less expensive locales – like their homes. And why did Ecotality need to relocate its headquarters from Arizona to ritzy office digs near the banks of San Francisco Bay?
Some answers may be suggested by a Washington Free Beacon source. Reporter Lachlan Markay quoted an Ecotality executive who blamed the company’s plight on previous CEO Jonathan Read, who “offered no leadership and either directly or indirectly […] squandered or pocketed all the government money.” In a shareholder conference call a few years ago, Read described himself as a “political beast” who would play the political card very hard. His background was in executive management for the Park Plaza hotel chain and Shakey’s International. As Markay reported, “Read boasted about his political connections, and received bonus payments contingent on ECOtality winning DOE support.”
DOE has paid $96 million so far to Ecotality in reimbursed costs related to the EV Project. It’s hard to see how much, if any, of that will be recovered for taxpayers in the planned bankruptcy auction. They may be stuck with a bunch of dead-weight chargers that need to be removed as well. But remember, that is all just part of the success story that is the DOE clean energy portfolio.
[First Published by National Legal and Policy Center]
…[F]or many patients the most basic elements of care were neglected. Calls for help to use the bathroom were ignored and patients were left lying in soiled sheeting and sitting on commodes for hours, often feeling ashamed and afraid. Patients were left unwashed, at times for up to a month. Food and drinks were left out of the reach of patients and many were forced to rely on family members for help with feeding. Staff failed to make basic observations and pain relief was provided late or in some cases not at all. Patients were too often discharged before it was appropriate, only to have to be re-admitted shortly afterwards. The standards of hygiene were at times awful, with families forced to remove used bandages and dressings from public areas and clean toilets themselves for fear of catching infections.
These conditions caused the deaths of an unknown number of patients. It may sound like a Nazi concentration camp or a third-world “failed state” like Yemen, but it wasn’t. It took place in one of the most advanced industrial democracies in the world.
What should happen in such a situation? Should the facility be closed? The staff fired? Management arrested and tried for manslaughter? At least sued for malpractice? Would it make any difference to you if it was a private or a public facility?
In fact the quote above was taken from a press release announcing the “Final Report Of The Independent Inquiry Into Care Provided By Mid Staffordshire (England) NHS Foundation Trust.” The 500-page report was mandated by the House of Commons and chaired by Robert Francis QC, who was quoted as saying –
It is now clear that some staff did express concern about the standard of care being provided to patients. The tragedy was that they were ignored and worse still others were discouraged from speaking out.
Management knew what was happening, but failed to correct it and even suppressed any discussion of the problems.
Again, what should be done?
Enter Don Berwick.
Dr. Berwick was brought in to chair another committee — the National Advisory Group on the Safety of Patients in England, which issued another report and recommendations for action.
Berwick’s report is a complete whitewash of the situation. Here are a few of its observations, along with my comments.
Let’s start with Berwick’s personal letter to “Senior Government Officials and Senior Executives of the Health Service.” He writes –
You are stewards of a globally important treasure: the NHS. In its form and mission, guided by the unwavering charter of universal care, accessible to all, and free at the point of service, the NHS is a unique example for all to learn from and emulate.
Good grief, could he be more gushing, even in the face of glaring and criminal incompetence? No one on earth wants to emulate the NHS. Not one nation is trying to replicate the British system. It is the laughing stock of the world. The things about it that Berwick admires are the very things that made this atrocity inevitable as we will discuss below.
Patient safety problems exist throughout the NHS as with every other health care system in the world.
So, it’s no big deal, just the way things go. Get used to it.
NHS staff are not to blame — in the vast majority of cases it is the systems, procedures, conditions, environment and constraints they face that lead to patient safety problems.
NHS staff are not to blame? Who developed the procedures, conditions, environment and constraints? Where is the procedural rule that told the staff to let people lie in their own feces and urine? Who decided to leave food and water out of the reach of the patients? What kind of monster would step over a suffering patient and do nothing? Would Don Berwick be so sanguine if these things happened in a private hospital? Of course not! Heads would roll. But since it is a government hospital, no one is to blame.
In some instances, including Mid Staffordshire, clear warning signals abounded and were not heeded, especially the voices of patients and carers (sic).
So people ignored the abundant “warning signals” and those people are also “not to blame”?
The system must…abandon blame as a tool and trust the goodwill and good intentions of the staff.
What goodwill? What good intentions? If Ford built defective cars that killed hundreds of innocent people, would Don Berwick insist that we “trust the goodwill and good intentions of the staff?”
Many people probably died from avoidable causes, and many more suffered unnecessary indignities and harm…(but) without ever forgetting what has happened, the point now is to move on.
Yes, move on. Nothing to see here. No one is to blame. No one is accountable. “Many people probably died from avoidable causes” and that is really sad, but let’s “move on” to happier topics.
Some of the recommendations are contradictory. Berwick’s commission says that everyone involved in the system must be committed to constant improvement and patient safety, but it also says –
(The NHS should) ensure that responsibility for functions related to safety and improvement are vested clearly and simply in a thoroughly comprehensible set of agencies, among whom full cooperation is, without exception, expected and achieved.
So, on one hand everyone must be involved, but on the other, it is the responsibility of a limited number of agencies, allowing everyone else to say, “Sorry, that’s not my job, it is the work of the Bureau of Patient Safety.”
Most of the report is a long series of self-serving platitudes about continual improvement, life-long education, focus on the patient, and so on. It insists that patients are central to the mission, but even this is contradicted by the make-up of the commission itself. The appendix notes that –
The Committee assembled was dominated in a majority by scientists — experts in organizational theory, quality improvement, safety and systems — and with a healthy minority of people currently in management positions within the NHS in England.
Where are the patients who are supposedly so central to the whole shebang? Not worth including, I guess.
And here is the real problem with the NHS. Like the commission itself, patients are an afterthought. They have no power, no authority in the NHS. The entire system is based upon the idea that well-meaning experts will do things for (or to) supplicating patients who get their services for free and have no choice in what they get.
But the current scandal shows that these experts are not always well-meaning or even competent. What happens to the hapless patient then? They are left to lie in their urine-soaked beds with food and water out of reach. There is no recourse, other than to “move on.”
Without patient empowerment, there is no “system” that can prevent such abuses. We can implore the experts to be caring and competent all we want, but some will not be, and it is impossible for committees to police every action by every “caregiver.” And if no one is ever “blamed” for any wrongdoing, it is futile to even try.
Another excuse provided by Berwick’s committee was recent budget cuts and resulting staffing shortages. Faced with such shortages, what is a hospital to do? Well, it might have requested that patients make up the difference. It might have charged patients a small portion of the costs, maybe $10 a day, $25 a day, whatever it takes to avoid the staffing shortages. I expect patients would have gladly paid such a fee to avoid the humiliation, pain and even death they experienced by getting their care for “free.”
But such a remedy would have violated Dr. Berwick’s devotion to the NHS’s “unwavering charter of universal care, accessible to all, and free at the point of service.” So, political ideology trumps all else. Sure, patients may suffer, but they suffer for free and we experts can pat ourselves on the back for being so caring.
[First Published by NCPA]
Among the prime targets in disabling an enemy’s ability to wage war is its energy infrastructure. Destroying the utilities that provide a nation’s electricity, or its ability to refine oil, is critical to crippling its ability to function, given the universal use of hydrocarbons such as coal, natural gas, and oil.
If an enemy were doing this to America, we would go to war against it. But this is being done, and the enemy is the very government on which we depend to ensure the nation has the energy it needs to function and grow. Leading the war on America has been the Environmental Protection Agency, joined by the Department of Energy, the Department of the Interior, and other agencies.
The Institute for Energy Research has estimated that the government’s technically recoverable oil and gas is worth $128 trillion, about eight times our national debt. Our coal resources in the lower 48 states are estimated to be worth $22.5 trillion.
On September 10, The Wall Street Journal reported that “The Obama administration plans to block the construction of new coal-fired power plants unless they are built with novel and expensive technology to capture greenhouse-gas emissions, according to people familiar with a draft proposal.” The U.S. has more than 27% of the world’s known coal reserves.
Greenhouse gas emissions are primarily carbon dioxide (CO2), a gas vital to all life on Earth, the “food” on which vegetation depends. It plays no role whatever in a “global warming” that is not occurring. It is emitted by the Earth’s many active volcanoes and hot springs. It is exhaled by humans and land animals. It is the product of the combustion of hydrocarbons. Even as CO2 has increased in the atmosphere, the Earth has entered a cooling—not a warming—spell since the late 1990s. Its atmospheric concentration is a very tiny 0.039 percent by volume.
It is, however, the justification for much of the EPA’s enforcement activity. As the Journal’s report noted, “The only way coal plants could comply is to capture carbon dioxide emissions and stick them underground—a costly process that hasn’t been demonstrated at commercial scale before.”
The idea of “capturing” CO2 and holding it underground is about as idiotic as it gets. More CO2 means more abundant crops to feed humans, livestock, and wildlife. It means healthier forests and jungles. Yet this is what would be required if the EPA gets its way. And even if it were possible, it would drive up the cost of electricity to consumers.
If implemented, the proposal would guarantee one thing: fewer coal-fired plants and, as a result, less production of electricity. In 2012, the American Energy Institute warned that “coal’s share of U.S. electricity is expected to fall to below 40 percent this year from 42 percent last year and produce the lowest share since data was collected in 1949. Just five or six years ago, its share of electricity generation was 50 percent.”
The EPA isn’t content with stopping the construction of coal-fired plants. In April 2013, a decision by the D.C. Circuit Court of Appeals upheld the EPA’s veto of Arch Coal’s Spruce Mine in West Virginia. The decision pushed aside the Army Corps of Engineers, which normally conducts the environmental reviews and had granted approval to the mine in 2007.
The EPA ordered the Corps to withdraw the permit. This transfer of power to the EPA imperils all future coal mining projects. A Wall Street Journal article about the EPA’s project veto noted that “A recent study by Berkeley Professor David Sunding estimates that some $220 billion of annual investment depends on these permits; the fact of an EPA veto will deter new investment.” EPA warnings have caused a British mining giant, Anglo American, to walk away from the proposed Alaskan “Pebble” mine—potentially the largest gold and copper project in North America.
It is not just coal whose use is targeted by the EPA. Fracking technology has unleashed a boom in natural gas, but the Obama administration has nominated an enemy of natural gas, Ron Binz, to chair the Federal Energy Regulatory Commission (FERC). Binz regards natural gas as a “dead end” because he too is a believer in carbon capture and storage. His answer to a non-existent global warming is “renewable” energy sources such as solar and wind; solar currently provides 0.01% of the electricity fed to the grid, and wind provides just 2%. FERC oversees much of the gas business and could effectively deter the growth of this industry, with all of its attendant benefits from jobs to the reduction in the cost of electricity.
A recent report by the Republican members of the Senate Environment and Public Works Committee exposes the way the EPA has “pursued a path of obfuscation, operating in the shadows, and out of the sunlight.” The report noted how the former administrator established an alias identity in order to discuss agency business without having to report on it. The report provides a lengthy description of violations of the Freedom of Information Act and other federal laws and regulations intended to encourage transparency in government.
All of this is going on while the nation languishes in the long recovery from the 2008 financial crisis, while creating jobs is vital to that recovery, and while the agency continues its long history of resisting the provision of energy in any form to Americans.
It is a war being waged on Americans, most of whom are unaware of it, but are being victimized by it.
A hearing of the House Committee on Oversight and Government Reform last week investigated the Obama administration’s practice of concealing email communications, with former top officials getting grilled about their use of private Internet accounts to conduct government business.
Two of the most egregious offenders were subject to withering scrutiny, although it didn’t last long enough to get very deep. Lisa Jackson, the former EPA Administrator whose FOIA-evading email account used the alias “Richard Windsor” – named in part for her dog – was questioned about a message sent to Siemens vice president Alison Taylor in which she asked her to “use my home email rather than this one when you need to contact me directly….”
Jackson, of course, said it was perfectly normal to direct a corporate official she regulates to communicate with her via methods to which the public has no access. Marlo Lewis of the Competitive Enterprise Institute provides an excellent summary of South Carolina Republican Rep. Trey Gowdy’s questioning of Jackson at GlobalWarming.org.
And then there was Jonathan Silver, former director of the Department of Energy’s Loan Programs Office. He came under fire – especially from Committee Republican Jim Jordan of Ohio – about his directives to keep messaging out of the public eye. The Congressman confronted Silver, who came to the loan program from the venture capital realm, with an email he sent in August 2011 from his personal account to a few staff members. The message was the subject of questioning Silver received by the same committee in July 2012.
“Don’t ever send an email on doe email with private addresses,” Silver wrote to a DOE colleague’s Gmail account. “That makes them subpoenable (sic) (i.e., subject to subpoena).”
Silver’s excuse was that he was “not trying to evade anything” and that the message was simply an admonition about handling personal vs. public communications. Jordan was having none of it, asking why, if that was the case, Silver didn’t respond to the message from his DOE email account. Silver said he could not explain because he could not remember where he was when he sent the message.
“I’ll tell you what I think happened,” Jordan interrupted. “I think you were trying to conceal it.”
The Ohio Congressman then called attention to emails between Silver and John Woolard, CEO of Brightsource Energy, who was awaiting final approval of a DOE loan for a California solar project. One message extended an open invitation from Silver for Woolard to stay at his home “anytime,” noting that the “guest bedroom is ready.” Jordan then refreshed Silver’s memory about a March 2011 piece of correspondence – which the Committee also raised in that 2012 hearing – in which Woolard and former Brightsource Chairman John Bryson sought Silver’s help in editing a letter to White House Chief of Staff William Daley, in which they would ask for “a commitment from the (White House) to quarterback loan closure between the Office of Management and Budget and DOE by March 18.”
“I think you were trying to help your friends,” Jordan said to Silver in last week’s hearing. The letter he referenced was never sent, but Brightsource did receive a $1.6 billion loan.
Jordan then unleashed a barrage of accusations at Silver: that he sought to conceal official government business on private email accounts; that he was so focused on helping friends get money that he was willing to help them appeal to the White House for the loan to be expedited; that 22 of 26 companies that got loan approvals had credit ratings of double-B-minus; and that six of the companies have gone bankrupt.
“The taxpayer got the shaft all the way around in this program,” Jordan said.
The congressman then showed an email that committee staff had received from the law firm that also happens to represent Silver, requesting that questions not be directed to him. Silver claimed to know nothing about the request.
Silver entered his DOE role with great excitement from the venture capital community. Prior to taking the Loan Programs Office leadership role, he was co-founder and managing director of Core Capital Partners, “a successful early-stage investor in alternative energy technology, advanced manufacturing, telecommunications and software.” Before that he held senior positions with several other investment and finance firms. A number of others from the VC realm jumped to the Obama administration with him.
The Energy Department considers Silver’s tenure a “large” success.
“Under Mr. Silver’s leadership,” DOE’s Web site says, “the Loan Programs Office has grown to become the largest project finance effort in the United States. Since Mr. Silver took office, the agency has committed over $40 billion in 42 clean energy projects with total project costs of over $63 billion. Cumulatively, these projects create or save over 66,288 jobs across 38 states and avoid over 38 million metric tons of carbon dioxide, equivalent to taking over 4.5 million vehicles off the road or about as many vehicles as in the state of Michigan. The program’s 23 generation projects produce over 32 million megawatt hours, enough to power nearly 3 million homes.”
Of course those are made-up numbers – not actual measurements. DOE also boasted how the Loan Programs Office, under Silver, underwrote the world’s largest wind farm; two of the world’s largest solar thermal power plants; the nation’s first nuclear power plant in three decades; several large geothermal projects; one of the country’s first commercial-scale cellulosic ethanol plants; and three “successful” electric vehicle launches.
Note the boasting emphasizes “large,” which doesn’t mean “successful.” There is no such thing as “too big to fail” in any business, much less clean-tech – Solyndra, A123 Systems, and Ener1 batteries being just a few examples. The three electric vehicle “launches” were not initiated because of DOE either – Tesla, Nissan (Leaf) and Ford (various EVs) were already well underway before the stimulus came along, and the jury is still out on whether their electric car ventures will succeed or not. As for the nuclear plant, its future is in serious doubt as well.
Rep. Jordan assailed Silver’s record last week, as he cited several bankrupt recipients of DOE support. Silver’s response was that the losses represented only three percent of the portfolio. Jordan correctly noted the amount was “millions and millions of dollars.”
“Not every investment will be successful, but the vast majority have been,” Silver testified in perfect bureaucratese, the kind you would expect from a former venture capitalist.
Jordan responded with incredulity, noting that 22 of the loan recipients had a double-B-minus credit rating, when no one in the private sector would have given them a loan.
“You guys go ahead and (loan the money), and six of them go bankrupt, and that’s a success?” Jordan retorted.
In responses to questions from Florida Republican Rep. John Mica, Silver acknowledged having private communications with two investors – John Doerr of Kleiner, Perkins, Caufield and Byers, and Ira Ehrenpreis of Technology Partners – while projects they were invested in were under consideration for DOE loans. Doerr’s investment, Fisker, is near failure, and Ehrenpreis had stakes in Tesla Motors and Abound Solar, the latter of which went bankrupt last year.
As Chairman Issa explained during last week’s hearing, investors in the failed companies who seek redress in the courts may have been wronged if those private communications – which are legally the property of the public – have been deleted. It’s just another example of the despicability of crony capitalism.
[First Published by National Legal and Policy Center]
Last week, while America dithered over whether or not to depose Syria’s president, an ocean away, a different leader was decisively dumped. The election of Australia’s new prime minister has international implications.
On September 7, in a landslide election, Tony Abbott became Australia’s new Prime Minister—restoring the center-right Liberal-National coalition after six years of leftward economic policies. Conservatives the world over are looking to learn from Abbott. In the Wall Street Journal (WSJ), Tom Switzer sums up the “resounding victory” this way: “Abbott did the very thing so many US Republicans and British Tories have shied away from in recent years: He had the courage to broaden the appeal of a conservative agenda rather than copy the policies of his opponents. As a result, Australians enjoyed a real choice at the polls.”
Conservatives have a right to be rejoicing. As Jerry Bowyer points out in Forbes: “the Anglosphere is now post progressive. The English speaking nations of the world: England, New Zealand, Canada and now Australia are governed by conservatives. America stands apart from them as the sole remaining major leftist-governed power in the Anglo world.” He then points out how the English-speaking peoples “tend to move in a sort of partial political sync with one another.”
While this should sound alarms for liberals, the real panic is with the global warming alarmists.
Abbott is said to have run a “tight campaign”—though he was “remarkably vague over his economic plans.” The Financial Times reports: “Abbott was much clearer on his intention to scrap a carbon tax and a levy on miners’ profits.”
Abbott ran an almost single-issue campaign saying: “More than anything, this election is a referendum on the carbon tax.” While there are debates as to whether or not he will have the votes needed in the Senate to overturn the Labor Party’s policies (though it looks like he can do it), the will of the people couldn’t be clearer. As Switzer observes: “what changed the political climate was climate change.” In Slate.com, James West calls the election “the culmination of a long and heated national debate about climate change.” Abbott has previously stated: “Climate change is crap.”
Add to the Abbott story the news about the soon-to-be-published Intergovernmental Panel on Climate Change’s “fifth assessment report,” which “dials back on the alarm,” and you’ve got bad news for alarmists. Addressing Abbott’s win, West writes: “Politicians enthusiastic about putting a price on carbon in other countries must be looking on in horror.”
It is not just the politicians who are “looking on in horror.” It is everyone who has bought into, as the WSJ calls them, “the faddish politics of climate change”—those who believe we can power the world on rainbows, butterflies, and fairy dust are panicked. Their entire world view is being threatened.
This was clearly evident at last week’s hearing in Santa Fe, New Mexico, regarding the proposed change in compensation for electricity generated by rooftop solar installations. The hearing was scheduled in a room typically used for Public Regulation Commission meetings. Well before the scheduled start time, it became clear that a bigger auditorium was needed—and it was filled to capacity. The majority was, obviously, there in support of solar—they were carrying signs. Thirty-nine of them gave public comment in opposition to the proposed rule changes. After each comment, they hooted, cheered and waved their signs—until the Chairman prohibited the sign waving. Two of the women went by only one name, “Lasita” and “Athena,” with no last name, linking themselves to some goddess. Several referenced Germany’s success with renewable energy.
They were organized, rabid in their support, and intimidating to anyone who dared disagree. At one point the Sierra Club representative took control of the hearing and, completely ignoring the Chairman’s instructions, stood at the front of the room and, with hand-waving gestures, got everyone there in opposition to the proposed change to stand up and wave their signs. A smattering of individuals remained seated. Three of us spoke in favor of the proposed change. I brought up those who’d held up Germany as a model to follow and posited that they didn’t know the full story.
At the conclusion of the meeting, a petite woman marched up to me and demanded: “What do you do?” I calmly told her that I advocate on behalf of energy and the energy industry. “Oil?” she sneered. “Yes.” “Coal?” “Yes.” “Gas?” “Yes.” “Nuclear?” “Yes.” “It figures,” she hissed as she went off in a huff. When I approached my vehicle in the parking lot, I feared my tires might have been slashed. They weren’t.
Australia’s election was early this month. Germany’s is later—September 22. As climate change played a central role in Australia’s outcome, green policies are expected to be front and center in Germany’s election.
In an article titled: “Ballooning costs threaten Merkel’s bold energy overhaul,” Reuters points out that Merkel’s priority, assuming she wins a third term, “will be finding a way to cap the rising cost of energy.” “In the current election campaign,” Der Spiegel reports, “the federal government would prefer to avoid discussing its energy policies entirely.” Later, addressing Germany’s renewable energy policy it states: “all of Germany’s political parties are pushing for change. … If the government sticks to its plans, the price of electricity will literally explode in the coming years.”
German consumers pay the highest electricity prices in Europe. “Surveys show people are concerned that the costs of the energy transformation will drive down living standards.” Spiegel claims: “Today, more than 300,000 households a year are seeing their power shut off because of unpaid bills.” Stefan Becker, with the Catholic charity Caritas, wants to prevent his clients from having their electricity cut off. He says: “After sending out a few warning notices, the power company typically sends someone to the apartment to shut off the power – leaving the customers with no functioning refrigerator, stove or bathroom fan. Unless they happen to have a camping stove, they can’t even boil water for a cup of tea. It’s like living in the Stone Age.” This is known as Germany’s “energy poverty.”
Because of “aggressive and reckless expansion of wind and solar power,” as Der Spiegel calls it, “Government advisors are calling for a completely new start.” Gunther Oettinger, European Energy Commissioner, advised caution when he said Germany should not “unilaterally overexpose itself to climate protection efforts.”
While the solar supporters in Santa Fe touted the German success story—“more and more wind turbines are turning in Germany, and solar panels are baking in the sun”—“Germany’s energy producers in 2012 actually released more climate-damaging carbon dioxide into the atmosphere than in 2011.” Surprisingly, according to Der Spiegel, Germany’s largest energy producer, E.on, is being told not to shut down older and inefficient coal-fired units. Many of the “old and irrelevant brown coal power stations” are now “running at full capacity.”
Interestingly, one of the proposed solutions for Germany’s chaotic energy system is much like what has been proposed in New Mexico and Arizona. Reuters writes: “instead of benefiting from a rise in green energy, they are straining under the subsidies’ cost and from surcharges.” The experts propose a system more like Sweden’s, in which “the government defines the objective but not the method.” Der Spiegel explains: “The municipal utilities would seek the lowest possible price for their clean electricity. This would encourage competition between offshore and terrestrial wind power, as well as between solar and biomass, and prices would fall, benefiting customers.” If implemented, the Swedish model “would eliminate the more than 4,000 different subsidies currently in place.”
The Financial Times reports: “Nine of Europe’s biggest utilities have joined forces to warn that the EU’s energy policies are putting the continent’s power supplies at risk.” It states: “One of the biggest problems was overgenerous renewable energy subsidies that had pushed up costs for energy consumers and now needed to be cut.”
“It is only gradually becoming apparent,” writes Der Spiegel, “how the renewable energy subsidies redistribute money from the poor to the more affluent, like when someone living in a small rental apartment subsidizes a homeowner’s roof-mounted solar panels through his electricity bill.” Sounds just like what I said in my public comment at the PRC hearing in Santa Fe.
Australia’s election changed leaders. Germany’s election will likely keep the same leader, but Merkel “has promised to change but not abolish the incentive system right after the election.”
While other countries are changing course and shedding the unsustainable policies, America stands apart from them by continuing to push, as the Washington Post editorial board encourages, building “the cost of pollution into the price of energy through a simple carbon tax or other market-based mechanism.” President Obama’s nominee to chair the Federal Energy Regulatory Commission, Ron Binz, believes in regulation and incentives to force more renewables and calls natural gas a “dead end.”
In a September 5 press release with the headline: “Administration Should Learn From Australia’s Carbon Tax Failure Before Committing US to Same,” Senator David Vitter (R-LA) says: “We can add Australia as an example to the growing list of failed carbon policies that are becoming so abundant in Europe.”
It is said: “The wise man learns from the mistakes of others, the fool has to learn from his own.” Sadly, it appears that the US has not learned to beware of the foolish politics of climate change.
[First Published by Townhall]
In light of the recent “national” strike conducted by 0.09% of fast food workers in the US, many op-eds and commentaries have been penned by economists trying to justify the strikers’ demands. Most of the arguments for the minimum wage are easily dismissed by Econ 101 references to supply and demand graphs. However, given the immense popularity of the minimum wage, proponents have been developing more and more convoluted defenses of the law.
This blog post will look at a few of the most absurd arguments made on behalf of the minimum wage law this year:
1. “Tonight, let’s declare that in the wealthiest nation on Earth, no one who works full-time should have to live in poverty, and raise the federal minimum wage to $9.00 an hour. This single step would raise the incomes of millions of working families. It could mean the difference between groceries or the food bank; rent or eviction; scraping by or finally getting ahead. For businesses across the country, it would mean customers with more money in their pockets. In fact, working folks shouldn’t have to wait year after year for the minimum wage to go up while CEO pay has never been higher. So here’s an idea that Governor Romney and I actually agreed on last year: let’s tie the minimum wage to the cost of living, so that it finally becomes a wage you can live on.” – Barack Obama.
There is almost too much wrong here to unravel. Virtually every statement is inaccurate, misleading, or founded on faulty claims.
First, a full-time worker making $7.25 per hour (the federal minimum wage) earns $14,500 per year with two weeks of vacation. That puts him $3,010 above the $11,490 federal poverty line for an individual adult living alone. A family of four, in which both parents make minimum wage, earns $29,000 per year, putting it $5,450 above the $23,550 poverty line for a four-person household. To say that an individual literally cannot live off minimum wage is blatantly false by the government’s own welfare standards, not to mention the fact that millions of minimum wage workers don’t die off every year.
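The arithmetic above is easy to verify. A minimal check, using only the wage and the 2013 poverty-guideline figures cited in the text:

```python
# Sanity check of the wage-vs-poverty-line arithmetic above, using the
# 2013 figures cited in the text.

MIN_WAGE = 7.25            # federal minimum wage, dollars per hour
HOURS_PER_WEEK = 40
WEEKS_WORKED = 50          # full time, with two weeks of unpaid vacation

annual_income = MIN_WAGE * HOURS_PER_WEEK * WEEKS_WORKED
print(annual_income)       # 14500.0

POVERTY_LINE_SINGLE = 11_490       # one adult living alone (2013 guideline)
POVERTY_LINE_FAMILY_OF_4 = 23_550  # four-person household (2013 guideline)

# An individual minimum-wage earner vs. the single-adult poverty line:
print(annual_income - POVERTY_LINE_SINGLE)        # 3010.0

# Two minimum-wage parents vs. the four-person poverty line:
family_income = 2 * annual_income
print(family_income - POVERTY_LINE_FAMILY_OF_4)   # 5450.0
```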
Second, the idea of being able to “declare” away an economic reality should make every economist shudder. Why does Obama stop there? He should implore us to declare away poverty, disease, war, laziness, stupidity, and death.
On a more realistic level, this “declaration” language plays into the progressive conception of politics and economics. Obama does not see the market as a network of organic, voluntary human interaction, but as a machine which can be reached into and tweaked with scientific precision. Annoying things like “supply and demand” and “individual desire” are merely byproducts of the machine, not its foundation and fuel. If millions of individuals can be helped by “declaring” higher wages, then why not do so?
2. “Our women [business owners] who pay a living wage have an advantage over their larger counterparts who don’t. Whether Obama’s proposal is high enough or the time frame is fast enough is the question.” – Margot Dorfman, CEO of the US Women’s Chamber of Commerce.
This statement should immediately set off alarms in any economist’s head. If paying “living wages” is automatically better for business than paying “dying wages” (or whatever), then why aren’t all businesses paying living wages? Surely corporate greed compels big companies to maximize profits, and if paying higher wages will maximize profits, then why is this even an issue?
This is not to say that businesses could never benefit from a wage increase. However, I tend to believe that individual business owners, whose livelihoods depend upon their bottom line, are better equipped to handle the financial structure of their own companies than anyone else, especially government bureaucrats.
3. “[If the minimum wage was raised] I would pay a couple of dollars more for products, but the question then is, do I get a raise too? If my salary goes up, I will be willing to pay even more for my products,” – R.B. Barrett, pro-minimum wage protester.
In July, the Huffington Post published a story on an economic analysis which claimed that if McDonald’s doubled its minimum wage pay, it would only have to increase the cost of its signature Big Mac burger by 68 cents to maintain current margins. In its zeal to attack corporate America, Huffington didn’t realize its prestigious research came from an undergrad student at the University of Kansas who incompetently omitted crucial data that drastically affected the outcome. When this information came to light, Huffington admitted the error and removed the original article. Regardless of the research presented, an economist should have been able to dismiss the entire study with little analysis.
Prices determine costs. Costs do not determine prices.
Prices are determined by the supply and demand of a specific product. The cost of production is then calculated in relation to the revenues generated by the product’s price. If the product’s variable (non-fixed) costs cannot be brought below its price, then the product is uneconomical and should not be produced. By extension, the minimum wage has no direct impact on prices, but only an indirect impact via wages as a component of variable costs. Due to the circuitousness of this connection, any analysis which claims that it can predict minute price changes as a result of minimum wage increases should be automatically tossed aside as junk economics. Unfortunately, anti-minimum wage advocates often make this error as well.
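The decision rule described above (the market sets the price; production continues only if that price covers variable cost) can be sketched in a few lines. All the numbers below are hypothetical, chosen only to illustrate the rule:

```python
# A minimal sketch of the decision rule described above. The market sets
# the price; the producer then checks whether that price covers the
# product's variable (non-fixed) cost. All numbers are hypothetical.

def worth_producing(market_price: float, variable_cost: float) -> bool:
    """Produce only if each unit's market price covers its variable cost."""
    return market_price >= variable_cost

# A hypothetical burger the market prices at $4.00:
print(worth_producing(4.00, 3.60))   # True: economical to produce
print(worth_producing(4.00, 4.30))   # False: uneconomical to produce
```

Note that the causality runs one way: the $4.00 price is an input taken from the market, and the cost comparison comes afterward, which is why a wage-driven bump in costs cannot simply be "passed through" as a predictable price change.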
I have no doubt that under very specific circumstances, in very specific times and places, raising the minimum wage can benefit poor workers. Given that such an occurrence seemingly violates the law of supply and demand, does this mean that we should all throw out our economics textbooks and start the whole science again?
No, fundamental laws of economics like supply and demand do not need empirical testing to be validated. It is common sense that the more costly an action is, the fewer times the action will be performed. One does not need a peer-reviewed, double-blinded research analysis to determine this. Supply and demand are inherent components of human action, not wild variables that change by the week.
The empirical tests which demonstrate the efficacy of the minimum wage law invariably leave out the unseen costs of economic manipulation, including the variables altered by the test itself. Imagine if a test shows that raising the minimum wage doesn’t increase unemployment, contrary to what a conventional classical economist would predict. This leaves two options: either the market actors are irrational and violated the law of supply and demand, or the minimum wage increase happened to coincide with a natural increase in business which canceled out the negative effects of the minimum wage increase.
Either option is possible. If the market actors are irrational, then the study is merely looking at an anomaly (if it is not an anomaly, then economics would effectively not function, and we would still be living in caves). If the market actors are rational, then there are unseen forces which created the false impression that the minimum wage is harmless.
Too many commentators pretend that economics is no different from physics or chemistry in its methodological approach, but nothing could be further from the truth. A glass of water will always freeze at 0°C under standard conditions, but a human will not always buy more of a product if its price falls for a variety of possible reasons. Economic laws are predictions of human behavior generated by individuals with free will, not analyses of the reactions of inanimate rocks and molecules.
Clumsy critics will attempt to wield empirical studies as a weapon without understanding their true nature. Don’t be fooled by their attempts.
The government-mandated blend of ethanol in every gallon of gasoline is a full-fledged disaster, and neither Congress nor the Environmental Protection Agency shows any indication of repealing or abandoning it.
A recent Wall Street Journal editorial said, “A strong candidate for the most expensive policy blunder of recent years would have to be the mandate to blend corn ethanol and other biofuels into the nation’s gasoline supply. Last month even the Environmental Protection Agency essentially acknowledged that the program is increasingly unworkable and costly to consumers. The EPA just won’t do much to fix it.”
Some future historian will calculate how many trillions this nation wasted when it passed a law in 2007 that was supposed to reduce greenhouse gas emissions to save the Earth from global warming and to provide a domestic energy source to compete with OPEC oil.
Implicit in that calculation will have to be the millions of automobile engines ruined by ethanol. Another element of the calculation is the way the cost of food at home and around the world was increased needlessly by requiring that approximately 42% of the U.S. corn crop be used for ethanol production. That is more than the amount of corn used to feed livestock and poultry nationwide.
Only an environmentalist would think it was a good idea to burn food as fuel instead of permitting corn to be used as part of the nation’s food chain and for export. As a former Republican member of the House of Representatives, Bob Beauprez of Colorado, noted in a Washington Times commentary, “The ethanol mandate has sent corn prices skyrocketing, harmed cattle and poultry producers, forced refiners to waste money on ethanol credits, and hiked food prices worldwide.”
The ethanol mandate is so crazy that not only must corn be sacrificed, but “cellulosic” ethanol, made from switchgrass and wood chips, is also required. It isn’t even being produced despite the law, yet the EPA continues to levy fines against oil company refineries for failing to buy and use a fuel that doesn’t exist. The cost of that is, of course, passed along to consumers.
In January, the D.C. Court of Appeals struck down the EPA’s 2012 cellulosic mandate as unrealistically high. The EPA has announced that it is reducing the 2013 cellulosic ethanol mandate to a mere six million gallons.
Because refiners are producing less gasoline than the mandated ethanol volumes assume, the only way for them to remain in compliance with the blending requirement has been to purchase ethanol credits called Renewable Identification Numbers (RINs). The price of these RINs has climbed from seven cents in January to a high of $1.43 in July. All this does is increase the cost of gas at the pump, without a single good reason for the ethanol blend to exist.
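For a sense of scale, the RIN price run-up described above can be put into rough numbers. The two prices come from the text; the refiner’s shortfall volume below is a purely hypothetical round number used only for illustration:

```python
# The RIN prices below are the figures quoted in the text; the refiner's
# RIN shortfall is a hypothetical round number used only for scale.

rin_price_jan = 0.07    # dollars per RIN, January
rin_price_jul = 1.43    # dollars per RIN, July high

print(rin_price_jul / rin_price_jan)    # roughly a 20x jump in six months

shortfall = 100_000_000  # hypothetical: RINs a refiner must buy for the year
print(shortfall * rin_price_jan)    # about $7 million at January prices
print(shortfall * rin_price_jul)    # about $143 million at the July high
```

A twenty-fold swing in a compliance cost of that size is the kind of expense that gets passed straight through to the pump.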
Determined to force ethanol into our fuel, the EPA granted a partial waiver for the sale of E15, a motor fuel that contains 15% ethanol and 85% gasoline, approving it for use in 2001 and later model cars and trucks. The Wall Street Journal noted a survey by AAA found that “only 5% of vehicles are approved for higher levels of ethanol under manufacturer warranty.”
All this waste and stupidity comes from three decades of lies about global warming and the supposed need to reduce greenhouse gas emissions. The President continues to lie about global warming/climate change despite the fact that Earth has entered its seventeenth year of a cooling cycle.
The EPA continues to distort the nation’s and the world’s food supply so far as corn is concerned. It continues to ignore the damage ethanol inflicts on auto and other engines. It ignores the needless increase in the cost of gasoline to the consumer.
Americans are afflicted with a government that is indifferent to the facts about ethanol and the EPA remains intent on punishing Americans for their use of gasoline.[First Published by Warning Signs]
Economic rationality, competition, and broadband pricing freedom would be the big winners, and common carrier-like net neutrality the big loser, if the Appeals Court panel decides Verizon v. FCC as expected.
Monday’s intense tag-team grilling of the FCC’s lawyer by Judges Tatel and Silberman left most observers thinking the Court will decide it is illegal for the FCC to impose common-carrier-like regulation on broadband providers — regardless of what else they decide.
- (Listen to the Judges’ one-hour grilling of the FCC – here – it’s the middle half of the two-hour court recording.)
This single point of relative clarity is a big deal with many implications, if the Court indeed decides it is illegal for the FCC to impose common carrier-like regulation on broadband providers via its Open Internet order.
Summary Tentative Conclusions:
- The tent-pole net neutrality assumption of the Tim Wu/Susan Crawford-Save-the-Internet movement – i.e. common-carrier-defined net neutrality for broadband — may actually be illegal, not legitimate U.S. policy as many have long assumed.
- Two-sided broadband markets and usage based pricing are normal legal economic practices, paving the way for commercially-negotiated broadband payments by, and usage pricing for, big cost-causing edge providers, like Netflix and Google-YouTube.
- Consumers would be able to pay less, not more, for broadband if they were no longer forced to shoulder the full cost of Internet access by subsidizing the biggest edge companies, like Netflix and Google-YouTube, which consume roughly half of the Internet’s peak traffic, per Sandvine.
- Over-the-top (OTT) Internet video models would no longer enjoy the current FCC entitlement of zero pricing for edge companies, like Netflix and Google-YouTube; they would have to negotiate commercial arrangements to pay for the costs their mass video streaming causes.
- If the FCC then tries to reclassify broadband as a Title II common carrier service, this court would likely strike it down, for the many strong reasons discussed at the end of this analysis.
I. Common-carrier-defined net neutrality is likely illegal.
Many observers, including this one, thought the key question for this court was whether or not the FCC had the direct legal authority to do what it wanted. The big surprise came when a majority of the court signaled that the cornerstone of the FCC Open Internet order, a non-discrimination mandate on broadband providers, is likely to be found illegal. The reason for the surprise is that Judge Tatel, in his Cellco decision in the data roaming case, did not find the FCC’s policy in that order to be common carrier-like regulation. Different facts, however, can lead to a very different conclusion.
Concerning the FCC Open Internet Order’s cornerstone anti-discrimination provision, Judges Tatel and Silberman indicated the FCC’s provision was indistinguishable from the Title II, Section 202, common carrier non-discrimination provision, which cannot legally be applied to an unregulated information service like broadband.
This potentially is a very big deal. There is virtually no broadband industry objection to freedom-defined net neutrality, i.e. a user’s unfettered freedom to access the legal content, apps and services of their choice over the Internet, subject to reasonable network management. In stark contrast, there are very strong broadband industry objections to common-carrier-defined net neutrality, which in effect could be monopoly rate regulation of prices, terms and conditions of competitive companies without market power.
This Tatel/Silberman legal interpretation could further isolate common-carrier-defined net neutrality proponents on the political fringe. It forces them to advocate that the FCC reclassify broadband as a Title II common carrier service, the extreme political equivalent of asking the FCC to use a policy weapon of mass destruction on the broadband sector, to somehow save the FCC's authority from natural obsolescence.
The competitive broadband industry has invested several hundred billion dollars in broadband Internet infrastructure based upon the FCC's repeated precedents over the last decade that broadband is an unregulated information service, not a common-carrier-regulated monopoly telephone service.
The FCC knows there is strong bipartisan congressional opposition and Republican FCC commissioner opposition to common-carrier-defined net neutrality. (Importantly, later in this post I will explain why an attempt by the FCC to reclassify broadband as a Title II service would likely be rejected by this court.)
II. Two-sided broadband markets and usage based pricing are legal.
Judges Tatel and Silberman repeatedly signaled opposition to the FCC mandating zero pricing for edge providers and defining the charging of edge companies as unreasonable discrimination.
This is a very big deal. This court has finally provided some legal and economic adult supervision to the political net neutrality debate. Economics is not per se discrimination. Economics naturally dictates that those who get more value or use more pay more. That's economics 101; mandated zero pricing is uneconomics 101.
If the court effectively rules normal economics and commercial negotiations are not discriminatory, it will expose that much of the net neutrality debate is really motivated by uneconomics and who gets subsidized, not about freedom, free speech, censorship or discrimination.
Look no further than the writings of Professor Tim Wu (who coined the term “net neutrality” in 2003), to understand how common-carrier-defined net neutrality is all about economic subsidies; who gets to pay nothing; and who gets stuck with their bill. See Professor Wu’s 2009 paper that was repeatedly cited in the FCC’s Open Internet order, entitled: Subsidizing Creativity through Network Design: Zero Pricing and Net Neutrality.
Net neutrality to them is really about securing economic subsidies: who gets to pay nothing and who gets stuck with the bill. Talk of freedom, free speech, blocking, discrimination, and censorship is basically political cover and misdirection to avoid a public discussion of subsidies for edge providers, which would not be popular because they don't need them.
This upcoming court decision will likely help focus the net neutrality debate on the hidden economic subsidy scheme behind the political net neutrality rhetoric. This shift in the debate could have big implications for consumers and for big edge companies, which I will discuss next.
III. Consumers would pay less, not more, for broadband without FCC common-carrier-like net neutrality.
The great service the incisive minds of Judges Tatel and Silberman have done here is to put laser focus on the nonsensical legal and economic impact of the FCC’s de facto mandating of zero pricing for edge providers.
Common sense tells us that if broadband consumers, residential and business, are the only ones paying for the bandwidth they use, yet "edge providers," whose bandwidth-voracious video streaming accounts for most of the Internet's traffic, are entitled to zero pricing (free bandwidth), then broadband consumers are economically subsidizing the few bandwidth-hogging edge providers.
Simply put, consumers are paying more than they should have to pay, because the FCC effectively has mandated subsidies for "edge providers."
Today’s FCC rules perversely force consumers to subsidize Internet streaming companies like Netflix and Google-YouTube by barring a two-sided market from naturally forming. Consumers don’t pay for the full cost of a newspaper because advertisers pay the rest.
Consumers wouldn’t have to shoulder the full cost burden of video streaming, if those companies streaming video paid for their own streaming distribution costs. Why should consumers have to subsidize NetFlix and Google’s free lunch when they consume 45% of the Internet’s peak traffic per Sandvine? In other words, how much is the implicit subsidy or FCC tax on broadband consumers because it has mandated zero pricing – free bandwidth – for edge providers?
Let me debunk the net neutrality proponents' myth that a two-sided market would mean consumers or startups paying more than they now pay for their broadband tier of service. Virtually 100% of consumers and 99% of businesses would face no risk of increased costs from the end of zero pricing for edge providers; they would be the ones to benefit from the video streamers contributing their fair share of the broadband Internet infrastructure's cost.
The entities that face paying more if zero-pricing subsidies are illegal are the big video streaming companies like Netflix and Google-YouTube, which I will discuss next.
IV. OTT providers’ presumed entitlement of consumer-subsidized free video distribution is at risk.
Over-the-top (OTT) video streamers should be most concerned by what Judges Tatel and Silberman signaled Monday. They did not see payments from edge providers for better, more, or faster bandwidth as discriminatory, but as economics and business. It is noteworthy that the judges specifically mentioned Google over a dozen times in their questions and hypothetical arguments. The only other company mentioned frequently in the oral arguments was Verizon, which challenged the FCC's order.
This is a big deal because the common-carrier-defined net neutrality movement has created a conventional wisdom that on the Internet everything is, or should be, free, and that this somehow means OTT video streamers should have no cost obligation for the bandwidth their video streaming business consumes in distributing their service to consumers. Apparently the Appeals Court views this conventional business wisdom as resting on an illegal FCC mandate.
Netflix in particular has the most to worry about. It is all in on the assumption that its business is entitled to a permanent subsidy of free Internet distribution, even as Sandvine indicates that Netflix consumes ~30% of the Internet's peak traffic. Netflix has a nosebleed P/E of 377, over twenty times that of the overall market, based on a fast-growth model in which it plows almost all of its ~$1 billion gross margin back into the business to maintain its ~20% growth.
Netflix’ big problem here is that a large part of their current gross margin is the implicit FCC bandwidth subsidy of the FCC’s Open Internet order. Without a video streaming free lunch subsidized by all broadband consumers, Netflix would have to pay for their own distribution like most every other company does in the economy. If they had to pay for at least some of the real costs that they currently shift to consumers via net neutrality, they would have much less cash flow to fund their fast growth.
Simply, investors may not know it yet, but the Appeals Court may effectively expose Netflix as an FCC-inflated stock, much as WorldCom, Qwest, Global Crossing, and other fiber backbone companies were FCC-inflated stocks during the tech bubble over a decade ago because of the implicit CLEC subsidies the FCC bequeathed them via skewed rate regulation.
Google also has something to worry about if its FCC video distribution subsidy goes away. Unlike Netflix, Google does not enjoy an FCC-inflated P/E, and its ~$30b in gross profit is ample enough to cushion the additional cost of paying its fair share of bandwidth.
Nevertheless, a change in video streamers' economics in the U.S. is significant because of Google's clear dominance of Internet video distribution in the U.S. Per ComScore, Google reaches 84% of the U.S. Internet audience. Per Nielsen, Google has a 65% share of all U.S. online video streams, 25 times the total Internet video streams of its nearest competitor.
V. Title II reclassification of broadband is likely illegal.
Proponents of common carrier-like net neutrality regulation will push the FCC to reclassify broadband as a Title II common carrier service. They will cite the Supreme Court Brand X decision, which decided the FCC could determine if broadband was a telecom service or information service, and the recent Arlington Supreme Court decision affirming Chevron Deference. They will imply that these combined precedents technically confer on the FCC near imperial power to decide the economic fate of the entire broadband sector at any time.
What they will conveniently ignore is everything else that has happened — before and since the FCC confirmed broadband was an unregulated information service — that would likely make a potential FCC Title II reclassification of broadband illegal and/or unconstitutional.
Facts and Merits Haven't Changed: The FCC's original decision to confirm broadband was an unregulated info service was consistent with decades of FCC Computer Inquiry precedents that sought to not regulate computer networks (like the Internet) in order to encourage innovation. The FCC on the facts and merits determined repeatedly that broadband networks were computer/information, router-based, packet-switched networks; they were not telephone-switched voice networks warranting monopoly telephone rate regulation. Those foundational technological, functional, and market facts and merits have not changed; if anything, they have grown more supportive of an information services classification.
Arbitrary & Capricious Whipsaw: Reclassification after all this time (for the transparent purpose of regulatory relevance) would be highly vulnerable to legal challenge as an arbitrary and capricious, unreasonable whipsaw, and 180 degree reversal of forty years of policy, precedent and evidence. The FCC would have a steep climb to factually defend and justify that most everything the FCC has thought, done and defended for ~forty years of deregulating computer/information services was somehow all wrong. It’s not reasonable for the FCC to argue, “because we say so;” the FCC has to comply with procedure, the law, due process and the Constitution.
Assume Unbounded Classification Power? If in the middle of the game, the FCC unilaterally has wide latitude to change most all of the rules of the game for some players, like broadband, via “reclassification,” the FCC logically could have wide latitude (and Chevron Deference) to reclassify other businesses, like some types of edge providers, as common carriers too.
If nothing else matters but the two Supreme Court precedents, the FCC essentially could imagine it has carte blanche or imperial power to reclassify any Internet-technology company as a common carrier to preserve the open Internet. Then those tech-entities would have to sue in court on the facts, and win, to escape the FCC’s common carrier grasp.
Simply, what is the legal limitation on the FCC from reclassifying most any information service as a common carrier service?
A Bait & Switch Taking? The FCC's last classification decision was made when broadband was a nascent business serving a small fraction of Americans. Now it is a mature business serving virtually all Americans. Since the FCC's decisions confirming deregulation, the broadband industry has invested several hundred billion dollars in broadband infrastructure, as a direct result of the formal classification decision and the prospect of a stable policy precedent and a competitive market of risk and reward.
Broadband companies never would have invested or operated as they have over the last decade, if there was the real potential that their networks could become monopoly telephone rate regulated networks.
If the FCC reclassified, it could be considered the biggest regulatory "bait and switch" in U.S. history: promising one thing to attract nearly a trillion dollars of private-sector, risk-capital infrastructure investment, only to unreasonably seize the private property of competitive companies a decade later, without just compensation, based on no reasonable evidence that their economic model enjoys market power or is harming the public sufficiently to justify public utility regulation.
No Justification of a Problem: FCC reclassification would run directly counter to the law and evidence. After a decade of broadband market experience, the FCC to date has found no significant evidence of any broadband problem warranting a rate regulation solution. Thus reclassification could run afoul of equal protection of the law and precedent that regulatory solutions must be reasonably proportionate to the problem they propose to address.
Communications Law Dissonance: An FCC reclassification of broadband would depend on an obsolete 1934 interpretation of the FCC’s authority as the rate regulator of a telephone monopoly, and run completely counter to Congress’ 1996 Telecom Act modern purpose: “To promote competition and reduce regulation…” and Internet policy statement: “It is the policy of the United States… to preserve the… competitive free market… Internet… unfettered by Federal or State regulation.”
The practical effect of the FCC reclassifying broadband would be an effective rewrite of the 1996 Telecom Act to favor monopoly regulation over competition, and FCC regulation over consumer choice. That is not ambiguous. That is obvious.
In short, FCC reclassification of broadband as a common carrier service would run counter to the evidence, policy, precedent, the law, Congress, and the Constitution. The FCC would be supplanting Congress’ policy, role and authority. Simply, FCC reclassification of broadband at this point is a legal, political and Constitutional non-starter that also would be enormously destructive economically.
[First Published by Precursor]
The Nongovernmental International Panel on Climate Change (NIPCC) on Tuesday, Sept. 17 will release a major new report on climate change science produced by an international team of 40 scientists at a press conference at the James R. Thompson Center in downtown Chicago.
The new report, titled Climate Change Reconsidered II: Physical Science, challenges what its authors say are the overly alarmist reports of the United Nations’ Intergovernmental Panel on Climate Change (IPCC), whose next report is due out later this month.
(NOTE: If you cannot attend the Chicago press conference in person, a conference call with the NIPCC scientists will take place the same day, Tuesday, Sept. 17 at Noon Central Time. Register here to participate in the conference call. You can download the full report and the Summary for Policymakers at noon Central Time Tuesday, Sept. 17 at this site.)
What: Press conference announcing release of Climate Change Reconsidered II: Physical Science
When: 10:00 a.m., Tuesday, September 17
Where: James R. Thompson Center, 100 West Randolph Street, Press Room (15th Floor), Chicago, Illinois USA
Who: Lead author S. Fred Singer, Ph.D., professor emeritus of environmental science at the University of Virginia, director of the Science and Environmental Policy Project; lead author Craig Idso, Ph.D., chairman, Center for the Study of Carbon Dioxide and Global Change; co-author Willie Soon, Ph.D., chief science advisor, Science and Public Policy Institute
Media: Open to all credentialed press; register here for the Noon Central Time conference call after the live Chicago event
Copies of a Summary for Policymakers, an executive summary, and the entire book (unbound) will be available to reporters at the news conference. All three documents will be available for free online following the news conference.
Quotes for pre-release attribution:
Joseph Bast, president of The Heartland Institute:
“This is probably the most important report on climate change ever produced. Its breadth and depth rival that of the IPCC’s reports. Its authors have no agenda except to find the truth. It anticipates and soundly refutes the IPCC’s hypothesis that global warming is man-made and will be harmful. And it comes at a time when global warming alarmism is retreating among academics, the general public, and the political class.”
Dr. S. Fred Singer, Ph.D., atmospheric and space physicist, professor emeritus of environmental science at the University of Virginia, founder of the Science and Environmental Policy Project (SEPP):
“Scientists have not been able to devise an empirically validated theory proving that higher atmospheric CO2 levels will lead to higher global average surface temperatures (GAST).
“Moreover, if the causal link between higher atmospheric CO2 concentrations and higher GAST is broken by invalidating each of the EPA’s three lines of evidence, then the EPA’s assertions that increasing CO2 concentrations also cause sea-level increases and more frequent and severe storms, floods, and droughts are also disproved.
“Such causality assertions require a validated theory that higher atmospheric CO2 concentrations cause increases in GAST. Lacking such a validated theory, the EPA’s conclusions cannot stand. In science, credible empirical data always trump theory.”
Dr. Craig Idso, Ph.D., founder and chairman of the Center for the Study of Carbon Dioxide and Global Change:
“Climate Change Reconsidered II (CCR-II) provides the scientific balance that is missing from the work of the IPCC. Although the IPCC claims to be unbiased and to have based its assessments on the best available science, this report demonstrates that such is certainly not the case.
“In many instances the IPCC has seriously exaggerated its conclusions, distorted relevant facts, and ignored the findings of key scientific studies that run counter to its viewpoint. CCR-II examines literally thousands of peer-reviewed scientific journal articles whose findings do not support, and indeed often contradict, the IPCC’s perspective on climate change.”
Dr. Robert M. Carter, Ph.D., paleontologist, stratigrapher, marine geologist, and environmental scientist; former professor and head of the School of Earth Sciences at James Cook University (Townsville, Australia):
“NIPCC’s Climate Change Reconsidered II report is full of factual evidence that today’s climate continues to jog along well within the bounds of previous natural variation. The empirical pigeons have therefore finally come home to roost on the IPCC’s speculative computer models — and they carry the message that ice is not melting at an enhanced rate, sea-level rise is not accelerating, the intensity and magnitude of extreme events is not increasing, and dangerous global warming is not occurring.”
The series is published by the Chicago-based Heartland Institute, a national nonprofit research and education organization. The Economist magazine in 2012 called The Heartland Institute "the world's most prominent think tank promoting skepticism on man-caused climate change." The New York Times calls Heartland "the primary American organization pushing climate change skepticism."
Like earlier volumes in the Climate Change Reconsidered series, this new report cites thousands of peer-reviewed articles to determine the current state-of-the-art of climate science. NIPCC authors paid special attention to contributions that were overlooked by the IPCC or presented data, discussion, or implications arguing against the IPCC’s claim that dangerous global warming is resulting, or will result, from human-related greenhouse gas emissions.
Most notably, the authors say the IPCC has exaggerated the amount of warming likely to occur if the concentration of atmospheric carbon dioxide were to double, and that whatever warming may occur would likely be modest and cause no net harm to the global environment or to human well-being.
NIPCC is a project of three nonprofit organizations: Science and Environmental Policy Project, Center for the Study of Carbon Dioxide and Global Change, and The Heartland Institute. The lead authors of the new report are Craig Idso, Ph.D. and S. Fred Singer, Ph.D., identified above, and Robert Carter, Ph.D., former head of the School of Earth Sciences, James Cook University (Australia). Scientists from around the world participated as lead authors, section authors, contributors, and reviewers.
The first two volumes of the Climate Change Reconsidered series, published in 2009 and 2011, are widely recognized as the most comprehensive and authoritative critiques of the reports of the United Nations’ IPCC. The complete texts and reviews of both volumes are available here and here. In June, a division of the Chinese Academy of Sciences published a Chinese translation and condensed edition of the two volumes.
The Heartland Institute is a 29-year-old national nonprofit organization headquartered in Chicago, Illinois. Its mission is to discover, develop, and promote free-market solutions to social and economic problems. For more information, contact Director of Communications Jim Lakely at email@example.com and 312/377-4000, or visit our Web site.
A general optimism prevails in the United States and Europe that the economies have finally turned the corner and growth is resuming. In the U.S., automobile sales are up and the housing industry has been improving, but there are many negatives which suggest the overall optimism is unwarranted.
In Europe, too, there have been modest improvements—some negative growth factors have become less negative—and there is a general feeling that the bailouts of Greece and other countries are working well. Below we explain some less favorable facts about the U.S. and Europe which cannot be ignored. They pose continuing problems.
The rate of economic growth declined over the past year to 1.6% from 2.8%. The employment figures released on September 6 showed August added 169,000 jobs, not enough to keep up with the growth in population. Moreover, the figures for June and July were revised downward by 74,000 jobs. June figures were also revised downward a month ago as were those for May. At the recent rate of hiring, employment won’t get back to pre-recession levels for more than eight years. Of the new jobs created in August, a disproportionate number were low-paying ones in retail sales and restaurants.
Unemployment declined in August from 7.4% to 7.3%, but this was mostly due to the increase in the number of people who had stopped looking for work because they don’t believe they can find a job. If they were counted as unemployed, the unemployment rate would be near 10%. There were also 7.9 million Americans who wanted full-time work but could only obtain part-time work. If these were included with those who have stopped looking for work, the rate would be 13.7%.
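The arithmetic behind these broader jobless measures is simple: add the extra groups to the count of unemployed, and add those who stopped looking back into the labor force. A minimal sketch in Python; the headcounts below are hypothetical round numbers chosen only to illustrate the mechanics, not official BLS figures:

```python
def headline_rate(unemployed, labor_force):
    """Headline rate: unemployed as a percent of the labor force."""
    return 100.0 * unemployed / labor_force

def broader_rate(unemployed, stopped_looking, involuntary_part_time, labor_force):
    """Broader rate: count discouraged workers and involuntary part-timers
    as unemployed; discouraged workers also rejoin the labor force."""
    numerator = unemployed + stopped_looking + involuntary_part_time
    denominator = labor_force + stopped_looking
    return 100.0 * numerator / denominator

# Hypothetical figures in millions, for illustration only:
print(round(headline_rate(11.3, 155.5), 1))            # → 7.3
print(round(broader_rate(11.3, 2.3, 7.9, 155.5), 1))   # → 13.6
```

With plausible inputs the broader measure lands far above the headline number, which is the article's point: the 7.3% figure understates the slack in the labor market.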
August was the 40th consecutive month in which more unemployed workers left the labor market than found jobs. Should we be asking "Is the economy going up or down?" In August the number of people reporting they had jobs, a separate survey from the payroll calculations of employment, fell by 115,000. Four years after the official end of the recession in 2009, there are still 1.9 million fewer jobs than at the peak in 2008. And even though price inflation is now very low, workers' pay still isn't keeping up with it. According to Labor Department data, the average hourly pay for a non-government, non-supervisory worker, adjusted for price increases, declined to $8.77 from $8.85 at the end of the recession in 2009.
The labor force participation rate includes those working plus those looking for work. In August this measurement was the lowest since 1978. This number has continued to decline throughout the so-called recovery from the recession. This recovery has been the slowest and longest of any recession recovery in our history, in spite of the $831 billion stimulus program which was supposed to create economic growth.
One must question whether that stimulus program aided growth or retarded it. According to the Congressional Budget Office, every job created by the stimulus program cost the taxpayers between $500,000 and $4 million. Not only was the stimulus program ineffective, it added to the national debt, which retards future economic growth.
The euro-zone economy in the second quarter grew at a rate of 0.3%, compared to the previous quarter, ending six consecutive quarters of contraction. That is far too sluggish to overcome still-rising debts and massive unemployment, which is still over 12%. Charles Wyplosz, economics professor at the Graduate Institute, Geneva, says, “If we had 3 or 4 years of growth at 2% to 3% annually then we would probably get out of the woods…But I don’t know where such growth would be coming from.”
The euro-zone economy is still 3% smaller than in early 2008 when the economic crisis hit. In many countries far more businesses are failing than are being founded, and there is more firing than hiring. And countries who received bailouts are not doing as well as anticipated and may require further aid, adding to their debts, as we explain below.
In August, Greece reported budget data showing a surplus compared to last year's steep budget deficit. But the economy contracted by 4.6% in the second quarter, and unemployment was still over 27%. The country's GDP has declined for 20 straight quarters as the nation's recession drags on for six years.
German Finance Minister Wolfgang Schäuble said Greece will need a third bailout in order to avert bankruptcy. Der Spiegel reported the German central bank expects new outside financial aid will be necessary for Greece by the beginning of 2014 at the latest.
Greece’s debt-to-GDP ratio is expected to reach 176% this year, far above the 120% the International Monetary Fund accepted as “sustainable.” But even the 120% level is double that of the European Union’s monetary pact, which states member nations must limit their debt-to-GDP ratios to 60%.
The 120% level is highly suspect. As we pointed out in our book The Impending Monetary Revolution, the Dollar and Gold, an IMF report in December 2011 said that a small shock to this "accident prone" program could send "debt on an ever increasing trajectory." A lower growth rate, smaller privatization receipts, higher interest rates than assumed, or a worse budget performance could leave Greece's debt-to-GDP ratio at 159% in 2020, said the IMF. A 2011 report by three economists at the Bank for International Settlements concluded that the threshold for sustainable debt was a debt-to-GDP ratio of 85%, not 120%, based on studies of 18 countries from 1980 to 2010. Remember, too, that it was the revelation that the Greek ratio had gone to 113.4% in 2009 that triggered the Greek crisis.
The IMF engaged in “arithmetical gymnastics” to arrive at a debt-to-GDP for Greece of 120% for 2020. The Wall Street Journal has noted that it is only because the IMF “accepted these mostly fictional debt outlooks” that it and the other contributors to the Greek bailout now stand to lose money. The IMF even tossed out its own rule against lending to countries whose debt isn’t “sustainable in the medium term.”
The IMF worries that without another bailout, Greece will be unable to repay what it owes the IMF from the previous bailout. The IMF now says Greece’s longer-term debt targets cannot be met without forgiveness of some of the nation’s debts. It insists it will not forgive any repayment of its loans to Greece but is pushing for the European countries who were partners in the bailout to do so—so that Greece will have enough money to repay the IMF’s portion of the bailout! You can imagine how that has gone over with those countries! Germany, Finland, Austria and others have stated the IMF should take its share of any losses along with the euro-zone governments.
After declaring the need for additional debt relief for Greece, the recent IMF report noted: “Risks remain to the downside, mainly from lower growth and potential fiscal and privatization slippages.” It emphasizes that the Greek government has failed in almost every instance to hold up its end of the bailout bargain. For example, privatization of state assets is now expected to yield €22 billion through 2020, less than half what was predicted in March 2012.
Greece’s debt and growth problems are too big to ignore for long, notes Gabriel Sterne, senior economist at Exotix investment banks. “These are a couple of cans that are perhaps too heavy to kick down the road.”
While Greece and other troubled countries have undertaken austerity measures that cut spending and reduce social benefits in order to salvage their economies, France’s socialist President Hollande has done just the opposite. He increased the government budget deficit, raised the minimum wage, and lowered the minimum retirement age from 62 to 60, reversing the raise by former president Nicolas Sarkozy.
Hollande’s government increased taxes by over €7 billion euros ($9.3 billion) and added €20 billion to the budget while cutting spending by only half that amount. Next year’s budget proposes €6 billion in new taxes. Business investment has fallen every month since Hollande took office 15 months ago. A Markit Purchasing Managers Index over 50 shows economic growth, below 50 shows contraction. France’s PMI dropped further, to 47.9 from 49.1. French unemployment, now above 10%, has increased for the 23rd month in a row. France’s debt-to-GDP ratio, which was 31% in 1980, 57% in 1994 is now over 90%, the highest of any European country not receiving a bailout.
The IMF in August urged Hollande to scrap the new taxes, saying France’s failure to grow the economy will have “significant outward spillovers” into other euro-zone economies.
Spain’s GDP declined 0.1% in the second quarter. Though modest, this was the eighth consecutive quarterly contraction.
Spain’s unemployment rate fell for the first time in two years. But the drop of almost a percentage point still leaves the rate above 26%—well over twice the euro-zone average. Furthermore, the decline doesn’t really indicate an upturn in the economy because more people stopped looking for work than found jobs. Almost all the jobs created came from coastal areas where summer vacation jobs are concentrated. Jobs continued to be lost in sectors like manufacturing and construction.
Spain’s debt-to-GDP ratio—which was only 36% in 2007 and Spain had a triple-A credit rating—is expected to be over 100% by 2015, according to the IMF.
Italy’s debt-to-GDP ratio is on course to be over 130% for 2013. The nation would need an annual average economic growth of around 3% over the next 20 years just to reduce its debt-to-GDP ratio to 90%. How can this be done in a nation that since 1999 has averaged only 0.5% growth annually?
The number of Italians living below the poverty level has increased by 14% in the last two years.
Portugal would have to increase its average economic growth to as much as 6%—nine times its average since 1999—in order to cut its debt ratio to 90%.
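The growth arithmetic behind these debt-ratio targets follows from standard debt dynamics: with a balanced primary budget, the debt-to-GDP ratio is scaled each year by (1 + r) / (1 + g), where r is the average interest rate on the debt and g is nominal GDP growth, so the ratio only falls when growth outpaces the interest rate. A stylized Python sketch; the 3% interest rate and the balanced-primary-budget assumption are simplifications for illustration, not figures from the article:

```python
def debt_ratio_path(initial_ratio, interest_rate, growth_rate, years):
    """Project a debt-to-GDP ratio forward under a balanced primary budget:
    each year the ratio is multiplied by (1 + r) / (1 + g)."""
    ratio = initial_ratio
    for _ in range(years):
        ratio = ratio * (1.0 + interest_rate) / (1.0 + growth_rate)
    return ratio

# If growth merely matches the interest rate, the ratio never falls:
print(round(debt_ratio_path(1.30, 0.03, 0.03, 20), 3))

# Growth must exceed the interest rate for decades to cut the ratio; e.g.
# r = 3% and g = 5% sustained for 20 years takes 130% to roughly 88%:
print(round(debt_ratio_path(1.30, 0.03, 0.05, 20), 3))
```

This is why the growth hurdles cited above are so daunting: a country like Italy, averaging 0.5% growth, cannot plausibly sustain the multi-point growth premium over its borrowing costs that the arithmetic demands.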
Portugal needs €14 billion in 2013 and €15 billion in 2014 to repay creditors, according to the “troika” managing the bailout (the European Central Bank, the European Commission and the IMF). Portugal will need a second bailout on top of the original €78 billion of the first bailout.
Cyprus is widely expected to need more money. Its economy is in free fall despite its €10 billion bailout. Analysts say the bailout forecast of an economic contraction of 8.7% this year is far too optimistic. Unemployment is already at 17.3%, well above the bailout forecast of 15.5%. While people are allowed to make limited cash withdrawals, 90% of the deposits at the nation’s largest bank are frozen during restructuring. Capital controls isolate the country from the rest of the euro zone. Most small businesses are operating on a cash-only basis.
Why National Deficits Matter
Nobody can ever get out of debt by borrowing successively larger sums to cover successively larger debts. Neither can governments. Eventually debts are repaid or the borrower goes bankrupt. In the U.S. the Federal Reserve prints money enabling the federal government to spend it today by borrowing from our children and grandchildren. They will be obligated to pay it, but they will never be able to do so.
The federal gross national debt is now approaching $17 trillion. (It is projected to be $17.2 trillion by the end of 2013.) At $17 trillion, the U.S. debt-to-GDP ratio is 106%. According to the IMF, meeting America’s obligations will require an immediate and permanent 35% increase in all taxes and a 35% cut in all government benefits. That’s not going to happen. It can’t happen. Instead America will be bankrupt. By 2025, entitlement spending and debt payments are projected to consume all federal revenue. And having the Fed print vastly more money to pay our obligations will not solve the problem; it will merely bring inflation that destroys the value of the dollar.
What about more stimulus spending? Politicians will certainly clamor for this as a solution, but it won’t work. Obama’s colossal $831 billion stimulus bill didn’t work; it made the problem worse by further ballooning the national debt. More and larger stimulus programs would do the same. Economist John Maynard Keynes claimed spending—for anything—was the driver of the economy and that government spending produced a multiplier effect as dollars were, in turn, spent over and over throughout the economy. But Hunter Lewis, a Keynes biographer, says, “There is no evidence” that spending ever cured a recession, and Keynes “wasn’t particularly interested in evidence.”
Harvard Professor Robert Barro, who has done extensive research on Keynesian multipliers, has written, “What few know is that there is no meaningful theoretical or empirical support for the Keynesian position.” Obama’s stimulus bill was based on a Keynesian multiplier of 1.5, meaning the GDP will increase by $1.50 for every dollar of additional government spending. This multiplier was stated by administration officials trying to sell the stimulus bill to Congress and the public, and it is stated specifically in the First Quarterly Report by the Council of Economic Advisers on the subject; but there is no evidence that multiplier is valid.
Among other research on this subject, my book cites the work of Barro and Redlick, who found a multiplier effect of 0.4 to 0.7, and of Professor Gerald Scully, who found a multiplier of 0.46 in his analysis of 60 years of federal outlays. If the multiplier really were larger than 1.0, the GDP would rise even more than the rise in government spending! The U.S., Greece and other spendthrift countries wouldn’t be going broke, they’d be getting richer the more they spent! The reality is that the multiplier is always less than 1.0. The money that is spent over and over again in the private sector from government programs always adds less to the GDP than the cost of the programs. If that money were not preempted by government stimulus spending, it would be spent (or saved/invested) multiple times in the private sector, too—and more effectively.
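To make the dispute concrete: in the textbook Keynesian story, if each dollar of income is re-spent at a marginal propensity to consume c, total spending per stimulus dollar is the geometric series 1 + c + c² + … = 1/(1 − c). A minimal Python sketch (the $831 billion stimulus figure and the multiplier estimates are from the text above; the rest is illustrative):

```python
def textbook_multiplier(mpc):
    """Textbook Keynesian multiplier: the sum of the geometric series
    1 + c + c^2 + ... as each dollar is re-spent at rate c."""
    return 1 / (1 - mpc)

def gdp_impact_billions(spending_billions, multiplier):
    """GDP change implied by a given multiplier."""
    return spending_billions * multiplier

# The administration's assumed multiplier of 1.5 corresponds to
# re-spending one third of each dollar: 1 / (1 - 1/3) = 1.5.
assert abs(textbook_multiplier(1 / 3) - 1.5) < 1e-9

stimulus = 831  # the 2009 stimulus, in billions of dollars

print(gdp_impact_billions(stimulus, 1.5))             # 1246.5 -- the claimed GDP boost
print(round(gdp_impact_billions(stimulus, 0.46), 2))  # 382.26 -- using Scully's estimated multiplier
```

With any multiplier below 1.0, as in the Barro-Redlick and Scully estimates, each stimulus dollar adds less to GDP than it costs.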
Hunter Lewis says, “Keynes completely ignores the issue of how you are investing. For him, not only is any investment equivalent to any other investment, but spending is equivalent to investment.” You can see why this is appealing to Barack Obama as it was to Franklin Roosevelt, who popularized Keynes’ ideas.
The great economist Ludwig von Mises wrote way back in 1944, in his book Omnipotent Government,
“All governments are firmly resolved not to relinquish inflation and credit expansion. They have all sold their souls to the devil of easy money. It is a great comfort to every administration to be able to make its citizens happy by spending. For public opinion will then attribute the resulting boom to its current rulers. The inevitable slump will occur later and burden their successors…. Lord Keynes, the champion of this policy, says: ‘In the long run we are all dead.’ But unfortunately nearly all of us outlive the short run. We are destined to spend decades paying for the easy money orgy of a few years.”
All the world’s central banks now operate on Keynesian principles. The Fed, the European Central Bank, and the central banks of Japan, Switzerland and China have printed an astounding $10 trillion since 2007, tripling the size of their combined balance sheets.
With uncertainty plaguing national economies and the future value of their money, people are continuing to turn to gold as a way of safeguarding their future. Two important trends were evident in the second quarter. The first, a continuation of a trend evident for some time, is the desire of gold buyers to take physical possession of it. This means a preference for physically holding jewelry, coins and bars rather than holding gold ETFs, shares in gold mining companies, or coins or bars held in financial accounts. The second is the way increased private buying has more than made up for a decline in central bank buying.
After the sharp decline in April, gold prices seem to have bottomed. On balance, the second quarter showed very positive signs. Jewelry demand hit a multi-year high as lower prices generated a surge of demand from consumers, particularly in China and India. In China, demand hit a record 385.5 metric tons in the second quarter. That was double the figure from a year earlier and well above the 294.3 metric tons of the first quarter, which came before the big price drop in April. Overall, world gold jewelry demand increased 37% to 575.5t, the highest volume in five years and, in value terms, 20% higher than in the second quarter of 2012.
Gold demand in India in the second quarter was up 70% year on year to 310t despite continued government efforts to curb enthusiasm for the metal. Jewelry was up 52% to 188t, and retail bar and coin sales set a record at 122t, up 116%.
Worldwide, the second quarter showed record demand for coins and bars of 508t, up 56% in value terms. Counter to this, there were outflows from ETFs; however, SPDR Gold Trust, the largest gold ETF, in August reported the first net increase in purchases in two months.
The world’s central banks’ purchases of gold slowed to 71.1t, down 56% on the previous year but nevertheless marking the tenth consecutive quarter of purchases. I would have expected more central bank buying; however, it must be noted that China has not reported its central bank purchases of gold since 2009. Despite its silence, China is known to have added gold to its central bank holdings from mines it owns within the country as well as from foreign companies allowed to operate gold mines in China.
If you wish to predict into the future, model data is too limited to do so, but they can be used in a very limited way to test a hypothesis. It makes no sense to depend on model predictions 50 or 100 years into the future as being valid, nor to change our way of life under the assumption that the model predictions are truthful. – Dr. Anthony R. Lupo
On Tuesday, September 10, The Heartland Institute hosted the second in its series of conference calls with friends and allies previewing the Nongovernmental International Panel on Climate Change’s (NIPCC) Climate Change Reconsidered II: Physical Science ahead of its digital release on Tuesday, September 17th.
Featured on the September 10th call was Dr. Anthony R. Lupo, Department Chair and Professor of Atmospheric Science at the University of Missouri – Columbia, a lead author of the chapter on climate models in Climate Change Reconsidered II. Dr. Lupo was a Fulbright Scholar to the Russian Academy of Sciences during the 2003-2004 academic year.
He received his master’s and Ph.D. degrees in atmospheric physics from Purdue University and is a member of the American Meteorological Society, National Weather Association, and American Geophysical Union, and a Fellow of the Royal Meteorological Society. Lupo’s research has appeared many times in peer-reviewed journals, including National Weather Digest, Journal of Geophysical Research, and Bulletin of the American Meteorological Society.
Prior to introducing Dr. Lupo, Joseph Bast, CEO and president of The Heartland Institute, reiterated how Climate Change Reconsidered II is the result of a collaboration among three organizations: Science & Environmental Policy Project, Center for the Study of Carbon Dioxide and Global Change, and The Heartland Institute, with Heartland in charge of the editing and the publishing. The lead authors and editors of the report are Dr. Craig D. Idso, Dr. Robert M. Carter, and Dr. S. Fred Singer.
Dr. Anthony Lupo gave a brief summary of the findings presented in his chapter. As related by Lupo, the first part of the chapter looks at all aspects of how numerical modeling works and some of its limitations. The second part deals with the use of models in climate forecasting and the concept of “blocking” in terms of large-scale phenomena and its impact.
Dr. Lupo then answered a series of questions posed by Joe Bast. Here are my own summaries and notes from some of the questions and answers:
Bast: How are findings of climate models validated?
Lupo: One way is to run the models backward. The problem is that validating models against today’s climate through this backward procedure must capture the variability of the elements affecting climate change. There are at least 78 circulation models, and each comes up with a different forecast in a slightly different way. This shows the lack of understanding of how the climate system works.
Bast: What role do models play in forecasting the future, given the spread of results produced by the 78 models?
Lupo: Models are the best tools we have, but we must use them with an eye toward some of their shortcomings. They are not gospel truth, only purveyors of possible outcomes. Consider the predictions of the IPCC: by the year 2100 the temperature will increase by 2 to 12 degrees Fahrenheit. Some models even show a slight cooling. The real likelihood is that the increase is more like 2 to 4 degrees Fahrenheit.
Bast: Why did the models fail to predict that during the past 16–17 years the temperature has been constant or flat?
Lupo: Models have their limitations and their results can’t be treated as the truth of anything. If you wish to predict into the future, model data is too limited to do so, but they can be used in a very limited way to test a hypothesis. It makes no sense to depend on model predictions 50 or 100 years into the future as being valid, nor to change our way of life under the assumption that the model predictions are truthful.
Jim Lakely, Director of Communications at The Heartland Institute, then took questions from those on the conference call. Many of the questions had to do with the models themselves and their reliability.
- Responses from Lupo (again in my words, not direct quotes) included: Regarding Jim Hansen, the long-time NASA scientist, now retired: he has certainly been a big player in the global warming debate for a number of years, but with his funding dependent on the success of the global warming hypothesis, it would be unwise to rely on his forecasts.
- Why are climate models given any respect when for 15 years their projections have been inaccurate and, given flaws in their methodology, they never will be accurate? Lupo said many scientists are aware of the faulty model projections, but models are the best tools we have at this time, and in time they could be made more accurate. He expects they will continue to generate a range of options but with higher confidence in a narrower range than is now the case. But he admitted we may never reach the point where models can determine with certainty that a future model prediction will take place.
- Why does the government (EPA) continue to make policy decisions based on faulty projections by models? Lupo compared any attempt to change government policy [especially one relating to a political agenda] to trying to turn the Titanic around. There is so much money and momentum behind the claim that CO2 is the main culprit of global warming, that backing away from that claim will be slow and painful.
- Can models actually teach us anything worthwhile given that they can be tweaked and modified to provide outputs that seem to support almost anyone’s agenda? Lupo took an optimistic approach by indicating that even our failures can teach us something. To circumvent faulty model assumptions, averaging a bank of models will result in a more realistic projection. “Ensemble” forecasts are also employed, whereby initial conditions are tweaked as many as a dozen times. If all or most of the tweaked model runs come up with basically the same numerical prediction, there is a relatively high degree of confidence that the prediction can be taken at face value.
- Why does the IPCC repeatedly forecast more warming, and more confidence in computer models, than climate science actually supports? Lupo said the IPCC from the beginning has overestimated climate sensitivity to carbon dioxide, whereas skeptics properly understand that climate “is a very strong beast” which resists the forcing of climate change through sensitivity to its surroundings. The IPCC ignores natural weather cycles and new evidence of a greater solar impact on climate than previously thought. The IPCC will acknowledge that the temperature has been flat the last 15-20 years but at the same time will declare: “Just wait, global warming is coming!”
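The “ensemble” procedure Lupo described — perturb the initial conditions a dozen or so times, rerun, and check whether the runs agree — can be illustrated with a toy model. Everything below is hypothetical (a trivial linear stand-in, not any real climate model); it shows only the mechanics of the technique:

```python
import random

def toy_model(initial_temp, years, feedback=0.01):
    """Hypothetical stand-in for a climate model: a trivial linear
    trend, used only to illustrate the ensemble procedure."""
    return initial_temp + feedback * years

def ensemble_forecast(initial_temp, years, runs=12, perturbation=0.05, seed=0):
    """Rerun the model with slightly perturbed initial conditions.
    Tight agreement across runs suggests a robust prediction; a wide
    spread signals low confidence in any single run."""
    rng = random.Random(seed)
    forecasts = [
        toy_model(initial_temp + rng.uniform(-perturbation, perturbation), years)
        for _ in range(runs)
    ]
    mean = sum(forecasts) / len(forecasts)
    spread = max(forecasts) - min(forecasts)
    return mean, spread

mean, spread = ensemble_forecast(initial_temp=14.5, years=50)
print(f"ensemble mean: {mean:.2f} C, spread: {spread:.2f} C")
```

A spread that is small relative to the predicted change is what lends an ensemble prediction the “relatively high degree of confidence” Lupo mentioned; real ensembles also perturb model physics, not just initial conditions.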
An extensively peer-reviewed study published last December in the Journal of Atmospheric and Solar-Terrestrial Physics, with research conducted by Nicola Scafetta, a scientist at Duke University, addressed the three gross omissions by the IPCC, noting that climate changes observed since 1850 are linked to cyclical, predictable, naturally occurring events in Earth’s solar system, with little or no help from us.
Two more calls are scheduled in the CCR-II conference call series:
*Tuesday, September 17 at Noon CST
Climate Change Reconsidered II Release Day
Speakers: Dr. Craig Idso, Dr. Willie Soon, and Dr. Fred Singer
*Tuesday, September 24 at Noon CST
Response to the Intergovernmental Panel on Climate Change (IPCC) Report (www.ipcc.ch)
Calls start at Noon CST. Contact Robin Knox at firstname.lastname@example.org or call 312/377/4000. Your registration will ensure you receive any follow-up materials from the call.
[First published at Illinois Review.]
A survey conducted by the R Street Institute and the National Taxpayers Union shows that voters across the ideological spectrum oppose the Marketplace Fairness Act (MFA). If signed into law, the MFA would enable a state to require companies located in other states to collect sales tax on online purchases made by its residents. Not only do the findings demonstrate Americans don’t want higher taxes and a more expansive government, but the MFA could also have important electoral implications:
- Conservative voters oppose the MFA by a 2:1 margin. The study notes that any Republicans who do support the MFA could be facing very tough primary challenges ahead.
- With 56% of Independents opposing the tax compared to 37% in support of it, the MFA could be a swing issue in favor of Republicans.
- Liberal voter margins were smaller, but 48% still opposed the MFA while 43% supported it. This means that Democratic advocates of the MFA could risk alienating their base come election season.
This is great news for conservatives and libertarians who are concerned the House might pass the MFA.
As with most things the government calls “fair,” the MFA is actually an intrusive extension of government power. If enacted, the tax would greatly expand the reach of state governments by allowing tax collectors to reach across state lines for revenue. Proponents claim the MFA will level the playing field between brick-and-mortar stores and online retailers, but opponents see it as just another attempt by states to tax their way out of debt.
The economically beleaguered voter would rather not see consumers and affordable online outlets like Amazon, eBay, and Overstock.com get hit with an estimated tax increase of $23 billion. With over 9,600 taxing governments in the United States, the MFA will force these businesses to become subject to more convoluted tax codes. Worse yet, the MFA’s legal extensions are a recipe for dangerous unintended economic consequences.
The law effectively establishes a tax base outside of an individual state’s voting population. This means states with few online retailers can poach tax revenue from states with a lot of online retailers, like California and Delaware. Without a presence in the electoral population, it will be difficult for businesses to defend themselves from foreign state tax collectors, even while they receive no benefit (state roads, etc.) from said taxes. The result may be protectionist competition between state governments, such as punitive sales taxation, and internal business incentives, both of which will only further distort and harm the market.
The MFA reminds me of Ronald Reagan’s famous quote about the government’s view of the economy: “If it moves, tax it. If it keeps moving, regulate it. And if it stops moving, subsidize it.” Online retailing has been one of the most productive innovations of the information age. It has unleashed a dynamic marketplace in which businesses from around the world compete to deliver goods to our doorstep at a fraction of the economic and temporal cost of shopping in person at a brick-and-mortar store. Yet when the government sees this vibrant marketplace, all it can see is another tax basin to latch onto.
The Internet is a bastion of liberty and innovation, and should be kept separate from government interference like the Marketplace Fairness Act.
Colorado voters yesterday successfully recalled two Democrat Colorado state Senators who led passage of a package of state gun control laws reflecting President Obama’s overcooked national rhetoric in the wake of the tragic Sandy Hook elementary school shooting in Connecticut last December.
With 100% of the vote counted, Senate President John Morse conceded defeat after trailing all night. Senator Angela Giron lost by a wider 56% to 44% margin with all of the vote counted in her Pueblo, Colorado district. Both were replaced by Republicans, reducing the Democrat majority in the state Senate to just one vote. The successful recall elections raised questions about the future of Democratic Party candidates as President Obama’s protective national star fades and passes.
The gun control measures included raising the fees and costs for gun purchases, limiting gun magazines to 15 rounds, and universal background checks. The measures were ramrodded through the Democrat majority legislature with silly, condescending arguments, such as the claim that those foolish enough to try to defend themselves with firearms were only likely to end up killed or harmed themselves. That argument was used to deride a rape victim who testified that her ordeal would have ended differently had she kept her usual firearm nearby.
The futility of the gun regulatory measures was shown by the record of the universal background checks. Since passage of the requirement earlier this year, tens of thousands of such checks had been run, with only a few dozen rejected. That only demonstrated that those with criminal records would not try to get their weapons from regulated gun stores, especially when they can get all they need on the black market.
The bottom line on gun control is that the government does not even have the practical power to prevent criminals from getting guns. All that the government can do as a practical matter is disarm the law abiding victims of crime. That is exactly why the worst crime and gun violence is always in the cities with the stiffest gun control laws, such as President Obama’s hometown of Chicago.
But the Colorado Progressives railroading their gun control measures through the state legislature never even bothered to respond to the arguments concerning their futility. That is what I call the rope-a-dope “progressive” debate strategy. With a compliant, and complicit, party controlled media, the so-called “progressives” can simply ignore whatever they do not want to debate. But that strategy does not always work to mislead voters, as we saw in Colorado yesterday.
The recalls were also a defeat for New York City Mayor Michael Bloomberg, who poured $350,000 of his own money into the races of the challenged incumbents. The busybody New York mayor is head of his own Mayors Against Illegal Guns organization, committed to roaming the nation to promote the same ineffective gun control laws that trouble New York. That title only further reflects the silliness of liberal arguments.
The electoral defeats are only going to grow for Democrats as Obama’s second term progresses, and his stature declines into lame duck status. In less than a couple of months, New Jersey Governor Chris Christie is poised to win reelection in a landslide. Republicans can enjoy a clean sweep this year if they can rally behind Virginia gubernatorial candidate Ken Cuccinelli, who is currently trailing by a slight margin in independent polls. That would be a good foundation for 2014 to end up looking a lot like 2010.
[First Published by American Spectator]
Yesterday, Chairman Goodlatte (R-VA) and Representative Eshoo (D-CA) introduced the Permanent Internet Tax Freedom Act. The proposal is designed to ensure consumers’ access to broadband is protected from onerous local taxes and fees by permanently extending the 1998 Internet Tax Freedom Act (ITFA) ban on state and local taxation of Internet access service. This moratorium, which is set to expire in 2014, would be made permanent by Chairman Goodlatte’s bill.
The initial reaction to the new House bill was positive from several groups. The Internet Tax Freedom Act Coalition, a partnership of businesses, associations, and consumers dedicated to the growth of the Internet economy, commended the bill’s cosponsors.
“We commend Chairman Goodlatte and Representative Eshoo for their leadership on this important piece of legislation,” said Annabelle Canning, executive director of the ITFA Coalition, in a press comment. “A permanent extension of ITFA will encourage continued adoption of broadband and protect consumers from having multiple and discriminatory taxes imposed on their online purchases.”
CTIA President and CEO Steve Largent argued in a statement that the moratorium is a necessary step towards ensuring technological development.
“The Permanent Internet Freedom Act permanently extends the moratorium on Internet access taxes and fees and provides a tax certainty that will continue to foster American technological innovation, growth and leadership in electronic commerce,” said Largent. “Affordable wireless broadband is no longer just a modern convenience, but a vital component in the lives of American consumers and businesses. From education to healthcare to commerce, a reasonable and permanent tax structure that guarantees affordable access to the Internet and the incredible services it provides is vital for consumers and continued innovation.”
An increase in Internet access taxes would hit broadband users everywhere with billions of dollars in new taxes to fund government spending, placing an unnecessary burden on consumers in order to do something the market is already handling quite effectively. Making the Internet access tax moratorium permanent would help broadband access and development expand while reducing the need for more government broadband spending. Cheap and reliable access to the Internet, allowed in part by the moratorium, was one of the key forces behind the quick ascendance of the Internet and the online economy.
Supporters of the moratorium have argued that restricting federal, state, and local governments from adding new taxes and fees to Internet access is important because it prevents ISP bills from turning into phone bills and becoming another cash cow fueling government spending. Wireless phone bills have become a frequent target for new fees and taxes, funding any number of new programs. The national average tax on wireless service currently tops 17 percent, more than double the 7.3 percent average tax on other goods and services. In some states, wireless service taxes top 20 percent.
That question now confronts the Pennsylvania Interscholastic Athletic Association because a Pennsylvania judge earlier this month turned down the PIAA’s request to modify a 1975 state court ruling regarding girls playing on boys’ high school athletic teams. Call it the law of unintended consequences: Pennsylvania Commonwealth Court Judge Kevin Brobson’s recent ruling lets boys play on high school girls’ sports teams – at least for now.
No doubt as old as time, the quest for gender equality began in earnest in the United States in 1848 with the “Declaration of Sentiments” in Seneca Falls, New York, gained momentum with ratification of the 19th Amendment in 1920, and arguably took its next major step in the early 1970s as dwindling United States involvement in the Vietnam War made the unfairness of a male-only military draft inconsequential.
In 1972, social and political forces coincided as Ms. Magazine debuted on the newsstand, Congress passed Title IX of the Education Amendments Act, and Congress passed and sent to the states for ratification the Equal Rights Amendment.
Written in 1923 by suffragist leader Alice Paul and introduced without success into every session of Congress since 1923, the ERA provided simply that “Equality of rights under the law shall not be denied or abridged by the United States or by any state on account of sex.” ERA opponents argued at the time that the proposed amendment would lead to such perceived horrors as public lesbianism, gay marriage, no-fault divorce, abortion on demand, fatherless children, women in men’s locker rooms, and even unisex washrooms.
Although the ERA has still not been ratified by the necessary 38 states, many of its goals have already been adopted by custom or by state and federal law, most notably the equal educational opportunity amendments of Title IX and the equal employment opportunity provisions of Title VII of the Civil Rights Act of 1964.
Under Title IX, women are guaranteed equal educational opportunity with men, which since Cohen v. Brown University in 1996 has meant proportional opportunities on varsity athletic teams. Because the biggest money-making intercollegiate sport (major college football) requires about 100 male scholarship players to be competitive, however, and because nearly three out of five current undergraduates at U.S. four-year institutions are now female, that has meant eliminating men’s varsity athletic teams, typically swimming, gymnastics, baseball, volleyball, and sometimes hockey or crew.
While seemingly unfair to young men, the result has generally been good for young women: helping them to keep fit, to learn teamwork, to develop self-confidence, and to earn college scholarships. Meanwhile, however, what to do with all those boys and young men who want to play sports but can’t find a team?
In college the answer is usually to play a club sport or to choose another sport or even another college. But in states like Pennsylvania, which has had its own Equal Rights Amendment since 1971, the answer has been to let high school boys play on high school girls’ sports teams when there is no boys’ team.
The Pennsylvania ERA, like the proposed national ERA, provides that “[e]quality of rights under the law shall not be denied or abridged in the Commonwealth of Pennsylvania because of the sex of the individual.” (Pa. Const. art. I, § 28.) In 1975 a Pennsylvania state court said this means the PIAA must permit girls to compete on high school boys’ sports teams.
At present, according to Associated Press reports of a recent survey to which about half of PIAA schools responded, 104 of 1400 PIAA schools have had girls who played boys’ football, 112 who wrestled boys, and 34 who played boys’ soccer. But about three in five Pennsylvania high schools also allow boys to play on girls’ teams, including 38 in field hockey, fourteen in volleyball, eight in lacrosse, five in soccer, and one each in swimming and tennis. The PIAA and some of its members don’t approve, and sued to modify the 1975 court decision to prevent boys from competing on girls’ teams while still permitting girls to compete on boys’ teams.
Judge Brobson rightly ruled that he has no power to change PIAA policies or to give advisory opinions. “If PIAA, as the primary policymaking body for interscholastic competition in the Commonwealth, believes it is appropriate to take action in this area,” he said instead, “then it should take the first step into the breach and create a policy.” “Only then,” he continued, “if that policy is challenged in a court of law, may its constitutionality be evaluated.”
Whether boys competing on girls’ teams – or girls competing on boys’ – makes any sense is in the eye of the beholder. But since Brown v. Board of Education, the United States has at least paid lip service to the notion that “separate educational facilities are inherently unequal,” and Title IX cases hold that school athletic programs are subject to the law governing educational facilities.
What’s sauce for the goose should be sauce for the gander, but how this will play out in Pennsylvania remains to be seen.
This may be a surprising headline to readers of The Wall Street Journal and the Washington Post, which reported virtually the opposite result in their August 19 editions. The stories, “Hip, Urban, Middle-Aged: Baby boomers are moving into trendy urban neighborhoods, but young residents aren’t always thrilled,”
by Nancy Keates in The Wall Street Journal and “With the kids gone, aging Baby Boomers opt for city life,” by Tara Barampour in the Washington Post reported on information from the real estate firm, Redfin (a link to the corrected Wall Street Journal story is below). Both stories reported virtually the same thing: that 1,000,000 baby boomers moved to within five miles of the city centers of the 50 largest cities between 2000 and 2010. Because these results appeared to be virtually the opposite of census results, I contacted both papers seeking corrections.
When pressed for more information, Redfin.com responded with a tweet indicating: “We don’t have a link to share or published study; Redfin did a special analysis of Census data at reporters’ requests.”
In fact, the census data shows virtually the opposite. Redfin’s method was not clear, so I queried the five-mile radius around the main downtown areas of the 51 metropolitan areas with more than 1,000,000 population in 2010, shown below in the table and the figure.
Within the five-mile radius of downtown, there was a net loss of nearly 1,000,000 baby boomers, or 17 percent of the 2000 population (ages 35 to 55 in 2000). There was also a loss of 800,000 in the suburbs, or 2 percent of the 2000 population. The continuing dispersion of the nation is indicated by the fact that there was a gain of nearly 450,000 in this cohort outside the major metropolitan areas. Overall, there was a net loss of 1.3 million, principally due to deaths.
To its credit, The Wall Street Journal issued a correction, as I would have expected. The incorrect reference to an increase of baby boomers in the urban cores was removed. To my surprise, not only did the Washington Post fail to make a correction, but they also ignored multiple requests to deal with the issue (though my emails received courteous computer generated acknowledgements).
With the ongoing repetition of the “return to the city from the suburbs” myth, it is important to draw conclusions from the data, not from impressions.
PERSONS BORN 1946-1965: RESIDENTIAL LOCATIONS

Location                              2000         2010         Change       %
5-Mile Radius of Downtown             5,811,000    4,826,000    (985,000)    -17.0%
Balance of Major Metropolitan Areas   39,436,000   38,639,000   (797,000)    -2.0%
Major Metropolitan Areas              45,247,000   43,464,000   (1,783,000)  -3.9%
Outside Major Metropolitan Areas      37,579,000   38,025,000   446,000      1.2%
United States                         82,826,000   81,489,000   (1,337,000)  -1.6%

Data from US Census, University of Missouri Radius Tool. A statistical discrepancy overstates the 2010 population by approximately 0.5 percent.
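The percentage changes in the table follow directly from the 2000 and 2010 cohort counts. A minimal Python check, using only the figures from the table above (negative values denote losses, shown in parentheses in the table):

```python
# 2000 and 2010 populations of the 1946-1965 birth cohort, from the table.
rows = {
    "5-Mile Radius of Downtown":           (5_811_000, 4_826_000),
    "Balance of Major Metropolitan Areas": (39_436_000, 38_639_000),
    "Major Metropolitan Areas":            (45_247_000, 43_464_000),
    "Outside Major Metropolitan Areas":    (37_579_000, 38_025_000),
    "United States":                       (82_826_000, 81_489_000),
}

for name, (pop_2000, pop_2010) in rows.items():
    change = pop_2010 - pop_2000
    pct = 100 * change / pop_2000  # percent change relative to the 2000 base
    print(f"{name}: {change:+,} ({pct:+.1f}%)")
```

Note that the -17.0 percent loss applies to the small downtown base (5.8 million), while the -2.0 percent loss in the balance of the metropolitan areas applies to a base nearly seven times larger.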
[First Published by Newgeography]
The Intergovernmental Panel on Climate Change (IPCC) is scheduled to release the first portion of its Fifth Assessment Report (AR5). AR5 will conclude once again that mankind is causing dangerous climate change. But one week prior, on September 17, the Nongovernmental International Panel on Climate Change (NIPCC) will release its second report, titled Climate Change Reconsidered II (CCR-II). My advance review of CCR-II shows it to be a powerful scientific counter to the theory of man-made global warming.
Today, 193 of 194 national heads of state say they believe humans are causing dangerous climate change. The IPCC of the United Nations has been remarkably successful in convincing the majority of the world that greenhouse gas emissions must be drastically curtailed for humanity to prosper.
The IPCC was established in 1988 by the World Meteorological Organization and the United Nations Environmental Program. Over the last 25 years, the IPCC became the “gold standard” of climate science, quoted by all the governments of the world. IPCC conclusions are the basis for climate policies imposed by national, provincial, state, and local authorities. Cap-and-trade markets, carbon taxes, ethanol and biodiesel fuel mandates, renewable energy mandates, electric car subsidies, the banning of incandescent light bulbs, and many other questionable policies are the result. In 2007, the IPCC and former Vice President Al Gore shared the Nobel Peace Prize for work on climate change.
But a counter position was developing. In 2007, the Global Warming Petition Project published a list of more than 31,000 scientists, including more than 9,000 PhDs, who stated, “There is no convincing scientific evidence that human release of carbon dioxide, methane, or other greenhouse gases is causing or will, in the foreseeable future, cause catastrophic heating of the Earth’s atmosphere and disruption of the Earth’s climate.” At the same time, an effort was underway to provide a credible scientific counter to the alarming assertions of the IPCC.
The Nongovernmental International Panel on Climate Change was begun in 2003 by Dr. Fred Singer, emeritus professor of atmospheric physics from the University of Virginia. Dr. Singer and other scientists were concerned that IPCC reports selected evidence that supported the theory of man-made warming and ignored science that showed that natural factors dominated the climate. They formed the NIPCC to offer an independent second opinion on global warming.
Climate Change Reconsidered I (CCR-I) was published in 2009 as the first scientific rebuttal to the findings of the IPCC. Earlier this summer, CCR-I was translated into Chinese and accepted by the Chinese Academy of Sciences as an alternative point-of-view on climate change.
Climate Change Reconsidered II is a 1,200-page report that references more than one thousand peer-reviewed scientific papers, compiled by about 40 scientists from around the world. While the IPCC reports cover the physical science, impacts, and mitigation efforts, CCR-II is strictly focused on the physical science of climate change. Its seven chapters discuss the global climate models, forcings and feedbacks, solar forcing of the climate, and observations on temperature, the icecaps, the water cycle and oceans, and weather.
Among the key findings of CCR-II are:
- Doubling of CO2 from its pre-industrial level would likely cause a warming of only about 1°C, hardly cause for alarm.
- The global surface temperature increase since about 1860 corresponds to a recovery from the Little Ice Age, modulated by natural ocean and atmosphere cycles, without need for additional forcing by greenhouse gases.
- There is nothing unusual about either the magnitude or rate of the late 20th century warming, when compared with previous natural temperature variations.
- The global climate models projected an atmospheric warming of more than 0.3°C over the last 15 years, but instead, flat or cooling temperatures have occurred.
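The first finding, about 1°C of warming per doubling of CO2, implies a response that depends on the concentration ratio rather than the absolute amount. A minimal sketch of that arithmetic, assuming the standard logarithmic form of the CO2 response (the logarithmic form is a common modeling assumption, not a claim from the report; only the 1°C-per-doubling sensitivity figure comes from the CCR-II finding quoted above):

```python
import math

def warming(c_ratio: float, sensitivity: float = 1.0) -> float:
    """Warming in degrees C for a CO2 concentration ratio C/C0,
    given a per-doubling sensitivity (default: the 1 degree C figure
    cited in the CCR-II finding above)."""
    return sensitivity * math.log2(c_ratio)

print(warming(2.0))  # a full doubling yields exactly the sensitivity value
print(warming(1.5))  # a 50% increase yields well under 1 degree C
```

Under this assumption, each successive increment of CO2 contributes less warming than the last, which is why the debate turns on the sensitivity value rather than on emissions totals alone.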
The science presented by the CCR-II report directly challenges the conclusions of the IPCC. Extensive peer-reviewed evidence is presented that climate change is natural and man-made influences are small. Fifteen years of flat temperatures show that the climate models are in error.
Each year the world spends over $250 billion to try to decarbonize industries and national economies, while other serious needs are underfunded. Suppose we take a step back and “reconsider” our commitment to fighting climate change?
The Nongovernmental International Panel on Climate Change is a project supported by three independent nonprofit organizations: Science and Environmental Policy Project, Center for the Study of Carbon Dioxide and Global Change, and The Heartland Institute. Steve Goreham is Executive Director of the Climate Science Coalition of America and author of the book The Mad, Mad, Mad World of Climatism: Mankind and Climate Change Mania.
[First Published by The Washington Times.]