President Donald Trump wants to purchase Greenland, a self-governing territory of Denmark. “Strategically, it’s interesting,” he observed, though “it is not number one on the burner.”
Alas, the Danes aren’t impressed. “Greenland is not for sale. Greenland is not Danish. Greenland belongs to Greenland,” said Denmark’s Prime Minister Mette Frederiksen.
Greenlanders were a bit blunter. Inuit Maya Sialuk told the Wall Street Journal: “We are still trying to recover from a colonization period of almost 300 years. Then there is this white dude in the States who’s talking about purchasing us.”
Trump, always in character, called the Prime Minister’s response “nasty” on Wednesday and abruptly canceled a bilateral meeting with her government planned for next month.
In 1867, Secretary of State William Seward, who orchestrated the acquisition of Alaska, proposed buying Greenland. In 1946, the Truman administration did likewise. Both times, the Danes said no.
Greenland is not just a place; it is a territory of 56,000 people who largely govern themselves—with a parliament and prime minister—other than in international affairs, which Copenhagen manages, though in consultation with the locals. In fact, internationally Greenlanders are viewed as an independent people. The original residents were Inuit. Vikings showed up in the 10th century, and the territory eventually became part of Norway, and then Denmark.
Greenland is obviously strategic—it is close to America and hosts Thule Air Base. However, one should not oversell its value to American interests. The Washington Examiner grandly declared that Thule “gives the U.S. military the means to deter and defeat prospective aggression.” Aggression by whom? A sneak attack by the Russkies or Chinese launched from the Arctic seems, well, unlikely.
Anyway, no one expects NATO member Denmark to hand over the island to a hostile power. Last year, Washington opposed Chinese financing of three airports, and Denmark’s government found other funders. Canada and Mexico are even more strategic and the U.S. isn’t trying to buy them. (Although Americans once unsuccessfully attempted to conquer Canada and considered annexing all of Mexico, instead of the half that Washington seized after winning the Mexican-American War.)
Most everyone is making the issue about America; at least the Examiner remembered that the U.S. would be, er, buying people. But, it explained, don’t worry, “this isn’t just about American interests. Greenland’s small population also has everything to gain from a massive influx of American investment. The surge in tourism alone would surely offer a vast untapped potential.”
It’s not clear why U.S. firms would suddenly invest in a largely icebound territory that lies mostly north of the Arctic Circle. And what is stopping tourists from going today? Maybe Greenlanders don’t want to be overwhelmed by their much larger neighbors.
While talking about the potential financial benefits for Denmark, columnist Quin Hillyer pronounced, “The wishes of Greenland’s current population should be considered.” Only “considered”? They rule themselves. Would the U.S. seriously contemplate taking control if they didn’t want to join the American colossus? Surely thousands of people should not be bartered as if they’re an oil field or coal mine.
While the Examiner lauds the possibility of Greenland’s “inhabitants joining our national family,” they might not have the same desire to be ruled by the imperial city of Washington, D.C. Can’t say I blame them. America remains exceptional in many ways, but it’s ruled by an abusive, hypocritical, irresponsible, sanctimonious, and incompetent elite—at best legal guardians, not parents—living far distant and with only minimal concern for the “family’s” welfare.
Power is increasingly concentrated in Washington. The federal government exercises ever more authority over ever more aspects of Americans’ lives. Interest group politics has grown more feverish and vicious. In recent years the U.S. has even been moving in the wrong direction on economic liberty. Although it remains ahead of Denmark on the Economic Freedom of the World ratings, on the similar Index of Economic Freedom, Denmark is just one position and a fraction of a point behind America. Most of the Scandinavian countries are redistributionist rather than socialist, and employ less intrusive regulation than the U.S.
People have long been concocting new schemes to expand the American Empire. In the early days, Washington conquered nearby territories; then it acquired more distant possessions. These days, outright aggression is frowned upon, so expansionists must be more nuanced. For instance, before the possibility of Canada dissolving was mooted, even Patrick Buchanan, who had long argued against America’s warfare state, listed the seceding pieces Washington should snag.
However, the U.S. already is too big. With nearly 330 million people, there is no “national family.” California is a fabulous place, but a majority of its citizens want to base policy on dirigiste economics and identity politics. Why not let them go their own way, rather than whine when the Electoral College prevents them from imposing their self-absorbed fantasies on everyone else?
Equally caustic judgments could be made against other sections of America, such as the South, Rust Belt, and New England. Books have been written about breaking the U.S. into pieces. In this case, secession, or “separation,” would have nothing to do with race and slavery. Rather it would be about community, commitment, family, communication, unity, compassion, responsibility, humanity, and scale. Americans from everywhere should live in peace. But there is no reason why everyone needs to be forced into the same massive political aggregation, with one faction or another constantly attempting to seize control of the whole.
Thinking creatively could yield additional benefits. Why not sell off California to the highest bidder? That could raise a good chunk of money to pay down the national debt. Put Hawaii on the market. Mark Zuckerberg, Bill Gates, or perhaps some Russian or Middle Eastern billionaire might make it worth Americans’ while to yield the Pacific paradise. If not Hawaii, then perhaps American Samoa and the Commonwealth of the Northern Mariana Islands.
The Midwest, with its big agricultural production, would be in high demand. China might pay a hefty price—after all, it has a lot of people to feed! There might even be a market for progressive enclaves: San Francisco, Austin, Madison, New York City, Atlanta, and more. Bundle them together and see what the market will bear. Maybe a British government under Prime Minister Jeremy Corbyn would make an offer. It would be a once-in-a-lifetime opportunity to acquire America’s commanding heights of left-progressivism.
Another alternative would be to have U.S. communities go the way of Greenland—that is, become autonomous territories under Danish control. For example, Springfield, Virginia, where I live, could offer to sell to Denmark. No more being forced to support the American imperium.
What is not to like about Greenland’s situation? Once an active, militaristic, and colonial power, Denmark has left those days behind. Now it is a small, inoffensive, constitutional monarchy. It survived World War II and protected its Jewish population. It’s a small, human-sized country, with fewer than six million people; it’s wealthy, democratic, and, according to a UN survey, for what that’s worth, the world’s happiest place.
Perhaps most important, the Danish military has only about 27,000 people in uniform. New York City alone has 36,000 police officers. That means Denmark really can’t wander the globe bombing, invading, and occupying other nations, unlike Washington, which seems to believe Americans cannot be happy unless they’re at war.
Denmark exercised exceptionally good judgment by remaining neutral in World War I, perhaps the stupidest modern war with the greatest long-term consequences. In contrast, America, led by the sanctimonious megalomaniac Woodrow Wilson, voluntarily, even enthusiastically, entered that conflict. World War I brought forth communism, fascism, Nazism, and World War II. Had Washington stayed out, a compromise peace was most likely; the result would have been unsatisfying, but far better for humankind. Denmark’s perspective was the right one, and American policymakers were wrong.
President Trump should leave Greenland alone. It isn’t Denmark’s to sell and it isn’t in America’s interest to buy. This nation’s problems have resulted not from a lack of territory, but from its transformation from a democratic republic to a global imperium. No wonder Mute Bourup Egede, who heads a Greenland independence party, observed: “America will always have an interest in Greenland. Our country will always be ours.” As it should be.

Doug Bandow is a senior fellow at the Cato Institute. He is a former special assistant to President Ronald Reagan and the author of Foreign Follies: America’s New Global Empire.
In mid-1939 German Chancellor Adolf Hitler had a problem. He wanted to go to war with the Soviet Union in order to grab precious Lebensraum, or living space — and eradicate the Bolshevik menace. The Western powers, however, namely Great Britain and France, refused to make a deal with him.
Instead, they guaranteed the security of Poland, the next obvious Nazi target and pathway to the USSR. Hitler wanted to avoid a two-front war, which had ended badly for the Germans in World War I. So the Austrian corporal turned German Führer sought a deus ex machina. He found it on August 23, 1939, with the signing of the Treaty of Non-Aggression Between Germany and the Union of Soviet Socialist Republics, better known as the Hitler-Stalin Pact or the Molotov-Ribbentrop Pact, after the respective dictators and foreign ministers who negotiated its terms.
While diplomacy almost always is preferable to war, the two sometimes coincide. Plenty of plundering marauders have made common cause. But it is hard to think of an example of greater depravity: two of the worst mass murderers in history dividing the world between them.
World War I left both Germany and Russia isolated pariah states. Germany’s new Weimar republic had expected gentler treatment by the allies, having surrendered under Woodrow Wilson’s “14 Points” and then defenestrated the Kaiser and the entire imperial system. But the Versailles Treaty placed full blame on Berlin, amputated historic Germanic lands, transferred indisputably German populations to other nations, imposed the cost of the war on the German people, and kept the democratic German government out of the League of Nations, which was designed to guarantee British and French dominance of the new international order. Ravaged by political conflict and civil strife at home, Berlin schemed to overturn the artificial territorial divide, which it never accepted.
The newly created Soviet Union, successor state to the Russian Empire, was even more isolated. Forced by Germany, which triumphed on the Eastern Front, to accept the draconian Treaty of Brest-Litovsk in 1918 — the only way for the Bolsheviks to preserve their tenuous control as civil war loomed — the Communists spent the next several years battling counter-revolutionaries while seeking to reassemble the old empire. The Americans, British, French, and Japanese intervened militarily against the new regime, first hoping to keep Russia in the war and next seeking to strangle the Soviet state in its infancy. The USSR survived, but turned inward as Vladimir Ilyich Lenin’s successors battled for control and the triumphant Joseph Stalin brutally industrialized his peasant nation.
During this time, the former enemies became friends of sorts. In April 1922, Germany and Russia signed the Treaty of Rapallo, renouncing financial and territorial claims against the other. A secret annex allowed Berlin to train military personnel and test military equipment on Soviet soil, violating the Versailles Treaty. The Treaty of Berlin, signed in April 1926, guaranteed neutrality in the event of a third-party attack on the other. Trade also expanded between the two states — especially noteworthy for Moscow, which was more isolated from capitalist markets.
Then Adolf Hitler came to power on January 30, 1933. He disliked the Western powers but bore special animus toward the Soviet Union and Bolshevism, against which he had preached war. In November 1936, Berlin and Tokyo signed the Anti-Comintern Pact, which explicitly targeted the Communist International and USSR. Aided by substantial Communist parties active in Europe, Moscow initially looked to the West. In May 1935, France and the Soviet Union signed the Franco-Soviet Treaty of Mutual Assistance. After the September 1938 Munich Agreement and March 1939 German invasion of Czechoslovakia, Moscow, London, and Paris opened tripartite talks over military cooperation against Germany.
The barriers to agreement were significant, however. Only a fellow traveler could imagine Soviet Communism as a trustworthy bulwark for Western democracies. Poland refused to allow the passage of Soviet troops, lest they not be so quick to leave. And the British and French, uncertain and unenthusiastic, hoped war with Germany could be avoided and doubted the military value of the Red Army, which was recovering from Stalin’s purges. Divided over strategy, they stalled negotiations, sending their representatives by boat rather than air and denying them authority to make a deal.
This encouraged Stalin to seek a new foreign dance partner. In May he replaced Maxim Litvinov, the Westward-leaning (and Jewish) foreign minister, with Vyacheslav Mikhailovich Molotov, a hardened revolutionary and loyal apparatchik. The result was a whirlwind geopolitical romance, as Hitler pressed for a quick settlement that would free him to attack Poland and then deal with his European foes. The two great totalitarian rivals decided that they were united in “opposition to the capitalist democracies,” as the diplomats put it.
Of course, there were tensions, since both governments had spent years vilifying the other. Some top Nazis were uneasy about sacrificing the Finns and Balts, who were supposed to be racial kin of the Germans (and among whom many ethnic Germans lived). The political pirouettes performed by the Communists, especially party members in the West, who went from calling for war against the Reich to demanding peace with Germany, were even more dramatic. After all, few of them had previously acted as if they believed that “fascism is a matter of taste,” as Molotov observed when the agreement was signed.
Publicly the two governments agreed not to aid or ally with any nation against the other signatory. Through a secret protocol, Berlin and Moscow defined “spheres of influence”: the two totalitarians coldly divided Poland and apportioned influence over the three Baltic States, Finland, and Romania. (They later adjusted their shares, with continuing contempt for the territories and peoples bartered back and forth.) Moscow also became a significant supplier of raw materials to the Reich, receiving industrial and military products in return.
Stalin apparently explained his decision in a speech to the Politburo on August 19, as the agreement was being finalized — though the Soviets always denied that the talk occurred. (More than one version of the supposed text exists.) He explained why “we must accept the German proposal and, with a refusal, politely send the Anglo-French mission home.”
The Soviet dictator said the agreement with Germany ensured that Berlin would invade Poland and be at war with France and Britain. As a result, “Western Europe would be subjected to serious upheavals and disorder. In this case we will have a great opportunity to stay out of the conflict, and we could plan the opportune time for us to enter the war.”
If Berlin defeated the allies, it still would have acknowledged the USSR’s geopolitical interests, he indicated. More important, “Germany will leave the war too weakened to start a war with the USSR within a decade at least.” Berlin also would need to occupy the two allied states and exploit new territories. “Obviously, this Germany will be too busy elsewhere to turn against us” — especially after a Communist revolution would break out in France and it, along with other nations that fall under the victorious Nazis, would become Moscow’s ally.
If Germany lost to Britain and France, “a Sovietization of Germany will unavoidably occur,” said Stalin, though he was afraid that Britain and France would intervene and destroy the resulting Communist government. Therefore, he argued, “our goal is that Germany should carry out the war as long as possible so that England and France grow weary and become exhausted to such a degree that they are no longer in a position to put down a Sovietized Germany.”
Stalin’s cynicism was almost complete. He concluded that “it is in the interest of the USSR, the worker’s homeland, that a war breaks out between the Reich and the Anglo-French bloc. Everything should be done so that it drags out as long as possible with the goal of weakening both sides.” After signing the non-aggression pact, Moscow must “work in such a way that this war, once it is declared, will be prolonged maximally.”
In some ways the aftermath was predictable. Germany invaded Poland on September 1, 1939. London and Paris declared war on Berlin on September 3. The Soviets grabbed their share of Poland two weeks later, causing that nation to cease to exist. (Reestablished after the war, Poland was unable to reclaim the lands seized by Russia, instead annexing German lands to the west.) Next, the USSR stationed troops in and ultimately swallowed the helpless Baltic countries, placed in its sphere of influence by the Hitler-Stalin Pact. In November, Moscow attacked Finland. The latter fought heroically but was forced to cede territory to the Soviet Union. Finally, Moscow demanded Bessarabia and Northern Bukovina from Romania.
But Stalin seriously overestimated French and British military effectiveness. And Hitler was even more cynical than the Soviet leader about their deal, never abandoning his underlying animus toward Communism. In June 1940, Hitler announced that Germany’s victories in the West “finally freed his hands for his important real task: the showdown with Bolshevism.”
Perhaps even more important, German and Soviet geopolitical interests clashed in the Balkans and Finland. Negotiations ensued over enlisting Moscow as a fourth member of the Axis, but Stalin could not be diverted toward the Middle East and South Asia. For a time the German Führer appeared genuinely ambivalent about which direction to move, and Molotov visited Berlin in November 1940. Some of the talks had to be held in a bomb shelter during a British air raid, embarrassing the Germans.
The following month Stalin spoke to his generals; he anticipated war but hoped to delay conflict for at least two years to give the Red Army time to prepare. He got six months. Moscow’s demands were too heavy and Germany allowed the negotiations to lapse. Hitler complained that his Soviet counterpart “demands more and more” and is “a cold-blooded blackmailer.” Thus, the USSR “must be brought to her knees as soon as possible.” When Stalin was speaking with his generals, the German military was delivering its plan for the invasion of the Soviet Union. Originally scheduled for May 15, the action ultimately began on June 22.
Operation Barbarossa ended the Russo-German entente less than two years after it was forged. Hitler expected an easy victory. Before attacking, he declared, “We have only to kick in the door and the whole rotten structure will come crashing down.” Germany’s initial victories were great, but the Soviet Union’s resources were greater. Barely four years later, on May 2, 1945, the Red Army celebrated victory in the ruins of Berlin. Hitler’s thousand-year Reich collapsed 988 years early, with the Führer dying in the ruins of his chancellery.
Stalin died in 1953 of a stroke, or perhaps of poisoning by his secret police chief, Lavrentiy Beria. Ribbentrop was executed after trial by the Nuremberg tribunal, having been convicted for his role in promoting aggressive war and unleashing the Holocaust. Molotov lost influence after Stalin’s death but lived until 1986, when he died at the age of 96. He remained an unrepentant Stalinist to the end. Only in 1989 did Moscow admit the existence of the secret protocol; President Vladimir Putin later condemned the agreement as “immoral.”
There almost certainly would have been widespread war without the Hitler-Stalin Pact. But the character of the conflict would have been radically different. Had the German dictator proceeded to invade Poland, followed by an attack on the USSR, the Wehrmacht would have been far less prepared for extensive operations. So would the Red Army, and there would have been no American Lend-Lease program, which effectively mechanized the Soviet military. Berlin, however, would not have been able to call on significant contingents of Hungarians, Italians, and Romanians for aid. Rather than do nothing during the infamous Sitzkrieg after declaring war on Germany, Britain and France might have launched an offensive while German troops were tied down in faraway Russia.
A strike westward without safeguarding Germany’s eastern border would have been far riskier for the Reich than the conflict’s actual course. Poland might have attacked to support its allies. Moscow probably would have stayed neutral while accelerating its armaments production. But the USSR might have taken a more active role in the conflict: without a non-aggression pact, the Soviet Union would have been the obvious next target for a Germany victorious in the West. And any subsequent German attempt to conquer the Soviet Union would have come without advanced positions in the east or the advantage of surprise. America’s involvement might have remained much the same, dedicated to saving Britain and defeating Germany — and aiding Russia if the latter was attacked.
In short, there have been few treaties with consequences as great as the Hitler-Stalin Pact. It simultaneously emboldened the Third Reich, weakened the Allies, and anesthetized the Soviets. The agreement might not have changed the course of the war, but probably lengthened it while increasing the casualty toll. Perhaps the gravest humanitarian consequence was the expansion of the Holocaust. The treaty gave Germany easier access to countries with large Jewish populations and space within these countries for death camps.
Finally, the dictators’ partnership helped transform the map of Europe. If Berlin had not abandoned friendly states along Russia’s border, the Soviet Union might not have swallowed the Baltics and chopped off pieces of Finland, Poland, and Romania. Perhaps Poland would have avoided defeat, ultimately being allied with rather than a victim of the USSR.
Yet this malign “deal of the century” was well-nigh impossible to avoid. Only very late did the Allies understand the true nature of Hitler and his regime. Soviet brutality — such as the Katyn massacre of thousands of Polish military officers — retrospectively justified Warsaw’s reluctance to admit the Red Army to fight Germany. Virtually no one imagined the success of the Wehrmacht’s Blitzkrieg, without which Stalin’s plan for sitting out the conflict might have proved prescient.
Eighty years on, the picture of Stalin, Molotov, and Ribbentrop celebrating their handiwork still offends us mentally and morally. Thankfully, that world has passed. Yet evil has not disappeared from international affairs. We should never forget the moment when two of history’s worst dictators came together to do evil, leaving immeasurable death and carnage in their wake.

Doug Bandow is a senior fellow at the Cato Institute. He is a former special assistant to President Ronald Reagan and the author of Foreign Follies: America’s New Global Empire.
Across the map of the United States, the borders of Tennessee, Oklahoma, New Mexico, and Arizona draw a distinct line. It’s the 36°30′ line, a remnant of the boundary between free and slave states drawn in 1820. It is a scar across the belly of America, and a vivid symbol of the ways in which slavery still touches nearly every facet of American history.
That pervasive legacy is the subject of a series of articles in The New York Times titled “The 1619 Project.” To cover the history of slavery and its modern effects is certainly a worthy goal, and much of the Project achieves that goal effectively. Khalil Gibran Muhammad’s portrait of the Louisiana sugar industry, for instance, vividly covers a region that its victims considered the worst of all of slavery’s forms. Even better is Nikole Hannah-Jones’s celebration of black-led political movements. She is certainly correct that “without the idealistic, strenuous and patriotic efforts of black Americans, our democracy today would most likely look very different” and “might not be a democracy at all.”
Where the 1619 articles go wrong is in a persistent and off-key theme: an effort to prove that slavery “is the country’s very origin,” that slavery is the source of “nearly everything that has truly made America exceptional,” and that, in Hannah-Jones’s words, the founders “used” “racist ideology” “at the nation’s founding.” In this, the Times steps beyond history and into political polemic—one based on a falsehood, and one that, in an essential way, repudiates the work of countless people of all races, including those Hannah-Jones celebrates, who have believed that what makes America “exceptional” is the proposition that all men are created equal.
For one thing, the idea that, in Hannah-Jones’s words, the “white men” who wrote the Declaration of Independence “did not believe” its words applied to black people is simply false. John Adams, James Madison, George Washington, Thomas Jefferson, and others said at the time that the doctrine of equality rendered slavery anathema. True, Jefferson also wrote the infamous passages suggesting that “the blacks…are inferior to the whites in the endowments both of body and mind,” but he thought even that was irrelevant to the question of slavery’s immorality. “Whatever be their degree of talent,” Jefferson wrote, “it is no measure of their rights. Because Sir Isaac Newton was superior to others in understanding, he was not therefore lord of the person or property of others.”
The myth that America was premised on slavery took off in the 1830s, not the 1770s. That was when John C. Calhoun, Alexander Stephens, George Fitzhugh, and others offered a new vision of America—one that either disregarded the facts of history to portray the founders as white supremacists, or denounced them for not being so. Relatively moderate figures such as Illinois Sen. Stephen Douglas twisted the language of the Declaration to say that the phrase “all men are created equal” actually meant only white men. Abraham Lincoln effectively refuted that in his debates with Douglas. Calhoun was, in a sense, more honest about his abhorrent views; he scorned the Declaration precisely because it made no color distinctions. “There is not a word of truth in it,” wrote Calhoun. People are “in no sense…either free or equal.” Indiana Sen. John Pettit was even more succinct. The Declaration, he said, was “a self-evident lie.”
It was these men—the generation after the founding—who manufactured the myth of American white supremacy. They did so against the opposition of such figures as Lincoln, Charles Sumner, Frederick Douglass, and John Quincy Adams. “From the day of the declaration of independence,” wrote Adams, the “wise rulers of the land” had counseled “to repair the injustice” of slavery, not perpetuate it. “Universal emancipation was the lesson which they had urged upon their contemporaries, and held forth as transcendent and irremissible [sic] duties to their children of the present age.” These opponents of the new white supremacist myth were hardly fringe figures. Lincoln and Douglass were national leaders backed by millions who agreed with their opposition to the white supremacist lie. Adams was a former president. Sumner was nearly assassinated in the Senate for opposing white supremacy. Yet their work is never discussed in the Times articles.
In 1857, Chief Justice Roger Taney sought to make the myth into the law of the land by asserting in Scott v. Sandford that the United States was created as, and could only ever be, a nation for whites. “The right of property in a slave,” he declared, “is distinctly and expressly affirmed in the Constitution.” This was false: the Constitution contains no legal protection for slavery, and doesn’t even use the word. Both Lincoln and Douglass answered Taney by citing the historical record as well as the text of the laws: the founders had called slavery both evil and inconsistent with their principles; they forbade the slave trade and tried to ban it in the territories; nothing in the Declaration or the Constitution established a color line; in fact, when the Constitution was ratified, black Americans were citizens in several states and could even vote. The founders deserved blame for not doing more, but the idea that they were white supremacists, said Douglass, was “a slander upon their memory.”
Lincoln provided the most thorough refutation. There was only one piece of evidence, he observed, ever offered to support the thesis that the Declaration’s authors didn’t mean “all men” when they wrote it: that was the fact that they did not free the slaves on July 4, 1776. Yet there were many other explanations for that which did not prove the Declaration was a lie. Most obviously, some founders may simply have been hypocrites. But that individual failing did not prove that the Declaration excluded non-whites, or that the Constitution guaranteed slavery.
Even some abolitionists embraced the white supremacy legend. William Lloyd Garrison denounced the Constitution because he believed it protected slavery. This, Douglass replied, was false both legally and factually: those who claimed it was pro-slavery had the burden of proof—yet they never offered any. The Constitution’s wording gave slavery no guarantees and provided plentiful means for abolishing it. In fact, none of its words would have to be changed for Congress to eliminate slavery overnight. It was slavery’s defenders, he argued, not its enemies, who should fear the Constitution—and secession proved him right. Slaveocrats had realized that the Constitution was, in Douglass’s words, “a glorious liberty document,” and they wanted out.
Still, after the war, “Lost Cause” historians rehabilitated the Confederate vision, claiming the Constitution was a racist document, so that the legend remains today. The United States, writes Hannah-Jones, “was founded…as a slavocracy,” and the Constitution “preserved and protected slavery.” This is once more asserted as an uncontroverted fact—and Lincoln’s and Douglass’s refutations of it go unmentioned in the Times.
No doubt Taney would be delighted at this acceptance of his thesis. What accounts for it? The myth of a white supremacist founding has always served the emotional needs of many people. For racists, it offers a rationalization for hatred. For others, it offers a vision of the founders as arch-villains. Some find it comforting to believe that an evil as colossal as slavery could only be manufactured by diabolically perfect men rather than by quotidian politics and the banality of evil. For still others, it provides a new fable of the fall from Eden, attractive because it implies the possibility of a single act of redemption. If evil entered the world at a single time, by a conscious act, maybe it could be reversed by one conscious revolution.
The reality is more complex, more dreadful, and, in some ways, more glorious. After all, slavery was abolished, segregation was overturned, and the struggle today is carried on by people ultimately driven by their commitment to the principle that all men are created equal—the principle articulated at the nation’s birth. It was precisely because millions of Americans have never bought the notion that America was built as a slavocracy—and have had historical grounds for that denial—that they were willing to put their lives on the line, not only in the 1860s but ever since, to make good on the promissory note of the Declaration.
Their efforts raise the question of what counts as the historical “truth” about the American Dream. A nation’s history, after all, occupies a realm between fact and moral commitments. Like a marriage, a constitution, or an ethical concept like “blame,” it encompasses both what actually happened and the philosophical question of what those happenings mean. Slavery certainly happened—but so, too, did the abolitionist movement and the ratification of the Thirteenth, Fourteenth, and Fifteenth Amendments. The authors of those amendments viewed them not as changing the Constitution, but as rescuing it from Taney and other mythmakers who had tried to pervert it into a white supremacist document.
In fact, it would be more accurate to say that what makes America unique isn’t slavery but the effort to abolish it. Slavery is among the oldest and most ubiquitous of all human institutions; as the Times series’ title indicates, American slavery predated the American Revolution by a century and a half. What’s unique about America is that it alone announced at birth the principle that all men are created equal—and that its people have struggled to realize that principle since then. As a result of their efforts, the Constitution today has much more to do with what happened in 1865 than in 1776, let alone 1619. Nothing could be more worthwhile than learning slavery’s history, and remembering its victims and vanquishers. But to claim that America’s essence is white supremacy is to swallow slavery’s fatal lie.
As usual, Lincoln said it best. When the founders wrote of equality, he explained, they knew they had “no power to confer such a boon” at that instant. But that was not their purpose. Instead, they “set up a standard maxim for free society, which should be familiar to all, and revered by all; constantly looked to, constantly labored for, and even though never perfectly attained, constantly approximated, and thereby constantly spreading and deepening its influence, and augmenting the happiness and value of life to all people of all colors everywhere.” That constant labor, in the generations that followed, is the true source of “nearly everything that has truly made America exceptional.”

Timothy Sandefur holds the Duncan Chair in Constitutional Government at the Goldwater Institute and is the author of Frederick Douglass: Self Made Man (Cato Institute, 2017).
Jonathan Blanks and Jeffrey A. Singer
What do gun owners and pain patients have in common? They both may be collateral damage of policy hastily enacted in response to catastrophic news. Mass shootings and drug overdoses naturally evoke fear and outrage. But with populism animating both major parties, we should be wary of policy making through fear. Visceral reactions to tragedies are normal, but new laws and restrictions rarely reduce harm and often make matters worse. The best public policy relies on data-driven evidence.
While all gun deaths share the common denominator of firearms, the vast majority have little in common with the mass shootings that dominate headlines. The scale of those differences is staggering, and the facts undermine the current advocacy focused on “assault weapons.”
According to Mother Jones’ mass shootings database, there have been 114 mass and spree shootings in the U.S. since 1982. Those tragedies have resulted in 934 deaths and 1,406 people injured.
In 2017, there were nearly 40,000 gun deaths in the United States. Of that number, about 24,000 died by suicide. Gun suicides make up just over half of the roughly 47,000 American suicides annually. About 14,000 gun deaths were homicides, stemming primarily from street violence and intimate partner homicide.
Certainly, semi-automatic rifles made the 2017 Las Vegas shooting unfathomably deadly. But most gun deaths and most mass shootings are perpetrated with handguns. During the last federal ban on assault weapons, there was no measurable impact on gun-crime victimizations.
These facts should not preclude new gun laws, but the drivers of these deaths go beyond guns. Despite a recent uptick, homicide rates remain near historic lows after two decades of decline in violent crime. But suicides are trending upward, which is evidence that policymakers should pay more attention to the “why” rather than simply the “how” of so many deaths.
In 2017, the Centers for Disease Control and Prevention reported 47,600 opioid-related deaths. Policymakers blamed doctors’ excessive prescribing of opioids for addicting the population.
But federal survey data consistently show no correlation between prescription volume and the nonmedical use of opioids or opioid addiction. And medically prescribed opioids have overdose rates ranging from 0.022% to 0.04%.
Many people mistake dependency for addiction, but they are two different things. Some drugs, including opioids, antidepressants, antiepileptics and beta blockers, can make a person physically dependent after prolonged use. Abruptly stopping them can cause sometimes fatal withdrawal effects.
Addiction, on the other hand, is a distinct behavioral disease, with a major genetic component, featuring compulsive behavior despite obvious self-destructive consequences. The director of the National Institute on Drug Abuse states that opioid addiction in patients is very uncommon “even among those with preexisting vulnerabilities.” Recent studies show a “misuse” rate of 0.6% in patients prescribed opioids for acute pain and roughly 1% in those on chronic opioid treatment.
High-dose prescribing is down 58% since 2008. Yet the overdose rate continues to rise, involving fentanyl or heroin 75% of the time. Evidence shows a steady exponential increase in nonmedical use of drugs since the 1970s and suggests complex socio-cultural factors are root causes. As prescription pain pills become less available for diversion into the black market, nonmedical users find cheaper and deadlier options.
Opioid dependence is real, but not necessarily detrimental. As the American Medical Association has acknowledged, there are many patients for whom opioids are the only drugs that control their pain enough to live a quality life. But our fear-based response to opioids — with top-down pill restrictions and crackdowns on prescribers — has cut off many chronic pain patients, causing a great number to self-medicate with unpredictable and dangerous drugs on the black market. Some, in desperation, turn to suicide.
The overdose problem has always been primarily a consequence of drug prohibition and the dangerous black market it fuels. To reduce overdoses, policies should be redirected from restrictive, prohibitionist interventions to those more focused on reducing the harms that result from drug use in an underground market.
Drug overdoses and gun deaths are serious problems that require changes from the status quo. However, changes should be based on data and political realities, not fears that demand policymakers “do something.” Implementing the wrong policies can obscure larger problems or make bad situations tragically worse.

Jonathan Blanks is a research associate in the Cato Institute Project on Criminal Justice; Jeffrey Singer practices general surgery in Phoenix, Ariz., and is a senior fellow at the Cato Institute.
Michael D. Tanner
Democrats running for president have certainly not hesitated to criticize President Trump’s trade policies.
There is a good reason for the rhetoric. Several recent studies, from researchers at Harvard, Columbia, the IMF, and two different branches of the Federal Reserve, have all concluded that the tariffs imposed by President Trump on China and others have indeed hurt American consumers and threatened economic growth domestically and internationally. For instance, scholars at Columbia, Princeton, and the New York Fed found that the Trump tariffs had reduced U.S. real income by $1.4 billion per month by the end of 2018.
In response — or perhaps just because Americans reflexively oppose any Trump policy — polls suggest that support for free trade is on the rise. A 2018 Monmouth poll found that 52 percent of Americans think free-trade agreements are good for the United States, a dramatic increase from 24 percent in 2015.
Democrats are right to disagree with Trump. Too bad they don’t bring any good ideas to the table.
But what exactly are the Democratic presidential candidates proposing as an alternative? Their policies — as opposed to their words — don’t seem all that different. In fact, some of the Democratic plans may be even more restrictive.
For example, many experts believe that the best way to restrain China would be to join with our regional allies in some sort of bloc, similar to the Trans-Pacific Partnership (TPP). And there is reason to believe that our allies would be happy to have us join the pact. But with the exception of extreme long-shot Representative John Delaney, every major Democratic candidate either joins Trump in opposing the TPP or is highly critical of the pact as negotiated. Even former vice president Joe Biden won’t commit to the treaty negotiated by the administration he served.
Biden’s change in position is just his latest concession to the special interests and unions that dominate the Democratic primaries. He once voted for normal trade relations with China and for NAFTA, and pushed for the Trans-Pacific Partnership, but no longer.
Nor is it just the TPP that Democrats oppose. Like Trump, most of the major Democrats oppose NAFTA. But, with the exception of Beto O’Rourke, they also oppose Trump’s renegotiation of NAFTA (renamed the United States-Mexico-Canada Agreement, or USMCA). Most Democrats have also opposed other, bilateral trade deals, such as those with Korea and Colombia.
The left flank of the Democratic party is even more anti-trade. Elizabeth Warren, for instance, wants the focus of trade to be on labor, the environment, and, ironically, consumers. She wants the U.S. to trade only with countries that have signed the Paris Agreement and meet onerous human-rights and labor standards.
This policy would fall most heavily on poor nations that can least afford costly environmental or labor upgrades. Countries such as El Salvador, Honduras, and Guatemala would be devastated, sending a new flood of refugees streaming toward our border.
And Bernie Sanders’s opinions are quite similar to Warren’s. Both of them are in favor of steel and aluminum tariffs and oppose all current trade deals. Sanders, like Warren, wants all future negotiations to be centered around labor, the environment, and human rights.
This shouldn’t be a surprise. The Left has long opposed free trade. After all, the ability to buy and sell to whomever you wish is the antithesis of central planning.
Unfortunately, though, for those of us who believe in the free market, the 2020 race continues to offer less of a choice, and more of an echo.

Michael Tanner is a senior fellow at the Cato Institute and the author of The Inclusive Economy: How to Bring Wealth to America’s Poor.
Steve H. Hanke
Yesterday, the ticket of Alberto Fernandez and Cristina Kirchner crushed the hapless president of the Argentine Republic, Mauricio Macri, in a primary election. Their victory virtually guarantees that the Fernandez-Kirchner team will occupy the Casa Rosada after the presidential election scheduled for October.
For many, including the pollsters, Sunday’s results were a stunner. Not for me. I have been warning for over a year that gradualism, which is Macri’s mantra, is a formula for political disaster. If that wasn’t enough, the Argentine peso is another time bomb that has sent many politicians in Argentina into early retirement. And, to add insult to injury, Macri called in the “firefighters” from the International Monetary Fund (IMF) to salvage the peso. These three factors sealed Macri’s fate.
As it turns out, this movie has played over and over again in Argentina. Argentina has seen many political gradualists bite the dust. What makes Macri unique is that he advertised gradualism as a virtue. Macri and his advisers obviously never studied the history of economic gradualism. When presidents face a mountain of economic problems, it’s the Big Bangers who succeed.
As for the venom that can be injected by a peso crisis, the instances of the poison delivered by that snake bite are almost too numerous to count. To list but a few of Argentina’s major peso collapses: 1876, 1890, 1914, 1930, 1952, 1958, 1967, 1975, 1985, 1989, 2001, and 2018.
It is noteworthy that the frequency of peso crises picked up after the establishment of the Central Bank of Argentina (BCRA) in 1935. With that, serial monetary mismanagement ensued. The chart below tells the BCRA story. Before the BCRA, Argentina (the peso) held its own against the United States (the dollar), with the respective per capita GDPs being roughly equal in 1935. But, after the BCRA entered the picture, a great divergence began. Now, the U.S. GDP per capita is roughly three times higher than that of Argentina.
The BCRA’s most recent monetary mishap began last year: the poor peso lost 58% of its value against the greenback between the start of 2018 and the end of May 2019. What was behind that collapse? On Macri’s watch, no less, the BCRA had been surreptitiously financing the government’s deficit spending. It did this by sterilizing increases in the net foreign asset component of Argentina’s monetary base via the sale of bonds issued by the BCRA (LEBACs). The sterilization (and financing of the government’s deficit) was on a massive scale: in the January 2017-May 2018 period, the BCRA sterilized 50% of the total increase in the foreign asset component of the monetary base. In consequence, the BCRA was the largest source of financing for Argentina’s sizable primary fiscal deficit. These typical Argentine monetary-fiscal shenanigans were an invitation for yet another currency disaster.
After the peso rout, Macri went hat in hand to the IMF. This was the dagger in the heart of Macri’s political career. For one thing, the Argentine public distrusts, if not despises, the IMF, and for good reason: the IMF’s record of failure in Argentina. Yes, the IMF’s prescriptions have turned out to be the wrong medicine. To stabilize the exchange rate of a half-baked currency (read: the Argentine peso), the IMF orders sky-high interest rates. With those rates, the economy collapses, as does the local currency the IMF is trying to stabilize.
As Harvard University’s Robert Barro put it, the IMF reminds him of Ray Bradbury’s Fahrenheit 451, “in which the fire department’s mission is to start fires.” Barro’s basis for that conclusion is his own extensive research, which has produced damning evidence of the IMF’s failures.
And, if that’s not bad enough, countries that participate in IMF programs tend to be recidivists. The IMF programs don’t provide cures, but create addicts.
For a clear picture of the addiction problem (read: recidivism), review the chart below. It lists the number of IMF programs that 146 countries have participated in. Haiti leads the pack with 27 programs since joining the IMF in 1953. Argentina is a heavy hitter, too. It joined the IMF in 1956 and is now hooked on its 22nd IMF program. That’s a new program every 2.8 years on average.

Armed with this weekend’s election results, Argentines are exchanging pesos for greenbacks as fast as they can. The peso has shed a stunning 20.5% against the preferred greenback since last Friday. And, by my measure, which uses high-frequency data, Argentina’s inflation rate has exploded to 103%/yr (see the chart below). To end Argentina’s never-ending monetary nightmare, the Central Bank of Argentina, along with the peso, should be mothballed and put in a museum. The peso should be replaced with the U.S. dollar. Argentina’s government should do officially what all Argentines do in times of trouble: dollarize. It’s time for the elites in Argentina to wake up and face reality.

Steve Hanke is a professor of applied economics at The Johns Hopkins University and senior fellow at the Cato Institute.
While President Trump’s immigration rhetoric continues to focus on the need to build a southern border wall, his administration is quietly pursuing a policy that could provide a lasting solution to the ongoing migrant surge.
The Department of Labor recently signed an agreement with Guatemala to increase bilateral cooperation for the H-2A visa program for low-skilled Guatemalans. By providing transparency and accountability measures, such as ensuring that labor recruiters are bona fide and vetted, the agreement paves the way for more Guatemalans to come legally.
The administration should sign similar agreements with the other Northern Triangle countries, El Salvador and Honduras, which account for the overwhelming majority of migrants, and exempt them from H-2A seasonality requirements. Historical experience suggests that increasing legal immigration options would reduce the number who come illegally.
The H-2A visa is for seasonal workers in agriculture. It offers low-skilled migrants the best — and in many cases only — opportunity to come work in the U.S., while also addressing the acute labor shortage faced by American farmers. The Trump administration seems to recognize that economic migration can be channeled into this legal system.
That’s important, because the surge of Central American migrants is not correlated with murder rates in their home countries and most arrivals aren’t referred to asylum interviews. Central Americans are primarily being pushed out of their home countries by a poor economy — exacerbated by the crash in coffee prices — and drawn in by a booming labor market here.
Neither the push nor the pull factors are going to change soon, so diverting migrants onto legal H-2A worker visas is key to meaningfully fixing the situation at the southern border.
For proof of the effectiveness of H-2A visas in stemming illegal migration, the Trump administration can consult recent history. Legal Mexican migration on expanded H-2A and H-2B (seasonal, non-agricultural) visas dramatically reduced illegal Mexican immigration over the last two decades. As the U.S. government increased the annual number of H-2 visas for Mexicans from 56,090 in 2000 to 242,582 in 2018, Mexican illegal immigration fell from over 1.6 million in 2000 to almost 137,000 in 2019 so far — a 91% drop.
During that time, a single additional H-2 visa for a Mexican worker is associated with 2.6 fewer Mexicans apprehended — controlling for border enforcement.
“Most of my friends go with visas or they don’t go at all,” said Mexican agricultural worker Jose Bacilio. In previous years, Mexican workers like Bacilio would have come illegally, but now they wait for visas.
Guatemalans, Hondurans and Salvadorans currently have no reason to wait, as they only got about 9,000 H-2 visas in 2018, slightly down from 2017. If the government issues more H-2 visas to Central Americans, then it will divert much of the current economic flow of migrants into the legal market, just like it did with Mexicans.
The Trump administration could achieve this by signing similar H-2A pacts with Honduras and El Salvador, then asking Congress to exempt these H-2A workers from the visa’s seasonality requirement, which forces migrants to return home after the harvest is finished. This exemption would encourage American labor recruiters to hire Central Americans without crowding out Mexican recruitment.
Electronic visa application processes at U.S. embassies and consulates in Central America must also be modernized and streamlined. Officials should copy their counterparts at U.S. consulates in Mexico.

The Trump administration’s first steps in streamlining the H-2A visa process for Guatemala are promising, but more needs to be done. Extra Mexican immigration enforcement and the summer heat cut the number of Central Americans showing up at the border by 29% — but neither will last forever.
Central Americans will come again in large numbers if they can’t come legally. The Trump administration could end the Central American border surge by shelving unhelpful border wall boasts in favor of doubling down on sound H-2A visa policy initiatives.

Alex Nowrasteh is the director of immigration studies at the Cato Institute.
The Endangered Species Act has been called the strongest environmental law Congress has ever written because it gives the government almost unlimited power to regulate private landowners with the objective of saving wildlife, fish, and even insects. Environmental groups that relish seeing this law enforced are upset that the Trump administration is proposing to change how the law is administered.
The Fifth Amendment to the Constitution forbids the taking of private property for public use without compensation. The Endangered Species Act violates the spirit, if not the letter, of this amendment.
Under the law, if you have an endangered species on your land, or if the government thinks you might have an endangered species on your land, or if the government knows you don’t have an endangered species on your land but thinks that you might someday have that species on your land, then the government can so strictly regulate your land that you can’t get any economic use out of it. For example, the government told Louisiana landowners that they couldn’t develop their property because it was defined as “critical habitat” for a rare frog - even though the frog didn’t, and couldn’t, live on the land unless the owners completely removed the existing trees and replaced them with other species.
Effectively, the government is requiring some private landowners to house and feed certain species of wildlife at the landowners’ expense. Moreover, the government can force this without providing any compensation at all. The law doesn’t require the government to consider the cost of its regulation, so government officials can write overly strict rules just in case it might help a species.
Yet there is little evidence that giving the government this power has done much to save species. The few species that have recovered from danger did so mostly for other reasons.
For example, America’s symbol, the bald eagle, was once considered endangered. But scientists agree that it recovered primarily because the Environmental Protection Agency banned the use of the pesticide DDT a year before the Endangered Species Act was passed.
Moreover, the Endangered Species Act may actually do more harm than good to endangered species. To avoid regulation, the law gives private landowners incentives to do everything they can to keep endangered species off their land, leading to the phrase, “shoot, shovel, and shut up.”
This entire system is unfair because it forces a few people to pay the costs for something that benefits everyone else. While it is unknown whether the Supreme Court would agree that the law is unconstitutional, we shouldn’t have to ask it because we shouldn’t have imposed such an inequitable burden on a few people in the first place.
The Trump administration has proposed to revise how the law is administered in several ways. Among other things, the proposed rules would allow the government to consider the costs of its regulation and would impose less intrusive regulations for the protection of species that are considered “threatened” as opposed to “endangered.”
While these changes may ease the burden on some private landowners, Congress and the administration could do much more to assure species recovery without imposing the costs on a few landowners. Carrots work better than sticks, meaning we can save more species by rewarding people for doing so rather than punishing them for having those species on their land.
First, a share of public land recreation fees should go into trust funds for protecting endangered species. To adequately fund this program, federal agencies such as the National Park Service, Forest Service, and Bureau of Land Management should be allowed to charge for all recreation on public lands.
Currently, most public land recreation, including hunting, fishing, hiking, boating, and off-road vehicle use, is free. It is perfectly fair to ask people who use public lands to pay such fees, and many will be happy to do so knowing that they are helping to save endangered species.
Second, on a case-by-case basis, it may be appropriate to give people ownership rights to selected species. In Britain, wildlife belong to the owners of the land the animals use, which gives those landowners an incentive to protect them. Giving Americans similar ownership rights can help save many species.
People go to great lengths to save rare breeds of dogs, cattle, and other domestic animals, not for any economic reward but simply for the pride in doing so. Creating ownership rights in some species of wildlife can put this energy to work in saving rare species.
Saving endangered species is important, but imposing the costs of doing so on a few people is unfair, counterproductive, and may be unconstitutional. Those who truly want to save rare species should support revisions to the law that give people incentives to save species without imposing the costs on a handful of landowners.

Randal O’Toole is a senior fellow with the Cato Institute and author of Reforming the Forest Service and co-author of The Endangered Endangered Species Act.
If government says that you are free to believe in something, but not to act on it, you are not truly free. That reality lies at the heart of a federal lawsuit filed by the Bethel Christian Academy against the state of Maryland, which kicked the academy out of a private school voucher program for having policies consistent with the school’s religious values. Such unequal treatment is unacceptable.
Immediately at issue are the school’s policies requiring that students and staff behave in ways consistent with the idea of marriage being between a man and a woman, and an individual’s proper gender being the one assigned at birth. The state maintains that those policies are discriminatory against LGBTQ individuals and that allowing public money - school vouchers from the state’s BOOST program - to flow to Bethel Christian is unacceptable.
The state’s position is totally understandable: All people should be treated equally when government is involved. The problem is that the state government is not treating religious people equally - a problem in the public education system not just in Maryland, but in every state in the country.
How does the current education system discriminate against religious people? Everyone is forced to pay for public schools - government-run and -funded schools - but those institutions cannot be religious in nature. They can teach about religion, but even that is very difficult because public schools must not be perceived as even incidentally promoting any religious precepts, much less being openly guided by them. In other words, non-religious people can get the education they want from the government schools for which they must pay, but religious people cannot.
There is an excellent reason for prohibiting the endorsement of religion by public schools: In a diverse society, it would inevitably end up with government favoring one person’s religion over another’s. Indeed, for much of our history public schools did exactly that, typically favoring Protestantism over Catholicism, Judaism, atheism and so on. The current system no longer favors Protestantism, instead favoring secularism over religion, a violation of government’s mandate to be neutral with regard to religion. Most famously, a public school can teach that the theory of evolution is true, but not creationism, a religious explanation.
If government can neither favor nor disfavor religion, what is it to do? As long as it is going to fund education, the answer is to do what BOOST begins to do: allow families to choose schools with the tax money earmarked for their children’s education. Do not have government decide what is acceptable or unacceptable for children to learn; let families decide for themselves. That is true equality under the law.
Which brings us back to Bethel: If a religious school cannot act on its religious principles without being cut off from a choice program, that program ceases to provide equality under the law. It essentially says that educators and parents may pick a school consistent with their faith, but as a practical matter that faith must be dead.
Of course, just because liberty and equality dictate that government not take sides on religious questions, it does not mean that individuals who dislike religious schools’ policies have to just accept them. They can and should use their own liberty, especially freedom of speech, to critique and even condemn them.
It would be better if Maryland had a scholarship tax credit program than a voucher. Then taxpayers could choose to direct their education dollars to religious institutions and get a credit for it, rather than all taxpayers having some sliver go to religious institutions, like it or not. But it is still far more appropriate in a free society that people can choose schools consistent with their faith rather than be rendered second class.

Neal McCluskey is the director of the Cato Institute’s Center for Educational Freedom and maintains the Center’s Public Schooling Battle Map.
Ted Galen Carpenter
President Trump is once again beating the drums about the need for greater burden-sharing by U.S. allies. The latest example is his demand that South Korea pay “substantially more” than the current $990 million a year toward the costs of the American troops defending the country from North Korea.
This is not a new refrain from the president. Most of Trump’s spats with NATO members have focused on the financial aspects of burden-sharing. Yet the nature of his complaints leads to the inescapable conclusion that if allies were willing to spend more on collective defense efforts, he would have no problem maintaining Washington’s vast array of military deployments around the world.
Trump’s obsession with financial burden-sharing misses a far more fundamental problem. Certainly, the tendency of U.S. allies to skimp on their own defense spending and instead free ride on the oversized American military budget is annoying and unhealthy. But the more serious problem is that so many of Washington’s defense commitments to allies no longer make sense, if they ever did. Not only are such obligations a waste of tax dollars, they needlessly put American lives at risk and, given the danger of nuclear war in some cases, put America’s existence as a functioning nation in jeopardy. American military personnel should not be mercenaries defending the interests of allies and security clients when their own country’s vital interests are not at stake. Even if treaty allies offset more of the costs, as Trump demands, we should not want our military to be modern-day Hessians.
Unfortunately, the current situation is not unprecedented. During the Persian Gulf War, President George H.W. Bush expressed satisfaction that allied financial contributions offset most of Washington’s expenses. That was undoubtedly true. Indeed, according to some calculations, the United States may have ended up with a modest profit. Kuwait and Saudi Arabia were especially willing to contribute financially to support the U.S.-led military campaign to expel Saddam Hussein’s forces from Kuwait. Japan, still agonizing over the alleged limitations on military action that its “peace constitution” imposed, asserted that while it could not send troops, it would contribute funds to the war effort. All three countries practiced rather blatant “checkbook diplomacy.”
The Persian Gulf War was surprisingly short, and U.S. forces incurred far fewer casualties than anticipated. However, the immediate costs were merely the beginning of an expanded American security role in the Middle East that has proven to be disastrous. The checkbook diplomacy payments of 1990 and 1991 did not even begin to offset those horrendous, ongoing costs in treasure and blood.
Financial considerations aside, it never served American interests to become the onsite gendarme of the Middle East. Those who saw the Persian Gulf War as a low-cost, perhaps even no-cost, venture from the standpoint of finances were incredibly myopic. America’s role as Hessians for Kuwait, Saudi Arabia, and other powers undoubtedly benefited the ruling elites in those countries, but it clearly has not benefited the American people.
Yet Trump’s security policies continue to evince similar myopic impulses. During the 2016 presidential campaign, he repeatedly criticized NATO members for their lack of burden-sharing, and he even indicated that Washington’s defense commitments to an “obsolete” alliance might be reconsidered. But when the allies pledged greater defense spending at the 2018 NATO summit, Trump’s grousing was replaced by praise and expressions of alliance solidarity. He greeted with even greater enthusiasm the Polish president’s offer to offset construction costs if the United States built a military base in Poland, even though such a move would deepen already worrisome tensions with Russia. The American Conservative’s Daniel Larison put it well: “Trump is often accused of wanting to ‘retreat’ from the world, but his willingness to entertain this proposal shows that he doesn’t care about stationing U.S. forces abroad so long as someone else is footing most of the bill.”
The overwhelming focus of Trump’s burden-sharing goals continues to be financial. His administration shows little receptivity to independent defense policy initiatives on the part of allies. Indeed, he and his advisers, especially National Security Adviser John Bolton, show outright hostility to proposals for a European Union army or other manifestations of Europeans-only security efforts, even though they would seem to constitute meaningful burden-sharing. Bolton has blasted such initiatives as “a dagger pointed at NATO’s heart.” Washington simply wants the allies to pay more for American protection.
Instead, U.S. leaders need to engage in burden-shedding: eliminating security commitments that now entail far more risks than benefits to America. For example, it makes little sense to retain, much less add, obligations to defend small, strategically insignificant countries on Russia’s border. The risks of such a provocative stance clearly outweigh any potential benefits. Likewise, the risk-benefit calculation to continue providing a security shield for South Korea has changed dramatically since the days of the Cold War. Not only is South Korea now a much stronger country economically, one that can build whatever forces are needed for its defense, but North Korea is now capable of inflicting grave damage on U.S. forces stationed in East Asia and will soon be able to strike the American homeland with nuclear warheads.
Greater burden-sharing efforts by NATO members or South Korea will not change that more important risk-benefit calculation. The American people deserve a far more substantive policy change.

Ted Galen Carpenter, a senior fellow in security studies at the Cato Institute and a senior editor at The American Conservative, is the author of 12 books and more than 800 articles on international affairs.
John Glaser and John Mueller
America’s longest war may be coming to an end. Although major obstacles remain, the Trump administration’s negotiations with the Taliban, led by U.S. special envoy Zalmay Khalilzad, have made progress toward an agreement that would include a U.S. military withdrawal. In July, President Trump said “it’s ridiculous” that we’re still in Afghanistan after almost two decades of stalemate. His 2020 Democratic challengers seem to agree — most have called for an end to the war — and fewer and fewer Republicans are willing to defend it.
But one persistent myth continues to frustrate the political momentum to end the war and may inhibit the impending debate over withdrawal. It is by far the most common justification for remaining in Afghanistan: the fear that, if the Taliban takes over the country, the group will let Al Qaeda reestablish a presence there, leaving the terrorist organization to once again plot attacks on the United States.
Experts have effectively contended that, although 9/11 was substantially plotted in Hamburg, Germany, just about the only reason further attacks like that haven’t taken place is that Al Qaeda needs a bigger territorial base of operations — and that such a base will inevitably be in Afghanistan.
Virtually all promoters of the war in Afghanistan have stressed this notion. Barack Obama applied it throughout his presidency. Gen. David H. Petraeus, who commanded American forces in Afghanistan, recently contended that a U.S. withdrawal is still premature and would risk leaving behind a haven for terrorist groups comparable to the rise of Islamic State following the U.S. withdrawal from Iraq in 2011, according to a Wall Street Journal op-ed he co-wrote.
Trump reflected this thinking as well when he authorized an increase of troops to Afghanistan in his first year in office. His “original instinct,” he noted, was “to pull out,” but his advisers had persuaded him to believe that “a hasty withdrawal would create a vacuum that terrorists … would instantly fill, just as happened before” the Sept. 11 attacks.
This key justification for staying in Afghanistan has gone almost entirely unexamined. It fails in several ways.
To begin with, it is unlikely that a triumphal Taliban would invite back Al Qaeda. Its relationship with the terrorist group has been strained since 1996 when Osama bin Laden showed up with his entourage. The Taliban extended hospitality, but insisted on guarantees that Bin Laden refrain from issuing incendiary messages and from engaging in terrorist activities while in the country. He repeatedly agreed and broke his pledge just as frequently. Veteran foreign correspondent Arnaud de Borchgrave said he was “stunned by the hostility” expressed for Bin Laden during an interview shortly before 9/11 with the top Taliban leader. According to Vahid Brown of the Combating Terrorism Center at West Point, relations between the Taliban and Al Qaeda during this period were “deeply contentious, and threatened by mutual distrust and divergent ambitions.”
Bin Laden’s 9/11 ploy not only shattered the agreement, but brought armed destruction upon his hosts. The last thing the Taliban would want, should it take over Afghanistan, is an active terrorist group continually drawing fire from the outside. Moreover, unlike Al Qaeda, the Taliban has an extremely localized perspective and would be primarily concerned with governing Afghanistan.
In addition, it is not at all clear that Al Qaeda would want to return to a ravaged, impoverished, insecure and factionalized Afghanistan even if it were invited. It’s difficult to see how an Afghan haven would be safer than the one Al Qaeda occupies in neighboring Pakistan.
There is also concern that the small branch of Islamic State in Afghanistan would rise if the Americans withdrew. However, Islamic State has suffered repeated tactical failures, has little to no support from the local population, and the Taliban has actively fought the group on the battlefield in Afghanistan for years, making a Taliban-sponsored safe haven for that group singularly unlikely.
Most importantly, the notion that terrorists need a lot of space and privacy to hatch plots of substantial magnitude in the West has been repeatedly undermined by tragic terrorist attacks in Madrid in 2004, London in 2005, Paris in 2015, and Brussels and Istanbul in 2016. None of the attackers in those incidents operated from a safe haven, nor were their plans coordinated by a group within a safe haven. Al Qaeda Central has not been all that effective since 9/11, but the group’s problems do not stem from failing to have enough territory in which to operate or plan.
Pretending that the Taliban can be defeated, and that an independent and democratic government can be left in its place, is unrealistic. The Taliban may very well make further gains following a U.S. withdrawal, but the myth that territorial safe havens provide great utility to terrorists planning transnational attacks should not continue to justify a war that America cannot win.

John Glaser is director of foreign policy studies at the Cato Institute. John Mueller is a political scientist at Ohio State University and a senior fellow at the Cato Institute.
There are economic storm clouds on the horizon, but for now wages are rising, jobs are plentiful, and poverty is falling. Democrats running for president need an economic line of attack, so the solution has been to focus on wealth inequality. Senator Bernie Sanders claims that there has been a “massive transfer of wealth from the middle class to the top one percent.” Senator Elizabeth Warren lambastes America’s “extreme concentration of wealth.” Even the establishment Joe Biden laments, “This wealth gap that exists in the United States of America is so profound now.”
Wealth inequality has risen in recent years, but by far less than the Democrats and many media articles imply. The scarier claims about inequality usually stem from the flawed data created by French economist Thomas Piketty and his colleagues. More careful studies by other economists and the Federal Reserve Board reveal surprisingly modest changes in wealth inequality given the huge revolutions in globalization and technology that have occurred.
Are increases in wealth inequality the awful thing that Democrats claim? It depends on what causes them. Much of the recent modest rise in wealth inequality stems from innovations in our economy that are pulling everyone up. Brian Acton and Jan Koum, for example, built huge multibillion dollar fortunes by creating WhatsApp, which provides free phone service for 1.5 billion users globally.
Acton and Koum’s success may have increased the wealth owned by the top 1 percent, but their product has created massive consumer value as well. Most of the wealthiest Americans are entrepreneurs who have fueled economic growth, which is clear in examining the Forbes 400 list. Wealth created this way is not the zero-sum struggle that Democrats imagine it is.
That is the good news. The bad news is that the government itself generates wealth inequality in at least two ways that make us worse off. First, governments give subsidies, regulatory preferences, and other crony-capitalist benefits to wealthy insiders. In the recent Fat Leonard scandal, for example, Leonard Francis gained hundreds of millions of dollars of government contracts by cozying up to Navy officers and providing them with gifts, prostitutes, and other favors to get them to do his bidding.
The other way that the government fuels wealth inequality is a deeper scandal. The expansion of social programs over the decades has undermined incentives for lower- and middle-income families to save while reducing their ability to save because of higher taxes. Government programs have displaced or “crowded out” wealth-building by all American families but the richest.
Politicians complain loudly about wealth inequality, but their own policies are generating it. This issue receives too little policy attention, but it is profoundly important and reveals the hypocrisy of the political left.
Many Americans have saved little for retirement because Social Security discourages them from doing so, as does the heavy 12.4 percent wage tax that funds the program. Economist Martin Feldstein found that every dollar increase in Social Security benefits reduces private savings by about 50 cents.
Social Security accounts for a larger share of retirement income for the non-rich than for the rich, so this crowd-out effect increases wealth inequality. In a simulation model, Jagadeesh Gokhale and Laurence Kotlikoff estimated that Social Security raises the share of overall wealth held by the top 1 percent of wealth holders by about 80 percent. This occurs because the program leaves the non-rich with “proportionately less to save, less reason to save, and a larger share of their old-age resources in a nonbequeathable form.”
A study by Baris Kaymak and Markus Poschke built a model of the U.S. economy to estimate the causes of rising wealth inequality. They found that most of the rise in the top 1 percent share of wealth in recent decades was caused by technological changes and wage dispersion, but the expansion of Social Security and Medicare caused about one-quarter of the increase. They concluded that the “redistributive nature of transfer payments was instrumental in curbing wealth accumulation for income groups outside the top 10% and, consequently, amplified wealth concentration in the U.S.”
More government benefits result in less private wealth, especially for the non-rich. It is not just Social Security and Medicare that displace private saving, but also unemployment insurance, welfare, and other social spending. Some social programs have “asset tests” that deliberately discourage saving.
Total federal and state social spending as a share of gross domestic product soared from 6.8 percent in 1970 to 14.3 percent in 2018. That increase in handouts occurred over the same period that wealth inequality appears to have increased. Generations of Americans have grown up assuming that the government will take care of them when they are sick, unemployed, and retired, so they put too little money aside for future expenses.
Cross-country studies support these conclusions. A 2015 study by Pirmin Fessler and Martin Schurz examined European data and found that “inequality of wealth is higher in countries with a relatively more developed welfare state … given an increase of welfare state expenditure, wealth inequality measured by standard relative inequality measures, such as the Gini coefficient, will increase.”
A study by Credit Suisse found: “Strong social security programs — good public pensions, free higher education or generous student loans, unemployment and health insurance — can greatly reduce the need for personal financial assets. … This is one explanation for the high level of wealth inequality we identify in Denmark, Norway and Sweden: the top groups continue to accumulate for business and investment purposes, while the middle and lower classes have a less pressing need for personal saving.”
That is why it is absurd for politicians such as Sanders and Warren to decry wealth inequality and then turn around and demand European-style expansions in our social programs. The bigger our welfare state, the more wealth inequality we will have.
The solution is to transition to savings-based social programs. Numerous countries have Social Security systems based on private savings accounts. Chile has unemployment-insurance savings accounts. Martin Feldstein proposed a savings-based approach to Medicare. The assets in such savings accounts would be inheritable, unlike the benefits from current U.S. social programs.
Sanders and Warren are right to criticize crony capitalism as a cause of wealth inequality. But their big-government approaches to social policy would have the opposite effect on wealth inequality from the one they intend.

Chris Edwards is an economist at the Cato Institute. He recently testified before the House Oversight and Reform Committee about the USPS’s dire financial condition.