Sean H Vanatta at Aeon/Psyche: The American economy has always relied on household borrowing. Since before the founding, the colonies had been ever short of metallic currency. Our 18th-century forebears substituted credit for cash. They bought goods ‘on account’, borrowing to buy time until the harvest came in or some other windfall enabled them to repay what they owed.
The 19th-century shift from agriculture to industry brought many American workers predictable wages and fixed salaries. Industrial businesses – selling sewing machines, pianos, home appliances, and especially automobiles – developed novel credit arrangements to transform steady paychecks into steady repayments. Instalment credit enabled consumers to purchase expensive durable goods with a small down payment, followed by weekly or monthly payments thereafter.
In cities, department stores refined another form of borrowing: the charge account. These accounts granted affluent consumers a fixed line of credit, which they repaid monthly without paying interest. Like instalment credit, charge accounts existed to sell goods rather than to generate profits from lending as such. Charge accounts made credit convenient, encouraging consumers to buy more.
Convenience came in part through a new link between credit and identification media. Stores issued charge tokens and later charge plates – fobs and metal cards that carried consumer account information – granting affluent consumers the prestige of recognition in cities full of strangers.
Mass consumer credit greased the wheels of mass production. In the early 20th century, proponents praised a virtuous lending cycle: credit generated consumer demand, which encouraged industrial investment, which led to economies of scale, lower costs, and more industrial work, which in turn encouraged further consumer demand. Critics worried that consumers, having committed future income to present consumption, would have no future buying power to turn the wheel the next cycle, or the next. ‘Larger and larger doses of the stimulant must be injected merely to prevent a relapse,’ two prominent critics warned in 1926.
The Great Depression ended the debate. The 1929 stock market crash stalled credit buying. Consumers worried. They waited. They postponed credit purchases – a month, two months, three. Individual delays, in the aggregate, froze the economy. Without credit purchases, factories had fewer orders. With fewer orders, factories idled and laid off workers. Unemployed workers cut spending further. They did not borrow to buy. They did not buy at all. The virtuous credit circle that turned in the 1920s shuddered and stopped in the 1930s.
Policymakers took an unexpected lesson from this experience: the United States’ industrial capacity had been built to run on a steady fuel of consumer borrowing. If private lenders would not provide that fuel, New Dealers reasoned, the federal government should.
The New Deal has many conflicting legacies but, from that point forward, federal policy unambiguously supported a political economy with household borrowing at the centre. Federal lending programmes legitimised credit buying by aligning it with national economic priorities of stable employment and steady growth.
Those national priorities changed course when the nation shifted from recovery to warmaking during the Second World War. Policymakers wanted consumers to save, not spend, a policy the US Federal Reserve pursued through firm controls on consumer credit. Government controls encouraged credit innovation, first to circumvent the rules, then to comply with them.
Policymakers initially targeted instalment credit, which consumers used to buy durable goods like cars and home appliances. Retailers, still eager to generate sales, modified their unregulated charge account plans to enable consumers to pay over longer periods of time. Charge accounts gained the now-familiar 30-day interest-free period, with interest charged monthly on the remaining balance.
Fed officials clamped down on regulatory avoidance, prompting businesses to invest in information technology that would help them stay within the rules. Retailers needed to maintain thorough records of their customers’ credit activity and to halt credit sales that violated Fed regulations. Large department stores adopted card-based accounting systems, called Charga-Plate, that streamlined the links between their sales floors and credit departments. These systems solidified the connection between metallic identification cards and retail credit: the credit card was born.
Federal policymakers restrained consumer spending during wartime on the explicit promise that the postwar years would bring unprecedented abundance. Department stores were apostles of this future, and they entered the postwar era with a new credit product to draw in customers. Credit cards were a key feature of department store expansion in the 1940s and ’50s, out of city centres and into the growing suburbs.
Other businesses also made credit cards central to their postwar plans. Gasoline companies, like Standard Oil of New Jersey, had developed nationwide charge account networks linking service stations in the years before the war. Wartime rationing halted credit sales. But in the late 1940s, service stations heavily promoted gasoline credit cards. Railroads, too, rolled out unified, card-based credit plans.
These travel cards set the stage for ‘universal’ travel-and-entertainment cards. Department store cards, offered by firms like Macy’s or Gimbels, were store specific. Gasoline and rail cards linked independent businesses within the travel industry under a unified credit plan. The watershed came in 1950, when Frank McNamara introduced the Diners Club card to executives in New York City. The name was self-explanatory. The card allowed executives to wine and dine clients at restaurants and clubs, first in New York and soon around the country. The plan quickly expanded to include the full suite of travel and entertainment services. The card was, in its own estimation, the ‘Indispensable New CONVENIENCE for the Executive – the Salesman – the man who gets around!’
Bankers came to universal cards by another road. In the 1950s, most US banks were small, local institutions. Financial regulations imposed during the New Deal bound banks in a web of rules that constrained their growth and profitability. Banks primarily served business customers, but many were eager to find new products and services to expand beyond the era’s tight regulatory limits.
The postwar growth of department store chains provided an opportunity. Department stores offered credit cards. Small retailers did not. In the early 1950s, a cohort of bankers in cities and towns across the country began experimenting with local card plans that linked small retailers into local credit networks. Although the plans were modest, bankers saw opportunity. Banks ‘should be the reservoirs for every type of credit in their communities,’ a Virginia banker observed in 1953, predicting that ‘banks may be handling the bulk, maybe all, charge account financing’ in the near future.
The scale of the bank credit-card market changed in the late 1950s when California’s Bank of America launched its BankAmericard plan. Unlike most US banks, the Bank of America was big. By 1958, it had more than 800 branches across the Golden State. Unlike most US banks, Bank of America also focused on consumer lending, financing the home mortgages and auto loans that made California suburban. To manage millions of consumer accounts, Bank of America invested in information processing technology. The bank was the first to adapt mainframe computers to banking in the 1950s, and executives believed that, with computers, a state-wide credit card could be possible – even profitable.
Bank of America’s executives recognised a fundamental challenge that confronted all universal credit-card plans: the bank needed to recruit enough merchant and consumer participants to make the card plan worthwhile to each group. Bankers had initially solved this problem by signing up merchants first, and then relying on merchants to solicit cardholders among their existing customers. Bank of America started from the other end. The bank had a large customer base. If it recruited cardholders first, executives reasoned, card-carrying consumers would draw merchants into the plan.
Beginning in Fresno in September 1958, Bank of America mailed unsolicited BankAmericards to its consumer accountholders, ultimately sending more than 2 million cards across California in the first few years. The cards came unexpectedly. They were also special: for the first time, the credit cards were not metal or cardboard, but embossed plastic, an innovation that added gee-whiz novelty to the bank’s card plan. The strategy proved expensive, reckless and effective. Bank of America gained a huge cardholder base, but at the cost of massive delinquencies and fraud losses.
By the early 1960s, department store and travel cards were well rooted in American wallets, but it was not yet clear that bank cards would succeed. Chase Manhattan, the nation’s second-largest bank, abandoned its credit card experiment in 1962, after less than four years of trying. Other banks in the US were struggling too. Strict regulations ensured that, although banks were safe – very few banks failed in the postwar years – they were not very profitable. By the late 1960s, bankers increasingly saw credit cards, which combined innovative information technology with access to affluent consumer markets, as the road to the future – the key to innovating around the restrictive financial rules.
More here.