
(A Shorter Workweek in the 1980s)





During the Great Depression, the American people experienced mass unemployment running as high as 25% over a period of more than ten years. This was a time of drastic experimentation as the federal government assumed active responsibility for handling the unemployment problem. Policies and programs conceived during the Depression remain at the foundation of our national economic system. That experience shaped our thinking about unemployment and other problems. Ultimately, however, it was World War II, rather than any program developed by economists, that brought Depression-era joblessness to an end.

War is an effective manpower policy. A war, by conscripting young men into military service, soaks up unemployed people and gives them a useful and necessary, though destructive, job to perform. At the same time, it can lay the foundation for post-war prosperity in the work of rebuilding devastated factories and cities and in satisfying the unspent purchasing power of wartime workers. The offsetting cost of deficit financing seems very much a secondary consideration amid patriotic appeals to invest in U.S. savings bonds, win the war, and bring the boys back home.

World War II was a blockbuster of a war and quite a cure for unemployment. Figure 1-1 indicates the extent of the manpower shifts between 1939 and 1947. A great build-up took place in military personnel and in defense-related employment between 1939 and 1945. More than 11 million Americans were added to the armed forces and 7 million to non-military employment during this time. More than 6.8 million of the nation’s unemployed thus found jobs between 1939 and 1942, and an additional 1.6 million people between 1942 and 1945. The remaining wartime workers were drawn mainly from the category, “not in the labor force” - mainly women who became employed in war-production industries - and from an expanding adult population.

Between 1945 and 1947, 9.8 million persons left military service. Unemployment, however, rose by only 1.1 million people. Most of the returning veterans found jobs in private nonagricultural industries as the post-war economy expanded to meet pent-up consumer demand and many married women vacated jobs to become housewives again.

figure 1-1
Changes in Employment Status of Americans, 1939 to 1947

Unlike after World War I, there was no post-war surge in unemployment following World War II. Strong consumer demand was certainly one factor. Another was that the statutory 40-hour workweek was now in effect in manufacturing and other goods-producing industries. Although it is true that the Fair Labor Standards Act of 1938 did not cause hours to be reduced in most industries - because of the Depression, the average manufacturing workweek was already below 40 hours - this legislation did succeed in limiting the rise in hours after the war so that more people were able to “share the prosperity”.

During the Depression, economists had learned to associate unemployment with declines in the business cycle. This affected their attitude towards the shorter workweek as a policy recommendation. Shorter hours are admittedly not the best remedy for cyclical unemployment but, rather, are designed to offset labor displacement as labor-saving technology is applied. Therefore, post-Depression economists have commonly rejected the shorter-workweek approach to solving unemployment, labeling this a “fallacy”. Some have compared its job-creation claims to “carving a (same-sized or smaller) pie into more pieces.” Instead, these economists turned to Keynesian economics, having seen in World War II the power of deficit spending.

Prior to the Great Depression, reduced work time was accepted as a legitimate tool for controlling unemployment. The average workweek of U.S. workers declined by approximately 10 hours during the first three decades of the 20th century. Six-day weeks and ten-hour days were prevalent at the turn of the century. Gradually, more workers won the eight-hour day. With the Depression, though, a precipitous decline in hours took place. A bill was introduced by Senator Hugo Black of Alabama during Roosevelt’s “first hundred days” to establish a national 30-hour workweek. This bill passed the U.S. Senate by a comfortable margin in April 1933. However, the House rules committee buried it because of opposition from the new administration.

Since World War II, economists have recommended various policies with respect to the unemployment problem. The major avenues of approach may be summarized as follows:

1. Ignore or minimize the problem.

2. Establish a goal of reaching full employment and develop better techniques of measuring and reporting unemployment.

3. Make unemployment financially less painful.

4. Adopt monetary and fiscal policies that apply countercyclical stimulus to expand economic output.

5. Restrain labor-force participation through prolonged education, liberalized disability, and earlier or more widespread retirement.

6. Combat structural unemployment by job-training or retraining programs.

7. Create jobs in the public sector.

8. Redistribute the burden of unemployment in politically or socially more acceptable ways.

9. Reduce the hours of work.

Let us consider what has been done in each area since the end of World War II.



This approach has more support than one might expect. People do not want to worry about the unemployment problem: they want to think positively. Until unemployment strikes them or a family member, it is convenient to regard unemployed people as social outcasts or, at least, “some other kind of person”. For instance: the unemployed are mostly minority teenagers heavily into drugs or crime; many of them do not want to work, preferring to live off welfare or their parents. Another argument is that, because unemployment is higher among women and teenagers, a high rate of unemployment does not necessarily indicate economic hardship; such persons are looking for work merely to supplement the family income.

Another view holds that rising unemployment is inevitable, even desirable, as a means of disciplining ourselves against chronic inflation. Because only a small percentage of the population is unemployed, it is politically more popular to fight inflation, which affects everyone, even if this preference costs some people their jobs. Others simply believe we must get used to continuing high levels of unemployment as an unfortunate fact of life in this post-industrial age. Still others contend that “anyone who really wants to work can find a job. Just look at all the help-wanted ads. But most unemployed people want to start at the top.”



The first step in solving a problem is to recognize that the problem exists, to study it, and to make a commitment to its solution. The federal government has twice made major commitments to policies of full employment and minimal unemployment. The Employment Act of 1946 made the initial commitment. Although it did not prescribe the techniques, this legislation created the President’s Council of Economic Advisers and required the President to report annually on the state of the economy, so that Congress and the people could judge how well each administration had succeeded in controlling unemployment.

Over the years, the definition of “full employment” has tended to encompass ever higher rates of unemployment. In the early 1950s, a 2.9% rate of unemployment was considered to be full employment. During the 1960s, a 4% unemployment rate was thought to be an acceptable level. By the 1970s, officials in the Nixon and Ford administrations were talking about 5% to 5.5% as a target. Today, many economists believe that 6% unemployment may be the best which the economy can do without aggravating inflation. Professors Jeffrey Perloff and Michael Wachter of the University of Pennsylvania argue that a 6.3% rate represents “the sustainable or nonaccelerating inflation rate of unemployment ... using general monetary and fiscal policy.”

In the face of deteriorating standards and growing public apathy, Congress enacted and President Carter signed the Humphrey-Hawkins bill, which became the “Full Employment and Balanced Growth Act of 1978”. This law requires the federal government to adopt economic policies which would reduce the unemployment rate to 4% by 1983 - 3% for adult workers, 20 years and older - and at the same time hold inflation to a 3% rate of increase each year. The Full Employment and Balanced Growth Act calls for government to develop a plan for accomplishing these goals. It adds new reporting requirements for the President and the Federal Reserve Board but otherwise does not specify how the twin objectives of full employment and slow inflation are to be achieved.

The Bureau of Labor Statistics, a division of the U.S. Department of Labor, has the responsibility of calculating the nation’s unemployment rate through a monthly survey conducted on its behalf by the Census Bureau. Because of its political and economic importance, this statistic has received much media attention. Various affected groups have criticized the method of calculating it. Several Presidential commissions have been organized to find ways of improving the definitions. The Gordon Committee, summoned by President Kennedy, made certain recommendations which went into effect in January 1967. Other definitional changes were introduced in January 1970. More recently the National Commission on Employment and Unemployment Statistics, chaired by Professor Sar A. Levitan of George Washington University, has again made recommendations for revised definitions that would affect the unemployment rate. Some of the proposed changes would raise the reported rate while others would lower it. Better definitions are considered a step forward in dealing with unemployment.



Pending better solutions, one strategy for dealing with high unemployment is to develop and improve certain temporizing measures to relieve the financial pain. The Social Security Act created a system of unemployment insurance which pays weekly benefits, for up to a specified number of weeks, to qualifying persons who have lost their jobs. The benefits are financed by a tax on employers’ payrolls which is collected and administered by state agencies.

During the 1974-75 recession, benefit levels in many of the states were increased. The Ford Administration offered extensions in the period of benefit payments to help the long-term unemployed and federal assistance in handling the additional cost. During that period, Unemployment Insurance benefits ran as high as $20 billion a year. By late 1978, the state and federal funds had accumulated a combined deficit of $12.4 billion. A federal commission recommended that the U.S. Treasury rather than employers assume the cost of this obligation.

The Unemployment Insurance system has been criticized for creating reverse incentives. The General Accounting Office issued a report in April 1979 confirming the popular view that unemployment benefits effectively deter unemployed workers from seeking work. Originally the benefits were set to replace 50% of a worker’s previous earnings up to a maximum benefit. However, because the benefits are often exempt from federal and state income taxes and from Social Security taxes, which have increased sharply in recent years, the GAO found that, on the average, unemployment-insurance payments replaced 64% of the average take-home pay, not considering work-related expenses. A study in Pennsylvania concluded that a $15 increase in weekly benefits had the effect of raising the unemployment rate by 0.6 percentage points. Another study, conducted by New York State, found that employers were paying 40% more than they should for Unemployment Insurance due to sloppy processing of claims.

Besides Unemployment Insurance, the federal government has enacted special programs to soften the displacement of workers in certain industries or help those whose jobs were lost through foreign trade. The Regional Rail Reorganization Act of 1973, which created Conrail, provided that certain employees who were laid off or demoted to lower-paying positions as a result of the reorganization should receive monthly benefits equal to their average monthly earnings from railroad work in the previous year. Similar arrangements have been made to protect employees in the airline industry who were threatened by loss of a job from CAB-approved mergers. Another federal program, created by the Trade Expansion Act of 1962 and the Trade Act of 1974, gave unemployed workers in certain industries hurt by increased foreign imports a “trade adjustment allowance”, which, together with Unemployment Insurance, replaced 70% of their previous weekly earnings for up to 52 weeks.

In addition, unions have sometimes negotiated contracts with employers for severance pay or supplemental unemployment benefits (SUB) to be paid to discharged workers. A BLS survey in 1975 found that 38% of the workers covered by union contracts were eligible for severance pay, and 28% were eligible for SUB.



Since its inception in the 1940s, the President’s Council of Economic Advisers has relied heavily upon monetary and fiscal policies to fight unemployment. The unemployment problem was believed to be the product of insufficient demand for labor during low points in the business cycle. The strategy, therefore, was to expand the volume of economic activity. In layman’s terms, it was necessary to “increase the size of the economic pie.”

Fiscal policy, associated with economist John Maynard Keynes, recommended that government pursue deficit financing during times of recession. These budget deficits would help to create jobs when they were most needed. Later, in more prosperous periods, government could tighten up on expenditures to produce a budget surplus. Thus, the government’s budget would be balanced over the business cycle rather than in each year. In practice, Keynesian fiscal policy has produced jobs through WPA-type employment projects and accelerated expenditures for public works as well as through timely tax cuts to strengthen purchasing power in the private sector. Government’s system of progressive taxation combined with transfer payments to the needy has created “built-in stabilizers” that apply fiscal stimulus as needed.

Monetary policy, likewise directed toward the business cycle, recommended appropriate control over the money supply. Associated with Professor Milton Friedman, this strategy has worked to reduce unemployment by permitting greater borrowing for consumer or business expenditures. An expansionist monetary policy would mean that during recessions the Federal Reserve Board would lower the discount rate or relax its requirements for deposits from member banks, or else engage in appropriate open-market activities, so that more money would become available to the economy at more reasonable rates of interest. If inflationary pressures became too severe, it would move in the opposite direction.

The heyday of Keynesian fiscal policy occurred, perhaps, during the Kennedy and Johnson administrations. Inheriting a mild recession from the Eisenhower administration, President Kennedy’s economic advisers, principally Professor Walter Heller, argued that the recession was caused by “fiscal drag” as the progressive income taxes absorbed a rising share of national income. Therefore, the remedy was a substantial tax cut to restore lost purchasing power. Almost $13 billion were returned to the taxpayers in this manner starting in 1964. Sure enough, by the end of the decade unemployment had been reduced to a mere 3.3% - proof that these policies were working.

During the 1960s, economic growth was deemed essential to meet the increasing Soviet challenge. Tax cuts and incentives combined with regular budget deficits pushed GNP (in constant 1958 dollars) from $487.7 billion in 1960 to $725.6 billion in 1969. Government’s share of this “pie” increased from 19.5% to 20.1%. The Johnson administration confidently announced a “guns and butter” policy, supporting, on the one hand, the Vietnam war and, on the other, expanded domestic welfare programs. The “bigger pie” turned out to be less nourishing than it first seemed. Our outlay for “guns” led to an ill-advised preoccupation with logistics in South Vietnam. Meanwhile, the “butter” programs were creating a welfare epidemic. Inflation escalated to a higher level in the following decade.

Lord Keynes had recommended that government balance its budget over the entire business cycle. Unfortunately, it was expedient to vote budgetary deficits during recessions but not the surpluses in times of prosperity which were needed to restore fiscal integrity. During the past half century the U.S. government has balanced its annual budget only eight times: in 1947, 1948, 1949, 1951, 1956, 1957, 1960, and 1969. During the past five years, the budget has been running deficits in the $50 to $100 billion range. Double-digit inflation has appeared for the first time since the late 1940s.

The recurring budget deficits have played havoc with the economy. In 1978, the national debt reached $771.5 billion. Government’s annual interest expenditures totaled $48.7 billion. Its borrowing to finance or refinance the debt was crowding out funds needed in industry. The National Taxpayers Union reported in April 1979 that government at all levels was obligated for a total of $9 trillion - mainly for Social Security, public-employee pensions, and business guarantees. This works out to $113,000 per taxpayer. In the same year, the Joint Economic Committee of Congress, chaired by Senator Lloyd Bentsen of Texas, issued its first unanimous report in 20 years, recommending “a tightening of fiscal policy.” Chairman Bentsen said this put an end to 30 years of preoccupation “with the problem of maintaining an adequate level of demand in the economy.”

Timing is the essence of monetary and fiscal policy. With deficit spending, the “pie” does not become permanently larger, but prematurely larger. Economic stimulus which is applied at one point in the business cycle is not available at another. A dollar spent for accelerated public works must at some time be refinanced or repaid.

Of course, there can be certain advantages for politicians to have the economy perking at one time rather than at another. That is, perhaps, what has made economists so valuable to holders of public office. Given the power to control the timing of stimulus, U.S. Presidents predictably have picked the time right before their own reelections. If the economy went to pieces in the off-year, they could count on the voters to have short memories.

According to a book by Professor Edward Tufte of Yale entitled “Political Control of the Economy”, all post-war Presidents except Eisenhower have engaged in this questionable practice. The Council of Economic Advisers, set up to ensure full employment, has helped some of our recent Presidents to synchronize the business cycle with the quadrennial elections. “Between 1946 and 1976,” notes a review of Tufte’s book, “the income of Americans increased most (by 3.4%) in those years in which an incumbent President was running again, less (2.6%) during congressional election years, less still in presidential years when the sitting President was not running (2%), and least of all (1.5%) in odd-numbered years without elections of any sort.”

Presidents Nixon and Johnson were, perhaps, the worst offenders. Nixon saw to it, for instance, that Social Security benefits were increased by 20% in October 1972, while pressure was put on the Federal Reserve Board to increase the money supply. He was determined not to repeat the experience of 1960, when a recession might well have cost him the election.

The monetary and fiscal policies of government no longer seem a viable means of reducing unemployment because of equally serious problems with inflation. According to the “Phillips curve” analysis, there is a trade-off between the two objectives of reducing unemployment and inflation. With monetary and fiscal policy, government can either tighten or loosen the spigot of spending and money supply. To turn it one way helps to reduce unemployment but fuels inflation. To turn it the other way controls inflation but throws people out of work. To pursue a “balanced” policy is essentially to do nothing.

Which way to turn the spigot is a political decision. During the past few years, the anti-inflation forces appear to have gained the upper hand. Some economists, not wishing to seem too hard-hearted, have argued that persistent inflation is the real cause of unemployment. Therefore, it may be advisable to take a mild recession now to avoid worse unemployment at a later time. While he was director of the Council on Wage and Price Stability, Barry Bosworth estimated that “for every percentage point shaved from the inflation rate through such policies (restrictive monetary and fiscal policies), an additional one million people would have to be tossed out of work for two years.”

Beryl Sprinkel, chief economist and vice-president of Chicago’s Harris Trust and Savings Bank, was quoted in the Wall Street Journal: “We’re going to have to have restrained policies for several years with unemployment running in the 8% to 9% range” to bring down the rate of inflation to 6% to 7% annually. In other words, the unemployed are being eyed by influential economists as this nation’s prime inflation fighters; they are being asked to sacrifice their means of livelihood so that the incomes of other, more affluent persons can stretch further.

During his 1976 campaign for the Presidency, Jimmy Carter promised to deal effectively with both inflation and unemployment. The Humphrey-Hawkins legislation likewise prescribes improvement in both areas. By conventional monetary and fiscal theory, such promises are impossible to keep. Not surprisingly, the public gave President Carter low marks for his handling of the economy. A column in the Times of London commented on this situation: “To be fair to the President, the confusion in the apparent actions of the administration is no more than a reflection of the confusion, many would say the bankruptcy of ideas, which pervades the economic establishment of the US as of all other developed industrial nations.”


In the post-war period, the federal government has actively encouraged workers to withdraw from the labor force or prospective workers to delay entrance into the labor force by various direct and indirect disincentives. Such persons who are classified “not in the labor force” are therefore not counted among the unemployed. Three institutions bear the stamp of this policy: education, medical disability, and retirement.

Between 1967 and 1979, the number of persons “not in the labor force” who were classified as students rose from 6,745,000 to 7,392,000, an increase of 9.6%. In the same period, the number of disabled nonparticipants rose from 4,509,000 to 5,274,000, an increase of 17.0%. The number of retired workers rose from 5,313,000 in 1967 to 9,935,000 in 1979, an increase of 87.0%. In the other major category of nonparticipation, “household responsibilities”, there was a decline of 7.2%, from 32,564,000 to 30,234,000, reflecting the increasing participation of married women in the work force.
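The percentage changes quoted above can be recomputed directly from the counts in the paragraph. A quick sketch (figures taken from the text):

```python
# Changes in "not in the labor force" categories, 1967 to 1979,
# using the counts cited in the text.
counts_1967_1979 = {
    "students": (6_745_000, 7_392_000),
    "disabled": (4_509_000, 5_274_000),
    "retired": (5_313_000, 9_935_000),
    "household responsibilities": (32_564_000, 30_234_000),
}

for category, (start, end) in counts_1967_1979.items():
    pct_change = (end - start) / start * 100
    print(f"{category}: {pct_change:+.1f}%")
```

Each computed change matches the figure cited: +9.6%, +17.0%, +87.0%, and -7.2%, respectively.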

The decision of married women to seek employment does not reflect deliberate government policy, except insofar as government may be largely responsible for the inflation and high taxes which have eroded their husbands’ paychecks. With respect to education, however, it is clear that public subsidies - to educational institutions, scholarship aid and student loans, government research grants, and so forth - have enabled young people to stay in school for longer periods.

In the early 1960s, some government officials frankly recommended this approach to solving unemployment. Willard Wirtz, the Secretary of Labor, proposed that the age of compulsory education be raised from 16 to 18 to relieve pressure on the job market. Secretary of Defense Robert McNamara hailed the educational potential of military service. During the 1960s, the median number of years which Americans 25 and older had spent in school increased from 10.6 years to 12.2 years.

While today to propose more education as a cure for youth unemployment might be regarded as cynical, that, in effect, is what has happened in the case of minority youth. Professor Eli Ginzberg pointed out in a recent article in Scientific American that “in 1977 a larger proportion of blacks than whites aged 18 to 24 were enrolled in school. No other social indicator linked to family income demonstrates a more favorable condition for blacks than for whites ... There is some support in these data for the view that, confronted with poor job prospects, more blacks prolong their education.” Unemployment for blacks 16 to 24 averaged 33% in 1977, compared with 11% for whites the same age.

In the case of medical disability and retirement, it is more obvious that public policy deliberately intends for these workers to withdraw from the labor force. The Social Security system provides the financial incentive for doing so. Social Security has two trust funds: a retirement fund which pays monthly benefits to retired workers, dependents, and survivors, and a disability-insurance fund.

Figure 1-2 documents the spectacular growth of these expenditures. The retirement fund, started in 1939, mailed out checks totaling $15.8 million to an average of 222,000 persons in 1940. By 1979, there were 30,348,000 persons receiving $87.6 billion a year, for an average monthly benefit of $240.52. The disability-insurance fund, though smaller, has followed no less ambitious a course of expansion. Begun in 1957, it was disbursing $528 million a year to 687,000 recipients in 1960 and $13.4 billion a year to 4,777,000 recipients by 1979, for an average monthly benefit of $234.25.
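The average monthly benefits cited follow, to within rounding of the annual totals, from dividing annual benefits by the number of recipients and by twelve. A quick sketch (figures taken from the text):

```python
# Average monthly Social Security benefits in 1979, derived from the
# annual benefit totals and recipient counts cited in the text.
def avg_monthly_benefit(annual_total, recipients):
    return annual_total / recipients / 12

retirement = avg_monthly_benefit(87.6e9, 30_348_000)  # text cites $240.52
disability = avg_monthly_benefit(13.4e9, 4_777_000)   # text cites $234.25

print(f"retirement: ${retirement:.2f}/month")
print(f"disability: ${disability:.2f}/month")
```

Because the annual totals are rounded to the nearest $0.1 billion, the computed averages agree with the cited figures only to within about fifty cents.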

figure 1-2
Annual Benefit Payments and Average Number of Recipients in Social Security Programs, 1920 to 1979
(two panels: annual benefits and average number of recipients, each showing the old-age & survivors and disability programs)


In 1978, besides Social Security, the federal government disbursed $10.6 billion for Civil Service pensions and $9.2 billion for military retirement benefits, $10.2 billion for veterans’ pensions and disability payments, $4.0 billion for railroad retirement, $10.9 billion for public assistance, and $1.2 billion for other cash-assistance programs. Food stamps came to an additional $5.1 billion, Medicare to $17.5 billion, Medicaid to $10.7 billion, and supplementary medical insurance to $7.1 billion in 1978. These were the federal outlays for income-security programs. In addition, there were the public-employee retirement programs of state and local governments, Workmen’s Compensation, etc., and similar expenditures in the private sector for health insurance, pensions, and related purposes.

Two themes run through these expenditures: subsidized medicine and subsidized leisure. In one sense, such programs represent the decision of a compassionate society that ill persons are entitled to medical care regardless of ability to pay and that society should support persons who are too old or too sick to work. In another sense, though, they represent subsidies to a particular group of people at the expense of working taxpayers.

In the case of retirement, someone apparently made the decision to give leisure in a large dose to part of the population, elderly workers, rather than to grant more free time to workers across the board in the form of shorter workweeks or longer vacations. It is hard to pinpoint when and where such a decision was made, if there was a specific occasion. The rationale for expanding retirement programs cannot be that in most cases such workers are truly too old to work: the age of retirement has been lowered even as average life expectancies have increased and better medical care has extended people’s years of activity. Rather it would seem that financial considerations have played a part in the decision.

In the case of medical disability, the rapid increase in the number of persons who are receiving disability payments cannot be because so many more workers today are suffering injuries or illness. Disability, too, has its “leisure” component. An article in the Wall Street Journal reports: “The younger workers can actually make more from disability than they earned at work - and the program lacks incentives for people to return to work. ‘I know I’m not 100% disabled,’ a Daytona, Fla., nurse says with evident disgust as she fills out an application at her local Social Security office. But, she says, she can’t afford to get off disability. ‘The system is crazy. I’m ripping off the taxpayers.’”

Starting with those people who are indeed too old or too sick to work, the Social Security program has liberalized eligibility and improved the level of benefits, with the result that many younger and healthier persons have withdrawn from the work force to participate in its programs. This helps to keep the unemployment rate down, of course. The problem is that, besides undermining worker morale, the system is running into financial difficulties. It is estimated that under its front-ended payment schedule, indexed for inflation, Social Security gives the average recipient five times as much in benefits as what he or she paid in during working years. Some workers who were not covered in their regular job have managed a higher return than that by putting in the required time in self-employment and thereby qualifying for the program’s generous minimum monthly benefit. Because of these welfare-like features, estimates of Social Security’s actuarial deficit run as high as $4 trillion.

A Lou Harris poll conducted in 1979 found that 42% of those surveyed had “hardly any confidence at all” that they would receive their promised benefits under Social Security. Alice Rivlin, director of the Congressional Budget Office, has warned that “there could be a significant deterioration in the financial soundness of the Social Security system during the next five years.” In 1979, the OASDHI trust fund had assets equal to 34% of its annual expenditure for benefits; Rivlin predicted that this would drop to between 5.4% and 7.7% by October 1983, which would be “insufficient to maintain the cash flow of the program.” If Social Security goes, so go many of the other public and private retirement programs whose benefits are coordinated with it.

At the start of the 1980s, it has become clear that education is not a panacea for solving all the nation’s economic and social problems. In fact, it has created problems of its own. Peter Drucker observes: “Of the people who will reach retirement age in the next few years, fewer than a quarter will have completed high school. The majority went to work after no more than six or, at most, eight years of formal schooling. But of the young people entering the labor force, half will have been educated beyond high school and will therefore be unwilling to take traditional manual jobs. Indeed, they will be effectively disqualified from such jobs.”

Increasingly, young men and women are deciding not to go to college, discouraged by higher tuitions and the lack of job opportunities for graduates. Even Clark Kerr, president of the University of California in its heyday, has admitted that the great emphasis upon education during the 1960s produced “tragic results” from an employment standpoint.

And now it appears as well that the retirement boom of the 1970s may be on the way out. Spurred by the fears and complaints of older Americans, the 1978 amendments to the Age Discrimination in Employment Act were passed, which prohibit mandatory retirement of workers in private industry before age 70 (with some exceptions) and of federal employees at any age. Early reports indicated that few workers would take advantage of their new option. However, a more recent poll by Lou Harris found that as many as 51% of those surveyed intended to continue working, either full time or part time, beyond the age of 65.

It is now recognized that our retirement system discriminates against women and racial minorities, who are less likely than white males to have remained in a job for the required period. The promise of a pension gives employers undue leverage against older employees and restricts labor mobility. Its accumulation of money seems to invite manipulation, fraud, and abuse. For the participants themselves, abrupt retirement can bring emotional difficulties, not to mention the financial problem of trying to live on a reduced income. Often turning older people into welfare clients, this approach has made leisure a terminal condition.



One theory has it that there is no overall shortage of jobs. The jobs exist but unemployed people are not occupationally or personally equipped to handle them. Frequently, manpower experts point to the volume of help-wanted ads that indicates a shortage of qualified workers. They argue that, if restrictions were placed upon working hours, severe labor shortages might develop in particular industries or occupations which would stunt the economy.

Such is the theory of “structural unemployment”. Mainly, this theory argues that people are unemployed because they lack the necessary job skills. Or, it may be that they live in geographical areas where industry is declining while employment in other parts of the country is expanding. Or, perhaps, the unemployed do not know of existing job opportunities. Or, it may be that they have lost the habit of working or the desire to do so. At any rate, the cure for structural unemployment would be a better matching of jobs to the available workers, and vice versa, rather than adjustments in the labor supply.

“Structural unemployment,” wrote Professor Charles Killingsworth of Michigan State University, “is joblessness - usually long-term - which results from basic changes in the economic structure: new technology, the decline of some industries, permanent changes in consumer tastes, changes in labor-force characteristics, and so on.” Being a “structuralist” himself, Professor Killingsworth felt that the Keynesian expansionary policies of the early 1960s were an inefficient and unnecessarily inflationary means of reducing unemployment.

The structuralist viewpoint had influential supporters. On January 11, 1962, the President’s Advisory Committee on Labor-Management Policy - a blue-ribbon panel representing business, labor, and government - issued its report on automation. This report endorsed current efforts to accelerate economic growth and rejected the idea of solving unemployment through shorter hours. However, the main emphasis was placed upon treating “structural unemployment.”

Among the report’s recommendations were the following:

2. Acceptance by government agencies of the responsibility for collecting, collating, and disseminating information with respect to present and future job opportunities and requirements in a rapidly changing society.

3. Cooperation between government and private organizations in the field of education in improving and supporting educational facilities to the end that: (a) new entrants to the labor force will be better qualified to meet the occupational demands of the future; (b) the drop-out rate at grade and high school levels will be reduced; (c) better vocational, technical, and guidance programs will be available; (d) rural and depressed areas where surplus workers reside will be better served; (e) financial support will be available for deserving and needy students; and (f) there will be a general upgrading in the quality of our education ...

5. Support from both public and private organizations for retraining of workers who have and will be displaced ...

8. Vast additional improvements of the public employment service so that it can effectively place, counsel, and relocate workers both locally and across state lines.

In 1963, appropriations under the Manpower Development and Training Act of 1962 amounted to $130 million. Its program, which established “skill centers” to train and counsel disadvantaged persons in the cities, was costing $338 million a year by 1971. A similar program called the “Concentrated Employment Program” (CEP), begun in 1967, operated job-training centers in 80 inner-city slums and cost $158 million a year by 1971. In January 1968, President Johnson proposed a joint venture between government and private industry for training and employing persons from a disadvantaged background. Known as “Job Opportunities in the Business Sector” (JOBS), this project was operated by the National Alliance of Businessmen (NAB) but cost the taxpayers $177 million in 1971. Another one, called the “Work Incentive Program” (WIN), was set up to train and motivate welfare recipients to seek employment. There were also special programs to train inner-city youth, Vietnam veterans, displaced aircraft engineers, blacks and Chicanos, and Appalachian whites. By December 1972, more than 8 million persons had passed through these various job-training programs, which had cost the taxpayers $19 billion over a 10-year period.

President Nixon vetoed the 1970 Manpower Training Act, charging that the “array of patchwork programs ... is not delivering the jobs, the training and other manpower services that this nation needs.” In his report to Congress, the President listed 24 separate federally-assisted training and support programs. Professor Frank C. Pierson mentioned in his study, “Community Manpower Services for the Disadvantaged”, that “one investigator reported ... that he had located 44 publicly financed manpower programs in New York City, but he was not certain even after diligent search that all of the programs had been found.”

During the Nixon and Ford administrations, job-training programs became a favorite target for budget cutters. However, there has been a mild resurgence of them under President Carter. The Comprehensive Employment and Training Act of 1974 (CETA) received an appropriation of $10.8 billion in 1978. Most of this money went to fund and administer public-service jobs, but about $4 billion went for job-training and placement services. In addition, President Carter revived the WIN idea of training welfare recipients for jobs under a $125-million program called “Work Equity Program” (WEP). A new private-sector initiative not unlike the old JOBS program was launched to reimburse businesses for the extra costs incurred in training and placing the hard-core unemployed.

This last program, developed by the U.S. Department of Labor and the National Alliance of Business, cost $250 million in 1979. Under it, for instance, Control Data Corporation of Minneapolis received a $3.3 million federal grant to train 300 disadvantaged youth for employment in the computer industry, which works out to $11,000 per trainee. Although this program was developed by and for the business community, a survey of 809 business executives conducted by the Opinion Research Corp. found that only 12% knew it existed.

Despite billions of dollars spent annually on job-training and job-counseling programs on top of all the other money spent for education, the unemployment rate has climbed upwards. Many question the wisdom of training young people, minorities, displaced homemakers, and others for jobs which later fail to materialize or have limited opportunities for advancement. In the early 1970s, Time magazine reported a general feeling that “the (job training) programs have swallowed huge amounts of taxpayers’ money but failed to put enough unemployed into productive jobs ... A study by the Congressional Joint Economic Committee staff charged that a few JOBS employers used federal money to hire uneducated, foreign-born hopefuls for dead-end jobs - although the companies were supposed to train them for jobs with upward mobility.”

The drift of our current manpower policies has been towards what Tom Dewar, a researcher with the Minnesota Project, calls a “gross overcertification of work”. Mr. Dewar told a Citizens League committee meeting in Minneapolis in December 1979: “With the increase in the certification of work, employers tend to look for credentials and base their hiring on whether or not a person is certified. People who do not have certification of one kind or another are often considered deficient. The response is that they need ‘training.’”

Mr. Dewar further observed: “The services (which train disadvantaged persons) have made disadvantaged people clients and not workers. They are not known (themselves) to be good for hiring disadvantaged people ... Disadvantaged people are increasingly aware of this. More (disadvantaged) people are asking how they can get the jobs rather than the services.”

Such an arrangement has caused unemployed people to become cynical about government and our economic institutions. Not the least hypocrisy is that many of those who expound the theory of “structural unemployment” are themselves employed by academic institutions; they are in the business of providing training of various kinds. This theory locates the cause of unemployment in unemployed people - a cruel suggestion in most cases - and offers them remedial help instead of jobs.

Where are the job skills acquired if not in a job? Henry Ford used to boast that the company’s chief metallurgist had started out pushing a broom in the plant. During World War II, the nation needed skilled workers in a hurry to produce airplanes and tanks. “Rosie the riveter” stepped in with little prior experience. Rosie’s sons and daughters are no less intelligent than she, but employers demand from them more education. They can demand this because of an oversupply of labor available for the better jobs. So long as this oversupply exists, the fierce competition to find and keep jobs may bid up the ante for credentials indefinitely, thus perpetuating occupational rigidities which create artificial shortages of skills and breed low productivity. The need for more job training can become a self-fulfilling prophecy.


Logically, the next step after job-training and counseling might be for the unemployed person actually to take a job. If the economy does not provide one, government would have to expand its role as employer of the last resort. During the Depression, the federal government sponsored a number of employment programs, notably the Works Progress Administration (WPA) and the Civilian Conservation Corps (CCC). At its peak, the WPA employed 3.5 million Americans in a variety of chores ranging from road-building to painting murals in post offices. The CCC, which employed up to half a million workers, specialized in forestry work, improvement of state and national parks, and similar projects. Later, of course, the federal government became “employer of the first resort” during World War II.

The idea of sending out unemployed people or welfare recipients to work on community projects in exchange for their income-maintenance checks has long been a popular one. This is sometimes proposed in connection with welfare reform. The stumbling block is whether a person can be required to work against his or her will. Generally, that question has been answered in the negative.

Another idea, somewhat less popular, is that unemployed youth ought to be drafted into the U.S. armed forces. As a peacetime strategy, conscription is recognized to violate civil liberties and to throw the cost of national defense squarely upon the shoulders of one group, young males. The Department of Defense estimates that average pay and allowances for new recruits work out to 33 cents an hour less than the minimum wage, and 48-hour workweeks are required.

During the early 1960s, the federal government ran a jobs program under the Manpower Development and Training Act. However, this program involved mostly on-the-job training, classroom instruction, and career counseling, rather than public-sector jobs. The Economic Opportunity Act of 1964 created the Neighborhood Youth Corps (NYC) which more closely resembled the Depression-type employment programs. In its first ten years, the Neighborhood Youth Corps employed nearly 5.4 million young men and women, about 3.6 million of them in summer jobs and the rest in year-round jobs compatible with continued schooling.

The 1971 Manpower Report of the President described this program in the following words: “The in-school and summer work programs of NYC have a reasonably clear-cut objective: the use of federal funds by local government or non-profit organizations to provide part-time or temporary work to disadvantaged young people to help them stay in school and develop employable skills. The out-of-school program of NYC is addressed to the much more challenging task of providing full-time work experience in public or private organizations to young people who have dropped out of school. It has proven extremely difficult to find many jobs under the latter program that might lead to worthwhile careers, especially for boys. A concerted effort is now being made to emphasize remedial education, skill training and supportive services as part of the out-of-school program with a view to inducing more of the enrollees to return to school or enter a community college.”

Since 1974, the Comprehensive Employment and Training Act (CETA) has authorized programs for increasing employment in the form of public service jobs. The U.S. Government has subcontracted such programs to “prime sponsors”, mainly city or county governments, which either hire unemployed people to do useful community work or distribute the money to other community-based organizations that do the hiring. “The cities have considerable latitude in determining what tasks they will hire unemployed persons to perform,” a story in the Wall Street Journal pointed out. “CETA workers have been hired to answer telephones in police stations, relocate oysters from polluted to nonpolluted waters, distribute crutches and wheelchairs to the infirm and elderly, mind children in day-care centers and undertake thousands of other tasks.”

Upon assuming office, President Carter raised the number of CETA jobs from 300,000 in 1977 to 725,000 in 1978 as a measure against unemployment. A year later, with unemployment down to 6%, Congress voted to fund 625,000 CETA jobs. By the end of 1979, the program was down to 500,000 jobs.

Despite its apparent success with unemployment, the CETA program has been marked by controversy, partly because of its high cost - $10 to $12 billion a year - and partly because of alleged fraud or mismanagement by its prime sponsors, local governments. The principal complaint was that these governments were substituting CETA workers for employees already on their payrolls, and so were helping to balance their own budgets without creating any new jobs. Congress struck back in 1978 by placing an 18-month limitation on holding a CETA job, and it also reduced the number of authorized positions.

These moves, combined with Proposition 13 and cutbacks in countercyclical aid to distressed areas, gave municipal governments a financial jolt. They responded by layoffs of city workers, tax increases, and increased fees for public services. Meanwhile there were charges that CETA funds had been used for such purposes as teaching the Islamic religion, that promotions had been given in exchange for sexual favors, that politicians had put relatives on the CETA payroll, and that millions of dollars had been squandered through mismanagement.

On September 30, 1979, an estimated 100,000 CETA workers were given pink slips under the 18-month limitation rule. CETA officials had previously removed another 100,000 from the rolls during 1979 by phasing them into CETA’s job-training and job-search programs, which paid enrollees. Rhode Island’s director of CETA said of the September layoff: “The way the job market is now and with the recession coming, a good many people will go back into the ranks of the unemployed.”

“Make work” jobs in the public sector are jobs without a future. They are often costly jobs in terms of the cost per participant, though not in terms of the wages paid. The expression, “employer of the last resort”, is a misnomer. When government appropriates funds for jobs, funds which must be borrowed or raised in taxes, it becomes, in effect, “employer of the first resort.” The money that is taxed out of the private sector cannot be spent to create jobs in other ways. The main advantage is that government officials can point to something definite that they have done about unemployment.

Although classified as employed, the mostly young clients of these programs are denied job security and a competitive wage. Also, the make-work nature of the jobs denies them an opportunity to develop useful career skills. The workers are not pursuing careers of their choice but are situated in a temporary holding pattern, doing work which does not need to be done. Members of a particular generation thus fall behind occupationally. Some become permanently welfare-oriented. Some become acquainted with the criminal-justice system. Some are able to launch worthwhile careers.



“Since its inception after World War II,” wrote Professor Frank C. Pierson, “national manpower policy has shifted direction frequently and dramatically. In the early 1960s the focus was on opening up jobs by stimulating economic growth, bringing jobs to people through area relocation programs, and bringing people to jobs through relocation assistance. Later in the decade, the effort shifted to training the disadvantaged to compete for job openings, subsidizing private employers to hire such workers, and opening new career opportunities in the public sector ... While the beneficiaries of the country’s manpower programs now number in the tens of thousands, there is no reason to believe that another package of programs at the same cost could not have contributed more.”

In a word, the federal government has switched from a policy of full employment to a policy of "targeted" employment. Giving up on the promise of jobs for all, government has assumed the function of doling out scarce jobs to members of favored or protected groups. Those targeted for jobs are generally the “disadvantaged” - i.e., clients of government services. These people must be in some way inferior, and therefore they need help from counselors, psychologists, teachers, and administrators. They are the human foundation upon which a class of professionals stands, dispensing services.

Sometimes, “disadvantaged” runs along racial, ethnic, or sexual lines. In that case, targeting employment to such persons appears to offset identifiable patterns of discrimination and prejudice and may ease social tensions. At the same time, it is an obvious source of power to the ones distributing the jobs. Sooner or later, the beneficiaries become victimized by the crazy administrative processes and criteria inherent in programs of this kind. Such a policy delivers political appeasement while, in fact, building taller and thicker walls around people’s “disadvantage”.

The theory is that manpower funds may be spent more efficiently if aid is targeted to groups in the greatest need. Certainly, if normally prosperous citizens are receiving the bulk of the money, which is the case with many government aid programs, one might question the need for subsidies at all. On the other hand, to set eligibility criteria based on need creates a perverse incentive to remain disadvantaged. Those who qualify for a program on the basis of negative distinctions tend not to appreciate its benefits. For instance, one finds widespread vandalism in subsidized housing for low-income families.

Seeking an explanation for the fact that the poor often abuse the very services designed to help them, William Raspberry wrote in his syndicated column: “Part of the answer seems to lie in what might be called deservedness. People tend to value those things that they think they deserve to have, whether because they have earned them through some personal exertion or because they consider themselves innately special and therefore deserving. People tend not to value things that have come to them in ways they consider illegitimate. Housing, food, training, recreation, or jobs that are distributed on the basis of some negative attribute - poverty or criminality, for instance - are frequently treated with contempt. It isn’t only whether they will be valued or not; it is also whether the recipients consider that they are deserved.”

What has happened in the post-war U.S. economy has been an impressive improvement in productivity unrelieved by shorter hours, with the result that labor has been displaced from basic industries. Jobs have been lost in agriculture, mining and manufacturing as the new labor-saving technologies have been introduced. Even with expanded sales and production, manufacturing firms have been able to fill their orders, employing a smaller number of workers than would otherwise have been needed. Under the circumstances, the question was not whether to lay off workers but which workers to lay off.

Almost without exception, the decision has been to lay off the workers who have been hired most recently and retain those with greater seniority. In union shops, this provision is usually written into the contract. Bruce H. Millen, a Labor Department official, has written: “Most (collective bargaining) agreements assign relatively more weight to seniority than to ability and other factors in determining the order in which employees are laid off. A 1971 BLS study found that all but 1 of the 364 contracts studied made seniority a criterion in layoff procedures.”

In some instances, employers have agreed not to lay anyone off but, if necessary, to reduce the number of employees through attrition. The jobs that were vacated through retirement, transfers, promotions, voluntary quits, or death would simply be abolished. The 1971 contract between the U.S. Postal Service and the Letter Carriers and American Postal unions has, to date, eliminated about 80,000 positions by this means. Employers often prefer to terminate workers by attrition rather than by layoffs, believing this to be a relatively humane way of handling the situation. Such a policy has been called “silent firing.”

Unfortunately, a price has to be paid, and it is paid by persons who are currently looking for work. Because of the freeze on new hiring, they do not find as many job opportunities as before. As a result of policies which assign layoffs by seniority or eliminate job positions through attrition, unemployment comes to be concentrated among the particular groups of people entering the work force for the first time or attempting to break out of occupational ghettos which had consigned them to lower-level jobs: women, racial minorities, and the young. In 1978, unemployment among women averaged 7.2% vs. 5.2% for men; among blacks and other minorities, 11.9% vs. 5.2% for whites; among teenage workers, 16.3% vs. 4.9% for workers 20 years and older. Thirty years ago the disparities were not so severe.

Politically, such discrimination could not be tolerated. An obvious solution might have been to shorten working hours so that the economy could accommodate all the newcomers and create job openings in every industry and at every level of pay. However, that approach was rejected. Instead, the policy of targeted employment was adopted.

To pacify members of the excluded groups, government officials chose to push preferential treatment of women and minorities and of “disadvantaged” youth. Employers were given tax incentives or grants to hire the chronically unemployed and other hard-to-place job applicants. Affirmative-action policies were developed to help women and blacks assume their fair share of the jobs which became available at various levels of pay and responsibility. By “bending over backwards” so obviously to help these particular groups, the politicians were suggesting to unemployed women, blacks, and young people that they were doing everything in their power to help relieve the situation.

To target jobs to people because they belong to a particular socioeconomic or demographic category suggests that such persons cannot compete successfully for jobs on their own; it suggests that they are personally incapable of handling the work and need remedial help of various kinds. That is not the problem that women, minorities, and young people face in today’s job market. Rather, it is that job opportunities became limited - especially for the more satisfying, high-quality jobs - when they happened to be entering the labor force or raising their level of job expectations.

The targeting approach has generated a backlash among white males who have complained of “reverse discrimination”. Moreover, it has failed to lower the ratio of unemployment rates between black and white workers or to narrow the gap between men’s and women’s average earnings. In 1948, blacks averaged 5.9% unemployment compared with 3.5% unemployment for whites. In 1978, as we have seen, the ratio was 11.9% to 5.2%. Women continue to earn, on the average, approximately 60% as much per week as men. Hedges and Mellor reported in Monthly Labor Review: “Real earnings of all men who usually work full time were about 12 percent higher in May 1978 than in May 1967; real earnings of all women increased about 9 percent. Thus, the relative gap between the sexes was as wide as in 1967.”

These figures are surprising in view of the publicity about women’s and blacks’ new opportunities to advance in an economy controlled by white males. Undeniably, some women and blacks, even some black women, have advanced. However, they tend to be the better-educated, managerial or professional workers rather than those in occupations where most women and minorities are employed. Affirmative-action programs presuppose that not too many women or blacks will be available for the jobs; otherwise, it might begin to arouse opposition from the white men.

People may advance through education, but education releases only a trickle of “qualified” individuals each year. The disadvantaged may not advance en masse by working their way up through the ranks - from a clerical or laboring job to a professional or managerial job - because a rigid caste system of occupations keeps them locked in place. Therefore, while corporate personnel recruiters roam the country looking for black business-administration graduates or electrical engineers, and while female economists are appointed to the boards of directors of major banks, the average woman or black who works in an office or factory for perhaps $4 or $5 an hour has little hope of advancement.

The young, however, are the worst victims of economic discrimination. When youth is combined with racial minority, the result can be catastrophic. Unemployment among teenagers is more than three times as great as among adult workers. Last to be hired, they are first to be laid off. This can be a real disadvantage where vacations, pensions and other benefits are awarded on the basis of seniority. Hedges and Mellor note that: “The 1978 purchasing power of workers age 16 to 24 remained static since 1967 for either sex while real earnings for all those 25 and older rose by about 15 percent.” From their own lower earnings, the younger workers are paying a sharply increased amount to finance Social Security benefits for others while the chance that this fund will support their own retirement steadily diminishes.

If the young have hope of an improved condition in the future, they may find that the demographics are stacked against them. Peter Drucker points out that “the class of ’79 may be the first one (of the baby-boom generation) to find that the bases ahead of them are loaded ... that every rung on the ladder (of promotions) ahead of them is occupied ... The same demographics that made for fast progress in earlier years are going to slow down the ones now entering the job market. The path of rapid advancement will be blocked by people who are just as well educated but only a little older - people who will be on the job for another 25 to 35 years ... In the last twenty years we have tended to make entrance jobs smaller and less demanding; we had to get young people ready for promotions fast. Now we will have to structure jobs on the assumption that even a capable and hard-working person may have to spend many years on or near the entry level ... There is need to counsel the young. There is need to make sure they have someone to whom they can talk in the organization, if only to unburden themselves ... There is need for someone who realizes that the young through no fault of their own or ours are going to have it tough in coming years.”

The employment situation has tightened considerably during the past decade. Those workers with the good jobs will not budge. Therefore, work must be pried loose from the present economic structure. A new strategy is required, one which will redistribute income, and work and leisure, more equitably.


Historically, the shorter-workweek approach has been taken in industrial societies to keep unemployment in check as productivity rises. During this century, the average workweek of American workers declined quite rapidly for the first four decades. However, since 1940 - which is the year that the Fair Labor Standards Act implemented the 40-hour standard in manufacturing industries - the decline has slowed considerably. The average went from 60.2 hours per week in 1900 to 49.7 hours in 1920, to 44.0 hours in 1940, to 41.7 hours in 1950, to 39.1 hours in 1970, and to 38.9 hours in 1979.

As was mentioned earlier, a 30-hour workweek bill passed the U.S. Senate in April 1933, but it was opposed by the Roosevelt administration and was killed in the House of Representatives. Instead, President Franklin D. Roosevelt sought shorter hours as part of a comprehensive program of economic recovery under the National Industrial Recovery Act, passed several months later. The National Recovery Administration (NRA), its administrative arm, drew up industrial codes which regulated wages and hours in different industries. Under NRA codes, many workers received a reduction in their workweek to 44 or 40 hours or even less. However, the U.S. Supreme Court declared the entire package unconstitutional in May 1935.

After an abortive attempt to upset this decision by packing the Supreme Court with additional justices, President Roosevelt instructed his Secretary of Labor, Frances Perkins, to prepare drafts for proposed legislation which would meet the test of constitutionality. In 1936, the Walsh-Healey Act was passed which provided that contractors furnishing $10,000 or more in materials, supplies, articles, and equipment to the federal government would be subject to the 40-hour standard. The Fair Labor Standards Act, passed in 1938, extended this practice to a broader segment of industry.

The Fair Labor Standards Act has tended to stabilize weekly hours around the 40-hour mark. The “time-and-a-half” premium wages, originally meant to discourage the scheduling of overtime, have failed in that purpose as the cost of fringe benefits, and of hiring and training new employees, has risen relative to straight-time pay.

From time to time, the trade unions have resolved to reduce the workweek to below 40 hours either through collective-bargaining agreements or legislation. This demand has been raised especially by unions in industries where employment has declined because of automated equipment and in times of high unemployment. Seldom has the issue been a priority, however. Although the national AFL-CIO has long been on record as favoring a reduced workweek, the last serious attempt it made to achieve this goal was in the early 1960s.

In August 1962, the AFL-CIO Executive Council announced that labor’s top priority during the 1963 bargaining sessions, as well as in lobbying Congress, would be to establish a 35-hour workweek with no cut in weekly pay. To discourage employers from scheduling overtime, the unions proposed to raise the penalty rate from “time and one half” to “double time”.

This was a period of heady predictions about the progress toward leisure. The electricians' union in New York City, IBEW Local #3, had just signed a "breakthrough" contract establishing a 25-hour week. David McDonald, president of the United Steelworkers of America, was proposing a 32-hour workweek in the steel industry. Walter Reuther of the United Automobile Workers was advocating a flexible workweek geared to the level of unemployment. "Automation", many said, would necessitate such changes.

Most business leaders disagreed with these ideas. So evidently did the Kennedy administration. The Secretary of Labor, Arthur Goldberg, who had previously been general counsel of the United Steelworkers, made it clear that he would not support his former employer where shorter hours were concerned. Goldberg declared: "Let me say categorically for the national administration that the President and the Administration do not feel that reduction of hours will be a cure to our economic problem or to unemployment ... It is my considered view that the effect of a general reduction in the workweek at the present time would be to impair adversely our present stable price structure by adding increased costs that industry as a whole cannot bear.”

President Kennedy’s own views were expressed in a speech delivered to the steelworkers during the 1960 campaign. He said: “In the face of the Communist challenge, a challenge of economic as well as military strength, we must meet today’s problem of unemployment with greater production rather than by sharing the work.”

The mood at the time was one of abounding self-confidence and optimism regarding technocratic solutions to human problems. The “New Frontier” was deeply committed to a policy of economic expansion. A larger GNP would finance our competition with the Russians in the arms race and in space, as well as the various social programs contemplated in government. That, economists argued, was the only real way to achieve prosperity. Education was the key to solving structural unemployment. Continued growth was essential. Work sharing, designed to meet a general insufficiency of jobs, seemed much too crude and old-fashioned to suit the economic experts.

A report of the Senate Subcommittee on Economic Statistics issued in 1961 summarized the several arguments which were heard at the time: “In a world in which we have the immense and rapidly growing challenge of Communism to meet, a world in which human needs vastly outreach the maximum production of this nation for generations to come, and in a country in which the need for schools, hospitals, homes, and a myriad of other products is still enormous, it would seem a confession of defeat to reduce the hours of labor when no case has been made that present hours involve an excessively exhausting burden or are destructive of useful leisure time ... A shorter workweek or longer vacation would either reduce the standard of living of millions of wage earners as their weekly or annual pay dropped or there would be a sharp increase in labor costs. With increasing productivity, those losses and costs might be temporary but they would also be real and serious ... But to solve the problem, increasing demand by constructive government and private outlays and by retraining the labor force through effective opportunities to retire and secure adequate education deserve higher priority.”

Striving to meet “the immense and rapidly growing challenge of communism”, the United States government set about to conduct a limited “brushfire” war against the Viet Cong guerrillas in South Vietnam. By 1975, when Saigon fell to the communists, we had spent $140 billion trying to contain that brush fire. We had lost 46,000 American lives, sustained 300,000 wounded, and suffered an inestimable loss of prestige around the world. Meanwhile, the domestic “War on Poverty” was causing many able-bodied Americans to desert the ranks of the poor. Schools were built in areas which a decade or two later faced declining enrollments. Hospital beds were added, only for the nation to learn years later from Ralph Nader’s research associates that the excess hospital capacity was costing it $8 billion a year.

In 1957, a vice president of Inland Steel gazed far into the future and reported back: “All the figures we have studied indicate that we will be short 2 million people in 1975 even if the workweek continues as it is. It seems to me that the fear should be whether or not technology can keep pace with the demands on it - not whether the workweek will be shorter.” In fact, an average of 7,830,000 Americans were unemployed in 1975, representing 8.5% of the work force.

Lyndon Johnson had tipped his hand on the shorter-workweek issue when, as a U.S. Senator, he remarked: “Candor and frankness compel me to tell you that, in my opinion, the 40 hour week will not produce missiles.” So long as the war was raging in Vietnam and the unemployment rate was below 4%, there could be no thought of changing the workweek. In November 1963, hearings were held before the House Education and Labor Committee to consider legislation to reduce the standard workweek, but nothing came of them. Politically, the country had become preoccupied with controversy over the Vietnam war, disturbances in the cities, and the Watergate scandal. The work ethic was proclaimed by President Nixon to be the bedrock of our nation’s moral and economic strength.

Even so, some progress was made on the workweek front during the 1960s and 1970s. The Fair Labor Standards Act was amended several times. The 1961 amendments brought an additional 3.6 million workers under its jurisdiction, mainly in retail trade and in the service and construction industries. The 1966 amendments, which extended minimum-wage protection to farm workers and others, covered another 10.4 million people. There were also amendments in 1974 and 1977 covering government employees and hotel, motel, and restaurant workers. Yet from 1963 until recently, no serious effort was made to change the 40-hour standard or the penalty rate for overtime. However, a boom in alternative working hours began to build during this period.

Alternative workweeks enjoyed greater support from the business community which, indeed, had initiated them in many instances. In the early 1970s, the new “4-day workweeks”, remaining at 40 hours, appeared to be the coming thing. There were glowing reports of increased productivity, reduced absenteeism, improved employee morale, and so forth. However, organized labor was opposed to the longer work days, and some problems were noted with scheduling and with fatigue brought on by the 10-hour days.

Later, the concept of “flexible” working hours, which gave workers greater freedom to set their own hours, caught on with many business firms. This idea originated at the Messerschmitt plant in Ottobrunn, West Germany, as a means of reducing traffic congestion during the rush hour. Again, many advantages were cited. In 1978, the federal government passed a law which required federal agencies to develop experimental “compressed workweek” or “flex-time” schedules which would be evaluated during a 3-year trial period. Another law passed at that time required that a certain percentage of federal jobs be set aside as permanent part-time jobs, with benefits and staffing allocated on the basis of full-time equivalence.

In the mid-1970s, the more traditional shorter-workweek movement again began to stir. The 1974-75 recession, the worst since the Great Depression, hit the Detroit automobile plants especially hard. As a result, the United Automobile Workers made reduced working hours its principal demand in the 1976 contract talks with Ford, General Motors, and Chrysler. The UAW raised this issue specifically for the purpose of preserving jobs.

A month-long strike at Ford won union members 12 additional days of paid leisure over the contract period. Those days, known as Paid Personal Holidays (PPH), were to be spread out evenly during the year so that the employer might hire additional workers without disrupting production. In the 1979 contract negotiations, which were concluded without a strike, the automobile workers gained a total of 26 PPH’s over the three-year contract. Workers at all of the “Big Three” automobile companies won such an agreement, although the Chrysler workers had to give up their personal holidays in connection with the Chrysler bail-out legislation.

Following the 1976 strike at Ford, a group of local union leaders organized the “All Unions Committee to Shorten the Work Week” for the purpose of coordinating the shorter-workweek activities of unions around the country. Frank Runnels, president of UAW Local #22, was elected its leader. The All Unions Committee held a national rally in Dearborn, Michigan, on April 11, 1978, which attracted 700 participants. The UAW’s international president, Douglas Fraser, and Congressman John Conyers of Detroit were among the featured speakers. Later in the year a group known as the “General Committee for a Shorter Workweek” was organized in Minnesota, which sought to arouse support for the shorter-workweek cause on a community-wide basis.

Although initially the All Unions Committee stressed collective-bargaining agreements, it also sought to reduce working hours through legislation. Rep. Conyers was persuaded to introduce a bill in Congress which would amend the Fair Labor Standards Act. The Conyers bill, introduced in April 1978 as HR-11784, proposed to amend this law in three respects: (1) to reduce the standard workweek to 37 1/2 hours in two years and to 35 hours after four years, (2) to increase the penalty rate for overtime from time and one half to double time, and (3) to prohibit mandatory overtime.

HR-11784 attracted five cosponsors in the 95th Congress. In the following session, the bill was resubmitted as HR-1784. On April 6, 1979, the All Unions Committee held a “Second National All-Unions Conference and Legislative Lobby” in Washington, D.C. which attracted several hundred delegates from around the country. It was announced then that hearings on the Conyers bill were scheduled in the House Education and Labor Committee during three days in late October. By the time of the hearings, thirteen members of Congress had become co-sponsors.

The hearings opened on October 23, 1979, before the Subcommittee on Labor Standards, chaired by Rep. Edward Beard of Rhode Island. On the first day, the AFL-CIO, UAW, UE, and other labor organizations sent representatives who testified in favor of the bill, as did Congressmen John Conyers and James Oberstar (from northern Minnesota). Mayor Coleman Young of Detroit paid a surprise visit to lend his support. On the second day, spokesmen for the U.S. Chamber of Commerce and several trade associations representing the restaurant and convenience-store industries testified against it. The third day featured testimony from academic experts, including Professor Wassily Leontief of NYU, a winner of the Nobel Prize in economics, who supported the bill. Many of the others did not.

Rep. John Conyers, chairman of the House Judiciary Committee’s subcommittee on crime and a prominent member of the Congressional Black Caucus, is currently serving his ninth term in Congress, having been re-elected by margins exceeding 90% of the vote in Michigan’s First District. He has a particular interest in the relationship between unemployment and crime. Rep. Conyers was one of the leaders in the struggle to enact the Humphrey-Hawkins bill. He and other supporters of a shorter workweek see the Conyers bill as a means of implementing the Humphrey-Hawkins goal of reducing the general rate of unemployment to 4% by 1983 while simultaneously controlling inflation.

During the past several years, other concerns such as energy and inflation have tended to overshadow the unemployment problem. However, that situation is changing. As the nation moves through another period of economic decline, with more people out of work, it is anyone’s guess whether the old ineffectual remedies - “pump priming”, WPA-type projects, extended unemployment benefits, job-training programs, and the like - will be brought out of mothballs, repackaged, and put on the political market once again, or whether something more “radical”, such as the Conyers bill, addressing itself to the fundamental relationships and causes of unemployment, will at last be considered.






COPYRIGHT 2016 Thistlerose Publications - ALL RIGHTS RESERVED