The Law Professor Who Coached the Marquette Football Team

The Marquette University Law School has long been associated with the world of sports. Although the National Sports Law Institute has represented the connection in recent years, the school's relationship to the sports industry goes back much further than the 1989 founding of the Institute. Federal Judge Kenesaw Mountain Landis, later the first Commissioner of Baseball, was a lecturer at the law school shortly after it opened; Carl Zollmann, the first major sports law scholar, was on the Marquette Law faculty from 1922 to 1941; and a number of outstanding athletes, including Green Bay Packer end and future U.S. Congressman Lavvie Dilweg and Olympic gold medalist (and future congressman) Ralph Metcalfe, studied at the law school in its early years.

However, no one has ever combined the two fields more perfectly than Prof. Ralph I. Heikkinen, who, during the 1947-48 academic year, both taught full-time at the law school and coached the Marquette varsity football team, at a time when the team played at the highest level of collegiate competition.

Heikkinen was already well known to sports fans in the upper Midwest when it was announced in the spring of 1947 that he would be joining the Marquette faculty and staff. A native of the Upper Peninsula of Michigan, Heikkinen had grown up in the community of Ramsey. He enrolled at the University of Michigan in the fall of 1935, where he excelled academically. Not only was he an outstanding student, but he was also a published poet and the president of the student government. On top of that, he was an undersized lineman who made the powerful Michigan football team as a walk-on.

Although he began his career as an unheralded newcomer, by the time he was a junior Heikkinen had developed into one of the best two-way linemen in the country. Despite standing just 6' tall and weighing only 183 pounds, he was voted his school's MVP in both his junior and senior years and was chosen unanimously as a guard on the 1938 All-American team. During Heikkinen's senior year, the Wolverines, under new coach Fritz Crisler, narrowly missed a perfect season because of a 7-6 defeat at the hands of Minnesota, in which Michigan botched an extra-point kick, and a 0-0 tie with Northwestern, in which Michigan missed a field goal from the 6-yard line. Even so, the team finished the season 6-1-1, ranked #16 in the country in the final Associated Press poll.

After completing his college career, Heikkinen was drafted by the Brooklyn Dodgers of the National Football League. Because of concerns over his size and doubts about his interest in playing professional football, he was not chosen in the 1939 draft until the 12th round, as the #105 overall pick. Since the end of the 1938 college season, Heikkinen had been on the fence about professional football, and he initially appeared to be leaning toward remaining at the University of Michigan as a graduate or law student who would also coach the linemen on the freshman football team.

Finally, after accepting an invitation to play in the 1939 College All-Star Game, which pitted the top senior collegians against the NFL champion Washington Redskins, "Heik," as he was known, decided to sign with the Dodgers.

However, the football success he had achieved in Ann Arbor was not to be repeated in Brooklyn. Even though NFL players were much smaller in 1939 than they are today, Heikkinen was undersized even by the NFL lineman standards of the time. Also, having missed the pre-season because of his indecision and his participation in the College All-Star Game (which Washington won, 27-20), he had trouble earning playing time after his arrival in Brooklyn.

Although one of the Dodgers' 1938 guards had retired and the other had been moved to tackle, Heikkinen lost out in the competition for the two guard positions to two other, less-heralded rookies. After only three games of the 1939 season (in only two of which he actually played), the Dodgers simply released Heikkinen rather than keep him on the bench while paying his salary.

Some published accounts reported that the release had been made at Heikkinen's request so that he could accept a coaching position at the University of Virginia. Whatever the reasons for his release, within three weeks Heikkinen was in Charlottesville, Virginia. There, he accepted a position as assistant line coach for the school's football team, which was coached by former Marquette head coach Frank Murray. At the same time, he enrolled as a first-year student at the University of Virginia Law School, even though the fall semester was already underway.

For the next several football seasons, Heikkinen was an assistant coach on the Virginia football team. In 1940, he was promoted to head line coach, a position that he would hold through the 1944 season. Virginia's football fortunes increased dramatically after Heikkinen's arrival, but that probably had more to do with the simultaneous appearance of future Hall of Fame halfback "Bullet Bill" Dudley, arguably the greatest player in the school's history. Although the team's fortunes fell off after Dudley graduated, the 1944 team posted the school's second-best record since 1925.

When not coaching the Cavaliers, Heikkinen divided his time between his legal studies and his work with the University of Virginia's Flight Preparatory School, which was established as part of the United States Navy's V-12 program during the Second World War. According to university records, Heikkinen was enrolled as a law student in 1939-40, 1940-41, and 1944-45, although it seems likely that his coaching duties kept him from taking a full load of courses during the fall semesters, and he may have taken classes in 1941-42 and 1943-44 to make up the work he had missed.

In 1943 and 1944, he was an instructor in aerial navigation and physical education for naval officers enrolled at UVA under the V-12 program. (The UVA football teams of 1943 and 1944 were greatly strengthened by the presence of the Navy students, who were eligible for intercollegiate sports.) It is entirely possible that Heikkinen was also enrolled in the Navy Reserve between 1942 and 1944, in preparation for his service in the V-12 program.

In spite of his protracted time as a law student, Heikkinen excelled academically. He graduated first in his class, was selected to Phi Beta Kappa, and was one of two law students honored in 1944 with membership in the Order of the Coif. He was also chosen as a member of the University's prestigious Raven Society. Although his work schedule was not really compatible with law review membership, he did join the staff of the Virginia Law Review during his final semester in law school.

After graduating from law school in June of 1944, Heikkinen remained on Murray's coaching staff. However, at the conclusion of the 1944 season, he announced his resignation from his coaching position and his decision to accept an associate position with the New York law firm of Cravath, Swaine & Moore.

While practicing law in New York, Heikkinen kept his hand in the world of football by serving as a scout for Lou Little’s football program at Columbia University during the 1945 and 1946 seasons.

Following the 1945 season, Coach Murray left the University of Virginia and returned to his previous employer, Marquette University, where he was a legendary figure. Under Murray, who served as Marquette's head football coach from 1922 to 1936, the Golden Avalanche/Hilltoppers compiled a won-lost record of 90-32-6, culminating in an appearance in the inaugural Cotton Bowl in Murray's final game at the helm. Neither of his successors, Paddy Driscoll and Tom Stidham, came close to matching Murray's success on the playing field, and in 1946, Murray was enthusiastically welcomed back to Marquette.

In 1946, Murray's first season after his return, the Golden Avalanche went 4-5-0. At the conclusion of the season, head line coach Al Thomas decided to step down. Thomas had actually been Heikkinen's replacement at the University of Virginia, and he had come back to Marquette with Murray. As a replacement for Thomas, Murray seized on the idea of convincing Heikkinen to return to the coaching ranks. Heikkinen was initially reluctant to return to coaching, but Marquette sweetened the pot considerably by offering him a full-time position as Associate Professor of Law as well as a job as Murray's chief assistant with the football team.

Moreover, Murray suffered a heart attack in the spring of 1947, a development that would require his role in the management of the football program to be reduced for the rest of the calendar year.  As a result, Heikkinen was offered the chance to run the football team’s spring practice in April and to coach the team from the bench during regular season games in the fall (although Murray would officially remain the head coach).  Heikkinen accepted the position in April of 1947, with the stipulation that he would be allowed to retain his New York affiliations and would be free to return to New York at the end of the 1947-48 academic year, if he chose to do so.  He arrived in time to oversee the 1947 spring practice.

The law school that Heikkinen joined in 1947 was thriving, as more than 400 students, many of whom were ex-GIs, streamed into its hallways. (Three years earlier, during the War, enrollment had fallen to 44 students.) Over the previous two years, Dean Francis X. Swietlik had quickly rebuilt the law faculty, which had been largely dismantled during the war years.

To accommodate the influx of students anxious to return to civilian life and get on with their legal careers, the law school had decided to continue the "three semesters per year" curriculum that it had embraced during World War II. With full-length Summer, Fall, and Spring semesters each year, this format meant that law students could graduate from the law school in just two years. Heikkinen's first class was part of the Summer 1947 semester.

The addition of Heikkinen brought the number of professors on the law faculty to 15, including eight full-time professors. Four of the eight were full professors: Dean Francis Swietlik, Francis Darneider, E. Harold Hallows, and Willis Lang. The other four were associate professors: James Ghiardi (who joined the faculty in January 1946, after returning from military service in Europe), Warner Hendrickson, Kenneth Luce, and Heikkinen. Of the eight full-time professors, four (Darneider, Swietlik, Lang, and Ghiardi) were Marquette Law School alumni, while the other four held law degrees from Michigan, Chicago, Harvard, and Virginia.

In addition, the faculty included seven part-time lecturers and instructors, and a regent, Rev. Edward McGrath, S.J., a Jesuit who was also a professor of jurisprudence. The most prominent of the part-time faculty was Milwaukee lawyer Carl Rix, who taught Property and who was wrapping up his term as president of the American Bar Association.

Associate Professors Ghiardi and Heikkinen, who were only a year apart in age, were both from the Upper Peninsula of Michigan (although from opposite ends) and quickly became great friends, often socializing with their wives and with colleague and fellow Michigander Kenneth Luce and his spouse.

As a teacher, Heikkinen appears to have been readily accepted by his colleagues. He taught a variety of courses, but he specialized in corporations and security transactions, and during the 1947-48 academic year, he and Luce contributed an article on recent developments in Wisconsin corporation law to the Marquette Law Review. Although he was a football coach, Heikkinen had a surprisingly soft speaking voice. As an AP wire service story noted in November of 1947, he had "such a low-pitched voice that he uses a microphone during classroom hours."

He was also quite conscientious about making sure that his coaching duties and opportunities did not interfere with his classes. Shortly after he joined the faculty in the summer of 1947, he declined a much-coveted invitation to coach the North team in the Upper Peninsula High School All-Star football game because it would have required him to cancel some classes. Several times during away-game weekends that fall, Coach Heik had to follow the team on a later train, and in one case by airplane, to avoid missing any classes.

Under the joint direction of Murray and Heikkinen, the 1947 Marquette football team got off to a roaring start, defeating South Dakota, St. Louis University, and Detroit Mercy in its first three games by a combined score of 101 to 47. The winning streak came to an end, however, in game four, when the Hilltoppers lost in Milwaukee to a fellow Jesuit school, the University of San Francisco, 34-13. Trailing 28-0 at the half, Marquette was never in the ballgame, and the victory elevated the California school to #20 in the Associated Press rankings.

Marquette may have been over-confident coming into the San Francisco game, given that the team was undefeated and San Francisco was coming off a home loss to Mississippi State. The next week featured the game that most Marquette fans felt was the most important of the season: the annual match-up with the University of Wisconsin in Madison.

The 1947 game, like all the others in the series, was played at Camp Randall Stadium in Madison and pitted the 3-1-0 Hilltoppers against the 2-1-1 Badgers. Even though Wisconsin was coming off a 9-0 upset of #12 Yale the week before, Marquette fans seemed confident that this could be one of the rare years in which the Catholic school might prevail over the state university.

In spite of optimistic predictions of success, Marquette's offense simply could not gain any traction, and single touchdowns in each of the first three quarters put the UW ahead 21-0 before MU finally scored. The Badgers subsequently added two more TDs to Marquette's one, for a final score of 35-14.

The suddenly dispirited Hilltoppers proceeded to lose their next three games to Michigan State, Villanova, and Indiana, all of which had winning records in 1947. The team finally rebounded in its last game of the season, which required it to travel to Phoenix the weekend before Thanksgiving. There, Marquette rolled to a 33-7 third-quarter lead over the 5-2-0 Arizona Wildcats and coasted to a 39-21 victory, bringing its final record to 4-5-0, the same mark it had achieved in 1946. At least the season ended on a positive note.

Although many Marquette law students had played on the university football team in the years before World War II, the growing expectation that law students in the post-war era would be college graduates all but eliminated the law school football player.  It does not appear that any law students played on the varsity football team during Heikkinen’s year as coach.

Following the end of the football season on November 22, Heikkinen continued to be an active faculty member at the law school, and most members of the law school community assumed that he would remain at Marquette the following year.  He participated in the spring football practice in late April of 1948, and several newspapers reported that he would be part of the Marquette coaching staff in 1948.  However, in August, the university announced that Heikkinen had resigned both his law school and coaching positions so that he could return to law practice in New York.

According to Heikkinen's friend Jim Ghiardi, interviewed in 2014, no one at Marquette ever knew exactly why Heikkinen decided to leave the law school after only one year on the faculty. He may have been disappointed by Murray's decision to return to full-time coaching in 1948, which would have diminished his role in the program. He may also have simply missed practicing law; after accepting the coaching position in the spring of 1947, he briefly considered turning down the faculty position in favor of a position with a Milwaukee law firm. Also, by the summer of 1948, Heikkinen's wife was pregnant with the couple's third child, and Heikkinen may have decided that he could better support his planned large family (the Heikkinens ultimately had six children) on the salary of a Wall Street lawyer than on his modest combined salary as assistant football coach and law professor at Marquette.

On the Marquette Law School faculty, Heikkinen was replaced by a young law professor named Leo W. Leary, who left the faculty at the University of Texas to return to his native Wisconsin in the fall of 1948. While he never coached the football team, Leary became a Marquette Law School legend in his own right over the next three decades. If you want to strike up an interesting conversation with any Marquette alum over age 70, just ask what he or she thought of Leo Leary.

Shortly after his return to law practice in New York, Heikkinen became the executive secretary and attorney for the Studebaker-Packard Corporation, an automobile company that had been a Cravath client. In 1958, he left Studebaker and went to work in the legal department of General Motors, where he remained until his retirement in 1978. At different times in his life, Heikkinen apparently battled alcohol problems, and at General Motors he was responsible for establishing corporation-wide alcohol treatment and education programs. After leaving Marquette, he never again worked as a football coach, but at his induction into the Upper Peninsula Sports Hall of Fame in 1973, he was identified as a former professional football scout, so his involvement with the sport may have continued after 1948.

Heikkinen, who lived in the Detroit suburbs, died in Michigan in 1990.

Ralph Heikkinen was not the only person in American history to combine the roles of football coach and law professor, although there have not been very many. Lawyer and Hall of Fame coach Daniel McGugin coached the Vanderbilt football team and taught occasional classes at the Vanderbilt law school during the first three decades of the 20th century. Similarly, Fred Folsom taught part-time at the University of Colorado Law School while coaching the school's football team from 1908 to 1915. However, unlike McGugin and Folsom, Heikkinen was a full-time law professor, and he managed to hold both positions in the post-World War II era, when both coaching and law teaching were more demanding tasks than they had been forty years earlier.

Since it appears that Heikkinen is the only person to have served simultaneously as a full-time major college football coach and a full-time law professor, it is entirely appropriate that he achieved this distinction at the Marquette University Law School, where the connection between law and sports has long been recognized.

Gordon Hylton is a Professor of Law at the University of Virginia School of Law.  Prior to joining the faculty at UVA, Professor Hylton was a longtime member of the Marquette University Law School faculty.

Is the Senate Free to Ignore President Obama’s Choice of a Replacement for Justice Scalia?

[The following is a guest post from Professor J. Gordon Hylton, a former member of the Marquette Law School faculty.]

Justice Scalia's unexpected death this past weekend has raised the question of how his seat on the Supreme Court will be filled. Some Republicans have already asserted that it would be inappropriate for the president even to place someone's name in nomination during an election year. Others have more modestly pointed out that the Republicans in the Senate would be acting within their constitutional role if they used their majority power to veto any potential justice the president might put forth. Democrats, in contrast, emphasize the president's constitutional duty to fill the seat and reject the idea that the impending election ought somehow to stay the process of replacing departed United States Supreme Court justices.

What does the history of the Supreme Court tell us about this situation? As it turns out, in the Court's more than 225-year history, sitting justices have died or retired/resigned from the Court during an election year (or the brief stretch of the president's term in the following year) on twenty occasions. In 14 of the 20 cases, a new justice was nominated and confirmed before the president's current term ended. (In 7 of the 20 cases, the sitting president was re-elected, but in none of these cases did the nomination go into the following term.)

However, the story is a bit different when the sitting president's political party does not control the United States Senate. Not surprisingly, in the 12 cases in which the president's party controlled the Senate, the vacancy was filled 11 times. The one exception came in 1968, when sitting Chief Justice Earl Warren announced in June that he planned to retire before the end of the year.

Even though the Democrats held a 62-38 majority in the Senate in 1968, President Johnson’s nominee to replace Warren, Associate Justice Abe Fortas, soon ran into trouble as evidence of perceived financial irregularities and conflicts of interest during Fortas’ years on the Court surfaced. Ultimately, the Fortas nomination was withdrawn, and Warren remained on the Court until the following June, when newly elected President Richard Nixon nominated Warren Burger as the new Chief Justice.

In the other 8 situations, in which the president's political party did not control the Senate (the current situation), the vacant Court position went unfilled 5 times. In fact, that was the result the first four times the scenario presented itself. In 1828, 1844, 1852, and 1860, presidents whose parties did not control the Senate (John Quincy Adams, John Tyler, Millard Fillmore, and James Buchanan) failed in their efforts to appoint replacements for recently deceased justices.

(Technically, John Tyler was a Whig, and the Whigs did have a slight majority in the Senate during his presidency, but Tyler's extreme states' rights beliefs alienated a majority of his fellow Whigs. He was actually more successful in working with the Democrats in Congress. Tyler's efforts to fill an earlier Supreme Court vacancy, created by the death of Justice Smith Thompson in 1843 (not an election year), did not succeed until he nominated Democrat Samuel Nelson shortly before the end of his term in March 1845.)

In the four post-Civil War situations in which this fact pattern appeared, presidents had better luck, largely as a result of choosing candidates calculated to appeal to the political opponents who controlled the Senate. During his presidency, Republican Rutherford B. Hayes faced a Senate composed of 42 Democrats, 31 Republicans, and 2 independents. His first two nominees to the Court were chosen to appeal to the large number of Southern Democrats in the Senate by offering to restore a Southern presence to the Supreme Court that had been missing for most of the Reconstruction era. He did this by appointing former slave-holder John Marshall Harlan of Kentucky and, in the election year of 1880, William Woods, a pre-war Democrat who had been a Union general but who after the war had relocated to Alabama, where he became a cotton planter. However, when Hayes attempted to fill a third vacancy on the Court with fellow Ohio Republican Stanley Matthews shortly before the end of his presidency in March 1881, the Democratic Senate refused to cooperate.

Similarly, when Chief Justice Morrison Waite died in 1888, Grover Cleveland wanted to replace him with a Democrat, even though the Republicans held a narrow 39-37 margin in the Senate. Earlier in his tenure, his first nominee, Secretary of the Interior L. Q. C. Lamar, a former Confederate official, had been confirmed by a four-vote margin, but only because a small number of western Republicans, apparently in appreciation of his policies when he ran the Interior Department, defected to his side. For Chief Justice in 1888, Cleveland nominated Illinois lawyer and Maine native Melville Weston Fuller, apparently on the presumption that the four Republican senators from Illinois and Maine would throw their support behind their native son (which they did, and he was confirmed).

The only other time an election-year nomination went through a Senate in which the president's party lacked a clear majority was in 1956, when Democratic Justice Sherman Minton announced on September 7, just two months before the upcoming presidential election, that he would be retiring on October 15. At that point, the Senate consisted of 47 Democrats and 47 Republicans, plus two independents, one of whom (Wayne Morse) had recently been identified with the Republicans and one (Strom Thurmond) with the Democrats.

Even though Vice President Richard Nixon, as president of the Senate, could have cast the tie-breaking vote in a Senate divided along party lines, Eisenhower avoided a potentially costly showdown with Senate Democrats by capitalizing on a Senate recess to give William Brennan, an Irish Catholic Democrat then serving on the New Jersey Supreme Court, a rarely invoked interim appointment to the United States Supreme Court. As a result, Brennan was able to join the Court the day that Minton retired, three weeks before the election. (Observers then and now have speculated that the decision was motivated in part by Eisenhower's desire to appeal to Roman Catholic voters, who traditionally voted Democratic.) When Brennan actually came up for confirmation in March 1957, he was confirmed by a nearly unanimous voice vote.

Consequently, the past shows that in a situation like the current one, Senates have not hesitated to deny confirmation to the choice of an outgoing (or potentially outgoing) president. On the other hand, there have been times when, through clever nomination strategies, presidents have persuaded their more powerful political opponents to go ahead and support the chosen nominee rather than gamble on a more hospitable result in the future.

It is perhaps worth noting that none of these previous situations is particularly recent. Only two of the 20 have occurred since the election of Franklin Roosevelt in 1932, and of these the most recent is from 1968. Only six of the examples are from the twentieth century, and eight predate the conclusion of the Civil War. Nevertheless, there is no reason to think that any modern constitutional change would have produced different results or would prevent the president or the current Republican majority in the Senate from following a similar course.

Ted Cruz as a Natural Born Citizen

[The following is a guest post from Professor J. Gordon Hylton, a former member of the Marquette Law School faculty.]

The debate continues over the eligibility of Sen. Ted Cruz for the United States presidency under the Constitution's "natural born citizen" clause in Article II, Section 1. (Art. II, §1 provides, in part: "No Person except a natural born Citizen, or a Citizen of the United States, at the time of the Adoption of this Constitution, shall be eligible to the Office of President; neither shall any Person be eligible to that Office who shall not have attained to the Age of thirty five Years, and been fourteen Years a Resident within the United States.")

The question is whether the Canadian-born Cruz, whose mother, but not father, was a United States citizen, qualifies as a "natural born citizen." Unfortunately, neither the Constitution itself nor the surviving records of the Constitutional Convention of 1787 defines the phrase "natural born citizen," and the Supreme Court has never offered an authoritative interpretation of the clause.

Frequently cited as support for the assertion that individuals born abroad with at least one American citizen parent are qualified to hold the office of President is the 1790 Naturalization Act, the country's first statute setting out the path to citizenship for non-citizens. (Ted Cruz himself has repeatedly made this claim.)

The statute in question was enacted on March 26, 1790, by the first Congress, just a little more than two and a half years after the September 17, 1787 signing of the Constitution by members of the Constitutional Convention. Certainly, if any legislative body was likely to understand the intended meaning of the “natural born citizen” reference in Article II, it would have been the first United States Congress, which included in its ranks 20 of the 55 members of the Constitutional Convention (11 in the Senate and 9 in the House of Representatives).

The Naturalization Act did, in fact, address the citizenship status of individuals born abroad of American parents, and it did indicate that they were to be treated as though they were “natural born citizens.” However, the purpose of the Naturalization Act was not to define who was or was not eligible to be president—that was the responsibility of the Constitution itself, not the Congress—but rather it was to determine the ways in which “non-natural born citizens” were to become eligible to be citizens of the United States.

Article I, Section 8 of the Constitution delegates this power to the Congress, to wit: “The Congress shall have Power … to establish an uniform Rule of Naturalization.”  Nothing in Article I of the Constitution (which deals with the powers of Congress) authorizes it to clarify the eligibility requirements for the presidency.

The Naturalization Act divided the pool of potential citizens into two categories. The first included aliens who could be admitted to citizenship if they were white, of good character, had resided within the United States for two years (and their current state of residency for one year), and were willing to take an oath of allegiance to the United States.  Also admitted as citizens were any children of those admitted to citizenship under this provision, so long as they were under age 21 and residing in the United States.

The second category addressed by the statute consisted of those "children of citizens of the United States that may be born beyond the sea or out of the limits of the United States." In regard to such individuals, the statute provided that they "shall be treated as natural born citizens," so long as their fathers had at some point been residents of the United States.

Two things are to be noted. First, the statute does not say that children born abroad are “natural born citizens;” rather, it directs that they be treated as though they were.  The effect of this is to excuse them from the process described for true aliens seeking citizenship.  For purposes of determining citizenship, they are like “natural born citizens,” but they are themselves not “natural born.”  Second, this provision has absolutely nothing to do with eligibility for the office of President.

Had children born abroad to United States citizen parents been viewed as “natural born citizens,” then there would have been no reason to address their status in the Naturalization Act, which deals exclusively with those who are not automatically citizens. In the 1790 Act, Congress made such individuals citizens, but it was not intending to qualify them for the presidency by doing so (nor did it have the power to do so).

While modern constitutional norms were not necessarily well established by 1790, there is no way to read the language of the 1790 Naturalization Act without concluding that the members of the generation that drafted the United States Constitution believed that only those born within the “limits of the United States” could be “natural born citizens.”

As 21st century Americans we may not be bound by this original understanding, but it is simply incorrect to claim that the 1790 Naturalization Act somehow identified the foreign-born children of American citizens as “natural born citizens.”

Same-Sex Marriage Referendums: Major Metropolitan Areas Out of Step With Less Populated Regions

In most states, same-sex marriage has become the law of the land by judicial decision. In a smaller number, the institution has been recognized by acts of the state legislature. Although there were numerous public referendums attempting to ban same-sex marriage before 2008, in recent years only twice have the voters of a state had the opportunity to vote directly on the recognition of marriages between individuals of the same gender.

Both opportunities came in November 2012, as voters in Maryland and Washington State confirmed their state’s recognition of a new definition of marriage. However, both episodes revealed a sharp divide between the majority views of those who live in major metropolitan areas and those who live in less densely populated areas.

The Maryland referendum, like the one in Washington, was actually an effort, permitted under the laws of both states, to overturn an earlier statute. In February of 2012, the Maryland General Assembly narrowly approved a bill recognizing same-sex marriage, known as the Civil Marriage Protection Act. The bill was enacted by votes of 72-67 in the House of Delegates and 25-22 in the Senate and subsequently signed on March 1 by Governor Martin O’Malley.

However, by June, opponents of the bill had secured enough signatures to place the issue on the state’s ballot the following November.

The effort to override the legislation ultimately failed by a margin of 52.4% to 47.6%, but the geographic breakdown of the vote revealed that 18 of the state's 24 counties actually voted to overturn the same-sex marriage statute. All six of the counties in which the repeal measure failed were located either in metropolitan Baltimore or metropolitan Washington, D.C. Even in these metropolitan areas, an equal number of counties (six) voted to overturn the law. The measure even carried in predominantly black Prince George's County, which is part of the D.C. suburbs. (Overall, exit polls suggest that a majority of black Marylanders voted to override the statute.)

In the state's twelve counties that are not part of either metropolitan Washington or Baltimore, the referendum to overturn the statute received the support of substantial majorities, and in seven of the twelve, support for overturning the statute ranged from 60.9% to 73.1% of the vote. The largest majorities were compiled in the rural, largely white counties of Appalachian western Maryland and in the rural, racially mixed counties of the Eastern Shore.

The largest majorities in support of the statute were compiled in suburban Washington's Montgomery County, in the City of Baltimore (which is effectively a separate county in Maryland), and in Howard County, which includes the suburbs south of Baltimore.

Outside of the six counties that supported the same-sex marriage statute, the combined vote in Maryland was 54.9% to overturn the statute and 45.1% to uphold it. In the twelve counties that were not part of the Washington or Baltimore metropolitan areas, the percentages were 59.5% to overturn the statute and 40.5% to uphold it. (Of course, a decade earlier, who would have believed that 40% of the voters in rural Maryland would support same-sex marriage?)

The November 2012 referendum was part of the same election that saw Maryland cast 62.1% of its votes for Barack Obama for president and only 36.6% for Mitt Romney. In only five other jurisdictions—District of Columbia, Hawaii, New York, Rhode Island, and Vermont—did the re-elected president do better than he did in Maryland.

The story in Washington State is a similar one. A bill recognizing same-sex marriage passed the Washington Senate by a vote of 28-21 on February 1, 2012, and the state House of Representatives by 55-43 on February 8. Five days later the bill was signed into law by Gov. Christine Gregoire. However, as in Maryland, opponents of the law gathered enough signatures to force a statewide referendum on the new statute.

On November 6, in what was officially designated as Referendum 74, Washington voters upheld the statute by a margin of 53.7% to 46.3%, a difference slightly larger than in Maryland.

As in Maryland, the large population of the state's major metropolitan area overrode the wishes of the largest part of the state, at least in geographic terms. Twenty-nine of the state's 39 counties voted to override the legislature (15 of them by margins of better than 60%-40%), but their votes were offset by those of the other ten, nine of which bordered on Puget Sound in western Washington.

In King County (which includes Seattle), the same-sex marriage bill passed by a margin of 67% to 33%. In the other nine Puget Sound counties, a majority of voters supported the bill, but the margin was a much closer 52.7% to 47.3%. However, in the other 29 counties, the margin on Referendum 74 was 58.1% to 41.9% to overturn the statute.

As in Maryland, one could argue that it is remarkable that in 2012 slightly more than 40% of the voters in the "conservative" parts of Washington State were willing to support the concept of same-sex marriage. In 2012, President Obama won Washington State over Mitt Romney by a margin of 56% to 41%.

More than five decades ago, the “one-person, one-vote” rulings of the Warren Court, especially Baker v. Carr and Reynolds v. Sims, dramatically shifted the balance of political power from rural to urban areas in many states. The same-sex marriage “referendums” in Maryland and Washington are reminders of how significant those decisions continue to be.

Is it Time to Bring Back the Marquette Law School Baseball Team?

Every now and then the debate over whether or not Marquette should re-establish its varsity football team gets revived. Once a respected participant in the highest level of college football, Marquette unceremoniously dropped football in 1960. (See also here.)

In spite of its long tradition in sports law, it is not a well-known fact that our law school once had its own baseball team. In his The Rise of Milwaukee Baseball: The Cream City from Midwestern Outpost to the Major Leagues, 1859-1901 (p. 324), Milwaukee historian Dennis Pajot notes that in 1895 a team called the Milwaukee Law Class competed with the city's other amateur teams.

The Milwaukee Law Class, organized by the city's law students in 1892, was Milwaukee's first law school. In the mid-1890s, its name was changed to the Milwaukee Law School, and in 1908, it was acquired by Marquette University. This is why the law school celebrated its centennial in 1992. (A second centennial celebration in 2008 marked the 100th anniversary of Marquette's acquisition of the Milwaukee Law Class/School.)

Unfortunately, we do not know very much about the 1895 team, except that the scores of some of its games were listed in Milwaukee newspapers that year. It is, of course, possible that the team began play before 1895, but with a lower profile. If it did originate before 1895, it seems likely that one of the founders and original players on the Law Class team would have been Walter Schinz.

Schinz (born 1874) was one of the founders of the Law Class in 1892 and later a prominent 20th-century Milwaukee County Circuit Court judge. He was also an avid baseball player during his youth and an enthusiastic fan of the national pastime until his death in 1957. Schinz' Milwaukee Sentinel obituary devoted much of its content to the judge's life-long love of baseball, which began when he was a sandlot player in Milwaukee in the 1880s.

There is no reason to believe that the Milwaukee Law Class baseball team was an exceptionally powerful club. At that point, the school probably had somewhere between 20 and 40 students, some of whom were probably fairly athletic but many of whom were probably not. The fact that there is no record of the team after 1895 suggests that its success was probably limited.

In contrast, the Milwaukee Medical College baseball team, which played from at least 1894 into the early 20th century, appears to have been a more powerful club. (The Milwaukee Medical College was an independent medical school which opened in 1894 and was taken over by Marquette University in 1907.)

In 1901, the Medical College team was a solid enough amateur club to have played the American League’s Milwaukee Brewers in an exhibition game just before the opening of the 1901 major league season. (The Brewers apparently won the game in a convincing fashion.)

The 1901 season was the first year that the American League played as a major league, and the Brewers were one of its original eight teams. Unfortunately, a disappointing last-place finish (48-89) and the league's lowest attendance led to the team's transfer to St. Louis in 1902, where the Brewers became the ill-fated St. Louis Browns (who are now the Baltimore Orioles).

After the 1908 acquisition of the Milwaukee Law School by Marquette University, law students were eligible to play on the Marquette varsity team, and a number, including future sports lawyer and Congressman Ray Cannon, apparently did.

Marquette University Law School and World War II

As I have described elsewhere on this blog, Marquette Law School Dean Francis X. Swietlik played a prominent role in public affairs during the Second World War, primarily because of his leadership role in the American Polish community. As the leader of the "Chicago Poles," as Midwesterners of Polish descent were known, Swietlik advised President Franklin Roosevelt on Polish issues and was a national spokesman for the cause of his ancestral country (Swietlik himself had been born in Milwaukee in 1899), which had been dismembered in 1939 by Nazi Germany and the Soviet Union.

However, the war was hardly kind to the law school, as its enrollment quickly shriveled as potential law students found themselves in military uniforms.

During the 1940-41 academic year, the law school appeared to be prospering with an enrollment of 225 students, all but eight of whom were males. (One of the male students was Emeritus Professor James Ghiardi, who was then a second year law student.)

Although United States involvement in the War would not come until the Japanese attack on Pearl Harbor in December of 1941, the institution of the military draft and the darkening clouds on the horizon led to a decline in students in the fall of 1941, as the total enrollment dropped to 187 students. Female enrollment dropped from eight to six.

When the United States declared war that December, the law school greatly accelerated its academic calendar, which originally extended into June, so that as many of the current third-year students as possible could finish law school before being inducted into the military. Professor Ghiardi graduated just days before entering military service.

By the beginning of the 1942-43 academic year, the number of students at the law school had dropped by more than 50%, to just 85 students, only 77 of whom were male. The situation got even worse after that, as enrollments for 1943-44 and 1944-45 were only 44 and 42 students, respectively.

To deal with the dramatically smaller classes, the law school cut the size of its faculty and moved to a three-semester-a-year format that allowed students to complete the law school course in just twenty-four months. Many of those who did enroll at the law school during the War were ineligible for military service. For example, James D'Amato of Waukesha, at 5'1", was too short for military service, while his classmate Clifford Thompson, who was reportedly over 8 feet tall, was both too tall and too old to be drafted. Thompson, who prior to law school had a successful career in Hollywood as an actor and as a performer with a number of circuses, achieved the distinction of being the tallest lawyer in American history after his admission to the Wisconsin bar in 1944. For more on Thompson's remarkable career, see my earlier post.

One might have thought that the onset of the war would have led to an increase in the number of female law students, but that did not happen at Marquette, as female enrollment amounted to only 5 students in 1943-44 and only 6 in 1944-45.

Moreover, the end of the war did not result in an immediate influx of new students into Marquette and other law schools. World War II did not officially end until the Japanese formally surrendered on September 2, 1945, and the logistics of demobilization made it impossible for many soldiers who wanted to pick up their lives by going to law school to enroll in time for the fall 1945 semester.

In 1945-46, enrollment at the Marquette law school did increase, but not as dramatically as one might have thought. The number of students climbed from 42 to 93 (including 11 women), but the deluge was yet to come.

The following year, 1946-1947, saw the floodgates open as 332 students, including 8 women, enrolled in the law school, which set a new all-time record for the institution.

To facilitate the movement of these former G.I.s into the legal profession as quickly as possible, the law school preserved the three-semester format and allowed students to enter the law school in any one of the three semesters, as they had been allowed to do during the war. It would not be until 1950 that the law school would return to the more traditional two-semester, three-year format.

A follow-up post will deal with the demolition and reconstruction of the law school faculty during the World War II era.

Remembering the 1964 All-Star Game

Last week's Major League All-Star Game was pretty entertaining, as All-Star games go. The game was reasonably close throughout, and the outcome was never entirely certain until the final out was made. Even though the American League jumped out to a 3-0 lead in the first inning, by the middle of the 4th inning the game was tied at 3-3. The AL went back up 5-3 in the bottom of the 5th inning, before the offense disappeared on both sides. Neither team scored after that point, and together they combined for only two hits and two walks.

The 2014 game also ended a string of somewhat one-sided games. In 2011 and 2012, the NL prevailed by margins of 5-1 and 8-0, while last year the American League shut out a hapless NL squad by a 3-0 margin.

Submerged in the discussion of the game were occasional references to the 1964 All-Star Game of fifty years ago. That game, one of the most exciting All-Star games of all time, was played on July 7, 1964, at the recently opened Shea Stadium, the new home of the hapless New York Mets. Shea had opened in April in conjunction with the 1964 New York World's Fair, which was situated on land immediately adjacent to the park.

In the 1964 game, the lead see-sawed back and forth. The American League went up 1-0 in the first inning, only to fall behind 3-1 as the NL tallied two runs in the 4th and another in the 5th. However, the junior circuit, as the AL was still referred to in that era, came back to tie the score in the 6th inning and then went ahead 4-3 in the top of the 7th when Los Angeles Angels shortstop Jim Fregosi (who passed away earlier this year) drove in New York Yankee catcher and reigning American League MVP Elston Howard with a sacrifice fly.

This one-run lead held until the bottom of the 9th inning. As the inning began, Hall of Famer Willie Mays faced Boston Red Sox relief pitcher Dick “the Monster” Radatz, who was pitching his third inning of the game. Radatz had previously been unhittable, retiring all six batters that he had faced, including four by strikeout. Suddenly, however, Radatz could not find the plate, and Mays drew a walk. The Say Hey Kid then stole second and a couple of pitches later came around to score on a bloop single to right by his San Francisco Giant teammate and fellow All-Star game starter Orlando Cepeda.

Actually, Mays would not have scored on Cepeda's hit but for first baseman Joe Pepitone's errant throw to the plate. Mays had already stopped at third base, but Pepitone threw home anyway. Unfortunately for the American League, his throw from shallow right field landed short of home plate and bounced over the head of catcher Elston Howard, allowing Mays to scamper home with the tying run while Cepeda advanced to second base.

With the score now tied, NL Manager Walter Alston inserted fleet-footed Curt Flood into the game as a pinch-runner for Cepeda. Radatz, apparently unshaken by his bad luck, then induced National League third baseman Ken Boyer, who had homered earlier in the game, to pop out to third base for the first out. AL manager Al Lopez then ordered Radatz to intentionally walk catcher Johnny Edwards, an average hitter at best, to set up a possible double play.

At this point, Manager Alston countered by sending legendary Milwaukee Braves outfielder Hank Aaron up to the plate to pinch-hit for Met second baseman Ron Hunt. Undeterred by Aaron's reputation as a clutch hitter, Radatz whiffed him for the second out of the inning. Radatz now had only to retire Philadelphia Phillie outfielder Johnny Callison to send the game into extra innings.

This was Callison's second at-bat against Radatz, and he alone among the National League batters had made solid contact against the 6'6" fireballer, having flied out to deep centerfield for the final out of the 7th inning. Rising to the occasion, Callison did even better in his second appearance.

Wasting no time, he blasted Radatz’ first pitch into the right field stands for a game-winning, three-run home run. Suddenly, a 4-4 tie, seemingly headed for extra innings, had become a 7-4 National League victory.

(Here is a highlight film of the game, which includes Callison's home run.)

As a reminder of how much baseball has changed since 1964, it is worth noting that the All-Star Game of 1964 differed from its 2014 counterpart in a number of ways, beyond having a much more exciting ending.

1. The All-Star Game was a daytime event. In the grand tradition of daylight baseball, before 1967 the All-Star game always began in the early afternoon in the Eastern Time Zone (which meant that it frequently began before noon on the West Coast). Watching the 1964 game, which began on NBC television at 12:45 p.m., presumably required many adults to figure out a way to get the afternoon off from work.

Fortunately, I was an unemployed 12-year-old, playing his final year of Little League Baseball, so I didn't have to worry about free time. In my circles, every boy my age felt obligated not only to watch the game but also to root for one league or the other. For the record, I rooted for the American League.

Nighttime All-Star Games were introduced in 1967, when the game began at 4:00 p.m. in Anaheim, California, which was 7:00 p.m. on the East Coast. Since then, no afternoon All-Star game has been played.

2. The All-Star Game in 1964 was first and last a baseball game. There was very little hoopla surrounding the game other than interest in its final score. Only the starters were introduced by name. Moreover, there was certainly nothing at the 1964 game comparable to the major production made of Derek Jeter’s impending retirement and the minor production around Bud Selig’s announced retirement as commissioner.

No player who participated in the 1964 All-Star game retired after the 1964 season, but it seems certain that if one had been planning to retire, he would not have announced it until the end of the season. Commissioner Ford Frick did retire the following year, but he waited until after the 1965 All-Star game to make the announcement. In 1964, a too-early retirement announcement would likely have been denounced as a form of self-aggrandizement.

3. Attending the game in person did not cost an arm and a leg. The most expensive tickets to the 1964 game — those in the box seats — sold for $8.40. If you were willing to sit in the bleachers, you could get in for a buck-twenty ($1.20). According to a Forbes Magazine story published at the end of this past May, the average ticket price for a 2014 All-Star game ticket on the secondary market was slightly more than $1,000, with the cheapest seats going for $367 each.

4. Fewer players made the All-Star team. In 1964, each All-Star team consisted of only 25 players, the number of players on an actual team during the regular season. Although the 25-man roster is still the rule in Major League Baseball, All-Star game rosters have been greatly expanded. This year, there were 34 players on each team. Roster expansion actually began back in 1969, when the number of teams in each league expanded from ten to twelve.

Technically, the expansion in the number of teams, currently 30, has been greater than the expansion in the size of the rosters. In 1964, 10% of Major League players (50 of 500) were named to an All-Star roster; in 2014, the figure was 9% (68 of 750). In 1964, as today, there was a rule that each major league team must have at least one representative on its league's All-Star roster.

5. Players, not fans, selected the starting line-up for the game. In 1964, the All-Star starters, except for the pitchers, were selected by a vote of Major League players. In 2014, the starters were selected by a vote of the fans.

Before 1947, the starting line-ups were selected by the All-Star team managers, but in 1947, the selection process was turned over to a fan vote. However, between 1957 and 1970, concern over "ballot-box stuffing" by fans of a particular team led to the adoption of a system that relied on player voting. Selection of the starting line-ups was returned to the fans in 1970. Since that time, "ballot-box stuffing" has been encouraged.

Throughout the history of the game, starting pitchers have always been chosen by the All-Star managers. The honor of managing the All-Star team, then as now, went to the manager of each league’s representative in the previous fall’s World Series.

In 1964, that would ordinarily have been Walter Alston of the Los Angeles Dodgers and Ralph Houk of the New York Yankees. However, after the 1963 season, Houk was promoted to General Manager of the Yankees with Yogi Berra named to replace him as field manager. Under the rules, Houk was not eligible to manage in the All-Star Game, and he was replaced, not by his Yankee successor, but by Al Lopez, the manager of the Chicago White Sox, who had finished second to the Yankees in 1963.

6. Many players never got into the game in 1964, and some starters played the entire game. Although substitutions were more frequent in the All-Star game than in a normal regular-season game, there was no expectation in 1964 that every player on the All-Star roster would be used in the game. In fact, it was assumed that several of the starters would play the entire game and that many of the reserves would not get into the game unless it went into extra innings. That a significant number of All-Stars went unused by their managers did not seem to generate much controversy in 1964.

In 1964, three National League and four American League starters (including AL catcher Elston Howard) played the entire game. Starters who were taken out usually came out only late in the game. At the end of the 8th inning of the 1964 contest, 10 of the 16 starters were still in the game. Altogether, only 37 of the 50 roster players appeared in the game; 5 of the 37 participated only as pinch hitters or pinch runners, and another played only one-half inning in the field. In other words, of the 19 position-player substitutes, only 6 played as much as one full inning. Six of the 15 pitchers saw no action at all. Among those who did not get into the game were future Hall of Famers Whitey Ford and Bill Mazeroski.

In contrast, in 2014, 62 of 68 eligible players (32 from the NL and 30 from the AL) made it into the game, and no 2014 position player's appearance was limited to pinch hitting, pinch running, or a single half-inning in the field. Moreover, every starter had been removed from the game by the end of the 6th inning. (Technically, NL DH Giancarlo Stanton was lifted for a pinch-hitter in the 8th inning, but that was only because a DH can be removed only by being pinch hit for. Stanton last appeared in the game in the 6th inning.)

Curiously, in 2014, for the third year in a row, no San Diego Padre player appeared in the All-Star game although there was, of course, a Padres player on the National League roster each year.

7. In 1964, almost all All-Star pitchers were starting pitchers, and pitchers were expected to pitch up to the three-inning maximum unless it was necessary to pinch hit for them. In 1964, American League manager Al Lopez of the White Sox chose only eight pitchers for his 25-man squad, and NL manager Walter Alston of the Dodgers chose only seven. This clearly indicated an expectation that several pitchers would hurl more than one inning. Then as now, pitchers were limited to pitching three innings, unless the game went into extra innings.

Fourteen of the 15 pitchers chosen in 1964 were starting pitchers. The one exception was Boston Red Sox reliever Dick Radatz, mentioned above, who had compiled a truly phenomenal record in relief. From 1962 to 1964, Radatz, while pitching for Red Sox teams that never finished higher than 7th place in the standings, managed to win 40 games and save 78 while striking out 487 batters in 414 innings, all in relief. In contrast, the pitching staffs of both leagues in 2014 were intentionally composed of starters, middle relievers, and closers.

Of the nine pitchers who appeared in the 1964 All-Star game, only one, Philadelphia’s Chris Short, was removed from the game simply so that someone else could pitch. (Short also gave up three hits and two runs in the only inning in which he appeared.) Both starting pitchers, Don Drysdale and Dean Chance, pitched the maximum of three innings, while two others, Juan Marichal and Dick Radatz, were still in the game when it ended. The other four pitchers who appeared were all removed from the game for pinch-hitters. Even though Radatz was a terrible hitter — his lifetime batting average at the start of the 1964 season was .083 — he was allowed to bat in the eighth inning so that he could stay in the game and pitch a third inning.

In contrast, 21 pitchers appeared in the 2014 game. No one pitched more than one inning, and eight pitched less than a full inning. Of course, with the use of the designated hitter in the modern game, the issue of whether or not to remove a pitcher for a pinch-hitter never arises. Nor, apparently, is the three-inning limitation of any consequence.

8. The two All-Star teams placed a greater emphasis on winning the game in 1964 than they do in the modern era. Judging by the way in which the two managers operated in 1964, this seems to be a valid conclusion. However, at the time, it was a common complaint that while the National League went all out to win each All-Star game, the American League seemed to view the event more as an exhibition game designed to showcase the sport’s stars. The fact that the American League managed only one win and one tie in the 13 All-Star games played between 1960 and 1969 seemed to lend some support to this theory. (There were 13 games in the decade because two All-Star games were played in 1960, 1961, and 1962.)

As mentioned, the way in which both managers handled their rosters in 1964, especially compared to their 2014 counterparts, does suggest that winning meant more to the managers fifty years ago, if not to the players, than it does now. Although Major League Baseball introduced the “league that wins the All-Star Game gets home field advantage in the World Series” feature in 2003, to try to make the All-Star Game appear more significant, getting as many players as possible into the game still seems to be the primary objective of both managers.

9. Fifty years ago, All-Star Games didn't last as long as they do now. Presumably because they featured fewer substitutions and fewer pitching changes, the All-Star Games of the 1960's were shorter than those of today. Even with a full 9th inning and more scoring, the 1964 game lasted only 2:37, compared to 3:13 for the 2014 event.

Returning College Athletics to College Students


There is a simple way to end the hypocrisy that is modern college sport and at the same time preserve the much-beloved pageantry of men's college football and basketball.

First of all, we need to embrace the idea that college athletics should be a part of the educational mission of colleges, and not part of their "providing entertainment" function. Subject to the exception for men's football and basketball set out below, participation in college athletics should be limited to regularly enrolled students who choose to attend their college free from the enticement of special financial support.

The first step is to abolish all athletic grants-in-aid (euphemistically called athletic scholarships) except for those awarded in men's football and basketball. Except for a few pockets of fan support for college baseball, hockey, and women's basketball, the simple fact is that most sports fans do not care about college sports other than football and men's basketball.

It is foolish for colleges to "hire" players for their "non-revenue" sports teams at great cost when there are so many regularly enrolled students who would be happy to participate on those teams without additional financial inducements. Marquette, for example, does not need to give athletic grants-in-aid to have men's and women's teams in tennis and soccer. Lots of current students would jump at the opportunity to be a member of one of those teams.

Obviously, the teams recruited from the ranks of the regular student body would not likely be as talented as those that are purchased with grants-in-aid; but what should matter more is that under this proposal regular students would have the opportunity to enjoy the benefits of athletic participation, rather than simply have the option of sitting in the bleachers, watching their professional “classmates.”

For the vast majority of students, even those who devoted much of their pre-college years to competitive sports, college athletic participation opportunities today are pretty much limited to the intramural and club sports. The unrecruited varsity “walk-on” who plays a meaningful role on a college sports team has become almost as rare as the college football player who is awarded a Phi Beta Kappa key.

Men’s football and basketball programs are exempted from the proposed grant-in-aid ban for purely historical reasons. Unlike the case in every other country in the world, at an early date in the United States colleges and universities, rather than private sector clubs or the state itself, assumed the role of sponsoring developmental professional leagues for men’s football and basketball. In this role, college teams in both sports came to be treated as the equivalent of the major professional sports leagues, at least with regard to fan interest.

“Big time” football schools have performed this function for more than a century, and having cultivated enormous fan-bases that extend well beyond the college community, it would not be feasible, or even desirable, to scale back the level of competition in these two men’s sports.

This proposal would obviously require a modification of Title IX, or at least its reinterpretation, but that should not be problematic. Title IX has from its beginning been about expanding educational opportunities and not about providing subsidies for elite athletes.

Freed from a mechanical application of Title IX, this proposal would greatly expand educational opportunities. By eliminating athletic grants-in-aid in all other sports and by dramatically reducing athletic travel budgets, colleges could expand the number of varsity and junior varsity opportunities for their students, both men and women. Title IX would still require schools to provide equal opportunities for male and female students, but the moneys spent on men's football and basketball would no longer be part of the calculation. The money that would have gone to athletic grants-in-aid for non-revenue sports could be added to the institution's regular financial aid budget.

Because “college” football and basketball are still inextricably linked to the idea that the players are students at the institutions they represent, scholarship players in men’s football and basketball would be required to remain enrolled as full-time college students, as they are now. Current eligibility rules could remain in place; players would still receive athletic grants-in-aid; and there would be no problem, at least from the perspective of this proposal, if the amount of the grant was increased to provide for additional spending money.

Schools with scholarship programs in men’s football and basketball could also operate non-scholarship teams in these two sports. Hence, Marquette could have both a scholarship varsity basketball team and a non-scholarship varsity team, each playing a separate schedule and likely in different conferences. While fan attention would likely continue to focus on the scholarship varsity team, the non-scholarship second team would give some regular Marquette students who enjoy playing basketball the opportunity to experience the benefits of participation in intercollegiate sports.

This proposal would return most college sports to students who come to college for the purpose of broadly preparing themselves for their future. It would take athletics away from those whose primary concern, reasonable or not, is for a career as a professional athlete. Superbly talented golfers, tennis players, and baseball and hockey players will find other ways to demonstrate their potential for professional careers in sport.

I know that some will object that this proposal will adversely affect those students whose only path to college is through a grant-in-aid in a non-revenue sport. However, I don’t see that as persuasive. There is nothing that will prevent a college from giving such a student regular financial aid if the student has academic potential as well. Alternatively, the school could take the money that would have gone for the athletic grant-in-aid and instead give it to an equally needy student with even greater academic potential.

This proposal could be implemented by voluntary action on the part of colleges and universities, either under the umbrella of the NCAA or outside of it. It could also be legislated into existence by Congress. However adopted, this proposal would benefit both athletics and higher education.

Why Did the Washington Redskins Choose the Name “Redskins” in the First Place, Rather than Some Other Native American Name?


[This is a continuation of an earlier post, “Why the Redskins are Called the Redskins.”] 

In a recently "discovered" Associated Press story of July 5, 1933, owner George Preston Marshall of the National Football League's Boston franchise is quoted as saying that he was changing the team's name from "Braves" to "Redskins" to avoid confusion with Boston's baseball Braves. This bit of evidence has been offered as disproof of the contemporary Washington Redskins' claim that the name change was made to honor the team's newly appointed Indian coach, William Lone Star Dietz.

However, that is not necessarily the case. All the quote really establishes is that Marshall felt he had to change the team's name before the 1933 season began; it does not explain why he chose "Redskins" as the replacement. The name change was apparently necessary because Marshall had entered into an agreement for his team to play in Fenway Park in 1933, rather than in Braves Field, as it had done in 1932.

The story of how the team came to choose the name "Redskins" is a complicated one, and one for which the evidence is somewhat sketchy.

One thing that is clear is that several months before July 1933, Marshall had decided that he was going to bring "Indian football" back to the National Football League. Indian football was a wide-open brand of early twentieth century football, usually played by Native American teams, that featured lots of passing and trick plays. It was most strongly associated with the college teams fielded by the Carlisle Indian Industrial School in Carlisle, Pennsylvania, between 1893 and 1917, and during the 1920's with the Haskell Indian Institute teams from Lawrence, Kansas. For two years, 1922 and 1923, the National Football League had also included the Oorang Indians, an all-Native American team based in Larue, Ohio, led by player-coach Jim Thorpe.

It is likely that the availability of Coach Dietz, a well-known figure in college football who had been a teammate of Thorpe at the Carlisle Indian School and had taken teams to two Rose Bowl games, figured into this decision. At the time of his hiring by Marshall, Dietz was the coach at the Haskell Indian Institute and was famous for the "trick" plays and unconventional formations deployed by his teams. While it is true that Marshall had long been fascinated by certain aspects of Native American history, it seems likely that the availability of Dietz, combined with the resignation of previous head coach Lud Wray, led him to embrace the idea of reviving Indian football when he did.

Although Marshall’s team had begun play in the NFL as the Boston Braves in 1932, little effort was made that first year to exploit the Native American connection. Unlike the Boston Braves baseball team, which was the first American sports team to wear an Indian insignia on its uniforms, the 1932 football Braves deployed no such imagery. In 1933, in contrast, Marshall planned to fully exploit the Native American connection. An Indian head symbol was adopted as the team’s logo and placed on the front of the players’ jerseys, and Marshall encouraged Dietz to recruit some Indian players for the team. (At least six Native Americans, most of whom had played for Dietz at Haskell, had tryouts with the team, and four made the final roster.)

In marketing the team before the 1933 season, Marshall had Dietz and some of the Indian players photographed in full Native American regalia, and during the first home game of the 1933 season the players, Indian and non-Indian alike, were required to wear war paint on their faces. Dietz stalking the sidelines wearing his Sioux headdress was also a regular sight at the team’s games, and the team’s new playbook had a clear Indian football slant. (Whether Dietz’s plays would work in the NFL was a different question.)

The original plan was to play in 1933, as in 1932, under the name Boston Braves, but with a much greater “Indian” emphasis. The decision to relocate to Fenway Park necessitated giving up the name Braves, but Marshall’s commitment to Indian football required that the team’s new name also refer in some way to Native Americans.

But why did Marshall choose “Redskins,” rather than some other name that would reflect the team’s inspiration? Why not “Indians,” or “Warriors,” or “Chiefs”?

In the American sporting landscape of 1933, there were only a handful of sports teams with Native American names. During the 1932 and 1933 seasons, for example, 14 teams in major and minor league baseball had Native American nicknames. "Indians" was by far the most popular, paired with the city name of teams in Cleveland, Indianapolis, Seattle (also called the Rainiers), Oklahoma City, San Antonio, Quincy, Illinois, and Keokuk, Iowa.

In addition, three teams used the name "Chiefs" (located in Worcester, Massachusetts, Ft. Wayne, Indiana, and Muskogee, Oklahoma); two used "Braves" (Boston and Pueblo, Colorado); and the Mobile, Alabama team in the short-lived Southeastern League was called the "Red Warriors." One team, the Memphis "Chickasaws" of the Southern Association, used a tribal name associated with its region. In college football, there were several teams with Native American names, but most, like Stanford, Dartmouth, and William and Mary, used "Indians." On the other hand, there were two schools–the University of Utah and Miami University of Ohio–that used "Redskins" as their nicknames.

Consequently, if Marshall wanted to use a Native American team name that was somewhat familiar, his options were limited. "Braves" was out, of course, and there was an unwritten rule in the National Football League in that era that nicknames used by major league baseball teams were reserved for the NFL teams that played in the same city. ("Braves" had been reserved for the Boston team in 1932 under this same principle.) As a result, "Indians" was also not available to Marshall. Teams called the Cleveland Indians had played in the NFL in 1921, 1923, and 1931, and in 1933, it probably seemed likely that a new Cleveland Indians team would enter the league at some point in the future.

For all practical purposes, the list of familiar Native American team names available to Marshall was limited to "Warriors," "Chiefs," and "Redskins," unless he chose to adopt a tribal name, as baseball's Memphis Chickasaws had done. Unfortunately, none of the tribal names associated with Boston or eastern Massachusetts—Wampanoag, Massachusett, Nauset, Nantucket, Pennacook, Pokanoket, or Pocasset—were particularly evocative, or recognizable, or even pronounceable.

The decision to choose "Redskins" may have been based, as I have argued earlier, on the phonic similarity between "Redskins" and "Red Sox," the name of the other team using Fenway Park in 1933. Although their meanings were different, the two names sounded alike, and it would be easy for fans to link them together. There was also an element of novelty to the name. Although the term "Redskins" was familiar to sports fans–sportswriters had regularly used "Redskins" as a synonym for "Indians" or "Braves" for years when writing about the baseball teams in Cleveland and Boston, or the football team in Cleveland–no team in the NFL had ever been officially called the "Redskins."

Nor had there ever been a Redskins team in Major League Baseball. In fact, only once had a minor league baseball team used the name "Redskins." That team, based in Muscogee [Muskogee], Oklahoma, played under the name "Redskins" in the Oklahoma-Arkansas-Kansas League in 1907, the Oklahoma-Kansas League in 1908, and the Western Association in 1911. In an era in which team nicknames were quite fluid, the Muscogee team, which existed from 1905 to 1911, also played under the names "Reds" (1905); "Indians" (1906); and "Navigators" (1910-11).

While the name “Redskins” was used by at least two college teams in 1933, neither was a national powerhouse, so when the New England sporting public was presented with the new name in 1933, it probably sounded new and distinctive, but at the same time, not unfamiliar.

In addition, there are two other factors that may have influenced Marshall’s choice of “Redskins.”

One relates to the 1929 movie Redskin, which, while not particularly well remembered today, contained one of the most sympathetic portrayals of Native Americans of the silent film era and is well known to film historians. Redskin is the story of a young Navajo man named Wing Foot who unwillingly attends a government-operated Indian boarding school. After a period of adjustment, he does well at the school and later wins a scholarship to a prestigious eastern college, where he earns great honor as a student and as an athlete. Nevertheless, his accomplishments are undercut when Wing Foot discovers that he will ultimately be denied entry into white society because of his race. Moreover, when he returns to the Navajo as an educated man, he is rejected because of his white ways. Wing Foot finds himself trapped between the two cultures, no longer fitting into either one.

The movie was highly praised at the time for its sensitive portrayal of the plight of the Native American, and in 1930, the white actor Richard Dix, who had played Wing Foot in the film, was made an honorary member of the Kaw Indian tribe based on his supposedly realistic portrayal of a Native American in the film. The ruggedly handsome Dix had been a star athlete in his youth and in some accounts had briefly played football at the University of Minnesota. In the 1920’s he seemed to specialize in sports-related movies, portraying football and baseball players, amateur and professional boxers, auto racers, and aviators in a variety of films. Prior to his performance in Redskin, he also had won plaudits for his portrayal of a Native American character in the equally well-regarded 1925 film, The Vanishing American.

While it is hard to know precisely what Marshall thought of the film, he was certainly aware of it, given his personal connections to Dix and to Louise Brooks, who was also involved in the making of the film. Marshall had been an acquaintance of Dix (then known as Ernest "Pete" Brimmer) when the two men were young actors affiliated with the Morosco Theater in New York in the late 1910's. Although his own career as an actor ended when he took over the family laundry business following his father's unexpected death in 1918, he remained fascinated with Broadway and Hollywood; he regularly socialized with show business people and eventually married silent film star Corinne Griffith. In this context, he seems likely to have followed Dix's film career, particularly his roles in movies involving sports, which were also a long-standing passion of Marshall's.

Moreover, actress Louise Brooks, with whom Marshall had a highly publicized love affair in the late 1920’s, was originally cast in Redskin as Corn Flower, Wing Foot’s Pueblo Indian love interest. Since this casting occurred during the Marshall-Brooks relationship, Marshall surely was aware of the movie, even before it went into production. (Brooks was eventually pulled from the cast so that she could star with William Powell and Jean Arthur in her first talking role in The Canary Murder Case, so she does not actually appear in Redskin.)

Given these connections, one possibility is that the name "Redskins" appealed to Marshall because it allowed him to envision a team of Richard Dix-like athletes—even to the point of most of them being white men portraying Native Americans. Another possibility is that Marshall was sensitive enough to see a connection between the character of Wing Foot in Redskin and Coach Dietz and the Indian players to be recruited for the team, all of whom, like Wing Foot, had presumably been "Americanized" by Indian schools and team sports like football.

It is also possible that the hiring of Lone Star Dietz did affect Marshall's decision to call the team "Redskins." It is hard to track linguistic changes with precision, but over the course of the Twentieth Century, the meaning of the term "Redskin" shifted from a generic synonym for "Indian" or "Native American" to a term that suggested a particular type of Indian—i.e., a war-like Plains Indian of the 1870's or 1880's. This, of course, was the time and place in which the vast majority of American western novels and movies of the mid-Twentieth Century were set. The continued usage of the term "Redskins" in western movies and western fiction, and later in western television shows, combined with the gradual disappearance of the term from general usage, led to a change in the meaning of the word for many Americans. However, the extent to which this shift in meaning had occurred by 1933, and to what extent it had occurred for George Preston Marshall by that year, is hard to gauge.

More specifically, it may be that the presence of Lone Star Dietz, who claimed to be a member of the Sioux Tribe, affected Marshall's thinking. William Dietz was one of the great imposters in American sports history. Although it is possible that his birth mother was Native American (possibly an Ojibwa), he was raised by two German-American parents in Rice Lake, Wisconsin, and did not begin to present himself to the world as a Native American until he was nearly 20 years old. Often claiming to be the half-breed child of a Sioux woman and a German-American engineer, and to have grown up on an Indian reservation in South Dakota under the name Lone Star, Dietz was extraordinarily successful in convincing Native Americans that he was one of them.

Also a gifted artist who focused on Native American subjects, Dietz managed to talk his way into the Carlisle Indian School, where he was a student and an instructor, as well as a star lineman on the football team. His first wife, Angel Decora, a noted Indian artist, believed he was a Native American, as did all of his Carlisle teammates and the players, Indian and non-Indian, whom he coached at Washington State, the Mare Island Marine Base, Purdue, Wyoming, Louisiana Tech, and Haskell. Although the accuracy of his heritage claims was occasionally challenged, Dietz lived his entire adult life successfully holding himself out to be a Native American.

In retrospect, it is easy to disparage both Marshall and Dietz as frauds. Both inhabited personas of their own design. However, at the time of Dietz’s hiring, Marshall clearly believed that his new coach was a Sioux Indian. And, thanks to the legacy of the Dakota War of 1862, Custer’s Last Stand at the Battle of the Little Big Horn, Crazy Horse, Sitting Bull, and Buffalo Bill’s Wild West Show, no tribe, except possibly the Apaches of the Southwest, better exemplified the warlike Native Americans of western movies who were increasingly associated with the term “Redskins” (and who were often depicted as members of the Sioux Tribe).

It is possible, then, that Marshall chose to rename his team "Redskins" because he thought that the nickname was particularly appropriate for a team coached by an actual Sioux Indian. If that is what happened, then it may be true that the name was actually chosen to "honor" Lone Star Dietz.

We will never know for certain exactly why George Preston Marshall chose the name "Redskins" in the summer of 1933. As a general rule, Marshall was close-mouthed about his motivations, and he left little in the way of letters or diaries that might reveal his real thoughts. Most likely, his decision to select the "Redskins" name was a result of all of the factors discussed above.

Whatever the explanation for the selection of “Redskins,” the significance of the change has probably been exaggerated, thanks to the shift in meaning that has occurred in regard to the word “Redskins” since 1933, and especially since the 1970’s, during which time the word “Redskin” has become widely perceived to be an ethnic slur, something that was not originally the case.

The Associated Press story regarding the name change mentioned above also ran in the Salt Lake Tribune on July 6, 1933. That newspaper, which closely covered the University of Utah Redskins during the football season, ran the story under the headline "Boston Pro Grid Team Alters Name." Not "changes" name, but merely "alters" name. In 1933, at least, the difference between "Braves" and "Redskins" seemed pretty insignificant to most American sports fans.

The legitimacy of non-Native Americans using Native American signifiers as sports team nicknames is an important part of the ongoing discussion concerning the proper use of racially-oriented vocabulary in American culture. Whether the use of Indian nicknames by non-Indian sports teams represents the improper appropriation of someone else’s cultural property, or whether it is a permissible use of materials properly in the public domain, is an important issue about which well-meaning people clearly disagree. However, the continued squabbling over the name “Redskins,” usually with very little attention to the complex history of the name, contributes very little to this important debate.

Why the Redskins Are Called the Redskins


With 50 United States senators signing a letter to the commissioner of the NFL urging him to pressure Daniel Snyder, the owner of the Washington Redskins, to change the team's name, and Congressman Henry Waxman calling for the House Energy and Commerce Committee to hold hearings on the name, it is clear that the controversy over the name "Redskins" has yet to subside.

In his Wednesday, May 27, column in the Washington Post, Robert McCartney purported to rebut the Redskins' claim that the team was named the Redskins in honor of its Native American coach William "Lone Star" Dietz (who, it turns out, may not have been an Indian at all, though that was clearly unknown to team owner George Preston Marshall at the time). The source of McCartney's proof is a July 6, 1933 AP story that quoted Marshall to the effect that he changed the team's name from "Braves" to "Redskins" so that he could avoid confusion with the Boston Braves of baseball's National League and so that he could continue to use the team's new Indian head logo.

McCartney is clearly correct on that point.  The team already had a Native American name (Braves) when it signed Dietz as its coach.  The name was changed, as Marshall indicated in the above quote, because the team was moving to a new venue within the city of Boston.  (The team did not move to Washington until 1937.)

Here is the story:

*In 1932, George Preston Marshall and three partners were awarded an NFL team on the condition that it be located in Boston, where the previous NFL team had folded after the 1929 season.

*Needing a place to play, the new team had limited options.  Fenway Park was not available because of a city ordinance that prohibited professional sporting events on Sundays within a certain distance of a church (and Fenway was within it); Harvard would not rent out its famous stadium to professional teams; and the Boston College field was not enclosed.  The only real option was playing in Braves Field, the home of the Boston Braves baseball team.  Moreover, the baseball Braves' owner, Emil Fuchs, was a friend of Marshall's co-owner Jay O'Brien, a well-known New York investor and playboy.

*Having decided to play in Braves Field, the new team sensibly adopted the same name as the baseball club.  This practice was quite common in the early history of the NFL for teams in cities with major league baseball teams.  The pre-1932 NFL at different times featured teams with "baseball" names like the Cleveland Indians, Washington Senators, Detroit Tigers, New York Giants, New York Yankees, and Brooklyn Dodgers, as well as the Chicago Bears, whose name was a variant of the Chicago Cubs'.  Moreover, in 1933, the year following the creation of the Braves, the league added teams called the Pittsburgh Pirates and Cincinnati Reds.  In addition, NFL teams from Buffalo, Kansas City, Hartford, and Louisville had earlier used the names of local minor league baseball teams.  Consequently, there was nothing particularly special about the new Boston team using the name Braves.

*During the 1932 season, the Braves went 4-4-2, without making any special effort to emphasize the fact that the team had a Native American nickname.  Braves Field was nicknamed the Wigwam, but that name had been used for years before the football team was created in reference to the baseball Braves.

*However, a sequence of events following the 1932 season would lead the Boston team to change both its playing field and its nickname. The first step came when Lud Wray, the team’s coach, resigned to become the co-owner of the expansion Philadelphia Eagles.  To replace Wray, Marshall hired Lone Star Dietz, a famous college coach, who was at the time the head coach of the Haskell Indian School in Kansas.

*Having hired Dietz, Marshall, a born showman who had long been fascinated with Native Americans, decided to revive "Indian football."  Coach Dietz may well have been the inspiration, since he had been a teammate of Jim Thorpe at the Carlisle Indian School when that institution ruled college football.  Moreover, only a decade earlier, the NFL had featured an all-Indian team, the Oorang Indians, which in 1922 and 1923 had been captained by Thorpe, universally viewed as the greatest football player in American history.

*Marshall encouraged Dietz to sign Native American players—six ended up on that year's Boston team—and he decided to add an Indian emblem to the team's uniform and planned a variety of Native American symbols, ranging from war paint on the players' faces, to Dietz's Indian headdress, which he wore on the sidelines, to the supposedly Indian-inspired trick plays that filled Dietz's playbook.  These plans were in place while the team was still planning to play the 1933 season as the Boston Braves.

*Nevertheless, subsequent developments would bring the career of the Boston Braves football team to a sudden close.  For a variety of reasons, Marshall was not happy with Braves Field, which he felt was poorly maintained by the penny-pinching Fuchs.  O'Brien had dropped out of the ownership group after the 1932 season, and Marshall apparently did not get along with Fuchs, who he felt was also overcharging the football team when it came to rent.  (Fuchs did not own Braves Field and was subject to an onerous master lease himself.)

*That same summer, Boston repealed the "close to a church" ordinance, just as substantial renovations to Fenway Park were completed.  Given the opportunity to move to a newer, nicer park at lower rent, Marshall signed a lease with Tom Yawkey, the owner of the Red Sox and Fenway Park, that guaranteed the football team a new home for the 1933 season.

*Given that he was no longer a subtenant of the Braves, he had very little incentive to have his football team continue to play under that name.  On the other hand, he was committed to the idea of bringing back Indian football, but the pool of Indian names was limited.  The Cleveland Indians had played in the NFL as late as 1931, and that name appeared to be informally reserved for a future Cleveland team.  Consequently, Marshall chose the name Redskins, in part, one suspects, because of the way that it echoed “Red Sox.”

*In the summer of 1933, the term Redskins was widely viewed as a synonym for Indian and as no more or no less pejorative than names like Indians, Braves, Warriors, or Chiefs.  Recent events have made it clear that many Americans today, both Indian and non-Indian, view Redskins as an objectionable name.  However, that is a consequence of much more recent linguistic changes and had nothing to do with the decision to adopt the name Redskins in 1933.

A fuller account of this story and the history of Native American team names in pre-World War II America can be found here (http://scholarship.law.marquette.edu/facpub/564/).

Understanding the Constitutional Situation in Crimea


As the eyes of the world turn today (Sunday) to the Crimean referendum regarding separation from Ukraine and reunification with Russia, it is worth remembering that there have been a number of previous referendums on Crimea’s status, and almost all of them have produced highly ambiguous results.

Crimea, currently an "Autonomous Republic" under the Ukrainian Constitution, had been part of the Russian Empire from 1783 until the empire's collapse in 1917. In the early Soviet period, it was part of the Russian Federation Soviet Socialist Republic and not the Ukrainian Soviet Socialist Republic. During the 1940's, much of the region's indigenous Tatar population was forcibly relocated to other parts of the Soviet Union, a move that allowed ethnic Russians to become a majority in the region.

The first referendum was one that did not occur. Under the Constitution of the Soviet Union, no territory could be transferred from any of the 15 constituent S.S.R.'s without the approval of the affected people. In 1954, for reasons that are still not clear, Soviet leader Nikita Khrushchev, an ethnic Russian who had previously been appointed by Josef Stalin to head the Ukrainian S.S.R.'s government, secured approval of the transfer of Crimea to the Ukrainian S.S.R., even though only about 20% of the Crimean population at that time was of Ukrainian ancestry.

The required referendum was never held. At the time, no one imagined that the Soviet Union would someday collapse, and given that all important decisions in the U.S.S.R. were made in the Kremlin, the transfer did not seem of great consequence. Crimea was simply incorporated into the Ukrainian S.S.R. after 1954.

At the very end of the Soviet period, however, the status of Crimea under the Constitution of the Ukrainian S.S.R. was changed. Since 1954, Crimea had been treated simply as one of twenty-some oblasts (the principal subdivisions of Ukraine), but as the result of a state-sanctioned referendum held on January 20, 1991, Crimea became an "Autonomous Republic" within Ukraine.

As an "Autonomous Republic" (a category used in the Russian Soviet Federative Socialist Republic), Crimea was granted powers not possessed by the oblasts, including the right to have its own written constitution, legislature, and budget. The Ukrainian government's consent to the referendum was essentially an acknowledgement of the fact that Crimea had not been thoroughly integrated into the rest of Ukraine.

The next referendum came in December 1991 and confirmed the collapse of the Soviet Union.

In July 1990, the Ukrainian Verkhovna Rada (parliament) had adopted a Declaration of State Sovereignty, which asserted the superiority of Ukrainian law over Soviet law but left Ukraine still part of the Soviet Union. However, the following year, after an unsuccessful military coup directed against Mikhail Gorbachev, the progressive head of the Soviet Union, the Verkhovna Rada declared Ukraine's independence from the U.S.S.R. on August 24, 1991.

The independence declaration was, however, subject to approval in a national referendum scheduled for December 1 of that year. Voting on the proposition, "Чи підтримуєте ви Акт про незалежність України?" ("Do you support the Act of Independence of Ukraine?"), over 84% of the electorate turned out, and over 92% of those who voted supported the independence resolution. Polling data at the time also suggested that more than 55% of ethnic Russians in Ukraine supported the decision to leave the Soviet Union.

While the vote on independence passed by an overwhelming majority, support was not uniform, and nowhere was the population more divided than in Crimea. At the time of the vote, Ukraine was divided into 27 administrative units: 24 oblasts, one autonomous republic (Crimea), and two independent cities, Kiev (Kyiv) and Sevastopol (which was on the Crimean peninsula, but technically separate from the Crimean Autonomous Republic).

In 20 of the 27 districts, over 90% of those who voted, voted for independence. In 5 of the remaining 7 districts, "yes" votes exceeded 83% of the total votes cast. In contrast, the "yes" vote in Crimea was only 54%, and in Sevastopol, it was only slightly higher, at 57%.

Moreover, one might assume, as some commentators have, that most of the 16% of eligible voters who failed to vote in the referendum were supporters of remaining in the Soviet Union who considered the secession referendum illegitimate. Even if this were true, independence was still supported by a substantial majority (more than 64%) of eligible voters in 25 of the 27 electoral districts.

In Crimea, however, "yes" votes amounted to only 37% of eligible voters (which, given the 54% "yes" share of the votes cast, implies a turnout of roughly two-thirds), and in Sevastopol, the figure was just 40%. Moreover, it has been argued that many of the Russian-speaking Ukrainians who voted for independence believed that they were voting to abolish the Soviet Union, and that abolition would be followed by some sort of reunification with a non-Communist Russia.
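
The turnout rates implied by these figures are easy to back out. In the following sketch, the regional turnout numbers are inferences from the percentages reported above, not separately reported data:

```python
# Implied turnout: if "yes" won 54% of the votes cast in Crimea but amounted
# to only 37% of eligible voters, turnout must have been roughly 37/54.
# The same logic applies to Sevastopol (57% of votes cast, 40% of eligible).
def implied_turnout(yes_share_of_votes: float, yes_share_of_eligible: float) -> float:
    return yes_share_of_eligible / yes_share_of_votes

print(f"Crimea:     ~{implied_turnout(0.54, 0.37):.0%} turnout")   # ~69%
print(f"Sevastopol: ~{implied_turnout(0.57, 0.40):.0%} turnout")   # ~70%

# Nationally, 84% turnout with a 92% "yes" share means that
print(f"Ukraine: ~{0.84 * 0.92:.0%} of all eligible voters voted yes")  # ~77%
```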

After 1991, the status of Crimea within the now-independent Ukraine was a major political issue. The politics of the 1990's featured a continuous struggle between the central government in Kiev and the local authorities in Crimea before the matter was finally resolved in 1998.

Almost immediately after independence, the Crimean parliament sought to assert its autonomy, going so far as to declare Crimea's independence on May 5, 1992, only to retract that declaration the following day. On May 6, the newly adopted Crimean Constitution was amended to identify Crimea as part of Ukraine (albeit a highly autonomous part). In June 1992, the Ukrainian parliament recognized Crimea's status as an "Autonomous Republic" under the Ukrainian Constitution, but the controversy over the scope of the powers of the Crimean government was not resolved until December 23, 1998, when the Verkhovna Rada accepted a new, less ambitious constitution that had been adopted in Crimea two months earlier. (Article 135 of the Ukrainian Constitution provides that the Crimean Constitution must be approved by the Ukrainian parliament.)

Periodically over the past six decades, some Russians have claimed that the 1954 transfer was illegitimate. Nevertheless, in 1997, Russia and Ukraine entered into a treaty agreement that recognized Ukrainian sovereignty over the Crimean peninsula.

Like everything else in Ukraine, the situation in Crimea is incredibly complex and the product of a history that is largely unpleasant. However, under the existing constitutional arrangements in Ukraine, neither oblasts nor autonomous republics enjoy a right of secession. Moreover, Russian support of the secession effort appears to be in violation of the Russian Federation’s prior treaty commitments.

Professor Hylton served on the U.S.-Ukraine Foundation's Advisory Commission for the Ukrainian Constitutional Court from 1997 to 1999. He was a Fulbright Scholar in Ukraine in 2000 and has returned to lecture in Ukraine on several occasions, including during the Orange Revolution of 2004. He currently serves on the advisory board of the Ukrainian political science journal Kyiv-Mohyla Law and Politics, which is published by the National University of Kyiv-Mohyla Academy.

Why Are There So Many Major College Post-Season Conference Basketball Tournaments When Forty Years Ago There Were Almost None?


In the modern world of college basketball, every Division I conference except the Ivy League sponsors a post-season conference tournament. In 2013, there were 31 such tournaments.

For teams that have played extremely well during the regular season, these tournaments are not crucial, but a good performance can improve a team's seeding in the NCAA tournament. For teams on the proverbial bubble, a good performance, even short of a conference championship, can be enough to push a team into the field of 68.

For teams that have no chance of being selected for the post-season on the basis of their regular season performance, their fans can always hope for a miracle run that will allow them to claim their conference’s championship and its automatic bid to the “Big Dance.”

It is not hard to understand the popularity of these tournaments. They bring together into a single building all of the conference's teams as well as a congregation of fans from across the conference. Some fans are willing to spend large sums to attend the tournament in person, and thousands more are happy to watch it on television or listen to the games on the radio. Fans of underperforming teams know that somewhere out there in the basketball stratosphere there is a team with a losing record that is going to catch fire and will end up making the NCAA tournament. With luck, that team will be their team.

However, students of the history of college basketball know that 40 years ago, such tournaments were quite rare in major college basketball. Although district championship tournaments were ubiquitous in high school basketball in the 1950s and 1960s, they were once shunned by college conferences.

As late as 1970, only five college conferences had ever used a post-season tournament, and four of the five were linked to the Southern Intercollegiate Athletic Association of the early 1920s. In the 1950s and 1960s, the post-season tournament was associated with two conferences, the Southern Conference and the Atlantic Coast Conference, both of which drew their schools from the Carolinas, the Virginias, and the District of Columbia and Maryland.

This essay addresses two questions. First, how did the Southern Conference and the ACC come to decide their conference championships on the basis of a loser-goes-home tournament when every other conference used the regular season record for that purpose? Second, how did the conference tournament become so common after the mid-1970s when it had been so rare only a few years before?

The Origins of the Post-Season Basketball Tournament

It is one thing to have a post-season tournament that matches championship teams from different conferences that are unlikely to have ever competed against each other. It is quite a different thing to play an entire season to establish a set of standings, only to redo them in a three- or four-day span.

The first college post-season tournament, while limited to members of a single conference, was really more like the former than the latter.

The first post-season college conference basketball tournament was staged in 1921 by the Southern Intercollegiate Athletic Association (SIAA). The SIAA had been founded in December 1894, as an umbrella organization that could oversee and, if necessary, police intercollegiate athletics at southern universities. Its purpose was not to organize athletic competitions and crown champions.

Membership in the SIAA varied from year to year. Seven schools were at the organizational meeting in 1894, and 17 were designated as charter members in 1895. Eventually 72 different colleges joined the organization at one time or another, and in any given year, the number of member schools was typically somewhere between 30 and 40.

Traditionally, the SIAA did not attempt to organize championship competitions, although it did from time to time organize track and field events. In 1921, however, it decided to sponsor a basketball tournament in Atlanta, Georgia, open to any member school that wished to participate. The winner would be designated the Association champion for 1921.

Somewhere in the neighborhood of 16 colleges decided to compete, and the tournament title went to the University of Kentucky, which defeated Tulane, Mercer, Mississippi A&M (now Mississippi State), and the University of Georgia in the single-elimination affair.

Few of the schools that entered the 1921 competition had played each other during the regular season. Kentucky, for example, had gone 9-1 during its regular season, but it had only played the University of Cincinnati and other college teams in Kentucky and Tennessee.

A second tournament was held in 1922, and this time the competition was won by the University of North Carolina.

By 1922, the SIAA was on the verge of breaking apart over certain policy issues like freshman eligibility for varsity participation, and whether college athletes should be permitted to play baseball for money during the summer vacation. As a general rule, the larger schools opposed freshman eligibility and summer professional baseball.

Eight schools left the SIAA and, with six additional schools from the upper South that were not SIAA members, organized the Southern Conference in 1921. Initially, most Southern Conference schools retained their SIAA membership as well, but after the 1921-22 academic year, they decided to go their separate ways.

One of the reasons for the group withdrawal from the SIAA was the belief that an organization with more than 30 members at any given time was too large to have a meaningful conference regular season. That each school in the SIAA might play each other school at least once, let alone twice, during the same season was simply impossible, given the size of the organization.

The original idea was that the Southern Conference would be a smaller, more compact organization. However, the popularity of the new organization, and the unwillingness of the founding schools to turn down applications from colleges that they considered equal to themselves, left the new Southern Conference with size problems of its own.

Starting with 14 initial members, the Southern Conference expanded to 20 schools in 1922, 21 in 1923, 22 in 1924, and 25 in 1928. In an era when some Southern Conference schools played as many as 25 games in a season while others played as few as 10, it was impossible to say that the team with the best winning percentage in conference games was the best team in the conference, so the idea of a post-season championship tournament was carried over into the Southern Conference from the SIAA.

However, in the pre-World War II era, no other athletic conference adopted the idea of determining its basketball championship on the basis of a post-season tournament. Of course, other conferences were significantly smaller than the Southern—in 1931-32, for example, while the Southern had 23 teams, only two other conferences had as many as 10: the Mountain States Athletic Conference had 12 teams, which were divided into two divisions, each of which crowned its own champion; and the Big 10's ten members played a 12-game schedule that guaranteed that each school would play every other school in the conference at least once each year.

By 1932, the Southern Conference tournament had become an important part of the southern collegiate basketball landscape, and its annual winner was widely recognized as the champion of "southern college basketball." However, that year, the Southern Conference split into two conferences when the 13 schools located west and south of the Appalachians withdrew to form the Southeastern Conference. This left the Southern Conference with only 10 members, but within four years that number had expanded to 16.

Although the two post-1932 conferences were significantly smaller than the old Southern Conference, both retained the post-season tournament. The Southeastern Conference actually abandoned the tournament in 1934, but criticism on the part of fans led to its reinstatement the following year.

In 1939, the landscape of post-season basketball changed with the introduction of the first NCAA basketball play-offs. Between 1939 and 1950, the tournament was an eight-team event for which no team automatically qualified. Given the small size of the tournament, and an early commitment on the part of the NCAA to choose one college from each of eight geographic subdivisions of the United States, there was no guarantee that either the conference’s regular season or the tournament champion would be invited to the tournament, but if one team was invited, which champion would it be?

As it turned out, this was not a critical issue in regard to the Southeastern Conference, since between 1939 and 1950, the team that won the SEC regular season basketball championship also won the post-season tournament each year. (The University of Kentucky, which dominated SEC basketball in this era, was the double-winner on ten occasions, and the University of Tennessee twice won both titles.)

The situation in the Southern Conference was different. Given the larger size of the Southern Conference and its irregular scheduling practices, it was not surprising that the conference tournament winner was frequently not the team with the best regular season winning percentage. In fact, in the eight seasons from 1939 to 1946, the regular season and tournament championships were captured by the same school only once.

However, when it came to invitations, the NCAA clearly favored the Southern Conference regular season winner. In only three of those eight seasons was a Southern Conference team invited to the NCAA tournament, and in each of those years the invitation went to the regular season winner, not the tournament champion.

In 1951, two important changes were implemented. The NCAA expanded its tournament to 16 teams, and it announced that bids would be automatically extended to the champions of ten specific conferences (which included the Southern and Southeastern). This obviously required the two conferences to designate either their regular season champion or the tournament winner as the champion for NCAA tournament purposes.

At this point, the SEC voted to play, for the first time, a 14-game, round-robin schedule with the team with the overall best record being the conference’s official champion. The post-season conference tournament was continued, but its winner was only the tournament champion.

The Ohio Valley Conference, organized in 1948, had become the third conference to adopt a post-season tournament, but in 1951, it also designated its regular season winner as its champion. (The Ohio Valley Conference did not have an automatic bid to the NCAA, but in 1953, the NCAA selected regular season champion Eastern Kentucky University as an at-large team, by-passing Western Kentucky University which had both won the post-season tournament and had a better overall record.)

The Southern Conference chose a different route. Because of its size (16 schools) and the wide variation in the number of conference games played by each member—in 1950, totals ranged from 12 to 19 games—the conference felt that it had no option other than to designate the tournament winner as the official conference champion.

The Southern Conference’s decision to designate its tournament winner as the official champion of course made the tournament extremely exciting, and between 1951 and 1960, conference regular season winners made it to the NCAA only six times in ten years.

Without the conference championship being on the line, post-season tournaments were somewhat meaningless, and fan interest quickly waned. Both the SEC (1952) and the Ohio Valley (1955) had dropped their tournaments by the mid-1950s. However, a second championship tournament had been created when the Southern Conference split in half in 1953.

That year, seven of the most prominent schools in the Southern Conference withdrew and formed the Atlantic Coast Conference. (The seven were joined by an 8th member, the University of Virginia, which had already left the Southern Conference.)

With the ACC at eight members, and the Southern reduced to nine, both conferences moved to a round-robin format, which should have removed the need for a post-season tournament. However, the popularity of the 30-plus year old Southern Conference tournament was such, and the “title on the line” aspects were so popular with fans, that both leagues continued to hold post-season tournaments with the tournament winner receiving the conference’s automatic bid to the NCAA tournament.

For the next 20 years, the ACC and Southern Conference tournaments were well-known exceptions to the general rule that conference championships were won in the regular season. The Ohio Valley Conference resumed its “beauty contest” post-season tournament in 1964, but indifferent crowds led to its cancelation in 1967.

Why didn't other leading basketball conferences follow the lead of the ACC and the Southern in this era, especially in the 1960s when the ACC tournament became a widely followed and very successful revenue-generating event?

The answer is fairly simple. As exciting as the ACC and Southern Conference tournaments may have been with their winner-take-all format, most conferences felt it was unfair that a team that had demonstrated excellence over the course of a season could be eliminated from national championship competition simply because it happened to have a bad game.

In 1970, the University of South Carolina, a charter member of the ACC, was upset in double-overtime in the finals of the ACC tournament, after having been the first team in conference history to go through the regular season undefeated. When the other schools refused to alter the existing format for determining a champion, South Carolina resigned from the ACC amid a great deal of sympathy from college basketball fans.

While the ACC and the Southern could defend their approach by pointing out that this was the way it had always been done in those conferences and in their predecessors, conferences that had followed the traditional approach were simply unwilling to switch, even if it would have been profitable.

Of course, one could have a tournament just for the sake of a tournament, but the experience of the Southeastern Conference and the Ohio Valley Conference after 1951 suggested that basketball fans were not interested in games that had no effect on the conference championship.

So What Happened?

By 1974, it was widely rumored that the NCAA planned to expand the size of its post-season tournament and that it might also change the rule that limited conferences to a single participant.

Both happened in 1975. First, the number of teams in the tournament was immediately increased from 25 to 32, and the old limitation of one team per conference was replaced by a two-teams-per-conference rule.

Since a conference's regular season champion was likely to have an outstanding overall record, it was also likely to be chosen for one of the now-expanded number of at-large bids. Making the tournament champion the conference champion therefore no longer carried the risk that the conference's strongest team would be excluded from the NCAA tournament.

Moreover, by giving the tournament winner the automatic bid, fans of every team in the conference had reasons to attend the tournament or at least watch it on television.

Furthermore, the size of the NCAA tournament kept expanding over the course of the next decade. In 1979, it was increased to 40 teams, and in 1980, to 48. In 1983, the number was increased again to 52 teams, then to 53 the next year, and to 64 in 1985. Today, the number is up to 68, and some observers are predicting an impending move to a 96-team tournament.

On top of that, in 1980, the limit of two teams per conference was also repealed, so that three or more teams from the same conference could theoretically make the NCAA tournament in the same year. Suddenly, there was another reason to have a post-season tournament: even teams that didn't win it might showcase their talents and win one of the growing number of at-large bids.

In response to these changes, the number of post-season conference tournaments began to increase rapidly, from two in 1974, to six in 1975, to nine in 1976, to 13 in 1977, to 20 in 1980, to 24 in 1983, and to 29 in 1987. By 1987, every conference but the Ivy League, the Big 10, and the Gulf Star Conference had a post-season tournament. The Big 10 held out until 1998, but eventually joined the crowd.

It seems that as long as conferences can send more than one representative to the NCAA play-offs, conference tournaments are here to stay.