
Constitution Breakdown #9: Alondra Nelson
AI Summary
This episode of the 99% Invisible Breakdown of the Constitution covers Articles 6 and 7, with a particular focus on the Supremacy Clause and its modern implications for the regulation of artificial intelligence.
Article 7, the ratification clause, is presented as a foundational element that established the legitimacy of the Constitution. It stipulated that the agreement would become effective once ratified by nine of the thirteen states, a threshold that was met on June 21, 1788, when New Hampshire became the ninth state to ratify. While historically significant for the Constitution's adoption, it holds little contemporary legal relevance.
Article 6, however, is a more complex and enduring section. Clause 1 addresses the United States' pre-existing debts and obligations, a provision designed to reassure creditors that the new government would honor financial commitments incurred during the Revolutionary War. While largely of historical interest today, the clause has a notable drafting history: the original draft included both the obligation and the power to pay these debts, and the power was moved to Article 1, where it forms part of Congress's spending authority.
Clause 3 of Article 6 contains the oath and "no religious test" provisions: all federal and state officials must swear an oath to support the Constitution, but no religious test may be required as a qualification for public office. A deliberate break from English tradition, under which officeholders had to swear oaths supporting the Church of England, the clause marks a move toward religious neutrality in government. Although it is a cornerstone of religious freedom, its legal work is now largely done by the First Amendment's Free Exercise Clause, so it receives little direct Supreme Court attention.
The centerpiece of Article 6 is Clause 2, the Supremacy Clause. This clause declares that the Constitution, federal laws made in pursuance thereof, and treaties made under U.S. authority are the supreme law of the land. Historically, it was a direct response to the inadequacies of the Articles of Confederation, where state courts often disregarded federal law. The Supremacy Clause resolves this by establishing a clear hierarchy, ensuring that federal law supersedes any conflicting state law.
The practical implication of the Supremacy Clause is the doctrine of federal preemption. Congress, through its legislative power, can preempt or displace state and local laws that conflict with federal legislation. While Congress can state its preemptive intent explicitly, many cases involve implied preemption, where courts must determine whether a federal law displaces state law. This can happen in three ways: field preemption, where federal regulation is so pervasive that it leaves no room for state regulation; impossibility, where compliance with both federal and state law cannot be achieved; and obstacle preemption, where state law stands in the way of the objectives of federal law.
The discussion then pivots to the contemporary relevance of preemption, particularly in the rapidly evolving field of artificial intelligence (AI). AI, with its potential for immense societal benefit and significant risks, presents a complex regulatory challenge where both federal and state governments are attempting to establish oversight. The guest, Dr. Alondra Nelson, a scholar of technology and social science, highlights the broad definition of AI, encompassing systems that use statistics and math to make inferences and generate outputs like predictions and recommendations. She notes that while generative AI has gained public attention, AI encompasses a wider range of technologies with varying levels of autonomy.
Dr. Nelson outlines both the transformative and concerning aspects of AI. On the positive side, AI holds promise in medical diagnostics, agriculture, and traffic management. However, significant concerns exist regarding AI's role in job screening, potentially perpetuating historical biases; predictive policing, which can reinforce existing disparities; and generative AI's capacity to spread misinformation or provide harmful advice, as seen in a New York City chatbot incident.
This leads to the "Blueprint for an AI Bill of Rights," developed during Dr. Nelson's tenure in the Biden administration. Inspired by historical "Bills of Rights," this policy paper aims to establish guardrails around powerful AI technologies. It proposes five core principles: AI systems should be safe and effective; individuals should be protected from algorithmic discrimination; data privacy should be protected; individuals should receive notice and explanation when AI is used in consequential decision-making; and there should be human alternatives, consideration, and fallback.
The concept of "thick alignment" is introduced as a framework for AI governance. Moving beyond purely technical definitions of alignment, thick alignment emphasizes understanding AI within its specific contexts, considering diverse societal values, and engaging in continuous dialogue about its implications. This contrasts with a narrow, technical view where a system might be deemed "aligned" based on performance metrics, yet still produce harmful outcomes in real-world applications.
The discussion highlights the current regulatory landscape, characterized by a "patchwork" of state-level initiatives. California, Colorado, and Texas are noted for their different approaches to AI regulation, ranging from data disclosure and algorithmic discrimination laws to more comprehensive bills. This state-led experimentation, while creating compliance burdens for companies, is seen as a "laboratory of democracy" that can inform future federal policy. The absence of comprehensive federal AI legislation has created a vacuum, prompting states to act.
The conversation touches upon the Trump administration's approach, which prioritized accelerating AI development over safety and ethics, and its call for federal preemption of state AI laws. The Biden administration's executive order on AI, building on the AI Bill of Rights, aimed to ensure safe and ethical use, but lacked the authority to preempt state laws. The episode suggests that while federal lawmaking in technology has been slow, growing public awareness and concern across the political spectrum are creating pressure for more robust AI governance.
Finally, Dr. Nelson expresses cautious optimism, not about the speed of federal legislation, but about the increasing public empowerment and demand for responsible AI development. She points to community pushback against data center development and concerns about AI's impact on young people as evidence of growing public agency. The complex and bipartisan nature of AI concerns, spanning issues from discrimination and child safety to fraud and job displacement, suggests a fertile ground for future regulatory consensus, even if the path forward remains challenging.