Is our government getting the right advice on AI?
Is the AI advising team well-positioned and well-prepared to inform the government?
I was writing an article on how we should assess actual progress on the AI front. I wanted to focus on whether we are sensibly informed about the value that AI brings and how that value impacts businesses and society, rather than succumbing to rhetoric. But then this week, I came across this interview (gift link) of Ben Buchanan, former special adviser for artificial intelligence in the Biden White House, by Ezra Klein:
Now, I should say that this post is not about casting personal aspersions. It should not be viewed as a personal attack on the participants (both of whom I respect). Rather, based on the views expressed, I am trying to gauge the level of understanding of AI that exists within the government and whether it is armed with the most reliable, evidence-based, verifiable data to inform policymaking and national strategic priorities.
The interview and the entire subsequent discussion rest on the premise that AGI is imminent (and that "the government knows it"). This is a highly debatable position. Gary Marcus, in his recent post, covered the AGI aspects. Moreover, nothing in the interview, or in any other public source so far, suggests either that AGI is imminent or that the government knows much more than researchers do and holds some additional evidence of "AGI" beyond what the AI community understands. But taking an imminent AGI as the base position takes much-needed focus away from the broader AI landscape, its potential, and its urgent challenges.
However, I'd like to focus on a more serious concern here: Is the government being well informed about AI in its full, necessary context?
What I was hoping to get from the interview was a glimpse into a coherent set of policy goals and at least a conceptual framework for how we plan to achieve (or make progress toward) those goals. Unfortunately, the interview doesn't inspire much confidence in the government's understanding of the technology or its impact on society, let alone its ability to make informed policy decisions. While unfortunate, the discussion did offer some explanation for why our policy apparatus has lagged so far, and so spectacularly, when it comes to the opportunities and challenges from AI and the other concurrent technological developments our society faces. I have mentioned in the past the need to refine our approach and focus on a comprehensive policy framework for AI. Unfortunately, after reading this discussion, I feel we are much further behind and have no reasonable chance of making headway in building our readiness, as a nation or as a society, for the challenges and opportunities that today's technologies, especially AI, bring.
Coming from the special adviser on AI to the White House, I expected a nuanced understanding of the technology (even if not an in-depth one), its breadth and potential, its ongoing impacts, the ramifications of its interactions with various other realms, and the challenges that society and the nation confront. Going through the interview and finding no such evidence has been deeply discomforting.
One thing did become clear: We are employing frameworks of the past to address the challenges of the future, while very likely messing up the present.
I don’t believe dissecting the entire interview is productive here or the best use of time. But let me highlight some aspects that I found immediately concerning:
The interview kicks off without a proper description of the AI landscape, so the framing of its central topic is hand-wavy at best. Here's an excerpt on the "definition" of the topic per the adviser:
A canonical definition of A.G.I. is a system capable of doing almost any cognitive task a human can do. I don’t know that we’ll quite see that in the next four years or so, but I do think we’ll see something like that, where the breadth of the system is remarkable but also its depth, its capacity to, in some cases, exceed human capabilities, regardless of the cognitive discipline —
Setting aside that we are far from AGI, it is debatable whether our current paradigms will lead us to AGI at all. But even so, the discussion of AI seems to ignore the rest of the current AI landscape. There is a wide body of work on the impact, effects, and consequences of AI integration into various walks of life. We have a variety of systems that already leverage AI (but still fall outside any systematic policy framework) and already pose a variety of challenges. Many academic and industrial reports and research efforts have investigated these effects and challenges, such as information distortion, social media's effects on mental health, effects on the media and the data ecosystem, and so on. But to reach a holistic understanding of these areas, we need a coherent conceptual understanding of the technologies involved (AI cannot be considered in isolation, since AI systems overlap, power, and interact with various other areas and systems).
Moreover, I concede that everyone has a different definition of AGI (e.g., OpenAI's framing around economically valuable work, and Anthropic's around the cognitive tasks a human can do). It's a vague term, but that doesn't mean the government should use it vaguely, especially when policies are being set based on this position. If the technological landscape can't even be articulated well, how would the team formulate a coherent position or advise on priorities? This is deeply worrying.
Based on the interview, it seems we have adopted an extremely myopic view of the technology, the state of the world, and US competitiveness. It was strange to see more or less the entire AI policy and strategy focus placed on competing with China. There was very little mention of the societal challenges, the real opportunities where AI can deliver benefits (or already is), how to actually support a healthy innovation ecosystem, how to actionably guard against the risks, how to build guardrails, or how to chart a sensible course toward the future. Even in the global geopolitical context, there are various areas in addition to China that need attention, and not just from a defense perspective but from various other national security and competitiveness angles. None were covered or discussed.
The interview gave no confidence that a real understanding exists of AI's impact on society, ethics, or safety. It was extremely alarming to see that, when asked about the efforts around AI safety and ethics, the primary response was that conferences and meetings were held. That is an unacceptable answer from the topmost advisory body and task force in the world's most powerful country. It seems we have no concrete position on AI safety, ethics, reliability, or effects on society. It's even more worrying when we try to reconcile this with the claimed imminence of such super-powerful systems. If the team and the government believe that AGI is imminent (again, a position not empirically supported), didn't they feel any urgency to move beyond conferences and meetings? Apparently it is too much to expect actual efforts or actionable suggestions on the policy, regulatory, or innovation-support fronts.

Even on the most talked-about topics, such as the effects on the labor market, the response amounted to washing their hands of the matter. Not being a labor market economist (the reason given for not having an opinion on AI's effect on the labor market) doesn't excuse one from advising on AI's impact, especially when a lack of requisite expertise in AI itself didn't stop them from advising on AI. As an aside, labor market disruption remains a possibility even without AGI in the near future. The job is specifically to understand the impacts and to have an informed opinion and position on them, and the role carries enough authority to gather the data and develop such a position.
Beyond that, there exists a significant body of research and position papers from academia, the research community, and industry on the evolving effects of AI across a breadth of disciplines and areas. NIST has been developing frameworks around AI. The discussion didn't give the impression that this body of work informed the team. Shouldn't the advising team have a shared understanding of such effects of AI? How and on what were they advising? How was data gathered to inform the advising team itself? Was there any empirical basis at all? The interview reads much like instinct-driven advising rather than advice anchored in real-world evidence. No wonder we end up with outputs such as the US AI Safety Institute, which is neither mandated nor empowered in any meaningful way that would make it effective.
The interview gave the impression that we are employing the arsenal of the past to build competitive advantage in a technology of the future. It's as if both the understanding of the current world and the creativity in finding solutions were on a break. For instance, the adviser doubled down on the worn-out export control regime, which yielded limited advantages even when the government had deep and tight control over the technologies of the past (such as nuclear), and which is even less effective in today's information world. The interview then strangely flags model weights residing in a data center in some petrostate as a deep concern (there may be a legitimate concern there, but not about the model weights; I can't believe I have to write this). Then there was also a strange position about protecting US companies building foundation models, which betrays a different level of unawareness of the real world. Many experts have said and reiterated that the advantage in AI does not lie in some kind of special sauce or hardware edge. These may arguably create some incremental advantages, but the overall technology is well known, which is exactly why it is being replicated so often and so broadly. There is no deep confidential know-how that the foundation model companies hold that needs to (or can) be guarded here. Yes, there might be some IP, but that is no different from other technology companies. AI researchers and scientists would have detailed these aspects if the team had made an effort to reach out to them.
The concerns seem to focus on a hypothetical future (AGI) while disregarding present challenges and dangers. There was an entire discussion of what a hypothetical AGI could do across the board (again, without any real depth) and hence why it needs to be "protected" or kept within our borders; I am not even sure what that means. And when DeepSeek and similar efforts came up, the position taken, oddly, was to double down on export controls in the face of their evident failure. Again, I am not saying the entire policy of export control is flawed, but the rationale presented here did not instill much confidence. The DeepSeek developments and their lessons should push us to think beyond trivia, and they should have urgently prompted a discussion of the broader innovation ecosystem and how we are incentivizing it at home. Unfortunately, the team didn't seem ready to do so, all while not even acknowledging the challenges we face at present.
The discussion of national security seemed extremely shallow. It was all over the place, conflating national security challenges, automation, AI, large-scale systems, cybersecurity, and other areas. The prime example cited as a national security concern was the inability to analyze satellite data at scale. I can't imagine that this was the most pressing challenge, or something that kept the national security apparatus up at night as if the problem were unsolvable. There clearly are more serious challenges when it comes to AI, and it didn't seem they were well understood. The same goes for the data challenges, such as the collection of citizens' data; the shallowness of the discussion was breathtaking.
Quite likely as a consequence of what I detailed above, there seems to be a lack of clear communication of the government's position. For instance, around AI innovation, the massive disconnect between Marc Andreessen's account of his interaction with the administration (based on what's been publicly reported) and the adviser's response is worrying. I won't copy the excerpt from the interview here, but instead of a vacillating response, I would have expected a clear and well-articulated position on supporting the wider innovation ecosystem, substantiated by evidence from actions such as the OMB memo or the CHIPS Act referred to therein.
There is so much more to break down in this telling interview. It raises various questions: How are the advisers appointed, and how was the team tasked? What was the mechanism for gathering inputs and collecting real-world evidence to support the opinions being advised? What has been the basis for informing on AI? How are the participants and stakeholders on these topics selected?
Based on the interaction in this interview, the policy approach seems concerningly uncertain, and I'd venture to say less than informed. I don't expect to get all the answers, but I do expect a good understanding of which questions were the focus, how the positions were informed, what systematic efforts were put in place, and how a systematic policy apparatus is being thought about.
Watching and reading this interview raised many more questions than it answered, which brings us back to the question in the title:
Is our government even getting qualified, informed advice on AI?
Finally, I hope that I am wrong about these takeaways and that there is indeed a robust evidence base informing the government, enabling it to come up with a systematic policy framework for AI. However, based on this interview and the policy efforts so far, I remain concerned.