
CSPAN3 Historical July 5, 2024

This is the American Historical Association's congressional briefing on historical perspectives on artificial intelligence. I'm Alexandra Levy, communications manager at the American Historical Association, and I coordinate the briefings for the AHA. Thank you all for taking the time to be here today. I'm going to provide a little context for what we're doing here this morning. The AHA's congressional briefings program, which dates back to 2005, provides congressional staff, journalists, and others in the policy community with the historical background necessary to understand the context of current legislative concerns. The program rests on the premise that everything has a history and that anyone involved in policies and decisions of any kind, whether in the corporate, government, or nonprofit worlds, ought to understand how past events and policies have shaped the current landscape. Our second premise is that it is possible to provide that historical context in a manner that is nonpartisan and unencumbered with a particular legislative agenda, and we ask all of our speakers to adhere to that standard. The AHA is grateful to the Mellon Foundation for providing the financial resources that make this work possible. We hope to see you at future briefings and encourage you to email us any ideas for future briefings. We have over 11,000 members and can find a historian to speak on any topic. Today's briefing will be moderated by Matthew Jones. Dr. Jones is the Smith Family Professor of History at Princeton University. In 2023, Norton published his How Data Happened: A History from the Age of Reason to the Age of Algorithms, written with Chris Wiggins. He is completing a book, Great Exploitations, on state surveillance of communications and information warfare. His previous books include Reckoning with Matter: Calculating Machines, Innovation, and Thinking about Thinking from Pascal to Babbage, and The Good Life in the Scientific Revolution: Descartes, Pascal, Leibniz, and the Cultivation of Virtue.
I will now turn it over to him to introduce the panelists. Thank you, Alex. And thank you to all of you for coming today. We're enormously lucky in today's congressional briefing on AI to have two real-world experts on the history of AI and its broader place in the development of computing within society, and one expert on the use of AI-style tools for historians confronting the vast amount of documentation that we have in more contemporary forms of history. There are lots of things that historians can bring. Indeed, everything has a history. But particularly in questions of the history of technology, historians offer an opportunity to get leverage on the fact that this history is contingent, that it is not simply driven by technology, and that human choice is crucial, even if it often doesn't seem that way. Our commentators today can illuminate fundamental technical questions of the development of this technology; the privacy and ethical challenges; the threats to expertise, as the threat of automation, which has long come to many other domains, now reaches many of the most expert domains we can imagine, from clinical knowledge to legal knowledge to the practice of history; questions of environmental impact; and the challenge of regulation, good and bad. There's a lively debate currently around AI, as most of you will know, that questions whether we ought to be focusing on harms in the here and now to existing communities versus speculative harms that often take an almost apocalyptic character. Likewise, there are the possibilities that AI might provide for people doing historical work, for the near past and for the societies in which we live, which produce documentation at such a scale that the traditional tools of the historian often are, or appear, inadequate to deal with it. So we have a moment here to think with these wonderful historians about how this technology can best comport with the values we have.
And I'll introduce our three speakers. So first, we're going to hear from Jeffrey Yost, who is a research professor of the history of science and technology and is the director of the Charles Babbage Institute for Computing, Information, and Culture at the University of Minnesota. That is the premier site in the United States, if not the world, for documentation about the development of information technologies and computers. He has published seven books, most recently a wonderful coauthored textbook called Computer: A History of the Information Machine, and a crucial book in the labor history of computing called Making IT Work. He coedits a book series called Computing Culture for Johns Hopkins University Press, edits a journal called Interfaces, which I highly recommend too, and founded the Blockchain Society blog. Secondly, our second historian, and again, we're enormously lucky to have both of these historians of computing, is Janet Abbate, a professor of science, technology, and society at Virginia Tech. She wrote two transformative volumes. One, Inventing the Internet, really was the first scholarly attempt to understand how it is that the internet came to be and how it emerged differently than many of its progenitors imagined, for good and for bad. She also published a transformative volume called Recoding Gender, about women's changing participation in computing, which I can testify is often transformative for students in reading it. Most recently, she has an edited volume with Stephanie Dick of Simon Fraser called Abstractions and Embodiments, which collects many of the most lively historians of computing today. And she's writing a history of computer science as an intellectual discipline.
Yost and Abbate are historians of computing of the very first rank. And Matthew Connelly, who is a professor of international and global history at Columbia and is director of the Centre for the Study of Existential Risk at Cambridge in the UK, is a historian who has pioneered the use of a large number of tools that we now would associate with AI for the study of history. He is the principal investigator of the History Lab at Columbia, funded by the NSF and NIH, which applies data science to analyze historical data. He has numerous books, including A Diplomatic Revolution, Fatal Misconception, and, most recently and most relevant to our discussion here, The Declassification Engine: What History Reveals About America's Top Secrets. With that, I will allow the three speakers to talk, after which we will have what I hope is a lively question and answer session. Unfortunately, we have to be done by 10:15 at the latest. So if you're asking too long a question, I will ask you to be concise, which I know for the former academics in the room is a challenge.

Okay, thank you, Matt. I've titled these remarks Exploring AI's Dual Decontextualization and Why Metaphors Matter. Artificial intelligence, or AI, paradoxically is overhyped and underappreciated. It is overhyped in a misplaced science-fiction sense of AI as the nexus of an existential threat. Machines will not take over. Job losses will occur, but forecasts are exaggerated. If history is a guide, they will be far fewer than the 300 million or 400 million lost or drastically transformed jobs that Goldman Sachs and McKinsey have predicted. Automation, a far better descriptive term than intelligence, has been hyped for three-quarters of a century. A quick shift to a cashless society has been predicted by management consultants for 60 years. Electronic transactions have grown, but we still have cash. AI is underappreciated both in terms of its long history and, with recent generative AI, the risks of amplifying social biases. Generative pretrained transformers, or GPTs, are new, but they extend from neural networks and natural language processing, research areas commencing seven-plus decades ago. In 1943, Warren McCulloch and Walter Pitts conceptualized the first algorithm-like model of the neuron. Natural language processing took off in the 1950s with Noam Chomsky's work. My comments today will focus on what I term the dual decontextualization of AI, namely taking history and data out of context. So what's in a name? How did we get the term artificial intelligence, and why is this important? Prior to World War Two and the advent of digital computers, we had mathematicians speculating on future computing machines. In retrospect, this was the start of theoretical computer science. By the mid-1930s, Alan Turing was this research area's intellectual leader. His 1950 paper, Computing Machinery and Intelligence, introduced the imitation game as a close analog to thinking: can a human correctly guess whether a human or a machine is responding to them? Thinking, or intelligence, however, is more than imitation. This was emphasized by philosopher John Searle in his 1980 critique of Turing's 1950 paper. In the late 1940s and 1950s, journalists and scientists used terms such as electronic brains and giant brains to refer to early digital computers, for example in scientist Edmund Berkeley's famed 1949 book Giant Brains, or Machines That Think. So in 1956, in organizing a summer workshop of top scientists at Dartmouth exploring ideas of information processing and automation, John McCarthy coined the term artificial intelligence. He did so with intentionality: he correctly thought it would help raise military funding for this area of research. It matters little that AI then was not, and AI now is not, intelligent, and that AI then and now is not artificial. It is mathematics, abstraction, and automation that depends heavily on human labor and guidance. AI was and is a misnomer and a distracting metaphor. Responding to the 1957 launch of Sputnik,
the DOD in 1958 founded the Advanced Research Projects Agency, or ARPA. In 1962, J.C.R. Licklider became the founding director of ARPA's Information Processing Techniques Office, or IPTO. In the 1960s, ARPA's IPTO showered funds on computer science areas: AI, graphics, time-sharing, and networking with the ARPANET. In the 1960s ARPA funded AI labs at Carnegie Mellon, MIT, and Stanford. Despite generous funding, efforts floundered relative to the grandiose rhetoric and promises of AI researchers. There have been successes in limited domains such as computer chess, but the DOD cared about transformative military and economic applications. Hence the 1970s AI winter, or scarcity of research funds. This winter ended with the announcement of Japan's Fifth Generation project in 1982, catalyzing a flood of U.S. government funding for AI, a new summer. In the late 1980s another winter began; then came a short summer in the late 1990s with the dot-com bubble, a longer winter when it burst, and the current summer of the past decade-plus. Financial greed, as well as real and rhetorically inflated fear of military adversaries, drives hype-filled AI summers: fear of Japan in the early 1980s and of China today. Conversely, underdelivery on research promises and recognition of overblown hype lead to winters. So AI rides a seasonal roller coaster. AI expert systems were invented by Stanford computer scientist Edward Feigenbaum, who partnered with top scientists Joshua Lederberg and Edward Shortliffe to pioneer such systems in the 1960s and 1970s. Shortliffe's 1970s-era expert system MYCIN, for medical diagnostics, drew on an inference engine and a knowledge base. It could be a tool for advancing evidence-based medicine and helping avoid physician case bias; it had perfect memory and was never fatigued. It was also overly hierarchical, and physicians were mixed on its usefulness.
MYCIN, for antibiotic prescriptions, could reduce overprescribing to combat community antibiotic resistance. Shortliffe's follow-on system, ONCOCIN, then aided with prescribing complex chemotherapy regimens. In medical expert systems, limited domains, transparency, user feedback, and context were critical, and they remain a fundamental lesson from history. Jumping to the large language models of the past decade: they surveil, and they scrape the web, publications, and other data sources indiscriminately and out of context. The systems are opaque and complex, and good metaphors can help us better understand them. Georgetown's Helen Toner has argued for an acting-improv metaphor for AI chatbots. I respectfully disagree. That metaphor suggests the machines have humanlike intelligence and creativity. They do not. I much prefer computational linguist Emily Bender and colleagues' stochastic parrots metaphor for generative AI. Parrots are not human, but they put together sentences, ones devoid of larger context. This is spot on in characterizing how these systems work: rules and statistics. Many of the responses are reasonable. Hallucinations are a metaphor for AI bots' irrational outputs, but this metaphor is misleading, conjuring images of infrequent software bugs. The problem is not occasional slips in otherwise sound machines: the very model is built by scraping data without proper context, and hence hallucinations are inherent to the system, produced just as non-hallucinations are. As an information-seeking tool, GPT-4 uses far more energy than search; some estimates place it at ten times search. Regarding cybersecurity, AI seemed in balance, helping white hats protect as much as giving black-hat hackers new tools. With recent generative AI,
surveillance, the concentration of data, and deepfakes are shifting security and privacy risks dangerously in black hats' favor. OpenAI's founders chose its name to associate it with the half-century-old open source software movement, and similarly some large language models are promoted as open source. The open source software movement is a force for sharing and democratizing computing. As Signal president Meredith Whittaker has pointed out, open in generative AI is quite different and presents significant risks, especially the concentration of power with the largest AI enterprises, those with the most surveillance and data and the biggest models. To sum up: automation is a more accurate descriptor than artificial intelligence. Decade-long hype cycles followed by winters are the norm. AI is more useful and safer with domain-specific and transparent datasets. Chatbots are energy-intensive, blunt instruments for information seeking. AI outside of large language models has much to offer, especially in medicine. Generative AI may add to productivity in some areas, but questions of quality remain. Large-language-model-based AI takes data out of context and as such carries substantial risks: accelerating surveillance, misinformation proliferation cycles, and deepfakes, along with amplifying racial, gender, ableist, and other biases. Thank you.

Thank you, Matt, and thank you all for coming here. And thanks to the AHA for organizing this. I'll be using historical examples to unpack a seemingly simple question: how do we define success for AI? History shows there's been no one definitive answer. Instead, the success or failure of AI depends on what we think it means to be intelligent, to be human, and to adequately solve a problem. The notion of intelligence has many different facets that can range from logical reasoning to creative intuition, emotional intelligence, and the wisdom of experience. We expect human beings to possess all these kinds of intelligence, but computer intelligence, AI,
has been defined in much more circumscribed ways. A few examples will illustrate this. As Dr. Yost just mentioned, one formative idea was the Turing test, or imitation game, proposed by Alan Turing in 1950, in which a computer is considered intelligent if it can fool a person into thinking it is a human being. The rationale is that if people are intelligent and a computer can act like a person, then it must be intelligent too. Notice that this is not an objective definition of intelligent behavior; it essentially says that intelligence is in the eye of the beholder. This idea that AI should attempt to imitate how people express themselves in language has resurfaced more recently with generative AI. A remarkable thing about the Turing test that is often overlooked is that it essentially equates intelligence with the ability to tell a convincing lie. The goal is for the machine to deceive, and this again anticipates current AI's tendency to produce made-up results that merely look plausible, so-called hallucinations. But this is only to be expected if the standard for intelligence is imitation rather than authenticity or accuracy or accountability. In the 1960s, the intelligence of AI systems was often measured by their ability to play and win the game of chess, and more recently the game of Go has been added to that benchmark. The choice of chess as a measure of intelligence reflects cultural stereotypes held by largely white male researchers about what types of mental tasks are challenging and worthy of effort. One could argue, for example, that it takes more intelligence to raise a child than to win a game of chess. Chess is a very abstract, rule-bound game, and of course it's also a zero-sum game of war. It has no practical utility. It's not a universal human experience. It's an elite version of what it means to be smart.
This shows how benchmarks of intelligence always encode social values that we should be aware of. In the seventies and eighties, so-called expert systems gained popularity. These systems attempted to incorporate the knowledge base of human experts in order to produce results in specialized domains such as diagnosing
