Modeling Science, Technology & Innovation Conference | Washington D.C. | May 17-18, 2016
Wednesday, May 18th 2016 | 3:30 PM – 5:00 PM
Models of Science and Innovation
Learn how models of science and innovation can improve decision making, and how computer simulations help us understand the impact of policy decisions on future developments.
Analog Devices, Inc.
Bio: Bruce Hecht received the M.A.Sc. and B.A.Sc. degrees in Electrical Engineering from the University of Waterloo, Ontario, Canada. Originally from Montreal, Quebec, he joined Analog Devices in 1994, where he is currently with the Worldwide Quality Systems Engineering Group in Wilmington, MA, USA. His interests lie in the design of electronic systems of all kinds for medical, automotive, industrial, consumer, and communications applications. View his website at http://www.analog.com.
University of Washington
Why Scientists Chase Big Problems: Individual Strategy and Social Optimality
Abstract: Scientists pursue personal recognition as well as collective knowledge. When scientists decide whether to work on a hot new topic, they weigh the potential benefits of a big discovery against the costs of setting aside other projects. These self-interested choices can spread researchers across problems in an efficient manner, but efficiency is not guaranteed. We use simple economic models to understand such decisions and their collective consequences. Academic science differs from industrial R&D in that academics often share partial solutions to gain reputation. This convention of “Open Science” is thought to accelerate collective discovery, but when we extend our model to allow the option of partial publication, we find that it need not do so. The ability to share partial results influences which scientists work on a problem; consequently, Open Science can slow the solution of a particular problem if it deters entry by important actors.
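The tension between individual entry decisions and social optimality can be illustrated with a minimal free-entry sketch. This is not the talk's actual model; the payoff form, parameter values, and function names below are assumptions for illustration. Scientists keep joining the hot problem while their expected share of the prize V beats a safe project's payoff s, which can over-crowd the problem relative to the welfare-maximizing allocation:

```python
def solve_prob(k, p):
    # chance the problem is cracked when k scientists each succeed
    # independently with probability p
    return 1 - (1 - p) ** k

def private_payoff(k, V, p):
    # expected prize to each of k entrants, assuming the winner is
    # equally likely to be any of them
    return V * solve_prob(k, p) / k

def equilibrium_entry(N, V, s, p):
    # free entry: scientists keep joining while the expected prize
    # share beats the safe project's payoff s
    k = 0
    while k < N and private_payoff(k + 1, V, p) >= s:
        k += 1
    return k

def social_welfare(k, N, V, s, p):
    # expected prize if solved, plus safe output from everyone who stayed out
    return V * solve_prob(k, p) + (N - k) * s

def optimal_entry(N, V, s, p):
    # the entry level a social planner would choose
    return max(range(N + 1), key=lambda k: social_welfare(k, N, V, s, p))

N, V, s, p = 20, 10.0, 1.0, 0.3
print(equilibrium_entry(N, V, s, p), optimal_entry(N, V, s, p))  # prints: 9 4
```

With these numbers, nine scientists enter in equilibrium while only four would be socially optimal: the marginal entrant ignores the fact that most of her expected prize comes at the expense of rivals' chances, so self-interested entry need not be efficient.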
Bio: Carl T. Bergstrom is a Professor in the Department of Biology at the University of Washington. Dr. Bergstrom’s research uses mathematical, computational, and statistical models to understand how information flows through biological and social systems. His recent projects include contributions to the game theory of communication and deception, application of information theory to the study of evolution by natural selection, game-theoretic models and empirical work on the sociology of science, and development of mathematical techniques for mapping and comprehending large network datasets. In the applied domain, Dr. Bergstrom’s work illustrates the value of evolutionary biology for solving practical problems in medicine and beyond. These problems include dealing with drug resistance, handling the economic externalities associated with anthropogenic evolution, and controlling novel emerging pathogens such as the SARS virus, Ebola virus, and H5N1 avian influenza virus. He is the coauthor of the college textbook Evolution, published by W. W. Norton and Co., and teaches undergraduate courses on evolutionary biology, evolutionary game theory, and the importance of evolutionary biology to the fields of medicine and public health. Dr. Bergstrom received his Ph.D. in theoretical population genetics from Stanford University in 1998; after a two-year postdoctoral fellowship at Emory University, where he studied the ecology and evolution of infectious diseases, he joined the faculty at the University of Washington in 2001.
George Mason University/NICO at Northwestern University
Emergent Technological Epochs and The Policies that Foment Them
Abstract: We model the evolution of economic goods and services as a stochastic process of recombination conducted by purposive agents. There is an initial ‘seed set’ of goods, each having an intrinsic economic fitness, which is assessed subjectively by individual agents. Each agent adopts some subset of these goods based on its subjective perceptions. The innovation process proceeds by individual agents attempting to invent new goods by combining goods they currently use. Such attempts at innovation are mostly unsuccessful, insofar as they lead to goods having low economic fitness. However, some inventions do have fitness exceeding other goods in the economy and can therefore be adopted by one or more agents. The adoption process proceeds with each agent considering the new good, developing an idiosyncratic assessment of its value, and accepting it only if this value exceeds the least valuable item it currently holds, in which case it sheds the least valuable item from its holdings. If no agent adopts the new product then it is considered unsuccessful and has no further possibility of being adopted. If all agents drop a particular good then it no longer exists in the economy and cannot be brought back into existence unless it is reinvented. This simple model has a variety of robust properties. First, agent welfare is monotonically increasing in the model, since agents only adopt subjectively superior products over time. Second, the population of goods is transient, with the initial seed set eventually becoming extinct and all goods having finite lifetimes. We compare the distribution of invention lifetimes in the model with stylized facts. Third, the number of new inventions adopted by one or more agents is highly volatile and displays clustered volatility.
Fourth, the total number of goods in the economy over time is very irregular, displaying periods of relative stasis in which few inventions are successful, punctuated by periods of rapid technological progress in which there is dramatic change in the goods being used. These episodes of technological change are often instances of ‘creative destruction’, insofar as one or a few successful inventions can lead to the extinction of some larger number of other, older goods. Finally, each realization of the model generates a graph of technological antecedents that has somewhat peculiar properties, which we characterize, and from which the lineages of current technologies can be readily determined. The relation of this model to other models of evolution is described. While this model is rather abstract, it is in many ways much more realistic than conventional economic models. Its manifold policy conclusions will be discussed.
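The invention-and-adoption loop described in the abstract can be sketched in a few dozen lines. This is a loose toy version, not the authors' implementation: the abstract's separate intrinsic fitness and subjective perception are collapsed here into a single random draw per agent, and all sizes and parameter values are invented for illustration.

```python
import random

rng = random.Random(42)

# Seed set of goods; each agent holds a fixed-size subset and assigns
# each held good a purely idiosyncratic value in [0, 1].
SEED_GOODS = list(range(10))
N_AGENTS, HOLD = 5, 4

agents = [{g: rng.random() for g in rng.sample(SEED_GOODS, HOLD)}
          for _ in range(N_AGENTS)]

parents = {}               # antecedent graph: invention -> its parent pair
next_id = len(SEED_GOODS)
for step in range(200):
    inventor = rng.choice(agents)
    a, b = rng.sample(sorted(inventor), 2)   # recombine two goods in use
    new_good = next_id
    next_id += 1
    parents[new_good] = (a, b)
    for agent in agents:                     # every agent assesses the invention
        v_new = rng.random()                 # idiosyncratic valuation
        worst = min(agent, key=agent.get)
        if v_new > agent[worst]:             # adopt only if it beats the
            del agent[worst]                 # least-valued current holding
            agent[new_good] = v_new
    # an invention adopted by no one is unsuccessful and never reconsidered

surviving = set().union(*agents)
print(sorted(surviving))
```

Even this stripped-down version exhibits two of the abstract's robust properties: each agent's welfare rises monotonically, because goods are only ever swapped for subjectively superior ones, and the `parents` dictionary accumulates the graph of technological antecedents from which a surviving good's lineage can be traced.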
Bio: I have a ‘Public Policy’ degree and work at the intersection of computer science, economics, game theory, AI, finance, and politics, focusing on modeling human social phenomena in software. Typically we try to use all available data in our models, which may run at full (1-to-1) scale with the phenomena under study. My MIT Press book, “Growing Artificial Societies: Social Science from the Bottom Up,” is a citation classic. My new book, “Dynamics of Firms from the Bottom Up: Data, Theories and Models,” is due out at the end of this year (MIT Press). It uses data on all 6 million American firms that have employees to synthesize a model of the entire U.S. private sector, and draws many important policy conclusions. In other, related work, I have built models of science and innovation processes.
University of Chicago
How Science Thinks (and How to Think Better)
Abstract: I explore how modeling the scientific process can create opportunities for improving it. I begin by demonstrating how the complex network of modern biomedical science provides a substrate on which a scientist, and indeed science as a whole, thinks, and what this means for ongoing scientific discovery and human health. Using millions of scientific articles from MEDLINE, I show how science moves conservatively from the problems posed and questions answered in one year to those examined in the next. Along the way, I show how contemporary science “changes its mind”; how it has become more risk-averse and less efficient at discovery; and how the allure of its own internal puzzles has largely decoupled it from health needs. I use this as an opportunity to demonstrate that much more efficient strategies can be found for mature fields, strategies that involve greater individual risk-taking than the structure of modern scientific careers supports, and I propose institutional alternatives that maximize a range of valuable objectives, from scientific discovery to robust understanding to technological advance.
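One simple way to operationalize the year-to-year conservatism the abstract describes is to measure the overlap between the sets of topics studied in consecutive years. The sketch below uses Jaccard similarity on toy data; the terms and years are invented for illustration, whereas the actual analysis draws on annotations across millions of MEDLINE articles.

```python
# Toy stand-in for per-year topic sets extracted from article metadata.
topics_by_year = {
    2013: {"p53", "apoptosis", "kinase"},
    2014: {"p53", "apoptosis", "CRISPR"},
    2015: {"p53", "CRISPR", "immunotherapy"},
}

def jaccard(a, b):
    # overlap between two topic sets: |intersection| / |union|
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

years = sorted(topics_by_year)
for y0, y1 in zip(years, years[1:]):
    sim = jaccard(topics_by_year[y0], topics_by_year[y1])
    print(y0, "->", y1, round(sim, 2))   # high values = conservative movement
```

A persistently high similarity between adjacent years indicates that the research frontier shifts slowly, which is the kind of conservatism the MEDLINE analysis quantifies at scale.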
Bio: James Evans is Associate Professor of Sociology at the University of Chicago, member of the Committee on the Conceptual and Historical Studies of Science, Senior Fellow at the Computation Institute, Director of Knowledge Lab (knowledgelab.org) and Director of the Computational Social Science program (macss.uchicago.edu). His work explores the sources, structure, dynamics and consequences of modern knowledge. Evans is particularly interested in the relation of markets to science and knowledge more broadly, and how evolutionary and generative models can inform our understanding of collective representations, experiences and certainty. He has studied how industry collaboration shapes the ethos, secrecy and organization of academic science; the web of individuals and institutions that produce innovations; and markets for ideas and their creators. Evans has also examined the impact of the Internet on knowledge in society. His work uses natural language processing, the analysis of social and semantic networks, statistical modeling, and field-based observation and interviews. Evans’ research is funded by the National Science Foundation, the National Institutes of Health, the Mellon and Templeton Foundations and has been published in Science, PNAS, American Journal of Sociology, American Sociological Review, Social Studies of Science, Administrative Science Quarterly and other journals. His work has been featured in Nature, the Economist, Atlantic Monthly, Wired, NPR, BBC, El Pais, CNN and many other outlets.
Analyzing the Structure of Genomic Science
Daifeng Wang, Koon-Kiu Yan, Joel Rozowsky, Eric Pan, Mark Gerstein, Temporal dynamics of collaborative networks driven by large scientific consortia, Trends in Genetics, 2016
Bio: Mark Gerstein, Ph.D., is the Albert Williams Professor of Biomedical Informatics. His lab (http://gersteinlab.org) was one of the first to perform integrated data mining on functional genomics data and to do genome-wide surveys. His tools for analyzing motions and packing are widely used. Most recently, he has designed and developed a wide array of databases and computational tools to mine genome data in humans, as well as in many other organisms. He has worked extensively in the 1000 Genomes Project in the SV and FIG groups. He also worked in the ENCODE pilot project and currently works extensively in the ENCODE and modENCODE production projects. He is also a co-PI in DOE KBase and the leader of the Data Analysis Center for the NIH exRNA consortium.