IST Distinguished Lecture Series

The College of Information Sciences and Technology’s Distinguished Lecture Series connects researchers, experts, and thought leaders with the University community to share perspectives and insights on a variety of topics at the frontier of information sciences and technology. The series aims to enrich the educational experiences of attendees, inspire thought-provoking conversations and collaborations, and showcase a diverse array of people, backgrounds, and ideas in the information sciences and related domains.

All lectures are free and open to the Penn State community unless otherwise noted.

Past Events

Watch This Lecture

“The Quest for Ethical Artificial Intelligence”

The Distributed Artificial Intelligence Research Institute (DAIR) was launched in December 2021 by Timnit Gebru as a space for independent, community-rooted AI research, free from Big Tech’s pervasive influence. Gebru believes that the harms embedded in AI technology are preventable and that when its production and deployment include diverse perspectives and deliberate processes, it can be put to work for people, rather than against them. With DAIR, Gebru aims to create an environment that is independent from the structures and systems that incentivize profit over ethics and individual well-being. In this talk, Gebru will discuss why she founded DAIR and what she hopes this interdisciplinary, community-based, global network of AI researchers can accomplish.

About the Speaker

Timnit Gebru is the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR). Prior to founding DAIR, she was fired by Google in December 2020 for raising issues of discrimination in the workplace, where she had been serving as co-lead of the Ethical AI research team. She received her PhD from Stanford University and did a postdoc at Microsoft Research, New York City, in the FATE (Fairness, Accountability, Transparency, and Ethics in AI) group, where she studied algorithmic bias and the ethical implications underlying projects aiming to gain insights from data. Gebru also co-founded Black in AI, a nonprofit that works to increase the presence, inclusion, visibility, and health of Black people in the field of AI, and is on the board of AddisCoder, a nonprofit dedicated to teaching algorithms and computer programming to Ethiopian high school students, free of charge.

About the Event

This event is hosted as part of the Penn State Center for Socially Responsible Artificial Intelligence's Distinguished Lecture Series in partnership with the College of Information Sciences and Technology. The series highlights world-renowned scholars who have made fundamental contributions to the advancement of socially responsible artificial intelligence. The series aims to spark thoughtful conversations among attendees and to facilitate discussion among students, faculty, and industry affiliates of the Center.

Watch This Lecture

“Designing Useful and Usable Privacy Interfaces”

Users who wish to exercise privacy rights or make privacy choices must often rely on website or app user interfaces. Too often, however, these user interfaces suffer from usability deficiencies, ranging from being difficult to find, hard to understand, or time-consuming to use, to being deceptive and dangerously misleading. This problem is often exacerbated when users try to make privacy choices for mobile or IoT devices with small or non-existent screens. This talk will provide insights into why it can be difficult to design privacy interfaces that are usable and useful, and will suggest user-centric approaches to designing privacy interfaces that better meet user needs and reduce the overwhelming number of privacy choices. I’ll discuss some of our research along these lines at Carnegie Mellon University, including our design and evaluation of privacy "nutrition" labels for websites, mobile apps, and IoT devices, as well as personal privacy assistants and other tools.

About the Speaker

Lorrie Faith Cranor is the Director and Bosch Distinguished Professor of the CyLab Security and Privacy Institute and FORE Systems Professor of Computer Science and of Engineering and Public Policy at Carnegie Mellon University. She is also co-director of the Collaboratory Against Hate: Research and Action Center at Carnegie Mellon and the University of Pittsburgh. In addition, she directs the CyLab Usable Privacy and Security Laboratory (CUPS) and co-directs the MSIT-Privacy Engineering master's program. In 2016 she served as Chief Technologist at the US Federal Trade Commission. She co-founded Wombat Security Technologies, a security awareness training company that was acquired by Proofpoint. She is a fellow of the ACM, IEEE, and AAAS; a member of the ACM CHI Academy; and a recipient of the IAPP Privacy Leadership Award. Her pandemic pet is a bass flute.

Watch This Lecture

“Capturing and Rendering the World from Photos”

Imagine a futuristic mapping service that could dial up any possible view along any street in the world at any possible time. Effectively, such a service would be a recording of the plenoptic function—the hypothetical function described by Adelson and Bergen that captures all light rays passing through space at all times. While the plenoptic function is completely impractical to capture in its totality, every photo ever taken represents a sample of this function. I will present recent methods we've developed to attempt to reconstruct the plenoptic function from sparse space-time samples of photos—including Google Street View data and tourist photos on the internet. The results of this work include compelling new ways to render new views of the world in space and time.
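
For readers unfamiliar with the term, the plenoptic function of Adelson and Bergen is a seven-dimensional function of viewpoint, viewing direction, wavelength, and time; the parameterization below is the commonly cited one, not notation taken from the talk:

```latex
% The Adelson-Bergen plenoptic function: the intensity of light observed
% from a viewpoint (V_x, V_y, V_z), looking in direction (\theta, \phi),
% at wavelength \lambda, at time t.
P = P(\theta, \phi, \lambda, t, V_x, V_y, V_z)
```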

About the Speaker

Noah Snavely is an associate professor of Computer Science at Cornell University and Cornell Tech, and a researcher at Google Research in NYC. Noah's research interests are in computer vision and graphics, in particular 3D understanding and depiction of scenes from images. Noah is the recipient of a PECASE, a Microsoft New Faculty Fellowship, an Alfred P. Sloan Fellowship, and a SIGGRAPH Significant New Researcher Award.

Watch This Lecture

"Deep Learning for Automating Software Documentation Maintenance"

Applying deep learning to large open-source software repositories offers the potential to develop many useful tools for aiding software development, including automated program synthesis and documentation generation. Specifically, we have developed methods that learn to automatically update existing natural language comments based on changes to the body of code they accompany. Developers frequently forget to update comments when they change code, which is detrimental to the software development cycle, causing confusion and bugs. First, we use methods for "just in time" comment/code inconsistency detection, which learn to recognize when changes to code render it incompatible with its existing documentation. We then learn a model that appropriately updates a comment when it is judged to be inconsistent. Our approach learns to correlate changes across two distinct language representations, generating a sequence of edits that are applied to an existing comment to reflect source code modifications. We train and evaluate our model using a large dataset collected from commit histories of open-source Java software projects, with each example consisting of an update to a method and any concurrent edit to its corresponding comment. We compare our approach against multiple baselines using both automatic metrics and human evaluation. The results reflect the challenge of this task and show that our model outperforms many baselines with respect to detecting inconsistent comments and appropriately updating them.
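
As a rough illustration of the second stage (a minimal sketch, not the speaker's implementation; the edit vocabulary and the example below are assumptions made for clarity), a predicted edit sequence might be replayed over a tokenized comment like this:

```python
# Minimal sketch: applying a predicted sequence of keep/delete/insert edits
# to a tokenized comment, in the spirit of the edit-based comment updating
# described in the abstract. The operation names and example are illustrative.

from typing import List, Tuple

# Each edit is (operation, token). "keep" and "delete" consume one token of
# the old comment; "insert" adds a new token at the current position.
Edit = Tuple[str, str]

def apply_edits(old_comment: List[str], edits: List[Edit]) -> List[str]:
    """Rewrite a tokenized comment by replaying an edit sequence over it."""
    new_comment: List[str] = []
    i = 0  # position in the old comment
    for op, token in edits:
        if op == "keep":
            new_comment.append(old_comment[i])
            i += 1
        elif op == "delete":
            i += 1
        elif op == "insert":
            new_comment.append(token)
        else:
            raise ValueError(f"unknown edit operation: {op}")
    return new_comment

# Hypothetical example: the method now returns the minimum instead of the
# maximum, so the predicted edits swap one word in the comment.
old = ["@return", "the", "maximum", "value"]
edits = [("keep", ""), ("keep", ""), ("delete", ""), ("insert", "minimum"), ("keep", "")]
print(" ".join(apply_edits(old, edits)))  # -> "@return the minimum value"
```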

About the Speaker

Raymond J. Mooney is a Professor in the Department of Computer Science at the University of Texas at Austin. He received his Ph.D. in 1988 from the University of Illinois at Urbana-Champaign. He is an author of over 180 published research papers, primarily in the areas of machine learning and natural language processing. He was President of the International Machine Learning Society from 2008 to 2011, program co-chair for AAAI 2006, general chair for HLT-EMNLP 2005, and co-chair for ICML 1990. He is a Fellow of AAAI, ACM, and ACL and the recipient of the Classic Paper award from AAAI-19 and best paper awards from AAAI-96, KDD-04, ICML-05, and ACL-07.

"Natural Language Understanding with Incidental Supervision"

The fundamental issue underlying natural language understanding is that of semantics – there is a need to move toward understanding natural language at an appropriate level of abstraction, beyond the word level, in order to support knowledge extraction, natural language understanding, and communication. Machine learning and inference methods have become ubiquitous in our attempt to induce semantic representations of natural language and support decisions that depend on it. However, learning models that support high-level tasks is difficult, partly because supervision for most of these tasks is very sparse and generating supervision signals for them does not scale. Consequently, making natural language understanding decisions, which typically depend on multiple, interdependent models, becomes even more challenging.

I will describe some of our research on developing machine learning and inference methods in pursuit of understanding natural language text. My focus will be on identifying and using incidental supervision signals in pursuing a range of semantic tasks, and I will point to some of the key challenges as well as some possible directions for studying this problem from a principled perspective.

About the Speaker

Dan Roth is the Eduardo D. Glandt Distinguished Professor in the Department of Computer and Information Science at the University of Pennsylvania, and a Fellow of the AAAS, ACM, AAAI, and ACL.

In 2017 Roth was awarded the John McCarthy Award, the highest award the AI community gives to mid-career AI researchers. Roth was recognized “for major conceptual and theoretical advances in the modeling of natural language understanding, machine learning, and reasoning.”

Roth has published broadly in machine learning, natural language processing, knowledge representation and reasoning, and learning theory, and has developed advanced machine learning based tools for natural language applications that are being used widely. Until February 2017 Roth was the Editor-in-Chief of the Journal of Artificial Intelligence Research (JAIR).

Prof. Roth received his B.A. in Mathematics summa cum laude from the Technion-Israel Institute of Technology, and his Ph.D. in Computer Science from Harvard University in 1995.

Watch This Lecture

"Going to school on a robot: Using telepresence robots to let homebound children go to school."

Children who are homebound because of medical conditions like cancer treatments or immune deficiency are normally offered tutoring for 4-5 hours a week. This tutoring may help them keep up academically, but it does nothing for their friendships or social learning. Recently, technology has created the opportunity to bring these students to school using telepresence robots, units that pair videoconferencing with a remote-controlled robot. How do these students fare? Do they feel “present” in school? How do teachers and classmates accommodate these students? The telepresence robots were designed for use by adults in offices and hospitals; what features should be changed to accommodate children going to school? What other kinds of students might benefit from the robot?

About the Speaker

Judith Olson is the Donald Bren Professor Emerita of Information and Computer Sciences at the University of California, Irvine. Previously, she was at the University of Michigan, where she was a professor in the School of Information, the Business School, and the Psychology Department. She received her Ph.D. in Psychology from the University of Michigan, then held a postdoctoral fellowship at Stanford University before returning to Michigan as a faculty member. Her research focuses on the technology and social practices necessary for successful distance work, encompassing both laboratory and field studies along with agent-based modeling. In 2001, she was one of the first seven inductees into the CHI Academy. She and her husband/collaborator Gary were awarded the 2006 CHI Lifetime Achievement Award. She is an ACM Fellow, and in 2011 she was awarded the ACM-W Athena Lecture (equivalent to woman of the year in computer science).

Watch This Lecture

"Factors influencing outcomes in collaborative writing: An analysis of the processes"

Today’s commercially available word processors allow people to write collaboratively in the cloud, both in the familiar asynchronous mode and now in synchronous mode as well. This opens up new ways of working together. We examined the data traces of collaborative writing behavior in student teams’ use of Google Docs to discover how they are writing together now. We found that student teams write both synchronously and asynchronously, take fluid roles in writing and editing the documents, and show a variety of styles of collaborative writing, including writing from scratch, beginning with an outline, pasting in a related example as a template to organize their own writing, and three more. We also found that the document serves as a place where teams share a number of things not included in the final document, including links or references to related materials, the assignment requirements from the instructor, and informal discussions to coordinate the collaboration or to structure the document. We computed a number of measures to depict a group’s collaboration behavior and asked external graders to score the documents for quality. This allowed us to examine the factors that correlated with high-quality outputs. We then suggest system design implications and behavioral guidelines to help people write together more effectively, and conclude with future research directions.

About the Speaker

Gary M. Olson is Donald Bren Professor Emeritus of Information and Computer Sciences at the University of California, Irvine. Prior to 2008, he was the Paul M. Fitts Professor of Human-Computer Interaction at the University of Michigan. He and Judy study how information technology can play a role in collaboration. Most of this work has focused on collaboration at a distance and led to the often-cited 2000 paper “Distance Matters.” Later they published an edited book, Scientific Collaboration on the Internet (MIT Press, 2008), and more recently, Working Together Apart (Morgan & Claypool, 2014). He is an ACM Fellow, as well as a Fellow of the American Psychological Association and the Association for Psychological Science. He was elected to the CHI Academy in 2003 and, along with Judy Olson, received the SIGCHI Lifetime Achievement Award in 2006. He received the SIGCHI Lifetime Service Award in 2016.

"The Future of Security and Opportunities for Building Greater Leadership"

The internet provides people with amazing opportunities. We are putting more and more of our lives online, which offers us increased convenience and connectivity. However, as our online lives continue to evolve, the threats we face continue to grow. We all play a role in making the internet a safer and more secure place. Ms. Henley will discuss security issues people encounter online, the unique challenges she faces in her role at Facebook, and the importance of building a globally diverse security workforce to tackle the challenges ahead.

About the Speaker

Jennifer (Jenn) Henley is a Director of Security at Facebook. In her role, Jenn is responsible for leadership and planning across the Security organization. On a day-to-day basis she manages the organization’s growth and strategic initiatives, as well as the team’s relationships with the other departments across Facebook. Prior to her employment at Facebook, Jenn was Chief of Staff for the CISO at PayPal. She has over 15 years of industry experience, holds her PMP certification, and is a graduate of St. Mary’s College of California, where she received a B.A. in Communications. She also holds an Honorary Doctorate of Humane Letters from Bay Path University.