Problematic AI:

Finding the Best Way Forward

Panelist & Moderator Biographies

Panel 1 Topic: What is AI, how is it used and how can it produce erroneous results?

Gregory Neal Akers

Owner, Greg Akers Consulting LLC

Greg Akers was Senior Vice President of Advanced Security Research & Government and Chief Technology Officer (CTO) within the Security & Trust Organization (STO) group at Cisco. With more than two decades of executive experience, Akers brought a wide range of technical and security knowledge to the role. A major focus of his group was to expand security awareness and launch product resiliency initiatives throughout Cisco’s development organization to deliver high-quality, secure products to customers. He also served as executive sponsor of the Cisco Disability Awareness Network.

Akers joined Cisco in 1993 and held a variety of technical, managerial, and executive roles there, including networking engineer, Vice President for the Worldwide Technical Assistance Center, Senior Vice President and CTO of Services, and Senior Vice President of the Global Governments Solutions Group. He also holds the CCIE certification.

In addition, Akers is an Internet security and critical infrastructure protection advisor to Cisco customers and to the U.S. government.  He regularly advises and directs activities relative to technology and security matters of domestic and international importance.  Akers has also advised the U.S. Department of Defense and the federal intelligence community for more than fifteen years. 

Before joining Cisco, Akers spent more than 15 years designing, building, and running large networks for Fortune 100 companies. He held senior technical and leadership roles at Fechheimer Brothers, a Berkshire Hathaway holding, and at Procter & Gamble.

Today, Greg Akers advises large commercial organizations and startups, continues to serve as a government advisor, and sits on several for-profit and not-for-profit boards.

Akers holds a Bachelor of Science degree in chemical engineering from the University of Akron.


Dr. Blake Anderson

Principal Engineer, Cisco

Blake is a Principal Engineer in Cisco Security’s CTO office. He has been doing machine learning/security-related research for 12+ years, which has resulted in 20+ academic publications, 40+ patents, and many academic/industry speaking engagements. He received his PhD from the University of New Mexico where he studied combining multiple data views to improve malware detection. Blake’s current interests revolve around network traffic analysis where he has led the data analysis research that resulted in Cisco’s Encrypted Traffic Analytics solution and Encrypted Visibility Engine feature.


Dr. Laura Freeman

Deputy Director, Virginia Tech National Security Institute

Dr. Laura Freeman is a Research Associate Professor of Statistics who serves jointly as the Deputy Director of the Virginia Tech National Security Institute and Assistant Dean for Research for the College of Science. Her research leverages experimental methods that bring together cyber-physical systems, data science, artificial intelligence (AI), and machine learning to address critical challenges in national security. She develops new methods for test and evaluation focusing on emerging system technology, and she focuses on transitioning emerging research to solve challenges in Defense and Homeland Security. She is also a hub faculty member in the Commonwealth Cyber Initiative and leads research in AI Assurance. As Assistant Dean for Research, she works to shape research directions and collaborations across the College of Science in the Greater Washington, D.C. area. Dr. Freeman has a B.S. in Aerospace Engineering, an M.S. in Statistics, and a Ph.D. in Statistics, all from Virginia Tech. Her Ph.D. research was on the design and analysis of experiments for reliability data.


Dr. Emre Kazim

Co-Founder of Holistic AI

Dr. Emre Kazim is co-founder of Holistic AI, an AI Risk Management and Auditing firm with a mission to empower enterprises to adopt and scale AI with confidence. He is also a research fellow in the computer science department of University College London (UK). He has published dozens of papers in the field of AI law, governance, and auditing. He holds a Ph.D. in Philosophy from King’s College London.

Panel 1 Moderator:

Dr. Yan Lu

Research Assistant Professor, Virginia Modeling, Analysis and Simulation Center, Old Dominion University

Dr. Yan Lu joined the Center for Secure & Intelligent Critical Systems (CSICS) at the Virginia Modeling, Analysis and Simulation Center (VMASC), Old Dominion University, in 2020. Her research focuses on addressing security and performance challenges in Trustworthy AI; it aims to better support the sustainable growth of AI and deep learning models, identify and mitigate potential cyber risks at every stage of the AI lifecycle, and address the security challenges in major AI and deep learning applications. She received her Ph.D. from the Department of Computational Modeling and Simulation Engineering, Old Dominion University (’20), an M.Sc. in Computer Science from Virginia Commonwealth University (’09), an M.Sc. in Circuits and Systems from the Chinese Academy of Sciences (’07), and a B.A. in Computer Science from Beijing Jiaotong University (’04). She is a member of IEEE and of ACM and its Special Interest Group on Simulation (SIGSIM). She received the Gene Newman Award for Excellence in Modeling and Simulation Research from ODU in 2019, and her work on deep learning for effective refugee tent extraction from satellite images was covered by the leading engineering magazine IEEE Spectrum in 2020.

Panel 2 Topic: How to detect and manage AI errors and risks; human baseline and engagement.

Abby Gilbert

Head of Research at the Institute for the Future of Work (IFOW)

Abby is the Head of Research at the Institute for the Future of Work (IFOW). IFOW’s mission is to build a fairer future of better work through technological change, and it works at the intersection of academia, business, civil society, and policy to achieve this. In a recent project, Abby led the development of a Good Work Algorithmic Impact Assessment process covering the lifecycle of an AI system within a workplace.


Dennis Hirsch

Professor, Moritz College of Law; Director, Program on Data and Governance

Dennis Hirsch is a Professor of Law and of Computer Science at The Ohio State University. He serves as Faculty Director of the OSU Program on Data and Governance, which conducts research on and convenes conversations about the law, policy, and ethics of advanced analytics and AI. He also serves as Co-Director of the Responsible Data Science Community of Practice at Ohio State. A graduate of Yale Law School, Professor Hirsch is a recognized expert on the governance of advanced analytics and AI, having testified on this topic before both the US Senate Subcommittee on Privacy, Technology and the Law and the Federal Trade Commission. Professor Hirsch has published dozens of articles and book chapters and an award-winning book. In 2010, he served as Fulbright Senior Professor at the University of Amsterdam, where he taught privacy law and researched Dutch data protection codes of conduct. He co-organized and teaches in the University of Amsterdam’s Summer Course on Privacy Law and Policy. Professor Hirsch has also served as Chair of the AALS Committee on Defamation and Privacy, Reporter for the Uniform Law Commission Drafting Committee on Employee and Student Privacy, member of the Ohio Attorney General’s Task Force on Facial Recognition, and member of the Smart Columbus Privacy and Data Security Board.


Katie Shay

Associate General Counsel & Director of Human Rights, Cisco Systems, Inc.

Katie Shay (she/her) is Associate General Counsel and Director of Human Rights at Cisco Systems, Inc. She leads Cisco’s efforts to integrate a human rights perspective into the way that Cisco conducts business, throughout the value chain. Prior to joining Cisco, Katie served as Business and Human Rights Counsel at Yahoo, where she managed human rights programs related to privacy and freedom of expression across the global business. Katie earned her J.D. from Georgetown University Law Center and her B.A. in English Literature from Marquette University.


Steven Truitt

Principal Program Manager, Microsoft

Mr. Steven Truitt is a Principal Program Manager at Microsoft focusing on practical applications of hyper-scale AI for difficult cognitive problems. Prior to joining Microsoft in 2021, Steven was CTO of Kimetrica, a humanitarian technology company that used cutting-edge algorithms to predict and warn against instability and acute resource crises. He served as a PI on the DARPA World Modelers and Geospatial Cloud Analytics programs and worked on advancing remote sensing technologies within the IC and DoD through various senior leadership and advisory roles at startups and FFRDCs.


Nicolas Vermeys

Associate Director of the Cyberjustice Laboratory & Full Professor, Université de Montréal

Nicolas Vermeys, LL.D. (Université de Montréal), LL.M. (Université de Montréal), CISSP, is the Director of the Centre de recherche en droit public (CRDP), the Associate Director of the Cyberjustice Laboratory, and a Professor at the Université de Montréal’s Faculté de droit. He has also acted as a visiting professor of law at both William & Mary (USA) and the University of Fortaleza (Brazil).

Mr. Vermeys is a member of the Quebec Bar and a Certified Information Systems Security Professional (CISSP) as recognized by (ISC)², and is the author of numerous publications relating to the impact of technology on the law, including Droit codifié et nouvelles technologies: le Code civil (Yvon Blais, 2015) and Responsabilité civile et sécurité informationnelle (Yvon Blais, 2010).

Mr. Vermeys’ research focuses on legal issues pertaining to artificial intelligence, information security, developments in the field of cyberjustice, and other questions relating to the impact of technological innovations on the law. He is often invited to speak on these topics by the media, and regularly lectures for judges, lawyers, professional orders, and government organizations, in Canada and abroad.

Panel 2 Moderator:

Iria Giuffrida

Professor of the Practice of Law, William & Mary Law School

Dr. Iria Giuffrida is a Professor of the Practice of Law at William & Mary Law School and serves as Visiting Faculty for Business Law at the Raymond A. Mason School of Business.

Professor Giuffrida’s research focuses on the legal issues arising from the increasing use of artificial intelligence, the rapid growth of the Internet of Things, and related emerging technologies. She has an interest in smart cities, which she researches through the lens of governance and accountability as well as cybersecurity. Professor Giuffrida teaches the Law School’s innovative artificial intelligence course, and co-teaches an interdisciplinary seminar on cyber and information security. She is involved in grant-funded experiential projects aimed at increasing diversity in the cybersecurity industry.

In her previous professional life, she was a commercial litigator and gained substantial experience in international alternative dispute resolution. Now a “recovering” litigator, she is also drawn to the interaction between technology and the administration of justice.

Professor Giuffrida is admitted to practice in the State of New York, is a Solicitor in England and Wales, and has qualified as a Solicitor in the Republic of Ireland. She is also a certified information privacy professional (CIPP/US).

Professor Giuffrida graduated with an LL.B. in English and European Law from Queen Mary, University of London. She was the 2001 Drapers’ Scholar at William & Mary Law School, where she obtained an LL.M. She was later awarded a Ph.D. in Law by Queen Mary, University of London.

Panel 3 Topic: How well do current regulations/policies address these challenges?

Peter Chapman

Associate Director and Tech and Human Rights Lead, Article One

As Associate Director & Human Rights and Technology Lead, Peter Chapman leads Article One’s Washington DC office and advises companies and partners on business and human rights priorities. This includes conducting human rights impact assessments at the corporate, country and product level and helping companies develop robust human rights and governance strategies to mitigate risks associated with emerging technologies. To this role Peter brings his experience working with companies, governments, philanthropic organizations and civil society to improve governance processes, strengthen participation and advance human rights.

Prior to Article One, Peter worked as Senior Legal Counsel with Twitter’s Safety, Content and Law Enforcement team. Peter co-led the development of Twitter’s global Content Governance Initiative, which seeks to advance a consistent and principled approach to the development, enforcement and assessment of Twitter’s global rules and policies.

Peter has extensive human rights and governance experience. Peter worked in Ethiopia as an independent advisor on inclusive governance and access to justice, working with a range of non-profit and multilateral organizations, including the Pathfinders for Peaceful, Just and Inclusive Societies, the World Bank and the World Justice Project. In this role he led the development of publications and resources including the World Bank’s forthcoming Good Practices in National Systems for Environmental and Social Impact and the World Justice Project’s Grasping the Justice Gap: Opportunities and Challenges for People-Centered Justice Data. For seven years he helped to lead the Open Society Justice Initiative’s work on legal empowerment, sustainable development and inclusive governance from Washington DC and Budapest, Hungary. He played a leading role in advancing Open Society Foundation’s strategy to strengthen governance and justice through the Sustainable Development Goals, including through a multistakeholder partnership with the Organization for Economic Co-operation and Development to strengthen people-focused justice measurement. Prior to joining the Justice Initiative, Peter supported governance and justice reform efforts in Africa and East Asia with the World Bank’s Justice for the Poor program and worked on extractive industries, dispute resolution and access to justice with the Carter Center in Monrovia, Liberia. Peter has advanced human rights and governance work in a range of countries including Bangladesh, Cambodia, Côte d’Ivoire, Ethiopia, Indonesia, Kenya, Liberia, Nepal, Sierra Leone, Solomon Islands, South Africa, Uganda and the US.

Peter is an attorney, holding a Juris Doctor from the Washington College of Law, American University. He has a Master of Arts in international affairs from the School of International Service, American University and a Bachelor of Arts in peace studies and political science from Colgate University. He is a Non-Resident Fellow with New York University’s Center on International Cooperation. He and his family live in Washington DC.

Follow Peter on Twitter at @pfchap15.


Brenda Leong

Partner, BNH.AI

Brenda Leong is a partner at BNH.AI, a boutique law firm uniquely founded as a partnership between lawyers and data scientists and dedicated entirely to developing policies and practices around AI governance, including applying model risk management frameworks, performing model audits, creating de-identification architecture and certification, and designing and automating AI policies and procedures. Previously, Brenda was senior counsel and director of AI and ethics at the Future of Privacy Forum, where she oversaw the development and analysis of AI and ML. She is a recognized expert on the responsible use of biometrics and digital identity, with a focus on facial recognition, facial analysis, and emerging issues around voice-operated systems. Prior to her work at FPF, Brenda served in the US Air Force. She is a 2014 graduate of George Mason University School of Law.


Reva Schwartz

Research Scientist, National Institute of Standards and Technology (NIST)

Reva Schwartz is a research scientist in the Information Technology Laboratory (ITL) at the National Institute of Standards and Technology (NIST). She serves as Principal Investigator on Bias in Artificial Intelligence for NIST’s Trustworthy and Responsible AI program. Her research focuses on organizational practices, and the role of expertise and expert judgment in socio-technical systems. She has advised federal agencies about how experts interact with automation to make sense of information in high-stakes settings.

Reva’s background is in linguistics and experimental phonetics. It includes a posting as a forensic scientist at the United States Secret Service for almost 15 years, advising on forensic science practice during a previous stint at NIST, a temporary duty assignment at the National Security Agency, and an adjunct researcher position at the Johns Hopkins University Human Language Technology Center of Excellence.

Having worked as a forensic scientist for more than a decade, she has expanded her interests to socio-technical systems, organizational behavior, practice improvements for interdisciplinary teams, and the evaluation of expert-driven systems. Reva advocates for a socio-technical systems approach to AI practice, including human-centered design processes and the evaluation of AI systems in real-world contexts.


Jessica Smith

Principal of Online Safety, Ofcom

Jessica is a Technology Policy Manager at Ofcom, the UK’s communications regulator. Ofcom’s principal duty in carrying out its functions is to further the interests of citizens in relation to communications matters and the interests of consumers in relevant markets. At Ofcom, Jessica works on AI regulation and supports the Digital Regulation Cooperation Forum (DRCF)’s algorithmic processing workstream. Previously, Jessica worked at the Centre for Data Ethics and Innovation, an expert body advising the UK government on ethical development and use of algorithms and AI.

Panel 3 Moderator:

Dr. Stephanie Blackmon

Associate Professor of Higher Education, William & Mary School of Education

Dr. Stephanie J. Blackmon is an Associate Professor of Higher Education in the William & Mary School of Education. Her early research focused on teaching and learning with an emphasis on technology integration in higher education, and she has conducted several studies and written numerous papers on topics such as virtual worlds, instructors’ and students’ experiences with learning management systems, and massive open online courses (MOOCs).

Dr. Blackmon has expanded her work to include a focus on the broader applications of technology integration because of their impact on higher education. Her research explores the qualitative experiences people have with technology integration in higher education and professional development settings; experiential learning in technology-related settings; trust and privacy in learning and data analytics use; and trust, privacy, and security in the use of apps and internet-connected devices such as wearable, mobile, and in-home technologies. Dr. Blackmon has also co-developed a framework for interdisciplinary learning analytics use; she was the lead PI for an interdisciplinary project on the experiences people with disabilities have with trust and internet-connected devices; and she was the lead PI for an interdisciplinary project on experiential learning and technological development. The goal of Dr. Blackmon’s work is to move research to practice, particularly as it relates to technology use and development.
