Seminar on Ethics and Artificial Intelligence (A.I.)

An interesting event addressing the topic of ethics and AI took place at the Nuffield Theatre Building, University of Southampton, on 13 March 2019. Details and description below:

 

Description

Artificial Intelligence (A.I.) increasingly affects our lives, from job selection, where computer programs are used to automatically sift through applications, to our ability to get loans, credit cards and visas, and the premiums we pay for insurance policies. While such technology can make decisions that are more efficient, more consistent and less prone to subjective human judgement, there are many challenges to be addressed around the fairness, transparency and legality of what are often seen as ‘black boxes’.

This fascinating seminar looks at the philosophical and social arguments for and against the use of A.I., using the complex case study of the military deployment of UAVs (drones) as well as examples taken from everyday life. It also features the views of a cutting-edge industry practitioner who uses data science and machine learning for commercial applications, and who will offer some insights on how to ensure machine learning models do not introduce bias or discrimination.

** Breaking News **

Professor Dame Wendy Hall has now kindly agreed to chair a Q&A session with the three speakers as part of this seminar, offering the chance to hear the thoughts of one of the pioneers of computer science alongside the exciting topics being covered by Professor Christian Enemark, Daniel First and Associate Professor Enrico Gerding.

This event forms part of the Southampton Science and Engineering Festival 2019 #SOTSEF @UoS_Engagement. We are also grateful to QuantumBlack for their support of this event.

*********************************************************************

Speaker profiles:

Dr. Enrico Gerding

Enrico is an Associate Professor in the Agents, Interaction and Complexity (AIC) research group in the Department of Electronics and Computer Science (ECS) at the University of Southampton. He is also a board member of Southampton’s Centre for Machine Intelligence (CMI). He has been an academic at Southampton since 2007 and has been involved in Artificial Intelligence since 1994, when he completed his undergraduate degree on this topic in Amsterdam, followed by his PhD, completed in 2004. The focus of his research is on autonomous decision-making, specifically in the area of multi-agent systems. For the past five years, he has been working on a project called “Meaningful Consent in the Digital Economy”, which concerns data privacy. Consumers are constantly asked to consent to the use of their data, which expects non-experts to read and understand terms and conditions that are often unclear and do not offer a realistic choice. The project looks at using AI to empower consumers and help them make complex privacy-related decisions.

https://www.ecs.soton.ac.uk/people/eg

Professor Christian Enemark

Christian Enemark is Professor of International Relations in the Faculty of Social Sciences, University of Southampton, and Principal Investigator for the DRONETHICS project (ERC project ID 771082). Christian has published numerous books and articles addressing issues of global health politics, international security, and the ethics of armed conflict. These publications include a book on drones entitled Armed Drones and the Ethics of War: Military Virtue in a Post-Heroic Age.

https://www.southampton.ac.uk/politics/about/staff/ce1e16.page

Daniel First

Daniel is a Data Scientist at QuantumBlack. He has worked with doctors and healthcare companies to design innovative, data-driven solutions to improve outcomes for patients, by forecasting and preventing medical risks. He has also developed an approach for data scientists to follow, to ensure that algorithms are being developed in line with ethics and fairness considerations. He has published on the social and political impact of Artificial Intelligence and has spoken on the importance of making machine learning algorithms’ decisions interpretable to humans, most recently at the University of Oxford Mathematical Institute. He graduated from Yale University with a B.A. in Cognitive Science and Neuroscience. He holds an M.Phil. in Philosophy from the University of Cambridge, where he specialised in the history of ethical thought, and an M.Sc. in Data Science from Columbia University.

Dame Wendy Hall, DBE, FRS, FREng is Regius Professor of Computer Science, Pro Vice-Chancellor (International Engagement), and is an Executive Director of the Web Science Institute at the University of Southampton. With Sir Tim Berners-Lee and Sir Nigel Shadbolt, she co-founded the Web Science Research Initiative in 2006 and is the Managing Director of the Web Science Trust, which has a global mission to support the development of research, education and thought leadership in Web Science.

She became a Dame Commander of the British Empire in 2009, and is a Fellow of the Royal Society. She has previously been President of the ACM, Senior Vice President of the Royal Academy of Engineering, a member of the UK Prime Minister’s Council for Science and Technology, a founding member of the European Research Council, Chair of the European Commission’s ISTAG, a member of the Global Commission on Internet Governance, and a member of the World Economic Forum’s Global Futures Council on the Digital Economy.

Dame Wendy was co-Chair of the UK government’s AI Review, which was published in October 2017, and has recently been announced by the UK government as the first Skills Champion for AI in the UK.

 

During the event, I had a few questions in mind, for example: how can a “dynamic AI” system deal with fake or unreal data and behaviour, which can affect the AI’s decisions?

For example, almost every learning and MOOC platform provides learning analytics data that can be used to analyse student engagement or activity, and possibly relate it to student performance. Unfortunately, I don’t think today’s systems can really identify the ‘presence’ of a student, or how a student learns. There are situations where a student opens a video or activity in a browser, but simply ignores it and does other things. Data on the duration a student ‘learns’ is not accurate in this situation. This is in the learning domain, but what about other areas such as security or healthcare? I consider this fake or unreal data that has been recorded honestly. Please do let me know if you know of, or have found, solutions to this.
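One partial mitigation for the open-tab problem is to measure engagement from the gaps between interaction events rather than from raw page-open duration. As a minimal sketch (the event-log format, the `active_time` helper and the 60-second idle threshold are all hypothetical assumptions, not any particular platform’s API):

```python
from datetime import datetime, timedelta

# Assumed idle threshold: a bit longer than a typical player heartbeat interval.
IDLE_THRESHOLD = timedelta(seconds=60)

def active_time(events):
    """Sum the gaps between consecutive events, capping each at the idle threshold.

    Gaps longer than IDLE_THRESHOLD are treated as absence (e.g. a video left
    open in a background tab), so they add nothing to the engagement estimate.
    """
    times = sorted(ts for ts, _ in events)
    total = timedelta()
    for prev, curr in zip(times, times[1:]):
        gap = curr - prev
        if gap <= IDLE_THRESHOLD:
            total += gap  # plausibly continuous engagement
        # else: no events for a long stretch -> count nothing for this gap
    return total

t0 = datetime(2019, 3, 13, 10, 0, 0)
log = [
    (t0, "play"),
    (t0 + timedelta(seconds=30), "heartbeat"),
    (t0 + timedelta(seconds=60), "heartbeat"),
    # 40-minute silence: the video was left open but ignored
    (t0 + timedelta(minutes=41), "pause"),
]
print(active_time(log))  # 0:01:00, not the 41 minutes raw duration suggests
```

This does not detect a student who idles while still generating events, but it at least stops an abandoned browser tab from being recorded as forty minutes of ‘learning’.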

Looking forward to the next AI events!

 

Interesting to read:
The Dark Secret at the Heart of AI
Training AI with Fake Data: A Flawed Solution?
Can Fake Patient Data Drive AI in Healthcare?
4 Things Everyone Should Fear About Artificial Intelligence and the Future