81 – Consumer Credit, Big Tech and AI Crime


In today's episode, I talk to Nikita Aggarwal about the legal and regulatory aspects of AI and algorithmic governance. We focus, in particular, on three topics: (i) algorithmic credit scoring; (ii) the problem of 'too big to fail' tech platforms and (iii) AI crime. Nikita is a DPhil (PhD) candidate at the Faculty of Law at Oxford, as well as a Research Associate at the Oxford Internet Institute's Digital Ethics Lab. Her research examines the legal and ethical challenges raised by emerging, data-driven technologies, with a particular focus on machine learning in consumer lending. Prior to entering academia, she was an attorney in the legal department of the International Monetary Fund, where she advised on financial sector law reform in the euro area.

You can listen to the episode below or download here. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).


Show Notes

Topics discussed include:

  • The digitisation, datafication and disintermediation of consumer credit markets
  • Algorithmic credit scoring
  • The problems of risk and bias in credit scoring (a minimal worked sketch follows this list)
  • How law and regulation can address these problems
  • Tech platforms that are too big to fail
  • What should we do if Facebook fails?
  • The forms of AI crime
  • How to address the problem of AI crime
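
To make the risk-and-bias problem concrete, here is a minimal, purely illustrative sketch of the kind of pipeline at issue: a logistic-regression credit scorer trained on synthetic data, then audited for approval-rate parity across a protected group. The features, thresholds and data are all invented for illustration; nothing here is drawn from Nikita's research.

```python
# A minimal sketch of an algorithmic credit scorer plus a simple
# approval-rate parity audit. All data and features are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
income = rng.normal(50, 15, n)        # hypothetical income (thousands)
utilisation = rng.uniform(0, 1, n)    # hypothetical credit utilisation
group = rng.integers(0, 2, n)         # hypothetical protected attribute
# Synthetic outcome: higher income and lower utilisation -> more repayment
p_repay = 1 / (1 + np.exp(-(0.05 * (income - 50) - 2 * (utilisation - 0.5))))
repaid = rng.random(n) < p_repay

# Note: the protected attribute is deliberately excluded from training
X = np.column_stack([income, utilisation])
model = LogisticRegression().fit(X, repaid)
approve = model.predict_proba(X)[:, 1] > 0.5  # fixed approval threshold

# Approval-rate (demographic) parity check across the two groups
for g in (0, 1):
    print(f"group {g}: approval rate = {approve[group == g].mean():.2f}")
```

Even with the protected attribute excluded, real-world features often act as proxies for it, which is why the law-and-regulation questions discussed in the episode are not solved simply by deleting a column.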

Relevant Links


80 – Bias, Algorithms and Criminal Justice


Lots of algorithmic tools are now used to support decision-making in the criminal justice system. Many of them are criticised for being biased. What should be done about this? In this episode, I talk to Chelsea Barabas about this very question. Chelsea is a PhD candidate at MIT, where she examines the spread of algorithmic decision making tools in the US criminal legal system. She works with interdisciplinary researchers, government officials and community organizers to unpack and transform mainstream narratives around criminal justice reform and data-driven decision making. She is currently a Technology Fellow at the Carr Center for Human Rights Policy at the Harvard Kennedy School of Government. Formerly, she was a research scientist for the AI Ethics and Governance Initiative at the MIT Media Lab.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).



Show notes

Topics covered in this show include:

  • The history of algorithmic decision-making in criminal justice
  • Modern AI tools in criminal justice
  • The problem of biased decision-making
  • Examples of bias in practice
  • The FAT (Fairness, Accountability and Transparency) approach to bias
  • Can we de-bias algorithms using formal, technical rules? (see the sketch after this list)
  • Can we de-bias algorithms through proper review and oversight?
  • Should we be more critical of the data used to build these systems?
  • Problems with pre-trial risk assessment measures
  • The abolitionist perspective on criminal justice reform
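
As a toy illustration of the formal, technical approach to de-biasing, one standard audit from the FAT literature compares error rates across groups, for example the false positive rates of a risk tool. The numbers below are invented, not real pretrial data.

```python
# A minimal sketch of one formal FAT-style audit: comparing the
# false positive rates of a binary risk tool across two groups.
# All arrays below are invented for illustration.
import numpy as np

predicted_high_risk = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 0])
reoffended          = np.array([0, 0, 1, 0, 0, 1, 0, 0, 0, 1])
group               = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

def false_positive_rate(pred, actual):
    # share flagged high-risk among those who did not reoffend
    negatives = actual == 0
    return (pred[negatives] == 1).mean()

for g in (0, 1):
    m = group == g
    fpr = false_positive_rate(predicted_high_risk[m], reoffended[m])
    print(f"group {g}: FPR = {fpr:.2f}")
```

Part of Chelsea's critique is that audits like this take the underlying arrest and reoffending data at face value, which is exactly what the later questions in the list challenge.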

Relevant Links


79 – Is There A Techno-Responsibility Gap?


Daniel Tigard

What happens if an autonomous machine does something wrong? Who, if anyone, should be held responsible for the machine's actions? That's the topic I discuss in this episode with Daniel Tigard. Daniel Tigard is a Senior Research Associate in the Institute for History & Ethics of Medicine, at the Technical University of Munich. His current work addresses issues of moral responsibility in emerging technology. He is the author of several papers on moral distress and responsibility in medical ethics as well as, more recently, papers on moral responsibility and autonomous systems. 

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).


Show Notes


Topics discussed include:

 
  • What is responsibility? Why is it so complex?
  • The three faces of responsibility: attribution, accountability and answerability
  • Why are people so worried about responsibility gaps for autonomous systems?
  • What are some of the alleged solutions to the "gap" problem?
  • Who are the techno-pessimists and who are the techno-optimists?
  • Why does Daniel think that there is no techno-responsibility gap?
  • Is our application of responsibility concepts to machines overly metaphorical?
 

Relevant Links



Lab Culture Ep. 22: Life as a public health lab scientist testing for COVID-19

Matt Sinn and Jessica Bauer pose with the Missouri state flag

Jessica Bauer and Matt Sinn are scientists at the Missouri State Public Health Laboratory. On this episode, they shared their experiences performing COVID-19 testing: working long hours seven days a week and supporting their staff while trying not to burn out themselves. As they describe in this conversation, the experience has been like nothing they ever could have expected.

Jessica Bauer, molecular unit chief
Matthew Sinn, molecular laboratory manager

Listen here or wherever you get your podcasts:

Links:

Missouri State Public Health Laboratory
APHL: Responding to the COVID-19 Pandemic
COVID-19 posts on APHLblog.org 


Tic-Tac-Toe the Hard Way is a podcast about the human decisions in building a machine learning system

From Google’s People + AI Research team, David Weinberger and Yannick Assogba build a machine learning system that plays Tic-Tac-Toe. In this ten-part podcast series, they discuss the choices, and not just the technical ones, made along the way:

A writer and a software engineer engage in an extended conversation as they take a hands-on approach to exploring how machine learning systems get made and the human choices that shape them. Along the way they build competing tic-tac-toe agents and pit them against each other in a dramatic showdown!

This is a podcast for anyone, from curious non-techies to developers dabbling in machine learning, interested in peeking under the hood at how people make and shape ML systems.
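
For a flavour of the kind of choices involved, here is a hedged sketch, in no way the podcast's actual code, that pits two toy tic-tac-toe agents against each other: a purely random player and a player with one-step lookahead.

```python
# A tiny sketch (not the podcast's actual system) of the "competing
# agents" idea: a random player versus a one-step-lookahead player.
import random

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_agent(board, mark):
    return random.choice([i for i, v in enumerate(board) if not v])

def greedy_agent(board, mark):
    # take an immediately winning move if one exists, else move randomly
    for i in [i for i, v in enumerate(board) if not v]:
        trial = board[:]
        trial[i] = mark
        if winner(trial) == mark:
            return i
    return random_agent(board, mark)

def play(agent_x, agent_o):
    board, agents = [""] * 9, {"X": agent_x, "O": agent_o}
    for turn in "XOXOXOXOX":
        board[agents[turn](board, turn)] = turn
        if winner(board):
            return turn
    return "draw"

results = [play(greedy_agent, random_agent) for _ in range(1000)]
print({r: results.count(r) for r in ("X", "O", "draw")})
```

Even in this toy, human choices abound: how the opponent is modelled, how many games count as a fair showdown, and what a draw is worth.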

I’m a few episodes in. It’s entertaining.

This is an especially good listen if you’re interested in machine learning, but aren’t quite sure about how it works beyond a bunch of data going into a black box.


78 – Humans and Robots: Ethics, Agency and Anthropomorphism


Sven Nyholm

Are robots like humans? Are they agents? Can we have relationships with them? These are just some of the questions I explore with today's guest, Sven Nyholm. Sven is an assistant professor of philosophy at Utrecht University in the Netherlands. His research focuses on ethics, particularly the ethics of technology. He is a friend of the show, having appeared twice before. In this episode, we talk about his excellent recent book, Humans and Robots: Ethics, Agency and Anthropomorphism.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here).


Show Notes:

Topics covered in this episode include:
  • Why did Sven play football with a robot? Who won?
  • What is a robot?
  • What is an agent?
  • Why does it matter if robots are agents?
  • Why does Sven worry about a normative mismatch between humans and robots? What should we do about this normative mismatch?
  • Why are people worried about responsibility gaps arising as a result of the widespread deployment of robots?
  • How should we think about human-robot collaborations?
  • Why should human drivers be more like self-driving cars?
  • Can we be friends with a robot?
  • Why does Sven reject my theory of ethical behaviourism?
  • Should we be pessimistic about the future of roboethics?

Relevant Links


 

77 – Should AI be Explainable?


Scott Robbins

If an AI system makes a decision, should its reasons for making that decision be explainable to you? In this episode, I chat to Scott Robbins about this issue. Scott is currently completing his PhD in the ethics of artificial intelligence at Delft University of Technology. He has a B.Sc. in Computer Science from California State University, Chico and an M.Sc. in Ethics of Technology from the University of Twente. He is a founding member of the Foundation for Responsible Robotics and a member of the 4TU Centre for Ethics and Technology. Scott is skeptical of AI as a grand solution to societal problems and argues that AI should be boring.

You can download the episode here or listen below. You can also subscribe on Apple Podcasts, Stitcher, Spotify and other podcasting services (the RSS feed is here). 



Show Notes

Topics covered include:
  • Why do people worry about the opacity of AI?
  • What's the difference between explainability and transparency?
  • What's the moral value or function of explainable AI?
  • Must we distinguish between the ethical value of an explanation and its epistemic value?
  • Why is it so technically difficult to make AI explainable? (see the surrogate-model sketch after this list)
  • Will we ever have a technical solution to the explanation problem?
  • Why does Scott think there is a Catch-22 involved in insisting on explainable AI?
  • When should we insist on explanations and when are they unnecessary?
  • Should we insist on using boring AI?
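
One widely used technical response to opacity, sketched below with synthetic data and an assumed scikit-learn setup, is post-hoc explanation: fitting a small, readable "surrogate" model to mimic a black box. Scott's scepticism bears directly on moves like this, since the surrogate's stated reasons need not be the black box's actual reasons.

```python
# A minimal sketch of one post-hoc explanation technique: fit a small,
# readable decision tree as a surrogate for an opaque model's outputs.
# This is an illustration of the genre, not an endorsement of it.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's predictions, not the true labels
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the readable surrogate agrees with the black box
print("fidelity:", (surrogate.predict(X) == black_box.predict(X)).mean())
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))
```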
 

Relevant Links

 

Explore Explain is a new visualization podcast about how the charts get made

From Andy Kirk, there’s a new visualization podcast in town:

Explore Explain is a new data visualisation podcast and video series. Each episode is based on a conversation with visualisation designers to explore the design story behind a single visualisation, or series of related works. The conversations provide an opportunity to explain their design process and to share insight on the myriad little decisions that underpin the finished works. It also shines a light on the contextual circumstances that shaped their thinking.

Audiences will gain an appreciation of the what, the why and the how, learning about the hidden problems and challenges, the breakthroughs and the eureka moments, the pressures and frustrations, the things that were done and the things that were not done, as well as the successes and the failures.

My main podcast-listening mode was while driving, so I’m way behind, but this sounds promising. It’s right in line with Kirk’s Little of Visualization Design blog project.


76 – Surveillance, Privacy and COVID-19

Carissa Veliz

How do we get back to normal after the COVID-19 pandemic? One suggestion is that we use increased amounts of surveillance and tracking to identify and isolate infected and at-risk persons. While this might be a valid public health strategy, it does raise some tricky ethical questions. In this episode I talk to Carissa Véliz about these questions. Carissa is a Research Fellow at the Uehiro Centre for Practical Ethics at Oxford and the Wellcome Centre for Ethics and Humanities, also at Oxford. She is the editor of the Oxford Handbook of Digital Ethics as well as two forthcoming solo-authored books: Privacy is Power (Transworld) and The Ethics of Privacy (Oxford University Press).

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).


Show Notes

Topics discussed include:
  • The value of privacy
  • Do we balance privacy against other rights/values?
  • The significance of consent in debates about privacy
  • Digital contact tracing and digital quarantines (see the sketch after this list)
  • The ethics of digital contact tracing
  • Is the value of digital contact tracing being oversold?
  • The relationship between testing and contact tracing
  • COVID-19 as an important moment in the fight for privacy
  • The data economy in light of COVID-19
  • The ethics of immunity passports
  • The importance of focusing on the right things in responding to COVID-19
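
For context on how digital contact tracing can be privacy-preserving, here is a heavily simplified sketch of the rotating-token idea behind decentralised protocols such as DP-3T. The function names and parameters are illustrative assumptions, not a faithful implementation of any deployed system.

```python
# A simplified sketch of rotating pseudonymous tokens for contact
# tracing. Illustrative only; not a faithful protocol implementation.
import hashlib
import os

def daily_tokens(seed: bytes, day: int, n: int = 96):
    # derive short-lived ephemeral IDs (roughly one per 15 minutes)
    # from a per-user seed and the day number
    return {hashlib.sha256(seed + day.to_bytes(4, "big")
                           + i.to_bytes(2, "big")).hexdigest()
            for i in range(n)}

alice_seed, bob_seed = os.urandom(16), os.urandom(16)

# Bob's phone logs tokens it heard over Bluetooth (here: some of Alice's)
heard_by_bob = set(list(daily_tokens(alice_seed, day=1))[:5])

# If Alice tests positive, she publishes her seed; Bob rechecks locally
published = daily_tokens(alice_seed, day=1)
print("exposure detected:", bool(heard_by_bob & published))
```

The privacy-relevant point is that matching happens locally on each phone; no central authority learns who met whom.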
 

Relevant Links

 

75 – The Vital Ethical Contexts of Coronavirus


David Shaw

There is a lot of data and reporting out there about the COVID-19 pandemic. How should we make sense of that data? Do the media narratives misrepresent or mislead us as to the true risks associated with the disease? Have governments mishandled the response? Can they be morally blamed for what they have done? These are the questions I discuss with my guest on today's show: David Shaw. David is a Senior Researcher at the Institute for Biomedical Ethics at the University of Basel and an Assistant Professor at the Care and Public Health Research Institute, Maastricht University. We discuss some recent writing David has been doing on the Journal of Medical Ethics blog about the coronavirus crisis.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Stitcher and a range of other podcasting services (the RSS feed is here).


Show Notes

Topics discussed include:
  • Why is it important to keep death rates and other data in context? (see the worked example after this list)
  • Is media reporting of deaths misleading?
  • Why do the media discuss 'soaring' death rates and 'grim' statistics?
  • Are we ignoring the unintended health consequences of COVID-19?
  • Should we take the economic costs more seriously given the link between poverty/inequality and health outcomes?
  • Did the UK government mishandle the response to the crisis? Are they blameworthy for what they did?
  • Is it fair to criticise governments for their handling of the crisis?
  • Is it okay for governments to experiment on their populations in response to the crisis?
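
A quick worked example, with invented numbers, of David's point about context: the country with more total deaths can still have the lower per-capita death rate.

```python
# Hypothetical numbers illustrating why raw death counts need context:
# the country with more deaths can have the lower per-capita rate.
countries = {
    "Country A": {"deaths": 40_000, "population": 66_000_000},
    "Country B": {"deaths": 9_000,  "population": 5_500_000},
}
for name, d in countries.items():
    per_100k = d["deaths"] / d["population"] * 100_000
    print(f"{name}: {d['deaths']:,} deaths = {per_100k:.0f} per 100k")
```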

Relevant Links