Medical AI can detect the racial identity of patients from x-rays. This is extremely concerning and raises urgent questions about how we test medical AI systems.
The way we currently report human performance systematically underestimates it, making AI look better than it is.
Reports that CT scanning may be better than PCR testing for COVID-19 are flawed and almost certainly wrong.
Medical AI testing is unsafe, but addressing hidden stratification may be a way to prevent harm without upending the current regulatory environment.
AI competitions are fun. They build communities, scout talent, promote brands, and grab attention. But competitions are not intended to develop useful models.
I discuss a piece of medical AI research that has not received much attention but actually ran a proper clinical trial!
My first impressions of these datasets. How do they measure up, and how useful might they be?
Humans explain their decisions with words. In our latest work, we suggest AI systems should do the same.
Medical data is horrible to work with, but deep learning can quickly and efficiently solve many of the problems it presents.
Our team has post-doc and PhD positions available, so come to unexpectedly great Adelaide!