Fully autonomous medical AI is here, but are we treating it with the caution it deserves?
Medical AI can detect the racial identity of patients from x-rays. This is extremely concerning and raises urgent questions about how we test medical AI systems.
The way we currently report human performance systematically underestimates it, making AI look better than it is.
CMS will reimburse an AI stroke detection model through Medicare/Medicaid. It is so darn complicated that it deserves a much deeper look.
AI is finally getting paid, apparently at a rate of $1000 per patient. What?
Super-resolution promises to be one of the most impactful medical imaging AI technologies, but only if it is safe.
This week we saw the FDA approve the first MRI super-resolution product, from the same company that received approval for a similar PET product last year. This news seems as good a reason as any to discuss the safety concerns that I and many other people have with these systems.
Medical AI testing is unsafe, but addressing hidden stratification may be a way to prevent harm without upending the current regulatory environment.
AI competitions are fun: they build community, scout talent, promote brands, and grab attention. But competitions are not intended to develop useful models.
I discuss a piece of medical AI research that has not received much attention but that actually ran a proper clinical trial!
Forget about interpretability, don't share your code or data, and remember, AI is magic.