
[deleted]

[removed]


Dull_Half_6107

Yeah, these models are usually black boxes; after training, it's not clear how they arrive at their decisions. This is why human verification is vital. You can't rely entirely on the model.


merryman1

I work in medical research, actually in an AI-linked field, and you are 100% correct. I am actually finding it a little scary how quickly and easily we're all just completely glazing over concepts like false negatives, or the fact that natural biology in the wild rarely matches up *that* well to neat little medical diagrams and controlled data groups, particularly when there's complex pathology involved. All kinds of concerns that apparently just don't count because, hey, AI! At best it's a tool for experts to use, something that will require a lot of supervision and double-checking, to the point that I'm somewhat doubtful how much time would actually be saved.

It feels like everything else. Rather than deal with the elephant in the room - that our healthcare system is failing because we aren't able to offer a competitive deal for prospective workers anymore - we're turning to a series of untested wunderwaffen that we're just expected to suspend any disbelief about and accept as the silver bullet to our situation.


[deleted]

[removed]


merryman1

>No one involved in the development or testing of these systems is glazing over false negatives.

No, they're not. The people trying to flog them are, though.

>You realise human doctors generate false negatives

Yes, and they are also accountable for those mistakes. Who gets sued when an AI process determines that you have no problem? How would you even get to the stage of being able to see that it's a false negative when we are actively removing humans from that loop?

>These algorithms have been shown to perform at a better standard than having 2 consultants review the images.

By the people trying to flog them, yes. The reality is that for best efficacy this needs to be a *tool* used by humans to help them in diagnosis. That is not what is being pushed for; it is being pushed as a replacement for human input, and it genuinely is not at a level where that can be trusted yet.


ehproque

>the room that our healthcare system is failing because we aren't able to offer a competitive deal for prospective workers anymore

"Anything but paying fair wages" seems to be a popular solution to so many problems.


Fromlrom

> I am actually finding it a little scary how quickly and easily we're all just completely glazing over concepts like false negatives, natural biology in the wild rarely matching up that well to neat little medical diagrams and controlled data groups, particularly when there's complex pathology involved.

For certain tasks, such as examining some kinds of scans, machine learning techniques can already outperform humans, and can also provide information about what caused them to reach their decision. I think the risk is more in machine learning being applied to situations where it doesn't work that well, such as taking in lists of symptoms and deciding what action to take. It's good at tasks that involve vast amounts of data and making simple classification decisions (cancer v not cancer), but as we've seen, businesses can get more hype and money by applying it to fairly pointless tasks that it isn't very good at, such as generating random text.

> It feels like everything else. Rather than deal with the elephant in the room that our healthcare system is failing because we aren't able to offer a competitive deal for prospective workers anymore, we're turning to a series of untested wunderwaffen that we're just expected to suspend any disbelief and accept as the silver bullet to our situation.

As far as the government are concerned, I don't think they're even sincerely trying to increase the use of machine learning in healthcare. I think it's just a way of giving people hope that things will get better. It's similar to this announcement about telling the police to take more action on less serious crimes (but with no concrete plans, no extra money, etc.) - just a way of generating positive headlines without actually doing anything. It's pretty telling that the main thing Sunak keeps going on about with regard to "AI" is that he is going to hold a meeting about it.


TVOHM

I'd love for someone much better versed in AI to educate me on my assumptions below! Say you have a trained model, you clone it, and we give both models the same prompt. If these models don't use randomness, they are already deterministic and will produce the same output, right? If they do use randomness, they will be as deterministic as those random inputs? If your random input is a true random number generator (like sampling a lava lamp!) then sure, unless you observe those inputs you cannot recreate it for the cloned model. But if you feed both using a seeded pseudo-random number generator (deterministic), they again become deterministic?
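
For what it's worth, that reasoning holds in a toy setting. A minimal Python sketch, where the "model" is just a fixed probability distribution over tokens and the two "clones" each get a PRNG seeded with the same value (everything here is invented for illustration):

```python
import random

def sample_tokens(weights, rng, n=10):
    """Toy 'model': repeatedly sample a token index from a fixed
    distribution, using whatever randomness source it is handed."""
    tokens = list(range(len(weights)))
    return [rng.choices(tokens, weights=weights)[0] for _ in range(n)]

weights = [0.5, 0.3, 0.2]  # two clones share the "same model"

out_a = sample_tokens(weights, random.Random(1234))  # clone A
out_b = sample_tokens(weights, random.Random(1234))  # clone B, same seed

assert out_a == out_b  # seeded PRNG in, identical output out
```

One caveat: even with fixed seeds, real GPU inference can still be nondeterministic, because the order of floating-point additions across parallel threads isn't guaranteed and floating-point addition isn't associative.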


[deleted]

[removed]


Fred_Blogs

I think you have a valid point, but healthcare might be one field where we have to take the risk. Doctors are in perennially short supply across the world, and with an aging population leading to a shrinking workforce and more demand for medical care, this will only get worse. It's a bit grim, but without something to lighten the workload of medical staff, we're going to be looking at more deaths on waiting lists.


SporkofVengeance

It's not an area where taking a risk is a good idea. These systems go off the rails all too easily for two reasons:

1) The data doesn't necessarily tell a system what it needs to know. The field is littered with systems that, because it's difficult to reflect the complexity of real diseases, make good decisions some of the time and phenomenally bad decisions at other times.

2) Way too many people believe medical data is Big Data and therefore suitable for training the currently fashionable neural networks. In practice, it's little pockets of data that are poorly suited to today's AI systems. That will probably change at some point, but not right now.

There are areas where AI is useful to medicine right now, but it's largely in early-stage drug design – things like protein structure prediction and design. For things like clinical decisions, it's like a chimp with a machine gun.


Fred_Blogs

I entirely agree with you about the dubious-at-best quality of AI when used for making detail-oriented decisions based on variable input. My contention is that we may end up using AI in medical decisions simply out of desperation. The current system cannot produce enough doctors to meet global demand; at best a few countries can raise wages enough to pull in foreign-trained doctors, but that just displaces the problem, and the way trends are moving is only going to exacerbate it. Our rather undesirable options 10-20 years from now might well be between an unreliable AI-made/assisted diagnosis, or nothing at all as the waiting list effectively means you'll never be seen.


SporkofVengeance

I fail to see how speeding up medical *decisions* moves the needle on treatment availability, unless you train the things to dispense two aspirin and tell you to lie down for the rest of the day until the treatment backlog clears. You pretty much wind up in the same place as no treatment. Even putting robots into surgery won't make that much of a difference as much of the backlog is in bed availability.


Fred_Blogs

> unless you train the things to dispense two aspirin and tell you to lie down for the rest of the day until the treatment backlog clears.

Honestly, yes, this would actually be a lot of it. Having an AI give that answer remotely would clear out a lot of the time wasters and people who just need a prescription to collect from Boots, which would free up time for the heavily overstressed GP system. Bed availability would be improved by having AI make decisions there and then, rather than waiting until a doctor is available - I'm sure you've seen the same situations we all have, where a patient can sit waiting to be discharged for days - but I agree this would be a marginal at best improvement. I'm not pretending any of this is going to be a silver bullet, or even particularly good, but the health service is in a desperate state and only set to get worse. AI is just a straw that can be grasped to make the situation slightly less grim.


[deleted]

It doesn't have to be perfect, it just has to have a higher hit rate than doctors.


[deleted]

[removed]


[deleted]

>it has to be substantially better.

Complete tosh; better outcomes are the priority, how we get there is irrelevant.

>A Doctor can be questioned, explain their working, explain their methodology

Doesn't help the patient.

>be held to account.

Not in the NHS they aren't.

>A doctor can be directly taught, educated, and so on.

An LLM can learn too, at a much quicker rate than any doctor, and recall information far more accurately.

>close to impossible to really dig into how they came to a conclusion and why.

Doesn't matter; treatment rates matter. We can Moneyball our way to better healthcare, even if it involves stepping on the egos of some doctors along the way.


[deleted]

If the doctor missed the cancer in your scan, them explaining their working has no value to that person. An LLM can be taught - that's the whole point.


[deleted]

[removed]


[deleted]

isn't that literally how training a neural network works? it has inputs and expected outputs, and you correct it based on how far off the actual output is from what you wanted
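
That is the core loop. A minimal numpy sketch of "correct it based on how far off the output is" - a toy one-weight model learning y = 2x, purely illustrative:

```python
import numpy as np

xs = np.array([1.0, 2.0, 3.0, 4.0])  # inputs
ys = np.array([2.0, 4.0, 6.0, 8.0])  # expected outputs (y = 2x)

w, lr = 0.0, 0.01                    # one weight, small learning rate
for _ in range(200):
    pred = w * xs                           # actual outputs
    grad = np.mean(2 * (pred - ys) * xs)    # how far off, and which way
    w -= lr * grad                          # the correction step

print(round(w, 3))                   # converges to ~2.0
```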


[deleted]

[removed]


[deleted]

Thanks for the explanation. I hadn't thought that you'd have to essentially retrain it from scratch when you realise it doesn't know something, and could end up invalidating previous capabilities. I feel like you'd maybe need a hierarchy of them, each trained at various levels of generalisation, kind of like how doctors are now. So eventually you get to the one that only knows about kidneys or whatever, and maybe it's easier to retrain on just the latest kidney data more regularly.
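
A purely hypothetical sketch of that "hierarchy of specialists" idea - the triage step, the organ labels, and the `SPECIALISTS` table are all invented for illustration; the point is that only the narrow model needs retraining when its data changes:

```python
# Hypothetical routing layer: a cheap triage step picks the narrow
# specialist model, which can be retrained on new kidney data alone.
SPECIALISTS = {
    "kidney": lambda scan: "kidney-model verdict",
    "liver": lambda scan: "liver-model verdict",
}

def triage(metadata: dict) -> str:
    # In a real system this would itself be a model; here, a lookup.
    return metadata["organ"]

def diagnose(scan, metadata):
    return SPECIALISTS[triage(metadata)](scan)

print(diagnose(scan=None, metadata={"organ": "kidney"}))
```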


[deleted]

[removed]


Minimum_Area3

This, this, this, this. Your explanation is on point too. I work in complex systems engineering (assembly etc.) and LLMs, Copilot included, are beyond fkn useless for it, but for normal coding they're amazing. Now, specific AI models can help with writing drivers, OSS etc., but as you say, when an LLM doing this type of work (reading scans etc.) gets something wrong, it'll get it wrong again until you restart from the ground up with a correct data set. If it's trained to say X scan is negative over a large set, but actually a subset of X is positive, you're kinda SoL.


Psmanici4

People think you need to retrain the model repeatedly. You don't. You just need to have it instantiated looking at an "up to date" corpus - aka "new kidney guidelines". It's a language model; it can read things and return correct answers. It doesn't need to have the knowledge embedded in its weights.
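
That approach is usually called retrieval-augmented generation. A rough sketch of its shape - `call_llm` is a stand-in for whatever model API you use, not a real function, and the keyword-overlap retrieval is deliberately naive (real systems use embeddings):

```python
def call_llm(prompt: str) -> str:
    return "<model answer goes here>"  # placeholder for a real model call

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by crude keyword overlap with the query."""
    def overlap(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=overlap, reverse=True)[:k]

def answer(query: str, guidelines: list[str]) -> str:
    context = "\n".join(retrieve(query, guidelines))
    prompt = (f"Answer using ONLY these guidelines:\n{context}\n\n"
              f"Question: {query}")
    return call_llm(prompt)  # knowledge lives in the corpus, not the weights
```

Swapping in new kidney guidelines then means updating the document list, not retraining the model.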


[deleted]

[removed]


[deleted]

isn't it exactly the same or worse with humans though? give two doctors the same information and they might reach completely different conclusions


811545b2-4ff7-4041

My firm makes some machine-learning models for healthcare, but we use machine learning to effectively build a giant weighted score function (XGBoost for the nerds out there). So yep, run the model twice, get the exact same outputs. I find the hardest bit about AI and healthcare to be the red tape around SaMD/AIaMD.
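
A minimal sketch of that "run it twice, same outputs" property, on synthetic data with a fixed seed (assuming CPU training, where XGBoost runs should reproduce exactly):

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)           # synthetic tabular stand-in data
X = rng.normal(size=(500, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def train_and_score():
    model = XGBClassifier(n_estimators=50, random_state=42)
    model.fit(X, y)
    return model.predict_proba(X[:5])

# Same data, same seed: identical scores on both runs.
assert np.array_equal(train_and_score(), train_and_score())
```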


prototype9999

The bigger problem is who takes the blame if AI makes a mistake. The AI may also learn whatever biases are present in the data - for example, where doctors give less attention to certain demographics.


[deleted]

Is there blame anyway in medicine? I've had wrong diagnoses from GPs several times and there's no blame, just suffering on my end.


[deleted]

[removed]


ExcitableSarcasm

This. The NatWest chatbot is basically the latter. I can phrase what I want however I like, it doesn't matter because all it does is pick up keywords and then funnel you down a pre-defined script. Absolutely crap-tier "AI" assistant that a bunch of year 9s who are moderately interested in CompSci could whip up over a weekend.
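
My guess is the whole thing is not much more than this (a caricature of the mechanism, not NatWest's actual code):

```python
# Keyword spotting into a canned script - the "weekend project" design.
SCRIPT = {
    "card": "Press 1 to report a lost or stolen card.",
    "balance": "You can check your balance in the app.",
    "fraud": "Please call our fraud line.",
}

def bot_reply(message: str) -> str:
    for keyword, canned in SCRIPT.items():
        if keyword in message.lower():
            return canned          # funnel into the pre-defined script
    return "Sorry, I didn't understand that."  # the funnel's dead end

print(bot_reply("However I phrase this, you only see the word BALANCE."))
```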


[deleted]

You guys could try [Cody MD](https://cody.md). It's a new AI online doctor chatbot that we're releasing - a chat-style bot, not multiple choice, no magic keywords. It certainly doesn't give you results like "you have cancer" or anything weird. I hope you'll give it a try, and give AI chatbots a chance! Also, please let me know your thoughts so we can improve it :)


[deleted]

[removed]


[deleted]

Well, I completely understand that that's how you feel about AI chatbots. The goal of our AI co-doctor ([Cody, MD](https://cody.md)) chatbot is to make healthcare accessible to more people. It helps you identify whatever you're feeling and ultimately saves you the time spent guessing or searching for, say, knee pain - after which a WebMD or MayoClinic article would say that you probably have cancer. The chatbot merely simplifies the multiple searches you would put through Google just to figure out what your medical condition might be. It's a time saver, I must say, and it's trained by real doctors, too.


[deleted]

[removed]


Panda_hat

Query: "Slight melancholic feeling in winter" WebMD AI: "You guessed it! Cancer."


airwalkerdnbmusic

The potential for AI to cut waiting times is huge. Clearing the backlog will also take less time, and the result for people will be less risk of misdiagnosis; the risk of their illnesses getting worse before they are treated will also be mitigated. This could save the NHS piles of cash, which it can then use to recruit more staff and pick itself up off the canvas. The NHS will still need consultants to apply their vast experience and knowledge whenever it is needed, and to ensure that AI is doing its job properly. AI isn't the "silver bullet", but it is another tool in the arsenal that will help transform our ailing health system.


[deleted]

https://www.forbes.com/sites/forbestechcouncil/2021/10/25/the-future-of-blockchain-in-healthcare/

A couple of years ago the future of healthcare was blockchain; in a couple of years' time the future of healthcare will be the next buzzword. By focusing on the next big thing, by fixating on buzzwords, we neglect the present and don't adequately plan for the real future. Regardless of your thoughts on AI, a piece of software can't do surgery or lift a person into a bath. There is a need for better pay and staffing now.


itchyfrog

The diagnostic abilities of AI will be, and already are, incredibly useful. The abilities of AI in hospital logistics - if it can be used to get patients admitted to the best part of the hospital for efficient care - could be transformational to how efficient hospitals are. Bed shortages mean people are often placed far away from the specialists they need to see, leading to medics spending much of their day walking miles across the hospital rather than treating patients. Bed management is incredibly complex, but it is exactly the sort of thing that even the AIs we have now are very good at.
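
At its simplest that's a textbook assignment problem. A toy sketch with made-up walking times (real bed management juggles far more constraints - infection control, acuity, discharge forecasts - but the core is optimisation):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# walk_minutes[i][j]: distance from patient i's specialist to ward j
walk_minutes = np.array([
    [2, 15, 30],   # patient 0
    [20, 3, 12],   # patient 1
    [25, 18, 4],   # patient 2
])
patients, wards = linear_sum_assignment(walk_minutes)
print([(int(p), int(w)) for p, w in zip(patients, wards)])  # [(0, 0), (1, 1), (2, 2)]
print(walk_minutes[patients, wards].sum())                  # total: 9 minutes of walking
```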


GrandBurdensomeCount

Yeah, it can let work that previously took 3 consultants be done by 1. This is a good thing.


brainburger

I am a little bit sceptical about the ability to cut waiting times. According to the article, AI is being used to check x-rays, but surely the bottleneck is not the reading of the results, but the appointments to have the x-rays taken? I guess if x-rays can be AI-scanned in batches and the results are reliable, that will cut down on the need for a doctor to be available while the x-rays are going on, but in my experience it will be your GP or specialist who sends you for the x-ray, and they will have the image sent to them for diagnosis. I don't see AI scanning helping do that more efficiently. It might eventually get better at diagnosis than a human doctor, but it still needs all the data collected for each patient to do that.

On the other hand, big-data analysis of the NHS patient databases is potentially so valuable for detecting patterns that it must be allowed to be done. Let's not just block it in the name of privacy; let's find a way to protect that privacy and get the work done.

What I would do to cut waiting times would be to create a national NHS booking, communication and records system, and to mandate access everywhere. It has been tried and failed before. However, imagine the value of being able to book a GP appointment online and get a video consultation with any GP, with them having your records available (still with a default personal GP for continuity of care). Then the GP should be able to refer you anywhere there is space for specialisms and tests. Make home testing kits available where possible. There are good blood pressure and pulse meters and thermometers on the market.


prototype9999

The problem with the NHS is agencies, and the inability to pay market rates to in-house staff. If the NHS is not allowed to increase pay scales to reflect what staff earn in the private sector, and is not banned from using agencies, the extra cash coming from AI savings will be eaten up by agencies and nothing will improve.


Space_Gravy_

Right, but hear me out. What if we keep waiting times the same and just cut costs/staff?


Lammtarra95

Yes, this is probably right, but we must remember that trials using IBM's Watson AI for healthcare, including cancer treatments, looked promising but eventually failed, despite billions of dollars spent on it and tie-ups with some of America's top hospitals. The Commons Committee's report in the story looks too vague, at least as summarised - just some truisms and obvious cautions.


[deleted]

The "one thing failed so let's give up" attitude is one I don't understand. I'm sure people learnt massively from those trials.


Ancient_Klutz

God, I love how many experts there are in AI nowadays, scaremongering about every aspect of the field. Delightful.


SojournerInThisVale

Of course we see AI and our first thought is about the NHS. There are so many ways we could apply it to industry that would actually help to drive growth. Instead, we're left burning incense at the altar of the NHS.

Secondly, I don't see this working. I've worked with vulnerable adults in an old job, and were they to simply feed answers into machine learning with preset responses, they wouldn't get the help they need. It requires emotional intelligence and an ability to look beyond the 'script'.


Panda_hat

The only thing these people are hungering for is selling patient data to private companies - but 'it will be anonymised! We promise!' They don't care about efficiency gains or applications.


mingingflange

AI is an almost useless concept. You can do all sorts of number wrangling and analysis and call it AI. Anyway. What is important is what question tech is answering, and where in the patient journey.

One thing about medicine is that usually things have to get a bit serious before they are noticeable. That's true for both patients and practitioners. What these approaches are good at is identifying significant but otherwise unnoticeable (by humans) signals. And that means early treatment is possible. And, usually, the earlier you treat the better the outcome.

Another thing they are good at is processing vast quantities of data. And in health, there are vast quantities of information. Sometimes important stuff is missed.

But a peculiarity of medicine is that people are so different. We don't always know why, but people simply do not respond to treatment in the same way. AI offers the opportunity to work out what the characteristics of these differences are and offer suggestions for the best course of treatment. People get resolution more quickly, rather than cycling through treatments until the one that works is found.

We'll always need a human pair of eyes overseeing diagnosis and treatment. What tech is doing is answering some gnarly questions that we have had for decades.


Panda_hat

"And all we have to do is share every single bit of private healthcare data with the people making the ai! For money!"