Lemmy, I really would like to hear your opinions on this. I am bipolar. After almost a decade of being misdiagnosed and on medication that made my manic symptoms worse, I found stable employment with good insurance and have been able to find a good psychiatrist. I've been consistently medicated for the past 3 years, and this is the most stable I have been in my entire life.
The office has rolled out an app called MYIO. My knee-jerk reaction was to not be happy about it, but I managed my emotions, took a breath, and vowed to give it a chance. After being sent the link to validate my account, the app would force-restart my phone at the last step of activation. (I have my phone locked down pretty tight, with lots of Google shit and data sharing disabled, so I'm thinking that might be the cause. My phone is also like 4-5 years old, so that could be it too.)
Luckily I was able to complete the steps on PC and activate that way. Once I was in the account, there were standard forms to sign, like the HIPAA release. There was also a form requesting I consent to the use of AI. Hell to the NO. That's a no for me dawg.jpg.
I'm really emotional and not thinking rationally. I am hoping for the opinions of cooler heads.
If my doctor refuses to keep me as a patient unless I consent to AI, what should I do? What would you do? Walk away over what is a major line in the sand for me, or consent to keep a provider I have a rapport with, who knows me well enough to know when my meds need adjusting?
EDIT: This is the text of the AI agreement. As part of their ongoing commitment to provide the best possible service, your provider has opted to use an artificial intelligence note-taking tool that assists in generating clinical documentation based on your sessions. This allows for more time and focus to be spent on our interactions instead of taking time to jot down notes or trying to remember all the important details. A temporary recording and transcript or summary of the conversation may be created and used to generate the clinical note for that session. Your provider then reviews the content of that note to ensure its accuracy and completeness. After the note has been created, the recording and transcript are automatically deleted.
This artificial intelligence tool prioritizes the privacy and confidentiality of your personal health information. Your session information is strictly used for the purpose of your ongoing medical care. Your information is subject to strict data privacy regulations and is always secured and encrypted. Stringent business associate agreements ensure data privacy and HIPAA compliance.
I know this might go against the flow here, but realistically, if they're using the tools the way they say they are (which you should 100% confirm with your doctor, and let them know about possible hallucinations while you're at it), it's not that bad. Speech-to-text is not prone to hallucination; it can fail and transcribe the wrong words, but it shouldn't invent content outright. After that, LLMs are good at summarizing things. Yes, they are prone to hallucinations, which is why having the doctor review the notes immediately after the session is important (and they said they do that), so I don't see this as such a big issue from a usability point of view.
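To make that concrete, here's a minimal sketch of the workflow the agreement describes, assuming the open-source openai-whisper package for the speech-to-text step; `draft_note` and `review_by_provider` are hypothetical stand-ins for whatever LLM call and review step the vendor actually uses:

```python
# Sketch of the agreement's pipeline: transcribe, summarize,
# human review, then delete. Illustrative only.
import os
import whisper

def draft_note(transcript: str) -> str:
    # Hypothetical: the vendor's LLM summarizes the transcript here.
    # This is the step where hallucination can occur.
    raise NotImplementedError("vendor LLM call goes here")

def review_by_provider(draft: str) -> str:
    # Hypothetical: the provider reads and corrects the draft before
    # it becomes part of the chart. This is the safeguard in question.
    raise NotImplementedError("human review goes here")

def generate_clinical_note(audio_path: str) -> str:
    # 1. Speech-to-text: can mishear words, but doesn't invent whole
    #    sentences the way a summarizer can.
    model = whisper.load_model("base")
    transcript = model.transcribe(audio_path)["text"]

    # 2. LLM summarization (the hallucination risk lives here).
    draft = draft_note(transcript)

    # 3. Mandatory human review before the note is final.
    note = review_by_provider(draft)

    # 4. Delete the recording, per the agreement's retention promise.
    os.remove(audio_path)
    return note
```

Note that step 3 is the only thing standing between a hallucinated summary and the permanent record, which is why it matters whether the doctor actually does that review.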
You might still have issues from a privacy point of view, and that's a much more complex discussion to have with them about what kind of contract they have with the LLM company to ensure no HIPAA violations (from the LLM vendor's point of view it's just summarizing some text; it might store that text, and then the whole stack is suable). They need to understand that just because they haven't kept a copy around doesn't mean the other party hasn't, and because they shared it without your agreement (you're only agreeing to AI note-taking, which can be done locally, so sharing your information with third parties is entirely their choice), they would be liable. I'm not a lawyer, so you might want to double-check that, but I would be very surprised if that's not how it works; otherwise doctors could get away with a bunch of HIPAA violations by having you sign something that says they use a computer to store data and then storing things in a shared Google Drive.
It depends. For programming, I've tried using them to write commit messages and they suck at it. And in healthcare they're not summarizing blog posts; they're dealing with potentially life-or-death scenarios. Doctors have expert knowledge to catch details that LLMs won't pick up on, and LLMs won't notice nonverbal cues either, which make up a large portion of communication. Doctors also have a thought process to document that LLMs don't have. Even if the doctor reviews the notes afterward, the quality will probably be worse than before.
I feel like the doctor and the patient should have to sign off on notes even without AI.