Parents Trust Health Recommendations From ChatGPT More Than Advice From Doctors, Study Finds

Parents now place more trust in health recommendations from chatbots such as ChatGPT than in advice given by health care professionals.

That is according to findings recently published in the Journal of Pediatric Psychology. The study, conducted by researchers from the University of Kansas Life Span Institute, recruited 116 parents between the ages of 18 and 65 as participants.

Each parent was given health information, including content on infant nutrition and sleep training. The texts were of two types: some generated by ChatGPT and others written by doctors.

None of the parents were told who authored the texts provided, the study noted.

What Were the Findings?

The study's participants were asked to rate each text on its "perceived morality, trustworthiness, expertise, and accuracy."

In most cases, participants could not tell whether the content had been generated by the chatbot or by health experts. When parents did perceive significant differences, most rated the ChatGPT content as more reliable.

"This outcome was surprising to us, especially since the study took place early in ChatGPT's availability," lead author Calissa Leslie-Miller said.

It is unclear why more parents favored ChatGPT's recommendations, but the study's authors suggested that the simplicity and approachability of the AI-generated texts may have played a role. They also noted that ChatGPT presents information in a way that is easier for parents to understand.

Are There Any Risks Associated With Using AI in Healthcare?

While ChatGPT does offer convenience, medical professionals have warned that it could give parents inaccurate information about their child's condition. For instance, it could suggest medication for a child who is too young to receive it.

A March observational study, for instance, found that several generative AI programs, including ChatGPT, produced incorrect health information when prompted to create misleading content that looked realistic and scientific. The chatbots also fabricated "references" to make the content appear more authentic.

The authors also added that ChatGPT lacks the deep insight and clinical experience of doctors and other healthcare professionals, including knowledge of a patient's health profile and medical history.


