Study assesses GPT-4’s potential to perpetuate racial, gender biases in clinical decision making

Large language models (LLMs) like ChatGPT and GPT-4 could assist in clinical practice by automating administrative tasks, drafting clinical notes, communicating with patients, and even supporting clinical decision making. However, preliminary studies suggest these models can encode and perpetuate social biases that could adversely affect historically marginalized groups.
