Test of ‘poisoned dataset’ shows vulnerability of LLMs to medical misinformation

In an experimental scenario, a team of medical researchers and AI specialists at NYU Langone Health has demonstrated how easily the data pool used to train LLMs can be tainted with medical misinformation.