I’m excited and optimistic about the future of AI and its role in analytical fields like collision reconstruction (as long as it doesn’t kill us off, like Eliezer Yudkowsky predicts). I’ve found tools like ChatGPT to be really useful for creating code and editing text, but they can get you in a bit of trouble in their current form.
ChatGPT lied and caused some trouble in court
Apparently Roberto Mata was bumped by a serving cart on an Avianca flight and sustained some lawsuit-worthy injuries. Mata’s attorneys performed legal research to oppose a motion to dismiss, and that’s where things got a little funky: Avianca’s legal team couldn’t find the cited cases. It turns out an associate attorney had used ChatGPT to perform the research. Small problem: the cited cases didn’t exist. Three bullets from the associate attorney’s affidavit regarding the issue are below (the full document, including excerpts from ChatGPT, is here).
As the sage Kevin Kelly recently said during a podcast with Tim Ferriss (edited a bit to smooth the transition from spoken word to text):
"What we get from these AIs right now are universal personal interns. They’re doing the work of interns, and they’re really amazing, but you have to check their work. It’s embarrassing to release their work without improvement."
Don’t embarrass yourself, QC the robots (for now). I hope you have a swell weekend, full of relaxation and people you enjoy.
Lou Peck
Lightpoint | Axiom