Ariel Beery / אריאל בארי
1 min read · Dec 12, 2022


This fact -- that ChatGPT fabricated references wholesale to provide citation evidence for an article it 'wrote' -- is, I believe, the most important part of this entire (very important, well-argued, well worth reading) piece.

ChatGPT does NOT answer questions as a human would, with integrity and relational obligations at the forefront. ChatGPT strings together highly correlated bits of text to produce an 'answer' that is consistent with what an answer could look like.

This inhuman response is entertaining but inherently flawed: it has no ethics, no agency, and no obligation to a commonly held reality.

And yet we've learned that most people will take what looks like an 'expert answer' at face value. That is why I believe those who bear fiduciary responsibility for our wellbeing -- our governments and their law enforcement agencies -- need to move swiftly to develop regulation and accountability for large language models in the public sphere.


An avid fan of the future and believer in human initiative to build a better world. Founder and builder of businesses to better the planet.