
Can I give some counterexamples (with GPT-4)?

Just recently:

https://www.businessinsider.com/lawyer-duped-chatgpt-invented-fake-cases-judge-hearing-court-2023-6?op=1

"The lawyer who used ChatGPT's fake legal cases in court said he was 'duped' by the AI, but a judge questioned how he didn't spot the 'legal gibberish'"

https://www.theverge.com/2023/5/27/23739913/chatgpt-ai-lawsuit-avianca-airlines-chatbot-research

"Lawyer Steven A. Schwartz admitted in an affidavit that he had used OpenAI’s chatbot for his research. To verify the cases, he did the only reasonable thing: he asked the chatbot if it was lying."

Turns out, pretty much everything was bogus because, of course, an LLM doesn't know fact from fiction.

A bar exam has standardized answers, so an advanced LLM with a large training set could answer it. Likewise for *some types of coding* for which there are many examples to train on, such as web coding.

But regarding programming, see here: https://spectrum.ieee.org/gpt-4-calm-down

The more arcane the code, the less correct it will be:

"I’ve been using large language models for the last few weeks to help me with the really arcane coding that I do, and they’re much better than a search engine. And no doubt, that’s because it’s 4,000 parameters or tokens. Or 60,000 tokens. So it’s a lot better than just a 10-word Google search. More context. So when I’m doing something very arcane, it gives me stuff.

But what I keep having to do, and I keep making this mistake—it answers with such confidence any question I ask. It gives an answer with complete confidence, and I sort of believe it. And half the time, it’s completely wrong. And I spend 2 or 3 hours using that hint, and then I say, “That didn’t work,” and it just does this other thing. Now, that’s not the same as intelligence. It’s not the same as interacting. It’s looking it up."

Which also relates to this lack of logical functioning (an LLM makes predictive patterns out of language, it doesn't actually compute):

https://www.howtogeek.com/890540/dont-trust-chatgpt-to-do-math/
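
A practical consequence: if a chatbot gives you arithmetic, re-check it with real computation instead of asking the chatbot to confirm itself. A minimal sketch of that kind of check in Python (my own illustration; the "claimed" answer below is hypothetical):

```python
# Minimal sketch: verify a chatbot's claimed arithmetic with real evaluation
# instead of trusting the model's token prediction.
import ast
import operator

# Whitelisted arithmetic operators -- nothing else gets evaluated.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expr: str):
    """Evaluate a plain arithmetic expression without using eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"unsupported expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval"))

# Hypothetical example: the chatbot confidently claims 12345 * 6789 = 83,811,205.
claimed = 83_811_205
actual = safe_eval("12345 * 6789")  # -> 83810205
print(actual == claimed)            # False: the confident answer was wrong
```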

Both the lawyer's case and the programming case in the IEEE article are examples of hallucination: the model doesn't know when it's wrong and answers with absolute confidence.

But one thing I noticed in all this discussion of "AI" is that most people make it out to be some mystical black box out there that someone else controls. I think this plays into the hands of those who want to control it. The problem is, there is no "it". AI/ML is not a singular thing controlled by a single entity.

It's literally code, algorithms that people can learn AND DO themselves on their own computers and on rented computers in the cloud (AWS has SageMaker, for example). You can download code from GitHub, compile, build, and run it on your own machine and, more significantly, train it to give different results than someone else's build of the same model, trained and tuned on different data, would give.

e.g.

https://github.com/huggingface/transformers/
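
For instance, here's a minimal sketch of running a small open model locally with that library (assumes `pip install transformers torch`; "gpt2" is just a conveniently small example model):

```python
# Minimal sketch: generate text with a small open model on your own machine,
# using the Hugging Face transformers library linked above.
from transformers import pipeline

# Downloads the weights from the Hub on first run, then runs locally.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Large language models are",
    max_new_tokens=40,       # cap the length of the continuation
    do_sample=True,          # sample rather than greedy decoding
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```

And because the weights are on your machine, you can fine-tune the same model on your own data (the library ships a Trainer API for this) and get different behavior than someone else's build of it.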

author

Thanks, I knew about the lawyer.

Thanks for the link.

Paul
