

Gordon Moore co-founded Intel in 1968. He is best known for Moore's Law, his observation that the number of transistors on an integrated circuit doubles roughly every two years, a pace popularly quoted as every 18 months. In essence, the integrated circuit can get that much smarter every 18 months or so due to the doubling of transistors. What does this have to do with artificial intelligence, in particular GPT-3.5 versus GPT-4?
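To make that pace concrete, here is a quick back-of-the-envelope calculation in Python, using the popularized 18-month figure:

```python
# Back-of-the-envelope: if transistor counts double every 18 months,
# how much growth does a decade produce?
months = 120                  # 10 years
doubling_period = 18          # months per doubling (the popularized figure)
growth = 2 ** (months / doubling_period)
print(f"{growth:.0f}x more transistors after {months // 12} years")
# -> 102x more transistors after 10 years
```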
Three years ago, I read an article in the Guardian that was written by GPT-3, the neural network. I was so blown away that I realized no college student could write an essay as concise and effective as the one this level of artificial intelligence wrote. In the fourth quarter of 2022, ChatGPT, running on GPT-3.5, went online. That's when I wrote an article about artificial intelligence making college, or a bachelor's degree, essentially obsolete. In March of 2023, GPT-4 was released.
In my opinion, there was almost no comparison between the two in advanced capabilities. In five short months there was a major advance in artificial intelligence, not the 18 months Moore's Law would suggest. There have been many articles written about artificial intelligence and particularly neural networks.
I don't think there's a real recognition in the media of how powerful GPT-4 is, or of how it will invalidate a college education for massive numbers of students.
Below are helpful charts that illustrate the improvement of GPT-4 over GPT-3.5.
This chart shows GPT-4's scores versus GPT-3.5's scores and how much GPT-4 has improved. What is amazing is that there is such tremendous improvement in such a short period of time. This AI is definitely going to challenge the workforce and possibly destroy a great many jobs. It's scary but inevitable. It also makes you wonder what our greatest enemy, China, is doing with AI.
GPT-4 scored 298 on the Uniform Bar Exam; the chart above shows the passing score needed in a number of different states, and therefore the states where GPT-4 would pass the bar exam.
Are you scared yet? In five months we went from version 3.5 to version 4. In one year, five years, ten years, how much more advanced will this artificial intelligence be? Another question is what jobs will be left that humans can do at a middle-class income. People who code for a living have a special problem.
“Whatever he threw at it, [software developer Adam] Hughes found that ChatGPT came back with something he wasn't prepared for: very good code.” “I never thought I would be replaced in my job, ever, until ChatGPT,” he says. “I had an existential crisis right then and there. A lot of the knowledge that I thought was special to me, that I had put seven years into, just became obsolete.”
If people think that artificial intelligence is scary now, what is really scary is how fast it can evolve, improve, and replace humans.
The Acceleration Of AI Technological Innovation
Can I give some counterexamples (with GPT-4)?
Just recently:
https://www.businessinsider.com/lawyer-duped-chatgpt-invented-fake-cases-judge-hearing-court-2023-6?op=1
"The lawyer who used ChatGPT's fake legal cases in court said he was 'duped' by the AI, but a judge questioned how he didn't spot the 'legal gibberish'"
https://www.theverge.com/2023/5/27/23739913/chatgpt-ai-lawsuit-avianca-airlines-chatbot-research
"Lawyer Steven A. Schwartz admitted in an affidavit that he had used OpenAI’s chatbot for his research. To verify the cases, he did the only reasonable thing: he asked the chatbot if it was lying."
Turns out, pretty much everything was bogus because, of course, an LLM doesn't know fact from fiction.
A bar exam has standardized answers, so an advanced LLM with a large training set can answer it. Likewise for *some types of coding*, of which there are many examples to train on, such as web coding.
But regarding programming, see here: https://spectrum.ieee.org/gpt-4-calm-down
The more arcane the code, the less correct it will be:
"I’ve been using large language models for the last few weeks to help me with the really arcane coding that I do, and they’re much better than a search engine. And no doubt, that’s because it’s 4,000 parameters or tokens. Or 60,000 tokens. So it’s a lot better than just a 10-word Google search. More context. So when I’m doing something very arcane, it gives me stuff.
"But what I keep having to do, and I keep making this mistake—it answers with such confidence any question I ask. It gives an answer with complete confidence, and I sort of believe it. And half the time, it’s completely wrong. And I spend 2 or 3 hours using that hint, and then I say, “That didn’t work,” and it just does this other thing. Now, that’s not the same as intelligence. It’s not the same as interacting. It’s looking it up."
Which relates also to this lack of logical functioning (an LLM makes predictive patterns out of language):
https://www.howtogeek.com/890540/dont-trust-chatgpt-to-do-math/
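To see why, here is a toy sketch: nothing remotely like a real GPT, just a character-level frequency model invented for illustration. It completes a prompt by always picking the most frequent next character from its tiny "training set", so it emits a confident-looking answer with no arithmetic happening anywhere.

```python
from collections import Counter, defaultdict

# Tiny "training set" of arithmetic the model has seen.
# Note that 17+25 never appears in it.
corpus = ["17+20=37", "17+21=38", "17+22=39", "18+20=38", "16+22=38"]

# Count which character follows each character: a crude stand-in
# for next-token prediction.
follows = defaultdict(Counter)
for line in corpus:
    for a, b in zip(line, line[1:]):
        follows[a][b] += 1

def complete(prompt):
    out = prompt
    while True:
        candidates = follows.get(out[-1])
        if not candidates:
            break
        ch = candidates.most_common(1)[0][0]  # always pick the most frequent
        if not ch.isdigit():
            break  # stop once the "answer" digits end
        out += ch
    return out

# Prints 17+25=38: a confident-looking completion, but the
# correct answer is 42. Pattern-matching, not calculation.
print(complete("17+25="))
```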
Both the lawyer's case and the programming examples in the IEEE article are instances of hallucination. The model doesn't know when it's wrong and answers with absolute confidence.
But one thing I noticed out of all this discussion about "AI" is that most people make it out to be some mystical black box out there that someone else controls. I think this plays into the hands of those who want to control it. The problem is, there is no "it". AI/ML is not a singular thing controlled by a single entity.

It's literally code: algorithms that people can learn AND RUN themselves, on their own computers and on rented computers in the cloud (AWS has SageMaker, for example). You can download code from GitHub, compile, build, and run it on your own machine, and, more significantly, train it to give different results than someone else's build of the same model, trained and tuned on different data, would give.
e.g.
https://github.com/huggingface/transformers/
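As a minimal sketch of that point, assuming the Hugging Face transformers library is installed (pip install transformers torch) and using GPT-2 as a small, freely downloadable example model:

```python
# Anyone can pull down an open model and run it locally.
# GPT-2 here is just a small, freely downloadable stand-in;
# the same pipeline API works for many open models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Moore's Law says that", max_new_tokens=30)
print(result[0]["generated_text"])
```

Fine-tuning that same model on your own data is exactly what makes your build give different results from someone else's.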