Screw searching via natural language chat

Somewhat relevant… Levy’s replay of ChatGPT vs Stockfish

Stockfish itself incorporates some machine learning nowadays, but it’s highly specialized compared to ChatGPT’s generalized model. I’m not really convinced you can reach the good enough stage with a fully general approach, but shoutouts to our future digital overlords if I’m proven wrong someday!

Totally agree. But one caveat: the cake is fully baked when it’s better than traditional search for most people, not when it’s better for anyone adept at searching themselves. The bar is much lower. It’s better for the type of person that googles “facebook.com”.

I agree with that, as long as they don’t retire the old versions entirely. I originally stated that it will probably be useful as an interface to the underlying search algorithms themselves. My main objection is that it is unlikely to replace classical search. Its future is going to be augmenting the classical algorithms, more like Stockfish than ChatGPT on the backend, and more like Alexa on the frontend.

I was thinking about this and the longer prompts plus context are a huge part of the appeal of GPT. Classic Google search results would be awesome if you could provide a long paragraph telling it exactly what you want, to return links with certain writing styles, and it also had a transcript for the last ten minutes of a chat that it used to understand you even better.

The Asimov nod was lovely.

That’s too harsh; it isn’t “released” yet, it’s a preview of a coming feature. Access is still pretty tightly restricted.

Lol one of the comments

Ironic don’t you think? Just a little bit ironic I really do think

Hello, the article you posted is completely plagiarized from our blog published 4 hours before it. As academic researchers, we sincerely hope that you’ll respect our intellectual property. Can ChatGPT-like Generative Models Guarantee Factual Accuracy? On the Mistakes of Microsoft’s New Bing | by Chia Yew Ken | Medium

Language prediction (given the preceding [X] words, predict the next [Y] words) and factually correct prediction are very different problems. If you want your prediction model to reflect “truth”, then your training data can’t be all of the internet (which contains both truths and falsehoods); instead you need to curate your inputs. Or you need some way to look at the probability distributions of word A → word B and prune them down somehow.
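To make the distinction concrete, here’s a deliberately tiny sketch (a toy bigram model, nothing like a real LLM, which uses neural networks over subword tokens) showing how “predict the next word” optimizes for frequency in the training data, not for truth:

```python
from collections import Counter, defaultdict

# Toy training corpus: the false claim appears more often than the true one.
corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese . "
).split()

# Count, for each word, which word follows it and how often.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict(prev_word):
    # Return the most frequent continuation seen in training -- true or not.
    return next_counts[prev_word].most_common(1)[0][0]

print(predict("of"))  # "cheese" -- it outvotes "rock" 2-to-1 in the corpus
```

The model faithfully reproduces the distribution it was trained on; if falsehoods dominate the inputs, the “best” prediction is a falsehood.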

That said, the error modes in the medium article are really interesting, as the algo hallucinates the wrong answer when the correct answer is right in front of it. The differences between “singer” and “poet” are understandable as that may boil down to those words having fairly similar weights when you construct sentences about creative people writing a new song/poem/etc. Numbers are going to be harder, as recognizing that “a token formatted as a number goes here” isn’t going to cut it, and the algo needs a huge penalty if it doesn’t produce the exact number in the example text it’s summarizing.
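One crude way to approximate that kind of exact-number check (purely a hypothetical sketch, not anything Bing or OpenAI actually does) is to flag every number in a generated summary that never appears verbatim in the source text being summarized:

```python
import re

def numbers_in(text):
    # Extract integer and decimal tokens from the text.
    return set(re.findall(r"\d+(?:\.\d+)?", text))

def unsupported_numbers(source, summary):
    # Numbers the summary asserts that the source never states --
    # each one is a candidate hallucination worth penalizing.
    return numbers_in(summary) - numbers_in(source)

source = "Revenue grew 12.5 percent to 3.2 billion in 2022."
summary = "Revenue grew 15 percent to 3.2 billion in 2022."
print(unsupported_numbers(source, summary))  # {'15'}
```

A check like this only catches verbatim mismatches (it would miss a number that is present but attached to the wrong quantity), but it illustrates why numbers need stricter treatment than near-synonyms like “singer” and “poet”.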

Guys. This is the best thing.

Indeed, it isn’t trustworthy. It’s useful in some scenarios, when you aren’t asking an important question or where a natural language answer is what you need. Otherwise, it needs to substantially improve. And it will.

So Skynet didn’t actually invent time travel. It just gaslit everyone into thinking it was the 80’s. At least punk chic will be back in style.

Dying.

No, dammit, I was just enjoying the resurgence of low-rise jeans!

That is really interesting, and explains why ChatGPT is fucking up dates so much.

ChatGPT’s original data set was created from a snapshot of the internet circa 2021, and I’m assuming that for this rollout they attempted to fill in that gap. If you use the non-Bing ChatGPT on the OpenAI site, it is very explicit about not being able to handle anything post-2021, or current information in general.

It definitely seems like this is being rolled out in beta form before it is ready. But bugs are to be expected, and they will be fixed eventually.

I have seen a bunch of other people posting weird conversations with the Bing AI where it messes up recent events and dates.

That Bing thing… isn’t real, is it? Bing’s new feature is that it is rude and argumentative? That it treats users like shit?

That… does not seem like a service I would want to use.

I dunno. If they could make it sound like SHODAN and had it refer to me as a worthless insect the entire time, I wouldn’t… not be into it.   (≖ω≖)

Microsoft said the Bing implementation was trained on (or had access to) more recent data than ChatGPT, but yes, it seems very likely to be related to this issue.

This is the kind of faith in statistical models only a statistical model would have! Wait… Have I been talking to bots this whole time?