Copilot - ChatGPT Comes to MS 365

Microsoft is announcing a new AI-powered Copilot for its Microsoft 365 apps and services today, designed to assist people with generating documents, emails, presentations, and much more. The Copilot, powered by GPT-4 from OpenAI, will sit alongside Microsoft 365 apps much like an assistant (remember Clippy?), appearing in the sidebar as a chatbot that allows Office users to summon it to generate text in documents, create PowerPoint presentations based on Word documents, or even help use features like PivotTables in Excel.

Copilot can also be summoned throughout Microsoft’s Office apps and be used in Word to draft documents based on other files. The AI-generated text can then be freely edited and adapted. As Copilot is essentially a chatbot, you can even ask it to create a 10-slide PowerPoint presentation based on a Word document or analyze or format Excel data.

That means Excel users can use Copilot to instantly create a SWOT analysis or a PivotTable based on data. In Microsoft Teams, the Copilot feature can transcribe meetings, remind you of things you might have missed if you joined late, or even summarize action items throughout a meeting.

Hopefully I can use a bot to read bot-generated stuff.

The constant use of the verb “summon” makes me think of a circle of people in hooded robes, a dark room, a pentacle, lots of incense, a stone altar, and a sacrificial virgin.

But I think those are only for people on the Platinum plan.

Worth it for that.

Hell, just have it attend the meeting for you. While you’re at it, it could probably do most execs’ jobs for them as well and no one would notice. Except for the uptick in efficiency.

I mean, if it works.

Forcing an advanced AI not only to attend the 90-minute Teams meetings I suffer through, but also to pretend anything of substance was said or decided, seems inhumane.

From AI: You dodged a bullet, my friend. This was a total BS meeting.

Me: Whew. Thanks AI!

Generally speaking, LLMs are pretty good at “write a letter expressing my indignation that Wendy’s won’t allow me to get a Frosty in the spicy chicken sandwich combo meal” and downright excellent at “given the following text, write the last paragraph”. More longform writing, in other words. They’re also useful for suggesting your next word, phrase, or line of code. This is because they’re trained on a giant data corpus with tons of examples.
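The “suggest the next word” trick is, at bottom, statistics over that giant corpus. Here’s a toy sketch of the idea using bigram counts (a real LLM uses a neural network over billions of tokens, not raw counts, and the corpus here is made up):

```python
from collections import Counter, defaultdict

def build_bigram_model(corpus):
    """Count which word follows which in the training text."""
    words = corpus.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def suggest_next(model, word):
    """Suggest the continuation seen most often in the corpus."""
    if word not in model:
        return None  # never saw this word; no suggestion
    return model[word].most_common(1)[0][0]

corpus = ("the cat sat on the mat "
          "the cat chased the dog "
          "the cat sat on the rug")
model = build_bigram_model(corpus)
print(suggest_next(model, "the"))  # "cat" — most frequent follower of "the"
print(suggest_next(model, "sat"))  # "on"
```

Scale that basic idea up by a few dozen orders of magnitude of data and parameters and you get autocomplete that can finish your paragraph instead of your word.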

LLMs break when you ask them to do anything new. That’s where they just plain make shit up and show it to the user with perfect confidence and cogently written text. This is a major failing in LLMs, and I see no reason why they couldn’t be modified to, you know, stop doing that. If the LLM doesn’t know the answer (or in reality, has a low confidence score), just say so. Once that’s done they will instantly be vastly more useful.
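The raw material for this is at least sitting right there: a model’s output layer assigns a probability to every candidate token, and a weak top probability could be surfaced as “I don’t know.” A toy sketch with made-up logit values (the genuinely hard part in practice is calibration, since these models are often confidently wrong, not the arithmetic):

```python
import math

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def answer_or_abstain(candidates, logits, threshold=0.6):
    """Return the top candidate only if the model is confident enough."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return "I don't know."
    return candidates[best]

# Confident case: one logit dominates the others.
print(answer_or_abstain(["Paris", "Lyon", "Nice"], [5.0, 1.0, 0.5]))  # "Paris"
# Uncertain case: the distribution is nearly flat.
print(answer_or_abstain(["Paris", "Lyon", "Nice"], [1.1, 1.0, 0.9]))  # "I don't know."
```

Whether a well-calibrated probability on the *next token* actually tells you the *whole answer* is true is a much thornier question, which may be why nobody has shipped this yet.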

Interestingly, I just read an article about using LLMs for protein folding. Turns out amino acids can be treated just like words, and the same statistical analysis can be useful in predicting which proteins will fit your needed parameters. It’s less accurate than dedicated algorithms so far, but much faster. Cool stuff indeed.
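The words analogy is easy to picture: a protein is just a string over a 20-letter alphabet, so the same token-counting machinery applies directly. A toy sketch treating k-mers of a made-up sequence as “words” (real protein language models learn neural embeddings, they don’t count substrings):

```python
from collections import Counter

def kmer_counts(sequence, k=3):
    """Treat a protein sequence like text: each k-mer is a 'word'."""
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

# Made-up amino-acid sequence, one letter per residue.
seq = "MKTAYIAKQRMKTAYI"
counts = kmer_counts(seq)
print(counts["MKT"])  # 2 — this "word" repeats in the sequence
```

Once sequences are tokenized like this, the whole statistical-language-modeling toolbox carries over unchanged, which is the punchline of the article.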

Edit: This concluding paragraph was entirely written by an LLM and I didn’t even try multiple times. Obviously it’s well-trained on this use case by now.

In conclusion, while LLMs have their strengths in generating long-form text, they can also fall short when asked to do something new or outside of their pre-existing knowledge base. However, as technology advances and more data is fed into these models, there is hope that these limitations can be overcome. The potential applications of LLMs, such as the use of statistical analysis for predicting protein folding, are incredibly exciting and showcase the vast possibilities of this technology. Perhaps the next word or phrase to consider when it comes to LLMs is “adaptability”, as it will be crucial for these models to continually evolve and learn in order to reach their full potential.

Terminator: The REAL Origin Story

Well I’m shocked.

You can barely get people to do this. I think this is much, much, MUCH easier said than done. I have very little confidence that it will be done, regardless. We are on the cusp of drowning in an endless sea of automatically generated bullshit. The semantic apocalypse is nigh! Am I joking? Do I even know?!

Not at all, it’s trivial. Every NLU model has an internal confidence score.

I watched the whole 30-minute keynote from yesterday and they were pretty clear that the experience is designed around “we’ll get you started, but don’t expect everything to be accurate”. By no means do they pretend like things work perfectly or accurately; it’s just intended as a quick way to get ideas started and on the screen.

Then why don’t they already do it?

Are you asking why this thing that Microsoft launched into beta a few weeks ago isn’t perfect yet?

Google+ mentality. Everything must be AI now, full steam ahead, damn the torpedoes.

Clippy 2: Electric Boogaloo

Actually more like Auto-Complete 2: The Plagiarizing

In which nested menu is the option to turn this shit off?

Well, if it’s “trivial,” then surely it should already be part of the thing, but no, I’m not asking why it isn’t “perfect” yet. I hardly think “making shit up” counts as merely “not perfect”; it seems a lot more like “doesn’t actually work” to me.