Prompting And Prompt Engineering Facing Notable Changes Due To OpenAI's Latest o1 Generative AI Model

In today’s column, I identify and arm you with brand new tips and techniques about prompting and prompt engineering, doing so because of the newly released OpenAI o1 generative AI model. You might say that this latest advancement in generative AI changes the landscape of how to best do your prompts.

To be clear, you aren’t starting over.

Instead, there are crucial twists and turns for which you'll need to upgrade your prompt engineering skills. No worries. This will pretty much be a piece of cake if you are mindful of prompting and willing to adapt to the new nuances in this kind of generative AI.

I am going to assume that you are already versed generally in prompting and know much of the fundamentals. You might find of use my comprehensive overview of over fifty key prompting and prompt engineering techniques and recommendations at the link here. If you are starting at ground zero and know nothing about generative AI and prompting, this could be a bit of an uphill battle, and I'd stridently suggest first getting up to speed. Your choice.

This posting is the fourth in my assessment and review series about the OpenAI o1 generative model. I will share here a quick glimpse at some of the new features of o1 that are causing you to change your prompting prowess.

For my general overview and comprehensive look at what o1 entails, which is the first part of this series, see the link here. Part two discussed how chain-of-thought or CoT now includes double-checking and thus thankfully tends to reduce so-called AI hallucinations and other problematic issues, see the link here. Part three examined how the chain-of-thought feature can also be used to catch generative AI being deceptive, though this is still more experimental than in full practice, see the link here.

This is part four and covers prompting insights.

Six New Prompting Tips You Need To Know

Here’s the deal about o1.

OpenAI’s newly released generative AI is known as o1. The name is a bit confusing, and some are informally referring to this version as GPT-o1, ChatGPT-o1, and by its rumored internal name of Strawberry, but let’s just proceed here by referring to the generative AI as the official name of o1.

First, o1 is not at this time considered an upgrade from GPT-4o or ChatGPT. You might have assumed that this would be the next in their series of generative AI apps. Not really the case. This is more so an experimental release of something that is kind of like the others but different in notable ways too. A hodgepodge, if you will. I'll say more about this shortly.

Second, one of the biggest changes is that OpenAI has opted to weave chain-of-thought directly into the inner mechanisms of o1. This is a monumental change and worthy of close attention. The upshot is that chain-of-thought now happens automatically. You have no choice in the matter. Chain-of-thought will seemingly run for all prompts. Boom, drop the mic.

Some background about chain-of-thought might be handy here as a reminder of the importance of this essential technique and technology.

Practitioners and empirical research have clearly demonstrated that if you ask generative AI to proceed on a stepwise basis, considered to be a chain-of-thought approach, the results usually come out better, see my in-depth analysis at the link here. Happy face. The downside is that by getting AI to show each of the steps, the runtime effort elongates, the response time is extended or delayed, and if you are paying for your generative AI usage the cost will go up accordingly. Sad face.

It is a learned, hunch-based tradeoff of whether the benefits of potentially better results (not guaranteed to be better) are worth the delay in time and the added costs.

Those who are versed in prompting know that leaning into the use of chain-of-thought as a core prompting technique should always be at the forefront of your prompt engineering mindset. I use the “Think a step at a time” wording, or something akin to it, in my daily prompting actions. The payoff is mighty handsome. Not all the time, but enough of the time with judicious selection such that there is often tremendous value in engaging a chain-of-thought process in generative AI.
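
To make this tangible, here is a minimal sketch of the conventional approach, assuming the OpenAI Python SDK is your setup and using an illustrative non-o1 model name such as gpt-4o, where you explicitly append the step-at-a-time wording to your prompt:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

question = (
    "A store sells pencils in packs of 12 and erasers in packs of 8. "
    "What is the smallest number of each pack to buy so the counts match?"
)

# Classic chain-of-thought invocation for a conventional (non-o1) model:
# the step-at-a-time wording nudges the model to show its intermediate work.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative non-o1 model name
    messages=[
        {
            "role": "user",
            "content": question + "\n\nThink a step at a time and show your reasoning before the final answer.",
        }
    ],
)

print(response.choices[0].message.content)
```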

An entire mini-industry has arisen regarding chain-of-thought prompting. I’ve covered a barrel full of them. For example, there is chain-of-thought amplified with factored decomposition at the link here, skeleton-of-thought at the link here, chain-of-feedback at the link here, verification-of-thought at the link here, chain-of-density at the link here, tree-of-thoughts at the link here, and my comprehensive coverage of other CoT-related prompting techniques and prompt engineering fundamentals at the link here.

Okay, the above sets the table for what I want to say about changing up your prompting considerations.

You are now ready for my latest set of prompting tips based on the o1 model:

  • (1) New vital tip about chain-of-thought. For this type of generative AI, no longer use prompts invoking chain-of-thought activity since it is happening automatically.
  • (2) Emphasize your prompt simplicity. Avoid especially complicated prompts and try to be immensely straightforward.
  • (3) Be highly distinctive within your prompts. Make sure to use explicit delimiters in prompts to distinguish various elements.
  • (4) Streamline your RAG. Retrieval-augmented generation (RAG) for in-context modeling needs to be streamlined.
  • (5) Invisibility counts now. Be mindful of visible tokens and invisible tokens for size and cost considerations.
  • (6) Watch out for domain narrowness. Realize that this AI for now shines in rather narrow ways and likely falters in everyday broader ways. Watch out.

Before I jump into describing those tips, I want to abundantly and notably state that those tips are only relevant to o1. Please keep that fully in mind.

Do not suddenly abide by those tips when you are using other generative AI. Most of the rest of the world of generative AI is still unlike o1. Use your already in-hand prompting techniques for the likes of ChatGPT, GPT-4o, Claude, Gemini, and so on, just as you always have.

Only switch over to the above tips when you opt to use o1. I realize that is a bit of an annoyance. Switching your mindset and your hat to the o1 approach is on your shoulders. That being said, I expect that we will soon enough see other generative AI apps adopt similar new internal mechanisms that might very well mean that these tips will apply to their new wares too.

I’ll certainly keep you posted.

Brief Explanation About The Six Prompting Tips

Hold onto your hat as I proceed at a fast clip to explain the six tips.

You used to invoke chain-of-thought by mentioning “Think a step at a time” or akin wording inside a prompt. Don’t do that anymore when it comes to using o1. Suppress the urge.

Why?

Because o1 automatically does a chain-of-thought.

As far as we can tell at this juncture, the AI developers decided to hard-wire this into o1. This is smart in the sense that the AI now likely tends toward better answers. It can be frustrating for users since it adds delay in processing and boosts costs. Most of all, you have no choice in the matter. It is going to occur. Period, end of story.
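
For contrast with the conventional approach shown earlier, here is a minimal sketch of an o1 prompt, again assuming the OpenAI Python SDK; the model name o1-preview was the label at the time of writing and may change. Notice that the prompt simply states the task and contains no step-by-step instruction at all:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# With o1, state the task plainly and directly. No "think step by step" and no
# "explain your reasoning" wording; the model runs its own chain-of-thought internally.
response = client.chat.completions.create(
    model="o1-preview",  # the model name at the time of writing; subject to change
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that returns the prime factors of a positive integer.",
        }
    ],
)

print(response.choices[0].message.content)
```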

I am sure some of you quick-witted types will say that you will compose a prompt that tells o1 to explicitly not do a chain-of-thought. There you go, humans regain control. However, initial experiments suggest that this is brushed aside and rebuffed. The beast wants what it wants.

Another clever idea is to mess with the automatic chain-of-thought by telling it to be briefer, or maybe telling it to be longer. So far, again, initial experiments suggest this is turned aside.

Worse still, it seems that if you do ask for a chain-of-thought, you are forcing o1 into doing double duty and the results might get ugly. Think of it this way. When the engine of your car is already running, and you try to turn the key, what happens? Usually, an ugly noise and you might harm your car. The same seems to be the case with o1.

Your request to do a chain-of-thought is separate from the automatic one, in the sense that you are layering a second chain-of-thought upon the automatic one. The automatic chain-of-thought might seemingly try to do a chain-of-thought on the chain-of-thought that you've requested. Do you see how this can grind the gears? I tell you, it is almost like the movie Inception, if you know what I mean.

Okay, so the first tip is that with o1, set aside your instincts and training on invoking a chain of thought. It is going to happen without you lifting a finger. Furthermore, if you try to do a chain-of-thought, maybe it will cause matter and anti-matter to intertwine, and the universe will spontaneously implode. I warned you.

The rest of the tips are a bit more straight-ahead.

The o1 model doesn't seem to relish complicated prompts. Keep your prompts as simple as you can. I would say this is a general rule of thumb for most generative AI. With o1, it almost seems a necessity now.

Inside of your prompts, make sure to use delimiters to offset anything that you want to be distinctive. Again, this is a good general rule of thumb for generative AI. In the case of o1, especially so.
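
Putting tips 2 and 3 together, a prompt for o1 might look something like the following sketch, which uses XML-style tags as the delimiters and a hypothetical article placeholder:

```python
# A simple, direct prompt that uses XML-style delimiters to mark off the distinct
# parts of the input. The article variable is a hypothetical placeholder.
article = "(paste the article text here)"

prompt = (
    "Summarize the key findings of the article below in five bullet points.\n\n"
    "<article>\n"
    f"{article}\n"
    "</article>"
)

print(prompt)
```

Triple quotation marks or labeled section titles would serve the same purpose; the point is that the AI can readily tell your instruction apart from the material it is supposed to operate on.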

If you are going to import data by using retrieval-augmented generation or RAG and do in-context modeling, see my explanation about this popular technique at the link here, the world of o1 makes this less easy-going. You will need to put extra elbow grease and homework into preparing the data and carrying out the RAG process. Sorry, that's the way the ball bounces.
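
As a rough illustration of what streamlining means here, consider this sketch that ranks candidate passages and places only the top few into the prompt, rather than dumping every retrieved document into the context. The scoring function is a deliberately crude stand-in for whatever retriever or embedding similarity you actually use:

```python
# Minimal RAG trimming sketch: rank candidate passages by relevance and include only
# the top few in the prompt. score_passage() is a crude word-overlap stand-in for a
# real retriever or embedding similarity.

def score_passage(query: str, passage: str) -> float:
    query_words = set(query.lower().split())
    passage_words = set(passage.lower().split())
    return len(query_words & passage_words) / (len(query_words) or 1)

def build_rag_prompt(query: str, passages: list[str], top_k: int = 3) -> str:
    ranked = sorted(passages, key=lambda p: score_passage(query, p), reverse=True)
    context = "\n\n".join(f"<passage>\n{p}\n</passage>" for p in ranked[:top_k])
    return (
        "Answer the question using only the passages below.\n\n"
        f"{context}\n\n"
        f"Question: {query}"
    )
```

Trimming the context this way keeps the automatic chain-of-thought from chewing on irrelevant material and, as a bonus, reduces the tokens you consume.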

A bit of an interesting offshoot of the automatic chain-of-thought is that the consideration regarding size and cost gets more complex. You undoubtedly know that most generative AI apps count tokens as the means of figuring out the memory size and cost of your usage, see my discussion about such aspects at the link here.

Conventionally, you estimate tokens by taking the number of words in your prompt (multiplied by a factor) and the number of words in the generated response (multiplied by a factor). That is still something you need to do to gauge whether you are within size limits and how much you will be charged for your usage.

The trick now is that there are so-called visible tokens and an added set of invisible tokens. The visible tokens are the same as before, the prompt tokens and the generated results tokens. Those invisible tokens have to do with the automatic chain-of-thought. You are getting dinged for the automatic activity based on additional token counts. You can’t see them per se, and you can’t stop them per se, but you will absolutely consume them and pay for them. Lucky you.
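
If you want to see those invisible tokens for yourself, the API reports them in the usage portion of each response. The field names below reflect OpenAI's reasoning-model documentation at the time of writing, so treat them as an assumption that could shift:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

response = client.chat.completions.create(
    model="o1-preview",  # the model name at the time of writing; subject to change
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
)

usage = response.usage
print("Prompt (visible) tokens:     ", usage.prompt_tokens)
print("Completion tokens in total:  ", usage.completion_tokens)
# Reasoning tokens are the "invisible" tokens consumed by the automatic chain-of-thought.
# You never see the text they produced, yet they are billed as output tokens.
print("Reasoning (invisible) tokens:", usage.completion_tokens_details.reasoning_tokens)
```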

Adjust your online memory and cost calculations accordingly.

The final of the six tips is that o1 is currently set up to do well in narrow domains, especially science, mathematics, and programming or coding. In those domains, it does better on the whole than traditional generative AI. On all other facets, all bets are off. Your results from o1 might be on par with other generative AI, or worse, when it comes to anything beyond that handful of selected domains.

Bottom line, right now, use o1 for playing around, or for specific questions or problems in those narrow domains, but otherwise stick with the generative AI that you know and love. Other generative AI apps are, shall we say, generalists and do reasonably well across the board. The o1 model is at this time more focused on a narrow set of specialties.

Now you know.

OpenAI Has Posted Further Nuances

For those of you who want to dig deeper into these matters, make sure to look at the OpenAI blog site for their “Reasoning Models” Beta blog description, of which here are a few excerpts:

  • “Avoid chain-of-thought prompts: Since these models perform reasoning internally, prompting them to “think step by step” or “explain your reasoning” is unnecessary.”
  • “Some prompt engineering techniques, like few-shot prompting or instructing the model to ‘think step by step,’ may not enhance performance and can sometimes hinder it.”
  • “Keep prompts simple and direct: The models excel at understanding and responding to brief, clear instructions without the need for extensive guidance.”
  • “Use delimiters for clarity: Use delimiters like triple quotation marks, XML tags, or section titles to clearly indicate distinct parts of the input, helping the model interpret different sections appropriately.”
  • “Limit additional context in retrieval-augmented generation (RAG): When providing additional context or documents, include only the most relevant information to prevent the model from overcomplicating its response.”
  • “Each completion has an upper limit on the maximum number of output tokens—this includes both the invisible reasoning tokens and the visible completion tokens.”
  • “It’s important to ensure there’s enough space in the context window for reasoning tokens when creating completions. Depending on the problem’s complexity, the models may generate anywhere from a few hundred to tens of thousands of reasoning tokens.”

I plucked out excerpts that pertain to this discussion and that I believe are the biggest bang for your buck when it comes to rejiggering your prompting skills.
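
Tying the last two excerpts to practice, the sketch below leaves generous headroom for the invisible reasoning tokens by setting an output cap well above what the visible answer alone would need. The max_completion_tokens parameter is the one OpenAI documents for reasoning models at the time of writing, and the specific number is purely illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

response = client.chat.completions.create(
    model="o1-preview",  # the model name at the time of writing; subject to change
    messages=[
        {
            "role": "user",
            "content": "Design a database schema for a library lending system and explain your choices.",
        }
    ],
    # The cap covers invisible reasoning tokens plus the visible answer, so leave
    # generous headroom. The 25,000 figure here is illustrative, not a recommendation.
    max_completion_tokens=25_000,
)

print(response.choices[0].message.content)
```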

Conclusion

Congratulations, you are now knowledgeable about changes to prompting that are needed if you wish to effectively use o1.

If you aren’t going to use o1, these changes are of little consequence to you, other than acting as a heads-up of what the future might hold for advances in generative AI. For those of you who are going to at least try out o1, these prompt tips are notable. And those of you who are going to go whole-hog into o1, you will need to live and breathe these prompting insights.

May your prompts always go well, and your use of generative AI be all sunshine and roses.

