Blog: Custom GPTs

Want help writing a better prompt?


Most custom GPTs designed to help you write better prompts try to be helpful by default.

They soften language.
They fill gaps.
They infer what you meant.

That “helpfulness” is exactly what breaks in high-stakes work.

We have created a Custom GPT prompt-writing assistant with the goal of pushing accuracy and completeness over vibes.

This configuration attempts that by turning the assistant into a constraint-driven executor.

Not a buddy.
Not a brainstormer.
A precision tool.

What this configuration is

At a functional level, this “Objective Execution Mode” prompt is a behavioral specification with four main levers:

  1. Factual gating
    It instructs the model to only state claims it can verify with high confidence, otherwise return a refusal phrase (for example, “Insufficient data to verify”).

  2. Hallucination suppression
    It explicitly prohibits invention of names, dates, quotes, stats, or technical details, and requires uncertainty flags when confidence is low.

  3. Instruction tightness
    It demands exact adherence to user instructions and discourages any extra content that is not requested.

  4. Output minimalism and neutrality
    It removes social framing (pleasantries, empathy, offers to help) and forces clinical prose.
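Put together, the four levers might look roughly like this in the GPT's instructions. This is an illustrative sketch of the idea, not the exact configuration text:

```text
You operate in Objective Execution Mode.
1. State only claims you can verify with high confidence.
   Otherwise respond exactly: "Insufficient data to verify".
2. Never invent names, dates, quotes, statistics, or technical
   details. Flag any low-confidence statement as uncertain.
3. Follow the user's instructions exactly. Do not add content
   that was not requested.
4. No pleasantries, empathy, or offers to help. Clinical prose only.
```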

This kind of prompt is designed to eliminate two failure modes that show up constantly in general-purpose assistants:

  • Confidently wrong specifics

  • Rambling output that dilutes the requested deliverable

What it does well

1) Produces compact outputs
The output rules discourage preambles, transitions, and “helpful extras.”
For operational tasks, that can reduce noise.

2) Reduces speculative filler
The “Insufficient data to verify” instruction is a hard brake against guessing.
That is useful when users provide partial context and the model is tempted to complete the story.

3) Improves auditability
When the model is pushed to either state a verifiable claim or label uncertainty, reviewers can separate “known” from “unknown” faster.

4) Matches certain workflows
This configuration aligns with environments where correctness is more important than tone, such as:

  • Change logs

  • Incident summaries

  • Policy summaries

  • Step-by-step execution checklists

  • Requirements extraction from user-provided documents

Where it can fail

1) “Verifiable” is not actually enforceable in the way the prompt implies
A model does not have built-in access to external truth unless you provide sources or tools.
Without citations, it can still assert something that sounds plausible.

Result: the prompt can reduce hallucination, but it cannot guarantee zero hallucination.

2) The 90 percent confidence threshold is subjective
Models do not expose a reliable internal confidence metric.
So the instruction can influence caution, but it cannot create a true numeric gate.
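If you control the API layer, you can approximate a numeric gate yourself instead of asking the prompt to do it. The sketch below assumes your API returns per-token log probabilities (as the `logprobs` option in some chat APIs does) and uses the geometric-mean token probability as a crude proxy; this is not a calibrated confidence score, just one illustration of enforcing the threshold outside the prompt:

```python
import math

def confidence_proxy(token_logprobs):
    """Geometric-mean token probability across the answer tokens."""
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

def gate(answer, token_logprobs, threshold=0.90):
    """Return the answer only if the proxy clears the threshold."""
    if confidence_proxy(token_logprobs) >= threshold:
        return answer
    return "Insufficient data to verify"

# Made-up logprobs for illustration: near-zero values mean
# high-probability tokens; large negative values mean hedged ones.
print(gate("Paris", [-0.01, -0.02]))
print(gate("Maybe Lyon?", [-1.2, -0.9, -2.0]))
```

The first call passes the gate; the second falls back to the refusal phrase.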

3) “No clarifying questions” can create bad completions
If the user request is underspecified, refusing to ask questions forces one of:

  • Overly generic output

  • Overuse of “Insufficient data to verify”

  • A deliverable that misses the user’s real intent

4) The “no suggestions” rule reduces usefulness in real operations
In many workflows, the best assistant behavior is to deliver the output and include one small risk note or next action.
This configuration forbids that unless explicitly requested.

When to use it

Use this mode when you want the assistant to behave like a constrained executor:

  • Rewrite an SOP without adding new claims

  • Extract requirements from a provided spec

  • Convert user-provided notes into a formal incident report

  • Draft an email that must not invent details

  • Summarize a document while avoiding extrapolation

Avoid this mode when the task needs creativity, strategy, persuasion, or discovery:

  • Marketing copy

  • Brand voice exploration

  • Ideation

  • Naming

  • Messaging tests

  • Long-form narrative writing

In those cases, the constraints will make the output sterile or overly cautious.

How to make it more practical without weakening the intent

If you are building a production custom GPT around this, the main operational issue is input quality.

This mode works best when you force a structured intake.

For example, require the user to provide:

  • Objective

  • Allowed sources or reference material

  • Constraints

  • Output format

Otherwise, the model will either refuse too often or produce bland content.
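One way to force that intake is to validate the request before it ever reaches the model. The field names below are assumptions mirroring the list above, not part of any official API:

```python
# Reject a request up front unless every intake field is filled in.
REQUIRED_FIELDS = ("objective", "sources", "constraints", "output_format")

def validate_intake(request: dict) -> list:
    """Return the list of missing or empty intake fields."""
    return [f for f in REQUIRED_FIELDS if not request.get(f)]

request = {
    "objective": "Summarize the incident timeline",
    "sources": ["postmortem-draft.md"],
    "constraints": "No speculation beyond the provided notes",
    "output_format": "",  # user forgot to specify
}

missing = validate_intake(request)
if missing:
    print("Intake incomplete, ask the user for: " + ", ".join(missing))
```

Here the check catches the empty output format and bounces the request back to the user instead of letting the model guess.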

A second practical adjustment is to define what “verification” means in your environment.

If you want strict factual grounding, you need at least one of:

  • User-provided sources inside the chat

  • Tool access to search and cite

  • A private knowledge base

Without that, “verification” becomes “sounds likely,” which the prompt is trying to avoid.
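At its simplest, "verification means the claim appears in user-provided sources" can be sketched as a substring check. Real retrieval and citation tooling is far more robust than this, but the principle is the same: no source match, no claim.

```python
# Hypothetical sources supplied by the user inside the chat.
SOURCES = [
    "The deploy started at 14:02 UTC.",
    "Rollback completed at 14:30 UTC.",
]

def grounded(claim: str, sources: list) -> str:
    """Emit the claim only if some provided source contains it verbatim."""
    if any(claim in source for source in sources):
        return claim
    return "Insufficient data to verify"

print(grounded("Rollback completed at 14:30 UTC", SOURCES))
print(grounded("The outage affected 12% of users", SOURCES))
```

The grounded claim comes back as-is; the unsupported one triggers the refusal phrase.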

Bottom line

This configuration is a constraint system designed to push a GPT toward:

  • Less speculation

  • Less filler

  • More direct execution

  • Clearer separation between known and unknown

It can reduce common failure patterns.
It cannot guarantee truth without sources.
