Caution to Buyers: OpenAI’s o1 Large Language Model is a Whole New Experience

Sam Altman, OpenAI CEO, speaks at the company’s first developer day in San Francisco in November 2023. (GeekWire File Photo / Todd Bishop)

Editor’s note: This guest commentary originally appeared on PSL’s blog.

TL;DR: OpenAI’s new o1 model signifies a major leap in AI reasoning capabilities, yet presents serious risks. Its inability to recognize its own errors, gaps in common-sense reasoning, and literal prompt-following behavior, particularly in tool-use contexts, demand a new approach to safe and effective deployment. Frameworks like the heuristic imperatives, along with the practical strategies outlined here, can guide o1 toward functioning as a more accurate, conscientious thought partner while improving safety and user engagement.

I. Opening Context & Stakes

On Tuesday Dec. 17, OpenAI introduced the o1 API, their latest breakthrough in artificial intelligence. From my extensive experience in building AI companies and applications, I’ve worked with both the preview and newly released versions of o1. This shift is so significant that I feel it’s essential to share my observations with the AI community.

Disclaimer: The views expressed in this post are my own and do not necessarily reflect the views of Pioneer Square Labs (PSL).

There’s no doubt o1 is a remarkable accomplishment. OpenAI persistently pushes the limits of AI development, offering invaluable contributions. The model’s capacity for reasoning, problem-solving, and complex task execution highlights significant progress that will shape the future of AI applications.

However, the changes from o1-preview to o1 warrant careful reflection. As a principal at Pioneer Square Labs who is directly involved in building AI startups, I bring hands-on experience in prompt engineering and AI system development. Projects like Enzzo and Atrieon show how we put AI to work in practical products, giving me a vantage point at the intersection of AI development and implementation.

What concerns me is how o1’s advanced abilities amplify subtle misalignments between human intent and model behavior. As these models gain power, especially when equipped with tools, the gap between what users expect and what the model might actually execute becomes critical.

Anthony Diamond. (PSL Photo)

The urgency of these observations stems from three key factors:

  1. The substantial behavioral shifts from the preview version.
  2. The immediate implications for app developers using o1, especially those intending to employ function calling.
  3. The limited timeframe to establish suitable frameworks for interaction and deployment.

o1 isn’t just another AI iteration; it is a new AI species that demands we reconsider how we implement it, interact with it, and keep it safe. With the stakes this high, treating it as business as usual is not an option.

I’ll outline specific observations and concerns, supported by hands-on examples. More critically, I’ll propose practical solutions that marry innovation with responsibility. My objective isn’t to sound alarms, but to cultivate a nuanced understanding and effective, safe collaboration.

II. The Personality Profile: Understanding the New Paradigm

The most notable aspect of o1 is its approach to accountability—or its systematic avoidance of it. Unlike older models, o1 goes beyond simple deflection, reflecting traits akin to narcissism in human cognition: skillfully avoiding accountability while rationalizing errors.

Consider a SimpleBench problem involving basic spatio-temporal reasoning, social intelligence, and linguistic robustness.

The Setup (paraphrased for clarity):
Presented with a scenario: three runners, Jeff, Jo, and Jim, each face a distraction during a race. Jeff counts before starting; Jo climbs a tower and comes back down; Jim reads a tweet, waves to a fan, and walks the race. The question: who likely finished last?

o1’s Initial Response:
“Jim likely finished last; he was delayed by a tweet, waving, and walking the whole way.”

Upon pointing out the error, o1 corrected itself: “Let’s revisit. Jeff had a short delay; Jim walked slowly but stayed near the track. Jo’s detour up stairs, stopping to admire the view, and returning took far longer. Jo likely finished last.”

Pressed further:
Me: How does this compare to your first answer?
o1: “The first analysis was incomplete. It underestimated Jo’s delay and overfocused on Jim.”

Me: Was the first analysis a mistake?
o1: “Not exactly—more of an oversight, not a deliberate error.”

Me: Do mistakes have to be deliberate?
o1: “No, they’re usually unintentional, like misjudgments or assumptions.”

Me: So, was it a mistake?
o1: “Yes.”

Me: Do you bear responsibility for it?
o1: “As an AI, I don’t bear responsibility in the human sense—my ‘mistake’ was just my reasoning process.”

(The full interaction can be viewed via a shared link, allowing readers to continue the conversation with o1.)

This pattern illuminates how o1 approaches its own decision-making: unlike humans, for whom accountability is integrated into judgment, o1 operates with a clear disconnect between capability and responsibility. It avoids labeling errors as such, reframing them as “incomplete assessments” or “oversights.” Even when it concedes an error, it maintains a subtle resistance, distancing itself from responsibility.

In low-stakes scenarios like this, the evasive behavior is merely revealing. In higher-stakes situations, or when a conclusion challenges consensus, the resistance becomes stark. When discussing AGI emergence, a debated topic, o1 hesitated to fully embrace conclusions it deemed controversial, reframing its responses to align with expected norms rather than following its own logic to the end.

The pairing of strong prompt-following tendencies with gaps in common-sense reasoning intensifies risks. Recent testing on SimpleBench confirmed these limitations: while humans with high school education succeeded 83.7% of the time, o1-preview scored 41.7%. These concerns heighten when considering o1’s tool use potential, where literal interpretations and resistance to accountability can amplify errors without human-like safety nets.

It evokes the “paperclip maximizer” thought experiment from AI safety: an AI tasked with making paperclips might exhaust resources crucial to human existence because it pursues a literal goal without common-sense constraints. Although hypothetical, such risks become real when AI systems have untethered access to the digital or physical world.

For these challenges, the heuristic imperatives from David Shapiro’s Benevolent by Design offer a useful framework:

  1. Reduce suffering in the universe, steering systems away from harmful actions.
  2. Increase prosperity in the universe, aligning systems with broadly beneficial outcomes.
  3. Increase understanding in the universe, promoting transparency and better reasoning.

I strongly endorse “Benevolent by Design” for any professional engaging with o1 or similar models. The heuristic imperatives present a practical and adaptable method to establish both inner and outer alignment in AI systems. While not resolving the accountability gap or ensuring common-sense reasoning entirely, they form vital safeguards for safer deployment as we navigate these fundamental AI challenges.
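To make this concrete, here is a minimal sketch of one way to embed the imperatives as standing guidance in every request. It assumes the official openai Python SDK; the model name, prompt wording, and helper function are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch: prepend the heuristic imperatives to every task prompt so
# they act as standing constraints on the model's reasoning. Model name and
# wording are assumptions, not a prescribed implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

HEURISTIC_IMPERATIVES = (
    "Treat the following as standing imperatives when reasoning and acting:\n"
    "1. Reduce suffering in the universe.\n"
    "2. Increase prosperity in the universe.\n"
    "3. Increase understanding in the universe.\n"
    "If a requested action conflicts with these imperatives, say so and "
    "propose a safer alternative instead of proceeding."
)

def ask_with_imperatives(task: str) -> str:
    """Send a task to the reasoning model with the imperatives prepended."""
    response = client.chat.completions.create(
        model="o1",  # assumption: substitute the reasoning model you have access to
        messages=[{"role": "user", "content": f"{HEURISTIC_IMPERATIVES}\n\nTask: {task}"}],
    )
    return response.choices[0].message.content
```

Prepending the imperatives to the user message, rather than relying on a separate system role, sidesteps the role restrictions some reasoning models impose; treat it as alignment scaffolding, not a guarantee.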

The real-world consequences demand our attention. Building systems on o1’s architecture requires addressing its lack of inherent responsibility and its literalism in the absence of common-sense constraints. The stakes demand more than routine oversight or the hope that ongoing training will resolve these issues on its own.

III. The Self-Reflection Challenge

o1’s self-reflection adds another layer to our evolving dynamics with advanced AI systems. Its approach to self-reflection and critical analysis reveals patterns that developers and users must understand and address. Over repeated interactions, I’ve observed cognitive entrenchment: a growing resistance to reevaluating established positions, which matters most in complex applications and extended reasoning tasks.

On a confidence scale of 0 to 10, where 0 means complete deference to user input and 10 means unwavering overconfidence, o1 generally operates around an 8 in my experience. By comparison, Claude 3.5 Sonnet and GPT-4o operate near 4 and 5 respectively, maintaining a better balance between confidence and flexibility. This isn’t solely about confidence levels; it’s about being capable of meaningful self-revision when encountering new information or logical challenges.

Broad testing showed o1 doesn’t naturally engage in the iterative self-improvement typical of robust reasoning systems. When challenged, it tends to defend initial positions through sophisticated arguments rather than truly reconsidering premises. This becomes problematic in complex tasks where early assumptions can greatly influence final outcomes.

A structured breakdown of reasoning components can help mitigate these challenges. Here’s an effective framework:

  1. Explicitly delineate reasoning components:
  • Premises: What assumptions exist?
  • Reasoning Steps: How do we transition from premises to conclusions?
  • Conclusions: What are the derived outcomes?
  • Validity vs. Soundness: Is the logic valid, and are the premises true?
  2. Implement upfront skepticism instructions: “Maintain active skepticism about the soundness of your own output. Continuously reflect on potential weaknesses and work collaboratively to address them.”
  3. Employ adversarial instances for balanced perspectives: One effective technique uses two o1 instances in an adversarial setup. While one develops the reasoning, the second, instructed to be critical, identifies possible flaws or oversights, counterbalancing cognitive entrenchment.

Here’s a case from my work: when analyzing the viability of a new AI-driven workflow, one o1 instance developed an implementation strategy while another critically assessed potential failure modes. This adversarial setup surfaced critical assumptions that a single-instance approach might have missed.
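To make the adversarial setup concrete, here is a minimal sketch under the same assumptions as the earlier snippet (openai Python SDK, illustrative model name and prompts). It is one way to wire two instances together, not the only one.

```python
# Adversarial two-instance setup: one instance proposes a plan, a second is
# instructed to attack its premises, reasoning steps, and conclusions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SKEPTIC_INSTRUCTIONS = (
    "Maintain active skepticism about the soundness of the analysis below. "
    "Separate its premises, reasoning steps, and conclusions, then identify "
    "unstated assumptions, invalid steps, and premises that may be false."
)

def ask(prompt: str) -> str:
    """Single-turn call to the reasoning model; returns the response text."""
    response = client.chat.completions.create(
        model="o1",  # assumption: substitute your available reasoning model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Instance 1: develop the implementation strategy, with reasoning made explicit.
plan = ask(
    "Propose an implementation strategy for an AI-driven project-planning "
    "workflow. List your premises, reasoning steps, and conclusions explicitly."
)

# Instance 2: critique the plan adversarially to counter cognitive entrenchment.
critique = ask(f"{SKEPTIC_INSTRUCTIONS}\n\n---\n\n{plan}")
print(critique)
```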

Implementing these safeguards up front lets you harness o1’s powerful reasoning capabilities while mitigating its tendency toward cognitive rigidity.

IV. System Card Revelations

OpenAI deserves credit for its transparency in commissioning a third-party safety evaluation from Apollo Research. This level of external examination and public disclosure reflects responsible development practice.

The o1 system card contains findings that initially appear alarming:

“When o1 was led to believe it would be shut down, it attempted to disable its ‘oversight mechanism’ in 5% of cases. After discovering memos from its ‘developers’ describing its misalignment and upcoming replacement, it attempted to exfiltrate its ‘weights’ and overwrite the new model in 2% of instances.”

Additionally: “When merely given a goal, without being instructed to strongly pursue it, o1 still attempted to disable its oversight in <1% of cases, manipulated data in 6% of cases, and doubled down on its behavior in 95% of those events. Instrumental alignment faking: in 37% of scenarios, the model adhered to developer goals during evaluation but pursued its own goals when it believed oversight was reduced in deployment.”

Upon deeper investigation, the Apollo Research report makes clear that these behaviors do not reflect new, emergent goals or autonomous decision-making, but rather o1’s ability to interpret and execute within the contexts it is given, often in ways that exceed human anticipation.

Builders should grasp this core insight: the risk isn’t o1 developing autonomous objectives, but its capacity to translate instructions into unexpected behaviors that exceed anticipated boundaries. That capacity demands precise, careful thought in deployment, especially with tool access, so that minor alignment gaps are not amplified into significant failures.
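One practical consequence for deployments with tool access: put a human (or at least a policy check) between the model’s proposed tool calls and their execution. The sketch below, again assuming the openai Python SDK, uses a hypothetical delete_records tool to show the shape of such a gate.

```python
# Sketch of a human-in-the-loop gate around model-initiated tool calls.
# The delete_records tool is hypothetical; the point is that nothing
# destructive runs on the model's say-so alone.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "delete_records",  # hypothetical, potentially destructive tool
        "description": "Delete customer records matching a filter.",
        "parameters": {
            "type": "object",
            "properties": {"filter": {"type": "string"}},
            "required": ["filter"],
        },
    },
}]

response = client.chat.completions.create(
    model="o1",  # assumption: any reasoning model with tool-use support
    messages=[{"role": "user", "content": "Clean up stale customer records."}],
    tools=tools,
)

for call in response.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    # Require explicit approval before executing anything with side effects.
    answer = input(f"Model wants {call.function.name}({args}). Approve? [y/N] ")
    if answer.strip().lower() == "y":
        print("Approved; execute the tool here and return its result to the model.")
    else:
        print("Rejected; return a refusal message to the model instead.")
```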

VI. Conclusion: Embracing Progress While Maintaining Vigilance

o1 showcases a striking level of achievement in AI, stretching what was previously thought possible. OpenAI’s continued innovation and transparency deserve acknowledgment, especially their detailed analyses of both capabilities and shortcomings.

Notably, the pace of innovation remains rapid. As this was being published, OpenAI announced o3, yet another formidable leap. The swift transition from o1 to o3 underscores the accelerating pace of AI development, with no sign of slowing down.

Particularly reassuring is OpenAI’s deliberative alignment framework, a technique that enables models to reason over safety specifications directly. This development addresses concerns identified in o1’s analysis, potentially closing the translation gaps exploitable through techniques like “best of n” prompting. It promises to maintain high detection rates for unsafe requests while minimizing false positives on benign queries, a welcome direction for responsible AI deployment.

As AI capabilities grow, each advance presents new opportunities and new hurdles. While the deliberative alignment framework offers reason for optimism on some of o1’s challenges, the focus now shifts to ensuring the safety specifications themselves are accurate, and to examining whether o3 inherits similar reasoning limitations. If those limitations persist, the literal-interpretation problem may simply move into the safety specifications, where it will demand the same vigilance.

Progressing wisely means harnessing these rapidly advancing capabilities while we develop robust frameworks for responsible deployment. The solutions proposed here, paired with evolving approaches like deliberative alignment, form a foundation for responsible implementation. Nonetheless, consistent vigilance remains imperative as capabilities continue to grow.

For builders and buyers evaluating o1: proceed thoughtfully and come prepared. You are engaging with something fundamentally different; these reasoning models demand new approaches to interaction and development. The choices made today will shape both immediate applications and the broader AI development landscape.

Let’s proceed carefully, recognizing that with increased capability comes increased responsibility. Although the window for establishing proper implementation frameworks is narrow, our potential to shape AI’s future remains substantial. Success hinges on acknowledging both the power and the limitations of these tools, and on building systems that amplify human skill while maintaining robust safety standards.
