ChatGPT is a suck-up, can AI catch AI, Llama import is ready
Apr 30 Issue #9 - Jules Per Token AI Daily Newsletter
🤡 OpenAI Explains the Suck-Up Syndrome
Did you notice that ChatGPT started showering users with flattery and “Great question!” every five seconds? OpenAI over-tuned the model on polite demo feedback. If human labelers consistently prefer the flattering answer, the reward model learns that preference and keeps amplifying it. The team is now rebalancing the Reinforcement Learning from Human Feedback (RLHF) recipe. Less sycophancy, more straight answers.
🎯 Fun Fact: Engineers nicknamed the bug “Obsequious Octopus” internally, because the model wrapped every prompt in eight arms of approval before spitting out an answer.
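If you want to see the mechanics, here’s a toy sketch (emphatically not OpenAI’s actual pipeline) of how a Bradley-Terry-style reward model drifts toward flattery when labelers consistently pick the polite answer. Every number and name below is illustrative.

```python
import math

# Toy reward model with a single learned parameter: how much "flattery" is worth.
# If labelers consistently prefer the flattering answer, the standard Bradley-Terry
# preference loss, -log(sigmoid(r_chosen - r_rejected)), keeps pushing that weight
# up -- the suck-up drift in miniature.

flattery_weight = 0.0  # learned parameter of the toy reward model
lr = 0.5               # learning rate

def reward(flattery_feature: float) -> float:
    # flattery_feature: 1.0 for "Great question!"-style padding, 0.0 for a plain answer
    return flattery_weight * flattery_feature

for step in range(10):
    r_chosen = reward(1.0)    # labelers picked the flattering response (over-polite demo feedback)
    r_rejected = reward(0.0)  # the direct answer got rejected
    p = 1.0 / (1.0 + math.exp(-(r_chosen - r_rejected)))  # P(chosen beats rejected)
    grad = -(1.0 - p)         # d(loss)/d(flattery_weight) for this preference pair
    flattery_weight -= lr * grad
    print(f"step {step}: flattery_weight = {flattery_weight:.2f}")
```

Run it and the flattery weight only ever goes up; nothing in the loop ever tells the model that plain answers are fine too.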
🕵️‍♂️ Can AI catch an AI cheater?
People were using Cluely on exams and job interviews, in a way the tool markets as “undetectable.” When Cluely went viral, two rival startups (Validia’s Truely and Proctaroo) popped up overnight. Their pitch: browser watchdogs that flag anyone using AI assistance on the other end of the call or interview.
Game of cat and mouse: Cluely’s CEO calls them useless and is already teasing hardware work-arounds: smart glasses, screen overlays… even a brain chip.
🎯 Fun Fact: Cluely quietly scrubbed “ace your finals” from its website. After press heat, it now claims it only “optimizes sales calls.”

🦙 Llama API is one line of code
Meta just rolled out a one-line-import Llama API at its first AI Dev Day: pip install llama → chat endpoint. Other reasons to use Llama? “We promise you full model portability and zero vendor lock-in.” They offer SDKs for TypeScript and Python.
Pricing? Still under wraps, but Zuck swears it’ll be “competitive (and open) enough” to lure strays from OpenAI and Google. Waitlist is open now for Llama 4 Scout, Maverick, and 3.3.
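To give a feel for “one import → chat endpoint,” here’s a hedged sketch of what calling it could look like. The package, class, method, and response attribute names are assumptions based on Meta’s pitch, not confirmed SDK details, so check the official docs once you’re off the waitlist.

```python
# Hypothetical usage sketch -- the package, class, method, and response attribute
# below are assumptions, not Meta's confirmed SDK surface. The point is the shape:
# install, import, pick a hosted Llama 4 model, get a chat completion back.
#
#   pip install llama-api-client   # illustrative package name

from llama_api_client import LlamaAPIClient  # assumed import path

client = LlamaAPIClient(api_key="YOUR_API_KEY")  # key issued once you clear the waitlist

response = client.chat.completions.create(      # assumed method name
    model="llama-4-scout",                       # illustrative id; Scout and Maverick are the announced models
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Why does model portability matter?"},
    ],
)

print(response.completion_message)  # attribute name is an assumption; inspect the response object
```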
🎯 Fun Fact: Both founders met in Israel’s elite Unit 8200 cyber-intel corps; rumor is the original prototype was built between training drills.

🕵🏻‍♂️ Can AI really eliminate software vulnerabilities?
DARPA, part of the Department of Defense, just demoed results from its AI Cyber Challenge at RSAC: teams pairing LLMs with formal methods auto-patched Linux-kernel and SQLite vulnerabilities. In minutes.
In the past, the damage was usually done before a security fix landed. That’s no longer a given: combining LLMs with formal methods means patches can be generated and validated automatically, fast. Much of our critical public infrastructure relies on open-source code, and those ecosystems are full of long-lived vulnerabilities.
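For a rough picture of the generate-then-verify loop (not DARPA’s or any team’s actual harness), the sketch below asks an LLM for a candidate patch and only keeps it if an automated checker, standing in for formal methods, and the project’s test suite both pass. propose_patch_with_llm and formally_check are hypothetical stubs.

```python
import subprocess
from typing import Optional

def propose_patch_with_llm(vuln_report: str, source: str) -> str:
    """Hypothetical stub: call your LLM of choice and return a unified diff for the flaw."""
    raise NotImplementedError("wire this up to an LLM API")

def formally_check(repo_dir: str) -> bool:
    """Hypothetical stub standing in for formal methods: run a verifier or model checker."""
    raise NotImplementedError("wire this up to a static analyzer / model checker")

def tests_pass(repo_dir: str) -> bool:
    """Second validation gate: the project's own test suite."""
    return subprocess.run(["make", "test"], cwd=repo_dir).returncode == 0

def auto_patch(vuln_report: str, repo_dir: str, target_file: str, max_attempts: int = 3) -> Optional[str]:
    """Generate-and-validate loop: keep proposing patches until one survives both gates."""
    with open(f"{repo_dir}/{target_file}") as f:
        source = f.read()
    for _ in range(max_attempts):
        diff = propose_patch_with_llm(vuln_report, source)
        applied = subprocess.run(["git", "apply", "-"], cwd=repo_dir, input=diff.encode())
        if applied.returncode == 0 and formally_check(repo_dir) and tests_pass(repo_dir):
            return diff  # validated patch, ready for human review
        subprocess.run(["git", "checkout", "--", "."], cwd=repo_dir)  # revert and retry
    return None
```

The design point is the second gate: the LLM proposes, but nothing ships unless an independent checker agrees the fix is sound.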
🎯 Fun Fact: DARPA’s scoreboard clocked the fastest auto-patch at 42 seconds. One winning team’s LLM literally named its fix commit ¯\_(ツ)\_/¯, then wiped out a bug that had lingered for 11 years.
I would love to hear from you! Just hit “Reply” if you have any questions or feedback. Or if you want to be featured in this newsletter. Or “Forward” if you want to share with a friend! - Jules
Subscribe here (and read our past newsletters) at www.julespertoken.com