Advanced Prompting Techniques & The Traditional Job Market is Broken
Episode #163 - June 25, 2024

TODAY'S HIGHLIGHTS:
The surprising bankruptcy filings of Monster.com and CareerBuilder signal a major shift in job searching.
AI and remote work have fundamentally changed how companies find and hire talent, rendering traditional resumes less effective.
Personal networking and building your authentic brand are now more critical than ever for career success.
Advanced AI prompting techniques, utilizing tools like LM Studio and exploring structured data formats, are key to unlocking LLMs' full potential.
Philosophical discussion on whether current LLMs can truly achieve human-like lateral thinking.
INTRODUCTION: Welcome to the AI Biz Hour, where Andy Wergedal (@andywergedal) and John Allen (@AiJohnAllen) explore the cutting edge of AI business innovation. In this episode, the hosts and guests dove deep into the seismic shifts occurring in the job market, the implications of AI for traditional hiring processes, and the emerging strategies for navigating this new landscape. The conversation also ventured into the technical nuances of interacting with large language models, including prompting strategies and the role of new tools and protocols.
MAIN INSIGHTS: The Demise of Job Board Giants The discussion kicked off with the significant news that long-standing job boards Monster.com and CareerBuilder have filed for Chapter 11 bankruptcy. This development is seen as a direct consequence of the changing dynamics in the job market, heavily influenced by AI and the rise of remote work. Andy Wergedal highlighted that these platforms, once central hubs for tech job seekers and recruiters, are struggling with revenue and liability issues.
AI, ATS, and the Broken Resume System The traditional model of submitting a resume through Applicant Tracking Systems (ATS) is increasingly ineffective. Andy explained that while ATS was designed to filter candidates, AI now allows job seekers to easily tailor their resumes to match job descriptions precisely. This has led to an overwhelming volume of seemingly qualified resumes, making it difficult for employers to identify the best candidates. Michael and others noted that companies are adding more steps to the hiring process to cope, but the core system remains "broken." This shift necessitates a new approach to job searching beyond simply submitting applications.
The Remote Work Effect: Diluted Candidate Pools COVID-19 accelerated the shift to remote work, removing geographical limitations for job seekers. Andy pointed out that this exponentially increased the pool of candidates for any given remote position. What was once a local pool of hundreds is now a national pool of thousands, diluting the chances of a single resume standing out and requiring more sophisticated filtering from employers.
Networking and Personal Brand: The New Currency With traditional methods failing, the importance of personal connections and building an authentic personal brand has skyrocketed. Andy emphasized that a personal recommendation from someone in your network can bypass the broken ATS and technology checks. JOATMON (JACK) @JOATMON_SRQ and VR @RealDealCPA echoed this, stating that relationships and reputation are becoming the most highly valued assets in finding decent jobs and partnerships. Building a strong online presence and sharing your expertise is crucial to expanding your network beyond immediate colleagues.
Looking to tap into the $7 trillion government contracting market? GovBidMike helps businesses secure government contracts and grants. With important AI procurement rule changes coming in October 2024, now is the time to position your business. Mention AI Biz Hour for a 10% discount on services. Government contracts increasingly specify American-made AI technologies and interoperability requirements. Visit biddata.ai to learn how to navigate the complex world of government procurement.
AI as a Deflationary Force and Opportunity VR suggested that AI acts as a massive deflationary technology entering an inflated economy. By enabling individuals and small teams to duplicate the functionality of bloated, traditional companies with significantly less overhead, AI can drive down costs and create new business opportunities. The key is identifying problems to solve and leveraging AI to build solutions, rather than solely being the "product" (i.e., selling your labor).
Verifying AI Output is Crucial While AI tools are powerful, several speakers stressed the critical need to verify the information they provide. VR shared an anecdote about a client who believed AI output without checking its validity, leading to potential issues. Umesh and others agreed that developing skills in verifying AI-generated information is essential, perhaps even a necessary educational component in the future.
ADVANCED PROMPTING & LLM INSIGHTS (with Umesh): Umesh shared significant insights into working with LLMs at a deeper level, highlighting new tools and techniques for better control and output.
LM Studio and MCP Support: Umesh expressed excitement about LM Studio adding support for MCP (Model Context Protocol). This lets users run local models on their own machines, connect them to external tools and data sources, and build agentic workflows (such as a personal voice assistant) without extensive coding.
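To make the local-model side of this concrete, here is a minimal sketch of calling a model served by LM Studio through its OpenAI-compatible HTTP endpoint. It assumes LM Studio's local server is running on its default port (1234) and that a model is loaded; the model name is a placeholder, and the MCP tool wiring itself is configured inside LM Studio rather than shown here.

```python
# Minimal sketch: one chat turn against a model hosted locally by LM Studio,
# via its OpenAI-compatible endpoint. Adjust the URL and model name to match
# your own setup.
import requests

def ask_local_model(prompt: str, model: str = "local-model") -> str:
    """Send a single chat turn to the locally hosted model and return its reply."""
    response = requests.post(
        "http://localhost:1234/v1/chat/completions",  # LM Studio's default local server
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.7,
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local_model("Summarize why local models pair well with MCP tools."))
```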
Fighting Code Hallucinations: A major challenge with current LLMs for coding is their tendency to "make up" libraries or function calls. Umesh is working on an MCP-based "linting" or verification agent within LM Studio. This agent would check the existence and validity of libraries and function calls before the code is generated, saving debugging time and improving code accuracy, potentially leveraging knowledge graphs of libraries and packages.
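As an illustration of the "verify before you trust" idea behind that agent (not Umesh's actual implementation), the sketch below parses AI-generated Python, collects the modules it imports, and flags any that cannot be resolved in the current environment.

```python
# Illustrative stand-in for the verification step: catch imports of libraries
# that do not actually exist before accepting generated code.
import ast
import importlib.util

def unresolved_imports(generated_code: str) -> list[str]:
    """Return top-level module names the generated code imports but which are not installed."""
    tree = ast.parse(generated_code)
    modules: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])
    # find_spec() returns None for modules that are not importable here.
    return sorted(m for m in modules if importlib.util.find_spec(m) is None)

sample = "import json\nimport totally_made_up_lib\n"
print(unresolved_imports(sample))  # ['totally_made_up_lib']
```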
The Power of Prompt Optimization: Umesh runs his prompts through a "prompter" LLM first to generate better prompts, which are then saved and used against various models (Gemini, Claude, DeepSeek, Perplexity, GPT, Grok) to compare results. He noted that even with identical parameters, results can vary, underscoring the need for testing.
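A hedged sketch of that two-stage workflow is below: refine a raw prompt with a "prompter" model, then fan the refined prompt out to several models for comparison. The call_model() helper is a hypothetical stand-in for whichever API clients you actually use; it is not a real library call.

```python
# Two-stage sketch: (1) have a "prompter" model rewrite the raw prompt,
# (2) send the refined prompt to several models and collect their answers.
def call_model(provider: str, prompt: str) -> str:
    """Placeholder: route the prompt to the named provider and return its reply."""
    raise NotImplementedError("wire this to your own API clients")

def refine_prompt(raw_prompt: str) -> str:
    # Ask the "prompter" model to rewrite the prompt with clearer structure.
    instruction = (
        "Rewrite the following prompt so it is specific, structured, and unambiguous. "
        f"Return only the rewritten prompt.\n\n{raw_prompt}"
    )
    return call_model("prompter", instruction)

def compare_models(raw_prompt: str, providers: list[str]) -> dict[str, str]:
    refined = refine_prompt(raw_prompt)
    # Same refined prompt, same settings, different models: results still vary.
    return {p: call_model(p, refined) for p in providers}
```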
MCP Response Formatting: In a technical exchange with Nadine, Umesh recommended converting JSON output from MCPs into XML format before feeding it back to the LLM. This is due to a historical bias in training data towards tagged formats (like XML and SGML) from Web 1.0. Even smaller models handle XML better, improving context recall and data structure understanding, leading to significantly higher accuracy (Six Sigma level in early testing).
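The snippet below is a minimal sketch of that reshaping step: flatten a parsed MCP JSON response into simple tagged text before handing it back to the model. The tag layout is illustrative, not a standard.

```python
# Convert parsed JSON into plain XML-style tags, one level of tags per key,
# so the model sees a flatter, tagged structure instead of nested JSON.
import json

def json_to_xml(value, tag: str = "response") -> str:
    """Recursively render parsed JSON as XML-style tagged text."""
    if isinstance(value, dict):
        inner = "".join(json_to_xml(v, k) for k, v in value.items())
        return f"<{tag}>{inner}</{tag}>"
    if isinstance(value, list):
        inner = "".join(json_to_xml(item, "item") for item in value)
        return f"<{tag}>{inner}</{tag}>"
    return f"<{tag}>{value}</{tag}>"

raw = '{"customer": {"name": "Ada", "orders": [{"id": 1}, {"id": 2}]}}'
print(json_to_xml(json.loads(raw)))
# <response><customer><name>Ada</name><orders><item><id>1</id></item><item><id>2</id></item></orders></customer></response>
```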
Agentic Prompting Frameworks: Umesh uses a personal framework involving an agentic loop to build and refine prompts. Starting with a simple intention, the system runs iterative loops based on "value" and "intensity" scores to optimize the prompt before sending it to the best-suited model (determined by a scoring system).
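The loop structure might look roughly like the sketch below. The "value" and "intensity" scores and both helper functions are hypothetical placeholders for whatever scoring and rewriting models Umesh's framework actually uses.

```python
# Rough shape of an agentic refinement loop: score the current prompt,
# stop if it clears the target, otherwise rewrite it using the feedback.
def rewrite_prompt(prompt: str, feedback: str) -> str:
    raise NotImplementedError("ask an LLM to rewrite the prompt using the feedback")

def score_prompt(prompt: str) -> tuple[float, float, str]:
    raise NotImplementedError("ask an LLM to return (value, intensity, feedback)")

def optimize_prompt(intention: str, max_loops: int = 5, target: float = 0.9) -> str:
    prompt = intention  # start from the bare intention
    for _ in range(max_loops):
        value, intensity, feedback = score_prompt(prompt)
        if value >= target and intensity >= target:
            break  # good enough: hand it to the best-suited model
        prompt = rewrite_prompt(prompt, feedback)
    return prompt
```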
The "Council of Elders" Framework: For complex or philosophical topics (like the "crisis of existence"), Umesh employs a "council of elders" framework. This involves different AI personas debating or analyzing a topic at a deep level, leveraging not only the models' inherent knowledge but also external resources via MCPs to extract maximum insight.
Language-Based Ranking: Instead of numerical scales (1-10), Umesh's system uses language-based classifications for ranking model output quality (e.g., Extremely Poor to Extremely Good). He found that using an odd number of classifications (3, 5, or ideally 7) works best, as it avoids ties and forces a decisive judgment, improving the classifier's effectiveness.
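A small sketch of that idea follows: an odd number of language labels (seven here) that a judge model picks from instead of returning a 1-10 number, plus a comparator for choosing between two judged outputs. The exact label wording is illustrative.

```python
# Seven-point language-based scale and a simple comparator over it.
LABELS = [
    "Extremely Poor", "Poor", "Somewhat Poor", "Average",
    "Somewhat Good", "Good", "Extremely Good",
]
RANK = {label: i for i, label in enumerate(LABELS)}

def better_output(label_a: str, label_b: str) -> str:
    """Return whichever of two judged labels ranks higher on the scale."""
    return label_a if RANK[label_a] >= RANK[label_b] else label_b

print(better_output("Somewhat Good", "Average"))  # Somewhat Good
```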
The "Two Not Three" Technique: Inspired by a sales anecdote, Umesh developed the "Two Not Three" prompting technique. Instead of asking for multiple ideas at once, you ask the LLM to generate only two. Then, in a follow-up, you ask for a third idea but instruct the model to discard one of the original two and provide the reason for discarding it. This method forces the model into a structured elimination process, leading to more reasoned and potentially higher-quality ideas or structured outputs (like refining questionnaire questions).
Lateral Thinking and Stateless LLMs: Umesh discussed whether current stateless LLMs can truly achieve human-like lateral thinking – the ability to learn from one disparate experience (like a shoe salesman's technique) and apply it to another (like prompting strategy). He believes current models, lacking the capacity to "hold onto a thought" across different domains, are not capable of this inherent lateral thinking. However, humans using LLMs can force the models to combine disparate knowledge areas through clever prompting, effectively leveraging the LLM as a tool to amplify human lateral thinking and make humans smarter.

FEATURED TOOLS & TECHNIQUES:
Custom GPTs: Michael highlighted the power of creating custom GPTs on platforms like OpenAI, allowing users to control the system prompt, upload documents (like resumes), and tailor the AI for specific tasks like resume modification.
LM Studio: Discussed in detail by Umesh, LM Studio is an application for running local LLMs on your computer, now with MCP support for creating agentic workflows and integrating with external tools.
Gemini CLI: Wes and Andrew mentioned the release of the Gemini Command Line Interface, which offers an interactive, terminal-based way to work with Google's models, potentially with attractive pricing for developers.
MCPs (Model Context Protocol servers): Discussed extensively for enabling LLMs to interact with external tools, databases, and documentation. Nadine shared the insight that structuring MCP responses in a human-readable, less-nested format (such as structured text or XML tags) can improve smaller models' ability to process and recall information compared to deeply nested JSON.
Knowledge Graphs (Neo4j): Andrew shared his experience using Neo4j, a graph database, with MCPs. This approach enhances the LLM's ability to traverse data and build a structured understanding of complex information in real time, improving reasoning by providing structured context (a minimal query sketch follows this list).
Prompt Optimization Frameworks: Umesh detailed his system for automatically building and refining prompts through an agentic loop, including techniques like "Council of Elders" and the "Two Not Three" method (explained above).
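Referring back to the Neo4j item above, here is a minimal sketch of pulling a small neighborhood from a graph with the official neo4j Python driver and flattening it into text an LLM can use as structured context. The connection details, credentials, and relationship types are illustrative, and a local Neo4j instance is assumed to be running.

```python
# Fetch one node's outgoing relationships from a local Neo4j instance and
# render them as one fact per line, which smaller models tend to handle
# better than deeply nested JSON.
from neo4j import GraphDatabase

def neighborhood_as_context(name: str) -> str:
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
    query = (
        "MATCH (n {name: $name})-[r]->(m) "
        "RETURN type(r) AS rel, m.name AS target LIMIT 25"
    )
    with driver.session() as session:
        rows = session.run(query, name=name).data()
    driver.close()
    return "\n".join(f"{name} -{row['rel']}-> {row['target']}" for row in rows)

if __name__ == "__main__":
    print(neighborhood_as_context("LM Studio"))
```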
EXPERT CORNER: Umesh, Nadine, Wes, and Andrew provided deep technical insights into optimizing LLM interactions. Discussions covered the nuances of structuring data for models (XML vs. JSON), the potential for models to deduce frameworks from data (grounded theory, as observed by Andrew), the difference between prompt engineering and context engineering (supplying not just the prompt but also the tools and data the model works with), the ongoing challenge of "fighting noise" in building effective knowledge systems, and the philosophical question of AI's capacity for lateral thinking.
QUICK HITS:
Tailor your resume for every job application using AI tools.
Focus on building your personal brand and expanding your professional network.
Verify information provided by AI tools, especially for critical tasks.
Experiment with different prompting techniques and model outputs, considering format and structure.
Explore tools like LM Studio and Gemini CLI for local LLM development and interaction.
RESOURCES MENTIONED:
AI Biz Hour Website: aibizhour.com
GovBidMike's Company: biddata.ai
LM Studio: https://lmstudio.ai/
Gemini CLI: (Search for "Gemini CLI" or "Google AI CLI")
Neo4j: https://neo4j.com/
Isaac Asimov Short Story: "The Last Question"
COMING UP: Join us for tomorrow's live AI Biz Hour session at 12 PM ET!
CONNECT WITH AI BIZ HOUR: Website: aibizhour.com Andy: @andywergedal John: @AiJohnAllen Show: @aibizhour
CALL TO ACTION: Don't miss out on future insights! Join the AI Biz Hour community and subscribe to the newsletter at aibizhour.com to stay ahead in the world of AI business innovation. Engage with us on X and share your thoughts!