-
Introduction
-
Why Structured Output?
-
Explanation & Code Walkthrough
-
Conclusion
-
Introduction
When building applications that interact with Large Language Models (LLMs), you often need more than just a plain text response. Structured outputs, such as JSON, make it much easier to integrate your AI-powered app with external systems or display specific data on the frontend. In this post, we'll explore how to achieve that using LangChain. We'll also walk through some personal notes on how we've set up our React components, and finally we'll dive into our code to see it all in action.
-
Why Structured Output?
If you've ever tried to build an API around an LLM, you may have run into a significant challenge: the LLM typically returns unstructured text. For a human reader, this is totally fine. But if you plan to feed that output into another process or system, you need a consistent structure—like JSON objects or arrays—to reliably parse the returned data.
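As a quick illustration, compare extracting a value from free text versus from a JSON response (both "LLM responses" below are hypothetical examples, not real model output):

```javascript
// A hypothetical free-text LLM answer forces brittle string munging...
const freeText = "The address is in the USA, specifically New York, zip 10001.";
// ...while a hypothetical structured answer parses in one line.
const structured = '{"country": "USA", "state": "New York", "zipcode": "10001"}';

// Unstructured: a regex that breaks the moment the phrasing changes.
const zipMatch = freeText.match(/zip\s+(\d{5})/);
const zipFromText = zipMatch ? zipMatch[1] : null;

// Structured: stable keys, no guessing.
const data = JSON.parse(structured);

console.log(zipFromText); // "10001"
console.log(data.zipcode); // "10001"
```

The regex only works as long as the model happens to phrase its answer the same way; the JSON version keeps working no matter how the surrounding prose would have changed.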
In this post, we'll transform an ordinary text output into something more predictable. Whether you want to expose an API to external consumers or simply structure your own UI, having predictable keys and values will save you time and headaches.
-
Explanation & Code Walkthrough
Here's the general idea behind how to achieve a structured response using LangChain:
1. You can define a schema (for example, with fields for the country, state, and zip code) that instructs the LLM to return data in JSON format.
2. By leveraging LangChain's StructuredOutputParser, you can specify which fields you want the LLM to fill.
3. Then, by adding a "formatInstructions" variable to your prompt, you give the LLM a clear signal about how you want the final response to look.
4. Finally, you parse the LLM's raw text output into a structured JSON object that your code can easily consume.
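Before reaching for the library, the four steps above can be sketched by hand in plain JavaScript (the schema, instruction string, and sample model reply here are all illustrative assumptions, not LangChain internals):

```javascript
// 1. A schema describing the fields we want back.
const schema = {
  country: "The country extracted from the text",
  state: "The state extracted from the text",
  zipcode: "The zipcode extracted from the text",
};

// 2 & 3. Turn the schema into format instructions appended to the prompt.
const formatInstructions =
  "Respond ONLY with a JSON object containing the keys: " +
  Object.keys(schema).join(", ");

const prompt = `Extract the country, state and zipcode: {text}\n${formatInstructions}`;

// 4. Parse the model's raw reply (a hypothetical response) into an object.
const rawReply = '{"country": "USA", "state": "New York", "zipcode": "10001"}';
const parsed = JSON.parse(rawReply);

console.log(parsed.state); // "New York"
```

LangChain's StructuredOutputParser does essentially this for you, with more robust instructions and error handling.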
Incorporating these points into your own project is quite straightforward. Let's see how our own code structure implements them.
Set up a small React project with the following files:
- App.jsx, where I import AddressInformationExtractor and display it as my main component.
- llm.js, which handles the LangChain logic, including the PromptTemplate and StructuredOutputParser.
- AddressInformationExtractor.jsx, which is a React component that takes user input, sends it to the LLM, and displays the extracted country, state, and zip code.
-
1. App.jsx
```jsx
import './App.css'
import { AddressInformationExtractor } from './components/AddressInformationExtractor'

function App() {
  return (
    <>
      <AddressInformationExtractor />
    </>
  )
}

export default App
```
2. llm.js
```js
import { OpenAI } from "@langchain/openai";
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "@langchain/core/prompts";
import { StructuredOutputParser } from "langchain/output_parsers";

// Define the schema: each key is a field the LLM should return,
// each value describes what to extract.
const simpleParser = StructuredOutputParser.fromNamesAndDescriptions({
  country: "The country extracted from the text",
  state: "The state extracted from the text",
  zipcode: "The zipcode extracted from the text",
});

const formatInstructions = simpleParser.getFormatInstructions();

const model = new OpenAI({
  modelName: "gpt-3.5-turbo",
  openAIApiKey: "API-KEY", // replace with your own key
});

// Note: {formatInstructions} is a template placeholder filled in via
// partialVariables. Don't interpolate the instructions directly with a
// JS template literal, or their curly braces will be misread as
// template variables.
const myPrompt = new PromptTemplate({
  template:
    "Extract from the following text the country, state and zipcode: {text}\n{formatInstructions}",
  inputVariables: ["text"],
  partialVariables: { formatInstructions },
});

const myChain = new LLMChain({
  llm: model,
  prompt: myPrompt,
  outputKey: "questions",
  verbose: true,
  outputParser: simpleParser,
});

export const executePrompt = async (promptKeys) => {
  const response = await myChain.call(promptKeys);
  return response;
};
```
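Because outputParser is set on the chain, the raw completion is parsed before executePrompt returns, and the parsed object is nested under the configured outputKey, so the response arrives roughly as { questions: { country, state, zipcode } }. Models often wrap their JSON in a markdown code fence; the parsing step can be sketched like this (the raw completion below is a hypothetical example, and the stripping logic is illustrative, not the library's actual implementation):

```javascript
const fence = "`".repeat(3); // a markdown code fence marker: ```

// Hypothetical raw completion, similar to what the chain might receive:
const rawCompletion = `${fence}json\n{"country": "USA", "state": "New York", "zipcode": "10001"}\n${fence}`;

// Remove the fence lines and parse what's left.
const jsonText = rawCompletion
  .split("\n")
  .filter((line) => !line.startsWith(fence))
  .join("\n");
const parsed = JSON.parse(jsonText);

// LLMChain then nests the parsed object under the configured outputKey:
const response = { questions: parsed };
console.log(response.questions.zipcode); // "10001"
```

This is why the component below reads values via addressInfo.questions?.country rather than from the top level of the response.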
3. AddressInformationExtractor.jsx
```jsx
import React, { useState } from "react";
import { executePrompt } from "./llm";

export const AddressInformationExtractor = () => {
  const [text, setText] = useState(
    "there is a place in USA, a really interesting place and inside something called New York, in particular 10001."
  );
  const [addressInfo, setAddressInfo] = useState("");

  const extractAddressInfo = async () => {
    const response = await executePrompt({ text });
    setAddressInfo(response);
  };

  return (
    <div>
      <h1>Address Information Extractor</h1>
      <textarea
        value={text}
        onChange={(e) => setText(e.target.value)}
        placeholder="Enter text here"
      />
      <button onClick={extractAddressInfo}>Extract Address Information</button>
      <p>{addressInfo.questions?.country}</p>
      <p>{addressInfo.questions?.state}</p>
      <p>{addressInfo.questions?.zipcode}</p>
    </div>
  );
};
```
-
Conclusion
And that's it! With just a few tweaks—defining a schema, adding format instructions, and parsing the response—you can quickly turn a ChatGPT response into JSON that is easy to consume. Whether you are building a front-facing UI or an external API endpoint, structured outputs make your applications more robust.
Ready to expand this approach with more complex schemas or even local LLMs? Stay tuned for the next steps in this series!