LangChain Basic LLM with a Simple Schema in JavaScript

  • Introduction
  • Why Structured Output?
  • Explanation & Code Walkthrough
  • Conclusion
  • Introduction

    When building applications that interact with Large Language Models (LLMs), you often need more than just a plain text response. Structured outputs, such as JSON, make it much easier to integrate your AI-powered app with external systems or display specific data on the frontend. In this post, we'll explore how to achieve that using LangChain. We'll also walk through some personal notes on how we've set up our React components, and finally we'll dive into our code to see it all in action.

  • Why Structured Output?

    If you've ever tried to build an API around an LLM, you may have run into a significant challenge: the LLM typically returns unstructured text. For a human reader, this is totally fine. But if you plan to feed that output into another process or system, you need a consistent structure—like JSON objects or arrays—to reliably parse the returned data.

    In this post, we'll transform an ordinary text output into something more predictable. Whether you want to expose an API to external consumers or simply structure your own UI, having predictable keys and values will save you time and headaches.
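    To make the contrast concrete, here's a minimal sketch (plain JavaScript, with hypothetical model responses) of why a structured answer is easier to consume than free text:

    ```javascript
    // A free-text answer forces fragile string matching:
    const freeText = "The address is in the USA, New York, zip code 10001.";
    // There is no reliable way to pull out "state" here without ad-hoc regexes.

    // A structured answer can be parsed directly into an object:
    const structured = '{"country": "USA", "state": "New York", "zipcode": "10001"}';
    const data = JSON.parse(structured);
    console.log(data.state); // "New York"
    ```

    Once the response is a predictable object, downstream code can rely on the same keys every time.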

  • Explanation & Code Walkthrough

    Here's the general idea behind how to achieve a structured response using LangChain:

    1. You can define a schema (for example, with fields for the country, state, and zip code) that instructs the LLM to return data in JSON format.

    2. By leveraging LangChain's StructuredOutputParser, you can specify which fields you want the LLM to fill.

    3. Then, by adding a “formatInstructions” variable to your prompt, you give the LLM a clear signal about how you want the final response to look.

    4. Finally, you parse the LLM's raw text output into a structured JSON object that your code can easily consume.
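    Conceptually, the parse step boils down to pulling a JSON payload out of the model's raw text. Here's a simplified stand-in (plain JavaScript, not the actual LangChain internals) for what steps 2–4 do:

    ```javascript
    // Simplified stand-in for StructuredOutputParser.parse: the real parser also
    // generates the format instructions that describe the expected JSON schema.
    function parseStructuredOutput(rawText) {
      // Models following the format instructions typically wrap the JSON
      // in a markdown code fence; strip it if present.
      const match = rawText.match(/```(?:json)?\s*([\s\S]*?)```/);
      const jsonText = match ? match[1] : rawText;
      return JSON.parse(jsonText.trim());
    }

    // A hypothetical raw LLM response:
    const rawResponse = '```json\n{"country": "USA", "state": "New York", "zipcode": "10001"}\n```';
    const result = parseStructuredOutput(rawResponse);
    console.log(result.zipcode); // "10001"
    ```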

    Incorporating these points into your own project is quite straightforward. Let's see how our own code structure implements them.

    Set up a small React project with the following files:

    • App.jsx, which imports AddressInformationExtractor and renders it as the main component.
    • llm.js, which handles the LangChain logic, including the PromptTemplate and StructuredOutputParser.
    • AddressInformationExtractor.jsx, a React component that takes user input, sends it to the LLM, and displays the extracted country, state, and zip code.
    1. App.jsx

    App.jsx
    import './App.css'
    import { AddressInformationExtractor } from './components/AddressInformationExtractor'
    
    function App() {
      return (
        <>
          <AddressInformationExtractor />
        </>
      )
    }
    
    export default App

    2. llm.js

    llm.js
    import { OpenAI } from "@langchain/openai";
    import { LLMChain } from "langchain/chains";
    import { PromptTemplate } from "@langchain/core/prompts";
    import { StructuredOutputParser } from "langchain/output_parsers";
    
    const simpleParser = StructuredOutputParser.fromNamesAndDescriptions({
      country: "The country extracted from the text",
      state: "The state extracted from the text",
      zipcode: "The zipcode extracted from the text",
    });
    
    const formatInstructions = simpleParser.getFormatInstructions();
    
    const model = new OpenAI({
      // "gpt-3.5-turbo-instruct" is a completions model, which matches the OpenAI
      // class; for chat models such as "gpt-3.5-turbo", use ChatOpenAI instead.
      modelName: "gpt-3.5-turbo-instruct",
      // Replace with your own key, ideally read from an environment variable.
      openAIApiKey: "API-KEY",
    });
    
    const myPrompt = new PromptTemplate({
      // Use the {formatInstructions} placeholder together with partialVariables
      // instead of interpolating the instructions directly into the template
      // literal: the instructions contain literal braces, which the template
      // parser would otherwise treat as variables.
      template: `Extract from the following text the country, state and zipcode: {text}
    {formatInstructions}`,
      inputVariables: ["text"],
      partialVariables: { formatInstructions },
    });
    
    const myChain = new LLMChain({
      llm: model,
      prompt: myPrompt,
      outputKey: "questions",
      verbose: true,
      outputParser: simpleParser,
    });
    
    export const executePrompt = async (promptKeys) => {
      const response = await myChain.call(promptKeys);
      return response;
    };
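
    Because outputKey is set to "questions", the parsed fields land under that key in the chain's response. A sketch of the shape executePrompt resolves to (using a stubbed function rather than a live OpenAI call):

    ```javascript
    // Stub mirroring the shape executePrompt resolves to when outputKey is
    // "questions" and the parser extracts the three address fields.
    const fakeExecutePrompt = async () => ({
      questions: { country: "USA", state: "New York", zipcode: "10001" },
    });

    fakeExecutePrompt().then((response) => {
      console.log(response.questions.country); // "USA"
    });
    ```

    This is exactly the shape the React component below reads when it renders response.questions.country, .state, and .zipcode.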

    3. AddressInformationExtractor.jsx

    AddressInformationExtractor.jsx
    import React, { useState } from "react";
    import { executePrompt } from "./llm";
    
    export const AddressInformationExtractor = () => {
      const [text, setText] = useState(
        "there is a place in USA, a really interesting place and inside something called New York, in particular 10001."
      );
      const [addressInfo, setAddressInfo] = useState("");
    
      const extractAddressInfo = async () => {
        const response = await executePrompt({ text });
        setAddressInfo(response);
      };
    
      return (
        <div>
          <h1>Address Information Extractor</h1>
          <textarea
            value={text}
            onChange={(e) => setText(e.target.value)}
            placeholder="Enter text here"
          />
          <button onClick={extractAddressInfo}>Extract Address Information</button>
          <p>{addressInfo.questions?.country}</p>
          <p>{addressInfo.questions?.state}</p>
          <p>{addressInfo.questions?.zipcode}</p>
        </div>
      );
    };
  • Conclusion

    And that's it! With just a few tweaks—defining a schema, adding format instructions, and parsing the response—you can quickly turn a ChatGPT response into JSON that is easy to consume. Whether you are building a front-facing UI or an external API endpoint, structured outputs make your applications more robust.

    Ready to expand this approach with more complex schemas or even local LLMs? Stay tuned for the next steps in this series!
