Revolutionizing LLM-Powered Copilots with Layer and OpenAI’s GPT-4
Layer is at the forefront of AI innovation, aiming to simplify the deployment of LLM-powered copilots that are experts on specific platforms. Their vision revolves around two core functionalities: ingesting platform-specific documentation to facilitate a Q&A experience, and describing site functionality through "Invokables" or "Paths". These paths allow for more dynamic interaction with the copilot, enabling it to decide whether to complete a task or answer a question based on the user’s query.
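To make the Invokable/Path concept concrete, here is a minimal sketch of how the two ideas might be modeled. The type names and fields are our own illustration, not Layer’s actual SDK:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical data model illustrating the Invokable/Path concept;
# Layer's actual SDK types may differ.
@dataclass
class Invokable:
    name: str
    description: str               # used by the LLM to judge relevance
    func: Callable[[], object]     # the site functionality to execute

@dataclass
class Path:
    """An ordered list of Invokables the copilot may execute."""
    steps: list = field(default_factory=list)

    def run(self) -> list:
        return [step.func() for step in self.steps]

# Example: a two-step path built from two Invokables
toggle_dark = Invokable("toggle_dark_mode", "Switch the UI to dark mode", lambda: "dark")
add_ticker = Invokable("add_ticker", "Add a stock ticker widget", lambda: "ticker added")
path = Path(steps=[toggle_dark, add_ticker])
print(path.run())
```

The key design point is that each Invokable carries a natural-language description, which is what lets an LLM decide whether a given user query should trigger it.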
To showcase Layer’s copilot capabilities across a range of use cases, Lazer was brought in to develop custom tools spanning basic to advanced environments. Drawing on our specialized AI expertise, we combined tools including GPT-4, Langchain, and AI agents, tailoring each solution to a different case alongside the Layer team.
We collaborated closely with the Layer team on a number of core features and approaches that would allow their platform to simplify the deployment of LLM-powered copilots.
Developing multiple LLM-powered copilots for a diverse range of tasks
Giving users full autonomy on a platform opens up endless paths they can take. With this in mind, we developed distinct copilots to handle specific scenarios and tailor responses to each user’s request.
BasicCopilot: Executes one or more Invokables based on user queries
FallbackCopilot: Provides Q&A responses in case of errors during Invokable execution
PlanAndConfirmCopilot: Presents a plan to users before executing any Invokables
QuestionAnswerCopilot: Dedicated solely to Q&A
RouterCopilot: Determines whether executing Invokables or providing a Q&A response is best for a given user query
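The routing idea behind RouterCopilot can be sketched as follows. The real copilot delegates this decision to GPT-4; here a simple keyword heuristic stands in for the model, purely to illustrate the control flow:

```python
# Toy sketch of RouterCopilot's decision: execute Invokables or answer
# with Q&A. In production this classification is made by GPT-4; the
# keyword heuristic below is only a stand-in for illustration.
ACTION_VERBS = {"add", "create", "set", "change", "remove"}

def route(query: str) -> str:
    """Return 'invokable' for action-style queries, 'qa' otherwise."""
    first_word = query.strip().lower().split()[0]
    return "invokable" if first_word in ACTION_VERBS else "qa"

print(route("Add technical analysis for GOOGL"))   # routed to Invokables
print(route("What widgets are available?"))        # routed to Q&A
```

Once routed, an "invokable" query would be handed to something like BasicCopilot, while a "qa" query would go to QuestionAnswerCopilot, with FallbackCopilot catching execution errors.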
Showcasing Layer’s platform through an interactive demo
To demonstrate the full extent of the platform, we implemented a demo that lets users design a tailored dashboard through a set of prompts. The demo uses a local vector store to map each prompt to a list of functions executed in a particular order, abstracting that complexity behind a single prompt. Another demo, “Layer Park” ( https://buildwithlayer.github.io/park/ ), showcases Layer’s SDK so users can understand its technical capabilities. The demo supports the following prompts:
“Set to light/dark mode”
“Add technical analysis for GOOGL”
“Add news tv box widget”
“Create me a ticker for GOOGL”
“Create me a new dashboard”
“Change chart box symbol to USDCAD”
“Change chart box interval to 1 minute”
“Change ticker symbol to TSLA”
“What widgets are available?” (a Q&A prompt)
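Conceptually, the demo’s local vector store matches a user prompt to the closest known command and returns that command’s function list. The sketch below uses bag-of-words cosine similarity purely for illustration; the actual demo uses embeddings, and the command names and function lists here are hypothetical:

```python
from collections import Counter
import math

# Toy "vector store": bag-of-words vectors stand in for real embeddings.
# Commands and their function lists are hypothetical examples.
COMMANDS = {
    "set to dark mode": ["set_theme"],
    "create me a ticker for GOOGL": ["create_widget", "set_symbol"],
    "change chart box interval to 1 minute": ["find_widget", "set_interval"],
}

def vec(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def lookup(prompt: str) -> list:
    """Return the function list of the closest-matching command."""
    q = vec(prompt)
    best = max(COMMANDS, key=lambda c: cosine(q, vec(c)))
    return COMMANDS[best]

print(lookup("change the chart interval to 1 minute"))
```

Because the lookup returns an ordered function list rather than a single function, one prompt can drive a multi-step change to the dashboard.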
Enhancing the user experience to automatically detect Invokables
To further simplify the process of creating each function (known as an Invokable), we developed Layer’s "Builder" tool, which auto-detects Invokables on specific site routes and generates a configuration file that feeds into the SDK. This accelerates Invokable development by removing the need to hand-write every piece of functionality a user might need.
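A Builder-generated configuration file might look something like the following. This shape is our own illustration; the real schema belongs to Layer’s SDK and may differ:

```python
import json

# Hypothetical shape of a Builder-generated configuration file mapping
# a site route to its auto-detected Invokables. Field names are
# illustrative, not Layer's actual schema.
config = {
    "route": "/dashboard",
    "invokables": [
        {"name": "set_theme", "description": "Toggle light/dark mode",
         "params": {"mode": "string"}},
        {"name": "add_widget", "description": "Add a widget to the dashboard",
         "params": {"widget_type": "string"}},
    ],
}

print(json.dumps(config, indent=2))
```

The value of generating this file automatically is that the SDK can load it directly, so developers never have to enumerate their site’s functionality by hand.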
Developing a Python-based backend for document ingestion
By approaching development through the lens of the user, we aimed to provide solutions for a variety of situations. Our Python and Flask-based backend allows users to ingest and organize documents that can then be used to answer specific Q&A prompts, enabling them to extend Layer’s full potential.
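The ingestion flow can be sketched at its simplest as chunking a document and filing the chunks per user. The production backend serves this through Flask routes and embeds chunks into a vector store; the stdlib-only sketch below shows only the chunk-and-organize step, with hypothetical names throughout:

```python
# Minimal sketch of per-user document ingestion. The production backend
# is Flask-based and embeds chunks into a vector store; here an
# in-memory dict stands in, and all names are illustrative.
STORE: dict = {}

def chunk(text: str, size: int = 200) -> list:
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def ingest(user_id: str, document: str) -> int:
    """Store a user's document as chunks; return how many were added."""
    chunks = chunk(document)
    STORE.setdefault(user_id, []).extend(chunks)
    return len(chunks)

added = ingest("user-1", "Layer lets you describe site functionality " * 20)
print(added, "chunks stored for user-1")
```

Keying the store by user is what lets each copilot answer Q&A prompts from that user’s own documentation rather than a shared corpus.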
Leveraging Langchain to solve “path abstraction”
Each copilot required executing various Invokables (functions) in a particular order, referred to as a “Path”. The path had to guarantee that certain Invokables execute after others, and give users a way to confirm execution before it happens. We adopted Langchain to gain control over the order of execution, but its limitations made the solution unreliable at times. As a result, we architected solutions for edge cases while leveraging Langchain’s custom agents and Router Chain.
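The ordering constraint at the heart of this problem can be expressed as a dependency graph over Invokables and resolved with a topological sort. This is a conceptual sketch using Python’s standard library, not the Langchain-based solution described above, and the Invokable names are hypothetical:

```python
from graphlib import TopologicalSorter

# Sketch of the "path abstraction" ordering problem: some Invokables
# must run after others. Each key maps an Invokable to the set of
# Invokables it depends on. Names are illustrative.
deps = {
    "set_symbol": {"create_widget"},      # runs after create_widget
    "set_interval": {"create_widget"},
    "create_widget": {"create_dashboard"},
}

def plan(invokables: dict) -> list:
    """Return an execution order satisfying every dependency."""
    return list(TopologicalSorter(invokables).static_order())

order = plan(deps)
print(order)  # create_dashboard first, then create_widget, then the rest
```

In the PlanAndConfirmCopilot flow, an order like this is exactly what would be presented to the user for confirmation before any Invokable runs.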
Establishing a complete architecture with Google Cloud
To ensure a robust and scalable architecture, we utilized a suite of tools from Google Cloud Platform (GCP), including Cloud Run, Google Storage, Cloud SQL, and Compute Engine. This infrastructure allowed us to host ChromaDB and ensure seamless integrations with other platforms. When working with innovative technologies, it’s strongly beneficial to build on reliable tools.
Collaborating with Layer has been a journey of innovation and exploration, and we thoroughly enjoyed working with them. We’re proud of how far Layer’s platform has grown and improved during our time with their team, and we’re excited to keep helping them build a platform for easily creating and deploying LLM-powered copilots.