JigsawStack Prompt Engine is a powerful AI SDK designed to integrate into any backend, automating tasks such as web scraping, optical character recognition (OCR), translation, and more, using custom fine-tuned models. By plugging JigsawStack into your existing application infrastructure, you can offload the heavy lifting and focus on building.

The JigsawStack Prompt Engine is powered by the Groq LPU Inference Engine to deliver an exceptionally efficient workflow optimized for real-time performance. By leveraging Groq in the backend for fast inference and low latency, the JigsawStack Prompt Engine provides rapid results for your AI applications.

Packed with a range of built-in features, Prompt Engine makes working with LLMs effortless and efficient:

  • 🌐 Prompt caching for repeated prompt runs
  • πŸ’¬ Automatic prompt optimization for improved performance
  • πŸ“„ Response schema validation for accuracy and consistency
  • πŸ” Reusable prompts to streamline your workflow
  • 🧠 Multi-agent LLM from 50+ models for flexibility depending on your apps
  • 🚫 No virtual rate limits, tokens, and GPU management

How does it work?

The JigsawStack Prompt Engine is based on a Mixture-of-Agents (MoA) approach. Each time a prompt is executed, it is run across 5 LLMs under the hood, including LLMs powered by Groq for lightning-fast inference speed and performance, such as:

  • llama-3.1-70b-versatile
  • llama-3.1-8b-instant
  • mixtral-8x7b-32768

The output of each LLM is then ranked by a smaller model based on similarity and quality before being merged into a single output.

The JigsawStack Prompt Engine especially works well if you run the same base prompt repeatedly to allow the engine to self-tune for running the best model every single time with the built-in prompt caching feature.

Optionally each prompt execution is first ran through llama-guard-3-8b to detect and filter for common abuse such as prompt injection, crimes, sexual content and more. Prompt guard is a feature that’s built into every step of the Prompt Engine and is configurable to allow or block specific content types.

Getting Started

Prerequisite

Installation

Usage

You can get a JigsawStack API key by creating an account here. Store the key in a secure environment like a .env file.

Creating a prompt engine:

All prompts are stored allowing you to rerun them with just the prompt ID

Running the prompt:

Running a prompt directly:

You can learn more about the Prompt Engine in the docs

Llama Guard 3 by Groq

The prompt engine comes with prompt guards to prevent prompt injection from user inputs and a wide range of unsafe use cases. This can be turned on automatically using the prompt_guard field.

Prompt Guard Categories

  • defamation: Guards against potentially libelous content
  • privacy: Protects personal information
  • hate: Filters out discriminatory content
  • sexual_content: Blocks explicit material
  • elections: Prevents election misinformation
  • code_interpreter_abuse: Guards against code exploitation
  • indiscriminate_weapons: Blocks content about mass destruction weapons
  • specialized_advice: Prevents unauthorized professional advice

By implementing prompt guard, you can create more secure, reliable, and user-friendly AI-powered applications. It’s an invaluable tool for developers looking to harness the power of AI while maintaining control over the content generated and processed by their applications.

For the most up-to-date list of prompt guard options and detailed usage, refer to the full documentation.