Groq
Learn how to use Prompt Engine powered by Groq to automate tasks in your application.
JigsawStack Prompt Engine is a powerful AI SDK designed to integrate into any backend, automating tasks such as web scraping, optical character recognition (OCR), translation, and more, using custom fine-tuned models. By plugging JigsawStack into your existing application infrastructure, you can offload the heavy lifting and focus on building.
The JigsawStack Prompt Engine is powered by the Groq LPU Inference Engine to deliver an exceptionally efficient workflow optimized for real-time performance. By leveraging Groq in the backend for fast inference and low latency, the JigsawStack Prompt Engine provides rapid results for your AI applications.
Packed with a range of built-in features, Prompt Engine makes working with LLMs effortless and efficient:
- Prompt caching for repeated prompt runs
- Automatic prompt optimization for improved performance
- Response schema validation for accuracy and consistency
- Reusable prompts to streamline your workflow
- Multi-agent LLM selection from 50+ models for flexibility across applications
- No virtual rate limits, token limits, or GPU management
How does it work?
The JigsawStack Prompt Engine is based on a Mixture-of-Agents (MoA) approach. Each time a prompt is executed, it is run across 5 LLMs under the hood, including LLMs powered by Groq for lightning-fast inference speed and performance, such as:
- llama-3.1-70b-versatile
- llama-3.1-8b-instant
- mixtral-8x7b-32768
The output of each LLM is then ranked by a smaller model based on similarity and quality before being merged into a single output.
The JigsawStack Prompt Engine works especially well when you run the same base prompt repeatedly: the built-in prompt caching feature lets the engine self-tune and select the best-performing model on every run.
Optionally, each prompt execution is first run through llama-guard-3-8b to detect and filter common abuse such as prompt injection, crimes, sexual content, and more. Prompt Guard is a feature that's built into every step of the Prompt Engine and is configurable to allow or block specific content types.
Learn more about how the Prompt Engine works here.
Getting Started
Prerequisite
- Create a free JigsawStack account
- Generate a secret key from the dashboard and securely store it
Installation
Usage
You can get a JigsawStack API key by creating an account here. Store the key in a secure environment such as a .env file.
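As a minimal sketch of loading the key at runtime (the environment variable name and the x-api-key header are assumptions; check the docs for the exact header your SDK or the REST API expects):

```python
import os

# Read the secret key from the environment instead of hard-coding it.
# The variable name JIGSAWSTACK_API_KEY is an assumption; use whatever
# name your .env file defines. The placeholder fallback is only so this
# snippet runs standalone.
api_key = os.environ.get("JIGSAWSTACK_API_KEY", "sk-your-key-here")

# Assumed: JigsawStack's REST API authenticates with an x-api-key header.
headers = {"x-api-key": api_key, "Content-Type": "application/json"}
```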
Creating a prompt engine:
All prompts are stored allowing you to rerun them with just the prompt ID
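A hedged sketch of creating a reusable prompt over the REST API using only the standard library. The endpoint path, the field names (prompt, inputs, return_prompt), and the {about} placeholder syntax are assumptions; verify them against the current JigsawStack docs:

```python
import json
from urllib import request

# Prompt template: {about} is a placeholder filled in at run time.
payload = {
    "prompt": "Tell me a story about {about}",
    "inputs": [{"key": "about", "optional": False}],
    "return_prompt": "Return the result in markdown",
}

req = request.Request(
    "https://api.jigsawstack.com/v1/prompt_engine",  # assumed endpoint
    data=json.dumps(payload).encode(),
    headers={"x-api-key": "sk-your-key-here", "Content-Type": "application/json"},
    method="POST",
)
# with request.urlopen(req) as resp:   # uncomment to actually send
#     created = json.load(resp)        # response includes the stored prompt's ID
```

The response includes a prompt ID, which is all you need to run the stored prompt later.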
Running the prompt:
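Because every prompt is stored, a later run needs only the ID plus values for the declared inputs. Another sketch, again with an assumed endpoint path and field names:

```python
import json
from urllib import request

prompt_engine_id = "your-prompt-id"  # hypothetical ID returned by the create call
run_payload = {"input_values": {"about": "a leaf floating down a river"}}

req = request.Request(
    f"https://api.jigsawstack.com/v1/prompt_engine/{prompt_engine_id}",  # assumed
    data=json.dumps(run_payload).encode(),
    headers={"x-api-key": "sk-your-key-here", "Content-Type": "application/json"},
    method="POST",
)
# with request.urlopen(req) as resp:   # uncomment to actually send
#     result = json.load(resp)
```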
Running a prompt directly:
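For one-off prompts you can skip the create step and supply the template and its values in a single call. The /prompt_engine/run path and the combined payload shape below are assumptions:

```python
import json
from urllib import request

# One-shot execution: template, declared inputs, and their values together.
payload = {
    "prompt": "How do I cook {dish}?",
    "inputs": [{"key": "dish", "optional": False}],
    "input_values": {"dish": "fried rice"},
}

req = request.Request(
    "https://api.jigsawstack.com/v1/prompt_engine/run",  # assumed endpoint
    data=json.dumps(payload).encode(),
    headers={"x-api-key": "sk-your-key-here", "Content-Type": "application/json"},
    method="POST",
)
# with request.urlopen(req) as resp:   # uncomment to actually send
#     result = json.load(resp)
```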
You can learn more about the Prompt Engine in the docs
Llama Guard 3 by Groq
The Prompt Engine comes with prompt guards to prevent prompt injection from user inputs and a wide range of unsafe use cases. This can be turned on automatically using the prompt_guard field.
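A sketch of enabling it when creating a prompt. The prompt_guard field name comes from the text above; the overall payload shape and the idea that it takes a list of category names are assumptions to verify against the docs:

```python
# Hedged sketch: attach prompt guards when creating a prompt.
payload = {
    "prompt": "Summarize this user message: {text}",
    "inputs": [{"key": "text", "optional": False}],
    # Categories to block; names follow the Prompt Guard categories
    # documented by JigsawStack.
    "prompt_guard": ["sexual_content", "defamation"],
}
```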
Prompt Guard Categories
- defamation: Guards against potentially libelous content
- privacy: Protects personal information
- hate: Filters out discriminatory content
- sexual_content: Blocks explicit material
- elections: Prevents election misinformation
- code_interpreter_abuse: Guards against code exploitation
- indiscriminate_weapons: Blocks content about mass destruction weapons
- specialized_advice: Prevents unauthorized professional advice
By implementing prompt guard, you can create more secure, reliable, and user-friendly AI-powered applications. It's an invaluable tool for developers looking to harness the power of AI while maintaining control over the content generated and processed by their applications.
For the most up-to-date list of prompt guard options and detailed usage, refer to the full documentation.