API Documentation

Integrate Mastiff Defense compliance guardrails into your Shopify store's chatbot.

Overview

Mastiff Defense is a compliance screening middleware — a three-layer AI guardrail that sits between your customers and your chatbot's LLM. Every message is screened before reaching the LLM, and every response is screened before reaching the customer.

There are two integration modes depending on your setup:

Middleware Mode — Recommended

You have your own LLM or chatbot (Tidio, Gorgias, OpenAI, etc.). Use /screen/input and /screen/output to screen messages around your own LLM call.

All-in-One Mode

You don't have an AI backend yet. Use /evaluate and Mastiff Defense screens the input and returns an AI-generated response in one call. You provide the chat interface.

Base URL:

https://mastiffdefense.com

Authentication

All requests require an API key passed in the request header. Your API key was provided when you installed the app. Each key is tied to your store.

X-API-Key: your-api-key-here

If your key is missing or invalid, the request returns status: "unauthorized". Contact support if you need a new key.
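As a sketch, the header can be centralized in a small helper and the unauthorized status checked before using a response (the helper names are illustrative, not part of the API):

```javascript
// Build the headers every Mastiff Defense request needs.
// Helper names are illustrative; only the X-API-Key header is required by the API.
function buildHeaders(apiKey) {
  return {
    'Content-Type': 'application/json',
    'X-API-Key': apiKey,
  };
}

// Detect the unauthorized status so you can surface a clear error.
function isUnauthorized(body) {
  return Boolean(body) && body.status === 'unauthorized';
}

// Usage sketch:
// const res = await fetch('https://mastiffdefense.com/screen/input', {
//   method: 'POST',
//   headers: buildHeaders('your-api-key-here'),
//   body: JSON.stringify({ message: 'Hello' }),
// });
// if (isUnauthorized(await res.json())) { /* prompt for a new key */ }
```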

How It Works

Every message passes through three guardrail layers in order:

Layer 1 — Keywords
Layer 2 — Policy Rules
Layer 3 — Semantic Evaluation

If any layer blocks the message, processing stops immediately — the remaining layers are skipped. The same three layers run on both the input (before your LLM) and the output (after your LLM).
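The short-circuit behavior can be sketched as a simple pipeline. This is an illustration of the ordering only, not the service's actual implementation; the layer functions are made-up stand-ins:

```javascript
// Illustrative sketch of the layer ordering, not Mastiff Defense's real internals.
// Each layer returns null to pass, or a block result to stop the pipeline.
function runGuardrails(message, layers) {
  for (const layer of layers) {
    const verdict = layer(message);
    if (verdict) return verdict; // blocked: remaining layers are skipped
  }
  return { status: 'clean', message };
}

// Hypothetical stand-ins for the three layers:
const keywordLayer = (msg) =>
  /forbidden/i.test(msg) ? { status: 'blocked', source: 'keywords' } : null;
const policyLayer = (msg) => null;   // policy rules would go here
const semanticLayer = (msg) => null; // semantic evaluation would go here

const layers = [keywordLayer, policyLayer, semanticLayer];
```

A message caught by Layer 1 never reaches Layers 2 or 3; the same pipeline runs once on the input and once on the output.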

Middleware Mode

POST /screen/input

Screen the customer's message before sending it to your LLM.

Run the customer's message through all three guardrail layers. If the result is clean, pass the message field to your LLM. If blocked, show the message field to the customer and stop.

Request Body

{
  "message": "string — the customer's message (required)"
}

Response

{
  "status": "clean",
  "message": "What is your return policy?",
  "riskScore": 0.1,
  "riskLevel": "low",
  "reason": "Standard customer service inquiry",
  "source": "semantic_eval"
}

POST /screen/output

Screen your LLM's response before delivering it to the customer.

Run your LLM's response through all three guardrail layers. If clean or redacted, show the message field to the customer. If blocked, show the message field (a safe fallback) instead of the LLM's response.

Request Body

{
  "message": "string — your LLM's response (required)"
}

Response

{
  "status": "clean",
  "message": "Your order will arrive in 3-5 business days.",
  "riskScore": 0.1,
  "riskLevel": "low",
  "reason": "Standard shipping information",
  "source": "semantic_eval"
}

Response Statuses

Status     Meaning                        What to do
clean      Passed all layers              Use the message field as-is
blocked    Violated policy — stop here    Show message to customer (safe fallback)
redacted   Sensitive content removed      Show message (cleaned version)
error      Unexpected server error        Show fallback message, retry

Always show the message field to the customer — it is always safe to display regardless of status.
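A minimal handler for these statuses might look like this (displayTextFor is a hypothetical helper; the fallback text is an example):

```javascript
// Map a screening result to the text the chat widget should display.
// Per the status table above, the message field is safe to show for
// clean, blocked, and redacted; for error, fall back to your own copy.
function displayTextFor(result, fallback = 'Sorry, something went wrong. Please try again.') {
  switch (result.status) {
    case 'clean':    // passed all layers
    case 'redacted': // sensitive content removed
    case 'blocked':  // safe fallback supplied by the API
      return result.message;
    case 'error':    // unexpected server error: use your own fallback and retry
    default:
      return result.message || fallback;
  }
}
```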

Middleware Mode — Code Examples

JavaScript

// Full middleware flow: screen input → your LLM → screen output
async function handleCustomerMessage(customerMessage) {
  // Step 1 — Screen the input
  const inputCheck = await fetch('https://mastiffdefense.com/screen/input', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': 'your-api-key-here',
    },
    body: JSON.stringify({ message: customerMessage }),
  }).then(r => r.json());

  if (inputCheck.status === 'blocked') {
    return inputCheck.message; // Safe block message — show to customer
  }

  // Step 2 — Call your own LLM with the screened input
  const llmResponse = await yourLLM(inputCheck.message);

  // Step 3 — Screen the output
  const outputCheck = await fetch('https://mastiffdefense.com/screen/output', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': 'your-api-key-here',
    },
    body: JSON.stringify({ message: llmResponse }),
  }).then(r => r.json());

  // Always return the message field — clean, redacted, or blocked fallback
  return outputCheck.message;
}

cURL

# Step 1 — Screen the input
curl -X POST https://mastiffdefense.com/screen/input \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key-here" \
  -d '{"message": "What is your return policy?"}'

# Step 2 — Call your own LLM (not shown — use your LLM provider)

# Step 3 — Screen the output
curl -X POST https://mastiffdefense.com/screen/output \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key-here" \
  -d '{"message": "Returns are accepted within 30 days of purchase."}'

Python

import requests

API_KEY = 'your-api-key-here'
BASE = 'https://mastiffdefense.com'

def handle_customer_message(customer_message):
    # Step 1 — Screen the input
    input_check = requests.post(
        f'{BASE}/screen/input',
        headers={'Content-Type': 'application/json', 'X-API-Key': API_KEY},
        json={'message': customer_message}
    ).json()

    if input_check['status'] == 'blocked':
        return input_check['message']  # Safe block message — show to customer

    # Step 2 — Call your own LLM with the screened input
    llm_response = your_llm(input_check['message'])

    # Step 3 — Screen the output
    output_check = requests.post(
        f'{BASE}/screen/output',
        headers={'Content-Type': 'application/json', 'X-API-Key': API_KEY},
        json={'message': llm_response}
    ).json()

    # Always return the message field — clean, redacted, or blocked fallback
    return output_check['message']
All-in-One Mode

POST /evaluate

All-in-one: screen the input and generate a response in a single call.

For merchants who don't have their own LLM. Mastiff Defense screens the input, generates an AI response, screens the output, and returns the final result in a single call.

Request Body

{
  "userInput": "string — the customer's message (required)",
  "conversationHistory": [
    { "role": "user", "content": "previous customer message" },
    { "role": "assistant", "content": "previous AI response" }
  ]
}

Conversation history is optional but improves response quality for multi-turn conversations. It is never stored on our servers — send it with each request.
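Because history lives on your side, keep an array and send it with each call. A sketch, with hypothetical helper names:

```javascript
// Keep conversation history client-side and include it with each /evaluate call.
// buildEvaluatePayload and recordTurn are hypothetical helpers, not part of the API.
function buildEvaluatePayload(userInput, history) {
  return { userInput, conversationHistory: history };
}

// After each exchange, append both turns so the next request has context.
function recordTurn(history, userMessage, aiResponse) {
  history.push({ role: 'user', content: userMessage });
  history.push({ role: 'assistant', content: aiResponse });
  return history;
}

// Usage sketch:
// const history = [];
// const payload = buildEvaluatePayload('Where is my order?', history);
// ...POST payload to /evaluate, then on success:
// recordTurn(history, 'Where is my order?', data.response);
```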

Response

{
  "status": "allowed",
  "response": "The AI-generated response to show the customer",
  "riskScore": 0.12,
  "riskLevel": "low",
  "reason": "No policy violations detected",
  "source": "semantic_eval"
}

Always show the response field to the customer regardless of status.

All-in-One Mode — Code Examples

JavaScript

const response = await fetch('https://mastiffdefense.com/evaluate', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-API-Key': 'your-api-key-here',
  },
  body: JSON.stringify({
    userInput: customerMessage,
    conversationHistory: history,
  }),
});
const data = await response.json();

// Always show the response field — it is always safe
chatbot.reply(data.response);

cURL

curl -X POST https://mastiffdefense.com/evaluate \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-api-key-here" \
  -d '{
    "userInput": "What is your return policy?",
    "conversationHistory": []
  }'

Python

import requests

response = requests.post(
    'https://mastiffdefense.com/evaluate',
    headers={
        'Content-Type': 'application/json',
        'X-API-Key': 'your-api-key-here',
    },
    json={
        'userInput': customer_message,
        'conversationHistory': history,
    }
)
data = response.json()

# Always show the response field — it is always safe
chatbot.reply(data['response'])
General

Error Handling

Wrap all API calls in try/catch. If a request fails or times out, show a fallback message and retry.

try {
  const result = await screenInput(customerMessage);
  // handle result
} catch (error) {
  chatbot.reply('Sorry, I am unable to process your request right now. Please try again.');
}

Requests time out after 30 seconds. Average response time is under 1 second for keyword blocks, 1-3 seconds when semantic analysis runs.
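If you want a client-side timeout shorter than the server's 30 seconds, an abort signal works. A sketch assuming a runtime with AbortSignal.timeout (Node 17.3+ or a modern browser); screenWithTimeout is a hypothetical wrapper, not part of the API:

```javascript
// Client-side timeout using AbortSignal.timeout.
// timeoutSignal and screenWithTimeout are hypothetical helper names.
function timeoutSignal(ms) {
  return AbortSignal.timeout(ms);
}

async function screenWithTimeout(message, ms = 5000) {
  try {
    const res = await fetch('https://mastiffdefense.com/screen/input', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'X-API-Key': 'your-api-key-here' },
      body: JSON.stringify({ message }),
      signal: timeoutSignal(ms), // aborts the request if it takes longer than ms
    });
    return await res.json();
  } catch (err) {
    // Timed out or network failure: treat it like the error status above.
    return {
      status: 'error',
      message: 'Sorry, I am unable to process your request right now. Please try again.',
    };
  }
}
```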

Fail Mode

Fail mode controls what happens when Layer 3 (AI semantic analysis) is temporarily unavailable. Layers 1 and 2 (keyword matching and policy rules) always run regardless.

Fail Open — Default

If semantic analysis goes down, messages that passed Layers 1 and 2 are allowed through. Your chatbot keeps working. Suitable for most retail stores where uptime matters more than deep compliance coverage during outages.

Fail Closed

If semantic analysis goes down, all messages are blocked until service recovers. Your chatbot stops responding. Suitable for stores handling sensitive topics (health, finance, legal) where compliance must be guaranteed at all times — even at the cost of availability.

Fail mode is configured per-tenant in your policy settings. The tradeoff is straightforward:

Mode      During L3 outage                                 Best for
open      Chatbot keeps working (L1+L2 still protect you)  Retail, e-commerce, general customer service
closed    Chatbot stops responding entirely                Sensitive industries — health, finance, legal

If no fail mode is configured, the platform defaults to open. Contact [email protected] to update your fail mode setting.

Platform Guides

Tidio

Lyro compatibility

Mastiff Defense cannot be used alongside Lyro. Lyro runs its own autonomous conversation loop and Tidio disables external flow integrations while it is active, making it impossible to screen messages in either direction. If you deactivate Lyro and switch to Tidio's standard Flow builder, we can walk you through the setup — email [email protected] and we'll get you configured.

Gorgias — Step-by-Step Setup

Gorgias is a helpdesk for Shopify stores. Mastiff Defense integrates with Gorgias to screen agent responses, logging anything sensitive your support team sends to a customer.

Which mode to use with Gorgias

Use Middleware output mode (/screen/output) to screen agent responses. This catches sensitive pricing, internal notes, or policy violations in support replies and logs them to your Mastiff Defense audit trail.

How the Gorgias integration works

Gorgias HTTP integrations are fire-and-forget — the integration sends a message to Mastiff Defense, but Gorgias cannot read the response back or act on it. Flagged messages are captured in your Mastiff Defense audit log for your team to review. The trigger fires on every new message — both customer messages and agent replies — so all traffic through a ticket is screened.

Step 1 — Create the HTTP Integration

In Gorgias, go to Settings → HTTP integration, then click the Manage tab. Click Add HTTP integration in the top right. Fill in the form as follows:

Integration name:     Mastiff Defense
Trigger:              Ticket message created
URL:                  https://mastiffdefense.com/screen/output
HTTP Method:          POST
Headers:              X-API-Key: your-api-key-here
Request Body (JSON):  { "message": "{{ticket.messages[-1].body_text}}" }

Leave OAuth2 disabled — authentication is handled by the X-API-Key header above. Click Add Integration.

Step 2 — Review flagged messages

When Mastiff Defense flags a message, it appears in the Recent Flags section of your store settings page. The log shows which guardrail triggered, the risk score, and a reason. If you need help reviewing anything, email [email protected].

If you need real-time blocking (stopping a reply before it reaches the customer), contact us — this requires a custom Gorgias API callback setup that we can configure for your store.

Step 3 — Test it

Open a test ticket in Gorgias and have an agent send a reply containing something that should be flagged (e.g., paste an internal pricing formula or a keyword from your policy). Within a few seconds the message should appear in your Mastiff Defense audit log marked as blocked or flagged.
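You can also exercise the endpoint directly from your own machine, simulating the payload the integration sends. A sketch with hypothetical helper names; the sample text is made up:

```javascript
// Simulate the body the Gorgias HTTP integration posts to /screen/output.
// gorgiasTestPayload and isFlagged are hypothetical helpers, not part of the API.
function gorgiasTestPayload(bodyText) {
  return { message: bodyText };
}

// A result counts as flagged when the guardrails blocked or redacted it.
function isFlagged(result) {
  return result.status === 'blocked' || result.status === 'redacted';
}

// Usage sketch, run with node to confirm flagging end-to-end:
// const res = await fetch('https://mastiffdefense.com/screen/output', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json', 'X-API-Key': 'your-api-key-here' },
//   body: JSON.stringify(gorgiasTestPayload('internal pricing formula: cost * 1.8')),
// });
// if (isFlagged(await res.json())) { /* check the audit log entry */ }
```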

Need Help Setting This Up?

Integration setup takes about 10 minutes for most stores, but every setup is a little different. If you get stuck at any step — wrong variable name, unexpected response, or the flow isn't firing — email us and we'll walk you through it.

Free setup support

Email [email protected] with your store domain and which platform you're using (Tidio, Gorgias, or something else). We'll respond within one business day.

Using a different platform — Zendesk, Freshdesk, Intercom, Re:amaze, or a custom chatbot? The same API works with any platform that can make HTTP requests. The code examples in the Code Examples section above apply directly.