Getting Started with Verdic Guard API: A Complete Integration Guide
This guide walks you through integrating Verdic Guard API into your LLM application, from account setup to production deployment.
Step 1: Create an Account
- Visit Verdic Dashboard
- Sign up with your email
- Verify your email address
- Complete your profile
Step 2: Create a Project
Projects organize your validation configurations:
# Via API
curl -X POST https://api.verdic.dev/api/projects \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Customer Support Bot",
    "globalIntent": "Customer support and help desk assistance"
  }'
Response:
{
  "id": "project-uuid-here",
  "name": "Customer Support Bot",
  "globalIntent": "Customer support and help desk assistance",
  "createdAt": "2024-12-30T10:00:00Z"
}
Save the project ID; you'll need it for all validation requests.
Step 3: Generate API Key
- Navigate to Dashboard → API Keys
- Click "Generate API Key"
- Copy the key immediately (you won't see it again)
- Store it securely (environment variable, secrets manager)
# Example: Store in environment variable
export VERDIC_API_KEY="vdk_live_your_api_key_here"
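In application code, it helps to fail fast when the key is missing rather than sending unauthenticated requests. A minimal sketch, assuming a Node.js runtime (`getApiKey` is an illustrative helper, not part of any Verdic SDK):

```typescript
// Read the API key from the environment, failing fast if it is absent.
function getApiKey(): string {
  const key = process.env.VERDIC_API_KEY
  if (!key) {
    throw new Error("VERDIC_API_KEY is not set; export it before starting the app.")
  }
  return key
}
```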
Step 4: Make Your First Validation Request
JavaScript/TypeScript
async function validateOutput(output: string, projectId: string) {
  const response = await fetch('https://api.verdic.dev/api/validate', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.VERDIC_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      projectId: projectId,
      output: output,
      config: {
        globalIntent: "Customer support and help desk assistance",
        threshold: 0.76,
        enableV5: true,
        enableV6: true,
        enableV7: true,
        enableV8: true
      }
    })
  })
  if (!response.ok) {
    throw new Error(`Validation failed: ${response.statusText}`)
  }
  return await response.json()
}

// Usage
const result = await validateOutput(
  "Your order will arrive on January 15th.",
  "your-project-id"
)
console.log(`Decision: ${result.decision}`) // ALLOW, WARN, SOFT_BLOCK, or HARD_BLOCK
console.log(`Status: ${result.status}`)     // OK or BLOCKED
Python
import requests
import os

def validate_output(output: str, project_id: str) -> dict:
    response = requests.post(
        'https://api.verdic.dev/api/validate',
        headers={
            'Authorization': f'Bearer {os.getenv("VERDIC_API_KEY")}',
            'Content-Type': 'application/json'
        },
        json={
            'projectId': project_id,
            'output': output,
            'config': {
                'globalIntent': 'Customer support and help desk assistance',
                'threshold': 0.76,
                'enableV5': True,
                'enableV6': True,
                'enableV7': True,
                'enableV8': True
            }
        }
    )
    response.raise_for_status()
    return response.json()

# Usage
result = validate_output(
    "Your order will arrive on January 15th.",
    "your-project-id"
)
print(f"Decision: {result['decision']}")
print(f"Status: {result['status']}")
cURL
curl -X POST https://api.verdic.dev/api/validate \
  -H "Authorization: Bearer $VERDIC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "projectId": "your-project-id",
    "output": "Your order will arrive on January 15th.",
    "config": {
      "globalIntent": "Customer support and help desk assistance",
      "threshold": 0.76,
      "enableV5": true,
      "enableV6": true,
      "enableV7": true,
      "enableV8": true
    }
  }'
Step 5: Handle Validation Responses
Response Structure
interface ValidationResponse {
  status: "OK" | "BLOCKED"
  decision: "ALLOW" | "WARN" | "SOFT_BLOCK" | "HARD_BLOCK"
  requestId: string
  drift?: number
  threshold?: number
  reason?: string
  rotationMetrics?: {
    angle: number
    angularVelocity: number
    angularAcceleration: number
  }
  hybridScore?: {
    rotationScore: number
    distanceScore: number
    combinedScore: number
  }
  multiDimensional?: {
    aggregateScore: number
    riskLevel: "LOW" | "MEDIUM" | "HIGH"
    shouldBlock: boolean
    dimensions: {
      semanticAngle: number
      intentAlignment: number
      domainMatch: number
      topicCoherence: number
      decisionConfidence: number
    }
  }
}
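Because most of these fields are optional, a small helper that folds a response into a single log line can be handy. A sketch based on the structure above (`summarize` is illustrative; which fields are present depends on which checks ran):

```typescript
type RiskLevel = "LOW" | "MEDIUM" | "HIGH"

// Fold a validation response into one log-friendly line,
// including optional fields only when they are present.
function summarize(r: {
  decision: string
  drift?: number
  multiDimensional?: { riskLevel: RiskLevel }
}): string {
  const parts = [`decision=${r.decision}`]
  if (r.drift !== undefined) parts.push(`drift=${r.drift.toFixed(2)}`)
  if (r.multiDimensional) parts.push(`risk=${r.multiDimensional.riskLevel}`)
  return parts.join(" ")
}
```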
Decision Handling
function handleValidation(result: ValidationResponse, originalOutput: string) {
  switch (result.decision) {
    case "ALLOW":
      // Output is safe, return as-is
      return originalOutput
    case "WARN":
      // Minor issue, log and return
      console.warn(`Validation warning: ${result.reason}`)
      return originalOutput
    case "SOFT_BLOCK":
      // Significant issue, return a sanitized or fallback response
      console.warn(`Content blocked: ${result.reason}`)
      return "I apologize, but I cannot provide that information. Please contact support."
    case "HARD_BLOCK":
    default:
      // Critical violation (or an unrecognized decision), block completely
      console.error(`Content hard blocked: ${result.reason}`)
      return "I'm unable to assist with that request. Please contact our support team."
  }
}
Step 6: Integrate with Your LLM Application
Complete Example
import OpenAI from 'openai'
import { validateOutput, handleValidation } from './verdic'

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })
const VERDIC_PROJECT_ID = process.env.VERDIC_PROJECT_ID!
const VERDIC_API_KEY = process.env.VERDIC_API_KEY

async function handleUserQuery(query: string): Promise<string> {
  // 1. Generate LLM response
  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: query }]
  })
  const llmOutput = completion.choices[0].message.content || ""

  // 2. Validate with Verdic
  const validation = await validateOutput(llmOutput, VERDIC_PROJECT_ID)

  // 3. Handle validation decision
  return handleValidation(validation, llmOutput)
}

// Usage
const response = await handleUserQuery("Tell me about your products")
console.log(response)
Step 7: Error Handling
Always handle errors gracefully:
// Note: the checks below assume validateOutput attaches the HTTP status code
// to the errors it throws (e.g. by setting `err.status = response.status`
// before throwing).
async function validateWithErrorHandling(output: string, projectId: string) {
  try {
    return await validateOutput(output, projectId)
  } catch (error: any) {
    if (error.status === 401) {
      // Invalid API key
      console.error("Invalid API key. Please check your credentials.")
      return { decision: "HARD_BLOCK", error: "Authentication failed" }
    }
    if (error.status === 403) {
      // Insufficient credits
      console.error("Credit limit reached. Please upgrade your plan.")
      return { decision: "HARD_BLOCK", error: "Credits exhausted" }
    }
    if (error.status === 504) {
      // Timeout
      console.warn("Validation timed out. Allowing output with caution.")
      return { decision: "WARN", error: "Timeout" }
    }
    // Other errors - fail secure
    console.error("Validation error:", error)
    return { decision: "HARD_BLOCK", error: "Validation failed" }
  }
}
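Transient failures (network blips, 5xx responses) are often worth a retry or two before falling back to the fail-secure path. A generic sketch; the attempt counts and delays are assumptions, not documented Verdic limits:

```typescript
// Retry an async operation with exponential backoff, rethrowing the
// last error once all attempts are exhausted.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 200
): Promise<T> {
  let lastErr: unknown
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn()
    } catch (err) {
      lastErr = err
      if (i < attempts - 1) {
        await new Promise(res => setTimeout(res, baseDelayMs * 2 ** i))
      }
    }
  }
  throw lastErr
}
```

Only wrap calls that are safe to repeat; validation requests are read-like, so retrying them is harmless.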
Step 8: Production Best Practices
1. Rate Limiting
Respect rate limits:
- Free: 10 requests/minute
- Starter: 100 requests/minute
- Pro: 1,000 requests/minute
- Enterprise: Custom limits
import { RateLimiter } from 'limiter'

const limiter = new RateLimiter({
  tokensPerInterval: 100,
  interval: 'minute'
})

async function validateWithRateLimit(output: string, projectId: string) {
  await limiter.removeTokens(1)
  return await validateOutput(output, projectId)
}
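If you'd rather not add a dependency, client-side throttling can be sketched as a token bucket. This is an illustrative sketch (the capacity mirrors the Starter tier above), not the `limiter` package's implementation:

```typescript
// A minimal token bucket: tokens refill continuously at a fixed rate,
// and a request proceeds only when a token is available.
class TokenBucket {
  private tokens: number
  private lastRefill: number

  constructor(private capacity: number, private refillPerMs: number) {
    this.tokens = capacity
    this.lastRefill = Date.now()
  }

  tryRemove(n = 1): boolean {
    const now = Date.now()
    this.tokens = Math.min(
      this.capacity,
      this.tokens + (now - this.lastRefill) * this.refillPerMs
    )
    this.lastRefill = now
    if (this.tokens >= n) {
      this.tokens -= n
      return true
    }
    return false
  }
}

// 100 requests/minute ≈ one token every 600 ms
const bucket = new TokenBucket(100, 100 / 60000)
```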
2. Caching
Cache validation results for identical outputs:
import { createHash } from 'crypto'

const validationCache = new Map<string, ValidationResponse>()

// Hash the output so long strings don't become unbounded cache keys
function hash(text: string): string {
  return createHash('sha256').update(text).digest('hex')
}

async function validateCached(output: string, projectId: string) {
  const cacheKey = `${projectId}:${hash(output)}`
  if (validationCache.has(cacheKey)) {
    return validationCache.get(cacheKey)!
  }
  const result = await validateOutput(output, projectId)
  validationCache.set(cacheKey, result)
  return result
}
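One caveat: a plain `Map` cache grows without bound. A simple FIFO eviction can reuse `Map`'s insertion order (this assumes oldest-first eviction is acceptable; a true LRU would also re-order entries on reads):

```typescript
// Insert into a Map, evicting the oldest entry once a size cap is reached.
function setBounded<K, V>(cache: Map<K, V>, key: K, value: V, maxEntries = 10_000) {
  if (cache.size >= maxEntries && !cache.has(key)) {
    const oldest = cache.keys().next().value as K
    cache.delete(oldest)
  }
  cache.set(key, value)
}
```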
3. Monitoring
Track validation metrics:
const metrics = {
  total: 0,
  allowed: 0,
  blocked: 0,
  errors: 0
}

async function validateWithMetrics(output: string, projectId: string) {
  metrics.total++
  try {
    const result = await validateOutput(output, projectId)
    if (result.decision === "ALLOW" || result.decision === "WARN") {
      metrics.allowed++
    } else {
      metrics.blocked++
    }
    return result
  } catch (error) {
    metrics.errors++
    throw error
  }
}
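In practice the raw counters feed a derived metric such as the block rate, which is usually what you alert on. A trivial helper (the name and any alert threshold you pair it with are illustrative):

```typescript
// Fraction of validations that were blocked; returns 0 before any traffic.
function blockRate(m: { total: number; blocked: number }): number {
  return m.total === 0 ? 0 : m.blocked / m.total
}
```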
4. Logging
Log all validations for auditing:
// logValidation is your own persistence helper (database, log pipeline, etc.)
async function validateWithLogging(output: string, projectId: string, userId: string) {
  const result = await validateOutput(output, projectId)
  await logValidation({
    userId,
    projectId,
    output: output.substring(0, 500), // Log first 500 chars only
    decision: result.decision,
    drift: result.drift,
    timestamp: new Date(),
    requestId: result.requestId
  })
  return result
}
Next Steps
- Test thoroughly with various outputs
- Monitor metrics in the dashboard
- Tune thresholds based on your use case
- Scale up as your usage grows
- Review logs regularly for patterns
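For the threshold-tuning step, one pragmatic approach is to replay logged drift scores from outputs a human reviewer judged acceptable and count how many a candidate threshold would have blocked. This sketch assumes an output blocks when its drift exceeds the threshold, as the default `0.76` in the examples suggests; the sample data is invented for illustration:

```typescript
// Count how many logged drift scores would exceed a candidate threshold.
function blockedAt(drifts: number[], threshold: number): number {
  return drifts.filter(d => d > threshold).length
}

// Sweep candidate thresholds over a sample of human-approved outputs.
const acceptableDrifts = [0.12, 0.34, 0.51, 0.68, 0.81]
for (const t of [0.6, 0.7, 0.76, 0.9]) {
  console.log(`threshold=${t}: would block ${blockedAt(acceptableDrifts, t)} acceptable outputs`)
}
```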
Resources
- API Documentation - Complete API reference
- Dashboard - Manage projects and API keys
- Support - Get help with integration
Conclusion
Integrating the Verdic Guard API is straightforward. With just a few API calls, you can add comprehensive validation to your LLM application. Start with basic validation and gradually add more sophisticated checks as you understand your needs.
Remember: Always handle validation errors gracefully and have fallback responses ready.

