Code Specimens

SDK Examples & Use Cases

Complete, production-ready code samples to help you integrate PromptForge Studio into your workflow in minutes.

Gemini Provider (Flash/Pro)

Standard execution using the default Google Gemini infrastructure.

import { PromptForgeClient } from "promptforge-server-sdk";

const client = new PromptForgeClient({
  apiKey: process.env.PROMPTFORGE_API_KEY,
  baseURL: "https://your-studio.vercel.app" // 👈 Required: your deployed Studio URL
});

async function run() {
  const result = await client.execute({
    versionId: "gemini-version-uuid",
    variables: { topic: "AI", tone: "friendly" }
  });

  if (result.success) console.log(result.data);
}

run().catch(console.error);

NVIDIA High-Performance

Deploy prompts to NVIDIA's accelerated infrastructure (Nemotron/Llama).

import { PromptForgeClient } from "promptforge-server-sdk";

const client = new PromptForgeClient({
  apiKey: process.env.PROMPTFORGE_API_KEY,
  baseURL: "https://your-studio.vercel.app"
});

async function run() {
  const result = await client.execute({
    versionId: "nvidia-version-uuid", // Configured with NVIDIA model
    variables: { 
      project: "PromptForge",
      goal: "democratize AI"
    }
  });

  if (result.success) console.log(result.data);
}

run().catch(console.error);

Next.js API Handler

Securely bridge your frontend with the PromptForge Studio backend.

// app/api/generate/route.ts
import { NextResponse } from "next/server";
import { PromptForgeClient } from "promptforge-server-sdk";

const client = new PromptForgeClient({
  apiKey: process.env.PROMPTFORGE_API_KEY,
  baseURL: process.env.PROMPTFORGE_BASE_URL
});

export async function POST(req: Request) {
  const { input } = await req.json();

  const result = await client.execute({
    versionId: process.env.PROMPTFORGE_VERSION_ID!,
    variables: { input }
  });

  if (!result.success) {
    return NextResponse.json({ error: result.error }, { status: 500 });
  }

  return NextResponse.json({ output: result.data });
}

Usage & Cost Observability

Extract telemetry such as token counts, the model used, and latency from each execution.

const result = await client.execute({
  versionId: "any-version-id",
  variables: { name: "Anil" }
});

if (result.success) {
  const { model, latency_ms, tokens_total } = result.meta;
  
  console.log(`Model: ${model}`);
  console.log(`Latency: ${latency_ms}ms`);
  console.log(`Tokens: ${tokens_total}`);
}
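The per-call fields above can be accumulated into a rough usage picture over time. A minimal sketch, assuming only the `model`, `latency_ms`, and `tokens_total` fields shown above (the `Meta` type and `aggregate` helper are illustrative, not part of the SDK):

```typescript
// Hypothetical shape of result.meta, based on the fields shown above.
type Meta = { model: string; latency_ms: number; tokens_total: number };

// Summarize a batch of executions: call count, total tokens, mean latency.
function aggregate(metas: Meta[]) {
  return {
    calls: metas.length,
    tokens: metas.reduce((sum, m) => sum + m.tokens_total, 0),
    avgLatencyMs:
      metas.length === 0
        ? 0
        : metas.reduce((sum, m) => sum + m.latency_ms, 0) / metas.length,
  };
}
```

Collect `result.meta` after each successful call and feed the batch to `aggregate` to track spend and performance per prompt version.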

Need more help?

Check out our community examples or join our Discord to see how other teams are using PromptForge.