# Rate Limiting
Lightning Enable applies rate limits to ensure fair usage and system stability.
## Rate Limits

### Default Limits
| Policy | Limit | Window | Applied To |
|---|---|---|---|
| Global | 100 requests | 1 minute | All authenticated requests (per API key) |
| Read | 200 requests | 1 minute | GET operations on payments, refunds, status |
| Payment Create | 10 requests | 1 minute | POST /api/payments, POST /api/refunds |
| Checkout Create | 5 requests | 1 minute | Stripe checkout session creation |
| Admin | 30 requests | 1 minute | All /api/admin/* endpoints |
> **Note:** Rate limits are designed to prevent abuse while allowing normal operations. The global limiter applies per API key for authenticated requests, and per IP address for anonymous requests.
## Rate Limit Headers

Every response includes rate limit information:

```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 99
X-RateLimit-Reset: 1704067200
```
| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum number of requests allowed in the current window |
| `X-RateLimit-Remaining` | Number of requests remaining in the current window |
| `X-RateLimit-Reset` | Unix timestamp (seconds) when the window resets |
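Because `X-RateLimit-Reset` is a Unix timestamp in seconds, a client can turn it into a wait duration before the next attempt. A minimal illustrative helper (the function name is ours, not part of the API):

```javascript
// Milliseconds to wait until the rate limit window resets.
// `resetHeader` is the raw X-RateLimit-Reset value (Unix seconds);
// `nowMs` defaults to the current time and is injectable for testing.
function msUntilReset(resetHeader, nowMs = Date.now()) {
  const resetMs = parseInt(resetHeader, 10) * 1000;
  return Number.isNaN(resetMs) ? 0 : Math.max(0, resetMs - nowMs);
}
```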
## Rate Limit Exceeded

When you exceed the rate limit, the API responds with `429 Too Many Requests`:

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 45
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1704067200
```

```json
{
  "error": "Too Many Requests",
  "message": "Rate limit exceeded",
  "code": "RATE_LIMIT_EXCEEDED",
  "details": {
    "limit": 100,
    "window": "1 minute",
    "retryAfter": 45
  }
}
```
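Note that per the HTTP specification, `Retry-After` may carry either delta-seconds (as in the example above) or an HTTP-date. A defensive parser, sketched here as an illustration (the helper name is ours):

```javascript
// Convert a Retry-After header (delta-seconds like "45", or an HTTP-date)
// into a delay in milliseconds, falling back to `defaultMs` when the
// header is absent or unparseable.
function retryAfterMs(headerValue, defaultMs = 60000, nowMs = Date.now()) {
  if (!headerValue) return defaultMs;
  const seconds = Number(headerValue);
  if (Number.isFinite(seconds)) return Math.max(0, seconds * 1000);
  const dateMs = Date.parse(headerValue);
  return Number.isNaN(dateMs) ? defaultMs : Math.max(0, dateMs - nowMs);
}
```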
## Handling Rate Limits

### Check Headers

```javascript
async function makeRequest(url) {
  const response = await fetch(url, {
    headers: { 'X-API-Key': API_KEY }
  });

  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
  const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);
  console.log(`Rate limit: ${remaining}/${limit} remaining`);

  if (remaining < 10) {
    console.warn('Approaching rate limit');
  }

  return response;
}
```
### Handle 429 Errors

```javascript
async function requestWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      const retryAfter = parseInt(response.headers.get('Retry-After'), 10) || 60;
      console.log(`Rate limited. Waiting ${retryAfter} seconds...`);
      await sleep(retryAfter * 1000);
      continue;
    }

    return response;
  }
  throw new Error('Max retries exceeded');
}

function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
```
### C# Implementation

```csharp
using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public class RateLimitedHttpClient
{
    private readonly HttpClient _client;
    private int _remaining = int.MaxValue;
    private DateTimeOffset _resetTime = DateTimeOffset.MinValue;

    public RateLimitedHttpClient(HttpClient client) => _client = client;

    public async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request)
    {
        // Wait if we know we're rate limited
        if (_remaining <= 0 && DateTimeOffset.UtcNow < _resetTime)
        {
            var delay = _resetTime - DateTimeOffset.UtcNow;
            await Task.Delay(delay);
        }

        var response = await _client.SendAsync(request);

        // Update rate limit info from headers
        if (response.Headers.TryGetValues("X-RateLimit-Remaining", out var values))
        {
            _remaining = int.Parse(values.First());
        }
        if (response.Headers.TryGetValues("X-RateLimit-Reset", out var resetValues))
        {
            var resetUnix = long.Parse(resetValues.First());
            _resetTime = DateTimeOffset.FromUnixTimeSeconds(resetUnix);
        }

        // Handle 429. An HttpRequestMessage cannot be sent twice,
        // so the request must be cloned before retrying.
        if (response.StatusCode == HttpStatusCode.TooManyRequests)
        {
            var retryAfter = response.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(60);
            await Task.Delay(retryAfter);
            return await SendAsync(CloneRequest(request));
        }

        return response;
    }

    // Minimal clone: method, URI, and headers (content cloning omitted for brevity).
    private static HttpRequestMessage CloneRequest(HttpRequestMessage request)
    {
        var clone = new HttpRequestMessage(request.Method, request.RequestUri);
        foreach (var header in request.Headers)
        {
            clone.Headers.TryAddWithoutValidation(header.Key, header.Value);
        }
        return clone;
    }
}
```
### Python Implementation

```python
import time

import requests


class RateLimitedClient:
    def __init__(self, api_key):
        self.api_key = api_key
        self.remaining = float('inf')
        self.reset_time = 0

    def request(self, method, url, **kwargs):
        # Wait if rate limited
        if self.remaining <= 0 and time.time() < self.reset_time:
            time.sleep(self.reset_time - time.time())

        headers = kwargs.pop('headers', {})
        headers['X-API-Key'] = self.api_key
        response = requests.request(method, url, headers=headers, **kwargs)

        # Update from headers (guard the missing-header case:
        # int(float('inf')) would raise OverflowError)
        remaining = response.headers.get('X-RateLimit-Remaining')
        if remaining is not None:
            self.remaining = int(remaining)
        reset = response.headers.get('X-RateLimit-Reset')
        if reset:
            self.reset_time = int(reset)

        # Handle 429: wait, then retry, preserving the caller's headers
        if response.status_code == 429:
            retry_after = int(response.headers.get('Retry-After', 60))
            time.sleep(retry_after)
            return self.request(method, url, headers=headers, **kwargs)

        return response
```
## Best Practices

### 1. Implement Exponential Backoff

```javascript
async function exponentialBackoff(fn, maxRetries = 5) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error) {
      if (error.status !== 429 || i === maxRetries - 1) {
        throw error;
      }
      const delay = Math.min(1000 * Math.pow(2, i), 60000);
      const jitter = Math.random() * 1000;
      await sleep(delay + jitter);
    }
  }
}
```
### 2. Use Request Queuing

```javascript
class RequestQueue {
  constructor(maxConcurrent = 10) {
    this.queue = [];
    this.running = 0;
    this.maxConcurrent = maxConcurrent;
  }

  async add(fn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ fn, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.running >= this.maxConcurrent || this.queue.length === 0) {
      return;
    }

    this.running++;
    const { fn, resolve, reject } = this.queue.shift();

    try {
      const result = await fn();
      resolve(result);
    } catch (error) {
      reject(error);
    } finally {
      this.running--;
      this.process();
    }
  }
}

// Usage
const queue = new RequestQueue(5);
const results = await Promise.all(
  paymentIds.map(id =>
    queue.add(() => getPayment(id))
  )
);
```
### 3. Cache Responses

```javascript
const cache = new Map();
const CACHE_TTL = 60000; // 1 minute

async function getCachedRate(currency) {
  const cacheKey = `rate:${currency}`;
  const cached = cache.get(cacheKey);

  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }

  const data = await fetchRate(currency);
  cache.set(cacheKey, { data, timestamp: Date.now() });
  return data;
}
```
### 4. Batch Requests

Instead of making one API call per item, lean on webhooks or batch lookups:

```javascript
// Bad - 100 API calls
for (const orderId of orderIds) {
  const payment = await getPayment(orderId);
}

// Good - Use webhooks or batch endpoints
// Payments are pushed via webhook, so no polling is needed
```
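When a batch endpoint is not available and webhooks don't fit the use case, a chunked variant of `Promise.all` keeps bursts below the per-minute policy limit. This is an illustrative sketch (`inChunks` is not part of the API or SDK):

```javascript
// Run an async function over `items` in fixed-size chunks so that at
// most `chunkSize` requests are in flight at any one time.
async function inChunks(items, chunkSize, fn) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    const chunk = items.slice(i, i + chunkSize);
    // Wait for the whole chunk before starting the next one
    results.push(...(await Promise.all(chunk.map(fn))));
  }
  return results;
}
```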
### 5. Use Webhooks

Don't poll for payment status. Use webhooks instead:

```javascript
// Bad - Polling every 5 seconds
setInterval(async () => {
  const status = await getPaymentStatus(invoiceId);
  if (status === 'paid') {
    fulfillOrder();
  }
}, 5000);

// Good - Webhook notification
app.post('/webhooks/lightning', (req, res) => {
  if (req.body.event === 'payment.completed') {
    fulfillOrder(req.body.data.orderId);
  }
  res.status(200).send('OK');
});
```
## Rate Limit by Endpoint

### Payment Endpoints

| Endpoint | Policy | Limit |
|---|---|---|
| `POST /api/payments` | payment-create | 10/min |
| `GET /api/payments/{id}` | read | 200/min |
| `GET /api/payments/order/{orderId}` | read | 200/min |
| `POST /api/payments/{id}/sync` | read | 200/min |
### Refund Endpoints

| Endpoint | Policy | Limit |
|---|---|---|
| `POST /api/refunds` | payment-create | 10/min |
| `GET /api/refunds` | read | 200/min |
| `GET /api/refunds/{id}` | read | 200/min |
### Merchant Self-Service Endpoints

| Endpoint | Policy | Limit |
|---|---|---|
| `GET /api/merchant/me` | read | 200/min |
| `PUT /api/merchant/opennode-key` | read | 200/min |
| `PUT /api/merchant/webhook-url` | read | 200/min |
### Admin Endpoints

| Endpoint | Policy | Limit |
|---|---|---|
| `GET /api/admin/merchants` | admin | 30/min |
| `POST /api/admin/merchants` | admin | 30/min |
| `PUT /api/admin/merchants/{id}` | admin | 30/min |
### L402 Endpoints

| Endpoint | Policy | Limit |
|---|---|---|
| `GET /api/l402/pricing` | read | 200/min |
| `GET /api/l402/status` | read | 200/min |
| `/l402/proxy/*` | global | 100/min |
## Monitoring Rate Limits

### Track Usage

```javascript
class RateLimitMonitor {
  constructor() {
    this.history = [];
  }

  record(response) {
    const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
    const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);

    this.history.push({
      timestamp: Date.now(),
      remaining,
      limit,
      usage: (limit - remaining) / limit
    });

    // Keep last 100 entries
    if (this.history.length > 100) {
      this.history.shift();
    }
  }

  getUsage() {
    const recent = this.history.slice(-10);
    const avgUsage = recent.reduce((sum, h) => sum + h.usage, 0) / recent.length;
    return {
      averageUsage: (avgUsage * 100).toFixed(1) + '%',
      entries: recent.length
    };
  }
}
```
### Alerts

```javascript
function checkRateLimitStatus(response) {
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
  const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);
  const usage = (limit - remaining) / limit;

  if (usage > 0.8) {
    console.warn(`Rate limit warning: ${(usage * 100).toFixed(0)}% used`);
    // Send alert to monitoring system
  }
  if (usage > 0.95) {
    console.error('Critical: approaching rate limit');
    // Trigger immediate action
  }
}
```
## Enterprise Options
Need higher rate limits? Contact us for enterprise plans with:
- Custom rate limits based on your needs
- Dedicated infrastructure
- Priority support
- SLA guarantees
Contact: enterprise@lightningenable.com
## Next Steps
- Errors - Error handling
- Authentication - API key setup
- Webhooks - Avoid polling with webhooks