Version: 1.0.0

Rate Limits

The TwinEdge API applies rate limits to ensure fair usage and platform stability.

Rate Limit Overview

Rate limits are applied at three levels:

  • Per API key: Each API key has independent limits
  • Per endpoint: Some endpoints have specific limits
  • Per organization: Aggregate limits across all keys

Limits by Tier

REST API Limits

| Tier | Requests/min | Requests/hour | Requests/day |
|------|--------------|---------------|--------------|
| Trial | 60 | 1,000 | 10,000 |
| Starter | 300 | 10,000 | 100,000 |
| Professional | 1,000 | 50,000 | 500,000 |
| Enterprise | Custom | Custom | Custom |

WebSocket Limits

| Tier | Connections | Messages/sec (in) | Messages/sec (out) |
|------|-------------|-------------------|--------------------|
| Trial | 2 | 5 | 50 |
| Starter | 5 | 10 | 100 |
| Professional | 20 | 50 | 500 |
| Enterprise | Unlimited | 200 | 2,000 |

Telemetry Ingestion Limits

| Tier | Points/min | Points/hour | Storage (GB) |
|------|------------|-------------|--------------|
| Trial | 1,000 | 10,000 | 1 |
| Starter | 10,000 | 100,000 | 10 |
| Professional | 100,000 | 1,000,000 | 100 |
| Enterprise | Custom | Custom | Custom |
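
As a quick sanity check, you can map a planned ingestion rate to the smallest tier that covers it. This is a sketch using the points/min column from the table above; "Enterprise" stands in for custom limits:

```python
# Points/min limits per tier, copied from the table above.
TIER_POINTS_PER_MIN = {"Trial": 1_000, "Starter": 10_000, "Professional": 100_000}

def minimum_tier(points_per_min):
    """Return the lowest tier whose telemetry ingestion limit covers the rate."""
    for tier, limit in TIER_POINTS_PER_MIN.items():
        if points_per_min <= limit:
            return tier
    return "Enterprise"  # beyond Professional, limits are custom

# Example: 200 assets reporting 10 points/min each = 2,000 points/min
print(minimum_tier(200 * 10))  # Starter
```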

Rate Limit Headers

All API responses include rate limit information in headers:

```
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 250
X-RateLimit-Reset: 1704538800
X-RateLimit-Window: 60
```

| Header | Description |
|--------|-------------|
| `X-RateLimit-Limit` | Maximum requests in window |
| `X-RateLimit-Remaining` | Remaining requests in window |
| `X-RateLimit-Reset` | Unix timestamp when limit resets |
| `X-RateLimit-Window` | Window duration in seconds |
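
As a minimal sketch, these headers can be turned into a wait budget on the client (assuming `headers` is any dict-like mapping, such as `requests.Response.headers`):

```python
import time

def seconds_until_reset(headers):
    """Seconds until the current window resets, per X-RateLimit-Reset."""
    reset = int(headers.get("X-RateLimit-Reset", 0))  # Unix timestamp
    return max(0, reset - int(time.time()))

def remaining_budget(headers):
    """Requests left in the current window, per X-RateLimit-Remaining."""
    return int(headers.get("X-RateLimit-Remaining", 0))
```

If `remaining_budget` is near zero, sleeping for `seconds_until_reset` before the next call avoids a 429 entirely.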

Rate Limit Exceeded

When you exceed a limit, you'll receive a `429 Too Many Requests` response:

```
HTTP/1.1 429 Too Many Requests
Retry-After: 30
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1704538830

{
  "error": {
    "code": "rate_limited",
    "message": "Rate limit exceeded. Retry after 30 seconds.",
    "retry_after": 30
  }
}
```

Endpoint-Specific Limits

Some endpoints have additional limits:

Data Query Endpoints

| Endpoint | Limit | Notes |
|----------|-------|-------|
| `GET /telemetry` | 100 req/min | Complex queries may have lower limits |
| `POST /bi/query/execute` | 30 req/min | SQL queries are resource-intensive |
| `GET /ml/datasets/{id}/preview` | 60 req/min | Large datasets limited |

Write Endpoints

| Endpoint | Limit | Notes |
|----------|-------|-------|
| `POST /telemetry` | 1,000 req/min | Batch ingestion recommended |
| `POST /alerts` | 100 req/min | Alert creation |
| `POST /ml/training` | 10 req/hour | Training jobs |

Admin Endpoints

| Endpoint | Limit | Notes |
|----------|-------|-------|
| `POST /ota/deployments` | 10 req/hour | OTA deployments |
| `POST /organizations/*/members/invite` | 50 req/day | Member invitations |
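
One way to stay under these per-endpoint caps client-side is a token bucket per endpoint. This is a sketch, not an official client; the endpoint/limit pairs below are taken from the tables above:

```python
import time

class EndpointLimiter:
    """Minimal client-side token bucket, one bucket per endpoint."""

    def __init__(self, limits):
        # limits: {endpoint: (max_requests, window_seconds)}
        self.limits = limits
        self.buckets = {}  # endpoint -> (tokens, last_refill_time)

    def allow(self, endpoint):
        """Return True if a request may be sent now, consuming one token."""
        max_req, window = self.limits[endpoint]
        tokens, last = self.buckets.get(endpoint, (float(max_req), time.monotonic()))
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the bucket size
        tokens = min(float(max_req), tokens + (now - last) * max_req / window)
        if tokens >= 1:
            self.buckets[endpoint] = (tokens - 1, now)
            return True
        self.buckets[endpoint] = (tokens, now)
        return False

limiter = EndpointLimiter({
    "POST /telemetry": (1000, 60),   # 1,000 req/min
    "POST /ml/training": (10, 3600), # 10 req/hour
})
```

Calls that return `False` can be queued or delayed rather than sent, so the server never has to reject them.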

Best Practices

Handling Rate Limits

Implement exponential backoff:

```python
import time
import requests

def api_request(url, headers, max_retries=5):
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)

        if response.status_code == 429:
            retry_after = int(response.headers.get('Retry-After', 30))
            wait_time = retry_after * (2 ** attempt)  # Exponential backoff
            print(f"Rate limited. Waiting {wait_time}s...")
            time.sleep(wait_time)
            continue

        return response

    raise Exception("Max retries exceeded")
```

```javascript
async function apiRequest(url, headers, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, { headers });

    if (response.status === 429) {
      const retryAfter = parseInt(response.headers.get('Retry-After') || '30', 10);
      const waitTime = retryAfter * Math.pow(2, attempt); // Exponential backoff
      console.log(`Rate limited. Waiting ${waitTime}s...`);
      await new Promise(resolve => setTimeout(resolve, waitTime * 1000));
      continue;
    }

    return response;
  }

  throw new Error('Max retries exceeded');
}
```

Optimize Request Patterns

  1. Batch requests

    ```
    // Instead of multiple requests:
    POST /telemetry {"data": [{"asset": "Pump_001", "value": 1}]}
    POST /telemetry {"data": [{"asset": "Pump_001", "value": 2}]}

    // Use a single batch request:
    POST /telemetry {
      "data": [
        {"asset": "Pump_001", "value": 1},
        {"asset": "Pump_001", "value": 2}
      ]
    }
    ```
  2. Use webhooks instead of polling

    • Subscribe to events via webhooks
    • Avoid polling endpoints frequently
  3. Cache responses

    • Cache responses that don't change often
    • Respect Cache-Control headers
  4. Use appropriate intervals

    • Don't poll faster than data updates
    • Use WebSocket for real-time needs
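
To make batching concrete, here is a small buffer that groups points into a single `POST /telemetry` payload. It is a sketch: `send` stands in for whatever HTTP client call you use, and the size/age thresholds are illustrative:

```python
import time

class TelemetryBatcher:
    """Buffer telemetry points and flush them as one batch request."""

    def __init__(self, send, max_points=500, max_age=5.0):
        self.send = send            # callable that posts one {"data": [...]} payload
        self.max_points = max_points
        self.max_age = max_age      # seconds a point may wait before flushing
        self.buffer = []
        self.started = None

    def add(self, point):
        if not self.buffer:
            self.started = time.monotonic()
        self.buffer.append(point)
        if (len(self.buffer) >= self.max_points
                or time.monotonic() - self.started >= self.max_age):
            self.flush()

    def flush(self):
        if self.buffer:
            self.send({"data": self.buffer})  # one request instead of many
            self.buffer = []
```

Compared with one request per point, this turns up to `max_points` requests into a single one, which matters most against the `POST /telemetry` per-minute cap.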

Monitor Your Usage

  1. Track rate limit headers

    ```python
    def check_rate_limit(response):
        remaining = int(response.headers.get('X-RateLimit-Remaining', 0))
        limit = int(response.headers.get('X-RateLimit-Limit', 0))

        if remaining < limit * 0.1:
            print(f"Warning: Only {remaining}/{limit} requests remaining")
    ```
  2. Use the usage API

    ```
    GET /organizations/current/usage
    ```

    Response:

    ```json
    {
      "api_requests": {
        "current": 45000,
        "limit": 100000,
        "period": "day"
      },
      "telemetry_points": {
        "current": 850000,
        "limit": 1000000,
        "period": "hour"
      }
    }
    ```
  3. Set up usage alerts

    • Configure alerts at 80% usage
    • Get notified before hitting limits
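
The 80% threshold above can be checked programmatically against the usage API response. A minimal sketch, assuming the response shape shown earlier:

```python
def usage_alerts(usage, threshold=0.8):
    """Return a warning line for every metric past `threshold` of its limit."""
    alerts = []
    for metric, info in usage.items():
        if info["limit"] and info["current"] / info["limit"] >= threshold:
            alerts.append(
                f"{metric}: {info['current']}/{info['limit']} per {info['period']}"
            )
    return alerts
```

Run against the sample response above, this flags `telemetry_points` (at 85% of its hourly limit) but not `api_requests` (at 45%).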

Request Prioritization

When approaching limits:

  1. Prioritize critical operations

    • Alert acknowledgements
    • Safety-related commands
    • Data ingestion
  2. Defer non-critical operations

    • Historical data queries
    • Report generation
    • Bulk exports
  3. Use queuing

    ```python
    from collections import deque

    class RequestQueue:
        def __init__(self, rate_limit):
            self.queue = deque()
            self.rate_limit = rate_limit
            self.requests_this_minute = 0

        def add(self, request, priority='normal'):
            if priority == 'high':
                self.queue.appendleft(request)
            else:
                self.queue.append(request)

        def process(self):
            if self.requests_this_minute >= self.rate_limit:
                return None
            if not self.queue:
                return None
            self.requests_this_minute += 1
            return self.queue.popleft()
    ```
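
Note that the queue above never resets its per-minute counter. A self-contained variant (restating the class so the sketch runs on its own) rolls the window over with `time.monotonic()`:

```python
import time
from collections import deque

class RequestQueue:
    """Priority queue that releases at most `rate_limit` requests per minute."""

    def __init__(self, rate_limit):
        self.queue = deque()
        self.rate_limit = rate_limit
        self.requests_this_minute = 0
        self.window_start = time.monotonic()

    def add(self, request, priority='normal'):
        if priority == 'high':
            self.queue.appendleft(request)  # high priority jumps the line
        else:
            self.queue.append(request)

    def process(self):
        # Reset the counter when the one-minute window rolls over
        if time.monotonic() - self.window_start >= 60:
            self.window_start = time.monotonic()
            self.requests_this_minute = 0
        if self.requests_this_minute >= self.rate_limit or not self.queue:
            return None
        self.requests_this_minute += 1
        return self.queue.popleft()

q = RequestQueue(rate_limit=2)
q.add("export_report")
q.add("ack_alert", priority="high")
q.process()  # "ack_alert" is released first; "export_report" waits its turn
```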

Increasing Limits

Request Limit Increase

For Professional and Enterprise tiers:

  1. Go to Settings → API → Rate Limits
  2. Click Request Increase
  3. Provide:
    • Current usage patterns
    • Expected usage
    • Use case justification
  4. We'll review within 2 business days

Enterprise Custom Limits

Enterprise plans include:

  • Custom rate limits
  • Dedicated API infrastructure
  • Priority support
  • SLA guarantees

Contact sales@twinedgeai.com for Enterprise pricing.

Compliance

Fair Use Policy

  • Don't attempt to circumvent rate limits
  • Don't use multiple accounts to increase limits
  • Don't make unnecessary requests

Abuse Protection

Automated systems monitor for:

  • Unusual request patterns
  • Suspected abuse
  • Security threats

Violations may result in:

  • Temporary increased rate limiting
  • API key suspension
  • Account suspension

Troubleshooting

Common Issues

Issue: Hitting limits unexpectedly

  • Check for runaway loops
  • Verify batch operations are working
  • Review caching implementation

Issue: Rate limits not resetting

  • Verify server time vs local time
  • Check for multiple processes using same key
  • Contact support if issue persists

Issue: Different limits than expected

  • Verify your subscription tier
  • Check endpoint-specific limits
  • Review recent billing status

Getting Help

If you're experiencing rate limit issues:

  1. Check your usage in the dashboard
  2. Review this documentation
  3. Contact support with:
    • API key (last 4 characters)
    • Endpoints affected
    • Approximate request volume
    • Error messages received
