# Rate Limits

The TwinEdge API enforces rate limits to ensure fair usage and platform stability.
## Rate Limit Overview

Rate limits are applied at three levels:

- **Per API key:** each API key has independent limits
- **Per endpoint:** some endpoints have specific limits
- **Per organization:** aggregate limits apply across all keys
## Limits by Tier

### REST API Limits
| Tier | Requests/min | Requests/hour | Requests/day |
|---|---|---|---|
| Trial | 60 | 1,000 | 10,000 |
| Starter | 300 | 10,000 | 100,000 |
| Professional | 1,000 | 50,000 | 500,000 |
| Enterprise | Custom | Custom | Custom |
### WebSocket Limits
| Tier | Connections | Messages/sec (in) | Messages/sec (out) |
|---|---|---|---|
| Trial | 2 | 5 | 50 |
| Starter | 5 | 10 | 100 |
| Professional | 20 | 50 | 500 |
| Enterprise | Unlimited | 200 | 2,000 |
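To stay under the per-second message limits, a client can throttle its own outgoing messages. A minimal token-bucket sketch in Python (the `rate=10` value below is the Starter-tier inbound limit from the table; `TokenBucket` is an illustrative helper, not part of the TwinEdge SDK):

```python
import time

class TokenBucket:
    """Client-side throttle for outgoing WebSocket messages."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def try_send(self):
        """Return True if a message may be sent now, consuming one token."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Starter tier allows 10 inbound messages/sec
bucket = TokenBucket(rate=10, capacity=10)
```

Before each `ws.send(...)`, call `bucket.try_send()` and queue or drop the message when it returns `False`.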
### Telemetry Ingestion Limits
| Tier | Points/min | Points/hour | Storage (GB) |
|---|---|---|---|
| Trial | 1,000 | 10,000 | 1 |
| Starter | 10,000 | 100,000 | 10 |
| Professional | 100,000 | 1,000,000 | 100 |
| Enterprise | Custom | Custom | Custom |
## Rate Limit Headers

All API responses include rate limit information in headers:

```
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 250
X-RateLimit-Reset: 1704538800
X-RateLimit-Window: 60
```
| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests in the window |
| `X-RateLimit-Remaining` | Remaining requests in the window |
| `X-RateLimit-Reset` | Unix timestamp when the limit resets |
| `X-RateLimit-Window` | Window duration in seconds |
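These headers let a client pause until the window resets rather than burning retries. A small sketch, assuming a `requests`-style `response.headers` mapping:

```python
import time

def seconds_until_reset(headers):
    """Seconds to wait until the window resets, from X-RateLimit-Reset."""
    reset = int(headers.get('X-RateLimit-Reset', 0))
    return max(0, reset - int(time.time()))

def wait_if_exhausted(headers):
    """Sleep until the window resets if no requests remain."""
    if int(headers.get('X-RateLimit-Remaining', 1)) == 0:
        time.sleep(seconds_until_reset(headers))
```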
## Rate Limit Exceeded

When rate limited, you'll receive a `429 Too Many Requests` response:

```
HTTP/1.1 429 Too Many Requests
Retry-After: 30
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1704538830
```

```json
{
  "error": {
    "code": "rate_limited",
    "message": "Rate limit exceeded. Retry after 30 seconds.",
    "retry_after": 30
  }
}
```
## Endpoint-Specific Limits

Some endpoints have additional limits:

### Data Query Endpoints

| Endpoint | Limit | Notes |
|---|---|---|
| `GET /telemetry` | 100 req/min | Complex queries may have lower limits |
| `POST /bi/query/execute` | 30 req/min | SQL queries are resource-intensive |
| `GET /ml/datasets/{id}/preview` | 60 req/min | Large datasets limited |
### Write Endpoints

| Endpoint | Limit | Notes |
|---|---|---|
| `POST /telemetry` | 1,000 req/min | Batch ingestion recommended |
| `POST /alerts` | 100 req/min | Alert creation |
| `POST /ml/training` | 10 req/hour | Training jobs |
### Admin Endpoints

| Endpoint | Limit | Notes |
|---|---|---|
| `POST /ota/deployments` | 10 req/hour | OTA deployments |
| `POST /organizations/*/members/invite` | 50 req/day | Member invitations |
## Best Practices

### Handling Rate Limits

Implement exponential backoff:

```python
import time
import requests

headers = {"Authorization": "Bearer YOUR_API_KEY"}

def api_request(url, max_retries=5):
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)
        if response.status_code == 429:
            retry_after = int(response.headers.get('Retry-After', 30))
            wait_time = retry_after * (2 ** attempt)  # Exponential backoff
            print(f"Rate limited. Waiting {wait_time}s...")
            time.sleep(wait_time)
            continue
        return response
    raise Exception("Max retries exceeded")
```
The same pattern in JavaScript:

```javascript
const headers = { Authorization: 'Bearer YOUR_API_KEY' };

async function apiRequest(url, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, { headers });
    if (response.status === 429) {
      const retryAfter = parseInt(response.headers.get('Retry-After') || '30', 10);
      const waitTime = retryAfter * Math.pow(2, attempt); // Exponential backoff
      console.log(`Rate limited. Waiting ${waitTime}s...`);
      await new Promise(resolve => setTimeout(resolve, waitTime * 1000));
      continue;
    }
    return response;
  }
  throw new Error('Max retries exceeded');
}
```
### Optimize Request Patterns

1. **Batch requests**

   ```
   // Instead of multiple requests:
   POST /telemetry {"data": [{"asset": "Pump_001", "value": 1}]}
   POST /telemetry {"data": [{"asset": "Pump_001", "value": 2}]}

   // Use a single batch request:
   POST /telemetry {
     "data": [
       {"asset": "Pump_001", "value": 1},
       {"asset": "Pump_001", "value": 2}
     ]
   }
   ```

2. **Use webhooks instead of polling**
   - Subscribe to events via webhooks
   - Avoid polling endpoints frequently

3. **Cache responses**
   - Cache responses that don't change often
   - Respect `Cache-Control` headers

4. **Use appropriate intervals**
   - Don't poll faster than the data updates
   - Use WebSocket for real-time needs
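The batching pattern above can be sketched as a small client-side buffer that flushes points in groups; `send_batch` below is a hypothetical stand-in for your actual `POST /telemetry` call:

```python
class TelemetryBuffer:
    """Collects telemetry points and flushes them in batches."""

    def __init__(self, send_batch, batch_size=100):
        self.send_batch = send_batch  # callable that POSTs a list of points
        self.batch_size = batch_size
        self.points = []

    def add(self, point):
        self.points.append(point)
        if len(self.points) >= self.batch_size:
            self.flush()

    def flush(self):
        # Sends one batched request, e.g. POST /telemetry {"data": [...]}
        if self.points:
            self.send_batch(self.points)
            self.points = []
```

Call `flush()` on shutdown (or on a timer) so trailing points are not lost.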
### Monitor Your Usage

1. **Track rate limit headers**

   ```python
   def check_rate_limit(response):
       remaining = int(response.headers.get('X-RateLimit-Remaining', 0))
       limit = int(response.headers.get('X-RateLimit-Limit', 0))
       if remaining < limit * 0.1:
           print(f"Warning: Only {remaining}/{limit} requests remaining")
   ```

2. **Use the usage API**

   ```
   GET /organizations/current/usage
   ```

   Response:

   ```json
   {
     "api_requests": {
       "current": 45000,
       "limit": 100000,
       "period": "day"
     },
     "telemetry_points": {
       "current": 850000,
       "limit": 1000000,
       "period": "hour"
     }
   }
   ```

3. **Set up usage alerts**
   - Configure alerts at 80% usage
   - Get notified before hitting limits
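A periodic check against the usage endpoint might look like the sketch below, using the 80% threshold and the response shape shown above (`usage_warnings` is an illustrative helper, not a TwinEdge SDK function):

```python
def usage_warnings(usage, threshold=0.8):
    """Return warnings for metrics at or above the threshold fraction of their limit."""
    warnings = []
    for metric, info in usage.items():
        if info["limit"] and info["current"] / info["limit"] >= threshold:
            pct = 100 * info["current"] / info["limit"]
            warnings.append(
                f"{metric}: {pct:.0f}% of limit used in the current {info['period']}"
            )
    return warnings
```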
### Request Prioritization

When approaching limits:

1. **Prioritize critical operations**
   - Alert acknowledgements
   - Safety-related commands
   - Data ingestion

2. **Defer non-critical operations**
   - Historical data queries
   - Report generation
   - Bulk exports

3. **Use queuing**

   ```python
   from collections import deque

   class RequestQueue:
       def __init__(self, rate_limit):
           self.queue = deque()
           self.rate_limit = rate_limit
           self.requests_this_minute = 0  # reset this counter at the start of each minute

       def add(self, request, priority='normal'):
           # High-priority requests jump to the front of the queue
           if priority == 'high':
               self.queue.appendleft(request)
           else:
               self.queue.append(request)

       def process(self):
           # Returns the next request, or None if the queue is empty
           # or the per-minute budget is exhausted
           if self.requests_this_minute >= self.rate_limit:
               return None
           if not self.queue:
               return None
           self.requests_this_minute += 1
           return self.queue.popleft()
   ```
## Increasing Limits

### Request Limit Increase

For Professional and Enterprise tiers:

1. Go to **Settings → API → Rate Limits**
2. Click **Request Increase**
3. Provide:
   - Current usage patterns
   - Expected usage
   - Use case justification
4. We'll review within 2 business days
### Enterprise Custom Limits
Enterprise plans include:
- Custom rate limits
- Dedicated API infrastructure
- Priority support
- SLA guarantees
Contact sales@twinedgeai.com for Enterprise pricing.
## Compliance

### Fair Use Policy
- Don't attempt to circumvent rate limits
- Don't use multiple accounts to increase limits
- Don't make unnecessary requests
### Abuse Protection
Automated systems monitor for:
- Unusual request patterns
- Suspected abuse
- Security threats
Violations may result in:
- Temporary increased rate limiting
- API key suspension
- Account suspension
## Troubleshooting

### Common Issues

**Issue: Hitting limits unexpectedly**
- Check for runaway loops
- Verify batch operations are working
- Review caching implementation
**Issue: Rate limits not resetting**
- Verify server time vs local time
- Check for multiple processes using same key
- Contact support if issue persists
**Issue: Different limits than expected**
- Verify your subscription tier
- Check endpoint-specific limits
- Review recent billing status
### Getting Help
If you're experiencing rate limit issues:
- Check your usage in the dashboard
- Review this documentation
- Contact support with:
- API key (last 4 characters)
- Endpoints affected
- Approximate request volume
- Error messages received
## Next Steps
- Authentication - Auth methods and tokens
- REST Endpoints - API reference
- WebSocket API - Real-time streaming