Rate Limiting

LimesIndex implements rate limiting to ensure fair usage and maintain service quality for all users. This page explains how rate limits work and best practices for handling them.

Overview

Rate limits are applied per API key and are based on your subscription tier. Limits reset on a rolling window basis.
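To illustrate what "rolling window" means: unlike a fixed window that resets on calendar boundaries, a rolling (sliding) window counts requests over the trailing interval. The sketch below is illustrative only, not LimesIndex's actual implementation:

```python
import time
from collections import deque

class SlidingWindowCounter:
    """Allow at most `limit` events in any trailing `window` seconds."""
    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.events = deque()  # timestamps of accepted events

    def allow(self, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop events that have aged out of the trailing window
        while self.events and self.events[0] <= now - self.window:
            self.events.popleft()
        if len(self.events) < self.limit:
            self.events.append(now)
            return True
        return False
```

With this model, capacity frees up continuously as old requests age out, rather than all at once at the top of the hour.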

Rate Limit Tiers

| Tier | Requests per Hour | Requests per Day | Batch Size |
| --- | --- | --- | --- |
| Free | 1,000 | 10,000 | 10 IPs |
| Starter | 5,000 | 50,000 | 100 IPs |
| Pro | 20,000 | 200,000 | 500 IPs |
| Enterprise | Custom | Unlimited | 1,000 IPs |

Rate Limit Enforcement

Rate limits are enforced using a sliding window algorithm with token bucket burst handling. Limits are specified per-hour but enforced per-minute for smoother request distribution.
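The token bucket model used for burst handling can be sketched as follows. This is an illustrative implementation, not LimesIndex's actual limiter, and the capacity and refill numbers are made up:

```python
import time

class TokenBucket:
    """Illustrative token bucket: holds up to `capacity` tokens,
    refilled continuously at `rate_per_sec` tokens per second."""
    def __init__(self, capacity: int, rate_per_sec: float):
        self.capacity = capacity
        self.rate = rate_per_sec
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A 1,000-requests-per-hour limit works out to ~0.28 tokens/second of
# sustained throughput, with short bursts allowed up to the bucket capacity.
bucket = TokenBucket(capacity=20, rate_per_sec=1000 / 3600)
```

The bucket capacity governs how large a burst is tolerated; the refill rate governs the sustained throughput.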

Upgrade for Higher Limits

If you consistently hit rate limits, consider upgrading your plan. Visit the Dashboard to view upgrade options.

Rate Limit Headers

Every API response includes headers that help you track your rate limit status:

| Header | Description | Example |
| --- | --- | --- |
| X-RateLimit-Limit | Maximum requests per window | 1000 |
| X-RateLimit-Remaining | Requests remaining in current window | 950 |
| X-RateLimit-Reset | Unix timestamp when the limit resets | 1704067200 |

Example Response Headers

HTTP/1.1 200 OK
Content-Type: application/json
X-Request-ID: 550e8400-e29b-41d4-a716-446655440000
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 950
X-RateLimit-Reset: 1704067200

Rate Limit Exceeded Response

When you exceed the rate limit, you'll receive a 429 Too Many Requests response:

{
  "data": {
    "error": "rate limit exceeded",
    "code": "RATE_LIMIT_EXCEEDED"
  },
  "meta": {
    "processing_time_ms": 0
  }
}

The response will include a Retry-After header indicating how many seconds to wait:

HTTP/1.1 429 Too Many Requests
Content-Type: application/json
Retry-After: 60
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1704067200

Best Practices

1. Monitor Rate Limit Headers

Always check the rate limit headers in your responses to avoid hitting limits:

import requests
import time

def make_request_with_rate_limit(url, headers):
    response = requests.get(url, headers=headers)

    # Check remaining requests
    remaining = int(response.headers.get('X-RateLimit-Remaining', 0))
    reset_time = int(response.headers.get('X-RateLimit-Reset', 0))

    if remaining < 10:
        # Log warning when running low
        print(f"Warning: Only {remaining} requests remaining")
        print(f"Resets at: {time.ctime(reset_time)}")

    return response

2. Implement Exponential Backoff

When receiving a 429 response, implement exponential backoff:

import time
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def create_session_with_retry():
    session = requests.Session()
    retries = Retry(
        total=5,
        backoff_factor=1,
        status_forcelist=[429, 500, 502, 503, 504],
        allowed_methods=["GET", "POST"]
    )
    adapter = HTTPAdapter(max_retries=retries)
    session.mount("https://", adapter)
    return session

# Usage
session = create_session_with_retry()
response = session.get(
    "https://api.limesindex.com/v1/ip/8.8.8.8",
    headers={"X-API-Key": "YOUR_API_KEY"}
)
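If you prefer not to rely on the requests adapter machinery, the delay schedule can be computed by hand. This is a generic sketch of capped exponential backoff with jitter; the base, factor, and jitter values are illustrative, not prescribed by the API:

```python
import random

def backoff_delays(base: float = 1.0, factor: float = 2.0, retries: int = 5,
                   max_delay: float = 60.0, jitter: float = 0.1):
    """Yield capped exponential delays: base, base*factor, base*factor^2, ...
    with a small random jitter to avoid synchronized retries."""
    delay = base
    for _ in range(retries):
        yield min(delay, max_delay) + random.uniform(0, jitter)
        delay *= factor

# Roughly 1s, 2s, 4s, 8s, 16s (plus up to 0.1s of jitter each)
delays = list(backoff_delays())
```

Jitter matters when many workers share one API key: without it, clients that were rate limited together retry together and collide again.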

3. Use Batch Endpoints

A call to the batch endpoint counts as a single request against your rate limit, regardless of how many IPs it contains, making it far more efficient:

# Inefficient: 100 individual requests
for ip in ip_list[:100]:
    result = lookup_ip(ip)  # Uses 100 requests

# Efficient: 1 batch request
results = batch_lookup(ip_list[:100]) # Uses 1 request
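For lists larger than your tier's batch size, split them into chunks first. A minimal sketch, where `batch_lookup` is the hypothetical helper from the snippet above and the 100-IP chunk size matches the Starter tier:

```python
def chunked(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def lookup_all(ip_list, batch_size=100):
    """Look up a large IP list using the fewest possible batch requests.
    E.g. 250 IPs at a 100-IP batch size -> 3 requests instead of 250."""
    results = []
    for chunk in chunked(ip_list, batch_size):
        results.extend(batch_lookup(chunk))  # hypothetical batch helper
    return results
```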

4. Implement Request Queuing

For high-volume applications, queue requests and process them at a controlled rate:

import asyncio
from collections import deque
import time

class RateLimitedQueue:
    def __init__(self, requests_per_second=10):
        self.queue = deque()
        self.interval = 1.0 / requests_per_second
        self.last_request = 0

    async def add(self, request_func):
        self.queue.append(request_func)

    async def process(self):
        while True:
            if self.queue:
                now = time.time()
                elapsed = now - self.last_request

                if elapsed < self.interval:
                    await asyncio.sleep(self.interval - elapsed)

                request_func = self.queue.popleft()
                await request_func()
                self.last_request = time.time()
            else:
                await asyncio.sleep(0.1)

5. Cache Results Locally

Cache API responses to reduce redundant requests:

from datetime import datetime, timedelta

# Simple in-memory cache
ip_cache = {}
CACHE_TTL = timedelta(hours=1)

def lookup_ip_cached(ip_address: str) -> dict:
    now = datetime.now()

    # Check cache
    if ip_address in ip_cache:
        cached_data, cached_time = ip_cache[ip_address]
        if now - cached_time < CACHE_TTL:
            return cached_data

    # Make API call
    result = lookup_ip(ip_address)

    # Store in cache
    ip_cache[ip_address] = (result, now)

    return result

6. Use Retry-After Header

When rate limited, respect the Retry-After header:

import time
import requests

def make_request_with_retry(url, headers, max_retries=3):
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)

        if response.status_code == 429:
            retry_after = int(response.headers.get('Retry-After', 60))
            print(f"Rate limited. Waiting {retry_after} seconds...")
            time.sleep(retry_after)
            continue

        return response

    raise Exception("Max retries exceeded")

Code Examples

Python with Rate Limit Handling

import os
import time
import requests
from dataclasses import dataclass
from typing import Optional

@dataclass
class RateLimitInfo:
    limit: int
    remaining: int
    reset_time: int

class LimesIndexClient:
    def __init__(self, api_key: Optional[str] = None):
        self.api_key = api_key or os.environ.get("LIMESINDEX_API_KEY")
        self.base_url = "https://api.limesindex.com"
        self.rate_limit: Optional[RateLimitInfo] = None

    def _update_rate_limit(self, response):
        """Update rate limit info from response headers."""
        self.rate_limit = RateLimitInfo(
            limit=int(response.headers.get('X-RateLimit-Limit', 0)),
            remaining=int(response.headers.get('X-RateLimit-Remaining', 0)),
            reset_time=int(response.headers.get('X-RateLimit-Reset', 0))
        )

    def _handle_rate_limit(self):
        """Wait if rate limit is low."""
        if self.rate_limit and self.rate_limit.remaining < 5:
            wait_time = max(0, self.rate_limit.reset_time - time.time())
            if wait_time > 0:
                print(f"Approaching rate limit. Waiting {wait_time:.0f}s...")
                time.sleep(wait_time)

    def lookup_ip(self, ip: str, max_retries: int = 3) -> dict:
        """Look up an IP with rate limit handling."""
        self._handle_rate_limit()

        for attempt in range(max_retries):
            response = requests.get(
                f"{self.base_url}/v1/ip/{ip}",
                headers={
                    "X-API-Key": self.api_key,
                    "Accept": "application/json"
                }
            )

            self._update_rate_limit(response)

            if response.status_code == 429:
                retry_after = int(response.headers.get('Retry-After', 60))
                print(f"Rate limited. Retrying in {retry_after}s (attempt {attempt + 1})")
                time.sleep(retry_after)
                continue

            response.raise_for_status()
            return response.json()

        raise Exception("Max retries exceeded due to rate limiting")

    def get_rate_limit_status(self) -> Optional[RateLimitInfo]:
        """Get current rate limit status."""
        return self.rate_limit

# Usage
client = LimesIndexClient()
result = client.lookup_ip("8.8.8.8")
print(f"Result: {result}")
print(f"Remaining requests: {client.rate_limit.remaining}")

JavaScript with Rate Limit Handling

class LimesIndexClient {
  constructor(apiKey) {
    this.apiKey = apiKey || process.env.LIMESINDEX_API_KEY;
    this.baseUrl = 'https://api.limesindex.com';
    this.rateLimit = null;
  }

  updateRateLimit(headers) {
    this.rateLimit = {
      limit: parseInt(headers.get('X-RateLimit-Limit') || '0'),
      remaining: parseInt(headers.get('X-RateLimit-Remaining') || '0'),
      resetTime: parseInt(headers.get('X-RateLimit-Reset') || '0')
    };
  }

  async sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
  }

  async handleRateLimit() {
    if (this.rateLimit && this.rateLimit.remaining < 5) {
      const now = Math.floor(Date.now() / 1000);
      const waitTime = Math.max(0, this.rateLimit.resetTime - now);
      if (waitTime > 0) {
        console.log(`Approaching rate limit. Waiting ${waitTime}s...`);
        await this.sleep(waitTime * 1000);
      }
    }
  }

  async lookupIP(ip, maxRetries = 3) {
    await this.handleRateLimit();

    for (let attempt = 0; attempt < maxRetries; attempt++) {
      const response = await fetch(`${this.baseUrl}/v1/ip/${ip}`, {
        headers: {
          'X-API-Key': this.apiKey,
          'Accept': 'application/json'
        }
      });

      this.updateRateLimit(response.headers);

      if (response.status === 429) {
        const retryAfter = parseInt(response.headers.get('Retry-After') || '60');
        console.log(`Rate limited. Retrying in ${retryAfter}s (attempt ${attempt + 1})`);
        await this.sleep(retryAfter * 1000);
        continue;
      }

      if (!response.ok) {
        throw new Error(`API Error: ${response.status}`);
      }

      return response.json();
    }

    throw new Error('Max retries exceeded due to rate limiting');
  }
}

// Usage
const client = new LimesIndexClient();
const result = await client.lookupIP('8.8.8.8');
console.log('Result:', result);
console.log('Remaining:', client.rateLimit.remaining);

Go with Rate Limit Handling

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
	"strconv"
	"time"
)

type RateLimitInfo struct {
	Limit     int
	Remaining int
	ResetTime int64
}

type Client struct {
	apiKey    string
	baseURL   string
	http      *http.Client
	rateLimit *RateLimitInfo
}

func NewClient(apiKey string) *Client {
	if apiKey == "" {
		apiKey = os.Getenv("LIMESINDEX_API_KEY")
	}
	return &Client{
		apiKey:  apiKey,
		baseURL: "https://api.limesindex.com",
		http:    &http.Client{Timeout: 30 * time.Second},
	}
}

func (c *Client) updateRateLimit(resp *http.Response) {
	c.rateLimit = &RateLimitInfo{
		Limit:     parseIntHeader(resp, "X-RateLimit-Limit"),
		Remaining: parseIntHeader(resp, "X-RateLimit-Remaining"),
		ResetTime: int64(parseIntHeader(resp, "X-RateLimit-Reset")),
	}
}

func parseIntHeader(resp *http.Response, name string) int {
	val, _ := strconv.Atoi(resp.Header.Get(name))
	return val
}

func (c *Client) handleRateLimit() {
	if c.rateLimit != nil && c.rateLimit.Remaining < 5 {
		now := time.Now().Unix()
		waitTime := c.rateLimit.ResetTime - now
		if waitTime > 0 {
			fmt.Printf("Approaching rate limit. Waiting %ds...\n", waitTime)
			time.Sleep(time.Duration(waitTime) * time.Second)
		}
	}
}

func (c *Client) LookupIP(ip string) (map[string]interface{}, error) {
	c.handleRateLimit()

	maxRetries := 3
	for attempt := 0; attempt < maxRetries; attempt++ {
		req, err := http.NewRequest("GET", fmt.Sprintf("%s/v1/ip/%s", c.baseURL, ip), nil)
		if err != nil {
			return nil, err
		}
		req.Header.Set("X-API-Key", c.apiKey)
		req.Header.Set("Accept", "application/json")

		resp, err := c.http.Do(req)
		if err != nil {
			return nil, err
		}

		c.updateRateLimit(resp)

		if resp.StatusCode == 429 {
			// Close the body before retrying; a defer here would keep
			// bodies open across loop iterations.
			resp.Body.Close()
			retryAfter, _ := strconv.Atoi(resp.Header.Get("Retry-After"))
			if retryAfter == 0 {
				retryAfter = 60
			}
			fmt.Printf("Rate limited. Retrying in %ds (attempt %d)\n", retryAfter, attempt+1)
			time.Sleep(time.Duration(retryAfter) * time.Second)
			continue
		}

		body, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			return nil, err
		}
		var result map[string]interface{}
		if err := json.Unmarshal(body, &result); err != nil {
			return nil, err
		}
		return result, nil
	}

	return nil, fmt.Errorf("max retries exceeded")
}

Monitoring Your Usage

Dashboard

View your API usage in real-time on the LimesIndex Dashboard:

  • Current usage vs. limit
  • Usage over time graphs
  • Top endpoints by request count
  • Rate limit events

Programmatic Monitoring

Use the /v1/stats endpoint to monitor your usage:

curl -X GET "https://api.limesindex.com/v1/stats" \
  -H "X-API-Key: YOUR_API_KEY"

FAQs

Do batch requests count as one request?

Yes, a batch request to /v1/ip/batch counts as a single request against your rate limit, regardless of how many IPs are in the batch.

Are rate limits shared across API keys?

No, each API key has its own independent rate limit.

What happens if I need more requests?

Contact support or upgrade your plan for higher limits. Enterprise plans offer custom limits.

Are health check endpoints rate limited?

No, /healthz and /readyz are not rate limited and don't require authentication.