Distributed Web Crawling Made Simple
High-performance HTTP proxy service designed for large-scale web scraping across multiple datacenters
Why Choose Crawler Swarm?
Built for scale, performance, and reliability
Distributed Architecture
Deploy across multiple datacenters for optimal geographic distribution and fault tolerance
Async Performance
Built with FastAPI and async architecture for maximum throughput and concurrency
Secure & Validated
Token-based authentication with Pydantic validation for all requests
Real-time Monitoring
Track performance metrics and system health in real-time
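Request validation with Pydantic might look like the sketch below. The model and field names here are assumptions based on the feature list (URL, method, headers, 1–120 second timeout, redirect control), not the service's actual schema:

```python
# Hypothetical Pydantic model illustrating how the proxy could validate
# incoming requests; field names are assumptions, not the real schema.
from typing import Optional

from pydantic import BaseModel, Field


class ProxyRequest(BaseModel):
    url: str                                  # target URL to fetch
    method: str = "GET"                       # HTTP method to forward
    headers: Optional[dict[str, str]] = None  # headers forwarded to the target
    timeout: float = Field(default=30, ge=1, le=120)  # seconds, per the docs
    follow_redirects: bool = True


req = ProxyRequest(url="https://example.com", timeout=15)
```

Out-of-range values (e.g. `timeout=500`) are rejected at parse time, so invalid requests never reach the crawler workers.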
Core Features
Everything you need for professional web crawling
- Multi-HTTP Method Support (GET, POST, PUT, PATCH, DELETE, HEAD, OPTIONS)
- Custom Headers & Cookies Forwarding
- Form Data & JSON Body Support
- Configurable Timeout (1–120 seconds)
- Follow Redirects Control
- Concurrent Request Limiting & Load Balancing
- Enhanced Error Handling
- Custom Crawler Headers
- Token-Based Authentication
- Real-time Performance Metrics
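A request to the main proxy endpoint can be sketched with Python's standard library as below. The service address, payload field names, and `Bearer` authorization scheme are assumptions drawn from the feature list, not a confirmed API contract; adjust them to match your deployment:

```python
# Hedged sketch of a proxied fetch; host, field names, and auth scheme
# are assumptions based on the feature list above.
import json
import urllib.request

payload = {
    "url": "https://example.com",
    "method": "GET",
    "headers": {"User-Agent": "my-crawler/1.0"},  # custom headers forwarding
    "timeout": 30,                                # 1-120 seconds
    "follow_redirects": True,
}

req = urllib.request.Request(
    "http://localhost:8000/",                 # assumed proxy address
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <your-token>",  # token-based authentication
    },
    method="POST",
)

# with urllib.request.urlopen(req) as resp:
#     print(resp.status, resp.read()[:200])
```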
System Status
Real-time monitoring and health metrics
API Endpoints
Available routes and their descriptions
- `GET /`: This UI
- `POST /`: Main proxy endpoint for fetching URLs
- `GET /health`: Health check & system metrics
- `GET /ping`: Get active request count (requires auth)
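A minimal health probe for monitoring could look like this; the base URL and the shape of the JSON returned by `/health` are assumptions, so treat this as a sketch rather than the service's documented response format:

```python
# Illustrative probe for the /health endpoint; the host and the JSON
# response shape are assumptions; adjust to your deployment.
import json
import urllib.request


def check_health(base_url: str) -> dict:
    """Fetch /health and return the parsed JSON body."""
    with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
        return json.loads(resp.read())


# Example (assumes the service runs locally on port 8000):
# metrics = check_health("http://localhost:8000")
```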