The most powerful and easy-to-use data collection API. 50+ ready-made crawlers,
simple REST endpoints, structured JSON output.
Start collecting data in minutes. No infrastructure to manage, no proxies to configure.
Google Maps, LinkedIn, Instagram, Twitter, and 50+ more ready to use.
1. Define your scraping configuration.
2. Add URLs or parameters via API or CSV.
3. Launch the run and monitor progress.
4. Retrieve structured JSON, or have results delivered automatically.
import requests

API_KEY = "your_api_key"
headers = {"Authorization": f"Token {API_KEY}"}

# 1. Create a squid with a crawler
squid = requests.post(
    "https://api.lobstr.io/v1/squids",
    headers=headers,
    json={"crawler": "4734d096159ef05210e0e1677e8be823", "name": "Restaurants Paris"},
).json()

# 2. Add tasks to scrape
requests.post(
    "https://api.lobstr.io/v1/tasks",
    headers=headers,
    json={
        "squid": squid["id"],
        "tasks": [{"url": "https://google.com/maps/search/restaurants+paris"}],
    },
)

# 3. Start the run
run = requests.post(
    "https://api.lobstr.io/v1/runs",
    headers=headers,
    json={"squid": squid["id"]},
).json()

# 4. Get results
results = requests.get(
    "https://api.lobstr.io/v1/results",
    headers=headers,
    params={"run": run["id"]},
).json()

for place in results["data"]:
    print(f"{place['name']} - {place['rating']}")

Get your API key and start collecting data in minutes.