Python's requests library is the most popular HTTP client in the ecosystem — and pairing it with rotating mobile proxies is the difference between a scraper that runs for hours and one that gets blocked in minutes.
Datacenter IPs are flagged at the ASN level before your first request completes. Residential proxies are better but burn fast under sustained load. Mobile proxies from real 4G/5G modems carry mobile carrier ASNs that are virtually impossible to block at scale — blocking them means blocking thousands of real mobile users sharing the same CGNAT IP range.
This tutorial walks through a complete setup: from the first pip install to a production-ready scraper with automatic IP rotation, retry logic, and anti-detection headers.
Get Rotating Mobile Proxies
Real 4G/5G IPs with API rotation — Ukraine, Romania, Latvia. From $50/mo dedicated.
Prerequisites
Install the required package. The requests[socks] extra pulls in PySocks, which gives urllib3 its SOCKS5 support:
pip install "requests[socks]"
Verify the installation works:
import requests
import socks # from PySocks
print(requests.__version__)
You will also need your ProxyGrow credentials: host, port, username, password, and your rotation API URL (available in your ProxyGrow dashboard).
Basic Proxy Setup with requests
The simplest way to route a request through a SOCKS5 proxy is the proxies parameter:
import requests
proxies = {
    'http': 'socks5h://username:password@host:port',
    'https': 'socks5h://username:password@host:port',
}
response = requests.get('https://httpbin.org/ip', proxies=proxies)
print(response.json())
Run this and the response will show a Ukrainian, Romanian, or Latvian mobile carrier IP instead of your real address.
Why socks5h and Not socks5
This is the single most important detail in proxy configuration for Python:
- socks5: DNS resolution happens on your machine, then the resolved IP is forwarded through the proxy. Your real DNS queries are visible.
- socks5h: DNS resolution happens through the proxy, on the remote side. Your machine never resolves the hostname directly.
The h stands for "host-name" — it means the hostname travels through the tunnel and gets resolved at the proxy server. This is critical for anonymity because DNS leaks can reveal your real location and identity even when your HTTP traffic is proxied correctly.
Always use socks5h for scraping. The only reason to fall back to plain socks5 is a proxy server that does not support remote DNS resolution; ProxyGrow servers do, so there is no reason to downgrade.
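To make the socks5h choice hard to get wrong, URL construction can be centralized in a tiny helper (a sketch; build_proxies is a hypothetical name, not part of requests or ProxyGrow):

```python
def build_proxies(user, password, host, port, remote_dns=True):
    """Build a requests-style proxies dict; defaults to socks5h (remote DNS)."""
    scheme = 'socks5h' if remote_dns else 'socks5'
    url = f"{scheme}://{user}:{password}@{host}:{port}"
    return {'http': url, 'https': url}

proxies = build_proxies('username', 'password', 'your-proxy-host', 1080)
```

Because the default is remote DNS, a caller has to opt in explicitly to the leaky scheme.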
Session-Based Proxy (Persistent SOCKS5 Session)
Creating a Session object is almost always the right approach. Sessions reuse the underlying TCP connection, carry cookies automatically, and let you set proxy and headers once instead of on every call:
import requests
session = requests.Session()
session.proxies = {
    'http': 'socks5h://user:pass@host:port',
    'https': 'socks5h://user:pass@host:port',
}
# All requests through this session use the proxy
r = session.get('https://example.com')
print(r.status_code)
For scraping workflows where you make dozens or hundreds of requests, a session saves connection overhead and keeps the proxy authentication persistent. You configure the proxy once and forget it.
IP Rotation with the ProxyGrow API
Rotating the IP is not about switching between different proxy servers — it is about triggering a reconnection on the physical modem so the carrier assigns a new IP address. ProxyGrow exposes this as a simple API call.
Here is a complete rotation workflow:
import requests
import time
PROXY_HOST = "your-proxy-host"
PROXY_PORT = 1080
PROXY_USER = "username"
PROXY_PASS = "password"
ROTATION_URL = "https://api.proxygrow.com/rotate?key=YOUR_API_KEY"
def rotate_ip():
    requests.get(ROTATION_URL)
    time.sleep(5)  # wait for modem to reconnect

def get_current_ip(session):
    r = session.get('https://httpbin.org/ip')
    return r.json()['origin']
session = requests.Session()
session.proxies = {
    'http': f'socks5h://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}:{PROXY_PORT}',
    'https': f'socks5h://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}:{PROXY_PORT}',
}
urls_to_scrape = [
    'https://example.com/page/1',
    'https://example.com/page/2',
    # ... more URLs
]
# Scrape with rotation every 10 requests
for i, url in enumerate(urls_to_scrape):
    if i > 0 and i % 10 == 0:
        rotate_ip()
        print(f"Rotated IP. New IP: {get_current_ip(session)}")
    r = session.get(url)
    # process r.text
Why rotate every 10 requests and not every request? Rotating too frequently wastes time (each reconnect takes 3-6 seconds) and can trigger rate limits on the rotation API itself. Rotating too infrequently lets the target site build a behavioral profile on a single IP. Every 10-20 requests is a practical balance for most targets.
Why time.sleep(5) after rotation? The modem disconnects from the carrier network, renegotiates, and receives a new IP via DHCP or CGNAT assignment. If your next request fires before the modem is fully reconnected, you get a connection error. Five seconds covers the reconnection window reliably.
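If a fixed sleep feels fragile, the reconnection wait can instead be expressed as a polling loop (a sketch; rotate and get_ip are injected callables wrapping the rotation API and an IP-echo check such as httpbin.org/ip):

```python
import time

def rotate_and_wait(rotate, get_ip, old_ip, max_wait=30, interval=2):
    """Trigger rotation, then poll until the observed exit IP changes.

    rotate and get_ip are caller-supplied callables, which also makes
    the logic easy to test without a live modem.
    """
    rotate()
    waited = 0.0
    while waited < max_wait:
        time.sleep(interval)
        waited += interval
        try:
            new_ip = get_ip()
        except Exception:
            continue  # modem still reconnecting; try again
        if new_ip != old_ip:
            return new_ip
    raise TimeoutError("IP did not change within max_wait seconds")
```

This returns as soon as the carrier assigns a new IP instead of always paying the full five seconds, and raises loudly when rotation silently fails.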
Error Handling and Retry Logic
Network errors are inevitable with proxies. Modems reconnect, carriers throttle, targets return 429s. The urllib3 retry mechanism handles transient failures automatically:
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
retry = Retry(
total=3,
backoff_factor=1,
status_forcelist=[429, 500, 502, 503],
)
adapter = HTTPAdapter(max_retries=retry)
session.mount('http://', adapter)
session.mount('https://', adapter)
What each parameter does:
- total=3: retry up to 3 times before giving up
- backoff_factor=1: exponential backoff between retries; urllib3 sleeps backoff_factor * 2^(n-1) seconds before the n-th retry (roughly 2s then 4s, with the first retry firing almost immediately in most urllib3 versions)
- status_forcelist: treat these HTTP status codes as retryable failures
Mount the adapter on both http:// and https:// to cover all requests. Do this once after creating the session, before making any requests.
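The backoff arithmetic can be sketched as a standalone function to see what a given backoff_factor actually produces (this mirrors urllib3's documented formula; exact first-retry behavior varies slightly across urllib3 versions):

```python
def backoff_schedule(backoff_factor, retries):
    """Delay before the n-th consecutive retry: factor * 2**(n-1).

    urllib3 treats the first retry specially and typically fires it
    immediately, hence the 0 for n == 1.
    """
    return [0 if n == 1 else backoff_factor * (2 ** (n - 1))
            for n in range(1, retries + 1)]

print(backoff_schedule(1, 3))  # delays for total=3, backoff_factor=1
```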
Setting Realistic Headers to Avoid Detection
A mobile carrier IP sending desktop browser headers is a statistical anomaly. Real traffic from a Kyivstar or Orange Romania IP is overwhelmingly from mobile devices. Anti-bot systems know this and flag mismatches.
Set headers that are consistent with the proxy's origin:
session.headers.update({
    'User-Agent': 'Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X) AppleWebKit/605.1.15',
    'Accept-Language': 'uk-UA,uk;q=0.9,en;q=0.8',  # match proxy country
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Connection': 'keep-alive',
})
Key points:
- User-Agent: use a mobile UA. For Ukrainian, Romanian, and Latvian proxies alike, an iOS or Android user agent matches the traffic profile of real carrier users.
- Accept-Language: match the proxy country. uk-UA for Ukraine, ro-RO for Romania, lv-LV for Latvia. Some sites check this as a consistency signal.
- Accept-Encoding: always include br (Brotli). Real browsers send it, and scrapers that omit it are easier to fingerprint. Note that requests only decodes Brotli responses when the brotli or brotlicffi package is installed, so install one if you advertise br.
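The country-matching logic can be collected into one place so the UA and language never drift apart (a sketch; ACCEPT_LANGUAGE and headers_for are hypothetical helpers, not part of requests):

```python
# Accept-Language strings keyed by proxy country code (illustrative values).
ACCEPT_LANGUAGE = {
    'ua': 'uk-UA,uk;q=0.9,en;q=0.8',
    'ro': 'ro-RO,ro;q=0.9,en;q=0.8',
    'lv': 'lv-LV,lv;q=0.9,en;q=0.8',
}

MOBILE_UA = ('Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X) '
             'AppleWebKit/605.1.15')

def headers_for(country):
    """Headers consistent with a mobile carrier IP from the given country."""
    return {
        'User-Agent': MOBILE_UA,
        'Accept-Language': ACCEPT_LANGUAGE[country],
        'Accept-Encoding': 'gzip, deflate, br',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Connection': 'keep-alive',
    }
```

Then `session.headers.update(headers_for('ro'))` keeps every header aligned with the proxy's GEO in one call.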
Rate Limiting Between Requests
Machine-speed requests are an instant detection signal. Add randomized delays between requests to mimic human browsing patterns:
import time
import random
for url in urls_to_scrape:
    r = session.get(url)
    # process r.text
    time.sleep(random.uniform(1, 3))
random.uniform(1, 3) produces a float between 1.0 and 3.0 seconds, which caps your scraper at roughly 20-60 requests per minute (less once request latency is counted). This is within the range of a fast human user and well below thresholds that trigger automatic rate limiting on most sites.
For more aggressive targets, increase the range: random.uniform(2, 6). For internal APIs or targets with no anti-bot protection, you can lower it.
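One way to package the delay so it cannot be forgotten is a throttling generator (a sketch; throttled is a hypothetical helper, not part of requests):

```python
import random
import time

def throttled(urls, min_s=1.0, max_s=3.0):
    """Yield URLs with a randomized human-like pause after each one."""
    for url in urls:
        yield url
        time.sleep(random.uniform(min_s, max_s))

# Usage: the scraping loop stays clean and the pacing lives in one place.
# for url in throttled(urls_to_scrape):
#     r = session.get(url)
```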
Verifying the Proxy Works
Before running a full scraping job, always verify that the proxy is active and that rotation actually changes the IP:
import requests
import time
PROXY_HOST = "your-proxy-host"
PROXY_PORT = 1080
PROXY_USER = "username"
PROXY_PASS = "password"
ROTATION_URL = "https://api.proxygrow.com/rotate?key=YOUR_API_KEY"
session = requests.Session()
session.proxies = {
    'http': f'socks5h://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}:{PROXY_PORT}',
    'https': f'socks5h://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}:{PROXY_PORT}',
}
# Check IP before rotation
ip_before = session.get('https://httpbin.org/ip', timeout=10).json()['origin']
print(f"IP before rotation: {ip_before}")
# Trigger rotation
requests.get(ROTATION_URL)
time.sleep(5)
# Check IP after rotation
ip_after = session.get('https://httpbin.org/ip', timeout=10).json()['origin']
print(f"IP after rotation: {ip_after}")
if ip_before != ip_after:
    print("Rotation successful.")
else:
    print("Warning: IP did not change. Check rotation URL or wait longer.")
Run this before every new scraping session. If the IPs are the same, either the rotation URL is wrong or the modem hasn't finished reconnecting — increase the sleep time and try again.
Scrapy Integration
For large-scale scraping projects, Scrapy's middleware system lets you plug in proxy rotation at the spider level. The pattern is a custom DownloaderMiddleware that sets request.meta['proxy'] on each outgoing request and triggers rotation based on a counter or response code.
The core idea:
class ProxyRotationMiddleware:
    def process_request(self, request, spider):
        request.meta['proxy'] = 'socks5h://user:pass@host:port'
        # Rotation logic: call the API every N requests
For simpler Scrapy setups, the scrapy-rotating-proxies package handles pool management, but it does not support the API-triggered modem rotation that ProxyGrow provides. A custom middleware gives full control over when and how rotation happens. One caveat: Scrapy's default Twisted-based downloader only understands HTTP proxy URLs in request.meta['proxy'], not SOCKS. If your endpoint is SOCKS5-only, put a local SOCKS-to-HTTP bridge such as Privoxy in front of it.
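A fuller sketch of the counter-based middleware, with the rotation call injected as a callable so the class stays testable without a live modem (all names here are illustrative, not Scrapy or ProxyGrow API):

```python
import time

class ProxyRotationMiddleware:
    """Counter-based rotation sketch.

    Scrapy calls process_request for every outgoing request; we count
    requests and fire the rotation callable every `rotate_every` of them.
    """

    PROXY_URL = 'socks5h://user:pass@host:port'  # placeholder credentials

    def __init__(self, rotate=None, rotate_every=10, reconnect_wait=5):
        self.rotate = rotate              # callable that hits the rotation API
        self.rotate_every = rotate_every
        self.reconnect_wait = reconnect_wait
        self.count = 0

    def process_request(self, request, spider):
        self.count += 1
        if self.rotate and self.count % self.rotate_every == 0:
            self.rotate()
            time.sleep(self.reconnect_wait)  # let the modem reconnect
        request.meta['proxy'] = self.PROXY_URL
```

In a real project you would wire `rotate` to a function that calls the ProxyGrow rotation URL and register the middleware in DOWNLOADER_MIDDLEWARES.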
Common Errors and How to Fix Them
requests.exceptions.ConnectionError
The proxy server is unreachable. Causes:
- Wrong host or port in your proxy URL
- The modem is in the middle of a rotation reconnect
- Network issue between your machine and the proxy server
Fix: check your credentials, wait 5-10 seconds after rotation, verify the host resolves correctly.
requests.exceptions.ProxyError
The proxy server rejected the connection, usually due to authentication failure. Causes:
- Wrong username or password
- Credentials have expired or been revoked
- Your IP is not whitelisted if the proxy uses IP-based auth
Fix: double-check your username and password in the ProxyGrow dashboard. Make sure you are using socks5h://user:pass@host:port format (not http://).
requests.exceptions.Timeout
The request did not complete within the timeout period. Most common causes in proxy setups:
- Making a request immediately after rotation (modem still reconnecting)
- Target site is slow to respond
- The modem lost signal temporarily
Fix: always set explicit timeouts and always sleep after rotation:
try:
    r = session.get(url, timeout=(10, 30))  # (connect timeout, read timeout)
except requests.exceptions.Timeout:
    print(f"Request to {url} timed out — retrying after delay")
    time.sleep(10)
The tuple form (connect_timeout, read_timeout) is more useful than a single value because proxy connection and server response have different failure characteristics.
Complete Working Example
Putting it all together — a scraper with session, rotation, retries, headers, and error handling:
import requests
import time
import random
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

PROXY_HOST = "your-proxy-host"
PROXY_PORT = 1080
PROXY_USER = "username"
PROXY_PASS = "password"
ROTATION_URL = "https://api.proxygrow.com/rotate?key=YOUR_API_KEY"

def build_session():
    session = requests.Session()
    session.proxies = {
        'http': f'socks5h://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}:{PROXY_PORT}',
        'https': f'socks5h://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}:{PROXY_PORT}',
    }
    session.headers.update({
        'User-Agent': 'Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X) AppleWebKit/605.1.15',
        'Accept-Language': 'uk-UA,uk;q=0.9,en;q=0.8',
        'Accept-Encoding': 'gzip, deflate, br',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    })
    retry = Retry(total=3, backoff_factor=1, status_forcelist=[429, 500, 502, 503])
    adapter = HTTPAdapter(max_retries=retry)
    session.mount('http://', adapter)
    session.mount('https://', adapter)
    return session

def rotate_ip():
    try:
        requests.get(ROTATION_URL, timeout=10)
        time.sleep(5)
    except Exception as e:
        print(f"Rotation error: {e}")
        time.sleep(10)

def scrape(urls):
    session = build_session()
    for i, url in enumerate(urls):
        if i > 0 and i % 10 == 0:
            rotate_ip()
        try:
            r = session.get(url, timeout=(10, 30))
            r.raise_for_status()
            print(f"[{i}] {url} — {len(r.text)} bytes")
            # process r.text here
        except requests.exceptions.ProxyError:
            print(f"[{i}] Proxy auth error — check credentials")
        except requests.exceptions.Timeout:
            print(f"[{i}] Timeout — skipping {url}")
        except requests.exceptions.HTTPError as e:
            print(f"[{i}] HTTP {e.response.status_code} — {url}")
        time.sleep(random.uniform(1, 3))

urls = [f"https://example.com/page/{n}" for n in range(1, 51)]
scrape(urls)
This handles 50 URLs, rotates every 10, retries transient failures automatically, and prints meaningful error messages for each failure type.
Summary
The key points from this tutorial:
- Use socks5h:// not socks5:// — remote DNS resolution is not optional for anonymity
- Use requests.Session for all multi-request workflows
- Rotate IP via the ProxyGrow API, not by switching proxy servers
- Always sleep 5 seconds after triggering rotation before making the next request
- Set headers that are consistent with the proxy country and IP type (mobile UA for mobile proxies)
- Add random.uniform(1, 3) delays between requests
- Use Retry with backoff_factor to handle transient failures without manual retry loops
- Match Accept-Language to the proxy GEO — it is a consistency signal that anti-bot systems check