If you’ve ever manually checked where your site ranks for a keyword, you know the drill. Open Ahrefs, type the keyword in, note the position, maybe copy it into a spreadsheet. It works when you’re watching 20 keywords. It stops working the moment that list hits a few hundred, and it completely falls apart at a few thousand.
That’s what rank tracker APIs solve. You send a keyword, get a position back in JSON, and pipe it into whatever system you’re running: your own dashboard, a client report, a Slack alert, whatever. There’s a real market for these services now. Some of them are shockingly cheap. Others will hit you with enterprise pricing the moment you outgrow their mid-tier plan, and the sticker shock is something nobody warns you about until you’re already locked in.
And then there’s the other option: skip the API entirely and build your own tracker with Python and rotating proxies. It’s more work upfront and more work to maintain, but at high volumes the cost difference between paying per API request and paying per gigabyte of proxy bandwidth gets hard to ignore.
This article covers both paths. The first half breaks down the ranking APIs worth considering in 2026, what they actually cost at real volumes (not the “starting at” number on the pricing page), and where the pricing cliffs are. The second half is a full walkthrough for building your own keyword rank tracker with Proxidize proxies, including working Python code you can run today.
How Rank Tracker APIs Work
Every ranking API in this space does basically the same thing under the hood. You send it a keyword, a domain, and a location. It goes and scrapes Google for you, parses through the results, and sends back a structured response telling you where your domain appeared. Position 4, here’s the URL that ranked, here’s the title, here’s whether it was a featured snippet or a regular organic result.
What you’re actually paying for when you use a keyword ranking API isn’t the data itself. It’s the scraping infrastructure. Google is very, very good at blocking automated search requests, and these API providers spend significant money maintaining the proxy pools, CAPTCHA solvers, and browser fingerprinting setups that keep their scraping running at scale. You get to skip all of that. Send a keyword, get a JSON response with your keyword ranking data, move on with your day.
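To make that concrete, here’s roughly what the exchange looks like from your side. The endpoint and field names below are invented for illustration, since every provider names things differently, but the shape is always some variation of this:

import requests

# Hypothetical endpoint and response fields, purely illustrative --
# every real provider structures this differently.
resp = requests.get(
    "https://api.example-rank-tracker.com/v1/rank",
    params={
        "keyword": "mobile proxy provider",
        "domain": "proxidize.com",
        "location": "us",
    },
    headers={"Authorization": "Bearer YOUR_API_KEY"},
)

print(resp.json())
# Something like:
# {"keyword": "mobile proxy provider", "position": 4,
#  "url": "https://proxidize.com/", "title": "...", "type": "organic"}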
Best Rank Tracker APIs
Not all ranking APIs are created equal, and “best” depends entirely on what you’re trying to do. Someone tracking 200 keywords for a single client has completely different needs than a dev team building rank tracking into a SaaS product. Here’s what’s worth looking at right now.
DataForSEO
If cost per request is what you care about most, DataForSEO wins and it’s not close. Their pricing is pay-as-you-go across three tiers: Standard at $0.0006 per request, Priority at $0.0012, and Live at $0.002 for real-time results.
One thing you need to know about that $0.0006 figure: it covers 10 search results. Not 100. If you only need to know whether you’re ranking on the first page of Google, that base price is all you pay.
But if you want to check deeper, say the top 100 results to find a page sitting at position 47, additional pages of results cost 75% of the base price each. So pulling 100 results on the Standard tier actually runs about $0.00465 per request. Still cheap, but almost 8x more than the headline number. A lot of comparison articles miss this, so keep it in mind when you’re doing your own cost estimates.
They cover Google, Bing, Yahoo, Yandex, and Baidu. The API documentation is solid, the data is structured and consistent, and there’s a $50 minimum deposit to get started. For developers building rank tracking into their own tools, DataForSEO is usually the first stop.
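To give you a feel for it, here’s a minimal sketch of a DataForSEO request against their v3 Live endpoint (the $0.002 real-time tier; the cheaper Standard queue is a two-step task-post/task-get flow). Field names follow their docs at the time of writing, so verify against the current API reference before relying on this:

import requests

# Minimal DataForSEO v3 Live SERP sketch. Uses HTTP Basic auth with
# your account login/password. Check fields against their current docs.
resp = requests.post(
    "https://api.dataforseo.com/v3/serp/google/organic/live/regular",
    auth=("your_login", "your_password"),
    json=[{
        "keyword": "mobile proxy provider",
        "location_code": 2840,   # United States
        "language_code": "en",
        "depth": 10,             # first page only -- deeper pages cost extra
    }],
)

for item in resp.json()["tasks"][0]["result"][0]["items"]:
    print(item.get("rank_absolute"), item.get("url"))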
SerpApi
SerpApi has what’s widely considered the best developer experience of anything on this list. Clean documentation, well-structured responses, broad search engine coverage. It’s the one you see recommended in every “how to build an SEO tool” tutorial, and there’s a reason for that. Plans start at $25 a month for 1,000 searches on the Starter tier, then $75 for 5,000 on Developer, $150 for 15,000 on Production, and $275 for 30,000 on Big Data. Only successful searches count against your limit.
The problem is the cliff. After 30,000 monthly searches, the next option is enterprise, which starts at $3,750 per month and includes 100,000 searches. There’s nothing in between. If your keyword list is growing and you’re anywhere near that 30K ceiling, you need to plan for it, because that jump from $275 to $3,750 is the kind of surprise that makes finance people schedule meetings.
ScrapingBee
ScrapingBee is a general scraping API, not a dedicated rank tracker. You point it at a URL, it handles proxy rotation and CAPTCHA solving and headless rendering, and you get back the raw HTML. The ranking data extraction is on you.
Some developers prefer it that way. If you want to pull data that structured SERP APIs don’t include, or if you’re already using ScrapingBee for other scraping and want to consolidate, it makes sense. But if all you want is “give me the ranking for this keyword,” DataForSEO or SerpApi will save you the parsing work.
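For comparison, here’s roughly what that flow looks like. Note that you get HTML back, not positions. Parameter names are from ScrapingBee’s docs; double-check the custom_google flag and its credit cost against their current reference:

import requests
from bs4 import BeautifulSoup

# ScrapingBee sketch: they fetch and unblock, you parse.
resp = requests.get(
    "https://app.scrapingbee.com/api/v1/",
    params={
        "api_key": "YOUR_SCRAPINGBEE_KEY",
        "url": "https://www.google.com/search?q=mobile+proxy+provider",
        "custom_google": "true",  # required for Google targets, costs more credits
    },
)

soup = BeautifulSoup(resp.text, "lxml")  # the ranking extraction is on you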
Keyword.com
Keyword.com has a few different products, and the naming can get confusing. Their API-only access plan starts at $46 a month and lets you pull SEO data programmatically into your own tools. Separately, they have platform plans starting at $6 a month for basic Google SERP tracking, scaling up with more keywords and AI visibility features. White-label dashboards and branded reports are available as add-ons across the platform plans, not as part of the API-only package.
If you’re a developer who just wants the data pipe, the $46 API plan is what you’re looking at. If you’re an agency that wants the full dashboard experience with client-facing reports, you’ll be on one of the platform plans with the white-label add-on, which is a different product and a different price.
AccuRanker
Fastest rank tracker on the market, or so they claim, and most people who’ve tested it seem to agree. AccuRanker does real-time ranking updates and tracks SERP features alongside regular positions. Their Professional tier starts at $224 a month for 2,000 keywords, and the Expert tier runs $764 a month for 10,000 keywords and up. Beyond 25,000 keywords you’re into custom enterprise pricing.
It’s one of the pricier options on this list, but the speed and accuracy tend to justify it for teams that need fresh data throughout the day rather than a once-daily snapshot. If daily checks are good enough for your workflow, you can get the same data for less elsewhere.
SE Ranking
SE Ranking is a full SEO platform with rank tracking, keyword research, and site audits baked in. Their Core plan starts at $103.20 a month and includes 2,000 tracked keywords, but here’s the thing: API access is not included on any of the standard plans. If you want programmatic access to your rank data, you’re looking at either a $149 per month add-on to your existing plan, or a standalone API subscription starting at $179 a month.
So while SE Ranking works fine as a rank tracking platform you log into and use through the dashboard, it’s not the budget API option it might look like at first glance. If a keyword ranking API is the whole point, the dedicated providers are a better deal.
When the APIs Stop Making Sense
At small volumes, a ranking API is the whole answer. DataForSEO at $0.0006 per first-page check means 500 keywords tracked daily costs you about $9 a month. Building your own scraper to save $9 a month is a terrible use of anyone’s time.
The math changes when volume goes up and when you start caring about deeper position data.
If you’re checking first-page positions only (top 10 results), DataForSEO’s Standard tier keeps you at $0.0006 per keyword. Track 5,000 keywords daily and that’s about $90 a month. Track 50,000 daily and you’re at $900. Still manageable for a serious operation.
But if you need to know your position beyond page one, say checking the top 100 results to find pages sitting at positions 15 through 80, the per-request cost jumps to about $0.00465. At that rate, 5,000 keywords daily costs roughly $700 a month, and 50,000 daily climbs to roughly $7,000. SerpApi doesn’t even have a standard plan that covers 50,000 daily keywords; you’d be deep into enterprise pricing.
That’s the range where building your own rank tracker with rotating proxies can start making financial sense. Let’s do the actual math instead of hand-waving about it.
A Google SERP page runs about 100 to 200KB of HTML when you’re not loading images and stylesheets. Call it 150KB on average. At 50,000 keywords daily, that’s 7.5GB per day, or roughly 225GB per month. On Proxidize residential proxies at $1/GB, that’s $225 a month. On mobile proxies at $2/GB, it’s $450.
Compare that to the API costs at the same volume. DataForSEO checking top 10 only: $900. DataForSEO checking top 100: roughly $7,000. SerpApi: custom enterprise pricing, likely well north of $3,750.
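If you want to sanity-check those numbers against your own volumes, the arithmetic is simple enough to script. The assumptions here mirror the ones above; swap in your own:

# Back-of-napkin cost comparison -- adjust the assumptions to your volumes.
KEYWORDS_PER_DAY = 50_000
AVG_PAGE_KB = 150          # typical SERP HTML, no images or CSS
RESIDENTIAL_PER_GB = 1.00  # Proxidize residential
API_TOP10 = 0.0006         # DataForSEO Standard, first page
API_TOP100 = 0.00465       # DataForSEO Standard, 10 pages deep

monthly_gb = KEYWORDS_PER_DAY * AVG_PAGE_KB * 30 / 1_000_000
print(f"Proxy bandwidth: {monthly_gb:.0f}GB -> ${monthly_gb * RESIDENTIAL_PER_GB:,.0f}/month")
print(f"API, top 10 only: ${KEYWORDS_PER_DAY * 30 * API_TOP10:,.0f}/month")
print(f"API, top 100:     ${KEYWORDS_PER_DAY * 30 * API_TOP100:,.0f}/month")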
The proxy bandwidth cost is dramatically lower. But it’s not the whole picture, and this is where a lot of “build your own scraper” articles get dishonest. The comparison isn’t as clean as “proxy plan vs. API bill.” You also need to factor in:
- Engineering time. Someone has to build this, and someone has to maintain it. Google changes their SERP HTML, proxy pools go down, CAPTCHAs evolve. This isn’t a weekend project that runs itself forever.
- Failure and retry rates. Not every request succeeds. Blocked requests, CAPTCHAs, and timeouts mean you’ll burn through more bandwidth than the simple math suggests. A realistic buffer is 15 to 25% on top of the baseline, so closer to $260-$280 on residential rather than a clean $225.
- At low volumes, the savings don’t justify the work. 5,000 keywords daily checking top 10 costs $90 on DataForSEO. The same volume on residential proxies costs maybe $25 in bandwidth. You’re not building and maintaining a custom scraping pipeline to save $65 a month. The crossover point where DIY starts genuinely winning is somewhere in the tens of thousands of keywords, especially when you need deep position data that pushes the API per-request cost from fractions of a cent into the multi-cent range.
There are also reasons to build your own that aren’t about cost at all:
You want to collect data the APIs don’t return. Most keyword ranking APIs give you organic positions, maybe SERP features. Your own scraper can grab anything in the HTML. People Also Ask questions, ad copy, local pack results, schema markup, whatever you need.
You don’t want your keyword list on someone else’s servers. Every keyword you send to a rank tracker API sits in their database. If your keyword strategy is competitive intelligence you’d rather not share, that’s worth thinking about.
You’re already running proxy infrastructure for other things. If you’ve got a proxy plan handling product scraping or social media monitoring and the bandwidth is there, bolting rank tracking onto it costs almost nothing extra.
Building Your Own Rank Tracker with Proxies
Alright, the DIY part. Python, because that’s what everyone uses for this. Proxidize proxies, because you need rotating IPs to keep Google from blocking you. Let’s get into it.
Why Proxies Are Non-Negotiable at Scale
Google has made it very clear, both in their terms of service and in how aggressively their systems enforce it, that they do not want automated SERP scraping happening. You can get away with a handful of searches from your own IP, sure. But anything beyond casual volumes and you’ll hit CAPTCHAs fast. A few more requests and that IP is blocked. Keep pushing and the block sticks.
Rotating proxies fix this. Each request goes out from a different IP, so to Google it looks like a bunch of unrelated people searching for things.
Residential proxies work for most SERP scraping and are the more cost-effective option at $1/GB. For pure rank tracking volume, residential is usually the right call.
Mobile proxies have higher success rates at $2/GB (or $59/proxy for a dedicated mobile proxy), because carriers use CGNAT to route real mobile users through the same IP pools. Google is much more cautious about blocking those IPs since it would risk cutting off real people on their phones. Mobile IPs aren’t invincible, you’ll still see occasional CAPTCHAs, but the pass rate is significantly better than residential or datacenter alternatives. Save mobile for when residential success rates aren’t high enough on your specific targets.
We’ve got a deeper breakdown of how all of this works in our guide on the best proxies for web scraping if you want the full picture.
Setting Up
Python 3.10 or higher (3.8 and 3.9 are end-of-life as of late 2025). Terminal open. Install what you need:
pip install requests beautifulsoup4 pandas lxml

requests makes HTTP calls through the proxy. beautifulsoup4 parses Google’s HTML. pandas organizes the output and exports it. lxml is the parser backend BeautifulSoup uses here.
A note on requests specifically: Google uses TLS fingerprinting (JA3/JA4) to identify what client is making the request, and Python’s requests library has a TLS signature that doesn’t look like any real browser. With good proxy rotation, you can still get results, especially at moderate volumes. But if you’re seeing high failure rates, consider switching to curl_cffi, which can impersonate a real browser’s TLS handshake. The swap is straightforward since curl_cffi has a requests-like API. For this tutorial we’re using standard requests to keep things simple, and because this is the same approach used in our existing Google SERP scraping guide.
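If you do end up making that swap, it’s close to a drop-in change. A sketch, assuming curl_cffi’s requests-compatible interface (check its docs for the browser targets it currently supports):

from curl_cffi import requests as cf_requests

# Same call shape as requests.get, but the TLS handshake mimics a real
# Chrome build. Supported impersonation targets vary by curl_cffi version.
resp = cf_requests.get(
    "https://www.google.com/search",
    params={"q": "mobile proxy provider", "num": 100},
    proxies={"http": "http://USER:PASS@pg.proxi.es:20000",
             "https": "http://USER:PASS@pg.proxi.es:20000"},
    impersonate="chrome",
    timeout=30,
)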
You could also use Selenium or Playwright for full browser rendering, but requests is faster, lighter on bandwidth (which matters when you’re paying per GB for proxy traffic), and handles standard SERPs fine.
The Script
Takes a list of keywords and a target domain, scrapes Google for each one through a Proxidize proxy, and records where (if at all) the domain shows up.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse, parse_qs
import pandas as pd
import time
import random
from datetime import datetime

# Proxidize gateway credentials -- replace with your own
PROXY_USER = "customer-YOUR_USERNAME"
PROXY_PASS = "YOUR_PASSWORD"
PROXY_HOST = "pg.proxi.es"
PROXY_PORT = "20000"

# Rotating a realistic User-Agent per request makes traffic look less uniform
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/148.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 15_4) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/26.4 Safari/605.1.15",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:150.0) Gecko/20100101 Firefox/150.0",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/148.0.0.0 Safari/537.36",
]

def create_session():
    # All requests route through the Proxidize gateway, which rotates IPs
    session = requests.Session()
    proxy_url = f"http://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}:{PROXY_PORT}"
    session.proxies = {"http": proxy_url, "https": proxy_url}
    return session

def domain_matches(url, target_domain):
    # Match the exact domain or any subdomain of it
    try:
        hostname = urlparse(url).hostname
        if not hostname:
            return False
        hostname = hostname.lower()
        target = target_domain.lower()
        return hostname == target or hostname.endswith(f".{target}")
    except Exception:
        return False

def get_google_rank(session, keyword, target_domain, location="us", num_results=100):
    session.headers.update({
        "User-Agent": random.choice(USER_AGENTS),
        "Accept-Language": "en-US,en;q=0.9",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    })
    try:
        resp = session.get(
            "https://www.google.com/search",
            params={"q": keyword, "num": num_results, "hl": "en", "gl": location},
            timeout=30,
        )
        resp.raise_for_status()
    except requests.RequestException as e:
        print(f"  Request failed: {e}")
        return {
            "keyword": keyword,
            "position": None,
            "url": "",
            "title": "",
            "date": datetime.now().strftime("%Y-%m-%d"),
            "status": "error",
        }
    # Google redirects blocked requests to /sorry/ or serves a CAPTCHA page
    if "/sorry/" in resp.url or "unusual traffic" in resp.text:
        print("  CAPTCHA triggered, skipping")
        return {
            "keyword": keyword,
            "position": None,
            "url": "",
            "title": "",
            "date": datetime.now().strftime("%Y-%m-%d"),
            "status": "captcha",
        }
    soup = BeautifulSoup(resp.text, "lxml")
    # Walk the organic result containers in order; position is the 1-based index
    for pos, result in enumerate(soup.select("div.g"), 1):
        link = result.select_one("a[href]")
        if not link:
            continue
        href = link["href"]
        # Google sometimes wraps result URLs in a /url?q= redirect
        if href.startswith("/url?"):
            href = parse_qs(urlparse(href).query).get("q", [href])[0]
        if domain_matches(href, target_domain):
            title = result.select_one("h3")
            return {
                "keyword": keyword,
                "position": pos,
                "url": href,
                "title": title.text.strip() if title else "",
                "date": datetime.now().strftime("%Y-%m-%d"),
                "status": "found",
            }
    return {
        "keyword": keyword,
        "position": None,
        "url": "",
        "title": "",
        "date": datetime.now().strftime("%Y-%m-%d"),
        "status": "not_found",
    }

def track_keywords(keywords, target_domain, location="us"):
    session = create_session()
    results = []
    for i, kw in enumerate(keywords):
        print(f"[{i + 1}/{len(keywords)}] {kw}")
        result = get_google_rank(session, kw, target_domain, location)
        results.append(result)
        # Random delay between requests so the traffic doesn't look robotic
        if i < len(keywords) - 1:
            time.sleep(random.uniform(3, 8))
    return pd.DataFrame(results)

if __name__ == "__main__":
    keywords = [
        "mobile proxy provider",
        "residential proxy service",
        "rotating mobile proxies",
        "buy mobile proxies",
        "4g proxy provider",
    ]
    target_domain = "proxidize.com"
    print(f"Tracking {len(keywords)} keywords for {target_domain}\n")
    df = track_keywords(keywords, target_domain)
    filename = f"rankings_{datetime.now().strftime('%Y%m%d')}.csv"
    df.to_csv(filename, index=False)
    print(f"\nSaved to {filename}")
    print(df.to_string(index=False))

Swap out PROXY_USER and PROXY_PASS with your actual Proxidize credentials. Keywords and target domain are at the bottom. It pulls the top 100 results for each keyword and scans through them looking for your domain. If it doesn’t find it, position gets recorded as None, which is fine. That’s useful data too. Knowing you’re not in the top 100 for a keyword tells you just as much as knowing you’re at position 12.
A Few Caveats
The script uses requests instead of Selenium, so it’s making plain HTTP requests rather than opening a whole browser. Faster, uses way less bandwidth, and works for standard Google SERPs. The downside is it won’t automatically handle CAPTCHAs. If Google serves one instead of search results, that keyword just fails.
With rotating proxies, CAPTCHAs shouldn’t come up often at moderate volumes. If they start appearing regularly, there are three likely causes: your delays are too tight, your IP pool is getting flagged, or Google’s TLS fingerprinting is catching the requests library signature. Swapping to curl_cffi with browser impersonation is the easiest fix for the TLS issue. If even that doesn’t help, step up to a CAPTCHA solver or switch to mobile proxies if you haven’t already.
The 3 to 8 second random wait between requests makes the traffic look less robotic. Mobile proxies buy you more leeway here because of the CGNAT trust score, but even then, going under 2 seconds per request is pushing it.
One more thing: the script asks for 100 results in one shot via num=100. Google generally honors this. If you only care about the top 10, change that number. Less bandwidth, faster response, smaller HTML to parse.
The CSS selector div.g is what the script targets for organic result containers. Google changes its SERP markup from time to time (div.tF2Cxc is another container class you’ll see in the wild), so if the script starts returning empty results despite successful HTTP responses, an outdated selector is the first thing to check.
Tracking Over Time
Running this once gives you a snapshot. A rank tracker needs to give you the trend. Schedule the script to run daily with cron (Linux/macOS) or Task Scheduler (Windows), and either append results to a growing CSV or write them to a database.
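On Linux or macOS that’s one crontab line. Paths here are placeholders; point them at your actual interpreter and script:

# Run the tracker every day at 6:00 AM
0 6 * * * /home/you/venv/bin/python /home/you/rank_tracker/tracker.py >> /home/you/tracker.log 2>&1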
For CSV:
import os
df.to_csv("rankings_history.csv", mode="a", header=not os.path.exists("rankings_history.csv"), index=False)

For anything beyond a personal project, use a database. SQLite if it’s just you. PostgreSQL if a team needs access. The schema doesn’t need to be complicated: keyword, position, URL, date. Add whatever else matters to your workflow.
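A minimal SQLite version of the same idea, assuming you keep the list of result dicts around (track_keywords builds one internally before converting to a DataFrame). Table and column names are just suggestions:

import sqlite3

# One table, one row per check. NULL position = not found in the top 100.
conn = sqlite3.connect("rankings.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS rankings (
        keyword  TEXT NOT NULL,
        position INTEGER,
        url      TEXT,
        date     TEXT NOT NULL
    )
""")
rows = [(r["keyword"], r["position"], r["url"], r["date"]) for r in results]
conn.executemany("INSERT INTO rankings VALUES (?, ?, ?, ?)", rows)
conn.commit()
conn.close()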
If You Need It Faster
Sequential processing at 5 seconds per keyword means 1,000 keywords takes about 80 minutes. Fine for overnight runs. Not fine if you need data by lunch.
Python’s concurrent.futures.ThreadPoolExecutor lets you parallelize. Start with 5 to 10 threads and see how it holds up. More than that and you risk flooding the proxy gateway with too many simultaneous requests, which kind of defeats the purpose of spacing things out in the first place. If your plan supports multiple proxy ports, spread the threads across them. Proxidize lets you set up different ports with different rotation settings, so you can keep your rank tracking traffic on one port and your other scraping jobs on another without them stepping on each other.
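A sketch of the threaded version, reusing create_session and get_google_rank from the script above, with one session per thread so proxy connections stay independent:

from concurrent.futures import ThreadPoolExecutor
import threading

thread_local = threading.local()

def get_session():
    # requests.Session isn't guaranteed thread-safe, so each thread
    # gets its own session (and its own proxy connection)
    if not hasattr(thread_local, "session"):
        thread_local.session = create_session()
    return thread_local.session

def check_one(keyword):
    time.sleep(random.uniform(0, 3))  # jitter so threads don't fire in lockstep
    return get_google_rank(get_session(), keyword, "proxidize.com")

with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(check_one, keywords))

df = pd.DataFrame(results)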
Conclusion
Two ways to track rankings, both legitimate, and the right pick depends on where you’re at. The APIs are faster to set up, easier to maintain, and make sense at low to mid volume. DataForSEO is absurdly cheap for first-page checks. SerpApi has the best developer experience by most accounts. Keyword.com’s API-only plan gives you raw data access at a low price point.
When the keyword list gets big enough that the monthly bill starts feeling like a problem, or when you need data that the APIs don’t return, that’s when you build your own. Python, rotating proxies, a script that runs overnight and dumps results into a CSV or a database. It takes more work to set up and more work to maintain, but the economics can flip in your favor past a certain scale. Just make sure you’re counting the real costs, engineering time and proxy bandwidth included, not just the proxy subscription.
Most teams end up doing both at different points. Start with a ranking API, graduate to DIY when it makes sense. There’s no rush.
Key takeaways:
- DataForSEO is the cheapest rank tracker API at $0.0006 per first-page request on their Standard tier. That base rate covers 10 results; checking deeper positions costs more. Do the math for your specific use case.
- SerpApi has the best docs and developer experience, but the jump from $275 to $3,750 at the enterprise tier is real. There’s nothing in between. Watch your usage.
- API costs scale linearly. At high keyword counts and deep position checks, the monthly bill grows fast. That’s when DIY starts becoming worth the engineering investment.
- You can’t do large-scale Google scraping without proxies. Mobile proxies have the best success rate because CGNAT makes their IPs significantly harder to block, though not impossible. Residential proxies work for most volumes.
- Building your own tracker is not a one-time project. Google changes their SERP HTML, TLS detection gets tighter, proxy pools need monitoring. You’re trading an API fee for ongoing maintenance.
- If requests gets blocked, try curl_cffi. It impersonates real browser TLS fingerprints, which helps bypass Google’s JA3/JA4 detection. Same requests-like API, better pass rate.
Frequently Asked Questions:
What is a rank tracker API?
A rank tracker API is a service that scrapes search engine results on your behalf and returns structured data containing ranking positions. You provide keywords and a target domain, and the API handles the scraping infrastructure on its end, including proxy rotation, CAPTCHA solving, and browser fingerprinting. The response comes back as structured JSON with the ranking position, the URL that ranked, and any SERP features associated with the result.
Which rank tracker API is the cheapest?
DataForSEO on their Standard queue tier at $0.0006 per request for the first page of results (10 entries). At that rate, tracking 1,000 keywords daily costs about $18 a month. If you need results beyond the first page, additional pages of results carry a 75% surcharge on the base price, which brings the cost for 100 results to approximately $0.00465 per request.
Do I actually need proxies to scrape Google?
At any meaningful volume, yes. Google rate-limits and blocks IP addresses that send automated search requests. A small number of searches from a single IP will not raise flags, but once you are making hundreds or thousands of requests, you will encounter CAPTCHAs and IP blocks. Large-scale SERP scraping requires proxy rotation to distribute requests across different IP addresses.
How many keywords can a DIY setup handle?
It depends on your threading configuration and your proxy plan. A single-threaded setup with 5-second delays between requests can process roughly 700 keywords per hour. Running 5 to 10 parallel threads increases that to several thousand keywords per hour. At high volumes, the bottleneck is typically proxy bandwidth and the number of concurrent connections your plan supports rather than the script itself.
Is scraping Google legal?
The legal status of scraping publicly accessible search results is not settled. In the US, the Ninth Circuit ruled in hiQ Labs v. LinkedIn that scraping publicly accessible data does not violate the Computer Fraud and Abuse Act. That ruling is real precedent, but it applies only in the Ninth Circuit. The case itself ended in a 2022 settlement that required hiQ to cease all scraping, destroy their collected data, and pay $500,000 in damages on breach of contract and other grounds. The CFAA precedent stands, but the outcome demonstrates that other legal theories, including breach of contract, trespass to chattels, and state computer access laws, can still be applied against scrapers. Google’s terms of service explicitly prohibit automated scraping, which means they can block your access even if criminal liability does not apply. For any commercial scraping operation, consult a lawyer familiar with your jurisdiction.