How To Scrape Google Search Results With Python


If you decide to scrape Google search results, you might find some major advantages that can help your business grow. As the largest search engine online, Google holds boundless information. Scraping Google search results can show you how and why certain results rank where they do, and what you could do to boost your own business.

However, scraping Google’s SERP at scale presents some challenges, as Google actively works to detect and block scrapers. This article explains how to scrape Google search results, why you would want to do so, and how to ensure your scraping efforts remain undetected. Alternatively, you can skip straight to the code.


Why Scrape Google Search Results?

Deciding to scrape Google search results is driven mostly by marketing, specifically by search engine optimization (SEO). Whether you like it or not, Google and SEO go hand in hand. Almost every person on the internet uses Google to find information, and if you want people to find your business or service, you must use Google as a tool for being found. Google indexes most public web pages, so scraping Google search results gives you access to numerous data insights. You can scrape Google search results to analyze the latest market trends, build your own large language model (LLM), scrape Google Ads data, perform price-competitive intelligence, or build a rank tracking system.

In the past, the Google SERP (search engine results page) was just a list of links and nothing more. It has since evolved to include a variety of data points: paid and organic search results, ads, videos, images, popular products, related questions, related searches, featured snippets, a local pack showing nearby businesses, top stories, recipes, jobs, knowledge panels, and travel results. Access to all that information can be instrumental to your company’s success.

Google holds roughly 82% of the global search market, far ahead of competitors like Bing or Yahoo, making it the most likely place for customers to find you and for competitors to outrank you. Scraping Google search results gives you the data you need to identify trending topics and emerging customer demands before they reach the mainstream, monitor competitor activity to stay ahead of the market, discover gaps in existing offerings, assess user sentiment and brand awareness, and refine SEO and ad strategies by understanding which search terms attract the most attention.

Scraping Google’s SERP can provide useful insights into customer behavior, market direction, and the competitive landscape, giving you and your business the data needed to make decisions that maintain a competitive edge. With all that out of the way, let us explore how you can scrape Google search results.


Scraping Google Search Results with a Search Term

In this article, we will show how to scrape Google search results using Python. We will explain how to set up your environment, then follow up with tips and tricks to avoid getting blocked.

Setting Up Your Environment 

First things first, make sure you have a recent version of Python installed on your computer. As of this writing, the latest version is Python 3.13.3, which you can download from python.org. Once that’s taken care of, open your preferred IDE or scripting environment and enter this command into your terminal to create a folder that will hold your script:

mkdir google

We will use four libraries to scrape Google search results: Selenium, BeautifulSoup, and pandas for the scraping itself, plus webdriver-manager so Selenium can fetch a matching ChromeDriver automatically (the script also needs tabulate, which pandas uses for its markdown output). Install them by entering this command into your terminal:

pip install beautifulsoup4 selenium pandas webdriver-manager tabulate

You will also need ChromeDriver, because Google relies heavily on JavaScript to render its search results: ChromeDriver lets Selenium control a full browser that executes that JavaScript, making your scraping effort smoother. The webdriver-manager package installed above downloads the right ChromeDriver version for you, so no manual setup is needed. Now, create a Python file where your script will live.

Full Code to Scrape Google Search Results

The script below is written to allow you to scrape Google search results based on a query. When you run it, it will ask you to enter a search term as you would through Google.

After you enter the term, a browser window will open showing the Google SERP for that term, then close after a few seconds. You will then see the first page of results, including the position, title, link, description, and source of each result.

This data is also saved to a CSV file in the folder that contains the script. Because the script prompts for a query, you can reuse it for any term rather than constantly editing the URL.

Whether you’re conducting keyword research or competitive monitoring, this method gives you the flexibility to scrape Google search results dynamically and repeatedly. If you prefer to work with a headless browser, you can activate headless mode in Selenium for faster performance and better stealth.
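Headless mode, for reference, comes down to a few extra Chrome flags. Here is a minimal sketch: the flag list mirrors the options used in the full script below, "--headless=new" is Chrome's current headless switch, and apply_headless is a helper name of our own, not a Selenium API.

```python
# Flags to run the scraper without a visible browser window.
HEADLESS_FLAGS = [
    "--headless=new",                                # Chrome's newer headless mode
    "--window-size=1920,1080",                       # keep a desktop-sized viewport
    "--disable-blink-features=AutomationControlled", # same stealth flag as the script
]

def apply_headless(options):
    """Add headless flags to a ChromeOptions object (anything with add_argument)."""
    for flag in HEADLESS_FLAGS:
        options.add_argument(flag)
    return options

# Usage in scrape_google():
#   options = apply_headless(webdriver.ChromeOptions())
```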

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
from bs4 import BeautifulSoup
import pandas as pd
import time
from urllib.parse import quote_plus

def scrape_google(query):
    # Configure Chrome to look less like an automated browser
    options = webdriver.ChromeOptions()
    options.add_argument("--window-size=1920,1080")
    options.add_argument("--disable-blink-features=AutomationControlled")
    driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options)

    # Build the search URL; hl=en forces English-language results
    search_url = "https://www.google.com/search?q=" + quote_plus(query) + "&hl=en"
    driver.get(search_url)
    time.sleep(3)  # give the JavaScript-rendered page time to load

    soup = BeautifulSoup(driver.page_source, "html.parser")
    driver.quit()

    results = []
    # Each organic result sits in a div with the class tF2Cxc.
    # Google changes these class names periodically, so update them as needed.
    cards = soup.select("div.tF2Cxc")

    for i, card in enumerate(cards, 1):
        result = {"position": i}

        title_tag = card.select_one("h3")
        link_tag = card.select_one("a")
        snippet_tag = card.select_one("div.VwiC3b")
        source_tag = card.select_one("div.yuRUbf cite")

        result["title"] = title_tag.text.strip() if title_tag else ""
        result["link"] = link_tag["href"].strip() if link_tag else ""
        result["description"] = snippet_tag.text.strip() if snippet_tag else ""
        result["source"] = source_tag.text.strip() if source_tag else ""

        results.append(result)

    if not results:
        print("No results parsed - Google may have served a CAPTCHA or changed its markup.")
        return

    df = pd.DataFrame(results)
    df.to_csv("google_results.csv", index=False, encoding="utf-8-sig")
    print(df.to_markdown(index=False))

if __name__ == "__main__":
    query = input("Enter your search query: ")
    scrape_google(query)

This search scraper saves your results in CSV format, which is the preferred format for quick uploads to Google Sheets or for bulk data imports.


Scrape Google Search Results Without Getting Blocked

While scraping Google search results might seem easy now that you have the script, Google is not kind to people scraping its website. Aggressively scraping any website can get your IP banned: sites that notice heavy scraping activity worry about being overloaded, which would cause issues for their real visitors. Fortunately, there are workarounds that let you scrape Google, or any other website, without risking an IP ban.

Rotating User-Agents

Rotating user agents when scraping Google search results is necessary because Google actively monitors request patterns to detect bot activity. The user agent is the string in your request header that identifies your browser, operating system, and device. If all your requests come from the same user agent, Google will easily flag you as a scraper, especially if you send too many requests at once.

By rotating user agents, your traffic appears to come from different real users on different devices and browsers. Some scrapers also integrate Chrome extensions like the Detailed SEO Extension to extract search result titles, meta tags, and knowledge graph content directly from the current SERP. Blending into normal traffic this way reduces the risk of being served a temporary block or a CAPTCHA (more on that later).
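As a sketch of what rotation looks like in practice, the helper below picks a user agent at random and formats it as the command-line flag Chrome accepts. The user-agent strings here are illustrative placeholders; substitute current, real ones for your target browsers.

```python
import random

# Hypothetical pool of desktop user-agent strings; keep these up to date.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
]

def random_user_agent_argument():
    """Return a Chrome CLI flag carrying a randomly chosen user agent."""
    return "--user-agent=" + random.choice(USER_AGENTS)

# In scrape_google(), before creating the driver:
#   options.add_argument(random_user_agent_argument())
```

Calling the helper once per scraping run means each browser session presents a different identity.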

Using Proxies

Another way to keep your activities undetected when you scrape Google search results is to use mobile proxies. While rotating your user agent helps avoid detection, a proxy goes further by giving you a different IP address than your own, allowing you to safely scrape Google search results without worrying about getting your IP banned and losing access to the world’s most used search engine.

If you implement rotating mobile proxies, your chances of detection drop significantly: your connection is constantly rerouted through different IP addresses, so your scraping requests appear to come from many different users rather than just you. Using residential or mobile proxies, especially from a proxy provider or proxy service with IP rotation, also lets you bypass regional restrictions and emulate requests from any country domain.
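In Selenium, an unauthenticated proxy can be wired in through Chrome's --proxy-server flag. Here is a minimal sketch; the endpoint shown is a placeholder, not a real gateway, and note that username/password proxies need a different approach (such as a browser extension or the selenium-wire package).

```python
def proxy_argument(host, port, scheme="http"):
    """Build Chrome's --proxy-server flag for a given proxy endpoint."""
    return f"--proxy-server={scheme}://{host}:{port}"

# In scrape_google(), before creating the driver (placeholder endpoint):
#   options.add_argument(proxy_argument("gw.example-proxy.com", 8000))
# Rotation then means picking a fresh endpoint (or session) per run.
```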

Handling CAPTCHAs

There are instances where your browser will open and you will be greeted with a CAPTCHA to solve. While you could solve it manually and continue scraping, it is usually a sign that you are sending too many requests at once. Rotating user agents or using proxies typically solves this; if you still face CAPTCHAs, you can implement a CAPTCHA solver within your script to help bypass the issue.
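Even without a solver, it helps to detect the block page so your script can back off instead of parsing an empty SERP. Below is a rough heuristic, under the assumption that Google's interstitial pages still contain marker strings like these; treat the list as something to tune, not a guarantee.

```python
import time

# Strings Google's CAPTCHA/rate-limit interstitials commonly contain (heuristic).
BLOCK_MARKERS = ("unusual traffic", "recaptcha", "/sorry/")

def looks_blocked(page_source):
    """Heuristic check for a CAPTCHA or rate-limit interstitial."""
    lowered = page_source.lower()
    return any(marker in lowered for marker in BLOCK_MARKERS)

# In scrape_google(), after driver.get(search_url):
#   if looks_blocked(driver.page_source):
#       time.sleep(60)  # back off before retrying, or rotate proxy/user agent
```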

Conclusion 

Scraping Google search results with Python can be a great help to your business, providing valuable insights you cannot get anywhere else. Whether you’re building a rank tracking system or monitoring the SERP for ads and content shifts, being able to scrape Google search results programmatically gives you speed and scale. You could manually check each keyword and scroll through Google to study the results, but time is often of the essence, and a scraper saves you valuable minutes so you can start ranking for a keyword before the trend changes and the window of opportunity closes.

Key Takeaways:

  1. Gain Competitive Insight: Scraping Google’s SERP not only helps uncover trending topics and emerging customer demands but also helps you refine your SEO and advertising strategy.
  2. Hands-On Python Implementation: Setting up your environment with Python, Selenium, BeautifulSoup, and pandas, along with the installation of ChromeDriver, offers a practical and accessible pathway to dynamically extract essential search data.
  3. Structured Data Collection: The provided code efficiently captures key elements—such as the position, title, link, description, and source—ensuring that the extracted data is well-organized and easily exportable in CSV format for further analysis.
  4. Mitigate Scraping Challenges: By incorporating rotating user agents and mobile proxies, you can keep your scraping efforts under the radar, effectively minimizing the risks associated with detection and IP bans.
  5. Flexible and Scalable: Whether you’re manually testing search queries or integrating headless browsing for faster performance, the approach is scalable to support various SEO research and competitive monitoring needs, keeping you ahead in the digital landscape.

By using the scraper provided in this article, you can access the information on the first SERP of Google, including the ranking, title, description, and URL of each result. That data can feed a marketing strategy or business plan designed to put you at the top of the SERP and attract more customers to your services. Whether you’re writing a quick guide on SEO, setting up job listings, running a competitive analysis, or using a Google Custom Search API or a SERP API for informational purposes, scraping Google can help you maintain your online presence.


Frequently Asked Questions

How can I scrape Google search results without getting blocked?

To avoid detection, use IP rotation via mobile proxies, rotate user agents, operate in headless mode, and slow down your requests.

What’s the difference between Google SERP API and scraping?

A Google SERP API is a service that fetches structured SERP data directly via an API endpoint, with the provider handling blocks and parsing for you. Scraping involves rendering and parsing the actual HTML of the search results yourself, which gives you more control but comes with the risk of being blocked.

Can I scrape Google Maps or job listings with the same method?

Not directly. Google Maps and job listings require different selectors and often need specific search parameters. For Maps, you may need to work with specific tools.

Is scraping allowed under Google’s terms of service?

Scraping Google for anything beyond informational purposes may violate their terms of service. It’s important to understand the liability with respect to using scraped data.

About the author

Zeid is a content writer with over a decade of writing experience. He wrote for publications in Canada and the United States before starting to write informational articles for Proxidize. He has a broad interest in technology, with a specific interest in proxies.
