Google search results, also known as SERPs (search engine results pages), can be a valuable source of information for businesses, researchers, and individuals. However, manually collecting this data is time-consuming and tedious. Fortunately, there is a more straightforward solution: using a SERP API like SerpApi.
SERP APIs like SerpApi let you scrape Google search results without writing or maintaining scraping scripts. Instead, you make API calls to the SerpApi server to retrieve the search results you need, which can save you a lot of time and hassle.

Before you start scraping Google SERPs, it's important to understand the structure of the search results page. A Google SERP consists of many components, including organic results, local results, ad results, the knowledge graph, direct answer boxes, image results, news results, shopping results, video results, and more. Understanding this structure will allow you to extract the relevant information in an organized and efficient manner. Visit the SerpApi documentation page to explore more than 50 Google SERP components and see what the JSON response looks like.
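As a rough illustration of that structure, here is an abridged sketch of a response expressed as a Ruby hash. The top-level keys follow the component names in the SerpApi documentation, but the values below are invented for illustration; real responses contain many more fields.

```ruby
# Abridged, illustrative sketch of a SerpApi Google response.
# Top-level keys follow the SerpApi docs; the values are made up.
response = {
  "search_parameters" => { "engine" => "google", "q" => "coffee" },
  "organic_results" => [
    { "position" => 1, "title" => "Coffee - Wikipedia",
      "link" => "https://en.wikipedia.org/wiki/Coffee" }
  ],
  "related_questions" => [
    { "question" => "Is coffee good for you?",
      "title" => "Health effects of coffee",
      "link" => "https://example.com/coffee-health" }
  ],
  "knowledge_graph" => { "title" => "Coffee", "type" => "Drink" }
}

# Each SERP component lives under its own top-level key:
puts response.keys.sort
```

Knowing which key a component lives under is all you need to pull it out of the parsed response later.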
Setting up a SerpApi account
The best way to get started with scraping Google SERPs is to create a SerpApi account. You can sign up for a free account on the SerpApi website, allowing you to make up to 100 API requests per month. If you need more, paid plans are also available with higher limits.
Hooray! With your newly created account, you can visit the SerpApi playground to try some searches and scrape your first result.

Once you've created an account, the next step is to obtain the API key. You can use this key in your code to access the SerpApi API and retrieve the desired information.
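Rather than hardcoding the key in your source files, a common pattern is to read it from an environment variable. The sketch below assumes a variable named SERPAPI_API_KEY; that name is our own convention, not one mandated by SerpApi.

```ruby
# Read the API key from an environment variable so it never lands in
# version control. The name SERPAPI_API_KEY is a convention chosen here.
api_key = ENV["SERPAPI_API_KEY"] || "secret_api_key" # fallback for demo only
puts "Using an API key of length #{api_key.length}"
```

You would then pass api_key to the client instead of a string literal.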
Making your first Google SERP scrape with SerpApi (Ruby)
To retrieve the Google SERP for a given search query using Ruby, you'll first need to install the SerpApi Ruby client library. You can do this by adding the following line to your Gemfile:
gem 'google_search_results'
And then run the following command in your terminal:
bundle install
Next, you'll need to require the library and specify your API key:
require 'google_search_results'
search = GoogleSearch.new(q: "coffee", serp_api_key: "secret_api_key")
To retrieve the Google SERP for a given search query, you can use the following code:
results = search.get_hash
The results variable now contains a hash with all the information from the Google SERP. Now that we have the response, we can extract the search results. For example, we can store the "related_questions" in a CSV file or use it to populate a database.
require 'csv'
CSV.open("related_questions.csv", "w") do |csv|
  csv << ["question", "snippet", "title", "link"]
  results[:related_questions].each do |question|
    csv << [question[:question], question[:snippet], question[:title], question[:link]]
  end
end
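The same pattern works for any other SERP component. As a self-contained sketch, the snippet below writes organic results to a CSV; the sample hash stands in for a real search.get_hash response (its field names follow the SerpApi docs, but the values are invented for illustration).

```ruby
require 'csv'

# Sample hash standing in for a real SerpApi response. Field names
# follow the SerpApi docs; the values are invented for illustration.
results = {
  organic_results: [
    { position: 1, title: "Coffee - Wikipedia",
      link: "https://en.wikipedia.org/wiki/Coffee",
      snippet: "Coffee is a brewed drink prepared from roasted coffee beans." },
    { position: 2, title: "The History of Coffee",
      link: "https://example.com/history-of-coffee",
      snippet: "How coffee spread around the world." }
  ]
}

# Write one CSV row per organic result, with a header row first.
CSV.open("organic_results.csv", "w") do |csv|
  csv << ["position", "title", "link", "snippet"]
  results[:organic_results].each do |result|
    csv << [result[:position], result[:title], result[:link], result[:snippet]]
  end
end
```

In a real script you would replace the sample hash with the value returned by search.get_hash and keep the extraction loop unchanged.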
We also have a guide to scraping Google results with Node.js. Feel free to check it out.
It's important to remember that scraping is a delicate and often controversial topic, both ethically and legally. SerpApi offers a "Legal US Shield" and is trusted by many customers, including The New York Times, IBM, Shopify, KPMG, Airbnb, Harvard University, BrightLocal, and more.
Additionally, to avoid being detected and blocked by Google, SerpApi uses proxies and the latest technologies to mimic human behavior. This is enabled by default whenever you use SerpApi to scrape data.
If you have any questions, please feel free to reach out to me.