When retrieving search engine results via SerpApi at scale, individual payloads may be small, but across hundreds of thousands or millions of searches, the total data transferred can add up quickly. One effective optimization is to enable gzip compression for API responses. This significantly reduces payload sizes and lowers overall bandwidth usage.

In this post, we’ll briefly cover why compressed responses are helpful, and how to request and handle compressed results in cURL, JavaScript, and Python. We’ll also show how to decompress the data on the client side.

Why Compress SerpApi API Responses?

SerpApi is a real-time API that scrapes search engine results for you and returns them as structured JSON, so you don’t have to manage your own scraping infrastructure.

Enabling compression on API responses reduces the size of the data transmitted over the network. This can lower bandwidth costs or help you stay within usage limits when bandwidth is a factor. Since JSON is text-based and highly repetitive, it compresses extremely well with gzip, which offers fast compression and decompression and is therefore a good fit for API network transfers.
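To get a feel for how well JSON compresses, here is a small self-contained Python sketch. The payload below is made up for illustration (it only loosely mimics the shape of search results), but the repetitive keys are exactly what makes real SerpApi JSON compress so well:

```python
import gzip
import json

# A made-up, repetitive JSON payload, loosely mimicking search results.
# Repeated keys and similar values are what gzip exploits.
payload = json.dumps({
    "organic_results": [
        {"position": i, "title": f"Result {i}", "link": f"https://example.com/{i}"}
        for i in range(200)
    ]
}).encode("utf-8")

compressed = gzip.compress(payload)

print(f"Original:   {len(payload)} bytes")
print(f"Compressed: {len(compressed)} bytes")
print(f"Ratio:      {len(payload) / len(compressed):.1f}x")
```

Running this locally shows a ratio in the same ballpark as the cURL measurements later in this post.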

If your project makes many SerpApi calls, compression helps ensure you’re not transferring unnecessary data. It significantly reduces bandwidth usage while adding minimal overhead on both the client and server.

JSON Restrictor

While response compression is the ideal option when working with full payloads, you can also reduce the size of the data returned by using SerpApi’s JSON Restrictor feature. This lets you specify exactly which attributes you want to receive, and SerpApi will only scrape and return those fields. Doing so improves response times and reduces overall payload size.

You can read more about the JSON Restrictor here:

SerpApi: JSON Restrictor
Restrict the SERP JSON to the fields you need.
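As a rough illustration, restricting fields is just another query parameter on the request URL. Note that the json_restrictor parameter name and the field syntax below are assumptions based on SerpApi's JSON Restrictor documentation; verify the exact syntax there before relying on it:

```python
from urllib.parse import urlencode

# Build a SerpApi URL that only asks for the fields we need.
# The "json_restrictor" parameter and its field syntax are taken from
# SerpApi's JSON Restrictor docs; double-check them there.
params = {
    "engine": "google",
    "q": "coffee",
    "api_key": "YOUR_API_KEY",
    "json_restrictor": "organic_results.{title,link}",
}
url = "https://serpapi.com/search.json?" + urlencode(params)
print(url)
```

Compression and the JSON Restrictor are complementary: the restrictor shrinks what the server generates, and gzip shrinks what travels over the wire.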

How to Request Compressed Responses (and Decompress Them)

HTTP compression is a client-server agreement: the client tells the server it can handle compressed data, and the server replies with the data compressed. This happens through HTTP headers:

  • The client adds an Accept-Encoding: gzip header in the request, indicating it supports gzip compression.
  • If the server (SerpApi) supports it, it will compress the JSON response using gzip and include a Content-Encoding: gzip header to indicate that the data is compressed.
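The agreement above can be sketched in a few lines of Python. The serve_json and read_json functions below are illustrative stand-ins for the two sides of the exchange, not SerpApi code:

```python
import gzip
import json

def serve_json(body: dict, request_headers: dict) -> tuple[bytes, dict]:
    """Stand-in for the server: compress only if the client asked for gzip."""
    raw = json.dumps(body).encode("utf-8")
    if "gzip" in request_headers.get("Accept-Encoding", ""):
        return gzip.compress(raw), {"Content-Encoding": "gzip"}
    return raw, {}

def read_json(body: bytes, response_headers: dict) -> dict:
    """Client side: decompress only if the server said the body is gzipped."""
    if response_headers.get("Content-Encoding") == "gzip":
        body = gzip.decompress(body)
    return json.loads(body.decode("utf-8"))

body, headers = serve_json({"q": "coffee"}, {"Accept-Encoding": "gzip"})
print(headers)                   # {'Content-Encoding': 'gzip'}
print(read_json(body, headers))  # {'q': 'coffee'}
```

If the client omits Accept-Encoding, the "server" returns plain bytes and no Content-Encoding header, and the client parses them directly; neither side ever has to guess.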

SerpApi’s servers do honor gzip compression when requested. However, the current official JavaScript and Python client libraries don’t expose any built-in option to enable it. This means that when using the official SerpApi packages, responses will typically be returned uncompressed by default. You can work around this by using standard HTTP tools, such as cURL or your language’s built-in HTTP libraries, to request compressed responses and decompress them yourself.

cURL Example (Command Line)

The simplest way to test gzip compression is with cURL. cURL has a convenient --compressed option that will do two things: send an Accept-Encoding: gzip header and automatically decompress the response for you. For example:

curl --compressed "https://serpapi.com/search.json?engine=google&q=coffee&api_key=YOUR_API_KEY"

In this command:

  • --compressed tells cURL to include Accept-Encoding: gzip, deflate in the request, and cURL will handle decompressing the response.
  • The URL is a SerpApi query (Google Search for “coffee”), including your api_key (replace YOUR_API_KEY with your actual key).

Without --compressed:

➜  Developer curl \
  -w "\nUncompressed download size: %{size_download} bytes\n" \
  -o /dev/null \
  "https://serpapi.com/search.json?engine=google&q=coffee&api_key=API_KEY"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  210k  100  210k    0     0  1724k      0 --:--:-- --:--:-- --:--:-- 1726k

Uncompressed download size: 215728 bytes

With --compressed:

➜  Developer curl --compressed \
  -w "\nCompressed download size: %{size_download} bytes\n" \
  -o /dev/null \
  "https://serpapi.com/search.json?engine=google&q=coffee&api_key=API_KEY"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 45529    0 45529    0     0   479k      0 --:--:-- --:--:-- --:--:--  483k

Compressed download size: 45529 bytes

As you can see, we went from a payload of ~210.7 KB down to ~44.5 KB, roughly a 4.7x reduction.

cURL will output the JSON result directly to your terminal as if it were uncompressed (since it automatically unzips it). The transfer, however, was compressed behind the scenes; if you use the -v (verbose) flag, you can see the Content-Encoding: gzip in the response headers, indicating compression was used.

➜  Developer curl --compressed -v \
  -o /dev/null \
  "https://serpapi.com/search.json?engine=google&q=coffee&api_key=API_KEY"
...
> Host: serpapi.com
> User-Agent: curl/8.7.1
> Accept: */*
> Accept-Encoding: deflate, gzip
>
* Request completely sent off
< HTTP/2 200
< content-type: application/json; charset=utf-8
< content-encoding: gzip
< x-frame-options: SAMEORIGIN
< x-xss-protection: 1; mode=block
< x-content-type-options: nosniff
< x-download-options: noopen
< x-permitted-cross-domain-policies: none
< referrer-policy: strict-origin-when-cross-origin
< x-robots-tag: noindex, nofollow
< serpapi-search-id: 6915208fbf04d5f8cab7cdd0
< cache-control: max-age=3600, public
< etag: W/"937c88f1192291d10fa8ce4663bf03b5"
< x-request-id: 82d17608-eecc-45ae-a99c-61c44c321037
< x-runtime: 2.819488
< cf-cache-status: HIT
< vary: Accept-Encoding
< server: cloudflare
< cf-ray: 99da1bf63d0d4e7c-PDX
< alt-svc: h3=":443"; ma=86400

Alternatively, if you want to see the raw compressed data size or handle decompression yourself, you can do:

curl -H "Accept-Encoding: gzip" -o result.json.gz "https://serpapi.com/search.json?engine=google&q=coffee&api_key=YOUR_API_KEY"
# The response is now saved as result.json.gz in gzip format.
gunzip result.json.gz  # This will decompress the file to result.json

After running gunzip, the file result.json will contain the original JSON response. You can compare the file sizes of result.json.gz vs result.json to appreciate the difference. In summary, using cURL with compression is straightforward and demonstrates the concept: ask for gzip in the request, get a gzip-compressed response, then unzip it to use the data.

JavaScript Example (Node.js)

With Node.js, we can achieve the same using common libraries. We’ll use axios for the HTTP request and Node’s built-in zlib module to decompress. (You could also use node-fetch or the native https module; the approach is similar.)

const axios = require('axios');
const zlib = require('zlib');

const API_KEY = "YOUR_API_KEY"; // replace with your actual SerpApi key

async function fetchSerpApiCompressed(query) {
  const url = `https://serpapi.com/search.json?engine=google&q=${encodeURIComponent(query)}&api_key=${API_KEY}`;
  // Send request with Accept-Encoding: gzip
  const response = await axios.get(url, {
    headers: { "Accept-Encoding": "gzip" },
    responseType: "arraybuffer", // get raw bytes (Buffer)
    decompress: false, // disable Axios's automatic decompression
  });
  console.log("Content-Encoding header:", response.headers["content-encoding"]); // should be "gzip" if compressed

  // The data is gzip-compressed bytes; unzip it ourselves
  const outputBuffer = zlib.gunzipSync(Buffer.from(response.data));
  const jsonString = outputBuffer.toString("utf-8");
  const result = JSON.parse(jsonString);
  console.log("Total result count:", result.search_information.total_results);
}

fetchSerpApiCompressed("coffee");

Let’s break down what this does:

  • We set the Accept-Encoding: gzip header so SerpApi knows we want a compressed response.
  • We use responseType: 'arraybuffer' so that Axios gives us the raw binary data (instead of trying to convert it to a string or JSON automatically). The Content-Encoding from SerpApi should be gzip (you can check response.headers as shown).
  • We then use zlib.gunzipSync to unzip the response data. This function takes the compressed buffer and returns a new buffer with the decompressed data.
  • Finally, we convert the decompressed data to a string (UTF-8 text) and parse it as JSON to get the result object. At this point, the result is the same JavaScript object you would normally get if you had fetched uncompressed JSON.
💡
In Node, many HTTP libraries (Axios included) will automatically handle gzip if they see the Content-Encoding: gzip header. Axios sets decompress: true by default, so it may already return decompressed data if Accept-Encoding is honored. Here, we explicitly performed the decompression to demonstrate the process.

The key takeaway is that you must include the Accept-Encoding: gzip header in your request; without it, SerpApi will send plain JSON (most servers default to no compression unless asked). With the header, the response shrinks dramatically, and you only need a one-liner to unzip it in your code.

Python Example

For Python, we’ll use the requests library. Python’s requests can also handle gzip transparently, but we’ll show the manual steps for clarity:

import requests, gzip, json

url = "https://serpapi.com/search.json?engine=google&q=coffee&api_key=YOUR_API_KEY"
headers = { "Accept-Encoding": "gzip" }

# stream=True lets us read the raw bytes ourselves; otherwise requests
# transparently decompresses gzip before exposing response.content
response = requests.get(url, headers=headers, stream=True)

print("Content-Encoding:", response.headers.get("Content-Encoding"))  # should be "gzip" if compressed

compressed_data = response.raw.read(decode_content=False)  # bytes exactly as sent over the wire

if response.headers.get("Content-Encoding") == "gzip":
    # Decompress the gzip data
    decompressed_bytes = gzip.decompress(compressed_data)
    data = json.loads(decompressed_bytes.decode('utf-8'))
else:
    # If by chance it's not compressed, the bytes are already plain JSON
    data = json.loads(compressed_data.decode('utf-8'))

print("Total result count:", data["search_information"]["total_results"])

Explanation:

  • We send a GET request with Accept-Encoding: gzip in the headers, using stream=True so we can read the body ourselves.
  • We then check the Content-Encoding of the response: if SerpApi compressed it, it will be "gzip".
  • We read response.raw with decode_content=False to get the bytes exactly as they arrived over the wire. Plain response.content would not work here, because requests decompresses gzip automatically, and calling gzip.decompress on already-decompressed data would raise an error.
  • If the body is gzip-compressed, we use Python’s built-in gzip.decompress() to unzip it. This returns the original JSON as bytes, which we decode to a string and then load into a Python dict using json.loads.
  • If the response wasn’t compressed (just in case), the bytes are already plain JSON and we parse them directly.
  • Finally, we print something from the data (e.g., the total results count from search_information) to verify it worked.

Under the hood, requests will actually handle gzip for you: it sends Accept-Encoding: gzip, deflate by default (plus br if a Brotli library is installed) and transparently decompresses the body before exposing it via .content, .text, or .json(). That is exactly why the example reads from response.raw — in everyday code you could simply do data = response.json(). But explicitly calling gzip.decompress as shown leaves no doubt that we are handling the compressed data. Either way, the result data will be the same JSON content as usual, obtained in a more efficient way over the network.

Conclusion

Enabling gzip compression for SerpApi responses is a handy trick to decrease payload sizes. By adding a simple header to your request, you allow SerpApi to significantly reduce the JSON payload without losing any data. This leads to lower bandwidth consumption. Once the compressed data is received, you just need to decompress (unzip) it in your client code.

In practice, this optimization is usually straightforward: HTTP clients negotiate compression via Accept-Encoding and Content-Encoding headers, and modern tools/libraries often handle the heavy lifting. As a developer, it’s good to understand how it works so you can enable it when appropriate and ensure you handle the compressed data correctly. Given that compression can save a lot of bandwidth at minimal cost, it’s often worth using when dealing with large API responses or high volumes of requests.

By compressing SerpApi responses, you make your data transfer leaner, a win for both your application’s performance and its resource usage.