SerpApi is CPU-intensive: a search usually takes over 500 ms to complete, and we are constantly looking for ways to improve the API's search performance.

Apart from optimizing the code, a simple and effective approach is to upgrade the hardware. You can easily buy performance by paying more money, but for CPU-intensive servers a more powerful CPU also directly lowers response times, which means fewer concurrent workers are needed and thus fewer servers, potentially saving money overall.

In this post, we compare CPU benchmarks across cloud providers, including DigitalOcean, Vultr, and Equinix. We tested both virtual servers and bare metal servers.

Initially, we used sysbench to run CPU benchmarks, but we soon found that its CPU scores were very unreliable and did not reflect SerpApi's performance. We then switched to `7z b -mmt1`, 7-Zip's single-threaded benchmark, and verified that its results correlate linearly with SerpApi's performance.
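
If you want to reproduce the numbers below, a benchmark run can be scripted along these lines. This is a minimal sketch, not our production tooling: it assumes the usual p7zip output layout, where the `Avr:` line shows the compressing columns, a `|` separator, then the decompressing columns, with the rating (MIPS) as the last number on each side.

```python
import re
import subprocess

def run_7z_benchmark():
    """Run 7-Zip's single-threaded benchmark and return (compressing, decompressing) ratings in MIPS."""
    out = subprocess.run(
        ["7z", "b", "-mmt1"], capture_output=True, text=True, check=True
    ).stdout
    for line in out.splitlines():
        # The 'Avr:' line averages all dictionary sizes; the rating is the
        # last column of the compressing half and of the decompressing half.
        if line.startswith("Avr:"):
            left, _, right = line.partition("|")
            compressing = int(re.findall(r"\d+", left)[-1])
            decompressing = int(re.findall(r"\d+", right)[-1])
            return compressing, decompressing
    raise RuntimeError("could not find the 'Avr:' line in 7z output")

if __name__ == "__main__":
    compressing, decompressing = run_7z_benchmark()
    print(f"compressing: {compressing} MIPS, decompressing: {decompressing} MIPS")
```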

Here are the results.

| Server | `7z b -mmt1` compressing (higher is better) | `7z b -mmt1` decompressing (higher is better) | SerpApi HTML parsing (lower is better) | Price |
|---|---|---|---|---|
| DigitalOcean Basic - Premium Intel (NYC3) | 1532 (3146) | 1867 (2777) | 65588.4 (27313.8) | $0.021 / h / 2 vCPUs |
| DigitalOcean Basic - Premium Intel (NYC1) | 2690 | 2896 | | $0.021 / h / 2 vCPUs |
| DigitalOcean Basic - Premium Intel (SFO3) | 2972 | 3041 | | $0.021 / h / 2 vCPUs |
| DigitalOcean CPU-Optimized (SFO3) | 3703 | 3514 | 23464.4 | $0.063 / h / 2 vCPUs |
| DigitalOcean CPU-Optimized (NYC3) | 2633 (3533) | 2868 (3349) | 64994.0 (26170.6) | $0.063 / h / 2 vCPUs |
| Vultr Shared - Intel Performance (NJ) | 3367 | 3567 | | $0.018 / h / 2 vCPUs |
| Vultr Shared - Intel Performance (LA) | 3283 | 3544 | 24379.9 | $0.018 / h / 2 vCPUs |
| Vultr Shared - AMD Performance (NJ) | 2291 | 2437 | | $0.018 / h / 2 vCPUs |
| Vultr Shared - AMD Performance (LA) | 2651 | 2849 | | $0.018 / h / 2 vCPUs |
| Vultr Dedicated - Storage Optimized (NJ) | 3984 | 3451 | | |
| Vultr Dedicated - Storage Optimized (LA) | 4702 | 5007 | | |
| Vultr Dedicated - CPU Optimized (NY) | 3615 | 3749 | | $0.060 / h / 2 vCPUs |
| Vultr Dedicated - CPU Optimized (LA) | 4925 | 4736 | 15853.2 | $0.060 / h / 2 vCPUs |
| Vultr Dedicated - Memory Optimized (NY) | 3442 | 3300 | | |
| Vultr Dedicated - Memory Optimized (LA) | 4945 | 5027 | | |
| Vultr Bare Metal - Intel E-2286G (6 cores 12 threads, 32 GB) | 5702 | 5152 | 15178.0 | $0.275 / h |
| Vultr Bare Metal - Intel E-2288G (8 cores 16 threads, 128 GB) | 5986 | 5280 | | $0.521 / h |
| Vultr Bare Metal - Intel E-2388G (8 cores 16 threads, 128 GB) | 4566 | 3765 | | $0.521 / h |
| Vultr Bare Metal - AMD EPYC 7443P (24 cores 48 threads, 256 GB) | 5738 | 5055 | 14520.7 | $1.079 / h |
| Equinix bare metal - m3.small.x86 (8 cores 16 threads, 64 GB) | 6460 | 5982 | 11851.6 | $1.05 / h |
| Equinix bare metal - c2.medium.x86 (24 cores 48 threads, 64 GB) | 3473 | 2889 | 27199.2 | $1.35 / h |
| Equinix bare metal - c3.medium.x86 (24 cores 48 threads, 64 GB) | 4117 | 3693 | 20330.1 | $1.50 / h |

The results were quite interesting.

For virtual servers, Vultr was generally faster than DigitalOcean: by up to 10% for shared CPUs and up to 33% for dedicated CPUs.

Curiously, servers in Los Angeles and San Francisco were more performant than servers in New York and New Jersey, for both DigitalOcean and Vultr. We would have to benchmark all locations to pick a winner, but that's a project for later.

Vultr's AMD servers were generally slower than its Intel ones. We later benchmarked DigitalOcean's AMD servers too, with the same result! Unfortunately, both providers sell the AMD and Intel instances at the same price.

We got very low CPU scores on DigitalOcean's New York Datacenter 3 (NYC3), on both shared-CPU and dedicated-CPU droplets. The scores became normal the next day when we ran the benchmark a second time; the retest values are shown in parentheses in the table above. Noisy neighbors may be the cause of the low scores, but we didn't expect a 2x-3x slowdown, nor that dedicated-CPU servers would be affected. Shouldn't they "have guaranteed access to the full hyperthread at all times," according to the documentation?
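
One way to tell a one-off glitch from sustained contention is to repeat the benchmark over time and watch the spread. Here is a rough sketch; the sample count and interval are arbitrary choices of ours, and the output parsing mirrors the earlier snippet.

```python
import re
import statistics
import subprocess
import time

def compressing_rating():
    """Return the compressing rating (MIPS) from a single '7z b -mmt1' run."""
    out = subprocess.run(
        ["7z", "b", "-mmt1"], capture_output=True, text=True, check=True
    ).stdout
    for line in out.splitlines():
        if line.startswith("Avr:"):
            left, _, _ = line.partition("|")
            return int(re.findall(r"\d+", left)[-1])
    raise RuntimeError("could not parse 7z output")

# Take a handful of samples spread over time; a sustained 2x-3x drop
# suggests contention on the host rather than measurement noise.
samples = []
for _ in range(6):
    samples.append(compressing_rating())
    time.sleep(600)  # 10 minutes between samples

print(f"min={min(samples)} median={statistics.median(samples)} max={max(samples)}")
```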

For bare metal servers, the Equinix m3.small.x86 was the absolute winner, about 22% faster than the Vultr AMD EPYC 7443P. But per CPU thread it was also much more expensive: at roughly the same hourly price, the Vultr machine offers 48 threads versus 16. That made the Vultr machine the cost-performance choice.
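
As a rough sanity check on that claim, here is the per-thread price computed from the hourly prices and thread counts in the table above (ignoring memory, storage, and the Equinix machine's single-thread lead):

```python
# Hourly prices and thread counts taken from the benchmark table above.
servers = {
    "Vultr Bare Metal - AMD EPYC 7443P": {"price_per_hour": 1.079, "threads": 48},
    "Equinix bare metal - m3.small.x86": {"price_per_hour": 1.05, "threads": 16},
}

for name, spec in servers.items():
    per_thread = spec["price_per_hour"] / spec["threads"]
    print(f"{name}: ${per_thread:.4f} per thread-hour")

# Vultr Bare Metal - AMD EPYC 7443P: $0.0225 per thread-hour
# Equinix bare metal - m3.small.x86: $0.0656 per thread-hour
```

Even with its lower single-thread scores, the Vultr machine works out to roughly a third of the per-thread cost.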

Bare metal servers are guaranteed to provide the full system resources that CPU-intensive tasks need, so we plan to try them out in production.