Batch processing

If you would like to access a large number of searches, places, or reviews, batch processing can significantly speed up the process. Instead of sending individual requests, a single batch request can include up to 1,000 queries.

In the following examples, we'll demonstrate real-world use cases, and show how to use batch requests most efficiently to achieve the intended goal.

Example one - searching for multiple queries at one location

In the first example, we'll search for three different types of places located around the same coordinates.

Nimble APIs requires that a base64 encoded credential string be sent with every request to authenticate your account. For detailed examples, see Web API Authentication.
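
For reference, here is a minimal Python sketch of building this header, assuming the credential string is a base64-encoded username:password pair (the account values below are placeholders):

import base64

# Placeholder credentials - replace with your Nimble account username and password.
username = "your_username"
password = "your_password"

# HTTP Basic authentication: base64-encode "username:password".
credential_string = base64.b64encode(f"{username}:{password}".encode()).decode()
headers = {"Authorization": f"Basic {credential_string}"}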

curl -X POST 'https://api.webit.live/api/v1/batch/serp' \
--header 'Authorization: Basic <credential string>' \
--header 'Content-Type: application/json' \
--data-raw '{ 
    "requests": [
        { "query": "Restaurants" },
        { "query": "Theaters" },
        { "query": "Cafes" }
    ],
    "coordinates": {
        "latitude": "40.7123695",
        "longitude": "-74.0357317"
    },
    "search_engine": "google_maps_search",
    "storage_type": "s3",
    "storage_url": "s3://Your.Repository.Path/",
    "callback_url": "https://your.callback.url/path"
}'
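
The same batch can also be submitted from code. Below is a minimal sketch using Python's requests library; the credential string, storage, and callback values are placeholders, exactly as in the curl example above:

import requests

payload = {
    "requests": [
        {"query": "Restaurants"},
        {"query": "Theaters"},
        {"query": "Cafes"},
    ],
    "coordinates": {"latitude": "40.7123695", "longitude": "-74.0357317"},
    "search_engine": "google_maps_search",
    "storage_type": "s3",
    "storage_url": "s3://Your.Repository.Path/",
    "callback_url": "https://your.callback.url/path",
}

response = requests.post(
    "https://api.webit.live/api/v1/batch/serp",
    headers={"Authorization": "Basic <credential string>"},
    json=payload,
)
print(response.json())  # Contains the batch_id and one pending task per request.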

Parameters that are placed outside the requests object, such as coordinates, search_engine, storage_type, storage_url, and callback_url, are automatically applied as defaults to all defined requests.

If a parameter is set both inside and outside the requests object, the value inside the request overrides the one outside.
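
Conceptually, the effective parameters of each request are the batch-level defaults overlaid with the request's own values. The merge happens server-side; the snippet below is only a mental model of the precedence rule, not part of the API:

# Batch-level defaults (everything outside the requests object).
defaults = {
    "coordinates": "@34.0299743,-118.2947275,11.49z",
    "search_engine": "google_maps_search",
}

# A single entry from the requests array.
request = {"query": "Cafes", "coordinates": "@40.7123695,-74.0357317,14z"}

# Request-level values win over batch-level defaults.
effective = {**defaults, **request}
# -> coordinates from the request, search_engine from the defaults, plus the query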

Example two - searching for one query at multiple locations

In the next example, we'll search for one type of place at multiple coordinates:

curl -X POST 'https://api.webit.live/api/v1/batch/serp' \
--header 'Authorization: Basic <credential string>' \
--header 'Content-Type: application/json' \
--data-raw '{ 
    "requests": [
        { "coordinates": "@38.838023,-76.9945248,9.45z" },
        { "coordinates": "@39.9536112,-75.1691209,11.87z" },
        { "coordinates": "@40.7493898,-74.0559509,10.87z" },
        { "coordinates": "@34.0299743,-118.2947275,11.49z" }
    ],
    "query": "Restaurants",
    "search_engine": "google_maps_search",
    "storage_type": "s3",
    "storage_url": "s3://Your.Repository.Path/",
    "callback_url": "https://your.callback.url/path"
}'

By placing the query parameter outside of the requests object, we apply it as a default to all the defined requests. Thus, this batch request triggers four individual searches for "Restaurants" around four unique coordinates.

Example three - combining queries and locations

Next, we combine the previously highlighted features in order to search for a different type of place at unique coordinates for each request:

curl -X POST 'https://api.webit.live/api/v1/batch/serp' \
--header 'Authorization: Basic <credential string>' \
--header 'Content-Type: application/json' \
--data-raw '{ 
    "requests": [
        { "query": "Restaurants", "coordinates": "@38.838023,-76.9945248,9.45z" },
        { "query": "Theaters", "coordinates": "@39.9536112,-75.1691209,11.87z" },
        { "query": "Cafes", "coordinates": "@40.7493898,-74.0559509,10.87z" },
        { "query": "Bars" }
    ],
    "coordinates": "@34.0299743,-118.2947275,11.49z",
    "search_engine": "google_maps_search",
    "storage_type": "s3",
    "storage_url": "s3://Your.Repository.Path/",
    "callback_url": "https://your.callback.url/path"
}'

Notice that for the last request, we search for "Bars" without explicitly defining the coordinates where this search should be performed. In this case, the default coordinates defined outside the requests object would be used instead.

Request options

Batch requests use the same parameters as asynchronous requests, with the exception of the requests object.

requests (Optional, Object array)
Allows for defining custom parameters for each request within the batch. Any of the parameters below can be used in an individual request.

search_engine (Required, Enum: google_maps_search | google_maps_place | google_maps_reviews)
The search engine from which to collect results.

query (Required, String; applicable only when search_engine = google_maps_search)
The terms or phrases to search for.

coordinates (Optional, String or Object; applicable only when search_engine = google_maps_search)
The coordinates to target. As a string, use the format "@{latitude},{longitude},{zoom}z"; zoom is optional (default = 14). As an object:
"coordinates": { "latitude": "40.7590562", "longitude": "-74.0042502", "zoom": "14" }

place_id/data_id (Required, Array[string]; applicable only when search_engine = google_maps_place or google_maps_reviews)
Strings used by Google to identify a particular place. place_id and data_id cannot both be used in a single batch.

domain (Optional, String)
Search through a custom top-level domain of Google, e.g. "co.uk".

country (Optional, String; default = all)
Country used to access the target URL. Use ISO Alpha-2 country codes, e.g. US, DE, GB.

locale (Optional, String; default = en)
LCID standard locale used for the URL request. Alternatively, use auto for automatic locale selection based on country targeting.

location (Optional, String)
Search Google through a custom geolocation, regardless of country or proxy location, e.g. "London,Ohio,United States".

parse (Optional, Enum: true | false; default = true)
Instructs Nimble whether to structure the results into JSON format or return the raw HTML.

storage_type (Optional, Enum: s3 | gs; leave blank to enable Push/Pull delivery)
Use s3 for Amazon S3 and gs for Google Cloud Platform.

storage_url (Optional, String; leave blank to enable Push/Pull delivery)
Repository URL, e.g. s3://Your.Bucket.Name/your/object/name/prefix/. Output will be saved to TASK_ID.json.

callback_url (Optional, String)
A URL to call back once the data is delivered. Nimble APIs will send a POST request to the callback_url with the task details once the task is complete (this notification will not include the requested data).
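
Since this notification is a plain HTTP POST containing only the task details, a lightweight listener is enough to receive it. Below is a minimal sketch using Python's standard library; the port and the fields printed are illustrative assumptions based on the task objects shown in the responses further down:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CallbackHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the task details sent to the callback_url.
        length = int(self.headers.get("Content-Length", 0))
        task = json.loads(self.rfile.read(length) or b"{}")
        print("Task completed:", task.get("id"), "->", task.get("output_url"))
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 8080), CallbackHandler).serve_forever()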

Setting GCS/AWS access permissions

GCS Repository Configuration

In order to use Google Cloud Storage as your destination repository, please add Nimble's system user as a principal to the relevant bucket. To do so, navigate to the "bucket details" page in your GCP console and click on "Permissions" in the submenu.

Next, paste our system user [email protected] into the "New Principals" box, select Storage Object Creator as the role, and click Save.

That’s all! At this point, Nimble will be able to upload files to your chosen GCS bucket.

S3 repository configuration

In order to use S3 as your destination repository, please give Nimble's service user permission to upload files to the relevant S3 bucket by adding the bucket policy below in the AWS console.

Follow these steps:

1. Go to the "Permissions" tab on the bucket's dashboard.

2. Scroll down to "Bucket policy" and press "Edit".

3. Paste the following bucket policy configuration into your bucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::744254827463:user/webit-uploader"
            },
            "Action": [
                "s3:PutObject",
                "s3:PutObjectACL"
            ],
            "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
        },
        {
            "Sid": "Statement2",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::744254827463:user/webit-uploader"
            },
            "Action": "s3:GetBucketLocation",
            "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME"
        }
    ]
}

Important: Remember to replace “YOUR_BUCKET_NAME” with your actual bucket name.

4. Scroll down and press "Save changes".
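
As an alternative to the console steps above, the same policy can be applied with the AWS SDK. Here is a hedged sketch using boto3 (the bucket name is a placeholder; note that put_bucket_policy replaces any existing policy on the bucket):

import json
import boto3

bucket_name = "YOUR_BUCKET_NAME"  # Placeholder - use your actual bucket name.

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::744254827463:user/webit-uploader"},
            "Action": ["s3:PutObject", "s3:PutObjectACL"],
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
        },
        {
            "Sid": "Statement2",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::744254827463:user/webit-uploader"},
            "Action": "s3:GetBucketLocation",
            "Resource": f"arn:aws:s3:::{bucket_name}",
        },
    ],
}

# Attach the policy to the bucket. This overwrites any policy already attached.
boto3.client("s3").put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(policy))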

S3 Encrypted Buckets

If your S3 bucket is encrypted using an AWS Key Management Service (KMS) key, additional permissions beyond those outlined above are needed. Specifically, Nimble's service user must be given permission to encrypt and decrypt objects using the KMS key. To do this, follow the steps below:

  1. Sign in to the AWS Management Console and open the AWS Key Management Service (KMS) console.

  2. In the navigation pane, choose "Customer managed keys".

  3. Select the KMS key you want to modify.

  4. Choose the "Key policy" tab, then "Switch to policy view".

  5. Click "Edit".

  6. Add the following statement to the existing policy JSON, ensuring it's inside the Statement array:

{
	"Version": "2012-10-17",
	"Id": "example-key-policy",
	"Statement": [
		// ... your pre-existing statements ...
		{
			"Sid": "Allow Nimble APIs account",
			"Effect": "Allow",
			"Principal": {
				"AWS": "arn:aws:iam::744254827463:user/webit-uploader"
			},
			"Action": [
				"kms:Encrypt",
				"kms:Decrypt",
				"kms:ReEncrypt*",
				"kms:GenerateDataKey*",
				"kms:DescribeKey"
			],
			"Resource": "*"
		}
	]
}
  7. Click "Save changes" to update the key policy.
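
If you prefer to script this change, the statement can also be appended with the AWS SDK. Here is a hedged boto3 sketch (the key ID is a placeholder; "default" is the standard policy name for a KMS key policy):

import json
import boto3

kms = boto3.client("kms")
key_id = "YOUR_KMS_KEY_ID"  # Placeholder - the key used to encrypt the bucket.

# Fetch the key's current policy, append the Nimble statement, and write it back.
policy = json.loads(kms.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"])
policy["Statement"].append({
    "Sid": "Allow Nimble APIs account",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::744254827463:user/webit-uploader"},
    "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey",
    ],
    "Resource": "*",
})
kms.put_key_policy(KeyId=key_id, PolicyName="default", Policy=json.dumps(policy))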

That's it! You've now given Nimble APIs permission to encrypt and decrypt objects, enabling access to encrypted buckets.

Please add Nimble's system/service user to your GCS or S3 bucket to ensure that data can be delivered successfully.

Response

Initial Response

Batch requests operate asynchronously, and treat each request as a separate task. The result of each task is stored in a file, and a notification is sent to the provided callback any time an individual task is completed.

{
    "batch_id": "7a07a96d-c402-4d98-a17f-4ecb390d11a3",
    "batch_size": 3,
    "tasks": [
        {
            "batch_id": "7a07a96d-c402-4d98-a17f-4ecb390d11a3",
            "id": "2e508d43-8b02-4fc0-96c7-0968ab454a0c",
            "state": "pending",
            "output_url": "s3://Your.Repository.Path/2e508d43-8b02-4fc0-96c7-0968ab454a0c.json",
            "callback_url": "https://your.callback.url/path",
            "status_url": "https://api.webit.live/api/v1/tasks/2e508d43-8b02-4fc0-96c7-0968ab454a0c",
            "created_at": "2022-07-24T08:09:23.205Z",
            "modified_at": "2022-07-24T08:09:23.205Z",
            "input": {
            ...
        },
        {
            "batch_id": "7a07a96d-c402-4d98-a17f-4ecb390d11a3",
            "id": "63cc3bd5-01b4-4787-90a2-f382b9960c77",
            "state": "pending",
            ...
        },
        {
            "batch_id": "7a07a96d-c402-4d98-a17f-4ecb390d11a3",
            "id": "4cb39bbf-5580-4c50-8ed4-4a7905e2ec52",
            "state": "pending",
            ...
        }
    ]
}

Checking batch progress and status

GET https://api.webit.live/api/v1/batches/<batch_id>/progress

Like asynchronous tasks, the status of a batch is available for 24 hours.

curl -X GET 'https://api.webit.live/api/v1/batches/<batch_id>/progress' \
--header 'Authorization: Basic <credential string>'

Response

The progress of a batch is reported as a fraction between 0 and 1.

{
    "status": "success",
    "completed": false,
    "progress": 0.333333
}

Once a batch is finished, its progress will be reported as “1”.

{
    "status": "success",
    "completed": true,
    "progress": 1
}
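
A simple polling loop in Python illustrates how this endpoint might be used in practice (the batch_id, credential string, and poll interval are placeholders):

import time
import requests

batch_id = "<batch_id>"  # Returned in the initial batch response.
headers = {"Authorization": "Basic <credential string>"}
url = f"https://api.webit.live/api/v1/batches/{batch_id}/progress"

# Poll until the batch reports completed = true.
while True:
    status = requests.get(url, headers=headers).json()
    print(f"progress: {status['progress']:.0%}")
    if status.get("completed"):
        break
    time.sleep(30)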

Retrieving Batch Summary

Once a batch has finished, it's possible to retrieve a summary of the completed tasks by using the following endpoint:

GET https://api.webit.live/api/v1/batches/<batch_id>

For example:

curl -X GET 'https://api.webit.live/api/v1/batches/<batch_id>' \
--header 'Authorization: Basic <credential string>'

The response object lists the status of the overall batch, as well as the individual tasks and their details:

Response

{
    "status": "success",
    "tasks": [
        {
            "batch_id": "7a07a96d-c402-4d98-a17f-4ecb390d11a3",
            "id": "2e508d43-8b02-4fc0-96c7-0968ab454a0c",
            "state": "success",
            "output_url": "s3://Your.Repository.Path/2e508d43-8b02-4fc0-96c7-0968ab454a0c.json",
            "callback_url": "https://your.callback.url/path",
            "status_url": "https://[base_url]/api/v1/tasks/2e508d43-8b02-4fc0-96c7-0968ab454a0c",
            "created_at": "2022-07-24T08:09:23.205Z",
            "modified_at": "2022-07-24T08:10:27.244Z",
            "input": {
        ...
            }
        },
        {
            "batch_id": "7a07a96d-c402-4d98-a17f-4ecb390d11a3",
            "id": "63cc3bd5-01b4-4787-90a2-f382b9960c77",
            "state": "success",
            "output_url": "s3://Your.Repository.Path/63cc3bd5-01b4-4787-90a2-f382b9960c77.json",
            "callback_url": "https://your.callback.url/path",
            "status_url": "https://[base_url]/api/v1/tasks/63cc3bd5-01b4-4787-90a2-f382b9960c77",
            "created_at": "2022-07-24T08:09:23.205Z",
            "modified_at": "2022-07-24T08:10:27.973Z",
            "input": {
        ...
            }
         },
        {
            "batch_id": "7a07a96d-c402-4d98-a17f-4ecb390d11a3",
            "id": "4cb39bbf-5580-4c50-8ed4-4a7905e2ec52",
            "state": "success",
            "output_url": "s3://Your.Repository.Path/4cb39bbf-5580-4c50-8ed4-4a7905e2ec52.json",
            "callback_url": "https://your.callback.url/path",
            "status_url": "https://[base_url]/api/v1/tasks/4cb39bbf-5580-4c50-8ed4-4a7905e2ec52",
            "created_at": "2022-07-24T08:09:23.205Z",
            "modified_at": "2022-07-24T08:10:30.292Z",
            "input": {
        ...
            }
        }
    ],
    "completed": true,
    "progress": 1
}
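
To act on the summary programmatically, for example to collect the output files of successfully completed tasks, here is a minimal sketch based on the fields shown above:

import requests

batch_id = "<batch_id>"
headers = {"Authorization": "Basic <credential string>"}

summary = requests.get(
    f"https://api.webit.live/api/v1/batches/{batch_id}", headers=headers
).json()

# Collect the storage locations of all successfully completed tasks.
output_urls = [
    task["output_url"] for task in summary["tasks"] if task["state"] == "success"
]
print(f"{len(output_urls)} of {len(summary['tasks'])} tasks succeeded")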

500 Error

{
    "status": "error",
    "task_id": "<task_id>",
    "msg": "can't download the query response - please try again"
}

400 Input Error

{
    "status": "failed",
    "msg": "<error message>"
}

Response Codes

Status  Description
200     OK
400     The requested resource could not be reached
401     Unauthorized/invalid credential string
500     Internal service error
501     An error was encountered by the proxy service
