Nimble Map is like having a bird’s-eye view of any website. Give it a URL, and it automatically discovers and lists related URLs on that site, including URLs from the sitemap, along with helpful context like titles and descriptions. This context lets AI agents or developers intelligently decide which pages to scrape, making data collection smarter and more efficient.

Quick Start

Example Request

from nimble_python import Nimble

nimble = Nimble(api_key="YOUR-API-KEY")

result = nimble.map({
    "url": "https://www.example.com"
})

print(f"Found {len(result['links'])} URLs")

Example Response

{
  "task_id": "123e4567-e89b-12d3-a456-426614174000",
  "success": true,
  "links": [
    {
      "url": "https://www.example.com",
      "title": "Home Page",
      "description": "Welcome to our website"
    },
    {
      "url": "https://www.example.com/about"
    },
    {
      "url": "https://www.example.com/products",
      "title": "Products"
    },
    {
      "url": "https://www.example.com/blog",
      "title": "Blog",
      "description": "Latest news and updates"
    }
  ]
}

How it works

1. You provide a starting URL

Give Map the URL of the website you want to explore.

2. Map discovers URLs

  • Reads the website’s sitemap (if available)
  • Analyzes page links and navigation
  • Identifies all discoverable URLs on the site

3. Map collects metadata for each URL

  • Extracts page titles from sitemap or meta tags
  • Gathers descriptions when available
  • Associates context with each discovered URL

4. Map returns a structured URL list

Get a complete list of URLs with titles and descriptions, ready for AI reasoning or crawling (see the sketch below).
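As an example of working with that structured list, the sketch below groups the discovered URLs by their first path segment to get a quick overview of the site’s sections. It assumes the response shape shown in the Quick Start example; the grouping logic is illustrative and not part of the API:
from collections import Counter
from urllib.parse import urlparse

from nimble_python import Nimble

nimble = Nimble(api_key="YOUR-API-KEY")

result = nimble.map({"url": "https://www.example.com"})

# Count URLs per top-level path segment, e.g. /blog/..., /products/...
sections = Counter()
for link in result["links"]:
    path = urlparse(link["url"]).path.strip("/")
    sections[path.split("/")[0] or "(root)"] += 1

for section, count in sections.most_common():
    print(f"{section}: {count} URLs")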

Parameters

Supported input parameters:
url
string
required
The website URL you want to map. This is where the mapping process starts.
Example: https://www.example.com
sitemap
string
default:"include"
Choose how to use the website’s sitemap for discovering URLs. Options:
  • include (default) - Get URLs from both the sitemap and the site itself
  • only - Only use URLs listed in the sitemap
  • skip - Ignore the sitemap URLs
domain_filter
string
default:"all"
Control whether to stay on one domain or include related domains. Options:
  • domain - Only get URLs from the exact domain you specified
  • subdomain - Include URLs from subdomains
  • all (default) - Include URLs from any domain that’s linked
limit
integer
default:"5000"
Maximum number of URLs to return.
  • Min: 1
  • Max: 100000
country
string
default:"ALL"
Map the site as if you’re visiting from a specific country. Useful for sites that show different content based on location.
Use ISO Alpha-2 country codes like US, GB, FR, DE, CA, JP, etc. Use ALL for random country selection.
locale
string
Set the language preference for the mapping, using the LCID standard. Helpful for multilingual sites.
Locale Examples:
  • en-US - English (United States)
  • en-GB - English (United Kingdom)
  • fr-FR - French (France)
  • de-DE - German (Germany)

Usage

Basic map

Map a website using default settings:
from nimble_python import Nimble

nimble = Nimble(api_key="YOUR-API-KEY")

result = nimble.map({
    "url": "https://www.example.com"
})

print(f"Found {len(result['links'])} URLs")
for link in result['links']:
    print(link['url'])

Sitemap-only mapping

Fast mapping using only the sitemap:
from nimble_python import Nimble

nimble = Nimble(api_key="YOUR-API-KEY")

result = nimble.map({
    "url": "https://www.example.com",
    "sitemap": "only"
})

print(f"Found {len(result['links'])} URLs in sitemap")

Skip sitemap

Map without using the sitemap, discovering URLs through page crawling:
from nimble_python import Nimble

nimble = Nimble(api_key="YOUR-API-KEY")

result = nimble.map({
    "url": "https://www.example.com",
    "sitemap": "skip"
})

print(result)

Include subdomains

Map URLs across all subdomains:
from nimble_python import Nimble

nimble = Nimble(api_key="YOUR-API-KEY")

result = nimble.map({
    "url": "https://www.example.com",
    "domain_filter": "subdomain"
})

print(result)

Exact domain

Restrict mapping to the exact domain only:
from nimble_python import Nimble

nimble = Nimble(api_key="YOUR-API-KEY")

result = nimble.map({
    "url": "https://blog.example.com",
    "domain_filter": "domain"
})

print(result)

Geo-targeted mapping

Map with specific country and locale settings:
from nimble_python import Nimble

nimble = Nimble(api_key="YOUR-API-KEY")

result = nimble.map({
    "url": "https://www.example.com",
    "country": "GB",
    "locale": "en-GB"
})

print(result)

Combined parameters

Map with multiple parameters for precise control:
from nimble_python import Nimble

nimble = Nimble(api_key="YOUR-API-KEY")

result = nimble.map({
    "url": "https://www.example.com",
    "sitemap": "include",
    "domain_filter": "subdomain",
    "limit": 500
})

print(f"Found {len(result['links'])} URLs across subdomains")

Response Fields

When you use Map, you receive:
  • URLs with context - Each URL includes optional title and description for AI reasoning
  • Complete discovery - Every discoverable page on the site
  • Fast results - Most sites mapped in seconds
  • Smart filtering - Subdomain control and comprehensive sitemap + link discovery
{
  "task_id": "123e4567-e89b-12d3-a456-426614174000",
  "success": true,
  "links": [
    {
      "url": "https://www.example.com",
      "title": "Home Page",
      "description": "Welcome to our website"
    },
    {
      "url": "https://www.example.com/about"
    },
    {
      "url": "https://www.example.com/products",
      "title": "Products"
    },
    {
      "url": "https://www.example.com/blog",
      "title": "Blog",
      "description": "Latest news and updates"
    }
  ]
}
Field                Type     Description
task_id              string   Unique identifier for the map task
success              boolean  Whether the request succeeded
links                array    Discovered URLs
links[].url          string   The discovered URL
links[].title        string   Page title (if available)
links[].description  string   Page description (if available)
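Because title and description only appear when available, treat them as optional fields when consuming the response. A minimal sketch, assuming the field layout above:
from nimble_python import Nimble

nimble = Nimble(api_key="YOUR-API-KEY")

result = nimble.map({"url": "https://www.example.com"})

if result["success"]:
    for link in result["links"]:
        url = link["url"]                               # always present
        title = link.get("title", "(no title)")         # optional
        description = link.get("description", "")       # optional
        print(f"{url} | {title} | {description}")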

Use cases

AI Agent Planning

Give AI agents URL context (titles, descriptions) to intelligently decide which pages to scrape

Pre-crawl Planning

Discover all URLs before crawling to plan budget and prioritize important pages

Site Audits

Quick inventory of all pages for SEO audits, content analysis, or quality checks

Competitive Research

See competitor site structure - product pages, blog posts, landing pages

Real-world examples

Scenario: Your AI agent needs to extract product data from an e-commerce site but should only scrape relevant product pages. How Map helps:
  • Discovers all URLs with titles and descriptions
  • AI agent reads the context (title: “Laptop Pro 15”, description: “High-performance laptop…”)
  • Agent intelligently decides which URLs contain product data vs navigation/legal pages
  • Only scrapes relevant product URLs, saving costs and time
Result: AI-powered smart scraping that automatically filters out irrelevant pages based on URL context (see the sketch below).
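A minimal sketch of that filtering step. The path pattern and skip list are hypothetical stand-ins for an agent reasoning over each link’s title and description:
from nimble_python import Nimble

nimble = Nimble(api_key="YOUR-API-KEY")

result = nimble.map({"url": "https://www.example.com"})

# Hypothetical heuristic: keep links that look like product pages and
# drop navigation/legal pages. A real agent would reason over titles
# and descriptions instead of fixed keywords.
SKIP_WORDS = ("privacy", "terms", "careers", "login")
product_links = [
    link for link in result["links"]
    if "/products" in link["url"]
    and not any(word in link["url"].lower() for word in SKIP_WORDS)
]

print(f"Scraping {len(product_links)} of {len(result['links'])} discovered URLs")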
Scenario: You want to understand a competitor’s product catalog structure. How Map helps:
  • Discover all product category pages
  • Find hidden product pages not linked from the homepage
  • Identify promotional landing pages
  • Map out the entire site structure before detailed scraping
Result: Complete visibility into their online catalog without manual browsing.
Scenario: You’re tracking content strategy across multiple news sites. How Map helps:
  • Get a complete list of articles and sections
  • Discover new content categories as they’re added
  • Identify archive structures and content organization
  • Plan targeted crawling for specific content types
Result: Systematic content discovery at scale (see the sketch below).
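One way to track new content is to map the same site periodically and diff the URL sets between runs. A minimal sketch, assuming you persist the previous run’s URLs yourself (the snapshot file name is illustrative):
import json
from pathlib import Path

from nimble_python import Nimble

nimble = Nimble(api_key="YOUR-API-KEY")

result = nimble.map({"url": "https://www.example.com"})
current_urls = {link["url"] for link in result["links"]}

# Compare with the URLs saved from the previous run (hypothetical file).
snapshot = Path("previous_urls.json")
previous_urls = set(json.loads(snapshot.read_text())) if snapshot.exists() else set()

new_urls = current_urls - previous_urls
print(f"{len(new_urls)} new URLs since the last run")

# Save the current snapshot for next time.
snapshot.write_text(json.dumps(sorted(current_urls)))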
Scenario: Analyzing website architecture for SEO optimization. How Map helps:
  • Visualize complete site hierarchy
  • Identify orphaned pages (pages not well-linked)
  • Discover deep pages buried in site structure
  • Understand internal linking patterns
Result: Comprehensive site architecture insights in seconds (see the sketch below).
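Path depth is one simple proxy for how deeply a page is buried. A minimal sketch over the Map output; the depth threshold is an arbitrary example:
from urllib.parse import urlparse

from nimble_python import Nimble

nimble = Nimble(api_key="YOUR-API-KEY")

result = nimble.map({"url": "https://www.example.com"})

# Depth = number of path segments; /blog/2024/post -> depth 3.
def depth(url):
    return len([seg for seg in urlparse(url).path.split("/") if seg])

deep_pages = [link["url"] for link in result["links"] if depth(link["url"]) >= 4]
print(f"{len(deep_pages)} pages are 4+ levels deep")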
Scenario: You need to scrape data from a large website efficiently. How Map helps:
  • Get complete URL inventory before crawling
  • Filter for specific sections or page types
  • Plan crawl budget and prioritization
  • Avoid wasting resources on irrelevant pages
Result: Smarter, more efficient data collection (see the sketch below).
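A minimal sketch of budgeting a crawl from the Map output: prioritize links that already carry context, then cap the list at a fixed budget. The budget size and ordering are assumptions for illustration:
from nimble_python import Nimble

nimble = Nimble(api_key="YOUR-API-KEY")

result = nimble.map({"url": "https://www.example.com"})

CRAWL_BUDGET = 200  # hypothetical number of pages we can afford to crawl

# Prefer links that already have a title/description, since they give
# the most signal about whether a page is worth crawling.
ranked = sorted(
    result["links"],
    key=lambda link: (bool(link.get("title")), bool(link.get("description"))),
    reverse=True,
)
to_crawl = [link["url"] for link in ranked[:CRAWL_BUDGET]]
print(f"Planned crawl: {len(to_crawl)} of {len(result['links'])} discovered URLs")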

Map vs Crawl

Need                                Use
Quick URL discovery                 Map - completes in seconds
URL list with titles/descriptions   Map
Deep link following                 Crawl
Complex filtering patterns          Crawl
Extract content from pages          Crawl
Map is the starting point - use it to discover URLs with context, then use Crawl or Extract for the actual data collection.

Next steps