Prerequisites:
  • A ManyPI API key (obtain here).
  • An active ManyPI subscription.
Follow these steps to implement ManyPI in your application or workflow.
1. Set up your scraper

Visit the ManyPI Dashboard and click “Create Scraper.”
Enter the URL of the webpage you want to convert into an API, then confirm.
Next, describe the data you want to extract—for example:
“Retrieve all product names and prices for items currently in stock.”
Our AI will generate a type-safe JSON schema based on your requirements. After you confirm it, we’ll automatically create a new API endpoint.
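For illustration only, a schema generated from the example description above might map to types like the following (the field names here are hypothetical; ManyPI generates the actual schema from your description and asks you to confirm it):
from typing import List, TypedDict

# Hypothetical shape for "Retrieve all product names and prices
# for items currently in stock" -- illustrative only.
class Product(TypedDict):
    name: str     # product name as shown on the page
    price: float  # current price of the in-stock item

class ScrapeResult(TypedDict):
    products: List[Product]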
2. Get your API endpoint

Once your scraper is set up, you’ll receive a custom API endpoint that you can use to run data extractions. The route will look like this:
https://app.manypi.com/api/scrape/your-endpoint-id-here
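Before wiring the endpoint into code, a common pattern is to keep the endpoint ID and your API key out of source control, for example in environment variables (the variable names below are just examples, not something ManyPI requires):
import os

# Example environment variable names; use whatever fits your setup.
API_KEY = os.environ["MANYPI_API_KEY"]
ENDPOINT_ID = os.environ["MANYPI_ENDPOINT_ID"]

ENDPOINT_URL = f"https://app.manypi.com/api/scrape/{ENDPOINT_ID}"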
3. Use your API endpoint

You can now integrate your new API into any environment, application, or workflow that supports REST APIs. For example, call the endpoint with cURL and your API key:
curl -X POST 'https://app.manypi.com/api/scrape/your-endpoint-id-here' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"url":"https://example.com"}'

Getting responses

By default, responses are processed synchronously, and each request has a 60-second timeout.
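Because each request is synchronous and subject to that 60-second limit, it helps to set a matching client-side timeout and handle the timeout case explicitly. A minimal sketch, building on the requests example above:
import os
import requests

API_KEY = os.environ["MANYPI_API_KEY"]
ENDPOINT_ID = os.environ["MANYPI_ENDPOINT_ID"]

try:
    response = requests.post(
        f"https://app.manypi.com/api/scrape/{ENDPOINT_ID}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"url": "https://example.com"},
        timeout=60,  # give up client-side after the server's 60-second limit
    )
    response.raise_for_status()
    data = response.json()
except requests.exceptions.Timeout:
    # The extraction did not finish within 60 seconds; retry later
    # or target a smaller page.
    data = None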

Out of credits

Credits are required to run extraction tasks (your scrapers), and each extraction uses a certain amount of credits based on the volume of data processed and the compute required by our AI and extraction engine. If your account runs out of credits, you can either:
  1. Upgrade to a higher (paid) plan.
  2. Purchase add-on credits (if you're on a paid plan).

Curious about what changed in the latest version? Check out the changelog.