- Step 1: Set up the Docker environment or run the server locally
- Step 2: Call the `/scrape` endpoint to scrape a single URL (see the first sketch after this list)
- Step 3: Use `/scrape-batch` to process multiple URLs in one request (second sketch below)
- Step 4: Use `/crawl` to crawl and scrape an entire website (third sketch below)
- Step 5: Retrieve and process the scraped data (fourth sketch below)
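A minimal Python sketch of calling the `/scrape` endpoint with the `requests` library. The base URL (`http://localhost:3000`), the JSON field `url`, and the response shape are assumptions rather than documented behaviour; adjust them to match your deployment.

```python
import requests

# Assumed base URL for a local deployment; change host/port to match your setup.
BASE_URL = "http://localhost:3000"

# Scrape a single page via the /scrape endpoint.
# The request body field "url" is an assumption about the API's schema.
response = requests.post(
    f"{BASE_URL}/scrape",
    json={"url": "https://example.com"},
    timeout=60,
)
response.raise_for_status()

result = response.json()
print(result)  # Inspect the payload to see which fields the server actually returns.
```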
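A similar sketch for `/scrape-batch`, assuming the endpoint accepts a JSON array of URLs under a `urls` key and returns one result per URL; both the field name and the response format are guesses and should be checked against the server's actual schema.

```python
import requests

BASE_URL = "http://localhost:3000"  # assumed local deployment

urls = [
    "https://example.com/page-1",
    "https://example.com/page-2",
    "https://example.com/page-3",
]

# Submit several URLs in one request; "urls" is an assumed field name.
response = requests.post(f"{BASE_URL}/scrape-batch", json={"urls": urls}, timeout=120)
response.raise_for_status()

for item in response.json():
    print(item)  # One entry per scraped URL, assuming the server returns a list.
```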
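For `/crawl`, a crawl job typically needs a seed URL and some bound on how far it goes. The sketch below assumes a synchronous endpoint that takes a `url` plus a hypothetical `max_pages` limit; if the server runs crawls asynchronously (returning a job ID to poll), the flow will differ.

```python
import requests

BASE_URL = "http://localhost:3000"  # assumed local deployment

# Start a crawl from a seed URL; "max_pages" is a hypothetical parameter
# used here only to illustrate bounding the crawl.
response = requests.post(
    f"{BASE_URL}/crawl",
    json={"url": "https://example.com", "max_pages": 50},
    timeout=300,
)
response.raise_for_status()

pages = response.json()
print(f"Crawled {len(pages)} pages")  # assumes the response is a list of page results
```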
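Finally, a sketch of post-processing the scraped data. It assumes each result is a JSON object with `url` and `content` fields (hypothetical names) and writes the records to disk as JSON Lines; adapt the field names to whatever the API actually returns.

```python
import json

def save_results(results, path="scraped.jsonl"):
    """Write scraped records to a JSON Lines file, one record per line."""
    with open(path, "w", encoding="utf-8") as fh:
        for record in results:
            # "url" and "content" are assumed field names; inspect a real
            # response to confirm what the server returns.
            fh.write(json.dumps({
                "url": record.get("url"),
                "content": record.get("content"),
            }, ensure_ascii=False) + "\n")

# Example usage with the `pages` list from the /crawl sketch above:
# save_results(pages)
```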