Why deploy Browser Use on Render?
Browser Use is a Python library that enables AI agents to control web browsers programmatically. It allows LLMs to perform browser-based tasks like navigating websites, clicking elements, and extracting information by translating natural language instructions into browser actions.
This template gives you a Browser Use API with FastAPI endpoints already configured—just add your LLM provider keys and you have a working /run endpoint for browser automation tasks. Setting this up manually means installing Playwright dependencies, configuring headless Chrome, and wiring up the Browser Use agent yourself. With Render's Blueprint deploy, you go from fork to running service in minutes, and the container handles all the browser runtime complexity so you can focus on the tasks you want to automate.
Architecture
What you can build
After deploying, you'll have an API endpoint that can browse the web on your behalf—send it a natural language task like "find the top post on Hacker News" and it returns the result. The service runs a headless browser controlled by the LLM of your choice (OpenAI, Anthropic, or Google), so you can automate scraping, form submissions, or UI checks without managing browser infrastructure yourself.
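As a sketch of calling that endpoint from code rather than curl, here is a minimal standard-library client. It assumes the service URL is provided via a `SERVICE_URL` environment variable and that the response body is JSON; both are assumptions, not guarantees from the template.

```python
"""Minimal client for the deployed Browser Use API, stdlib only.

Assumptions: the service URL comes from the SERVICE_URL environment
variable, and POST /run returns a JSON body (shape not specified here).
"""
import json
import os
import urllib.request


def build_run_request(base_url: str, task: str) -> urllib.request.Request:
    """Build a POST /run request carrying a natural language task."""
    body = json.dumps({"task": task}).encode("utf-8")
    return urllib.request.Request(
        url=base_url.rstrip("/") + "/run",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


if __name__ == "__main__":
    req = build_run_request(
        os.environ["SERVICE_URL"],
        "Go to news.ycombinator.com and tell me the title of the #1 story",
    )
    # Browser tasks are slow; allow a generous timeout before giving up.
    with urllib.request.urlopen(req, timeout=120) as resp:
        print(json.loads(resp.read().decode("utf-8")))
```

The long timeout matters: a browser task that navigates, reads a page, and summarizes it routinely takes tens of seconds, so a default client timeout will often abort a request that would have succeeded.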
Key features
- Browser automation API: FastAPI service with POST /run endpoint that executes Browser Use tasks via natural language instructions.
- Multi-LLM provider support: Configurable backend supporting OpenAI, Anthropic, Google, and Browser Use Cloud as LLM providers with model selection.
- One-click Render deploy: Includes render.yaml blueprint for deploying directly from a forked repo with environment variable configuration.
- Playwright Docker image: Dockerfile based on Playwright Python image with Chromium and dependencies pre-installed for headless browser execution.
- Browser Use Cloud offload: Optional mode to offload browser execution to Browser Use servers, reducing local memory requirements.
Use cases
- DevOps engineer monitors competitor pricing pages for daily automated reports
- QA tester verifies login flows work correctly across staging environments
- Data analyst scrapes trending GitHub repos for weekly team newsletters
- Marketing manager extracts top Hacker News posts for content inspiration
Prerequisites
- One of the following:
  - OpenAI API Key: Your OpenAI API key, required when using OpenAI as the LLM provider (recommended for best reliability on Render).
  - Anthropic API Key: Your Anthropic API key, required when using Anthropic as the LLM provider.
  - Google API Key: Your Google API key, required when using Google as the LLM provider.
  - Browser Use API Key: Your API key for Browser Use Cloud, required when using Browser Use as the LLM provider.
Next steps
- Test the health endpoint by running curl "$SERVICE_URL/health". You should receive a JSON response with a healthy status confirming the API is ready to accept requests.
- Run your first browser task with curl -X POST "$SERVICE_URL/run" -H "Content-Type: application/json" -d '{"task":"Go to news.ycombinator.com and tell me the title of the #1 story"}'. You should see the agent navigate to Hacker News and return the current top story title within 30-60 seconds.
- Configure your preferred LLM provider by adding LLM_PROVIDER and LLM_MODEL environment variables in the Render dashboard, then redeploy. You should see the service restart and subsequent /run requests use your chosen model (check logs for the provider name).