@task(concurrency=50)
async def analyze_documents(file_paths):
    # get_files is assumed to return (path, text) pairs for each document
    files = await get_files(file_paths)
    for path, text in files:
        summary = await summarize_with_llm(path, text)
        await save_summary(path, summary)

@task(retries=3, retry_backoff_factor=2)
async def summarize_with_llm(path, text):
    return await call_llm_for_summary(text)

@task
async def save_summary(path, summary):
    await db.upsert_summary(path, summary)
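The save_summary task above delegates to db.upsert_summary, and an upsert keyed on the file path is what keeps a retried run from writing the same summary twice. The db object is not shown in the snippet; as one possible shape, here is a minimal sketch using SQLite, where the summaries table and its columns are assumptions made for illustration:

import sqlite3

# Hypothetical backing for db.upsert_summary.
# Assumes: CREATE TABLE summaries (path TEXT PRIMARY KEY, summary TEXT)
def upsert_summary(conn: sqlite3.Connection, path: str, summary: str) -> None:
    # Upserting on the path makes the write idempotent: a retried task
    # overwrites the existing row instead of inserting a duplicate.
    conn.execute(
        """
        INSERT INTO summaries (path, summary)
        VALUES (?, ?)
        ON CONFLICT(path) DO UPDATE SET summary = excluded.summary
        """,
        (path, summary),
    )
    conn.commit()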
Avoid hitting rate limits with exponential backoff (see the sketch below).
Restore to your last healthy state after an interruption.
Avoid duplicated work, even across retries.
Spin up hundreds or even thousands of workers when your queue spikes.
Go beyond 15-minute serverless limits: tasks can stay active for a day or more.
Workers automatically spin down when there’s nothing to work on. Only pay for what you use.
Start running with just a few lines of code: no heavy frameworks or steep learning curve.
Iterate on your machine, then scale on ours.
View per-task logs, retries, and timelines.
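As a rough illustration of the exponential backoff mentioned above, here is what retries=3 and retry_backoff_factor=2 (the settings used in the first snippet) imply if written by hand. The one-second base delay and the jitter are assumptions for illustration, not the platform's exact wait schedule:

import asyncio
import random

async def call_with_backoff(fn, *args, retries=3, backoff_factor=2, base_delay=1.0):
    # Plain-asyncio illustration of exponential backoff; the task decorator
    # handles this for you when you set retries and retry_backoff_factor.
    for attempt in range(retries + 1):
        try:
            return await fn(*args)
        except Exception:
            if attempt == retries:
                raise
            # Waits grow geometrically (1s, 2s, 4s, ...) plus a little jitter
            delay = base_delay * (backoff_factor ** attempt)
            await asyncio.sleep(delay + random.uniform(0, 0.5))

# Example (reusing the hypothetical helper from the first snippet):
# summary = await call_with_backoff(call_llm_for_summary, text)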
@task(concurrency=50)
async def query_llms_and_evaluate(prompt):
    # Query 3 LLMs in parallel using model names
    model_configs = [
        {"provider": "openai", "model": "gpt-5"},
        {"provider": "anthropic", "model": "claude-opus-4"},
        {"provider": "google", "model": "gemini-2.5-pro"},
    ]
    queries = [query_llm(cfg["provider"], cfg["model"], prompt) for cfg in model_configs]
    responses = await asyncio.gather(*queries)
    # Have a 4th LLM evaluate and pick the best response
    return await select_best_result(responses)

@task(retries=3, retry_wait_duration=5000)
async def query_llm(provider, model_name, prompt):
    providers = {
        "openai": ChatOpenAI,
        "anthropic": ChatAnthropic,
        "google": ChatGoogleGenerativeAI,
    }
    # Construct the appropriate LLM client based on provider
    llm = providers[provider](model=model_name)
    return await call_llm(llm, [HumanMessage(content=prompt)])

@task(retries=2)
async def select_best_result(responses):
    evaluator = ChatOpenAI(model="gpt-5")
    eval_prompt = "Which response is best?\n" + "\n".join(
        f"{i + 1}: {r}" for i, r in enumerate(responses)
    )
    evaluation = await evaluator.ainvoke([HumanMessage(content=eval_prompt)])
    return evaluation.content
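Because each piece is an ordinary async Python function, the evaluator step can be exercised on your machine before it runs as a task. A minimal local sketch, assuming the langchain-openai package is installed and an OpenAI API key is configured; the canned responses are stand-ins for the fan-out step:

import asyncio

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

async def main():
    # Stand-in responses so the judge can be tested without querying 3 LLMs first
    responses = [
        "Paris is the capital of France.",
        "France's capital is Paris, home to about 2.1 million people.",
    ]
    evaluator = ChatOpenAI(model="gpt-5")
    eval_prompt = "Which response is best?\n" + "\n".join(
        f"{i + 1}: {r}" for i, r in enumerate(responses)
    )
    evaluation = await evaluator.ainvoke([HumanMessage(content=eval_prompt)])
    print(evaluation.content)

if __name__ == "__main__":
    asyncio.run(main())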