The return_run_details parameter is an optional boolean flag for the GitHub Actions workflow dispatch API that tells the endpoint to return a 200 OK response containing the new workflow run's ID and URL. This creates a direct link between the API request and the resulting execution, enabling precise workflow run tracking without polling.

Back in 2009, when I was working at a boutique IT consulting firm, my life revolved around parsing terabytes of server logs. We had a primitive job scheduler—a custom Python beast that would kick off data aggregation scripts across a few dozen servers. The biggest headache wasn't the data size; it was the "fire and forget" nature of our trigger mechanism. We would send a start command, get a simple "OK" back, and then have absolutely no idea which process ID matched which request. We spent weeks writing fragile polling logic just to figure out if a job had failed or finished.

For a long time, the GitHub Actions API felt eerily similar to that old system. You would hit the workflow_dispatch endpoint to trigger a build, and GitHub would return a 204 No Content. It was basically the server nodding at you and saying, "I heard you." But it didn't tell you what it was doing. If you wanted to track that specific run, you had to query the list of recent runs and guess which one was yours based on timestamps. It was a race condition nightmare.

That has finally changed. The API now supports returning the run ID immediately. It sounds like a minor patch, but for anyone building serious automation orchestration or connecting CI/CD pipelines to external dashboards, this removes a massive architectural headache. It means we can finally stop writing retry loops just to find a simple ID.

The Problem: The 204 Black Hole

Before this update, integrating GitHub Actions into a larger platform—like a custom dashboard or an orchestration tool like n8n—was frustrating. Here is how the flow usually looked:

  1. Your app sends a REST API POST request to trigger a workflow.
  2. GitHub responds with 204 No Content: success, but zero data.
  3. Your app waits 5-10 seconds (hoping GitHub processed the queue).
  4. Your app polls the /runs endpoint, sorting by creation time.
  5. You write logic to match the timestamp and branch name to guess which run corresponds to your trigger.

I have seen teams fail to account for high-concurrency environments here. If two webhooks trigger the same workflow within the same second, your polling logic often grabs the wrong run ID. This destroys observability and makes debugging automated pipelines nearly impossible.
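To make that failure mode concrete, here is a rough sketch of the guessing logic from steps 4 and 5. The run dictionaries mirror the shape of the /runs list response, but the matching heuristic itself is illustrative of the old pattern, not something to copy:

```python
from datetime import datetime, timedelta, timezone

def guess_run_id(recent_runs, ref, dispatched_at, window_seconds=10):
    """Old-style heuristic: pick the newest run on our branch created
    shortly after we sent the dispatch. With two concurrent dispatches
    in the same window, this can hand back the wrong run."""
    candidates = [
        run for run in recent_runs
        if run["head_branch"] == ref
        and abs((run["created_at"] - dispatched_at).total_seconds()) <= window_seconds
    ]
    if not candidates:
        return None  # queue lag: the caller has to sleep and poll again
    # Newest first; ties between concurrent triggers are unresolvable here.
    return max(candidates, key=lambda run: run["created_at"])["id"]

# Two dispatches land within the same second: both callers "find" run 102.
now = datetime.now(timezone.utc)
runs = [
    {"id": 101, "head_branch": "main", "created_at": now},
    {"id": 102, "head_branch": "main", "created_at": now + timedelta(milliseconds=300)},
]
print(guess_run_id(runs, "main", now))
```

Both concurrent callers resolve to the same run ID, and one of them is wrong, which is exactly the race described above.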

The Solution: Using return_run_details

The fix is straightforward. You can now pass return_run_details: true in the body of your request. Instead of a 204, you get a 200 OK and a JSON body containing the execution details.

Here is how the request looks using curl:

curl -L \
  -X POST \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer <YOUR-TOKEN>" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  https://api.github.com/repos/OWNER/REPO/actions/workflows/WORKFLOW_ID/dispatches \
  -d '{"ref":"main","inputs":{"environment":"production"},"return_run_details":true}'

The response payload will now provide the critical identifiers immediately:

  • id: The unique integer for the run.
  • workflow_id: The ID of the workflow file.
  • html_url: The direct link to the UI for that run.
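In application code, the whole round trip collapses into a single request. Here is a minimal sketch using only the standard library; the URL, headers, and body mirror the curl example above, and error handling is deliberately elided:

```python
import json
import urllib.request

API = "https://api.github.com/repos/{owner}/{repo}/actions/workflows/{workflow}/dispatches"

def build_dispatch(owner, repo, workflow, token, ref="main", inputs=None):
    """Build the POST request that asks GitHub for run details up front."""
    body = {"ref": ref, "inputs": inputs or {}, "return_run_details": True}
    return urllib.request.Request(
        API.format(owner=owner, repo=repo, workflow=workflow),
        data=json.dumps(body).encode(),
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {token}",
            "X-GitHub-Api-Version": "2022-11-28",
        },
        method="POST",
    )

def run_details(payload):
    """Pull the identifiers we care about out of the 200 OK body."""
    return {"id": payload["id"], "url": payload["html_url"]}

# Sending the request (commented out so the sketch stays side-effect free):
# req = build_dispatch("OWNER", "REPO", "deploy.yml", token)
# with urllib.request.urlopen(req) as resp:   # 200 OK instead of 204
#     details = run_details(json.load(resp))
#     print(f"Run {details['id']} started: {details['url']}")
```

The run ID comes back in the same response that triggered the run, so there is nothing left to poll for.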

This allows you to store the id in your database immediately alongside the request record. In my work with SocketStore, where we handle millions of API requests, having a 1-to-1 mapping between a trigger and an execution ID is the difference between a system you can debug in five minutes and one that takes five hours.

Idempotency and Automation Orchestration

This update significantly improves webhook idempotency and safety. When you are building systems that chain events together—what we often call automation orchestration—you need certainty.

For example, if you are using a tool like n8n or Zapier to trigger a deployment workflow when a record updates in your database, you can now capture the Run ID in the very next step. You can log "Deployment #89201 started for Ticket #55" immediately.

I have verified this works seamlessly with the latest GitHub CLI as well. If you are a terminal junkie like me, the command gh workflow run now returns the URL of the created run by default (as of v2.87.0). You used to have to run the command and then quickly type gh run list to see what happened. Now, it just tells you.

Expanded Input Limits

While we are discussing the workflow_dispatch event, it is worth noting another recent improvement that pairs well with better tracking. As of late 2025, GitHub increased the input limit for these manual triggers.

| Feature | Previous Limit | New Limit | Impact |
| --- | --- | --- | --- |
| Input Parameters | 10 inputs | 25 inputs | Allows for complex configuration objects without bundling JSON strings. |
| Response Code | 204 No Content | 200 OK (with flag) | Enables immediate tracing and logging of workflow runs. |
| Token Scope | Restricted | GITHUB_TOKEN allowed | Simplifies authentication for internal repository automation. |

This increase to 25 inputs allows for much more granular control over your API-triggered workflows. I recently refactored a client's deployment pipeline that required passing distinct flags for region, instance size, and feature toggles. We used to have to compress these into a single JSON string input and parse it inside the workflow runner. Now, we just map them to distinct inputs.
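That refactor is mostly a diff in the request body. A before-and-after sketch, where the parameter names (region, instance_size, feature toggles) come from the client example above rather than any fixed schema:

```python
import json

# Before: everything squeezed into a single input, parsed again inside the runner.
bundled = {"ref": "main", "inputs": {"config": json.dumps(
    {"region": "us-east-1", "instance_size": "m5.large", "feature_flags": "beta_ui"}
)}}

# After: with up to 25 inputs, each flag becomes a first-class input the
# workflow can reference directly as inputs.region, inputs.instance_size, etc.
distinct = {
    "ref": "main",
    "inputs": {
        "region": "us-east-1",
        "instance_size": "m5.large",
        "feature_flags": "beta_ui",
    },
    "return_run_details": True,
}
```

The workflow side gets simpler too: no parsing step, and each input can be validated and documented on its own.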

n8n API Integration Strategy

If you are building an n8n API integration, this update simplifies your HTTP Request node configuration. Previously, you had to construct a loop node to poll for the run ID. Now, your flow looks like this:

  1. Webhook Trigger: Receive signal from external app.
  2. HTTP Request: POST to GitHub API with return_run_details: true.
  3. Set Node: Extract body.id and body.html_url from the HTTP response.
  4. Slack/Email Node: Send message: "Build started! Track it here: [URL]".

This eliminates about three nodes and complex logic from your canvas. It reduces the API calls against your GitHub rate limit, which is vital if you are on a free or team plan with strict limits.
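Whether you use a Set node or a Code node for step 3, the mapping is trivial once the run details arrive in the response. A sketch of the extraction, with the field names taken from the response shape described earlier and the message format purely as an example:

```python
def build_notification(dispatch_response):
    """Map the dispatch response straight to the chat message for step 4."""
    return (
        f"Build started! Run #{dispatch_response['id']} "
        f"Track it here: {dispatch_response['html_url']}"
    )

message = build_notification({
    "id": 89201,
    "html_url": "https://github.com/OWNER/REPO/actions/runs/89201",
})
```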

Observability and Debugging

In the world of big data and platform engineering, observability is everything. If I can't see it, I assume it is broken. When I built SocketStore, one of our core promises was 99.9% uptime. You do not achieve that by guessing the status of your background jobs.

By capturing the Run ID at the moment of dispatch, you create a perfect audit trail. If a deployment fails, you don't have to ask "Which run was triggered by the API call at 2:03 PM?" You look at your logs, see the returned ID, and go straight to that run.

This is particularly useful when using the GITHUB_TOKEN to trigger workflows from other workflows. Since 2022, GitHub has allowed this (preventing recursive loops automatically), but tracking the "child" workflow from the "parent" was difficult. Now, the parent workflow can log the child's ID to its own output summary.

Commercial Context

While GitHub Actions is ubiquitous, it is not free once you hit scale. It is important to keep an eye on your billing if you are heavily utilizing GitHub Actions API triggers.

  • GitHub Free: 2,000 automation minutes/month.
  • GitHub Team: 3,000 minutes/month (starts around $4/user/month).
  • Enterprise: 50,000 minutes/month.

If you are building custom dashboards to visualize this data, integration costs are usually just developer time. However, if you are relying on third-party observability platforms (like Datadog or Splunk) to ingest these logs, ensure you are only logging the necessary metadata returned by this new API capability to keep ingestion costs down.

Need Better Data Pipelines?

I have spent the last decade dealing with API integrations, from parsing raw logs in a cold Midwestern garage to architecting SocketStore. If your team is struggling to build reliable data ingestion pipelines or needs a unified API to handle social media data without the headache of constant maintenance, that is exactly what we built SocketStore to do.

We provide a single interface for real-time analytics across multiple platforms, guaranteed by a 99.9% uptime SLA. If you are looking to simplify your data stack, check out our API documentation or look at our pricing options. For more complex custom architecture needs, I also take on select consulting projects to help teams optimize their automation orchestration.

Frequently Asked Questions

Do I need to update my existing API calls to keep them working?

No. The return_run_details parameter is optional. If you do not include it, the API will continue to return a 204 No Content status code, so your existing integrations will not break.

Does this feature work with the GitHub CLI?

Yes. As of GitHub CLI version 2.87.0, the gh workflow run command automatically utilizes this feature. It will print the URL of the new run to the console and provide a command to view the run details immediately.

Can I use this for workflows triggered by push or pull_request events?

No. This specific capability applies to the workflow_dispatch endpoint, which is used for manual or API-based triggers. Push and PR events function differently and are tied to git commits rather than explicit API calls.

How does this impact API rate limits?

It actually helps conserve your rate limits. Previously, you might have made one call to trigger the workflow and then 5-10 calls polling the list endpoint to find the run ID. Now, you get the data in a single request, reducing your overall API consumption.

Is this available on GitHub Enterprise Server?

Features usually land on GitHub.com first and roll out to Enterprise Server in subsequent release cycles. You should check your specific server version's release notes, but it is expected to be available in recent updates.

What happens if the workflow fails to start immediately?

If the request is valid but the workflow fails to queue (for example, if the reference branch does not exist or the workflow file is invalid), the API will return a 4xx error with details, rather than a 200 or 204.