Use Cases

Automating Uptime Monitoring: Server Surveillance with n8n and Make.com

Automatic server and website monitoring with alerts and reporting.

Your website goes down and you hear about it from your customers first? It doesn't have to be that way. With automated uptime monitoring, you get notified within seconds, not hours. In this guide, we show you how to combine professional server monitoring with automation.

Why Automate Uptime Monitoring?

The Cost of Downtime:

Business Size      Cost/Minute of Downtime
Small E-Commerce   $10-50
Medium SaaS        $100-500
Enterprise         $1,000-10,000

Response Time is Critical:
  • Without monitoring: 30-60 min until detection
  • With monitoring: 30 seconds until detection
  • With automation: 30 seconds until response

Uptime Monitoring Tools Overview

Free Options

Tool          Checks      Interval   API
UptimeRobot   50          5 min      Yes
Freshping     50          1 min      Yes
Uptime Kuma   Unlimited   1 min      Yes
HetrixTools   15          1 min      Yes

Paid Options

Tool            Starting Price   Features
Pingdom         $10/month        Classic, reliable
Datadog         $15/month        Comprehensive, Enterprise
Better Uptime   $20/month        Beautiful status pages
Checkly         $7/month         Synthetic monitoring

Step 1: Setting Up Monitoring

Option A: UptimeRobot (Free)

  • Create an account at uptimerobot.com
  • Add a monitor:
    - Monitor Type: HTTP(s)
    - URL: https://your-website.com
    - Monitoring Interval: 5 minutes
  • Configure alert contacts
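
If you prefer to script this step, the same monitor can also be created through the UptimeRobot API instead of the UI. A minimal sketch (assumes an API key in the UPTIMEROBOT_API_KEY environment variable and Node 18+ for the built-in fetch; check the v2 API docs for the exact parameter names):

// Create an HTTP(s) monitor via the UptimeRobot v2 API (sketch)
const params = new URLSearchParams({
  api_key: process.env.UPTIMEROBOT_API_KEY,
  format: 'json',
  type: '1',                        // 1 = HTTP(s)
  url: 'https://your-website.com',
  friendly_name: 'Website',
  interval: '300'                   // check every 300 seconds (5 minutes)
});

const response = await fetch('https://api.uptimerobot.com/v2/newMonitor', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: params
});

console.log(await response.json()); // expect stat: "ok" and the new monitor ID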

Option B: Uptime Kuma (Self-Hosted)

# Start with Docker
docker run -d \
  --name uptime-kuma \
  -p 3001:3001 \
  -v uptime-kuma:/app/data \
  louislam/uptime-kuma:1

Then open http://localhost:3001 and set up your monitors.

Step 2: Webhooks for Automation

UptimeRobot Webhook

  • My Settings -> Alert Contacts -> Add Alert Contact
  • Type: Webhook
  • URL: Your n8n/Make.com webhook URL
  • POST Value:
{
  "monitorID": "*monitorID*",
  "monitorURL": "*monitorURL*",
  "monitorFriendlyName": "*monitorFriendlyName*",
  "alertType": "*alertType*",
  "alertTypeFriendlyName": "*alertTypeFriendlyName*",
  "alertDetails": "*alertDetails*",
  "alertDuration": "*alertDuration*"
}

Uptime Kuma Webhook

  • Settings -> Notifications -> Add
  • Type: Webhook
  • URL: Your webhook URL
  • Request Body:
{
  "monitor": "{{name}}",
  "status": "{{status}}",
  "msg": "{{msg}}",
  "url": "{{url}}"
}
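
Because the two tools send differently named fields, it helps to normalize the payload directly after the webhook trigger so the rest of the workflow only has to deal with one format. A minimal sketch for an n8n Code node (field names follow the two examples above; how Uptime Kuma renders {{status}} depends on your template, so adjust the isDown check if needed):

// n8n Code node: normalize UptimeRobot and Uptime Kuma webhook payloads
const body = $json;

let event;
if (body.monitorID !== undefined) {
  // UptimeRobot payload (see example above)
  event = {
    source: 'uptimerobot',
    name: body.monitorFriendlyName,
    url: body.monitorURL,
    isDown: body.alertType === '1',          // 1 = Down, 2 = Up
    details: body.alertDetails,
    durationSeconds: Number(body.alertDuration) || 0
  };
} else {
  // Uptime Kuma payload (see example above)
  event = {
    source: 'uptime-kuma',
    name: body.monitor,
    url: body.url,
    isDown: String(body.status).toLowerCase() !== 'up',
    details: body.msg,
    durationSeconds: null
  };
}

return [{ json: event }];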

Step 3: Automated Responses

Workflow: Simple Alert

Uptime Monitor Webhook (Server Down)
    |
Parallel:
    |-- Slack: #alerts Channel
    |-- SMS: On-Call Team
    |-- PagerDuty: Create incident
    +-- Status Page: Update status

n8n Implementation

Node 1: Webhook Trigger

// Webhook receives UptimeRobot alert
{
  "monitorID": "123456",
  "monitorURL": "https://api.your-app.com",
  "monitorFriendlyName": "API Server",
  "alertType": "1", // 1 = Down, 2 = Up
  "alertTypeFriendlyName": "Down",
  "alertDetails": "Connection timeout",
  "alertDuration": "0" // Seconds offline
}

Node 2: Check Alert Type

// Node: Code (route on alertType; alternatively, use an IF node on alertType)
const isDown = $json.alertType === "1";   // 1 = Down
const isUp = $json.alertType === "2";     // 2 = Up

if (isDown) {
  return [{ json: { ...$json, route: 'down' } }];
}
if (isUp) {
  return [{ json: { ...$json, route: 'up' } }];
}
return [];

Node 3: Slack Alert (Down)

// Node: Slack
{
  "channel": "#alerts",
  "text": "",
  "attachments": [
    {
      "color": "danger",
      "title": "Server Down: {{ $json.monitorFriendlyName }}",
      "fields": [
        { "title": "URL", "value": "{{ $json.monitorURL }}", "short": true },
        { "title": "Reason", "value": "{{ $json.alertDetails }}", "short": true },
        { "title": "Time", "value": "{{ $now }}", "short": true }
      ]
    }
  ]
}

Node 4: Recovery Alert (Up)

// Node: Slack
{
  "channel": "#alerts",
  "text": "",
  "attachments": [
    {
      "color": "good",
      "title": "Server Recovered: {{ $json.monitorFriendlyName }}",
      "fields": [
        { "title": "URL", "value": "{{ $json.monitorURL }}", "short": true },
        { "title": "Downtime", "value": "{{ $json.alertDuration }} seconds", "short": true }
      ]
    }
  ]
}

Workflow: Escalation

Server Down Alert
    |
Slack: #alerts
    |
Wait: 5 minutes
    |
Still down? --No--> End
    | Yes
SMS to On-Call
    |
Wait: 10 minutes
    |
Still down? --No--> End
    | Yes
Call Team Lead
    |
PagerDuty Incident

Implementation

// Node: Wait + Status-Check
// (sendSlackAlert, sendSMS and createPagerDutyIncident are workflow helpers;
//  checkStatus and wait are sketched below)

// Step 1: Send initial alert
await sendSlackAlert(monitor);

// Step 2: Wait 5 minutes
await wait(5 * 60 * 1000);

// Step 3: Check status again
const stillDown = await checkStatus(monitor.url);

if (stillDown) {
  // Escalation Level 2: SMS
  await sendSMS(onCallPhone, `ALERT: ${monitor.name} down for 5 min`);

  await wait(10 * 60 * 1000);
  const stillDown2 = await checkStatus(monitor.url);

  if (stillDown2) {
    // Escalation Level 3: Call + PagerDuty
    await createPagerDutyIncident(monitor);
  }
}
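
checkStatus and wait are left open above; a minimal sketch of both helpers (names are our own), using a plain HTTP GET with a timeout as the health probe:

// Hypothetical helpers for the escalation code above
const wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function checkStatus(url) {
  // Returns true if the service still looks down.
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 10000); // 10 s timeout
  try {
    const response = await fetch(url, { signal: controller.signal });
    return !response.ok;   // non-2xx response -> still down
  } catch (e) {
    return true;           // network error or timeout -> still down
  } finally {
    clearTimeout(timer);
  }
}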

Workflow: Automatic Diagnostics

Server Down Alert
    |
Automatic Checks:
    |-- DNS resolvable?
    |-- SSL certificate valid?
    |-- Server reachable (Ping)?
    |-- HTTP Response Code?
    +-- Last deployment time?
    |
Create diagnostic report
    |
Send with alert

Implementation

// Node: Code - Gather diagnostics
// (resolveDNS and checkSSL are helpers - see the sketch below)

const url = new URL($json.monitorURL);

const diagnostics = {
  timestamp: new Date().toISOString(),
  monitor: $json.monitorFriendlyName,
  checks: []
};

// DNS Check
try {
  const dns = await resolveDNS(url.hostname);
  diagnostics.checks.push({ name: 'DNS', status: 'OK', value: dns });
} catch (e) {
  diagnostics.checks.push({ name: 'DNS', status: 'FAIL', error: e.message });
}

// HTTP Check (note: native fetch ignores "timeout"; use an AbortController for a hard limit)
try {
  const response = await fetch(url, { timeout: 10000 });
  diagnostics.checks.push({
    name: 'HTTP',
    status: response.ok ? 'OK' : 'WARN',
    value: response.status
  });
} catch (e) {
  diagnostics.checks.push({ name: 'HTTP', status: 'FAIL', error: e.message });
}

// SSL Check
try {
  const ssl = await checkSSL(url.hostname);
  diagnostics.checks.push({
    name: 'SSL',
    status: ssl.valid ? 'OK' : 'WARN',
    value: `Expires: ${ssl.expiresIn} days`
  });
} catch (e) {
  diagnostics.checks.push({ name: 'SSL', status: 'FAIL', error: e.message });
}

return [{ json: diagnostics }];
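
resolveDNS and checkSSL are not built in; a sketch of both helpers using Node's dns and tls modules (return shapes match their use above; inside an n8n Code node, built-in modules may first have to be allowed via NODE_FUNCTION_ALLOW_BUILTIN):

// Hypothetical helpers for the diagnostics code above
const { promises: dnsPromises } = require('dns');
const tls = require('tls');

async function resolveDNS(hostname) {
  // Resolves the hostname to its IPv4 addresses; throws if resolution fails.
  return dnsPromises.resolve4(hostname);
}

function checkSSL(hostname) {
  // Opens a TLS connection and reads the certificate expiry date.
  return new Promise((resolve, reject) => {
    const socket = tls.connect(443, hostname, { servername: hostname }, () => {
      const cert = socket.getPeerCertificate();
      const expiresIn = Math.floor((new Date(cert.valid_to) - Date.now()) / 86400000);
      socket.end();
      resolve({ valid: socket.authorized && expiresIn > 0, expiresIn });
    });
    socket.setTimeout(10000, () => { socket.destroy(); reject(new Error('TLS timeout')); });
    socket.on('error', reject);
  });
}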

Workflow: Automatic Restart

Warning: Only for non-critical services!

Server Down Alert
    |
Is Auto-Recovery enabled?
    | Yes
SSH to Server:
"systemctl restart nginx"
    |
Wait 30 seconds
    |
Check status
    |
Up? -> Report success
Down? -> Escalate manually

n8n SSH Node

// Node: SSH
{
  "host": "server.your-domain.com",
  "port": 22,
  "username": "deploy",
  "privateKey": "{{ $env.SSH_PRIVATE_KEY }}",
  "command": "sudo systemctl restart nginx && sleep 5 && systemctl is-active nginx"
}
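
The SSH node's output can then drive the "Up? / Down?" branch from the diagram. A sketch for a following Code node (the stdout field name is an assumption; check what your SSH node actually returns):

// Evaluate the restart result: "systemctl is-active" prints "active" on success.
const output = String($json.stdout || '').trim();
const restarted = output.split('\n').pop() === 'active';

return [{
  json: {
    restarted,
    nextStep: restarted ? 'report success' : 'escalate manually'
  }
}];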

Workflow: Status Page Update

Server Down Alert
    |
Update Status Page:
    - Component: "API" -> "Partial Outage"
    - Create incident
    |
[On Recovery]
    |
Update Status Page:
    - Component: "API" -> "Operational"
    - Mark incident as "Resolved"

Integration with Statuspage.io

// Node: HTTP Request
// Statuspage.io API - create incident
{
  "method": "POST",
  "url": "https://api.statuspage.io/v1/pages/{{ $env.STATUSPAGE_ID }}/incidents",
  "headers": {
    "Authorization": "OAuth {{ $env.STATUSPAGE_TOKEN }}"
  },
  "body": {
    "incident": {
      "name": "{{ $json.monitorFriendlyName }} - Service Disruption",
      "status": "investigating",
      "impact_override": "partial",
      "body": "We are currently investigating issues with {{ $json.monitorFriendlyName }}.",
      "component_ids": ["abc123"],
      "components": {
        "abc123": "partial_outage"
      }
    }
  }
}
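
On recovery, the same API can close the incident again. A sketch (the incident ID comes from the response of the create call above and is stored here in a hypothetical incidentId field; check the Statuspage API docs for the exact fields):

// Node: HTTP Request
// Statuspage.io API - resolve incident (sketch)
{
  "method": "PATCH",
  "url": "https://api.statuspage.io/v1/pages/{{ $env.STATUSPAGE_ID }}/incidents/{{ $json.incidentId }}",
  "headers": {
    "Authorization": "OAuth {{ $env.STATUSPAGE_TOKEN }}"
  },
  "body": {
    "incident": {
      "status": "resolved",
      "body": "{{ $json.monitorFriendlyName }} is operational again.",
      "components": {
        "abc123": "operational"
      }
    }
  }
}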

Multi-Location Monitoring

Check from different locations:

Parallel Checks:
    |-- Frankfurt (Hetzner)
    |-- Amsterdam (DigitalOcean)
    |-- New York (AWS)
    +-- Singapore (GCP)
    |
At least 2 failed?
    |
Yes: Real outage -> Alert
No: Local issue -> Log
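
In n8n, the "at least 2 failed" decision can be a small Code node after the parallel location checks. A sketch (the per-location item shape with location and up fields is our own assumption):

// Merge the parallel location checks and decide whether the outage is real.
const results = $input.all().map((item) => item.json); // e.g. { location: 'frankfurt', up: false }
const failed = results.filter((r) => !r.up);

return [{
  json: {
    failedLocations: failed.map((r) => r.location),
    realOutage: failed.length >= 2   // quorum: at least 2 locations agree
  }
}];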

Dashboard with Grafana

Collecting Metrics

// On each check: save the metric
{
  "metric": "uptime_check",
  "tags": {
    "monitor": $json.monitorFriendlyName,
    "location": "frankfurt"
  },
  "fields": {
    "response_time": 234,
    "status": 1, // 1 = up, 0 = down
    "ssl_days_remaining": 45
  }
}
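
One way to persist such a metric for Grafana is the InfluxDB v2 write API with line protocol. A sketch (the INFLUX_URL, INFLUX_ORG, INFLUX_BUCKET and INFLUX_TOKEN variables are assumptions):

// Sketch: write one uptime check as an InfluxDB line-protocol point
const monitor = $json.monitorFriendlyName.replace(/[ ,=]/g, '\\$&'); // escape tag value
const line =
  `uptime_check,monitor=${monitor},location=frankfurt ` +
  `response_time=234,status=1i,ssl_days_remaining=45i`;

await fetch(
  `${process.env.INFLUX_URL}/api/v2/write?org=${process.env.INFLUX_ORG}` +
  `&bucket=${process.env.INFLUX_BUCKET}&precision=s`,
  {
    method: 'POST',
    headers: { Authorization: `Token ${process.env.INFLUX_TOKEN}` },
    body: line
  }
);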

Grafana Dashboard

  • Uptime percentage (last 30 days)
  • Response time graph
  • Incidents timeline
  • SSL expiration warnings

Make.com Scenario

Module Setup

  • Webhooks -> Custom Webhook (UptimeRobot)
  • Router -> Down / Up / Degraded
  • Slack -> Send alert
  • Twilio -> SMS on escalation
  • HTTP -> Status Page API
  • Google Sheets -> Incident log

Best Practices

1. Check Intervals

Service Type      Recommended Interval
Landing Page      5 minutes
E-Commerce        1 minute
API (SLA)         30 seconds
Payment Gateway   30 seconds

2. Alerting Rules

  • No alerts at night for non-critical services
  • Deduplication (not 100 alerts for 1 outage) - see the sketch below
  • Clear escalation paths
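
Deduplication can be as simple as remembering which monitors already have an open alert. A sketch for an n8n Code node using workflow static data (field names follow the UptimeRobot payload above):

// Drop repeated "down" alerts for a monitor until a matching "up" alert arrives.
const staticData = $getWorkflowStaticData('global');
staticData.openAlerts = staticData.openAlerts || {};

const monitorId = $json.monitorID;
const isDown = $json.alertType === '1';

if (isDown && staticData.openAlerts[monitorId]) {
  return [];                                      // duplicate alert -> suppress
}

if (isDown) {
  staticData.openAlerts[monitorId] = Date.now();  // remember the open incident
} else {
  delete staticData.openAlerts[monitorId];        // "up" alert closes it
}

return [{ json: $json }];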

3. Avoiding False Positives

// Only alert after X consecutive failed checks
// (getRecentChecks and sendAlert are workflow helpers; getRecentChecks is sketched below)
const failedChecks = await getRecentChecks(monitorId, 3);
const allFailed = failedChecks.length === 3 && failedChecks.every((c) => c.status === 'down');

if (allFailed) {
  await sendAlert(monitor);
}
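
getRecentChecks is left open above; a minimal sketch that keeps a short per-monitor history in n8n's workflow static data (a database or the monitoring tool's API would be more robust):

// Hypothetical helper: record the current check and return the last `count` results.
function getRecentChecks(monitorId, count) {
  const staticData = $getWorkflowStaticData('global');
  staticData.history = staticData.history || {};

  const history = staticData.history[monitorId] || [];
  history.push({ status: $json.alertType === '1' ? 'down' : 'up', at: Date.now() });
  staticData.history[monitorId] = history.slice(-20); // keep the last 20 entries

  return staticData.history[monitorId].slice(-count);
}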

Costs

Setup                             Cost/Month
UptimeRobot (Free) + n8n          $20-50
Uptime Kuma (Self-Hosted) + n8n   $5-10 (server)
Pingdom + Make.com                $30-50
Better Uptime                     $20 (all-in-one)

Conclusion

Automated uptime monitoring is essential:

  • Instant notification on outages
  • Automatic escalation
  • Self-healing systems possible
  • Transparency through status pages

Next Steps

  • Choose a monitoring tool (UptimeRobot to start)
  • Set up the webhook in n8n/Make.com
  • Configure an alert channel (Slack/SMS)
  • Define an escalation process
  • Set up a status page (optional)

We can help you build a robust monitoring infrastructure.

Questions About Automation?

Our experts will help you make the right decisions for your business.