What Is Restreaming (Simulcasting) and How Does the Server Do It?

In streaming, restreaming (also called simulcasting) means taking one live input and delivering it to multiple destinations and/or multiple output formats at the same time. For radio DJs, podcasters, churches, school stations, and live event crews, restreaming is how you go live once and appear everywhere—your station player, mobile apps, smart speakers, and social platforms—without running multiple encoders on your laptop.

Shoutcast Net treats restreaming as a server-side distribution and protocol translation problem: ingest one clean stream, then replicate, repackage, and relay it to listeners and platforms efficiently. Done correctly, this lets you stream from any device to any device while keeping bandwidth costs predictable, latency controlled, and reliability high.

You’ll also see why Shoutcast Net’s flat-rate model is often a better fit than Wowza-style pricing that can become expensive with per-hour/per-viewer billing—especially when you’re scaling to large audiences or running 24/7 stations.

Quick FAQ

  • Restreaming: one input → many outputs
  • Simulcasting: same concept, often cross-platform
  • Why server-side: saves uplink bandwidth + improves reliability
  • Best for: 24/7 stations, live shows, multi-platform video

What is restreaming (simulcasting)?

Restreaming is the act of receiving an incoming stream (audio-only or audio+video) and immediately re-publishing it to one or more outputs. Those outputs might be:

  • Multiple listener endpoints (your website player, iOS/Android apps, smart speaker skills)
  • Multiple platforms: restream to social/video destinations such as Facebook, Twitch, and YouTube
  • Multiple formats from the same source (MP3 + AAC; HLS + WebRTC)
  • Multiple bitrates (ABR ladder for varying network conditions)

Why creators use restreaming

Most creators start by going live from one encoder (BUTT, Mixxx, OBS, a hardware encoder, a phone app). Without restreaming, you’d need to upload separate streams to each destination—doubling or tripling your required uplink bandwidth and increasing the chance of failure.

Restreaming moves that complexity to the server. You send one clean ingest, and the server fans it out. This makes it practical to stream from any device to any device—even if the ingest protocol is not the same as the playback protocol.

Real-world example (radio + social video)

A school radio station might do an audio-only Shoutcast stream for regular listening, but also simulcast the studio camera to social platforms during big events. Restreaming lets the producer send a single OBS output (or a single audio encoder output) and have the server deliver:

  • Audio stream to the station player (MP3/AAC)
  • Video stream to social platforms via RTMP
  • Mobile-friendly HLS for embedded playback

Pro Tip

If your home/venue uplink is limited (common for churches and outdoor events), server-side restreaming is the simplest way to avoid saturating your connection. Upload once, distribute many.

How a restreaming server works under the hood

A restreaming server is essentially a real-time media router. It accepts an incoming stream, optionally processes it, and then republishes it to multiple destinations while monitoring health, buffering, and reconnection logic.

Core pipeline: ingest → normalize → distribute

At a high level, the data path looks like this:

        (Encoder: OBS / BUTT / Hardware)
                 |
                 |  Ingest (RTMP / Icecast / Shoutcast / SRT)
                 v
        +------------------------+
        |   Restreaming Server   |
        |------------------------|
        | 1) Auth + stream key   |
        | 2) Jitter buffer       |
        | 3) Mux/Demux           |
        | 4) Optional transcode  |
        | 5) Packetization       |
        | 6) Fan-out to outputs  |
        +------------------------+
           |        |        |
           v        v        v
        HLS/CDN        Shoutcast       RTMP
       (to viewers)  (to listeners)  (to Facebook/Twitch/YouTube)

Fan-out: one input, many outputs

The “fan-out” stage is where restreaming saves you money and complexity. Instead of your encoder pushing 3 separate uploads, the server duplicates the stream internally and pushes copies to each destination. This often includes:

  • Output mapping (which input stream goes to which platform/channel)
  • Rate control (avoid bursts that cause RTMP disconnects)
  • Health checks (auto-reconnect to remote ingest endpoints)
  • Failover behaviors (switch to backup input if primary drops)
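The fan-out stage described above can be sketched as a duplicate-and-queue loop. This is a minimal illustration under assumed names (the destination labels and `fan_out` function are ours, not any specific server's implementation):

```python
from queue import Queue

def fan_out(chunk: bytes, outputs: dict[str, Queue]) -> None:
    """Duplicate one ingest chunk into every destination's send queue.

    Each destination drains its own queue independently, so a slow
    RTMP endpoint never stalls the Shoutcast or HLS outputs.
    """
    for name, q in outputs.items():
        q.put(chunk)  # per-destination buffering absorbs downstream jitter

# One input chunk, three hypothetical destinations:
outputs = {"hls": Queue(), "shoutcast": Queue(), "rtmp_social": Queue()}
fan_out(b"\x00" * 188, outputs)
print([q.qsize() for q in outputs.values()])  # → [1, 1, 1]
```

In a real server each queue would feed a separate network writer with its own rate control and reconnect logic, which is exactly why one slow platform does not take the others down.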

Protocol translation: bridging incompatible worlds

Many workflows require protocol conversion and container/codec repackaging. That’s where the server can accept “X” and publish “Y” so your audience can play it anywhere. In practical terms, restreaming platforms aim to bridge any supported streaming protocol to any other (RTMP, RTSP, WebRTC, SRT, etc.), depending on the feature set.

Optional processing: transcode vs passthrough

There are two major modes:

  • Passthrough (no transcode): lowest CPU cost, preserves original quality, minimal added latency.
  • Transcode: converts codec/bitrate/resolution, enables ABR ladders, but adds CPU cost and typically increases latency.

For audio-only stations, passthrough is usually ideal. For video simulcast, you may transcode to create an HLS ladder (e.g., 1080p/720p/480p) while also pushing a single RTMP feed to social.

Pro Tip

When you can, ingest at a stable bitrate and let the server do distribution. A stable ingest reduces “encoder-induced” disconnects and makes the fan-out more reliable—especially to strict RTMP endpoints.

Protocols and formats: Icecast/Shoutcast vs RTMP/HLS

Restreaming gets confusing because people mix up protocols (how data moves) and formats (how audio/video is packaged and encoded). Let’s clarify the most common combinations for radio and live-event broadcasters.

Audio streaming (Shoutcast/Icecast)

Shoutcast and Icecast are classic HTTP-based streaming systems widely used for radio. They’re excellent for 24/7 audio, simple player embeds, and broad compatibility. Shoutcast Net offers both Shoutcast hosting and Icecast options, and can pair them with AutoDJ for always-on stations.

Typical audio codecs:

  • MP3: maximum compatibility
  • AAC/AAC+: better quality per kbps (useful at 48–96 kbps)

Video-first streaming (RTMP ingest, HLS playback)

RTMP is commonly used to send live video from an encoder (OBS) to a server or social platform. Most viewer playback, however, happens over HLS (HTTP Live Streaming) because it works well with CDNs and browsers/devices.

A typical pattern is:

  • Ingest: RTMP from OBS → server
  • Playback: HLS from server/CDN → viewers

Latency notes: HLS vs WebRTC

If you need interactive chat-driven shows, auctions, or “call-in” style participation, latency matters. HLS often lands in the 10–30 second range depending on segment size and buffering. WebRTC is designed for real-time interaction and can reach very low latency, around 3 seconds or below, in optimized deployments.

This is why modern restreaming discussions often include bridging multiple protocols so the same event can be delivered as:

  • WebRTC (ultra low latency) for interactive viewers
  • HLS (scalable) for large audiences
  • Shoutcast/Icecast (audio-only) for radio listeners

Comparison table: practical differences

Use case | Best-fit protocol | Typical latency | Pros | Trade-offs
24/7 radio station audio | Shoutcast / Icecast (HTTP) | ~5–20 s (player dependent) | Simple, compatible, great for continuous audio | Not designed for ultra-low-latency interaction
Simulcast to social platforms | RTMP (ingest) | ~3–10 s (platform dependent) | Standard for OBS → platforms | Strict ingest rules; reconnect logic matters
Browser/mobile live playback at scale | HLS (playback) | ~10–30 s | CDN-friendly, reliable, adaptive bitrate | Higher latency than real-time protocols
Interactive “real-time” viewing | WebRTC | Sub-second to ~3 s | Closest to real-time | Harder to scale; needs careful infrastructure

Pro Tip

If your goal is “go live everywhere,” pick an ingest your encoder supports reliably (often RTMP for video, Shoutcast/Icecast for audio) and let the server do the multi-protocol publishing for different viewer devices.

Bandwidth, latency, and reliability (99.9% uptime)

Restreaming is not just “copy the stream.” The server must keep the stream stable under varying network conditions and audience spikes. The three big engineering constraints are bandwidth, latency, and reliability.

Bandwidth math you can actually use

A simple bandwidth estimate (audio) is:

egress_mbps ≈ (bitrate_kbps × listeners) / 1000

Example:
128 kbps stream × 500 listeners ≈ 64,000 kbps ≈ 64 Mbps egress

For video, the same concept applies, but bitrates are much larger (e.g., 2,500–6,000 kbps for 720p/1080p). This is where server-side restreaming and CDN-friendly packaging (HLS) become critical.
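The egress formula above can be wrapped in a tiny helper for capacity planning (a sketch; the function name is ours):

```python
def egress_mbps(bitrate_kbps: float, listeners: int) -> float:
    """Approximate total server egress: bitrate × audience, in Mbps."""
    return bitrate_kbps * listeners / 1000

# The worked example from the text: 128 kbps audio, 500 listeners
print(egress_mbps(128, 500))   # → 64.0 Mbps
# A 720p video stream at 3500 kbps to the same audience is far heavier:
print(egress_mbps(3500, 500))  # → 1750.0 Mbps
```

The second call shows why video at scale almost always goes through CDN-friendly HLS rather than direct connections from one server.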

Why uploading once matters (uplink savings)

Let’s say you’re simulcasting a 4 Mbps video feed to three platforms. If you send directly from OBS to each platform, your outbound uplink must sustain ~12 Mbps continuously (plus overhead). With restreaming, you upload 4 Mbps once to the server, and the server pushes out to platforms and viewers from a data center with stronger connectivity.
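The uplink saving is simple multiplication. A hedged sketch of the comparison (the helper name is hypothetical):

```python
def uplink_mbps(feed_mbps: float, destinations: int, server_side: bool) -> float:
    """Required encoder uplink: one copy with server-side restreaming,
    one copy per destination when pushing directly from the encoder."""
    return feed_mbps if server_side else feed_mbps * destinations

# The 4 Mbps feed to three platforms from the example above:
print(uplink_mbps(4.0, 3, server_side=False))  # → 12.0 Mbps from OBS
print(uplink_mbps(4.0, 3, server_side=True))   # → 4.0 Mbps, upload once
```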

Latency: where it comes from

Latency is cumulative. Common contributors include:

  • Encoder buffering (GOP size, B-frames, VBV)
  • Network jitter buffers (to prevent stutter)
  • Protocol design (HLS segments add inherent delay)
  • Player buffering (device-dependent)

If your use case demands very low latency (around 3 seconds or less), you typically combine low-latency settings on the encoder with a low-latency delivery protocol (often WebRTC, or low-latency HLS variants) and avoid unnecessary transcoding steps.
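Because latency is cumulative, a rough end-to-end budget is just the sum of the stages listed above. The per-stage numbers below are illustrative assumptions, not measurements:

```python
# Illustrative per-stage delays in seconds (assumed values, not benchmarks)
latency_budget = {
    "encoder_buffering": 1.0,   # GOP size, B-frames, VBV
    "jitter_buffer": 0.5,       # smooths network variance
    "hls_segments": 3 * 4.0,    # players often buffer 3 segments of ~4 s
    "player_buffer": 2.0,       # device dependent
}
total = sum(latency_budget.values())
print(f"{total:.1f} s end-to-end")  # → 15.5 s, far above a ~3 s target
```

Shrinking any single stage helps, but hitting interactive latency usually means changing the protocol (WebRTC), not just tuning buffers.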

Reliability: what “99.9% uptime” means

99.9% uptime is a practical reliability target for streaming services. Over a 30-day month (~720 hours), 99.9% corresponds to about:

720 hours × 0.1% downtime ≈ 0.72 hours ≈ 43 minutes/month
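That arithmetic generalizes to any uptime target (the helper name is ours):

```python
def downtime_minutes(uptime_pct: float, hours: float = 720) -> float:
    """Allowed downtime per period (default: a 30-day month) in minutes."""
    return hours * 60 * (1 - uptime_pct / 100)

print(round(downtime_minutes(99.9)))   # → 43 minutes/month
print(round(downtime_minutes(99.0)))   # → 432 minutes/month, ten times worse
```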

Achieving this requires redundant network paths, service monitoring, automated restarts, and capacity planning so audience spikes don’t overload a single node. Shoutcast Net’s hosting is designed for continuous broadcasting with features like SSL streaming (HTTPS-compatible players) and unlimited-listener plans so growth doesn’t force a surprise migration.

Security and trust: SSL streaming

Modern browsers increasingly block or warn about mixed-content audio players. SSL streaming (serving streams over HTTPS) keeps embeds working on secure websites and is especially important for schools, churches, and organizations with strict IT policies.

Pro Tip

If you ever see “random buffering,” test your ingest stability first. Many “server problems” are actually local uplink issues. Restreaming helps by moving the heavy distribution to the data center, but your single ingest must still be stable.

AutoDJ + live shows: common 24/7 station workflows

A lot of stations don’t just go live occasionally—they run 24/7. That’s where AutoDJ changes everything: you can schedule content, playlists, and rotations so the station stays online even when no one is live in the studio.

Workflow A: AutoDJ baseline + live takeover

This is the classic internet radio workflow:

  • AutoDJ plays music and pre-recorded shows continuously
  • When a DJ connects live, the server switches to the live source (highest priority)
  • When the DJ disconnects, the server automatically falls back to AutoDJ

Priority 1: Live DJ (source client)
       |
       v (disconnect)
Priority 2: AutoDJ playlist rotation
       |
       v (schedule)
Priority 3: Backup loop / emergency file
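The priority chain above amounts to picking the highest-priority source that is currently connected. A minimal sketch (source names and the function are ours, not a specific server's API):

```python
def active_source(sources: list[tuple[int, str, bool]]) -> str:
    """Return the connected source with the best (lowest) priority number."""
    connected = [(prio, name) for prio, name, up in sources if up]
    return min(connected)[1]

# Live DJ has dropped; AutoDJ and the backup loop are still available:
sources = [(1, "live_dj", False), (2, "autodj", True), (3, "backup_loop", True)]
print(active_source(sources))  # → autodj
```

Re-running this check whenever a source connects or disconnects is what makes the "live takeover" and automatic fallback behavior feel seamless to listeners.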

Workflow B: Live show restreamed to social + audio listeners

A podcast or church service might be produced in OBS (video) but also needs an audio-only stream for listeners. Restreaming can publish multiple outputs from the same production:

  • Audio-only to Shoutcast/Icecast for listeners at work or in the car
  • RTMP output to social destinations (simulcast)
  • HLS for a website player

This is an example of using the server to stream from any device to any device without forcing the producer to run multiple encodes.

Workflow C: Emergency fallback + scheduled programming

Professional stations plan for the “DJ’s internet drops” scenario. A common approach is to keep AutoDJ active as a safety net and schedule IDs/PSAs. This reduces dead air and improves listener retention. Industry analytics often show that buffering and silence are top reasons for drop-offs, so eliminating dead air can materially improve average listening time.

Where Shoutcast Net fits

Shoutcast Net combines hosting + AutoDJ so you can build a station that behaves like a broadcast workflow, not just a “when I remember to go live” stream. Many legacy Shoutcast setups required more manual switching and had limitations around modern HTTPS embeds and scaling; Shoutcast Net focuses on an updated, reliable hosting experience with 99.9% uptime and SSL streaming.

Pro Tip

Treat AutoDJ like your “automation layer” and live DJs like scheduled overrides. That mindset makes your station predictable, easier to staff, and easier to scale.

When to restream and cost models (flat-rate vs per-viewer)

Restreaming is most valuable when you need reach, reliability, or both. The pricing model you choose can also determine whether restreaming remains affordable when you grow.

When restreaming is the right tool

  • Multi-platform live events: one show, multiple destinations (restream to Facebook, Twitch, YouTube)
  • Limited uplink venues: churches, outdoor events, schools on shared networks
  • Hybrid audio/video brands: audio station + occasional video simulcasts
  • Scaling audiences: you want infrastructure that won’t collapse at peak
  • Format/protocol bridging: any supported streaming protocol to any other (RTMP, RTSP, WebRTC, SRT, etc.)

Cost models: why billing structure matters

Two common ways streaming services charge:

  • Flat-rate hosting: predictable monthly cost (great for 24/7 and growth)
  • Usage-based billing: per-hour, per-GB, or per-viewer (can spike unexpectedly)

Many creators get surprised by usage-based platforms when a stream goes viral or when they run long events. Wowza-style pricing is often criticized for being expensive at scale due to per-hour/per-viewer billing, which can punish success. For budget-sensitive organizations (schools, churches, community radio), predictability matters.

Shoutcast Net’s advantage: predictable, scalable hosting

Shoutcast Net is built for broadcasters who want to grow without rethinking the budget every month. Key advantages include:

  • Plans starting at $4/month
  • Unlimited-listener plans (so audience growth doesn’t break the model)
  • 99.9% uptime target for consistent availability
  • SSL streaming for modern secure embeds
  • AutoDJ support for true 24/7 stations
  • A 7-day free trial to test everything before you commit

Practical example: budgeting a growing station

Imagine a community station averaging 200 concurrent listeners at 128 kbps, with occasional peaks to 1,000 during local sports. Usage-based services may bill significantly more during peak months (and require careful monitoring). A flat-rate, broadcaster-focused host is often simpler for operations, planning, and sponsorship commitments.

If you’re ready to build or upgrade, start with the Shoutcast Net shop to choose a plan, or test first with the 7-day trial.

Pro Tip

If you stream frequently (weekly shows, 24/7 radio, regular services), prioritize predictable flat-rate hosting. Usage-based pricing can look cheap at first but becomes painful exactly when your audience grows.

Next Step: Audio station

Launch a reliable radio stream with modern compatibility and scaling.

Explore Shoutcast Hosting →

Next Step: 24/7 automation

Keep your station online even when nobody is live.

Add AutoDJ →