I found the multicast registry here.
https://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml
I already knew that addresses between 224.0.0.0 and 239.255.255.255 are reserved for multicast.
Obviously multicast could be immensely useful if used by the general public. It would obsolete much of Facebook and YouTube and nearly all CDNs (content delivery networks), would kill Cloudflare and company’s business model, and would just rearrange the internet, with far-reaching social implications.
So, why haven’t all these multicast addresses been converted into usable private IPv4 unicast address space?
How would using these addresses obsolete Facebook and YouTube?
You’d just have a multicast-aware client look on the multicast net for video streams.
Not just video streams; everything that is a “facebook post” could be broadcast to anyone that subscribes to the multicast group, in a far more efficient and direct manner than the current CDN networks (assuming that private dark-fiber nets would have to be relinquished to the internet now that they couldn’t be privatized for profit).
And yes, that would have its own set of challenges, but what it would do is eliminate Facebook from the equation directly.
All aspects of “one person to many” communication are the purview of multicast.
But ok, let’s imagine an actual concrete example.
Imagine I have a dozen cat videos that I want to share with the world.
Approximately 4 megabytes each, they are stored on my de-badged Western Digital “WD Live Book” running a simple WebDAV server modified for multicast compatibility: something (multicast WebDAV) that I don’t think exists, because multicast is intentionally broken internet-wide and so no one ever tried.
Let’s imagine that 500 people in Japan want to see one at the same time.
Whatever multicast stream-browsing client they are using enters my multicast group, and my modified WebDAV server sends one copy of the 4-megabyte file, in approximately 40 milliseconds (roughly 800 Mbit/s, plausible on a gigabit uplink), as a series of multicast packets.
These packets make their way to the fattest, cheapest transpacific link from here to Japan.
Once they arrive at the right router in Japan, where the route splits between the 3 ISPs that host my 500 viewers, the packets are sent simultaneously toward each of the ISPs. This takes exactly the same amount of transpacific network resources as one unicast address sending one file to another unicast address across the ocean.
In each of the ISPs, as routers split between regions, towns, and streets, one copy of the packet is sent down each route that contains a person who wants it. Functionally that is the same as if we had a one-to-one unicast connection. But if there are 10 people on the same street requesting the same file, the network cost for all 10 of them is exactly the same as if just one person had requested the file.
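To make the send side concrete, here is a minimal Python sketch of what my hypothetical “multicast WebDAV” sender could look like; the group address, port, chunk size, and filename are invented for illustration, and a real protocol would need sequencing and loss repair on top of raw UDP:

```python
# Hypothetical send side: one pass over the file, each chunk sent exactly
# once to a multicast group address. All constants are illustrative.
import socket

GROUP = "239.1.2.3"   # an address in the administratively scoped range
PORT = 5004
CHUNK = 1400          # stay under a typical Ethernet MTU

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
# TTL > 1 so routers may forward the packets beyond the local segment
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 32)

with open("cat_video_01.mp4", "rb") as f:
    seq = 0
    while chunk := f.read(CHUNK):
        # a real protocol would add repair (e.g. FEC) on top, since
        # UDP multicast gives no retransmission
        sock.sendto(seq.to_bytes(4, "big") + chunk, (GROUP, PORT))
        seq += 1
```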
All of this is just ONE aspect of the almost unthinkable changes that functioning internet multicast could bring us, empowering us as individuals to talk to each other, as groups, in one-to-many communication, without the interposition of “someone else’s computer”.
This is the internet without having to use Zuckerberg’s computer.
It would be not just a global revolution but a local one. Imagine telling your street, with a simple text message, “hey guys, I’m making hot dogs, who wants some?” Something that is technologically impossible on the internet without using Zuckerberg’s computer, and even then, I can only reach three people on my street via Facebook anyway!
I don’t see how that would work. So all my friends’ video streams, for instance, would be streaming data to all my devices as they are broadcast.
But my laptop is currently asleep. It wouldn’t receive anything.
How do you solve that without storing the video on a server that I can pull from on demand?
Even for my devices that are on, they’d have to store everything as it was broadcast.
And the streams (including every other broadcast) would constantly be eating up my bandwidth.
How would I not receive streams that I’m not interested in? What would decide which broadcast packets do or don’t get sent to my router?
You subscribe to multicast groups. When the cast happens, either you have something set up to receive it or you don’t; that’s up to you.
In the old days we had these glass tubes; you turned them on and the stream appeared on the front, and when they were off you couldn’t see the stream.
We had a little black box underneath the tube, and if you pushed the right incantation of buttons, it would store the stream and you’d watch it later. I know that might sound a little far-fetched, a little magical, but the black square in your pocket that receives emails can receive those casts as well.
Depending on how much spam the manufacturer has injected into it, it can probably store around a couple hundred hours of video, and a couple hundred million tweets, or “short text messages” as they used to be called.

From the research I’ve done since posting that, it has become evident that all the little internet fiefdoms that make up the net each want a slice of the pie, and CDN networks, a parallel pseudo-network to the internet, are the bridge where they get to collect that toll.
If multicast worked as designed, this toll would be in danger, because anyone could just use it instead of the CDN, or instead of unicast and local caches.
No, the streams wouldn’t be “constantly eating up your bandwidth”; that’s a broadcast. A broadcast you always receive, but a multicast you need to join the multicast group for, or else you don’t receive it.
How would I not receive streams that I’m not interested in?
That’s the same as above: you just don’t subscribe, or you unsubscribe from the multicast group.
There would be multicast groups just for knowing what streams you could join if you wanted to.
Your street would have a multicast of just your neighbours and just for text.
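Mechanically, “subscribing” is just a socket option. Here is a minimal sketch of a receiver joining and later leaving a group (the group and port are the same invented values as in the sender sketch above); the join is what triggers IGMP toward the local router:

```python
# Hypothetical receive side: join the group, receive one cast, leave.
import socket
import struct

GROUP = "239.1.2.3"
PORT = 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP takes the group address plus the local interface;
# 0.0.0.0 lets the OS pick the interface
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(2048)   # blocks until a cast arrives

# unsubscribing is the same option in reverse
sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
```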
This might feel foreign to you, but that’s because the software to browse it is as non-existent as multicast capability on the internet itself.
Nobody is going to make these browsers and pseudo-internet VHSes when we all know that ISPs will never allow multicast through unless forced to by the statists; finally, a legitimate use of force besides building roads and power lines!
What would decide which broadcast packets do or don’t get sent to my router?
Multicast subscription to multicast groups: a trivial affair. It was trivial in the 1980s, and it is still trivial now. The ISPs will tell you this is a complex scaling impossibility; they are lying.
I see I misunderstood how you mean this to work: that routing would handle sending data only to subscribers. I was imagining that it meant a simple LAN broadcast using a packet with the subnet bits all set (e.g. 192.168.255.255). I think it’s more analogous to a mailing-list distribution, but for general data/streams?
But your earlier example of downloading the cat video still fails unless many people request the video at the same time (otherwise you’re multicasting to one). What happens if I watch the video on my phone while out, then watch it again on my laptop at home? It will still need sending twice.
Wouldn’t a more efficient approach just be to have something like IPFS with lots of local caching?
It’s not “video-on-demand”: you don’t subscribe to a file but to an address. The multicaster sends the file or stream or message when they’re ready; you receive them if you’re listening, and everyone subscribed gets the same series of packets. That’s the only real benefit multicast has over unicast: the sender sends each packet just once. There’s no server, no caching, no repeats. Direct from you to them, and it can work for everything.
So when a video is created it is immediately sent to subscribers?
In that case, for things to be sent once, it relies on the receivers always being online. That doesn’t work if my laptop is closed at the time.
That’s why I’m thinking that it needs online caching to work. Or everyone has a cloud server that handles sending and receiving while they’re not online.
In fact, that starts to sound like everyone running their own personal Lemmy-like instance, to which their friends subscribe.
And in that case it wouldn’t matter if messages were sent more than once, each person’s server would handle it.
The information could be live streamed from the camera or from a recording, that doesn’t make a difference. It could also be ANY data, not just video.
Also, yes, if you are not listening for the packets, then you will not receive them later. There are no servers between the sender and receiver; this means no gatekeeper, no middleman. It’s a democratization of broadcast, without intermediaries.
The only reason it is more efficient is because of how direct it is.
Before the internet we had TVs which, if they were not turned on, could not receive or store any of the video stream being broadcast; it’s a lot like that. You didn’t ask the TV station to send you a video file; they sent it out regardless, and you listened to it or you didn’t.
The problem with caching or storing anything is that now you’re back to needing one connection per receiver. You’re no longer sending out a single copy; you have to send 500 copies if 500 people want it, and that takes far too many resources.
Even assuming multicast worked across the internet, it’s not going to work in practice. Multicast works by sending a packet and fanning it out to all receivers.
It works with broadcast TV like IPTV because everybody is watching the same few channels at the same time, but on YouTube I can watch any video at any time. How does a mythical transmitter know which video packets to send, and when? Are they on a loop? Are clients receiving packets for videos they don’t care about?
You might be interested in PeerTube, which uses unicast peer-to-peer to distribute videos in a way that works.
These minor technical difficulties could easily be avoided; they are not at all a problem inside the CDN network. Once you’re past the gatekeepers, just look at Twitch: it’s a piece of cake to make it work, even though the engineers will disagree.
The only technical limitation they had to bypass was every little ISP being its own little bridge troll about it, and the CDNs sidestep this entirely by running a whole parallel network.
And now there’s all that CDN sunk cost making sure we’ll never EVER have working multicast on the internet, and they’ll have all the make-believe excuses to pretend we can’t, even though it’s basically the same as unicast routing with extra steps…
Multicast wouldn’t really replace any of the sites you mention because people want and are used to on-demand curated content.
It’s also not as practical as you make it sound to implement it for the entire internet. You claim that this would be efficient because you only have to send the packets out once regardless of the number of subscribers. But how would the packets be routed to your subscribers? Does every networking device on the internet hold a list of all subscriptions to correctly route the packets? Or would you blindly flood the entire internet with these packets?
people want and are used to on-demand curated content.
If we had had a multicast backbone, this would already be a solved problem. The curation would be crowdsourced; publishing would be auto-curated through cryptographically verified consensus reputation, with nodes emitting a history of opinions about other nodes, anonymous but with a reputation history, and we’d have the abuse part out of the game already. Instead we got Zuck’s faceless jannies wiping our collective butts!
It’s also not as practical as you make it sound to implement it for the entire internet.
This is the for-profit network operators’ consensus view, and their profits are in part made from selling the solution to the disabling of multicast back to us. I don’t believe it is meaningfully a technical problem; if there had been the will, it would already have been done.
Does every networking device on the internet hold a list of all subscriptions
The routers do (not every device), yes. The MBGP table will be megabytes long and extremely dynamic: impossible to solve in 1980, a crushing challenge in 1990, feasible but not economically expedient in 2000, globally trivial but opposed to financial interests in 2010, and “contempt of business model” in 2020.
As far as the routers are concerned, it’s a challenge on par with keeping DNS running on 2000s hardware.
Or would you blindly flood the entire internet with these packets?
No, that’s broadcast
globally trivial
Please share your trivial solution then.
We organize multicast nationally and globally like we do RF band plans. Some addresses are reserved for streams advertising what anyone can pick up, scoped to the global, national, regional, metropolitan, city, town, and street levels.
Eligible hosts subscribe, or advertise their “subscription”, to a particular address (and port; we’ve got 65,536 ports, and most communications between two hosts use just one).
The subscriptions are broadcast within their scope and pooled into distributed tables copied in bulk between routers. It’s a simple association of multicast groups and subscribed hosts.
The end result is that, at minimum, all routers sitting between a host and its subscribers have a copy of the multicast group membership for that address.
When a packet to that address arrives at any router in between, the route triggers and the router sends the packet down each of its WAN ports that has a subscriber downstream.
That’s basically the multicast process, with just slightly improved protocols and caching for efficiency.
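As a toy model of that pooled-table idea (not any real protocol, just the shape of the data structure being described): each router maps a group address to the set of ports with at least one subscriber downstream, and neighbours copy each other’s tables in bulk:

```python
# Toy sketch of pooled multicast membership tables; names are invented.
class Router:
    def __init__(self, name: str):
        self.name = name
        # group address -> set of WAN ports with a subscriber downstream
        self.memberships: dict[str, set[int]] = {}

    def subscribe(self, group: str, port: int) -> None:
        """Record that somewhere down `port` a host joined `group`."""
        self.memberships.setdefault(group, set()).add(port)

    def merge_from(self, other: "Router", via_port: int) -> None:
        """Bulk-copy a neighbour's table: every group it knows about is
        reachable from here through the port that neighbour sits on."""
        for group in other.memberships:
            self.subscribe(group, via_port)

tokyo = Router("tokyo-edge")
tokyo.subscribe("239.1.2.3", port=1)   # ISP A has a viewer
tokyo.subscribe("239.1.2.3", port=2)   # ISP B has a viewer

pacific = Router("transpacific")
pacific.merge_from(tokyo, via_port=7)  # whole table arrives in one copy
print(pacific.memberships)             # {'239.1.2.3': {7}}
```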
I say slightly improved, but I think it’s already all there, burned into the silicon. It’s just a matter of turning it on, and of politicians putting the screws on ISPs to make them play nice.
The bulk of it is already in these protocols:
IGMP (Internet Group Management Protocol)
PIM (Protocol Independent Multicast)
PIM Sparse Mode (PIM-SM)
PIM Dense Mode (PIM-DM)
MSDP (Multicast Source Discovery Protocol)
MBGP (Multiprotocol BGP)
There might still be a bit of glue needed to get there, but on the whole, on a technical front, this is less technology than it took to get BitTorrent to work. And BitTorrent works really, really well!
We’re going to need more client-side software, but that will come as soon as multicast works; it just didn’t make sense to build a global multicast stream browser when there was no global multicast.
We’re talking stream browsers, viewer clients for all kinds of media types, video viewers (“TV”, “radio”), text streams, notification streams, things we cannot even imagine yet.
And we’ll need a non-censorious curation system: an anonymous, cryptographic, crowdsourced reputation system, “Let’s Encrypt” on steroids. Voting and beyond-voting systems of likes, dislikes, superlikes, blocks, bans, replies, forwards, crossposts, all the social-media stuff but floating in mid-air without a single server or janitor managing it all, with just the same abuse-prevention systems that deal with DDoS and spam; everything else is fair game and Section 230 protected.
I think you are missing part of the intent of the question. Multicast is wasteful of a large chunk of the IPv4 range. If it were a smaller range, the leftover IPs would be available for general use.
You’re correct that it wouldn’t help for the other reasons OP noted, since CDNs do all that heavy lifting already, and do it better than pure multicast could (geo-location, for example).
Honestly, no clue, but in my career the answer to why something beneficial hasn’t been done is usually “backwards compatibility”.
What should we do when backward compatibility (and other likely excuses) is wielded against progress and justice?
I remember when backward compatibility killed progress for IRC and NNTP, and now we have Discord and Reddit, but we lost so much freedom in the process, not to mention atomization, social fragmentation, and an oppressive censorship culture wielded by both sides of the political spectrum.
Will backward compatibility, and its righteous stand against “capability backsliding”, always end up strangling us?
Does it have to be this way for every technological consensus that we establish?
To have it clogged up by well-meaning (and hidden commercial) interests until the technological consensus is fragmented into private technological fiefdoms (Cloudflare, Apple, the other usual suspects)?
(It took me over 40 years to figure that out about one extremely niche topic; it isn’t just an “accident of technology”. What else is like that in the world that I just can’t see?)
Multicast addresses are handled specially in routers and switches all over the world.
Changing that would require massive firmware updates everywhere to get this to work, and we can’t even get people to adopt IPv6. Never mind the complexity of figuring out how to manage IGMP group membership at the internet scale.
Given the complexity of either change, it’s better to adopt IPv6 and use PeerTube. Multicast at the internet scale won’t work, and IPv6 is less work.
Oh, multicast isn’t going to work on IPv6 either. From my research since writing the above, it has become clear to me that when they say “it won’t scale”,
they don’t mean some kind of big-O complexity or compute scaling; they mean economically.
This crucial feature, if it were unlocked, wouldn’t make the profits scale up:
they wouldn’t be able to say “hey Netflix, on our network multicast access is this much” or “Facebook, your CDN cache is that much”.

That’s what they mean by “scaling”; it’s not some kind of technical difficulty.
If we had governments that said “you will make it work OR ELSE”, it could work, and it could work very well.
I could send that 4-megabyte file to 500 people without paying a single bridge toll for it, just like in unicast.

Otherwise we’re ALWAYS going to have to use someone else’s computer, because that’s where the bridge toll is collected, if not in cash then in kind.
I don’t know who “they” is in this case, but let’s think about this for a minute.
Technically what do you need for this to work?
How many multicast addresses do you need?
How are multicast addresses assigned?
Can anybody write to any multicast address? How do I decide that 239.53.244.53 is for my file and not your movie?
How do we know who is listening? This is effectively BGP, but more tricky, because depending on the answer to the previous question you may not benefit from any network block sizes to reduce the routing info being shared.
How do you decide when to start transmitting a file? Is anybody listening? Does anybody care?
You seem to have latched on to assuming it would technically work, and haven’t asked whether it is actually a good technical solution. P2P is going to work better than multicast.
I don’t think there is a technical issue or any kind of complexity at issue here; the problem seems trivial, even though I haven’t worked out the details. It is moot anyway, since it’s broken on purpose to preserve “They’s” business model.
And “They” is the operators of the internet backbone, the CDNs, and the ISPs.
There are protocols for dealing with what you’re asking. Since multicast is a dead (murdered) technology, I can’t tell you exactly what does what, but here they are:
IGMP (Internet Group Management Protocol)
MLD (Multicast Listener Discovery)
PIM (Protocol Independent Multicast)
DVMRP (Distance Vector Multicast Routing Protocol)
MOSPF (Multicast OSPF)
MSDP (Multicast Source Discovery Protocol)
BSR (Bootstrap Router)
Auto-RP (Automatic Rendezvous Point)
MBGP (Multiprotocol BGP)
MADCAP (Multicast Address Dynamic Client Allocation Protocol)
GLOP Addressing
ALM (Application-Layer Multicast)
AMT (Automatic Multicast Tunneling)
SSMPing
MRD (Multicast Router Discovery)
CBT (Core-Based Trees)
mVPN (Multicast VPN)
There would be many more, of course: things that specifically resolve whatever unexpected issues might arise from “actually existing multicast” in the hands of “the public”, which has never happened.
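One item on that list is concrete enough to sketch. GLOP addressing (RFC 3180) answers “who owns which multicast address?” by embedding a 16-bit AS number in the middle two octets of 233/8, giving every AS its own /24 of multicast space:

```python
def glop_prefix(asn: int) -> str:
    """Return the /24 of 233/8 that RFC 3180 assigns to a 16-bit AS."""
    if not 0 < asn < 65536:
        raise ValueError("GLOP only covers 16-bit AS numbers")
    return f"233.{asn >> 8}.{asn & 0xFF}.0/24"

print(glop_prefix(5662))  # 233.22.30.0/24, the worked example in the RFC
```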
While I agree that P2P is the next best thing and torrents are pretty awesome, they are unicast, and ultimately they waste far more resources, especially intercontinental bandwidth, than multicast would.
Also, the open multicast protocols that might have developed, if ISPs hadn’t ruined multicast for everyone, would have steered the whole internet toward a standard solution, in the same way that we all use the same email system. There would be one way that was “the way” of doing this one-to-many communication.
To specifically answer your question: as far as routers are concerned, whenever a packet arrives, the router has to decide WHICH of its WAN ports the packet needs to go to, or whether the packet needs to be dropped.
From the point of view of the router, the whole internet is divided up among its WAN ports, and it sends the packet down the port with the shortest path to the destination host.
Multicast is a lot like that; the main difference is that the router MIGHT send the packet to more than one destination.
I think the solution is that receivers who wish to receive the multicast packets sent to a particular address (and port), from a particular source host, would subscribe to it. The literature mentions “multicast group subscription”; I’m pretty sure this is already what that is for.
I think what this does is add branches in the routing table for the subscribed addresses in the multicast range. This tells the routers about hosts that have become part of the multicast group. I’m not sure if this is supposed to inform every router on the whole internet, or just the routers between the source and the destination hosts, but it gives the routers that need to know where to send those packets, pretty much the same way as unicast, except with multiple destinations as specified in the multicast group’s subscriber list.
It’s really just unicast with extra steps, and not that many more steps, and those have all been baked into L3 switch silicon for decades. These protocols were designed to run on computers from the 1980s; I don’t believe for a minute that we can’t handle that today.
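To make the “unicast with extra steps” point concrete, here is a toy forwarding decision; the route and membership tables are invented, but the shape of the logic is the point: a unicast destination resolves to exactly one port, a multicast destination to a set of them:

```python
# Toy forwarding decision; tables are invented for illustration.
import ipaddress

unicast_routes = {"203.0.113.0/24": 1, "198.51.100.0/24": 2}
multicast_groups = {"239.1.2.3": {1, 2, 3}}   # ports with subscribers

def output_ports(dst: str) -> set[int]:
    addr = ipaddress.ip_address(dst)
    if addr.is_multicast:
        # fan out: one copy per port with a subscriber downstream
        return multicast_groups.get(dst, set())
    for prefix, port in unicast_routes.items():
        if addr in ipaddress.ip_network(prefix):
            return {port}          # exactly one best path
    return set()                   # no route: drop

print(output_ports("203.0.113.9"))  # {1}
print(output_ports("239.1.2.3"))    # {1, 2, 3}
```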
While I agree that P2P is the next best thing and torrents are pretty awesome, they are unicast, and ultimately they waste far more resources, especially intercontinental bandwidth, than multicast would.
Tell me if I understand the use case correctly here. I want to livestream to my 1000 viewers but don’t want to go through CDNs and gatekeepers like Twitch. I want to do it from my phone, as I am entitled to by the spirit of the free internet and the democratization of information, but I obviously do not have enough bandwidth for 1000 unicast video streams. If only I had the ability to use multicast, I could send a single video stream with multicast up my cellular connection, and at each internet backbone router it would get duplicated and split as many times as necessary to reach all 1000 of my subscribers. My 100 viewers in Japan are served by a single stream in the trans-Pacific backbone that gets split once it touches land, is that all correct?
In that case, torrent/peertube-like technology gets you almost all of the way there! As long as my upload ratio is greater than 1 (say I push the bandwidth equivalent of TWO video streams up my cellular), and each of my two initial viewers (using their own phones or tablets or whatever devices that can communicate with each other equally well across the global internet without any SERVERS, CDNS, or MIDDLEMEN in between, using IPv6 as God intended) pushes it to two more, and so on, then within 10 hops and 1 second of latency, all 1000 of my viewers can see my stream. Within 2 seconds, a million could see me in theory, with zero additional bandwidth required on my part, right? In terms of global bandwidth resource usage, we are already within a factor of two of the ideal case of working multicast!
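A quick back-of-the-envelope check of that doubling-tree claim, assuming roughly 100 ms per hop as above:

```python
# Each viewer re-uploads to two more, so the audience roughly doubles per
# hop: hops ~ log2(viewers). And every byte crosses the network about
# twice per viewer (once down, once up), hence the factor of two.
import math

for viewers in (1_000, 1_000_000):
    hops = math.ceil(math.log2(viewers))
    print(f"{viewers:>9,} viewers in {hops} hops (~{hops * 0.1:.1f} s)")
# 1,000 viewers -> 10 hops (~1.0 s); 1,000,000 -> 20 hops (~2.0 s)
```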
It is true that my 100 PeerTube subscribers in Japan could be triggering my video stream to be sent through the intercontinental pipe multiple times (and even back again!), but this is only so because the PeerTube protocol is not yet geography-aware! (Or maybe it already is?) Have you considered adding geographic awareness to PeerTube instead? Then only one viewer in Japan would receive my stream, and then pyramid-share it with all the other Japanese viewers.
P2P, IPv6, and geographic awareness is something that you can pursue right now, and it gets you within better than a factor of 2 of the ideal multicast dream! Is a factor of 2 an acceptable rate of wasted resources? And you can implement it all on your own, without requiring every single internet backbone provider and ISP to cooperate with you and upgrade their router hardware to support multicast. AND you get all the other features of PeerTube, like being able to watch a video that is NOT a livestream, or being able to read a comment that was posted while your device was powered off.
Also, I am intrigued by the great concern you give to intercontinental bandwidth usage, considering those pipes are owned by the same types of big for-profit companies as the walled-garden social networks and CDNs that are so distasteful. From the other end, the reason why geographic awareness has not already been implemented in BitTorrent and most other P2P protocols is precisely because bandwidth has been so plentiful. I can easily go to any website in Japan, play video games with the Chinese, or upload Linux images to the Europeans, without worrying about all the peering arrangements in between. If you are Netflix you have to deal with it, pay for peering, and build out local CDN boxes, but as a P2P user I’ve never had to think about it. Maybe if 1-to-millions torrent-based server-less livestreaming from your phone were to become popular, the intercontinental pipe owners might start complaining, but for now the internet just works.
single stream in the trans-Pacific backbone that gets split once it touches land, is that all correct?
Yep, that is exactly it
In that case, torrent/peertube-like technology gets you almost all of the way there!
I am also excited about peer-to-peer technology; however, P2P unicast remains a store-and-forward technology. Under the best of conditions we’re looking at at least 10 milliseconds of latency per hop and, of course, a doubling of the total network bandwidth used per node, as each node both receives and sends at least once. Still very exciting stuff that I wish were further along than it is, but this isn’t the “multicast dream” as such, which does not use “Zuck’s computer”, by which I mean it does not use the cloud, which is “someone else’s computer”. We can imagine a glorious benevolent P2P swarm that understands that its own participation is both a personal and a public good, that warm and fuzzy feeling of a torrent with a 10-to-1 seeding ratio. But we’re still using “someone else’s computer”… at least “we’re” using “our computers”, and that’s the royal “we”. Multicast is all switch, no server; all juice, no seed.
that can communicate with each other equally well across the global internet without any SERVERS, CDNS, or MIDDLEMEN in between, using IPv6 as God intended
Yes, well, each node is a server and a middleman, but it’s “our guys”, I guess. Of course, in the real world we’ve now got NAT, firewalls, STUN/TURN/ICE, blocked ports, port forwarding, all that jazz that used to put a serious strain on my router and might end up killing “our” phones’ batteries. Plus, with P2P, if you’re on cell your bandwidth is ratioed, and some scummy ISPs do not treat traffic the same way up and down. We’re starting to accumulate quite a lot of asterisks here.
we are already within a factor of two of the ideal case of working multicast!
Ah no, in this case the total bandwidth use has massively increased. Those users aren’t communicating with multicast efficiency; they are point-to-point, and those connections run through the same backbones hundreds of times, coming AND going. While the sender does not have to carry that load, the internet is now a MUCH more congested place because of the lack of multicast.
only so because the PeerTube protocol is not yet geography-aware
I don’t know enough about PeerTube to answer that. I suspect it’s best-effort, but I’m sure the focus there is on “unstoppable delivery” BEFORE “efficient delivery”.
it gets you within better than a factor of 2 of the ideal multicast dream
I’m not sure this math is mathing. We’re using double the total network bandwidth per host, and what matters isn’t geography awareness but network-topology awareness, a topology that is often obscured by the ISPs for a variety of benign and malevolent reasons. The worst part is that the peers will cross the backbone many times; I think we’re looking at a “network-effect scale” of wasted bandwidth compared with multicast. n²? I’m not sure; probably n² is the worst-case Ontario.
without requiring every single internet backbone provider and ISP to cooperate with you
Yes, this is essential; for multicast to “work” it would have to be like that. Same as with unicast and IPv4: the internet would be useless if you had to negotiate each packet between you and your peers.
considering those pipes are owned by the same types of big for-profit companies as the walled-garden social networks and CDNs that are so distasteful
Yes, I believe they do stand in the way. I believe most of the long-range communication runs over dark fiber, which they bought on the cheap and have made it their business model to exploit, and therefore they NEED to keep the utility of the public internet as low as possible; that includes never allowing “actually existing multicast” to flourish.
I can easily go to any website in Japan, play video games with the Chinese, or upload Linux images to the Europeans
You can because you’re a drop in the consumer bucket; you exist in the cracks of the system. If everyone suddenly used the internet to its full potential, we would get the screws turned on us. The internet is largely built like a cable-distribution network, and we’re supposed to just be passive consumers: we purchase product, we receive, we are not meant to send.
the intercontinental pipe owners might start complaining,
Yes, I think so too, and they wouldn’t wait for their complaints to be heard. We have been here before: throttling, QoS deprioritizing (down to drops), dropped packets, broken connections, port blocking, transient IP bans. We are sitting ducks on the big pipes if we start really using them properly. Multicast would essentially fly under the radar.
I don’t think there is a technical issue or any kind of complexity at issue here; the problem seems trivial, even though I haven’t worked out the details. It is moot anyway, since it’s broken on purpose to preserve “They’s” business model.
I’m explaining what the technical problems are with your idea. It seems like you don’t fully understand the technical details of these networking protocols, and that’s okay, but I’ve summarized a few nontrivial technical problems that aren’t just people keeping multicast from being used. I assure you, if multicast worked, big tech would want to use it. For example, Netflix would want to use it to distribute content to their CDN boxes and save tons of bandwidth.
But it does work if you run it on a parallel network, if you sidestep all the ISPs’ toll bridges.
What you can’t do is negotiate with every ISP on the internet between you and your end users, giving a 30% cut to every one of them along the way, especially since most of these ISPs were cable-TV distributors in their previous life. They made sure to break it, to break it so thoroughly that it becomes unimaginable that it could ever have worked in the first place.
And I think that has turned out just fine for Netflix: the enormous deployment costs of their CDN mean they have a moat; no small operator is going to be eating their lunch. Add to that brand-name and platform power, and the lack of a standardized, infrastructure-less “one-to-many” broadcasting method, an “email” for broadcasting.
We’ll be stuck using Zuck’s computer to talk to each other pretty much forever now …
They are still in use as multicast. Typically it’s for local traffic.
I don’t think multicast over the internet would have taken off, as multicast requires all routers between the source and any destination to be multicast-aware. Each would need to keep track of the subscriptions, meaning more resources, which would mean higher cost. There was also less interest, as one of the pluses of internet delivery was that delivery was on demand.
In the end, CDNs were going to be created anyway for static content, and streaming could just use the same systems to produce effectively the same improvements.
But your next question would be: why have they not done it for the experimental range?
Well, everything knows those packets are not meant to be on the internet, so it will block them. If you want to ask the internet to upgrade everything for that, well, just ask how the IPv6 upgrade is going.
Each would need to keep track of the subscriptions, meaning more resources, which would mean higher cost.
They already need to do that with unicast, which necessarily takes more resources. Think about it: 500 unicast streams or a single multicast stream; it’s not even close how much less computing power multicast takes.
Make no mistake, multicast is broken by choice. Working multicast is “contempt of business model”; it would cannibalize CDN profits by being as free as unicast.
the same systems to produce effectively the same improvements.
One crucial distinction is that you, as an individual, will need Zuck’s permission to use their system, their way, under their rules.
And of course by “Zuck” I mean “the cloud”, aka “someone else’s computer”, which would not be needed if multicast just worked: another enforced cloud dependency.
1/16 of all IPv4 addresses was reserved for PUBLIC USE, but it remains firmly in the grasp of private hands, private hands that want you to pay the toll and obey their masters.
well, just ask how the IPv6 upgrade is going.
My GPON fiber ISP said “we’ll probably never implement IPv6”, even though every single piece of equipment on their network supports it, even their horrible rebadged Huawei routers.
It won’t work, and it will keep not working, until we make them.