I found the multicast registry here.
https://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml
I already knew that addresses between 224.0.0.0 and 239.255.255.255 are reserved for multicast.
Obviously, multicast could be immensely useful if it were available to the general public: it would obsolete much of facebook and youtube and nearly all CDNs (content delivery networks), it would kill the business model of cloudflare and company, and it would just rearrange the internet, with far-reaching social implications.
So why haven’t all these multicast addresses been converted into usable private IPv4 unicast address space?
What should we do when backward compatibility (and other likely excuses) is wielded against progress and justice?
I remember when backward compatibility killed progress for IRC and NNTP; now we have discord and reddit, but we lost so much freedom in the process, not to mention the atomization, social fragmentation, and oppressive censorship culture wielded by both sides of the political spectrum.
Will backward compatibility, and its righteous stand against “capability backsliding”, always end up strangling us?
Does it have to be this way for every technological consensus that we establish?
To have it clogged up by well-meaning (and hidden commercial) interests until the technological consensus is fragmented into private technological fiefdoms (cloudflare, apple, the other usual suspects).
(It took me over 40 years to figure that out about one extremely niche topic, and it isn’t just an “accident of technology”. What else is like that in the world that I just can’t see?)
Multicast addresses are handled specially in routers and switches all over the world.
Changing that would require massive firmware updates everywhere to get this to work, and we can’t even get people to adopt IPv6. Never mind the complexity of figuring out how to manage IGMP group membership at Internet scale.
Given the complexity of either change, it’s better to adopt IPv6 and use PeerTube. Multicast at Internet scale won’t work, and IPv6 is less work.
Oh, multicast’s not going to work on IPv6 either. From my research since writing the above, it has become clear to me that when they say “it won’t scale,” they don’t mean some kind of big-O complexity or compute scaling; they mean economically. This crucial feature, if it were unlocked, wouldn’t make the profits scale up. They wouldn’t be able to say “hey netflix, on our network multicast access costs this much” or “facebook, your CDN cache costs that much.” That’s what they mean by “scaling”; it’s not some kind of technical difficulty.
If we had governments, and they said “you will make it work OR ELSE,” it could work; it could work very well.
I could send that 4 megabyte file to 500 people without paying a single bridge toll for it, just like in unicast.
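To put a number on it, here is a quick sketch (assuming every receiver gets the full file):

```python
# Rough upload cost of sending one 4 MB file to 500 receivers.
file_mb, receivers = 4, 500
print(f"unicast:   sender uploads {file_mb * receivers} MB (one copy per receiver)")
print(f"multicast: sender uploads {file_mb} MB (the routers duplicate it downstream)")
```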
We’re ALWAYS going to have to use someone else’s computer because that’s where the bridge toll is collected, if not in cash then in kind.
I don’t know who “they” is in this case, but let’s think about this for a minute.
Technically what do you need for this to work?
How many multicast addresses do you need? How are multicast addresses assigned? Can anybody write to any multicast address? How do I decide that 239.53.244.53 is for my file and not your movie? How do we know who is listening? This is effectively BGP, but trickier, because depending on the answer to the previous question you may not benefit from any network block sizes to reduce the routing info being shared. How do you decide when to start transmitting a file? Is anybody listening? Does anybody care?
You seem latched on to assuming that it would technically work, and you haven’t asked whether it is actually a good technical solution. P2P is going to work better than multicast.
I don’t think there is a technical issue or any kind of complexity at issue here; the problem seems trivial even though I haven’t worked out the details. It is moot, since it’s broken on purpose to preserve “They’s” business model.
And “They” is the operators of the internet backbone, the CDNs, the ISPs.
There are protocols for dealing with what you’re asking. Since multicast is a dead (murdered) technology, I can’t tell you exactly what does what, but here they are: IGMP, PIM, MSDP, MADCAP, SAP/SDP.
There would be many more of course, things that specifically resolve any unexpected issues that might arise from “actually existing multicast” in the hands of “the public”, which has never happened.
While I agree that P2P is the next best thing and torrents are pretty awesome, they are unicast, and ultimately they waste far more resources, especially intercontinental bandwidth, than multicast would.
Also, the open multicast protocols that might have developed, if ISPs hadn’t ruined multicast for everyone, would have steered the whole internet toward a “standard solution,” in the same way that we all use the “same email system.” There would be one way that was “the way” of doing this one-to-many communication.
To answer your question specifically: as far as routers are concerned, whenever a packet arrives, the router has to decide WHICH of its WAN ports the packet needs to go out of, or whether the packet needs to be dropped.
From the point of view of the router, the whole internet is divided up among the WAN ports it has, and it sends the packet down the port with the shortest path to the destination host.
Multicast is a lot like that; the main difference is that the router MIGHT send the packet to more than one destination.
I think the solution is that receivers that wish to receive the multicast packets sent to a particular address (and port), from a particular source host, would subscribe to it. The literature mentions “multicast group subscription”; I’m pretty sure this is already what that is for.
I think what this does is add branches in the routing table for the subscribed addresses in the multicast range. This tells the router about hosts that become part of the multicast group. I’m not sure if this is supposed to tell every router on the whole internet, or just the routers between the source and the destination hosts, but it gives the routers that need to know where to send those packets, pretty much in the same way as unicast, except with multiple destinations as specified in the multicast group’s subscriber list.
It’s really just unicast with extra steps, and not that many more steps, and those have all been baked into L3 switch silicon for decades. These protocols were designed to run on computers from the 1980s; I don’t believe for a minute that we can’t handle that today.
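Here is a toy sketch of what I mean by “unicast with extra steps.” The addresses, port names, and table layout are all made up for illustration; real routers build this state with IGMP/PIM joins rather than a Python dict:

```python
# Toy model of one router's forwarding decision (illustration only).

# Unicast: destination host -> the single best output port (simplified, no prefixes).
unicast_routes = {"203.0.113.7": "wan0", "198.51.100.9": "wan1"}

# Multicast: (source, group) -> every output port with subscribers somewhere downstream.
multicast_routes = {}

def subscribe(source, group, port):
    """Add a branch when a downstream subscriber joins (think IGMP/PIM join)."""
    multicast_routes.setdefault((source, group), set()).add(port)

def send(packet, port):
    print(f"{port}: {packet['src']} -> {packet['dst']}")

def forward(packet):
    dst = packet["dst"]
    if dst.startswith("239."):  # crude stand-in for "is this a multicast address?"
        # The only real difference from unicast: the packet may go out several ports.
        for port in sorted(multicast_routes.get((packet["src"], dst), set())):
            send(packet, port)
    elif dst in unicast_routes:
        send(packet, unicast_routes[dst])

# Two downstream networks join my (made-up) group 239.53.244.53:
subscribe("192.0.2.10", "239.53.244.53", "wan1")
subscribe("192.0.2.10", "239.53.244.53", "wan2")
forward({"src": "192.0.2.10", "dst": "239.53.244.53"})  # duplicated out wan1 and wan2
forward({"src": "192.0.2.10", "dst": "203.0.113.7"})    # plain unicast, out wan0 only
```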
Tell me if I understand the use case correctly here. I want to livestream to my 1000 viewers but don’t want to go through CDNs and gatekeepers like Twitch. I want to do it from my phone, as I am entitled to by the spirit of the free internet and the democratization of information, but I obviously do not have enough bandwidth for 1000 unicast video streams. If only I had the ability to use multicast, I could send a single video stream with multicast up my cellular connection, and at each internet backbone router it would get duplicated and split as many times as necessary to reach all my 1000 subscribers. My 100 viewers in Japan are served by a single stream in the trans-Pacific backbone that gets split once it touches land. Is that all correct?
In that case, torrent/peertube-like technology gets you almost all of the way there! As long as my upload ratio is greater than 1 (say I push the bandwidth equivalent of TWO video streams up my cellular), and each of my two initial viewers (using their own phones or tablets or whatever devices, all able to communicate with each other equally well across the global internet without any SERVERS, CDNs, or MIDDLEMEN in between, using IPv6 as God intended) pushes it to two more, and so on, then within 10 hops and 1 second of latency, all 1000 of my viewers can see my stream. Within 2 seconds, a million could see me in theory, with zero additional bandwidth required on my part, right? In terms of global bandwidth resource usage, we are already within a factor of two of the ideal case of working multicast!
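Spelling out the doubling arithmetic (the per-hop latency is just an assumed round number, not a measurement):

```python
# Each viewer re-uploads the stream to two more viewers, so coverage doubles per hop.
hop_latency_s = 0.1  # assume roughly 100 ms per P2P hop (round number)
for hops in (10, 20):
    print(f"{2**hops:>9,} viewers reachable after {hops} hops, ~{hops * hop_latency_s:.0f} s")
# ->     1,024 viewers after 10 hops (~1 s); 1,048,576 after 20 hops (~2 s)
```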
It is true that my 100 peertube subscribers in Japan could be triggering my video stream to be sent through the intercontinental pipe multiple times (and even back again!), but this is only so because the peertube protocol is not yet geographic-aware! (Or maybe it already is?) Have you considered adding geographic awareness to peertube instead? Then only one viewer in Japan will receive my stream, and then pyramid-share it with all the other Japanese.
P2P, IPv6, and geographic awareness are things you can pursue right now, and they get you to better than a factor of 2 from the ideal multicast dream! Is a factor of 2 an acceptable rate of wasted resources? And you can implement it all on your own, without requiring every single internet backbone provider and ISP to cooperate with you and upgrade their router hardware to support multicast. AND you get all the other features of peertube, like, say, being able to watch a video that is NOT a livestream. Or being able to read a comment that was posted when your device was powered off.
Also, I am intrigued by the great concern you give to intercontinental bandwidth usage, considering those pipes are owned by the same types of big for-profit companies as the walled-garden social networks and CDNs that are so distasteful. From the other end, the reason why geographic awareness has not already been implemented in bittorrent and most other P2P protocols is precisely because bandwidth has been so plentiful. I can easily go to any website in Japan, play video games with the Chinese, or upload Linux images to the Europeans, without worrying about all the peering arrangements in between. If you are Netflix you have to deal with it, pay for peering, and build out local CDN boxes, but as a P2P user I’ve never had to think about it. Maybe if 1-to-millions torrent-based server-less livestreaming from your phone were to become popular, the intercontinental pipe owners might start complaining, but for now the internet just works.
Yep, that is exactly it
I am also excited about peer-to-peer technology; however, P2P unicast remains a store-and-forward technology. Under the best of conditions we’re looking at at least 10 milliseconds of latency per hop, and of course a doubling of the total network bandwidth used per node, since each node both receives and sends at least once. Still very exciting stuff that I wish were further along than it is, but this isn’t the “multicast dream” as such, which does not use “Zuck’s computer”, by which I mean it does not use the cloud, which is “someone else’s computer”. We can imagine a glorious, benevolent P2P swarm that understands that its own participation is both a personal and a public good, that warm and fuzzy feeling of a torrent with a 10-to-1 seeding ratio. But we’re still using “someone else’s computer” … at best “we’re” using “our computer”, and that’s the royal “we”. Multicast is all switch, no server; all juice, no seed.
Yes, well, each node is a server and a middleman, but it’s “our” guys, I guess. Of course, in the real world we’ve now got NAT, firewalls, STUN/TURN/ICE, blocked ports, port forwarding, you know, all that jazz that used to put a serious strain on my router and might end up killing “our” phones’ batteries. Plus, with P2P, if you’re on cell your upload bandwidth is ratioed, and some scummy ISPs do not treat traffic the same way up and down. We’re starting to accumulate quite a lot of asterisks here.
Ah no, in this case the total bandwidth use has massively increased. Those users aren’t communicating with multicast efficiency; they are point-to-point, and those points run through the same backbones hundreds of times, coming AND going. While the sender does not have to carry that load, the internet is now a MUCH more congested place because of the lack of multicast.
I don’t know enough about peertube to answer that. I suspect it’s best-effort, but I’m sure the focus here is on “unstoppable delivery” BEFORE “efficient delivery”.
I’m not sure this math is mathing. We’re doubling the total network bandwidth per host, and it’s not geography-aware, it’s network-topology-aware, a topology that is often obscured by the ISPs for a variety of benign and malevolent reasons. The worst part is that the peers will cross the backbone many times; I think we’re looking at a “network effect scale” of wasted bandwidth compared with multicast. n²? I’m not sure, but n² is probably the worst case Ontario.
Yes, this is essential; for multicast to “work” it would have to be like that. Same as with unicast and IPv4: the internet would be useless if you had to negotiate each packet between you and your peers.
Yes, I believe they do stand in the way. I believe most of the long-range communication runs over dark fiber, which they bought on the cheap and made it their business model to exploit, and therefore they NEED to keep the utility of the public internet as low as possible; that includes never allowing “actually existing multicast” to flourish.
You can because you’re a drop in the consumer bucket; you exist in the cracks of the system. If everyone suddenly used the internet to its full potential, then we would get the screws turned on us. The internet is largely built like a cable-distribution network and we’re supposed to just be passive consumers: we purchase product, we receive, we are not meant to send.
Yes, I think so too, and they wouldn’t wait for their complaints to be heard. We have been here before: throttling, QoS deprioritizing (down to drops), dropped packets, broken connections, port blocking, transient IP bans. We are sitting ducks on the big pipes if we start really using them properly. Multicast would essentially fly under the radar.
Yes, I’m using “geographic awareness” here as shorthand for the same algorithm that BGP uses to calculate shortest route. As far as I know, BGP has no knowledge of “countries” or “continents”, it makes decisions purely on local policy and connectivity info available to it. However, the resulting topology map does greatly resemble the corresponding geographic map, a natural consequence of the internet being a physical engineering structure. I’m not sure how publicly available the global BGP data is. If you were designing a backbone-bandwidth-preserving P2P app you would either give it BGP data directly, or if that’s not available, give it the world map to get most of the same benefit.
The multicast proposal would need to be routed through the very same ISP-obscured topology, so there is no advantage over topology-aware P2P.
As a graph problem, it does look to me like within a factor of 2 is practical.
First consider a hypothetical topology-aware “daisy chain” scheme, where every swarm user has an upload ratio of exactly one. Then every backbone and last-mile connection gets used exactly twice. This is why I say a factor of 2 is the upper limit. It’s like a maze problem where you can navigate an entire maze and only traverse each corridor twice. Then look at the more practical “pyramid” scheme, where half the users have an upload ratio of about 2. Some links get used twice, but many get used only once! The UK-UK1 link is the only one used 3 times. Notably, observe that the US-JP and US-UK transcontinental links only get used once, as you wanted! Overall this pyramid scheme looks to me to be within 20% of the efficiency of the optimal multicast scheme.
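Here is a toy way to sanity-check that kind of count; the topology, peer names, and hand-picked transfer orders below are all made up for illustration, not taken from any real swarm:

```python
# Count how many times each link carries the stream for a topology-aware
# daisy chain vs. a small pyramid, on a made-up toy topology.
from collections import Counter, deque

links = [("SRC", "US"), ("US", "UK"), ("US", "JP"),   # backbone
         ("US", "US1"), ("US", "US2"),                # last miles
         ("UK", "UK1"), ("UK", "UK2"),
         ("JP", "JP1"), ("JP", "JP2")]

graph = {}
for a, b in links:
    graph.setdefault(a, []).append(b)
    graph.setdefault(b, []).append(a)

def shortest_path(src, dst):
    """Plain BFS; returns the list of nodes from src to dst."""
    prev, queue, seen = {}, deque([src]), {src}
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                prev[nxt] = node
                queue.append(nxt)

def link_usage(transfers):
    """transfers = [(uploader, downloader), ...]; count stream copies per link."""
    used = Counter()
    for up, down in transfers:
        path = shortest_path(up, down)
        for a, b in zip(path, path[1:]):
            used[tuple(sorted((a, b)))] += 1
    return used

# Daisy chain: every peer uploads to exactly one next peer (upload ratio 1).
chain = [("SRC", "US1"), ("US1", "US2"), ("US2", "UK1"),
         ("UK1", "UK2"), ("UK2", "JP1"), ("JP1", "JP2")]
# Pyramid: a few peers upload to two others (upload ratio ~2).
tree = [("SRC", "US1"), ("US1", "US2"), ("US1", "UK1"),
        ("US2", "JP1"), ("UK1", "UK2"), ("JP1", "JP2")]

for name, transfers in (("daisy chain", chain), ("pyramid", tree)):
    print(name, dict(link_usage(transfers)))
```

On this made-up graph the daisy chain touches no link more than twice, and in the pyramid the two transcontinental links each carry the stream only once, while US1’s last mile gets hit three times.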
What do you think backbone routers are? They are computers! Specialized for a particular task, but computers nonetheless. Owned by someone other than you. Your whole lament is that you can’t force those owners to implement multicast on their routers. I think using the royal “our” computer, something we can do right now without forcing anyone else, is much better by comparison. If you insist that P2P swarm members, they who actually want to see your livestream, are not good enough, that you only want to use “your” computer to broadcast and no one else’s, then you are left with no options other than bouncing HAM video signals off the ionosphere. And even the radio spectrum is claimed by governments.
I think you underestimate the size. Imagine if multicast were ubiquitous: billions of internet-connected users, each with dozens? hundreds? of multicast subscriptions. Each video content creator is a multicast, each blog post you follow, each multi-twitter handle, each lemmy community you subscribe to. Hundreds, easily. That’s many gigabytes, possibly hundreds of gigabytes, of state to fit into every router. BGP is simple because you care only about the physical links you actually have. You can stuff entire IP ranges into a single routing table entry. Your entire table could be a dozen entries. Fits inside the silicon. With multicast I don’t think you can fold it in; you must keep the entire many-to-many table on every single router[1]. And consult the 100 GB table to route every single packet, in case it needs to get split. As you said, impossible with 1990s technology, probably possible but contrary to business goals in 2020.
You are concerned about the battery life of your phone when you use the bandwidth of 2 video streams compared to watching just 1? Yet you expect every single router owner to plug in hundreds of gigabytes of extra RAM sticks and spend extra CPU power and electricity to look up routing tables to handle your multicast traffic for you. You are just offloading the resource usage onto other people’s computers! Not “our” computers, “theirs”. Remember how much criticism Bitcoin got for wasting resources? Not the proof of work, but having to store a duplicate copy of 100 GB of transaction blockchain on every single node? All that hard drive space wasted! When “Mastercard” and “Visa” can do it with only a single database on a mainframe. Yet now you want “them” to do the same and “waste” 100 GB of RAM on every single router just so your battery life is a little better.
This does not follow. Didn’t you say that multicast was already sabotaged by the very same cable-distribution networks to maintain their send-monopoly? You expect to force the ISPs to turn multicast back on and somehow have it fly under the radar, but P2P would get the screws turned? It can’t be one and not the other! If you plan to have governments force the ISPs to fall in line and implement multicast standards, then why couldn’t you have the same governments (driven by the democratic pressure of billions of internet users demanding freedom, presumably) enshrine P2P rights? Again, remember that P2P is something we already have, something that already works and can be expanded with no additional cooperation from other players. Multicast is something that would need to be forced on others, on everyone, and would require physical hardware updates. If there are future restrictions on P2P, they would be easier to defend against politically and technologically. If you cannot defend P2P, then you for sure do not have enough political power to force multicast.
[1]: Thinking about this, maybe you could roll it in a little. Given N internet users (~a billion), each with S subscriptions (say a hundred), C content feeds (a hundred million? 10% of users are also creators, 90% are pure consumers), and P physical links per router (say ten), then instead of N*S amount of state (100 GB’s worth), each router could fold it down into C*P amount of state (1 GB’s worth). As in “if I receive a multicast packet from [source ip=US.5.6.7] to [destination ip=anyone], route copies of it out through phy04, phy07, and phy12”. You would still need a mechanism to propagate table changes pretty rapidly (a full refresh about once every minute?). Your phone can be switching cells or powering on and off, and you don’t want to multicast packets to a powered-off IP; that would be a waste of resources!
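Back-of-the-envelope version of that estimate, using the round numbers above (the per-entry byte size is my own guess, so treat the GB figures as order-of-magnitude only):

```python
# Rough multicast routing-state estimate per router, using the assumed numbers above.
N = 1_000_000_000   # internet users
S = 100             # multicast subscriptions per user
C = 100_000_000     # content feeds (assume ~10% of users also create)
P = 10              # physical links per router
ENTRY_BYTES = 8     # guessed size of one table entry (addresses + port bitmap)

naive = N * S       # full subscriber-level many-to-many table
folded = C * P      # per-feed, per-port state after folding
print(f"naive : {naive:.0e} entries, ~{naive * ENTRY_BYTES / 1e9:,.0f} GB per router")
print(f"folded: {folded:.0e} entries, ~{folded * ENTRY_BYTES / 1e9:,.0f} GB per router")
```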
And how do you detect oversubscribing? If a million watchers subscribe to 1 multicast livestream, it’s fine; but what happens when 1 troll subscribes to a million livestreams? If I subscribe to 1 million video streams, obviously my last-mile connection cannot fit them all. With TCP unicast, the senders would not receive TCP ACK replies from me and would throttle down. But with multicast, the routers in between do not know about my last mile, or even whether my phone has been powered on within the last minute. All they know is “if receive multicast from IP1, send to phy04; if receive multicast from IP2, send to phy04;” etc. Would my upstream routers not get saturated trying to send a million video streams to a dead IP? Would we need to implement some sort of reverse-multicast version of “TCP ACK”?
I’m explaining what the technical problems are with your idea. It seems like you don’t fully understand the technical details of these networking protocols, and that’s okay, but I’ve summarized a few non-trivial technical problems that aren’t just people keeping multicast from being used. I assure you, if multicast worked, big tech would want to use it. For example, Netflix would want to use it to distribute content to their CDN boxes and save tons of bandwidth.
But it does work if you run it on a parallel network, if you sidestep all the ISPs’ toll bridges.
What you can’t do is negotiate with every ISP on the internet between you and your end users, giving a 30% cut to every one of them along the way. Especially since most of these ISPs were cable TV distributors in their previous life. They made sure to break it, to break it so good it becomes unimaginable that it could ever have worked in the first place.
And I think that has turned out just fine for netflix: the enormous deployment costs for their CDN mean they have a moat, and no small operator is going to be eating their lunch. Add to that brand name and platform power, and the lack of a standardized, infrastructure-less “one-to-many” broadcasting method, like “email” for broadcasting.
We’ll be stuck using Zuck’s computer to talk to each other pretty much forever now …