• brucethemoose@lemmy.world
    17 hours ago

    On radiators, plugging it into this formula:

    https://projectrho.com/public_html/rocket/heatrad.php

    I get a circular radiator at least a kilometer wide, assuming the radiator is quite efficient, a rather modest datacenter, and very hot coolant (70 °C).

    …Realistically, the coolant temperature would need to be much lower. See how temperature enters the formula as a fourth power? That means the required radiator area blows up very quickly as the coolant gets cooler.

    I cannot emphasize enough how expensive a functional 1 km+ radiator would be in space. It’s mind-bogglingly expensive.
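
    As a rough illustration of that fourth-power scaling, here’s a minimal sketch (mine, not from the calculator above), assuming an ideal one-sided black-body panel with no absorbed sunlight or Earth infrared:

    ```python
    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

    def radiator_area_m2(heat_watts, coolant_c, emissivity=1.0):
        """Ideal area needed to reject heat_watts at a uniform panel temperature coolant_c."""
        t_kelvin = coolant_c + 273.15
        return heat_watts / (emissivity * SIGMA * t_kelvin ** 4)

    # The T^4 term is why cooler coolant hurts so much: halving the absolute
    # temperature multiplies the required area by 16.
    for temp_c in (150, 70, 20):
        print(f"100 kW at {temp_c:>3} °C -> {radiator_area_m2(100e3, temp_c):6.0f} m^2")
    ```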


    If a space datacenter is in LEO like Starlink, then it’s in Earth’s shadow a lot of the time, and it would have to be “part” of the Starlink network constantly zooming over the ground. If it’s geosynchronous, then laser communication (or any communication) gets real tricky, and latency is limited by the speed of light. I’m not saying it’s impossible, but reliable high data rates would be an expensive engineering challenge.
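
    For a sense of the speed-of-light floor, a quick sketch (straight up-and-down path only; real network paths and processing only add to this):

    ```python
    C_KM_S = 299_792.458   # speed of light, km/s
    GEO_ALT_KM = 35_786    # geosynchronous altitude
    LEO_ALT_KM = 550       # a Starlink-like LEO altitude (my assumption)

    # Minimum ground -> satellite -> ground round trip, ignoring everything
    # except straight-line distance.
    for name, alt_km in (("LEO", LEO_ALT_KM), ("GEO", GEO_ALT_KM)):
        print(f"{name}: >= {2 * alt_km / C_KM_S * 1000:.0f} ms round trip")
    ```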

    • NotMyOldRedditName@lemmy.world
      17 hours ago

      For 100 kW? I’m not going to try and figure things out from that massive site. A pre-made calculator would have been nice if they had one.

      edit: It is going to be in LEO and likely connected to Starlink with the same laser links they use.

      Edit: Looking at orbits, they might use sun-synchronous orbits? They might not be in sunlight 100% of the time, but they would be nearly always in the sun.

      Edit: I have no way to know if this is right, but a couple of AI responses are saying that for 100 kW it would be ~150-170 square meters with temperatures around 70 °C.

      • brucethemoose@lemmy.world
        16 hours ago

        100 kW? Nvidia DGX B200 servers are 14 kW each, not counting the interconnect or anything else. According to nuggets I’ve read online, we’re talking 200 megawatts for an Earth-based AI datacenter these days, without something exotic like underclocked Cerebras WSEs (which would be pretty neat, actually…)

        Plugging 200 megawatts into this:

        https://www.calctool.org/quantum-mechanics/stefan-boltzmann-law

        I get about 0.46 square kilometers, depending on the coolant temperature and the ultimate efficiency of the system (how you orient the thing relative to the solar panels, how you circulate coolant…)

        I have no clue what the construction of such a huge structure would look like, but if it were a simple 0.5 inch aluminum sheet, it would weigh something like 15,000 metric tons. Even much thinner, that’s still on the order of “mass of a cargo ship”.
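
        A back-of-envelope check on those numbers (my assumptions: an ideal one-sided black-body panel at 70 °C sets the lower bound on area, and the ~0.46 km² figure above, which bakes in real-world losses, sets the sheet mass):

        ```python
        SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W / (m^2 * K^4)
        AL_DENSITY = 2700.0  # density of aluminum, kg / m^3

        heat_w = 200e6       # 200 MW of waste heat
        t_k = 70 + 273.15    # 70 °C panel temperature, in kelvin

        ideal_area_m2 = heat_w / (SIGMA * t_k ** 4)               # ~0.25 km^2, no losses
        sheet_mass_t = 0.46e6 * 0.5 * 0.0254 * AL_DENSITY / 1e3   # 0.46 km^2 of 0.5 in Al

        print(f"ideal radiating area: {ideal_area_m2 / 1e6:.2f} km^2")
        print(f"0.5 inch Al sheet over 0.46 km^2: {sheet_mass_t:,.0f} metric tons")
        ```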


        Why is that, though?

        Well, something like the ISS doesn’t generate much heat, and hypothetical rockets that need big radiators have very hot coolant to dissipate heat quickly. But space data centers are the sinister combination of “tons of waste heat” and “needs a low coolant temperature.”

        • NotMyOldRedditName@lemmy.world
          16 hours ago

          They aren’t making a datacenter like on Earth. They’re putting up a ton of satellites that will each generate about 100 kW.

          Everyone keeps thinking they’re putting these massive things up there; they are not doing that.

          Edit: Oh, I missed that your tool was a real calculator this time, thank you! That says 127 square meters, with a black body at 70 °C and emissivity 1 (but no idea if those are good values).

          • brucethemoose@lemmy.world
            16 hours ago

            That’s interesting, but what’s the point? If it’s like 2 DGX boxes in each satellite, spaced out, the interconnect between them is going to be very slow, and the individual computational power of each satellite will not be that impressive.

            And if you connect them all into one big mesh and wire them together, well, you’ve made a 200 MW datacenter! The economies remain the same.

            If hardware gets more power efficient, well… Then why do you need to go to space anymore?

            • jj4211@lemmy.world
              4 hours ago

              To put it into perspective, each satellite that could only accommodate, at most, 2-3 servers would have a power and cooling burden greater than the entire International Space Station. For each 2-3 server unit, you have an ISS-magnitude power and cooling challenge. And they would be looking to have hundreds of thousands of these ISS-scale satellites in orbit…

            • NotMyOldRedditName@lemmy.world
              15 hours ago

              Ya, the economies of how much total space / material the global network needs are similar (let’s say somewhat worse, due to efficiency losses from distributing it over so many satellites), but in terms of how big any individual radiator is and how much space each one takes up, the smaller sizes make it easier to manage. Figuring out a 150-200 m² solar panel radiator is a lot easier than figuring out a 1 km² one.

              The individual power of each satellite might not be enough for training, since they’d have to train over a mesh network that might not be fast enough, so maybe they’ll still use land-based datacenters for training. But no single user needs more compute than what one satellite can provide. So from the inference / customer computation side of things, it isn’t a problem.

              edit: I meant radiator, not solar panel

              edit: looks like Blackwells can run sustained at 88 °C, so that will help a bit more on size as well; the calculator now says 103 m² instead of 127 m²
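
              For what it’s worth, both calculator numbers check out with a quick sketch under the same idealized assumptions (one-sided black-body panel, emissivity 1, no absorbed sunlight):

              ```python
              SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

              def area_m2(heat_w, temp_c):
                  return heat_w / (SIGMA * (temp_c + 273.15) ** 4)

              print(f"100 kW at 70 °C -> {area_m2(100e3, 70):.0f} m^2")  # ~127 m^2
              print(f"100 kW at 88 °C -> {area_m2(100e3, 88):.0f} m^2")  # ~104 m^2, matching the ~103 above
              ```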

              • brucethemoose@lemmy.world
                3 hours ago

                So from the inference / customer computation side of things, it isn’t a problem.

                Not necessarily. There are inference schemes where spreading MoE models across 40+ GPUs with a fast interconnect yields better efficiency.

                looks like Blackwells can run sustained at 88 °C

                The coolant still needs to remain relatively cool to hold that silicon temperature, though. Practically it’d have to be more like 60 °C or below.

                • NotMyOldRedditName@lemmy.world
                  3 hours ago

                  The coolant still needs to remain relatively cool to hold that silicon temperature, though. Practically it’d have to be more like 60 °C or below.

                  Ah, ya, that makes sense. Whatever temperature the chip can run at, the coolant will have to be lower.