krick an hour ago

It would be a good thing, if it would cause anything to change. It obviously won't. As if a single person reading this post wasn't aware that the Internet is centralized, and couldn't specifically name a few sources of centralization (Cloudflare, AWS, Gmail, GitHub). As if it's the first time this has happened. As if after the last time AWS failed (or the time before that, or the one before that…) anybody stopped using AWS. As if anybody could viably stop using them.

  • captainkrtek an hour ago

    > It would be a good thing, if it would cause anything to change. It obviously won't.

    I agree wholeheartedly. The only change is internal to these organizations (e.g. Cloudflare, AWS). Improvements will be made to the relevant systems, and some teams will also internally audit for similar behavior, add tests, and fix some bugs.

    However, nothing external will change. The cycle of pretending you are going to implement multi-region fades after a week, and each company goes on leveraging all these services to the nth degree, waiting for the next outage.

    I'm not advocating that organizations should or could do much; it's all pros and cons. But the collective blast radius is still impressive.

    • chii 34 minutes ago

      The root cause is customers refusing to punish this downtime.

      Check out how hard customers punish blackouts from the grid, both via their wallets and via voting/government pressure. It's why grids are now more reliable.

      So unless the backbone infrastructure gets the same flak, nothing is going to change. After all, any change is expensive, and the cost of that change needs to be worth it.

      • MikeNotThePope 12 minutes ago

        Is a little downtime such a bad thing? Trying to avoid some bumps and bruises in your business has diminishing returns.

  • ehhthing 27 minutes ago

    With the rise in unfriendly bots on the internet as well as DDoS botnets reaching 15 Tbps, I don’t think many people have much of a choice.

oidar 7 minutes ago

I wonder what life without Cloudflare would look like. What practices would fill the gaps if a company didn't, or wasn't allowed to, satisfy the concerns that Cloudflare fills?

zie1ony 6 minutes ago

My friend wasn't able to get an RTG (X-ray) done during the outage. They had to use an ultrasound machine on his broken arm to see inside.

stroebs an hour ago

The problem is far more nuanced than the internet simply becoming too centralised.

I want to host my gas station network’s air machine infrastructure, and I only want people in the US to be able to access it. That simple task is literally impossible with what we have allowed the internet to become.

  • Fnoord an hour ago

    Literally impossible? On the contrary: geofencing is easy. I block all kinds of nefarious countries on my firewall, and I don't miss them (no loss in not being able to connect to/from a mafia state like Russia). Now, if I were to block FAMAG... or Cloudflare...
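
    A minimal sketch of that kind of country blocking, assuming nftables (the CIDR ranges below are documentation placeholders; in practice you'd populate the set from a GeoIP country-to-prefix feed and refresh it regularly):

    ```shell
    # Build a named set of blocked prefixes and drop inbound traffic from it.
    # 198.51.100.0/24 and 203.0.113.0/24 stand in for a real GeoIP feed.
    nft add table inet filter
    nft add set inet filter geo_blocked '{ type ipv4_addr; flags interval; }'
    nft add element inet filter geo_blocked '{ 198.51.100.0/24, 203.0.113.0/24 }'
    nft add chain inet filter input '{ type filter hook input priority 0 ; }'
    nft add rule inet filter input ip saddr @geo_blocked drop
    ```

    This only filters on source address, of course; a proxy or VPN exit inside the allowed region sidesteps it entirely.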

    • stroebs 6 minutes ago

      Yes, literally impossible. The barrier to entry for anyone on the internet to create a proxy or VPN to bypass your geofencing is significantly lower than your cost to prevent them.

chasing0entropy 2 hours ago

Spot on article, but without a call to action. What can we do to combat the migration of society to a centralized corpro-government intertwined entity with no regard for unprofitable privacy or individualism?

  • adrianN an hour ago

    Individuals are unlikely to be able to do something about the centralization problem except vote for politicians that want to implement countermeasures. I don’t know of any politicians (with a chance to win anything) that have that on their agenda.

    • turtletontine 14 minutes ago

      That’s called antitrust, and is absolutely a cause you can vote for. Some of the Biden administration’s biggest achievements were in antitrust, and the head of the FTC for Biden has joined Mamdani’s transition team.

  • DANmode an hour ago

    Learn how to host anything, today.

    • imsurajkadam an hour ago

      Even if you learn to host, many of the other services you rely on are themselves built on those centralised platforms. If you are thinking of hosting every single thing on your own, it is going to be more work than you can even imagine, and definitely super hard to organise as well.

    • rurban 38 minutes ago

      If you self-host, you are likely running on my cPanel software; 70% of the internet does. That's also a somewhat centralized point of failure, but I haven't heard of any bugs in the last 14 years.

    • randallsquared an hour ago

      Have you tried that? I gave up on hosting my own email server seven or eight years ago, after it became clear that it would be an endless fight to get various entities to accept my mail. Hosting a webserver without the expectation that you'll need some high-powered DDoS defense seems naive these days, and good luck providing that with a server or two.

      • IgorPartola 41 minutes ago

        I had never hosted my own email before. It took me roughly a day to set it up on a vanilla FreeBSD install running on Vultr's free tier, and it has been running flawlessly for nearly a year. I did not use AI at all, just the FreeBSD, Postfix, and Dovecot handbooks. I do have a fair bit of Linux admin and development experience, but all in all this has been a weirdly painless experience.

        If you don’t love this approach, Mail-in-a-box works incredibly well even if the author of all the Python code behind it insists on using tabs instead of spaces :)

        And you can always grab a really good deal from a small hosting company, likely with decades of experience in what they do, via LowEndBox/LowEndTalk. The deal will likely blow AWS/DO/Vultr/Google Cloud out of the water in terms of value. I have been snagging deals from there for ages and have lost a virtual host only twice: once with a new company that turned out to be shady, and once when I rented a VPS in Cairo and a revolution broke out. They brought everything back up after a couple of months.

        For example, I just bought lifetime email hosting with 250GB of storage, email, video, a full office suite, calendar, contacts, and file storage for $75. Configuration is down to setting the DNS records they give you and adding users. The company behind it has been around for ages and is one of the best regarded in the LET community.
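
        For anyone curious what "setting the DNS records" involves for mail, here is a hedged sketch of the usual minimum (example.com, the selector, and the policy values are placeholders; the DKIM public key comes from whatever signs your outbound mail):

        ```shell
        # DNS records a self-hosted mail domain typically needs before other
        # servers will accept its mail (set these at your DNS host):
        #   MX     example.com.                        ->  10 mail.example.com.
        #   SPF    TXT on example.com.                 ->  "v=spf1 mx -all"
        #   DKIM   TXT on sel1._domainkey.example.com. ->  "v=DKIM1; k=rsa; p=<public key>"
        #   DMARC  TXT on _dmarc.example.com.          ->  "v=DMARC1; p=quarantine"
        # Once published, verify they resolve:
        dig +short MX example.com
        dig +short TXT example.com
        dig +short TXT _dmarc.example.com
        ```

        Without SPF/DKIM/DMARC in place, the big providers will often silently spam-folder or reject your mail, which is the "endless fight" mentioned upthread.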

  • card_zero an hour ago

    We could quibble about the premise.

timenotwasted an hour ago

"Embrace outages, and build redundancy." — It feels like back in the day this was championed pretty hard especially by places like Netflix (Chaos Monkey) but as downtime has become more expected it seems we are sliding backwards. I have a tendency to rely too much on feelings so I'm sure someone could point me to some data that proves otherwise but for now that's my read on things. Personally, I've been going a lot more in on self-hosting lots of things I used to just mindlessly leave on the cloud.

throwaway81523 11 minutes ago

Now just wait til every country on earth really does replace most of its employees with ChatGPT... and then OpenAI's data center goes offline with a fiber cut or something. All work everywhere stops. Cloudflare outage is nothing compared to that.

L-four 22 minutes ago

It's a tragedy of the commons. Even if you don't use Cloudflare yourself, does it matter, when no one can pay for your products?

0x073 an hour ago

The outage wasn’t a good thing, since nothing is changing as a result. (How many outages has Cloudflare had?)

tonyhart7 17 minutes ago

I don't like this argument, since you can apply it to Google, Microsoft, AWS, Facebook, etc.

The tech world is dominated by US companies, and what are the alternatives to most of these services? There are a lot fewer than you might think, and even then you must make compromises in certain areas.

theideaofcoffee 2 hours ago

> They [outages] can force redundancy and resilience into systems.

They won’t, until either the monetary pain of outages becomes greater than the cost of maintaining the extra systems that redundancy requires, or government steps in with clear regulation forcing their hand. And I’m not sure about the latter. So I’m not holding my breath for anything to change. It will continue to be a circus of doing everything on a shoestring, because the line must go up every quarter or a shareholder doesn’t keep their wings.

  • morshu9001 an hour ago

    That's OK though; not every website needs five nines.

charcircuit 2 hours ago

>It's ironic because the internet was actually designed for decentralisation, a system that governments could use to coordinate their response in the event of nuclear war

This is not true. The internet was never designed to withstand nuclear war.

  • chasing0entropy 2 hours ago

    Arpanet absolutely was designed to be a physically resilient network which could survive the loss of multiple physical switch locations.

  • bblb an hour ago

    Perhaps. Perhaps not. But it will survive it. It will survive a complete nuclear winter. It's too useful to die, and will be one of the first things to be fixed after global annihilation.

    But the Internet is not hosting companies or cloud providers. The Internet does not care if they don't build their systems resiliently and let the SPOFs creep in. The Internet does its thing and the packets keep flowing. Maybe BGP and DNS could use some additional armoring, but there are ways around both of them in case of actual emergency.

  • anonym29 2 hours ago

    ARPANET was literally invented during the Cold War for the specific and explicit purpose of resilient networked communications for government and military in the event major networking hubs went offline due to one or more successful nuclear attacks against the United States.

    • charcircuit an hour ago

      It literally wasn't. It's an urban myth.

      >Bob Taylor initiated the ARPANET project in 1966 to enable resource sharing between remote computers.

      >The ARPANET was not started to create a Command and Control System that would survive a nuclear attack, as many now claim.

      https://en.wikipedia.org/wiki/ARPANET

      • oidar an hour ago

        Per interviews, the initial impetus wasn't to withstand a nuclear attack, but after it was first set up, that most certainly became a major part of the design thinking. https://web.archive.org/web/20151104224529/https://www.wired...

        • charcircuit an hour ago

          >but after it was first set up

          Your link is talking about work Baran did before ARPANET was created. The timeline doesn't back your point. And when ARPANET was created after Baran's work with Rand:

          >Wired: The myth of the Arpanet – which still persists – is that it was developed to withstand nuclear strikes. That's wrong, isn't it?

          >Paul Baran: Yes. Bob Taylor had a couple of computer terminals speaking to different machines, and his idea was to have some way of having a terminal speak to any of them and have a network. That's really the origin of the Arpanet. The method used to connect things together was an open issue for a time.

          • oidar 18 minutes ago

            Read the whole article. And peruse the oral history here: https://ethw.org/Oral-History:Paul_Baran - the genesis was most definitely related to the cold war.

            "A preferred alternative would be to have the ability to withstand a first strike and the capability of returning the damage in kind. This reduces the overwhelming advantage by a first strike, and allows much tighter control over nuclear weapons. This is sometimes called Second Strike Capability."

      • anonym29 an hour ago

        The stated research goals are not necessarily the same as the strategic funding motivations. The DoD clearly recognized packet-switching's survivability and dynamic routing potential when the US Air Force funded the invention of networked packet switching by Paul Baran six years earlier, in 1960, for which the explicit purpose was "nuclear-survivable military communications".

        There is zero reason to believe ARPA would've funded the work were it not for internal military recognition of the utility of the underlying technology.

        To assume that the project lead was told EVERY motivation of the top secret military intelligence committee that was responsible for 100% of the funding of the project takes either a special kind of naïveté or complete ignorance of compartmentalization practices within military R&D and procurement practices.

        ARPANET would never have been were it not for ARPA funding, and ARPA never would've funded it were it not for the existence of packet-switched networking, which itself was invented and funded, again, six years before Bob Taylor even entered the picture, for the SOLE purpose of "nuclear-survivable military communications".

        Consider the following sequence of events:

        1. US Air Force desires nuclear-survivable military communications, funds Paul Baran's research at RAND

        2. Baran proves packet-switching is conceptually viable for nuclear-survivable communications

        3. His specific implementation doesn't meet rigorous Air Force deployment standards (their implementation partner, AT&T, refuses - which is entirely expectable for what was then a complex new technology that not a single AT&T engineer understood or had ever interacted with during the course of their education), but the concept is now proven and documented

        4. ARPA sees the strategic potential of packet-switched networks for the explicit and sole purpose of nuclear-survivable communications, and decides to fund a more robust development effort

        5. They use academic resource-sharing as the development/testing environment (lower stakes, work out the kinks, get future engineers conceptually familiar with the underlying technology paradigms)

        6. Researchers, including Bob Taylor, genuinely focus on resource sharing because that's what they're told their actual job is, even though that's not actually the true purpose of their work

        7. Once mature, the technology gets deployed for its originally intended strategic purposes (MILNET split-off in 1983)

        Under this timeline, the sole true reason for ARPA's funding of ARPANET is nuclear-survivable military communication; Bob Taylor, being the military's R&D pawn, is never told that (standard compartmentalization practice). Bob Taylor can credibly and honestly state that he was tasked with implementing resource sharing across academic networks, which is true, but that was never the actual underlying motivation for funding his research.

        ...and the myth of "ARPANET wasn't created for nuclear survivability" is born.