The Cloud You're On Is Not the Cloud

Cloud platform commitment is the largest architectural decision most teams make, and the one most often made invisibly. Why "best service for the job" produces a system whose exit cost the vendor sets.

I. The Decision That Does Not Look Like a Decision

Picture the moment as it usually happens. A team is standing up a new service. They need a database, an event queue, an object store, a managed identity layer, a way to deploy. The cloud account already exists — the company has been on AWS, or GCP, or Azure, for years. Picking DynamoDB for the data store, SQS for the queue, S3 for blobs, Cognito for users, and Lambda for the handlers feels like configuration, not commitment. Each is documented. Each integrates cleanly with the others. Each has a free tier. The team picks them. The service is in production within the week.

Nobody at the table thinks of this as architecture. It feels like infrastructure plumbing — already ratified at the company level, line items on a bill that already exists, services pre-approved by a security review someone else conducted years ago. But the team has just made six separate proprietary commitments, none of them portable, all of them compounding, none appearing on the architecture diagram as anything other than an arrow labeled "AWS." Three years from now, when a price change, a service deprecation, or a strategic pivot makes the question of leaving suddenly relevant, the team will discover that the cost of leaving is not what they thought it was.

This is the failure that does not look like one. The cloud platform is the largest architectural decision most engineering teams make, and it is the one most often made invisibly — not because nobody decides, but because the decision was already made years before by someone else and treated as settled. Each subsequent service pick is sorted into the procurement-shaped bucket: a feature comparison, a cost estimate, a security questionnaire, a sign-off. By the time the architecture is mature, exit cost has accumulated to whatever the long sequence of small picks made it, and that number is a property of the system that nobody designed and nobody owns.

The argument of this essay is that cloud platform commitment behaves identically to library lock-in: it accumulates silently across hundreds of small decisions, becomes visible only at the migration moment, and at that moment costs more than staying. The cloud you are running on is not "the cloud." It is a particular counterparty, with particular interests, on whose continued cooperation the system's architecture quietly depends. Treating that counterparty as a settled background fact rather than as an architectural commitment is the fundamental category error this essay is about.


II. The Anatomy of Cloud Lock-in: Where the Coupling Lives

To see cloud lock-in clearly, it helps to stop thinking of it as a single phenomenon — a sticky vendor, a high egress fee — and instead recognize it as a layered structure that compounds. Every proprietary cloud service couples a system to a hyperscaler across at least five distinct dimensions, each of which raises the eventual cost of leaving and each of which is harder to see than the one before it.

The first is service-API coupling: application code that calls AWS-, GCP-, or Azure-specific SDKs. DynamoDB query patterns, BigQuery's SQL dialect, Spanner-specific transaction semantics, Lambda invocation models, Step Functions state-machine definitions, Cognito authentication flows. This is the visible layer — the imports, the SDK initializations, the configuration objects that assume a particular provider's idioms. It is also the easiest layer to refactor, which is why it attracts disproportionate attention, and why fixing it alone understates the real lock-in. A team that wraps its DynamoDB calls behind an interface and feels good about its portability has, at most, addressed twenty percent of the problem.

The second is data gravity: the petabytes of operational data, analytics history, blob storage, and event archives that have accumulated in cloud-specific stores over years of running the system. This is the deepest layer of lock-in, because data outlives every other component. Code can be rewritten in a quarter. Operational tooling can be rebuilt in a year. Five years of accumulated business data cannot be wished into a different format and cannot be transferred over the public internet on any timeline a quarterly business plan tolerates. Egress fees, treated separately below, are the priced expression of data gravity — but the gravity exists independently of the price. Even with free egress, multi-petabyte migrations take months, degrade live systems, and surface every implicit assumption about where data lives that the architecture had silently absorbed.

The third is control-plane coupling: identity-and-access-management models, organization and folder and subscription hierarchies, key-management services, virtual-network and subnet topologies, service-control policies, log destinations, audit pipelines. This is what most engineers underestimate, and it is harder to see than the prior two layers because it is not a technical artifact at all. It is the organizational shape of how the company operates the cloud — distributed across security teams and platform teams and compliance teams, encoded in policies that nobody can fully recite but everybody depends on. AWS IAM and GCP IAM solve the same problem with categorically different conceptual models; relocating a non-trivial workload means rebuilding the policy fabric of the company itself, not translating files from one syntax to another.

The fourth is operational coupling: CloudFormation templates, Terraform with provider-specific modules, Deployment Manager configurations, ARM templates, Bicep, monitoring built atop CloudWatch or Stackdriver or Azure Monitor, runbooks that assume specific consoles, on-call procedures that reference specific dashboards, disaster-recovery procedures that depend on specific regional architectures. This layer accrues invisibly through years of operating the platform. By the time it is substantial, it is also institutional knowledge — distributed across many engineers, documented incompletely, fundamentally entangled with how the team actually works rather than with any artifact a migration could relocate.

The fifth is skills and integration coupling: engineers who have spent three years internalizing AWS IAM's mental model, certifications attached to specific clouds, hiring pipelines that select for specific cloud expertise, and a sprawl of second-order systems built atop the platform. Reporting tools pointed at BigQuery. Identity federation routed through Azure Active Directory. Billing reconciliation tied to AWS Cost Explorer. Incident management integrated with native cloud event buses. Each of these is a separate piece of work to redo if the cloud relationship ends, and each was built without any explicit accounting of the dependency it introduced. The non-transferable portion of accumulated skills is, in expected-value terms, an investment in the continued relationship with the provider.

None of these layers is visible at procurement time. All of them compound silently. By the time the cost of leaving is large enough to be noticed, the team is already deep inside it. This is the architectural shape of cloud lock-in: a series of coupling decisions, each individually small, each individually defensible, accumulating into a system property — exit cost — that nobody designed and nobody owns.


III. Why "Best Service for the Job" Is the Wrong Frame

The dominant decision criterion in cloud architecture, when picking among managed services, is best service for the job. Pick DynamoDB because it fits low-latency key-value access. Pick BigQuery because it fits analytical scans. Pick Lambda because it fits eventing. Pick Step Functions because it fits orchestration. Pick Cognito because it fits user identity. This criterion is correct at the level of the individual decision and incomplete at the level of the system.

The incompleteness shows up as follows. Best service for the job, applied service by service over the lifetime of a system, produces a system whose total exit cost is the sum of every individual lock-in, and those lock-ins must be unwound together if the system is ever to move. Each decision was locally optimal. The aggregate is globally fragile. At that point the system can run only on the cloud it was built for, which means the cloud's pricing power over that workload approaches monopoly — and the customer's bargaining position in any future negotiation is whatever leverage they have left, which is typically none. This is the same systems-thinking failure that produced the library lock-in described elsewhere: the assumption that good local decisions compose into a good global outcome, when in fact the system property that matters — optionality — is emergent and is precisely the property nobody was tracking.

Donella Meadows formalized this distinction in Thinking in Systems: a system's behavior is determined by its structure, not by the quality of its individual components, and an obsession with component-level optimization while ignoring structural properties produces predictable failure modes [1]. The structural property at stake here has a name in finance — optionality, the value of being able to make a different choice later — and it is worth borrowing the term because it captures something engineering vocabulary tends to miss. Optionality on the cloud is preserved by which services you bind to, not by which cloud you happen to be on. The cloud you are on is not, by itself, the architectural decision. The architectural decision is how deeply you bind to its proprietary services, because that is what determines what migration would mean if it ever became necessary.

This reframing surfaces an observation most cloud-architecture conversations miss: there is a lock-in gradient within a single cloud. Running EC2 instances with EBS volumes, Postgres on RDS, and S3 for blobs is structurally different from running Lambda functions, DynamoDB tables, Step Functions state machines, Cognito user pools, and AppSync GraphQL endpoints — even when both are billed to the same AWS account. The first stack is portable to any IaaS provider in roughly the same time it would take to rewrite the surrounding business logic, because every primitive it uses has a direct equivalent on every other cloud and on bare metal. The second stack is not portable at all without rewriting the application's control flow, because each managed service encodes assumptions about how the application is structured. Same cloud bill. Same operational interface from the developer's seat. Categorically different exit cost.

The implication is that "should we be on AWS, GCP, or Azure?" is a question of decreasing relevance once a cloud has been chosen. The question of continuing relevance is "how deeply should we bind to this cloud's proprietary services?" — and that question is being answered, implicitly, every time a team picks a managed service for a workload. Most teams answer it by default toward maximum binding, because every individual managed service is locally cheaper and easier than rolling its open-source equivalent.

The architecturally correct frame, then, is not "best service for the job." It is best service for the job, weighted by exit cost, considered jointly across all cloud-service dependencies. Most teams do not work this way. Most teams cannot, because the second clause requires modeling a system property the procurement process does not surface and the cloud bill does not measure. This is the work that systems thinking demands and that the everyday rhythm of cloud engineering tends to skip.


IV. Lock-in Cashing In: Three Mechanisms, One Pattern

The argument above is theoretical. The last several years have produced a sequence of natural experiments in which cloud-platform lock-in has cashed in for customers in concrete, measurable ways. They are worth examining in detail, because the underlying pattern is more consistent than any individual case. Three mechanisms have produced most of the visible damage: pricing-based exit cost, service deprecation, and acquisition-driven repricing. Each works differently. The customer experience is structurally identical.

Egress fees as exit-cost-by-design. Cloud providers price ingress at zero and egress at non-trivial dollars per terabyte. For a workload that has accumulated petabytes over years, egress alone can run into seven-figure migration costs before any engineering work begins. This is not an accident of pricing. It is exit cost imposed by design, by the counterparty, and the cloud-pricing literature has documented for years that egress fees are the single most asymmetric component of cloud economics — they exist not because moving data costs the provider that much, but because the provider can charge for it.

The empirical anchor is what happened next. The European Union's Data Act — Regulation (EU) 2023/2854 — entered into force on January 11, 2024, with cloud-switching provisions in Articles 23 through 31 designed to remove the technical and contractual barriers to switching between cloud providers. The regulation's switching obligations became applicable on September 12, 2025, with a transitional period during which providers could still pass on egress costs at-cost, and a hard deadline of January 12, 2027 by which switching charges — including egress charges — must be eliminated entirely [2]. The regulatory pressure cut directly against the providers' incentives: Brussels had identified the same exit-cost lever the architecture community had been describing, and ruled it incompatible with a competitive cloud market.

The providers' response was the implicit admission. Google Cloud announced free egress for customers leaving the platform in January 2024 [3]. AWS followed on March 5, 2024, announcing free data transfer to the internet for customers in good standing who were moving all their data and workloads off AWS, subject to a 60-day exit window and a minimum of 100 GB of stored data [4]. Microsoft Azure followed shortly after with a similar policy [5]. Three hyperscalers, all introducing functionally identical free-egress-on-exit policies, all in 2024, all in response to regulatory pressure rather than competitive pressure or customer demand. The implicit admission is total: the providers themselves had been treating egress as a lock-in lever, and only relinquished it when forced.

Architectural lesson: a fee structure that prices the act of leaving differently from the act of arriving is, by definition, a lock-in mechanism. The fact that egress on exit is now contractually free in the three major clouds does not erase the lever — the data gravity remains, the migration logistics remain, the policy applies only to full exits rather than to the partial migrations most teams actually need, and the next exit-cost lever (whatever it turns out to be, whenever the regulation is renegotiated) is also under the provider's unilateral control. The regulatory victory bounded one specific fee; it did not bound the structural incentive that produced the fee.

Service deprecations and forced migrations. The second mechanism is service-level: a managed service the team adopted gets sunset, and the customer is forced to migrate on the vendor's schedule. Google Cloud's track record is the cleanest illustration, not because Google is uniquely guilty but because Google is uniquely public about it. Cloud IoT Core was announced as deprecated on August 16, 2022 and shut down on August 16, 2023, giving customers a one-year window to migrate production IoT fleets to a partner solution [6]. The service had been generally available since early 2018; it was retired after roughly five years, having developed enough adoption to be inconvenient to deprecate but not enough to justify continued investment. The pattern is not idiosyncratic. The "Killed by Google" registry, while focused on consumer products, includes a steady cadence of cloud-product retirements and recategorizations, and Google's broader history of product termination has shaped customer expectations in ways the company has actively tried to walk back.

The structural shape of this case is identical to vendor relicensing in software dependencies — the counterparty changes the terms. The trigger is different: the driver is strategic-portfolio rationalization rather than direct monetization. But the customer's experience is the same. The service you signed up for is no longer the service you have. Your options reduce to accept the new terms (migrate to the vendor's recommended replacement, which is itself a fresh commitment), migrate elsewhere (pay the migration cost on the vendor's schedule), or rewrite (pay a larger cost to abstract the dependency away). AWS and Azure show the same pattern at a lower public profile: AWS's deprecation lifecycle is longer but not absent, and Azure has periodically retired services, with Azure IoT Central's scheduled retirement in 2027 being one recent example.

Architectural lesson: even within a chosen cloud, individual managed services are not stable counterparties. A managed proprietary service is a contract whose end date the vendor controls, and treating "the cloud" as a single decision rather than as the dozens of independent service-level decisions it actually contains disguises this risk from procurement-time analysis.

Acquisition-triggered repricing. The third mechanism is corporate: ownership of the provider changes, and the contract changes with it. Broadcom completed its acquisition of VMware on November 22, 2023, after receiving regulatory approvals from the United States, the European Union, the United Kingdom, and China [7]. Within weeks, Broadcom announced that VMware perpetual licenses and standalone Support and Subscription renewals would no longer be sold; the product portfolio collapsed from more than 160 individual offerings into four bundled subscriptions, sold per-core on multi-year terms [8]. Existing perpetual licenses remained technically valid but were stripped of the active-support relationship that, for any production-grade enterprise deployment, the perpetual license had effectively required.

The cost impact was immediate and large. AT&T sued Broadcom in the New York State Supreme Court on August 29, 2024, claiming that the support contract negotiated with VMware in August 2022 — which provided for support through September 8, 2024 with a two-year renewal option — was being repudiated by Broadcom's insistence that continued support require purchase of one of the new bundled subscriptions on a three-year term. AT&T claimed the new arrangement represented a 1,050% cost increase, applied to a fleet of approximately 75,000 virtual machines running across roughly 8,600 servers [9]. The dispute was eventually settled, but the public record of the lawsuit is the document the architecture community can read: a Fortune 100 customer asserting on the record that the post-acquisition contract was a tenfold deviation from the pre-acquisition contract, on a workload that had been running for years on the assumption of contractual continuity.

The cloud framing of this case requires care, because VMware itself is on-prem virtualization software rather than a hyperscaler service. But the architectural lesson generalizes directly. Cloud customers run on counterparties whose ownership can change. Counterparty ownership changes can produce contract changes that the original procurement decision did not contemplate and could not have priced. The IBM acquisition of HashiCorp, completed in early 2025, is the same shape on a smaller scale — and has already produced contractual changes for Terraform community-edition users, which the prior essay discussed in detail. The mechanism is acquisition; the effect is unilateral repricing or reterming on a schedule the customer does not control.

Architectural lesson: the cloud you signed up for is the cloud you have until ownership changes. The hyperscalers are large enough that outright acquisition is unlikely on any near-term horizon, but business-unit divestitures, strategic refocusings, and shareholder-pressure-driven repricings produce equivalent effects without requiring an acquisition event — and the historical record of the broader software industry, of which the hyperscalers are a part, contains every variety of these.

The three mechanisms are different in their proximate causes — regulatory exposure of a pricing lever, strategic portfolio rationalization, post-acquisition repricing — but they converge on a single pattern. The cloud relationship can change unilaterally, on a schedule the customer does not control, in ways the procurement decision did not consider. The customers who paid the cost of bounded coupling in advance — through abstraction, through portable data formats, through deliberate avoidance of deeply-proprietary services — paid the smaller bill at the migration moment. The customers who optimized for service-by-service convenience paid the larger bill, on the counterparty's schedule, in circumstances that turned the question of whether to migrate into the question of how much it would cost not to.


V. The Cloud Has Real Value; The Question Is the Tradeoff

The argument so far would be incomplete without honest treatment of the counter-position, because the counter-position has real force.

The cloud is not wrong. Hyperscalers genuinely deliver elasticity (capacity available in minutes rather than months), the capex-to-opex shift that startups and fast-growing companies need to survive, breadth of services that no internal platform team can credibly replicate, geographic reach for global products, and a labor market populated by engineers who already know how to use the platform productively. For many workloads — short-lived, spiky, geographically distributed, regulated under frameworks the cloud already certifies against — the cloud is the architecturally correct choice, and a team that decided otherwise would be spending real engineering capacity to build infrastructure expertise they could have bought as a service. The argument of this essay is not that the cloud is wrong. It is that the cloud commitment should be made as an architectural decision with explicit accounting of exit cost, rather than as an inherited default that nobody re-examines.

For some workloads, the explicit-decision posture leads to leaving. The most public recent case is 37signals, the company behind Basecamp and HEY. The company published its 2022 cloud spending in detail: $3,201,564 for the year, working out to roughly $267,000 per month, on a workload that the company believed could run on owned hardware for substantially less. In February 2023, the company announced its plan to leave AWS, projecting savings of approximately $7 million over five years. By mid-2023, with the migration completed and the bulk of workloads running on owned hardware in colocation, the company reported annual cloud spend reduced from $3.2 million to $1.3 million, with continuing reductions as remaining services were repatriated [10]. The Andreessen Horowitz analysis "The Cost of Cloud, A Trillion Dollar Paradox," published in 2021 by Sarah Wang and Martin Casado, is the secondary anchor: at scale, the cloud provider's margin shows up as cost of revenue on the customer's profit-and-loss statement, and a portion of that cost is recoverable through repatriation [11].

The 37signals case is not an argument that everyone should leave the cloud. It is an argument that for some workloads — predictable, mature, capacity-stable — the cloud is the wrong tradeoff, and the architecture should permit that decision to be made on its merits when the merits suggest it. The architectural failure mode the case illustrates is not "we were on the cloud." It is "we never re-examined whether we should be."

There is also an honest middle ground that the cloud-or-not-cloud dichotomy obscures. Managed open source on a hyperscaler — RDS for PostgreSQL, ElastiCache for Valkey, Managed Streaming for Apache Kafka, Cloud SQL for PostgreSQL, Azure Database for PostgreSQL — is structurally different from managed proprietary on a hyperscaler. DynamoDB, Spanner, Cosmos DB, BigQuery's proprietary execution model, AppSync, Step Functions, Cognito. The two categories appear similar at the developer interface and on the bill: a managed service, a connection string, a monthly invoice. The architectural commitment is categorically different. The first category is bounded by the customer's engineering capacity to relocate the workload, because the underlying technology has equivalents off the hyperscaler. The second is bounded by the cloud provider's discretion, because the underlying technology has no equivalents anywhere else. Many teams collapse this distinction and assume "managed" implies "locked in." It does not, and the distinction is one of the highest-leverage architectural choices available within a single cloud.

The discipline this demands is not absolutism. It is honesty about what each commitment costs and on what time horizon. A team that can articulate, for every cloud-service dependency, what it would take to leave and what it would cost, and that has actually written it down somewhere, is doing architecture. A team that cannot has an arrangement of services, no matter how sophisticated each service is.


VI. The Architectural Discipline: Designing for Exit on the Cloud

The constructive form of the argument is that designing for exit on the cloud is a discipline with identifiable practices. None of these practices is novel; what is uncommon is treating them as part of the architecture rather than as advice consulted after a problem has already occurred.

Treat each cloud-service choice as an architectural decision, not a procurement default. Document the choice at the moment of selection — what the service is for, what its known portable alternatives are, what would trigger migration, and what migration would roughly cost. The act of writing this down at procurement time is the highest-leverage practice on this list, because it forces the architectural deliberation that the procurement-shaped framing skips. Most teams do not do this for cloud-managed services because the choice "feels like" infrastructure rather than architecture; this is precisely the failure mode of Section I, and the antidote is to convert the feeling into a written artifact.
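One minimal way to make that written artifact concrete is a structured record per service choice, filled in before adoption. This is a sketch, not a standard: every field name and the example values below are illustrative, and the SQS entry is hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceDecisionRecord:
    """One record per cloud-service choice, written at selection time.

    Each question from the text becomes a field someone must answer
    before the service is adopted. Field names are illustrative.
    """
    service: str                        # what was chosen
    purpose: str                        # what the service is for
    portable_alternatives: list[str]    # known substitutes off this cloud
    migration_trigger: str              # what would make us leave
    rough_exit_cost: str                # order-of-magnitude estimate

# Hypothetical example entry for a queueing choice:
queue_decision = ServiceDecisionRecord(
    service="SQS",
    purpose="work-item queue between ingest and processing",
    portable_alternatives=["RabbitMQ", "Apache Kafka", "Google Pub/Sub"],
    migration_trigger="price change above 2x, or service deprecation notice",
    rough_exit_cost="~2 engineer-months: adapter rewrite plus replay tooling",
)
```

The value is not the data structure; it is that the blank fields force the deliberation the procurement-shaped framing skips.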

Isolate cloud-specific services behind interfaces you own. If DynamoDB is the data store, application code calls a KeyValueStore interface and a thin adapter calls DynamoDB. If Cognito is the identity provider, application code calls an IdentityProvider interface. If S3 is the blob store, application code calls a BlobStore interface. The wrappers feel like overhead at the time of writing. They are the cheapest insurance available against the day a service deprecates, prices change, or a region migration becomes necessary. The wrapper is also useful for testing: a fake implementation of KeyValueStore is easy; a fake DynamoDB client is not.
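A minimal sketch of that wrapper pattern, in Python: the application owns the interface, one thin adapter owns the provider SDK, and a fake satisfies the same interface for tests. All class and method names here are illustrative, and the DynamoDB adapter is an unverified sketch of what the seam would look like, not a tested client.

```python
from typing import Optional, Protocol

class KeyValueStore(Protocol):
    """The interface the application owns. Provider SDKs never leak past it."""
    def get(self, key: str) -> Optional[bytes]: ...
    def put(self, key: str, value: bytes) -> None: ...

class InMemoryKeyValueStore:
    """Fake implementation for tests: no cloud client, no network."""
    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def get(self, key: str) -> Optional[bytes]:
        return self._data.get(key)

    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value

class DynamoDbKeyValueStore:
    """Thin adapter: the only module that would import the AWS SDK.
    Sketch only -- table schema and client wiring are assumptions."""
    def __init__(self, table) -> None:
        self._table = table  # e.g. a boto3 DynamoDB Table resource

    def get(self, key: str) -> Optional[bytes]:
        item = self._table.get_item(Key={"pk": key}).get("Item")
        return item["value"].value if item else None

    def put(self, key: str, value: bytes) -> None:
        self._table.put_item(Item={"pk": key, "value": value})

def cache_user_profile(store: KeyValueStore, user_id: str, blob: bytes) -> None:
    """Application code sees only the interface, never the provider."""
    store.put(f"user:{user_id}", blob)
```

Swapping DynamoDB for another store later means writing one new adapter, not touching every call site — and the in-memory fake makes the application testable without any cloud credentials at all.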

Keep data in open formats, even when the runtime is proprietary. Parquet over proprietary table formats. Standard ANSI SQL over Spanner-specific or BigQuery-specific dialects, where the workload permits. Avro or Protobuf over closed serialization formats. Cloud-specific data formats are the deepest commitment available, because data outlives the runtime that produced it — and the longer the data lives, the more the format choice compounds. Whenever a choice exists between a proprietary cloud-native format and a neutral one, the neutral format should be preferred even at some operational cost. The data is the asset; the format is the leash.
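The smallest illustration of the principle needs nothing beyond the standard library: newline-delimited JSON is readable by any language on any cloud with no SDK. Parquet and Avro are the production-grade equivalents for columnar and schema-evolving data; this sketch only demonstrates the round-trip property a neutral format buys you.

```python
import json

# Records exported from any store -- the rows, not the store, are the asset.
records = [
    {"order_id": 1, "sku": "A-100", "qty": 2},
    {"order_id": 2, "sku": "B-200", "qty": 1},
]

def to_jsonl(rows: list[dict]) -> str:
    """Serialize to newline-delimited JSON: one self-describing row per line."""
    return "".join(json.dumps(row, sort_keys=True) + "\n" for row in rows)

def from_jsonl(text: str) -> list[dict]:
    """Parse back with no schema registry and no proprietary reader."""
    return [json.loads(line) for line in text.splitlines() if line]

# The round trip is lossless, and nothing in it depends on where the
# bytes are stored -- S3, Cloud Storage, Azure Blob, or a local disk.
assert from_jsonl(to_jsonl(records)) == records
```

The same round-trip test is exactly what cannot be written against a proprietary table format without the vendor's runtime in the loop.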

Maintain a written exit plan for each critical cloud-service dependency. The plan does not need to be detailed. It needs to exist. Its function is diagnostic: the act of writing it surfaces the integrations you forgot, the data gravity you underestimated, and the control-plane coupling you did not realize you had. A team that cannot write a credible exit plan for a service does not understand its dependency on it — and the absence of a plan is itself the diagnostic finding.

Prefer standardized interfaces; they are escape paths. S3-compatible object storage exists outside AWS (MinIO, Backblaze, Cloudflare R2) and is the difference between blob storage being a portable abstraction and blob storage being an AWS commitment. PostgreSQL-compatible managed databases exist on every hyperscaler and on every IaaS, which makes choosing PostgreSQL a structurally different decision from choosing DynamoDB. Kubernetes-compatible container orchestration exists wherever Linux runs. OAuth and OpenID Connect are standardized identity protocols that any provider can implement. OpenTelemetry is a vendor-neutral observability protocol. Each standardized interface bounds exit cost to the cost of swapping the implementation behind the standard. Each unique proprietary interface raises exit cost by the cost of writing the replacement.
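Because the S3 API is the shared standard, an exit from one S3-compatible store to another reduces to a configuration change rather than a storage-layer rewrite — with boto3, for example, it is typically the `endpoint_url` argument to the client. A minimal sketch of that posture, with all names and URLs illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ObjectStoreEndpoint:
    """Connection details for any S3-compatible store.

    Provider names and URLs below are placeholders, not working endpoints.
    """
    name: str
    endpoint_url: str

# The application's storage code is identical across all three;
# only the endpoint (and credentials, elided here) differ.
ENDPOINTS = {
    "aws":   ObjectStoreEndpoint("aws",   "https://s3.us-east-1.amazonaws.com"),
    "minio": ObjectStoreEndpoint("minio", "https://minio.internal.example:9000"),
    "r2":    ObjectStoreEndpoint("r2",    "https://ACCOUNT_ID.r2.cloudflarestorage.com"),
}

def select_endpoint(provider: str) -> ObjectStoreEndpoint:
    """Exit cost collapses to changing this lookup, because every entry
    speaks the same protocol behind the same interface."""
    return ENDPOINTS[provider]
```

Contrast this with a DynamoDB or Cosmos DB dependency, where no equivalent lookup table can exist: there is nothing else to point the same code at.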

Recognize the lock-in gradient within a single cloud. The choice is not AWS-or-not-AWS, GCP-or-not-GCP, Azure-or-not-Azure. The choice is which services within the cloud to bind to and how deeply. Compute primitives (EC2, Compute Engine, Azure VMs), storage primitives (EBS, S3, Persistent Disk, Cloud Storage, Azure Blob Storage), and managed open-source databases (RDS for Postgres, Cloud SQL for Postgres, Azure Database for PostgreSQL) are at the portable end of the gradient. Higher-order managed services (DynamoDB, Spanner, Cosmos DB, BigQuery, Step Functions, AppSync, Cognito, EventBridge) are at the bound end. Choosing within the gradient is the architectural decision the procurement-shaped framing makes invisible — and it is the decision with the largest effect on long-term exit cost.

Multi-cloud is not a strategy; portability is. Running production workloads simultaneously across two or three clouds is operationally expensive, organizationally complex, and rarely yields the resilience benefits used to justify it. The architecturally robust posture is single-cloud-with-portability: bind to the cloud you are on, but bound exit cost low enough that if you needed to move, you could, on a timescale measured in months rather than years. This is achievable and observable. Active multi-cloud parallel deployment, in most organizations, is neither.

These practices are not exotic. Most experienced cloud architects can recite them. The question is not whether they are known but whether they are applied as architecture — as part of the deliberation that produces the system, rather than as advice consulted after the bill arrives. A team that applies these practices consistently will, over years, build a system whose exit costs are bounded and whose optionality is preserved. A team that does not will build a system whose exit costs grow silently and whose optionality is gradually consumed, until the day arrives when there is none left to spend.


VII. The Bill You Don't See Yet

Cloud lock-in is not architecture's exception. It is architecture's largest single instance of the same problem the broader vendor lock-in literature has been describing for years. The decisions that produce it are made years before the trigger and look like infrastructure plumbing at the time. The trigger arrives on the counterparty's schedule — a price change, a service deprecation, an acquisition, a regulatory shift, a strategic refocus. By then the system property that mattered, exit cost, has accumulated to whatever the team's previous decisions made it. The discipline of designing for exit is the recognition that this accumulation is not weather; it is the consequence of choices, and the choices can be made differently.

The dependencies are not the asset. The optionality is. A system whose architecture preserves the freedom to choose differently later is a system that can adapt to whatever its environment becomes — provider repricing, service deprecation, acquisition, regulatory change, strategic pivot, the discovery that the workload would simply run better somewhere else. A system that has spent its optionality on short-term convenience can adapt only to the futures its remaining counterparties will permit. The first kind of system ages well. The second ages into a forced migration whose timing is decided elsewhere.

Every proprietary cloud service really is a future migration whose date the vendor will pick. The only open question is whether the architecture has bounded the cost of that migration to something the team can absorb on its own schedule, or whether the team will discover the cost on the day the bill arrives. Knowing how things work — how the service is structured, what alternatives exist, what the migration path looks like, what would trigger the need to take it — is what allows that question to be answered on the architect's terms rather than on the counterparty's. The same architectural reasoning explains the wave of software relicensing across 2021–2025. The mechanisms differ. The discipline is the same.


