AI Is the Greatest Productivity Tool in History — If You Can Afford It
April 1, 2026
Last month I sat in a co-working space in Da Nang, Vietnam, and ran the numbers. A serious AI workflow — Claude Pro, GPT-4o, plus API credits for building products — costs $50 to $200 per month. For a developer in San Francisco, that is coffee money. For my colleagues in Hanoi, Jakarta, or Nairobi, it is a meaningful slice of rent, fuel, and family support. Same tools. Same prices. Completely different realities.
Worth saying upfront: this is not purely a Global South problem. A developer in rural Appalachia, southern Italy, or post-industrial Poland faces much the same purchasing power gap. The access divide is not a simple rich-country vs. poor-country story. It is a capital-access story that cuts across geographies in ways we rarely acknowledge openly. But it falls hardest, and most consistently, on the developing world.
And it is not going away on its own.
AI Is a Multiplier, and Multipliers Amplify Gaps
The uncomfortable math is this: AI does not create productivity from nothing. It multiplies what you already have. Capital, skills, and access get you 10x more efficient. Without them, you watch the gap widen in real time.
The data makes this impossible to ignore. McKinsey's 2025 Global Survey found that 88% of organisations worldwide use AI in at least one business function, up from 78% the year before and roughly 20% back in 2017. But that growth is overwhelmingly concentrated in wealthy nations and large corporations. Small businesses across Southeast Asia, Africa, and Latin America remain largely on the sidelines. [1]
The UN Development Programme's December 2025 report, The Next Great Divergence, put it plainly: without aggressive policy intervention, AI risks reversing decades of narrowing development gaps between countries. Their chief economist warned we are entering "a new era of rising inequality between nations" after fifty years of convergence. [2]
MIT economist Daron Acemoglu, whose research on AI and labour markets is among the most rigorous in the field, reached a sobering conclusion: even when AI improves the productivity of lower-skill workers in certain tasks, it may increase rather than reduce inequality overall. The reason is structural. AI widens the gap between capital and labour income. The gains flow disproportionately to those who own the tools, not just those who use them. [3]
The Pricing Wall, In Real Numbers
Abstractions hide the problem. Let's use numbers instead.
Claude Pro: $20/month
GPT-4o Plus: $20/month
Midjourney: $10 to $30/month
Serious API access for developers: $50 to $200+/month
Stack a proper workflow and you are looking at $100 to $300 monthly before writing a single line of code.
The real pain point is API pricing, the per-token cost developers pay to build AI-powered products. Here is what the gap looks like at current rates (March 2026):
DeepSeek-V3.2 is nearly 9x cheaper than GPT-4o on input tokens. With cache hits, input drops to $0.028 per million — 90x cheaper than GPT-4o. This pricing is consistent whether you access it directly via api.deepseek.com or through aggregators like OpenRouter. [4]
Importantly, this is not a budget compromise. On developer-relevant tasks — coding, logic, and Southeast Asian languages — DeepSeek-V3.2 delivers performance statistically tied with GPT-4o, confirmed on LMSYS Chatbot Arena and the SEA-HELM leaderboard (March 2026). On Vietnamese, Thai, and Indonesian benchmarks it actually outperforms GPT-4o. You are not paying less for less. You are paying less for the same thing, or better.
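The ratios above are simple arithmetic on the quoted per-million-token rates, and they can be checked in a few lines. The DeepSeek rates are the ones cited in this article; GPT-4o's $2.50 per million input tokens is an assumption implied by the "nearly 9x" figure, not an official quote reproduced here.

```python
# Cost math for the per-million-token input rates quoted in this section.
# GPT-4o's rate is an assumption back-derived from the "nearly 9x" claim.

RATES_PER_M_INPUT = {  # USD per 1M input tokens, March 2026 figures
    "gpt-4o": 2.50,                   # assumed from the 9x ratio
    "deepseek-v3.2": 0.28,
    "deepseek-v3.2-cache-hit": 0.028,
}

def input_cost(model: str, tokens: int) -> float:
    """Dollar cost of sending `tokens` input tokens at the quoted rate."""
    return RATES_PER_M_INPUT[model] * tokens / 1_000_000

# A developer pushing 10M input tokens a month through each option:
for model, rate in RATES_PER_M_INPUT.items():
    monthly = input_cost(model, 10_000_000)
    ratio = RATES_PER_M_INPUT["gpt-4o"] / rate
    print(f"{model:24s} ${monthly:6.2f}/month  ({ratio:.1f}x vs GPT-4o input price)")
```

Run against a realistic monthly volume, the gap stops being abstract: the same 10 million input tokens cost $25 on GPT-4o, $2.80 on DeepSeek-V3.2, and $0.28 with cache hits.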
One access barrier does remain. Developers in countries with foreign currency restrictions or limited international banking — parts of Africa, Pakistan, Bangladesh — sometimes cannot pay at all, regardless of which platform they use. Thailand, Malaysia, Vietnam with an international card: generally fine. Nigeria, much of sub-Saharan Africa: a different story entirely.
Now look at purchasing power. Measured against median national incomes, that $20 monthly subscription carries a completely different weight depending on where you live. Developer and knowledge-worker salaries run higher than the national median in each country, but $20 is still a meaningfully heavier burden there than in the US or Western Europe. For an American knowledge worker, Claude Pro is a minor subscription. For a developer in Lagos building the same applications, it is a recurring financial decision with real trade-offs every single month.
The Agentic Multiplier: Tomorrow's Cost Problem
Here is the cost argument most people are missing, and it is more urgent than the chatbot math above.
The industry is rapidly shifting from single-prompt interactions to autonomous AI agents: systems that loop, self-correct, and chain multiple tasks together without human input. An agent researching a topic, drafting a document, checking its own work, and revising might execute 50 to 200 API calls to complete what looks like one task to the user.
Agentic workflows consume tokens at rates one to two orders of magnitude beyond simple prompts. A standard chatbot might consume 10,000 tokens per user session. An autonomous agent completing the same goal might consume 500,000. At GPT-4o pricing, that is the difference between cents and dollars per session, per user, per day.
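The cents-versus-dollars claim can be sketched with back-of-envelope math. The token counts come from the paragraph above; the split between input and output tokens and the GPT-4o rates of $2.50/M input and $10.00/M output are assumptions for illustration.

```python
# Back-of-envelope session cost. Rates and the 50/50 input/output split
# are assumptions; the 10k vs 500k token counts come from the article.

def session_cost(total_tokens: int, output_share: float = 0.5,
                 in_rate: float = 2.50, out_rate: float = 10.00) -> float:
    """Estimated USD cost of one session at per-million-token rates."""
    out_tokens = total_tokens * output_share
    in_tokens = total_tokens - out_tokens
    return (in_tokens * in_rate + out_tokens * out_rate) / 1_000_000

chatbot = session_cost(10_000)    # simple chat session
agent = session_cost(500_000)     # autonomous agent, 50x the tokens
print(f"chatbot: ${chatbot:.3f}  agent: ${agent:.2f}  ratio: {agent / chatbot:.0f}x")
```

Under these assumptions a chat session costs about six cents while the equivalent agentic run costs over three dollars, and that per-session cost multiplies across every user, every day.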
For a developer in Silicon Valley backed by venture capital, this is a manageable infrastructure cost. For a solo developer in Da Nang or Accra building the same product on personal savings, agentic AI is not an upgrade. It is a financial wall. The pricing gap that exists today becomes dramatically more acute as the technology advances toward the workflows that deliver the most real value.
To put a number on it: OpenAI's newest flagship agentic model, GPT-5.3-Codex (released February 2026 and built specifically for long-running autonomous coding workflows), costs $14 per million output tokens. That is 40 to 80x more per token than DeepSeek-V3.2, for gains that matter mainly in elite, multi-hour autonomous sessions. For the majority of everyday developer tasks, the performance difference does not justify the cost. But it illustrates where the premium tier is heading — and who gets left behind as agentic AI becomes the new standard.
The cost of AI access is not just high today. For the most powerful emerging use cases, it is about to get significantly higher.
Understanding the Business Side: This Is Not Villainy, It Is Math
To be fair to the companies building these tools, the economics are brutal.
Training a frontier AI model costs hundreds of millions to billions of dollars. Data centers require massive ongoing investment in hardware, energy, and cooling infrastructure. The compute supply chain (GPUs, specialised chips, cooling systems) is expensive, globally constrained, and dominated by a handful of suppliers. AI companies are spending billions annually on infrastructure alone, before a single dollar of profit. Add investor pressure for returns and the conclusion is clear: you price for the markets that can pay.
American and European enterprise clients generate the revenue that funds the next generation of models. Anthropic, OpenAI, and Google are not ignoring the rest of the world out of malice. They are following capital logic under enormous financial pressure. The infrastructure bills arrive every month regardless of how many users in developing countries have access.
The problem is not intent. The problem is that the incentive structure systematically deprioritises the majority of the world's population. When survival depends on revenue from high-income markets, everyone else becomes a secondary consideration. Not a conspiracy. Just the predictable outcome of private capital chasing the highest returns.
The Counterargument Worth Taking Seriously
Before going further, there is a counterargument worth taking seriously, because it is partially true.
AI may be the first major technology in history that dramatically lowers barriers to entry rather than raising them.
Think about what productive technology used to require: factories, shipping networks, capital equipment, distribution systems. Today, a solo developer in Da Nang needs a laptop, an internet connection, and sometimes $20/month. With that, she can write production-grade code, produce marketing content, design products, handle customer support, translate materials, and build prototypes. Tasks that previously required entire teams.
This is not trivial. Vietnam's e-commerce SMEs reached 42% AI adoption in 2025, many using free tiers or low-cost APIs. Vietnamese users show some of the highest AI trust and daily usage rates in Southeast Asia. Millions of regional freelancers now compete for USD-denominated contracts on Upwork and similar platforms, work they could not realistically access before AI tools compressed the skill gap with their higher-income competitors. [5]
Inference costs have fallen 100 to 280x in just two years. A further 5 to 10x drop by 2028 is widely projected. [6] The economics are moving in the right direction, fast.
So here is the real tension at the heart of this story: AI may compress the productivity gap between individuals even while widening it between countries.
A talented developer in Hanoi with free tier access is genuinely more competitive than she was five years ago. But the country she lives in, without large-scale compute infrastructure, sovereign AI investment, or representation in global AI governance, falls further behind the countries that have all three. Individual empowerment and national divergence are happening simultaneously.
That paradox is more interesting and more honest than a simple narrative of AI-as-inequality-machine.
The Walls You Cannot See
Pricing is the obvious barrier. The invisible walls compound the problem.
The education gap, upstream of everything. Before pricing even enters the picture, there is a curriculum problem. A developer in Accra who can afford Claude Pro still faces a disadvantage if her university AI curriculum is three years behind the frontier, her computing lab is inadequate, and the best technical resources are in English. The access gap starts before the first subscription decision.
Language bias, partially improving but slowly. AI models trained predominantly on English data perform measurably worse in other languages. IrokoBench, a rigorous benchmark developed by the Masakhane research initiative, evaluated 14 major language models across 16 African languages and found an average performance gap of around 45% between English and the African languages tested. In Southeast Asia, the 2025 SEA-HELM leaderboard shows real improvement: Chinese open models now match or outperform GPT-4o on Vietnamese, Thai, and Indonesian. Progress exists. It is uneven. [7]
Data colonialism. Users in developing countries generate data that trains these models. Their conversations, queries, and feedback improve the product. That value flows to US companies. The people contributing the data rarely share in the returns.
Talent drain. The best AI engineers from India, Vietnam, Eastern Europe, and Africa get pulled into US and Western companies with salaries local markets cannot match. The countries that most need local AI expertise consistently lose it to the countries that already have the most.
The capital wall. A developer in Da Nang trying to scale an AI-powered product does not only face higher API costs. She also faces higher capital costs, limited startup funding, potential inability to set up a US entity for payment processors like Stripe, and currency controls. Pricing is one wall. The financial infrastructure around building AI products is a second wall, less visible, that compounds the first.
The governance vacuum. The EU is regulating AI. China is aggressively subsidising domestic development. Developing countries have almost no seat at the table where the rules governing global AI are being written. They are consumers of decisions made without them.
The energy constraint. Running AI at scale requires massive, stable electricity. Data center power demand is growing exponentially. Developing countries with unstable grids, expensive electricity, and limited cooling infrastructure face a physical constraint that has nothing to do with subscriptions or pricing, and takes decades to solve.
The edge opportunity. One counterforce worth watching: the shift toward running capable AI models directly on consumer devices, smartphones and laptops with dedicated neural processing units. On-device AI bypasses the cloud entirely, removes per-token costs, and eliminates dependence on grid stability. It will not close the frontier capability gap, but it may meaningfully expand what is accessible to developers with limited cloud budgets.
China's Answer: A Completely Different Model
While American AI companies build behind paywalls and subscription gates, China took a fundamentally different strategic approach, and it is reshaping the global AI landscape.
In January 2025, DeepSeek, a Chinese startup based in Hangzhou, released its R1 model under a full MIT license. More precisely: it released open weights with commercial use permitted. This is technically distinct from fully open-source, since the complete training data and pipeline were not made public. But functionally, it was open enough for any developer worldwide to deploy, fine-tune, and build on without paying per token to a US company. [8]
DeepSeek claimed roughly $5.6 million for the official training run. External analysts at SemiAnalysis estimate the total infrastructure and R&D investment across the company's history was likely 50 to 100x higher, making the headline figure arguably misleading. Regardless, the result was real: a frontier-capable model, accessible to anyone, at a fraction of Western API prices.
The current direct API pricing reflects how aggressively DeepSeek has continued cutting costs. DeepSeek-V3.2, their latest model powering both chat and reasoning modes, runs at $0.28 per million input tokens and $0.42 per million output tokens. With cache hits, input drops to $0.028 per million — 90x cheaper than GPT-4o, and available at the same price whether accessed directly or through major aggregators. [4]
The pressure on the industry was immediate. On January 31, 2025, days after DeepSeek's release rattled global markets and wiped billions from Nvidia's valuation, OpenAI CEO Sam Altman stated in a Reddit AMA: "I personally think we have been on the wrong side of history here and need to figure out a different open source strategy." [9] The world's most powerful AI company, forced to publicly reconsider its core strategy by a startup operating on a fraction of its budget.
By late 2025 the downstream effects were visible in Southeast Asia. AI Singapore dropped Meta's Llama models in favour of Alibaba's open-weight Qwen for its flagship SEA-LION-v4 project, which now tops the SEA-HELM leaderboard for regional languages. [10] Chinese open-weight models reached 17.1% of global Hugging Face downloads, edging past the US share of 15.8%. [11]
China's Strategic Logic, and Its Limits
It is worth being clear about why China is doing this, because motivation matters.
Open-sourcing AI is China's national industrial strategy. Subsidise development through state and quasi-state funding, release models freely to drive global adoption, then monetise through cloud services, enterprise contracts, and ecosystem lock-in. Alibaba has confirmed this model explicitly: open-weight Qwen drives enterprises toward Alibaba Cloud. The free model is the acquisition funnel. It is the same playbook Google used with Android.
There is an instructive irony in the chip war. US export controls, designed to weaken China's AI by restricting access to advanced NVIDIA GPUs, appear to have backfired. Denied the best chips, Chinese labs innovated around efficiency, pioneering architectures that deliver competitive performance on constrained hardware. Scarcity became an engine of innovation.
Some analysts now argue that open-weight models may not just be an alternative to the subscription model. They may eventually dominate it. If they continue improving at their current rate and inference costs keep falling, the economics of closed subscription AI come under serious pressure within the decade. Not guaranteed, but no longer implausible.
One geopolitical variable deserves a sentence: TSMC. The global AI supply chain, open or closed, American or Chinese, runs on advanced semiconductors manufactured almost entirely in Taiwan. That is a single geographic point of failure that no country in the Global South controls or can significantly influence. Sovereign AI strategies that do not account for chip supply chain fragility are building on a fragile assumption.
The Privacy Trade-Off Nobody Mentions
Using DeepSeek means data and queries potentially pass through Chinese infrastructure and fall under Chinese data law. For individuals writing articles or doing research, this may feel abstract. For businesses handling sensitive customer data, financial records, or proprietary code, it is a real compliance and security issue.
Replacing dependence on US tech companies with dependence on Chinese ones is a trade-off, not a solution. Different risks, different political implications, but still not sovereignty.
The genuinely independent path is locally-hosted open-weight models, running DeepSeek, Llama, or Mistral on your own infrastructure, where data never leaves your control. Technically possible and increasingly practical, but it still requires compute hardware, technical expertise, and ongoing operational costs that most developers in developing economies cannot easily assemble.
Is Open Source Enough? The Balanced View
Open-weight models are the most powerful counterforce available right now. But the full picture is more nuanced than either advocates or sceptics typically admit.
The floor has risen dramatically. Adoption gains across Southeast Asia show this is not theoretical. Inference costs have collapsed, access has expanded, and the gap between what is possible with free tools versus paid ones is narrowing fast.
The ceiling remains out of reach for most. Fine-tuning on custom data, deploying agentic workflows at scale, building reliable production applications with frontier-class models: costs climb quickly. Open-weight licensing removes the subscription wall but not the hardware wall. And for the agentic workflows that represent the next wave of real productivity gains, the cost gap between rich and developing markets widens rather than narrows.
Open-weight models narrow the gap. They do not close it. Even if they eventually do close it for individuals, the country-level infrastructure gap (compute, energy, governance, talent) takes decades to address regardless of what happens to model licensing.
AI as Infrastructure: The Deeper Question
Electricity was once a premium service available only to wealthy businesses. Then it became infrastructure, a public utility underpinning every form of economic activity. The internet followed the same arc. Access to both is now treated, in most countries, as a basic requirement for economic participation.
AI is becoming the same kind of foundational layer. Not a luxury software tool, but infrastructure that will underpin commerce, education, healthcare, manufacturing, and governance. If access to that layer is distributed unequally, if some countries own it and others perpetually rent it, the economic consequences will compound with every passing year.
Call it productivity capital: in the AI era, meaningful competitive advantage derives from controlling the four components of the AI stack (compute, data, models, and distribution). Countries that lack these must perpetually rent productivity from those that have them. They do not just pay more for the same outcomes. They remain structurally dependent on decisions made in Palo Alto or Beijing that they had no voice in shaping.
The real question is not whether AI will widen inequality. The IMF, the UNDP, and leading economists largely agree it will, at least in the short to medium term. The real question is whether AI becomes genuine public infrastructure, like electricity, like roads, or a privately rented productivity layer controlled by a handful of companies and states. The answer to that question will shape economic history for the rest of this century.
A Global South Response Is Taking Shape
Vietnam is not acting alone. Across the Global South, governments are beginning to recognise that AI sovereignty is a strategic question, not a technology procurement decision.
Indonesia published its National AI Roadmap in July 2025, targeting sovereign data centres, high-performance computing infrastructure, and 100,000 trained AI professionals annually. [12] Brazil committed over $320 million to an AI-capable supercomputer intended to rank among the world's top five. [13] Across Africa, at least fifteen countries now have national AI strategies, with Ethiopia, Nigeria, Kenya, and others publishing frameworks between 2024 and 2025, aligned with the African Union's continental AI strategy. [14]
The ambition is real. The scale gap is sobering. In the first six months of 2024 alone, Microsoft, Amazon, Google, and Meta collectively spent over $100 billion on AI and cloud infrastructure. Brazil's $320 million supercomputer represents roughly 0.3% of that figure. Sovereign AI is not a binary choice but a portfolio of decisions: where dependence is strategically unacceptable, where it is tolerable, and where partnership makes more sense than ownership.
What Would Actually Help
Since diagnosis without prescription is just complaint, here is what meaningful change would actually require.
Regional pricing. Spotify and Netflix adopted purchasing power parity pricing after sustained market pressure. AI companies have not moved there yet, partly due to real technical challenges around arbitrage and VPN circumvention. But the model exists and works. This is ultimately a business decision, not a technical impossibility.
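The purchasing power parity model described above can be sketched in a few lines. The conversion factors and the price floor here are illustrative placeholders, not official World Bank figures or any company's actual pricing.

```python
# Sketch of PPP-adjusted subscription pricing, the model Spotify and
# Netflix use. All conversion factors are illustrative placeholders.

ILLUSTRATIVE_PPP = {   # hypothetical price-level ratios vs the US
    "US": 1.00,
    "Vietnam": 0.40,
    "Nigeria": 0.35,
}

def regional_price(base_usd: float, country: str,
                   floor: float = 0.25) -> float:
    """PPP-adjusted price, floored so no market drops below 25% of base
    (one common guard against cross-border arbitrage)."""
    factor = max(ILLUSTRATIVE_PPP[country], floor)
    return round(base_usd * factor, 2)

for country in ILLUSTRATIVE_PPP:
    print(f"{country:8s} ${regional_price(20.00, country):5.2f}/month")
```

The floor parameter is the interesting design choice: it caps the discount so VPN-based arbitrage never pays off by more than a fixed margin, which is one reason this is a business decision rather than a technical impossibility.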
Compute access initiatives. Projects attempting to provide shared GPU infrastructure to researchers and developers in underserved regions need sustained funding and political support to scale beyond pilot programs.
Open-weight models combined with local hosting investment. Governments and development organisations should be investing in the technical capacity to run capable models locally, building genuine independence rather than substituting one foreign dependency for another.
Education investment upstream. Lowering subscription prices matters little if the pipeline of developers capable of using those tools productively remains thin. Curriculum, computing labs, and English-language technical training are as important as API costs for closing the real gap.
Representation in AI governance. The rules being written now, about safety, data use, and what AI can and cannot do, will shape this technology for decades. Developing countries need seats at those tables, not just downstream implementation of decisions made elsewhere.
This is the instinct behind Vietnam's AI Law, passed by the National Assembly in December 2025 and effective from March 2026. The law establishes oversight requirements, risk classification, and human accountability standards, a regulatory foundation for local AI development rather than perpetual dependency on foreign platforms. [15] Indonesia, Brazil, and a growing number of African nations are pursuing parallel paths, each trying to convert consumer dependency into at least the beginning of self-determined AI capacity.
No Clean Ending, But a Clear Direction
There is no tidy resolution here. The business logic that drives pricing will not change without competitive pressure or policy intervention. Open-weight models are promising but compute-constrained. Regulation is fragmented and slow. China's alternative model is real but comes with its own dependencies and risks. Sovereign compute initiatives are dwarfed by private sector investment by orders of magnitude.
And the central tension remains: AI is simultaneously the most accessible productivity tool ever created and a technology whose deeper infrastructure (compute, data, governance, talent) is concentrating in fewer hands than any previous technology wave.
The people who most need productivity tools are being priced out of the most powerful ones in history. Individual access is expanding. Structural power is concentrating. Both things are true at the same time.
For developers and founders in emerging markets, the most important strategic response is not to wait for Western companies to lower prices or for governments to build sovereign compute. It is to build now, with open-weight models, on-device AI, and local infrastructure wherever possible, choosing architectures that do not lock you into renting your core capability from someone else indefinitely.
That gap between individual opportunity and structural inequality will not close itself. But it closes faster when people build toward sovereignty rather than waiting for it to be granted.
Footnotes
[1] McKinsey & Company, "The State of AI in 2025: Agents, Innovation, and Transformation," November 2025.
[2] UNDP, "The Next Great Divergence: Why AI May Widen Inequality Between Countries," December 2025.
[3] Daron Acemoglu, "The Simple Macroeconomics of AI," NBER Working Paper 32487, 2024 / Economic Policy, 2025.
[4] OpenAI pricing page, March 2026; DeepSeek official API pricing (api.deepseek.com), March 2026.
[5] Vietnam Ministry of Information and Communications SME AI survey, 2025; e-Conomy SEA 2025 (Google/Temasek/Bain).
[6] Epoch AI & Stanford AI Index 2025.
[7] IrokoBench / Masakhane research initiative, 2024; SEA-HELM leaderboard, November 2025.
[8] DeepSeek official R1 release, January 2025.
[9] Sam Altman, Reddit AMA, January 31, 2025.
[10] AI Singapore SEA-LION-v4 announcement, November 2025.
[11] Hugging Face + MIT download statistics, December 2025.
[12] Indonesia National AI Roadmap, Ministry of Communications and Digital Affairs, July 2025.
[13] Brazil sovereign AI supercomputer announcement, 2025.
[14] Carnegie Endowment for International Peace, "Understanding Africa's AI Governance Landscape," September 2025.
[15] Vietnam National Assembly, Law on Artificial Intelligence, December 2025.
Written from Da Nang, Vietnam. The author is a digital nomad, serial entrepreneur, and founder of Vita Crypt and Guide Phangan.