Countries like India are taking steps to democratise AI development by investing in sovereign cloud infrastructure, creating open data platforms, and supporting local startups. However, these initiatives may not be sufficient to counteract Big Tech’s dominance.
Challenges of Big Tech Dominance
- Computational Costs: Building deep learning models incurs enormous computational expenses, making it difficult for smaller players to compete.
  - As of 2023, the Gemini Ultra model was reported to cost around $200 million to train, illustrating the financial barrier facing new entrants, who would otherwise need to rely on Big Tech for compute credits (a rough cost sketch follows this list).
- Deep Learning Popularity: Deep learning has become the preferred method in AI due to its generalised capabilities, but its high computational demands reinforce the dominance of established companies.
  - Big Tech firms advocate for ever-larger models, which helps them maintain their market position and recoup training costs through their primary revenue streams.
- Infrastructure and Tools: Proposals for public compute infrastructure or a federated model are emerging, inspired by India’s Digital Public Infrastructure model (the second sketch after this list illustrates the federated idea).
  - However, merely providing alternative infrastructure is insufficient; it must also compete with Big Tech’s offerings, which bundle suites of optimised developer tools that streamline AI workflows.
- Data Monopoly: Big Tech companies benefit from vast data streams across various domains, giving them a competitive edge through sophisticated “data intelligence.”
  - Smaller AI firms often end up being acquired by Big Tech, further entrenching this cycle of dominance.
- Academic Influence: The shift towards deep learning has resulted in commercial entities dominating AI research, with industry players outpacing academia in publications and citations.
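To make the compute barrier concrete, here is a minimal back-of-envelope sketch using the common approximation of roughly 6 × parameters × tokens floating-point operations for training a dense transformer. Every number in it (model size, token count, accelerator throughput, utilisation, and hourly price) is an illustrative assumption, not a reported figure for Gemini Ultra or any other model.

```python
# Back-of-envelope estimate of what a large training run costs.
# All numbers are illustrative assumptions, not reported figures.

def training_cost(params, tokens, gpu_flops, utilisation, usd_per_gpu_hour):
    """Rough cost via the common ~6 * N * D FLOPs approximation
    (forward + backward pass) for dense transformer training."""
    total_flops = 6 * params * tokens
    gpu_seconds = total_flops / (gpu_flops * utilisation)
    gpu_hours = gpu_seconds / 3600
    return gpu_hours * usd_per_gpu_hour, gpu_hours

cost, hours = training_cost(
    params=1e12,           # assumed 1-trillion-parameter model
    tokens=10e12,          # assumed 10 trillion training tokens
    gpu_flops=1e15,        # assumed ~1 PFLOP/s per accelerator (dense BF16)
    utilisation=0.4,       # assumed 40% sustained hardware utilisation
    usd_per_gpu_hour=2.0,  # assumed cloud price per accelerator-hour
)
print(f"~{hours / 1e6:.1f}M accelerator-hours, ~${cost / 1e6:.0f}M")
```

Even under these deliberately simple assumptions, a single run lands in the tens of millions of dollars, before counting failed runs, experimentation, data pipelines, and inference.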
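On the federated alternative: the core technical idea is federated averaging, in which participants train on their own data and share only model parameters with a coordinator, so raw data never leaves its owner. The toy sketch below uses a linear model, synthetic data, and equal-sized clients purely for illustration; it is not a description of any specific DPI proposal.

```python
# Toy federated averaging (FedAvg): clients train locally on private
# data and a coordinator averages the returned weights. Only model
# parameters travel between parties; raw data never does.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A few steps of local gradient descent on one client's data."""
    w = weights.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
    return w

# Three clients, each holding private samples from the same true model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

global_w = np.zeros(2)
for _ in range(20):                       # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)  # equal-weight average

print("recovered weights:", global_w)     # approaches [2.0, -1.0]
```

The design point is that a publicly coordinated, DPI-style arrangement becomes conceivable without any participant surrendering its data, though real deployments face issues (heterogeneous data, stragglers, privacy of shared updates) that this sketch ignores.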
A New Approach to AI Development
To address these challenges, a radically different approach to AI development is necessary—one that does not attempt to replicate the Big Tech model but instead seeks to change the underlying rules:
- Theory of Change: This model emphasises understanding causal mechanisms and developing targeted interventions rather than relying solely on big data.
  - It advocates for “small AI” that leverages domain expertise and lived experiences to create purpose-driven models (see the sketch after this list).
- Historical Precedents: Significant advancements in fields like medicine and aviation have historically relied on theory-driven models rather than sheer data volume. This approach should be revisited in AI development.
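To make the “small AI” idea tangible, here is a toy sketch of a theory-driven model: a mechanistic growth curve whose three parameters each carry a domain interpretation, fit to a small dataset. The logistic form and every value in it are illustrative assumptions, not examples drawn from the article.

```python
# Toy "small AI": encode domain theory as a mechanistic model with a
# handful of interpretable parameters, then fit it to modest data.
# The logistic-growth setting and all values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: capacity K, growth rate r, midpoint t0.
    Each parameter has a direct domain meaning."""
    return K / (1 + np.exp(-r * (t - t0)))

rng = np.random.default_rng(1)
t = np.linspace(0, 20, 15)  # only 15 noisy observations
y = logistic(t, K=1000, r=0.8, t0=10) + rng.normal(scale=20, size=t.size)

# Three parameters recovered from small data, because the functional
# form itself carries the theory a deep net would have to learn.
(K, r, t0), _ = curve_fit(logistic, t, y, p0=[800, 0.5, 8])
print(f"K~{K:.0f}, r~{r:.2f}, t0~{t0:.1f}")
```

Because the functional form encodes the causal story, fifteen noisy points suffice to recover interpretable parameters; a black-box model of comparable generality would need orders of magnitude more data and yield no such interpretation.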
Missed Opportunities and Future Directions
- The recent Global Development Compact is viewed as a missed opportunity for rethinking AI paradigms.
  - Although it promotes democratising AI, it risks assuming that large datasets and computational access alone will resolve the problems posed by Big Tech monopolies.
- Continuing down the current path of prioritising big data and deep learning only deepens dependence on Big Tech, hindering genuine progress toward equitable AI development.