As artificial intelligence capabilities continue to advance, governments around the world are reassessing their dependence on large technology companies for critical AI infrastructure. By December 2024, open-source AI had become a central topic in discussions about digital sovereignty, national security and long-term technological resilience. Nations are increasingly asking how they can maintain control over essential digital systems while ensuring transparency, competitiveness and public trust.
Open-source AI models have matured significantly, offering performance levels comparable to many proprietary systems while giving governments greater control over data, deployment and customisation. This shift has created new opportunities for public sector organisations, as well as new responsibilities around risk management, security and ethical governance.
Why Digital Sovereignty Is Now a Strategic Priority
Digital sovereignty refers to a nation’s ability to shape and control its digital infrastructure, data flows and technological capabilities. The rise of large proprietary AI models has raised concerns about national reliance on a small number of private companies, most of which are headquartered outside the jurisdictions of the governments that depend on them.
For governments, this dependency creates challenges in several areas. There is a limit to how much transparency they can demand from proprietary systems, and without access to model weights or training data, public bodies cannot fully audit behaviour, detect bias or guarantee compliance. Vendor lock-in also restricts the ability to switch providers or develop domestic alternatives. Additionally, concerns about data privacy grow when sensitive public sector information must be routed through external platforms.
Open-source AI offers an alternative path. By allowing full inspection of model architecture, weights and documentation, open-source systems provide a level of transparency that aligns well with government accountability and security expectations.
The Rise of Open-Source AI Models in 2024
Throughout 2023 and 2024, several open-source models achieved strong performance in language understanding, reasoning and specialised domains. These advancements made it possible for public sector organisations to deploy AI solutions on their own infrastructure, with full control over data, security settings and update cycles.
In many nations, open-source models have supported national strategies aimed at building domestic AI capacity. They have empowered academic institutions, public research bodies and government departments to collaborate more freely without facing restrictive licensing terms. This trend has encouraged innovation while reducing reliance on single technology providers.

Benefits for Governments and Public Sector Organisations
Open-source AI brings several advantages that align with public sector needs.
Transparency and auditability
Governments can inspect the inner workings of open-source models, conduct independent safety evaluations and develop documentation necessary for compliance. This supports public accountability and helps organisations demonstrate fairness and reliability.
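One small, concrete piece of such an audit is verifying that locally held model weights match the digests the publisher released, so that evaluations are known to run against unmodified artifacts. The sketch below illustrates this, assuming (hypothetically) that the publisher ships a manifest mapping weight file names to SHA-256 digests; file and manifest names here are illustrative, not any real model's distribution format.

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_weights(weights_dir: Path, manifest: dict[str, str]) -> dict[str, bool]:
    """Check each weight file against its published digest.

    `manifest` maps file names to expected SHA-256 hex digests, as a
    publisher might ship alongside open model weights. A missing or
    tampered file is reported as False.
    """
    return {
        name: (weights_dir / name).exists()
        and sha256_of(weights_dir / name) == expected
        for name, expected in manifest.items()
    }
```

Integrity checking of this kind is only one layer; a full audit would also cover licence terms, training documentation and behavioural evaluation of the model itself.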
Security and data control
With open-source deployments, sensitive public sector data can remain within national borders and inside secure environments. This reduces exposure to external threats and ensures that government bodies retain full control over how information is processed.
Flexibility and customisation
Public institutions can adapt open-source models to local languages, policy frameworks and administrative processes. This reduces the risk of applying systems designed for different cultural, legal or operational contexts.
Reduced vendor lock-in
Open-source ecosystems allow governments to select from multiple implementation partners and to change providers with far less risk of operational disruption or excessive migration costs.
Implications for Law Firms and Insurers
As open-source AI adoption grows within governments, legal and insurance sectors face a new set of considerations.
For legal practitioners
Law firms increasingly advise clients on open-source licensing, model governance and compliance obligations. With greater transparency, lawyers have more material to review, such as model documentation, training processes and risk assessments. Open-source AI also raises questions about accountability when organisations modify or fine-tune models themselves.
For insurance organisations
Insurers must evaluate the risk implications of open-source AI deployments. While transparency helps reduce uncertainty, the responsibility for maintaining model integrity often shifts to the user. Insurers will need to adapt underwriting processes to account for models managed by the organisation rather than a vendor. This may include assessing internal governance maturity, security controls and the robustness of monitoring practices.
Challenges and Responsibilities
Open-source AI is not without its challenges, and governments must prepare for the responsibilities that come with increased autonomy.
Complexity in model maintenance may require new technical skills to ensure safe and reliable deployment. Security risks can arise if open-source models are not properly monitored or updated. The rapid pace of open-source development makes it difficult for regulatory frameworks to keep up. Accountability questions emerge when models are modified locally, raising uncertainty around liability for errors or harm. Interoperability issues can arise because different public agencies may adopt different open-source models or standards.
These challenges mean that open-source AI must be supported by strong governance and clear policy direction.
How Bold Wave Supports Digital Sovereignty Initiatives
At Bold Wave AI, we help governments, legal teams and insurers make informed decisions about open-source AI adoption. Our expertise covers strategy, governance, product development and operational deployment.
We support clients by designing open-source AI architectures that meet national security and compliance standards. We build transparent, explainable AI systems suitable for sensitive public sector applications. We create risk management frameworks that help organisations evaluate and control open-source model behaviour. We provide independent audits to ensure systems are safe, fair and reliable. We also assist with capability development through training programmes for staff working with AI.
Whether a government is aiming to enhance national resilience, build domestic AI talent or reduce reliance on proprietary systems, Bold Wave can help organisations move from aspiration to implementation.