By beiker
DeepSeek and the real innovation of open-source AI.
After nearly seven years of systematic engagement with AI, during which I have worked on a range of fascinating and innovative projects and read a substantial number of research papers, I believe I can offer a perspective on a topic that is widely discussed today.
The real revolution in AI at the moment is not DeepSeek itself, but rather the emergence of open-source models, of which DeepSeek is a part. These models have been developed by scientists, experts, specialized teams, and companies, and they are released freely, allowing modifications and redistribution. They are based on the principles of open-source software, much like Linux, WordPress, OpenOffice, and many other foundational technologies that form the backbone of today’s internet.
The Open-Source Breakthrough
A key moment in this revolution was Meta's strategic decision under Mark Zuckerberg to release LLaMA in February 2023, which became the most well-known family of openly available AI models. The first release was restricted to research use, but from Llama 2 (July 2023) onward the models have been available for both commercial use and modification, enabling the creation of new models based on them. At the same time, the Hugging Face platform, founded in 2016, has played a critical role in supporting the open-source AI community by hosting models and datasets. Thousands of AI developers freely share their work on Hugging Face, further expanding access and innovation in the field.
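To illustrate how low the barrier to entry has become, a few lines of Python with the widely used transformers library are enough to download an open model from the Hugging Face Hub and generate text with it. This is a minimal sketch; the model name below is only an example, and any compatible checkpoint would work.

```python
# Minimal sketch: loading an open model from the Hugging Face Hub.
# Assumes `pip install transformers torch`; the model name is only an example.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="distilgpt2",  # a small, freely downloadable demo model
)

result = generator("Open-source AI matters because", max_new_tokens=40)
print(result[0]["generated_text"])
```

The first call downloads the weights once and caches them locally; after that, generation runs entirely on your own machine.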
Alongside these advancements, new tools have emerged that facilitate Retrieval-Augmented Generation (RAG), a technique that enhances the accuracy and reliability of generative AI models by retrieving relevant information from external sources and injecting it into the model's prompt at query time.
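To make the idea concrete, here is a minimal, self-contained sketch of the retrieval step at the heart of RAG: documents and the user's question are embedded as vectors, the most similar document is retrieved, and the result is pasted into the prompt sent to the generative model. Real systems use dedicated embedding models and vector databases; the TF-IDF retriever below is only a stand-in to show the flow.

```python
# Minimal RAG sketch: retrieve the most relevant document, then
# augment the prompt with it. TF-IDF stands in for a real embedding model.
# Assumes `pip install scikit-learn`.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "LLaMA was released by Meta in February 2023.",
    "Ollama runs large language models on a local machine.",
    "Hugging Face hosts open models and datasets.",
]

question = "Which tool lets me run an LLM locally?"

# 1. Embed the documents and the question into the same vector space.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([question])

# 2. Retrieve the document most similar to the question.
scores = cosine_similarity(query_vector, doc_vectors)[0]
best_doc = documents[scores.argmax()]

# 3. Augment the prompt with the retrieved context before generation.
prompt = f"Context: {best_doc}\n\nQuestion: {question}\nAnswer:"
print(prompt)  # this prompt would then be sent to the LLM
```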
Additionally, Ollama, another open-source tool, makes it straightforward to run large language models (LLMs) directly on a personal computer. This is particularly attractive to AI developers, researchers, and businesses concerned with data control and privacy, since prompts and data never have to leave the machine.
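For instance, once Ollama is installed and a model has been pulled (e.g. with `ollama pull llama3`), it serves a local HTTP API that any program can call. A short sketch, assuming the default setup:

```python
# Minimal sketch: querying a model served locally by Ollama.
# Assumes Ollama is running and a model was pulled, e.g. `ollama pull llama3`.
# By default the API listens on localhost, so nothing leaves the machine.
import json
import urllib.request

payload = {
    "model": "llama3",  # example model name; use whichever model you pulled
    "prompt": "Explain retrieval-augmented generation in one sentence.",
    "stream": False,    # return the full answer as a single JSON object
}

request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])
```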
Democratizing AI with Localized LLMs
These developments have empowered companies, universities, and individuals to run large language models internally and train them with specialized knowledge and domain-specific data. While running or training LLMs does require significant computational power, many have turned to NVIDIA’s graphics cards, originally designed for gaming but now excelling in AI applications.
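Fine-tuning is where those GPUs earn their keep. A common, resource-friendly approach is parameter-efficient fine-tuning, for example LoRA via the Hugging Face peft library, which trains small adapter matrices instead of the full model. The sketch below only sets up such a model; the checkpoint name and hyperparameters are illustrative, not a recipe.

```python
# Sketch: preparing an open model for LoRA fine-tuning on domain data.
# Assumes `pip install transformers peft torch` and a CUDA-capable GPU.
# Checkpoint and hyperparameters are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

checkpoint = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # example open checkpoint

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.float16,
    device_map="auto",  # place layers on the available GPU(s)
)

# LoRA trains small low-rank adapters instead of all model weights.
lora_config = LoraConfig(
    r=8,                                   # rank of the adapter matrices
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],   # attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
# From here, the model trains with a standard Trainer loop on domain text.
```

Because only the adapters are updated, this kind of fine-tuning fits on a single consumer GPU, which is exactly what makes domain-specific local models practical.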
Just last month, NVIDIA announced Project DIGITS, a personal supercomputer that can run and train AI models from an office or home. Only slightly larger than a smartphone box, it is priced at $3,000. While this may seem expensive for an individual, it is negligible for a company considering the capabilities it unlocks.
The Future of AI: Decentralization and Ethical Considerations
Bringing all these factors together, we see a growing opportunity for decentralization and democratization of AI model usage. This shift reduces dependency on tech giants like OpenAI, Google, and Microsoft, allowing more organizations and individuals to leverage AI for their specific needs.
However, the widespread use of these technologies also raises significant ethical concerns that cannot be ignored. This is an area I am deeply engaged with at the European level, working to ensure AI tools are accessible and beneficial for both professional and personal use. In particular, I collaborate as an expert with the Council of Europe, where we are developing an innovative tool focused on building fundamental democratic competencies in an AI-dominated world.
Conclusion
I firmly believe that the future of AI lies in open-source models and the development of decentralized, smaller-scale models tailored to specific tasks with well-defined datasets. While massive models will continue to evolve, the shift toward more specialized AI solutions is inevitable.
This evolution marks a significant step toward greater accessibility, privacy, and user control—as long as we balance innovation with ethical responsibility.