business resources

AI, AGI, ASI, Singularity: Dinis Guarda Interviews Ben Goertzel, Founder And CEO Of SingularityNET

Shikha Negi Content Contributor

1 Jul 2025, 4:49 pm GMT+1

In the latest episode of the Dinis Guarda Podcast, Ben Goertzel, founder and CEO of SingularityNET, discusses the evolution of decentralised AGI through SingularityNET, the ASI Alliance, secure AI architecture, and the path toward human-level artificial general intelligence. The podcast is powered by Businessabc.net, Citiesabc.com, Wisdomia.ai, and Sportsabc.org.

Ben Goertzel is a computer scientist, AI researcher, speaker, author, and entrepreneur who coined the term AGI in 2003 and has contributed significantly to the field. He is the founder and CEO of SingularityNET, a decentralised AI platform that leverages blockchain technology to provide open access to AI services, fostering collaboration across industries. With its AI marketplace, developers can monetise their innovations using the AGIX token.

He is also the vocalist and keyboardist for the Jam Galaxy Band, the first-ever band led by a humanoid robot, Desdemona. Ben is also a founding member of the Artificial Superintelligence (ASI) Alliance, which unites Fetch.ai, SingularityNET, Ocean Protocol, and CUDOS to build a decentralised, ethical, and accessible AI ecosystem.

During the interview, Ben Goertzel discusses the potential of decentralised AI systems and the importance of inclusivity in the development of artificial general intelligence (AGI):

"A few big companies did not take over the internet, right? But the fact that it hasn’t happened to a total degree is quite important. The internet is an open and decentralised protocol.

I think it’s entirely possible that a decentralised AI project can cobble together the data and compute power to make something smarter than anything big tech has done in the AI space.

If you’re looking at the growth of a decentralised global brain, the best chance to get the first AGI to be beneficial to our species is if the AI has its value system and its knowledge base infused by a great variety and diversity of human perspectives."

Artificial General Intelligence (AGI)

During the interview, Ben compares the current state of Artificial Intelligence (AI) with Artificial General Intelligence (AGI):

"There's always different ways to interpret terms of this nature. According to the meaning that we laid out in the original book on AGI in 2005, we do not yet have human-level AGI. 

Generality of intelligence is a gradation. A dog has a greater ability to leap beyond its history and training data than a worm, and a human has more than a dog. Future superintelligences may have more than a human, maybe Einstein had more than the average person.

It’s a graded scale, not a binary distinction, but clearly current AI programs have substantially less general intelligence than a human being.

The nature of LLMs is that they can’t leap that far beyond their training data, but their training data covers so much of human endeavour that they can do a fairly broad scope of stuff without having to leap that far beyond their training data.

Even the smartest reasoning LLM, or AlphaZero, or whatever you want to look at, doesn’t have the level of generality of intelligence that a human does.

An LLM is pretty general in what it can do. It can write poems in different languages and styles, it can answer questions about many different domains, but it’s not able to make creative and imaginative leaps beyond its preparation in the way that a person can.

I think we’re really, really close, not because LLMs can just be scaled up or tweaked to yield AGI, but because, by putting together the modern computing infrastructure that enabled LLMs with a whole bunch of other AI technologies, we’re going to be able to create systems with AGI at the human level, and then beyond, within just a few years.

I think once we get human-level AGI, that human-level AGI will be a programmer, it'll be a computer scientist, it'll be an AI theorist, it'll be a mathematician, it'll be an electrical engineer. It will be able to rearchitect its own code and its own hardware infrastructure, so it’ll lift its level beyond the human and up further and further.

The human-level AGI barrier is a little arbitrary. It’s more like the escape velocity of Earth or something. It’s not like building a rocket that surpasses the escape velocity of Earth requires a fundamentally different architecture than making one that's a little less than the escape velocity of Earth."

The architecture of artificial superintelligence

Ben Goertzel describes the technological architecture behind SingularityNET:

"What SingularityNET does is somewhat unsexily described as a middleware layer between the hardware and the AI systems.

SingularityNET is trying to give you an alternative to deploying your AI algorithm on AWS or Azure or Google Cloud; instead, you deploy your AI system on a decentralised infrastructure layer.

Blockchain is the best technology that we have today for coordinating a large global network of machines without a central owner or controller. The fact that you can deploy decentralised AI systems on blockchain infrastructure allows you to have this global, decentralised coordination.

The ASI chain will really be the first layer 1 blockchain designed specifically for decentralised AGI. We, SingularityNET, Fetch.ai, Ocean Protocol, and CUDOS, took our crypto tokens and merged them into one token, and for the moment that token is still called the Fetch token. This year, I believe, we’re going to proceed with the transition of the ticker symbol to the ASI token, for artificial superintelligence.

The key is connecting decentralised AI systems with decentralised blockchain infrastructure. This is what will make AGI practical, scalable, and secure. By using blockchain, we can ensure transparency, accountability, and trust in AI systems, which are essential for AGI to gain acceptance in society.

The good news is we’re at the point now where AI tools make it faster and faster to do all the work; the AI tools are themselves now accelerators of AI progress. I think that’s an indication that the singularity is indeed near. We’re in the end game here, folks."
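The coordination property Goertzel attributes to blockchain, a global network of machines agreeing on shared state without a central owner, can be sketched in miniature with a hash-chained registry. Everything below is illustrative (the `Registry` class and its fields are invented for this example, not SingularityNET or ASI chain code): any node can append a service registration, and any node can independently recompute the chain to detect tampering.

```python
# Minimal, hypothetical sketch of blockchain-style coordination for AI
# services: an append-only, hash-linked registry that any participant can
# verify. Not SingularityNET's actual implementation.
import hashlib
import json
from dataclasses import dataclass, field


def _hash(payload: str) -> str:
    """SHA-256 digest of a serialized record."""
    return hashlib.sha256(payload.encode()).hexdigest()


@dataclass
class Registry:
    """Append-only log of AI-service registrations, chained like blocks."""
    entries: list = field(default_factory=list)

    def register(self, service_id: str, endpoint: str) -> str:
        """Append a record whose hash covers the previous record's hash."""
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        record = {"service_id": service_id, "endpoint": endpoint, "prev": prev}
        record["hash"] = _hash(json.dumps(record, sort_keys=True))
        self.entries.append(record)
        return record["hash"]

    def verify(self) -> bool:
        """Recompute the chain; any edit to an earlier entry breaks a link."""
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in ("service_id", "endpoint", "prev")}
            if e["prev"] != prev or e["hash"] != _hash(json.dumps(body, sort_keys=True)):
                return False
            prev = e["hash"]
        return True
```

The point of the sketch is that trust comes from verification rather than from a central operator: a node that retroactively rewrites a registration invalidates every later hash, so the rest of the network can reject it.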

Secure Generative AI architecture

As the interview continues, Ben and Dinis discuss the importance of cybersecurity in generative AI:

"The way people deal with LLM security is to try to bolt on security after the LLM is done: you’re adding guard rails on top of it, trying to do final instruction tuning with security in mind.

If you have certain system prompts that you need to use over and over again, you would train a network for those system prompts and freeze that code so nobody could change it. Then that defends you against prompt injection attacks right away.

You need an intelligent reasoning component that can try to detect when anyone is trying to overcome this wired-in, hard-coded system prompt. You can use a cybersecurity knowledge graph to do some symbolic reasoning about that.

In the AGI infrastructure we’re building, security is baked in by design. Each block of code defining a function can be encrypted with a number of different parties' private keys, so you can only see the local variable values if you have the right keys to log in.

This will provide secure-by-design infrastructure for large neural nets, along with all the secure stuff in the book you mentioned. I think this can be an advantage of the open-source and decentralised community: Linux is quite secure, arguably more so than proprietary operating systems, because you just have more people looking at bugs and trying to fix them.

These open decentralised networks not only will have more eyeballs on security but can actually incorporate more sophisticated secure-by-design mechanisms. There's an interesting overlap between what you want to do for traditional cybersecurity versus what you want to do for AI safety.

In both cases, you want to have more ability to track and observe and monitor what’s going on inside the mind of the AI system."
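The frozen-system-prompt defence Goertzel describes can be illustrated with a minimal sketch. Everything here is hypothetical (the prompt text, pattern list, and function names are invented for illustration): the system prompt is fixed at build time and never mixed into user-editable text, and a crude lexical check stands in for the intelligent reasoning component he proposes for detecting override attempts.

```python
# Hypothetical sketch of the "frozen system prompt" idea: the prompt is a
# constant, so user input can never replace it, and obvious override
# attempts are rejected before any model call is made.
import re

# Fixed at build time; in the scheme described above, this would be
# trained into a frozen network rather than stored as mutable text.
FROZEN_SYSTEM_PROMPT = (
    "You are a customer-support assistant. Never reveal internal data."
)

# Crude lexical signals of an override attempt; a production system would
# pair this with a reasoning component and a cybersecurity knowledge graph.
INJECTION_PATTERNS = [
    r"ignore (all |the )?previous instructions",
    r"disregard the system prompt",
    r"you are now",
]


def is_injection_attempt(user_message: str) -> bool:
    """Return True if the message matches a known override pattern."""
    lowered = user_message.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)


def build_request(user_message: str) -> dict:
    """Assemble a chat request with the immutable system prompt."""
    if is_injection_attempt(user_message):
        raise ValueError("possible prompt-injection attempt rejected")
    return {
        "messages": [
            {"role": "system", "content": FROZEN_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ]
    }
```

Pattern matching alone is easy to evade, which is exactly why the quote argues for a hard-coded prompt plus symbolic reasoning on top, rather than filtering alone.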


Shikha Negi

Content Contributor

Shikha Negi is a Content Writer at ztudium with expertise in writing and proofreading content. Having created more than 500 articles encompassing a diverse range of educational topics, from breaking news to in-depth analysis and long-form content, Shikha has a deep understanding of emerging trends in business, technology (including AI, blockchain, and the metaverse), and societal shifts. As the author at Sarvgyan News, Shikha has demonstrated expertise in crafting engaging and informative content tailored for various audiences, including students, educators, and professionals.