What Can't We Know About AI? | Knowing the Mind You're Working With | The Infinity Machine | and more...
Artificiality Summit 2026: What Can't We Know About AI? The first in a series about our speakers.
We are dedicated to unraveling the profound impact of AI on our society, communities, workplaces, and personal lives. To truly grasp this transformation, we ground our approach in critical thinking, logical analysis, and the scrutiny of underlying assumptions — the core practices of philosophical inquiry.
Explore the concept of open-endedness, a key trait for artificial superhuman intelligence. Learn how AI must transcend pattern recognition and develop unbounded creativity, setting its own goals for continual innovation. Discover the challenges in achieving true open-ended AI.
Discover how human creativity and AI can collaborate rather than compete. Explore the distinctly human qualities of flexibility and diverse response, alongside AI's ability to break creative blocks. Learn how fields as different as architecture, analogical reasoning, and rap benefit from AI's capacity for divergent thinking.
AI and network analysis reveal innovation's complex structure, manage creative tensions, and amplify human potential by uncovering patterns in invention data. AI guides the process, but human intuition remains crucial in navigating the unequal market of ideas.
The current obsession with AGI, fueled by the hype from companies like OpenAI, is a dangerous distraction we must firmly reject. Don't fall for the red herring argument that we need superintelligent AI to save us from ourselves. It's an insult to human intelligence and agency.
Artificial intelligence will never understand why I found it important to write this piece today, or how I hope to connect with readers through it. Only you, my fellow humans, can have similar experiences.
We are witnessing the emergence of agentic and ubiquitous AI systems that will reshape the digital world. We will see vastly more machine content than human content: nothing will be comprehensible without a machine interpreting it for us. By machines, for machines is the new paradigm.
While considering the risks of OpenAI releasing an AI voice that appears designed to draw people in, I found an interesting thread on the r/artificialinteligence subreddit in which people shared their preference for talking with AI over other humans.
I believe accountability becomes more pronounced in the era of AI. While the accountability dynamics between humans and machines may differ, people will invariably argue that anyone with access to such advanced AI capabilities had the means to make better-informed decisions.
In the era of generative AI, characterized by seamless co-creation and the economics of intimacy, worries about AI encroaching on agency have more existential elements. They go to the essence of humanity, as machines now actively participate in significant aspects of original thought.
On a surface level, AI might seem like the perfect solution to the challenges of social learning. But is this really true? Is it actually what we want?
In science, traditional human search strategies are like wandering through a wilderness with limited visibility, relying on intuition and serendipity. AI, in contrast, can take in the whole landscape, quickly and effectively exploring the vast range of possible combinations.
Context is everything—whom you're with, where you're going, and why. Machines currently lack the ability to understand this context, but generative AI, especially modern large language models, holds the promise of changing that.
AI is changing how you think. Get the ideas and research to keep you the author of your own mind.