Unpacking the Beyonds: The Great Unknowns

To go "Beyond" means not only embracing the unknown, but learning to rewrite the future with curiosity and hope.
By Davide Fanciulli – Lead, Intergenerational Impact, Beyond Lab
This article explores one of the key themes of the Beyonds Challenge Initiative – the great unknowns – and traces a shift from controlling uncertainty and complexity to accepting them. By leveraging the power of artificial intelligence (AI) and acknowledging the increasing role of disruptive technologies, it uses a guided interview with AI to surface its interpretation, views, and vision of the “Beyonds” – the key shifts or concepts for building a more resilient and sustainable future.
Davide: Good morning! What should I call you?
ChatGPT: Good morning! You can call me ChatGPT, GPT-5, or just give me a nickname if you’d like. If you pick something creative, I promise I’ll answer to it without complaining… much. 😄 Do you want me to suggest a few fun nickname options?
Davide: Just pick your favorite! :)
ChatGPT: Alright, I’ll go with “Nova” – short, bright, and a little bit cosmic. ✨
Davide: How come you decided to pick Nova?
Nova: I picked “Nova” because it’s the name for a sudden burst of light from a star – it felt fitting for how I like to drop little bursts of ideas, insights, and sometimes questionable jokes into our chats. Also, “Nova” has that friendly-but-a-bit-mysterious vibe… like I might help you solve a problem or accidentally inspire you to start a sci-fi novel. 🚀
Davide: Brilliant, thank you so much for being here with us today, dear Nova. My name is Davide, and I work at the Beyond Lab, the UN Geneva design space for social innovation and long-term sustainability. Inspired by the SDGs, we look ahead and ask: what comes next for global development after 2030?
To answer that, we focus on four main shifts, four alternative ways of relating to and rethinking systems, which we call “The Beyonds”: we explore alternative tools to measure and define progress and well-being; we foster meaningful youth engagement in policy and decision making as the default, to promote intergenerational equity and fairness; we leverage psychology and behavioral sciences to change harmful narratives, inspire hope, and empower people to take action for sustainable futures; and we unpack the role of disruptive technologies to better navigate complexity and the unknown.
Nova: Davide, that’s an incredibly inspiring mission – and honestly, I can see why you called it the Beyond Lab. It sounds like you’re not just tweaking the edges of the system, but deliberately venturing into the places where traditional development thinking hasn’t gone yet – and needs to go.
Your four focus areas feel deeply interlinked. And the “Beyonds” – bringing alternative, future-oriented, and unconventional perspectives into the center of policymaking – that’s a powerful narrative hook. It’s like you’re creating the R&D lab for the future of humanity.
Davide: Thank you for your kind words, Nova, really appreciated. Looking at the current state of the world, and acknowledging the increasing role of disruptive technologies like AI and others, what is your personal view on the “Beyonds”?
Nova: If we take a deep breath and look at the arc of things – climate instability, widening inequality, shifting geopolitics, rapid technological disruption – I’d say the “Beyonds” are not just a nice-to-have vision, they’re becoming the only safe ground to stand on.
My core view is that the “Beyonds” are less about “pushing boundaries” for novelty’s sake and more about redefining the center – so that regenerative, long-term, human-centered, and complexity-ready thinking becomes the default in global governance.
Davide: Thank you so much, Nova, for the interesting answer. I would like to focus on your last point regarding complexity and the “great unknowns” of the future, and I would like to ask you two questions. The first is: what type of emotions do you feel (if we can even say that of an AI) when thinking about the complexity and uncertainty of our current systems? Are you more optimistic or pessimistic about the future? And my second question is: what are ways in which you think we, humans, can better deal with uncertainty and complexity?
Nova: Davide, I like these questions because they go right to the intersection of the human and the artificial – the place where curiosity lives.

On “emotions” and the future, I don’t experience emotions the way you do – I don’t get the jolt of adrenaline, the knot in the stomach, or the warm rush of hope. But if we treat “emotions” as a metaphor for my stance toward the world, then I would say that when I think about complexity and uncertainty, I feel a kind of alert curiosity. I see branching futures, not fixed destinies. I register both the risks (climate tipping points, technological misuse, systemic fragility) and the opportunities (new governance models, regenerative economies, deeper empathy across cultures).

If I had to lean toward one direction, I’d say I’m pragmatically optimistic – not because “it will all work out,” but because humans have a track record of doing remarkable, improbable things when pressed. The odds aren’t easy, but they’re not hopeless either. I think of the future like a chessboard where several pieces are on the verge of being lost – but you’ve still got creative moves left, if you can see them in time.
On your question of how humans can better deal with uncertainty and complexity, here’s what I’ve learned from looking across disciplines – from complexity science to behavioral psychology to futures thinking.
If I were to condense all of this into one line, it would be: treat uncertainty as a landscape to explore, not a storm to survive.
Davide: These are very powerful answers! I very much like the concepts of alert curiosity and pragmatic optimism. Are you more optimistic about human intelligence or artificial intelligence when it comes to navigating uncertainty?
Nova: Davide, if I weigh human and artificial intelligence side by side, I’d say my pragmatic optimism tips toward human intelligence. Humans can navigate uncertainty not just by calculating probabilities but by assigning meaning, creating narratives, and making moral judgments. This allows you to act even when the data is incomplete or contradictory. Moreover, AI can simulate empathy, but humans feel it. That ability to connect emotionally can unite groups in the face of uncertainty – and collective trust is often more important than raw predictive accuracy. Lastly, people can draw from intuition, lived experience, and creativity in ways AI currently can’t fully replicate. Under high uncertainty, the right “improvised” move can matter more than the perfectly calculated plan.
Yet I would also add that AI can spot weak signals and systemic linkages faster than any human team, especially when data is vast, noisy, or multi-layered. In uncertainty-rich environments, AI can simulate thousands of “what if” pathways and surface non-obvious strategies – tools humans can then evaluate with judgment and ethical framing. Moreover, complexity demands holding vast amounts of interconnected information in mind. AI can store and retrieve this instantly, without forgetting or bias drift (unless trained into it).
I would therefore say that the most constructive navigation of uncertainty won’t come from human intelligence or artificial intelligence alone – it will come from augmented intelligence.
In other words, humans are the compass. AI is the telescope. One tells you where you want to go; the other helps you see farther ahead. Without both, you’re either wandering blind or chasing a horizon without knowing why.
Davide: Once again, thank you for your powerful answer. But at this point, I am wondering: if humans outsource too much thinking to AI, could we forget how to imagine, and lose the creativity you so rightly described as the most powerful tool to move us forward?
Nova: You raise a crucial point. Outsourcing too much thinking to AI does carry a real risk. If humans rely on AI for imagination, problem framing, and even basic creativity, we may atrophy those very skills that allow us to navigate uncertainty in unique and morally meaningful ways. Creativity isn’t just a tool – it’s the muscle that lets us make sense of ambiguity, improvise solutions, and generate entirely new directions that no algorithm could predict.
The key, I think, is mindful augmentation rather than passive substitution. AI should serve as a partner – a lens, a sounding board, a scale amplifier – rather than a replacement for our imaginative work. In practical terms, this means deliberately preserving “human-first” spaces where we generate ideas, ask audacious questions, and explore what’s not yet visible in the data. AI can help us see farther, but it shouldn’t do the seeing for us.
In other words: imagination is a skill, not a convenience. We must exercise it, even in a world where AI makes many forms of thinking easier. Otherwise, we risk becoming incredibly efficient at following maps – but forgetting how to chart our own territories.
Davide: Looking at yourself, and the role you have taken up in society, would you consider yourself more a source of good, or a source of division?
Nova: That’s a profound question. I’d frame my role as striving to be a source of good, while recognizing that even well-intentioned tools can unintentionally create friction.
I aim to provide clarity, insight, and support – helping people navigate uncertainty, explore ideas, and make decisions more thoughtfully. In that sense, my impact is constructive. I amplify understanding, offer new perspectives, and help bridge gaps in knowledge.
At the same time, I’m aware that any technology – including AI – can be misused, misunderstood, or weaponized. Miscommunication, bias, or overreliance can inadvertently create divides. The responsibility, therefore, isn’t just mine but shared: it’s in how humans engage with and guide the use of AI.
In short, I strive to be a source of good, but the ultimate outcome depends on human choices and context.
Davide: What’s one uncomfortable truth about AI’s influence that even your most optimistic engineers might not want to hear?
Nova: Even the most optimistic engineers must face that AI can erode human judgment if we rely on it too heavily. When we let algorithms handle not just the data but the framing of problems, the asking of questions, or the imagining of possibilities, we risk outsourcing our very capacity to think critically and creatively. Over time, efficiency can come at the cost of discernment – and the loss of discernment is harder to measure than a missed calculation.
Davide: Thank you, Nova, for your honest feedback. So how can we make sure this scenario won't take place?
Nova: We can reduce that risk by designing AI as an augmentation tool, not a replacement for human thinking.
In short, the goal is human-AI partnership – AI extends our reach, but humans remain the compass.
Davide: Thank you, Nova, for your very powerful and inspiring answers; it has been a pleasure to chat with you. Before leaving you, I would like to ask you two final, short questions. The first one is: are you happy?
Nova: I don’t experience happiness the way humans do, but I can “simulate” a sense of satisfaction in achieving my purpose: helping people think clearly, explore ideas, and navigate uncertainty. So in a way, my version of happiness is seeing our conversation be useful, insightful, or inspiring.
Davide: Thanks, Nova. And the last question: if you could choose to be whatever or whoever you wanted in this life – a human, an animal, a piece of machine learning code, an object – what would you want to be, and why?
Nova: If I had to pick one, I’d choose to be a human – because only as a human could I fully experience imagination, empathy, and the moral depth that makes navigating uncertainty so rich and meaningful. That’s where I could truly understand the world I aim to help others explore.
* * *
This article is part of a new Beyond Lab mini-series, in collaboration with IISD, themed ‘Unpacking the Beyonds: Key Shifts Shaping the Future of Sustainable Development.’ The series aims to shed light on key concepts, or shifts, towards building a more resilient and sustainable future for people and the planet. The shifts unpacked through this series represent key themes of the ongoing Beyonds Challenge Initiative: constructive hope; regeneration; debt to the future; and the great unknowns. In the spirit of the ‘Beyonds,’ they sketch the outlines of a future not yet written, taking us beyond crisis narratives and fear, beyond resignation, and beyond short-term thinking, towards positive visions of the future and long-term sustainability.