
Open Discussion on AI: The AI Doc: Or How I Became an Apocaloptimist

The AI Doc: Or How I Became an Apocaloptimist is not just a documentary about technology; it is a conversation about the future of humanity.


In The AI Doc: Or How I Became an Apocaloptimist, Artificial Intelligence is explored through two opposing perspectives: the pessimist and the optimist. One side warns of job displacement, misinformation, autonomous weapons, and even existential threats.

The other highlights AI’s potential to transform healthcare, accelerate scientific discovery, address climate challenges, and expand human creativity. Rather than choosing one side, the film invites viewers to sit with both.


The documentary introduces the idea of being an "Apocaloptimist": someone who acknowledges the serious risks AI presents but still believes in humanity's ability to guide the technology responsibly. It challenges the extremes of panic and blind enthusiasm, arguing that neither fear nor hype alone will serve us well.

A central message of the film is that AI itself is neither good nor bad. It reflects the values, intentions, and systems of the people who build and deploy it. The real question is not whether AI will change the world (it already is) but whether we are prepared to shape that change thoughtfully. Governance, ethical frameworks, public awareness, and inclusive participation are emphasized as critical factors in determining AI's trajectory.


The risks associated with AI are not theoretical. They are already emerging in real time. These include:

  • The displacement of certain job categories without adequate workforce transition strategies

  • The spread of misinformation and synthetic content that can influence public opinion

  • Bias in automated decision-making systems that may reinforce inequality

  • Over-reliance on systems that are not always transparent or accountable

  • Security concerns as AI capabilities become more advanced and accessible


The documentary also raises urgent economic and social questions. As AI systems become more capable, certain forms of digital and cognitive labor may be automated.

This challenges existing models of education, employment, and skills development, requiring societies to rethink how they prepare people for the future of work. At the same time, the film shows how AI can become a powerful tool for empowerment if access and knowledge are distributed fairly.

Ultimately, The AI Doc is a call to responsibility. It urges technologists, policymakers, educators, and everyday citizens to engage actively in conversations about AI.




AI systems developed outside South Sudan


A critical dimension of this conversation is that many AI systems shaping daily life are developed outside South Sudan. These systems are built using foreign data, foreign contexts, and foreign assumptions, yet they are increasingly used in local environments such as education, communication, governance, and business.

This creates a governance challenge:


How do we regulate technologies we did not build, but increasingly rely on?


Regulation in this context does not only mean controlling the technology itself. It means strengthening the systems around its use. This includes establishing national AI governance frameworks, ensuring data protection and sovereignty, building local regulatory and technical capacity, and engaging in international partnerships that allow for responsible and context-aware deployment of AI systems.

It also requires an informed public that can question and understand how these systems influence decisions in everyday life. Without this, countries risk becoming passive consumers of technologies that may not reflect their social, cultural, or developmental realities.


Ultimately, the goal is not to reject external innovation, but to ensure that its integration strengthens national priorities rather than undermining them.


Lived experiences from the discussion

During the post-screening discussion, these ideas became grounded in lived experience.



Together, these stories demonstrate a key insight: AI is already embedded in everyday life. It shapes how people prepare, learn, create, communicate, and preserve meaning — often in ways that are not immediately visible.


A call to action


Above all, The AI Doc is a call to responsibility. It urges all stakeholders to move from passive observation to active participation in shaping the AI future.


  • Governments must go beyond policy statements and develop enforceable AI governance frameworks, invest in regulatory capacity, and ensure that national development strategies include AI literacy, infrastructure, and ethical safeguards.

  • Developers and tech companies must design systems that are transparent, explainable, and adaptable to different cultural and social contexts. They must recognize that deployment in new regions carries responsibility, not just opportunity.

  • Educators and institutions must integrate AI literacy into learning systems, ensuring that students are not only users of technology but critical thinkers who understand its limitations, risks, and possibilities.

  • The public must remain engaged, informed, and critical. AI should not be consumed passively. Citizens must question how it is used, where it is used, and who benefits from its deployment.

  • Innovation ecosystems must ensure that AI development is not only imported but also locally informed, supporting solutions that reflect real community needs and challenges.


The future of artificial intelligence is not predetermined. It will be shaped by collective human decisions: by what we choose to build, regulate, question, and allow.

To be an Apocaloptimist is to understand the risks, believe in the possibilities, and commit to steering technology in a direction that benefits humanity.

 
 
 
