2022–2025

Introduction

Conceived as a decentralised response to the proprietary infrastructures of virtual reality, the African Metaverse Framework interrogates the digital hegemony of the Global North. By eschewing prohibitive hardware barriers and relying entirely on open-source web technologies, the framework establishes an accessible, inclusive architecture for the Global South. At its core, a machine-learning pipeline translates 2D images into fully rigged 3D models, allowing people of colour to inhabit virtual environments in their authentic physical likeness. Tested through hybrid choreographic events, the project is simultaneously a technical experiment and a systemic intervention. It actively dismantles “representation latency”, the phenomenon by which a select few dictate the digital aesthetic, leaving marginalised communities unseen, yet continually exploited.

Approach

First, I used machine learning to transform a photo of myself into a 3D model.

2D image to 3D model translation using machine learning


I then used Blender, a free and open-source 3D computer graphics toolset, to clean up artefacts and add a texture to the model. Next, I turned to Mixamo, a free online tool that uses machine learning to automate the character-animation pipeline, to rig the model and add movement to it.

Metaverse avatar editing using Blender and Mixamo

Afterwards, I created a 3D scan of my living room using my iPhone’s LiDAR scanner and imported it into Blender.

Metaverse real-world environment scanning

To make the experience accessible to others, I used three.js to bring it to life in any modern web browser. Three.js is an open-source, cross-browser JavaScript library and application programming interface for creating and displaying animated 3D computer graphics in a web browser using WebGL. I then built movement controls for the avatars and added avatar-to-avatar communication using Socket.IO, a free JavaScript library for realtime web applications.

Completed metaverse framework

Progress

In collaboration with The Hmm, affect lab, and MU Hybrid Arthouse, this metaverse framework has been used to host hybrid dance events as part of a broader project called the Toolkit for the Inbetween. Together we crafted an immersive experience that brought individuals together both in person and virtually.

Photo by Ho Ka Ho

Ahead of the event, we invited our guests to create their personalised avatars at our in-person or online workshops. Alternatively, they could follow our four-step guide to make a unique avatar on their own. Guests connected with one another, whether in person, virtually, or a combination of both, through a “language of motion” developed specifically for the environment, forming connections in real time through dance and movement, no matter where they are in the world.

Photo by Ho Ka Ho

The most important conversation this metaverse experiment opened was that of human-to-avatar representation. It was important for me to define what representation means for a platform built specifically for people in under-resourced and digitally marginalised parts of the world, for whom online representation remains an unfulfilled dream. Representation latency is political because a handful of people in the Global North decide what the internet should look like, leaving billions in the Global South unseen and yet still exploited.
