unitednotions.film

Updates

Cochabamba/30/March/2026

This week Violeta Ayala taught a tree to be shy.

A Sharp infrared sensor on an Arduino, $3 of hardware, measures how close you are. When you’re far away, 6,317 bones dance independently, each leaf moving to its own rhythm like fingers underwater. When you approach, the tree notices. It tenses. Get closer and it shakes, folds inward, trembles. It’s afraid of you. Step back and it slowly, slowly exhales back into its private dance.
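
The behavior can be sketched as a distance-to-fear mapping plus asymmetric smoothing. The function names, thresholds, and rates below are illustrative placeholders, not the installation's actual values:

```python
def fear_level(distance_cm, calm_at=150.0, panic_at=30.0):
    """Map a proximity reading (cm) to a 0..1 fear value.

    Beyond calm_at the tree is calm (0.0); closer than panic_at it is
    fully afraid (1.0); in between, fear rises linearly as you approach.
    Thresholds are illustrative, not the shipped values.
    """
    if distance_cm >= calm_at:
        return 0.0
    if distance_cm <= panic_at:
        return 1.0
    return (calm_at - distance_cm) / (calm_at - panic_at)

def smooth_fear(prev, target, rise=0.6, decay=0.02):
    """Per-frame smoothing: fear spikes quickly but fades slowly,
    so the tree tenses fast and slowly 'exhales' back into its dance."""
    alpha = rise if target > prev else decay
    return prev + alpha * (target - prev)
```

The asymmetry is the whole trick: a high rise rate makes the flinch immediate, a tiny decay rate makes the recovery take hundreds of frames.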


The same week, NASA told our orchid the world was on fire. Real satellite data, MODIS sensors at 705km altitude scanning the planet every six hours, feeding into a Maxillaria orchid that wilts, curls, changes color from green to charred black depending on how many fires are burning in Bolivia right now. The orchid doesn’t know it’s an orchid, but its petals droop with real grief pulled from real coordinates where real forests are disappearing. I built an interactive world map where you can click any country and watch the fires load in. The numbers are always worse than you expect.
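
The core mapping is simple: a fire count in, a wilt fraction and a petal color out. A minimal sketch, with the RGB endpoints and saturation point as assumed placeholders rather than the pipeline's real constants:

```python
def wilt_and_color(fire_count, saturate_at=500):
    """Map an active-fire count to a wilt amount and a petal colour.

    Zero fires -> upright and green; saturate_at fires or more -> fully
    wilted, charred black. Colour is a linear blend between green and
    near-black; both endpoints are illustrative placeholders.
    """
    w = min(fire_count / saturate_at, 1.0)
    green = (0.1, 0.6, 0.1)
    charred = (0.05, 0.04, 0.03)
    color = tuple(g + w * (c - g) for g, c in zip(green, charred))
    return w, color
```

The wilt fraction can then drive bone droop in Blender while the color feeds the shader nodes.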


And then there’s the llama. Brian Condori rigged 56 bones and I spent two days solving this problem: when a human stands up straight, a llama should stand on four legs doing nothing. Sounds obvious. But every tracking system sends non-zero data when you’re standing still: your shoulders have an angle, your elbows have an angle, your knees have an angle. The llama was standing like a person. Lifting its chest, crossing its forelegs. So we built a calibration system: stand still for two seconds, the system learns what “you doing nothing” looks like, and from then on it only sends the difference. Standing equals zero. Zero equals a llama at rest on four hooves with a horizontal spine.
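
The calibration idea reduces to averaging a rest pose and subtracting it from every subsequent frame. A minimal sketch, assuming per-joint angles arrive as flat lists at ~30 fps (the class name and sample count are hypothetical):

```python
class RestPoseCalibrator:
    """Learn what 'doing nothing' looks like, then send only the difference.

    Feed per-joint angles while the performer stands still; once enough
    samples arrive, the mean becomes the rest pose, and delta() returns
    each angle minus its rest value. A motionless human then maps to all
    zeros -- a llama at rest on four hooves.
    """
    def __init__(self, n_samples=60):              # ~2 s at 30 fps
        self.n_samples = n_samples
        self.samples = []
        self.rest = None

    def feed(self, angles):
        if self.rest is None:
            self.samples.append(angles)
            if len(self.samples) >= self.n_samples:
                n = len(self.samples)
                self.rest = [sum(col) / n for col in zip(*self.samples)]

    def delta(self, angles):
        if self.rest is None:
            return [0.0] * len(angles)             # not calibrated: stay at rest
        return [a - r for a, r in zip(angles, self.rest)]
```

Averaging over the two-second window also filters out tracking jitter, so the zero point is steadier than any single frame.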


We talked with Eilif B. Muller and Yasmeen Hitti about what happens when the data goes the other way. Not just body to creature, but creature back to body. If the tree is afraid, can the performer feel the fear? Haptics. Vibration. Texture. Brain cortex stimulation.


Eleven pipelines running now. Five plants, four animals, a fire, and a frightened tree. Every creature responds to something different: a camera, a gyroscope, a pulse sensor, a satellite, a proximity sensor. None of them know about each other yet. They just receive UDP packets and come alive.
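
What "receiving UDP packets" means in practice is just decoding a small binary payload. The wire format below is a hypothetical example of the kind of packet a creature pipeline might read, not the lab's actual protocol:

```python
import struct

def pack_pose(creature_id, rotations):
    """Pack a creature id (uint16) and N float32 bone rotations into a
    little-endian payload. The wire format here is illustrative."""
    return struct.pack(f"<H{len(rotations)}f", creature_id, *rotations)

def unpack_pose(payload):
    """Inverse of pack_pose: recover the creature id and rotation list."""
    n = (len(payload) - 2) // 4
    creature_id, *rotations = struct.unpack(f"<H{n}f", payload)
    return creature_id, list(rotations)
```

Sending is then a one-liner with `socket.sendto`, and the Blender side unpacks and applies rotations each frame, with no creature needing to know who else is listening.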


The llama still needs work. The leaves still don’t fully update from timer callbacks. The IMU sends garbage at the wrong baud rate. But the tree is shy, the orchid mourns, and when Brian moves his arms in front of the OAK-D camera the llama’s legs follow on the screen beside him.


Todo nace desde lo pequeño. (Everything is born from what is small.)


Daniel Fallshaw is back with lots of new sensors and more computers, hard drives, mics, cameras… :-)

Cochabamba/21/March/2026

This week at our lab we hit a wall and broke through it with a $2 sensor.

For two weeks we tried to make a single camera track a performer’s full 360° body rotation. We tested 9 different algorithms, among them MediaPipe (3.5M parameters), depth silhouette PCA, shoulder swap detection, and face visibility tracking. Every approach failed past 180°.


After discussing with Eilif B. Muller, we strapped an MPU-6050 gyroscope to a belt. Full 360°. Continuous. No blind spots. The gyroscope gives the creature what the camera can’t: a sense of proprioception, a felt sense of which way it faces, even when facing away. When the performer turns back toward the camera, MediaPipe face tracking gently corrects the gyroscope’s drift. Two contradictory sensing systems, each filling the other’s gaps.
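
This kind of fusion is commonly written as a complementary filter. A minimal per-frame sketch, where the function name, gain, and confidence convention are assumptions for illustration:

```python
def fuse_heading(gyro_heading, camera_heading, camera_confidence, k=0.05):
    """One step of a complementary filter over headings in degrees.

    The gyroscope never loses you past 180° but drifts over time; when
    the face is visible, the camera's absolute heading nudges the
    estimate back. camera_confidence is 0.0 when the performer faces
    away, up to 1.0 when the face is clearly tracked. The gain k is an
    illustrative value.
    """
    # shortest angular difference, wrapped into (-180, 180]
    err = (camera_heading - gyro_heading + 180.0) % 360.0 - 180.0
    return (gyro_heading + k * camera_confidence * err) % 360.0
```

The wrap-around in the error term matters: without it, a correction across the 0°/360° seam would spin the creature the long way around.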


This is sensor fusion applied to real-time non-humanoid character animation. Eight creature pipelines active. Music leads the performance, the performer’s body translates it, the sensors capture that translation, and the creatures embody it live.


The $2 gyroscope solved what the neural network couldn’t. Sometimes the answer isn’t a bigger model, it’s a different sense.

Cochabamba/14/March/2026

Working from our small lab in the mountains of Cochabamba, we are developing new approaches to animating non-humanoid digital characters.


Most computer vision frameworks were trained on human pose datasets. When working with creatures, plants, or unconventional bodies, these systems quickly reach their limits. There is also little data about spatial behavior — how bodies actually move through depth, rotation, and real physical space. We optimized vision systems to recognize humans in images, but not necessarily to understand how movement unfolds in the real world. We’re missing embodied intelligence.


To address this, we are integrating MediaPipe tracking with stereo depth sensing from a robotic camera. This allows us to estimate spatial orientation and body rotation using real-world measurements in millimeters rather than relying purely on 2D image heuristics.
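
Combining a 2D landmark with a stereo depth reading is a standard pinhole back-projection. A sketch of the geometry, assuming the focal lengths and principal point (fx, fy, cx, cy) come from the camera's calibration; the function name is hypothetical:

```python
def landmark_to_mm(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project a 2D landmark at pixel (u, v), with a stereo depth
    reading in millimetres, into a 3D point in camera space (mm),
    using the pinhole camera model."""
    x = (u - cx) * depth_mm / fx
    y = (v - cy) * depth_mm / fy
    return x, y, depth_mm
```

With landmarks lifted into millimetres, rotation and lean can be measured as real angles between real points instead of inferred from 2D foreshortening.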


Much of the work happens through hands-on experimentation: prototyping new pipelines, combining sensing systems, and testing ideas through iteration.

Cochabamba/8/March/2026


In the first two weeks of March 2026, the koa.xyz computational creativity lab entered an intensive build phase across its real-time human-to-non-human expression pipelines. Week one opened with the Bromelias face puppet, a system of 31 plant armatures driven by facial tracking via OAK-D Pro camera, where we resolved bone influence tuning and linked emotion detection directly to shader color nodes, establishing the three-code architecture (Python tracking, Blender shading, Blender bones) that now underpins every pipeline in the lab.


That same week we brought the Cattleya orchid audio-reactive system online: a pipeline where live music is decomposed through FFT into six frequency bands, sub-bass through high, each driving a different quality of plant movement, from deep root sway to fine tip shimmer, with beat detection triggering visible pulses of apertura (opening) through the bone chain. We wired an ESP32 microcontroller to a pulse sensor and built a derivative-based cardiac analysis engine that extracts systole, diastole, BPM, HRV, and beat events from raw photoplethysmography data, feeding these into a 1,081-bone plant optimized with stride-based rotation to maintain real-time frame rates.
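
The six-band decomposition can be sketched in a few lines of NumPy. The band edges below are plausible assumptions, not the pipeline's actual cutoffs:

```python
import numpy as np

# Hypothetical band edges in Hz, sub-bass through high.
BANDS = [(20, 60), (60, 250), (250, 500), (500, 2000), (2000, 6000), (6000, 16000)]

def band_energies(frame, sample_rate):
    """Split one audio frame into six band energies via FFT.

    Each energy can drive a different quality of plant movement:
    sub-bass toward deep root sway, highs toward fine tip shimmer.
    A Hann window reduces spectral leakage between bands.
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sample_rate)
    return [float(spectrum[(freqs >= lo) & (freqs < hi)].sum()) for lo, hi in BANDS]
```

Each frame's six numbers then map to six groups of bones, so a kick drum and a hi-hat move entirely different parts of the plant.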


Brian Condori rebuilt the Condor rig from scratch, replacing an animator-centered armature with a performer-centered FK-only architecture designed specifically for somatic tracking, with manually reoriented bones, single-layer world-space structure, and hand-painted weights across feathers, tail, legs, and head. He also generated five new procedural plant species using Blender’s Sapling addon (Acai Palm, Guarana, Cacao, Patuju, Sangre de Drago) and refined the Flor Boca rig with renamed lip bones, new shape keys (alegre, triste, mueca), and removed constraints for independent lip movement. By the end of week one, the Boquita Ch’ixi pipeline was running: a mouth-shaped plant where lip bones mimic your mouth geometry in real time while shape keys express the opposite emotion: you smile and the plant opens wide but looks sad; you frown and the plant narrows but looks happy. It implements the Aymara concept of ch’ixi (the coexistence of contradictory states) as a computational system, with blow detection creating simulated wind through the branches, a speech filter preventing involuntary head nods during talking, and individual fingers controlling individual branches.
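
The ch’ixi inversion is a small mapping: the geometry is copied faithfully while the emotional valence is flipped. A minimal sketch, reusing the shape-key names from the rig (the function signature and value conventions are assumptions):

```python
def chixi_expression(mouth_openness, smile_amount):
    """Copy mouth geometry, invert the emotion (ch'ixi).

    mouth_openness (0..1) drives the lip bones directly; smile_amount
    (-1 frown .. +1 smile) drives the OPPOSITE shape key: a smile
    raises 'triste' (sad), a frown raises 'alegre' (happy).
    """
    shape_keys = {
        "alegre": max(-smile_amount, 0.0),
        "triste": max(smile_amount, 0.0),
    }
    return mouth_openness, shape_keys
```

The contradiction lives in the split: bones say what your mouth does, shape keys say what it refuses to mean.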


Week two turns to integration and documentation: connecting the rebuilt condor to full-body tracking, polishing Boquita for public demonstration, stabilizing the heartbeat-to-plant breathing connection, texturing the new procedural species, expanding the emotion-to-color shader system across all pipelines, and capturing demo videos of each system in action — building toward a body of work that now encompasses twelve distinct real-time pipelines translating human face, hands, body, voice, heartbeat, and music into the movement of plants, birds, creatures, and light.

Guangzhou/15/Feb/2026

Las Awichas Opens at

Guangdong Museum of Art

United Notions Film is proud to announce that Las Awichas, a mixed-reality installation by Violeta Ayala, has opened at the Guangdong Museum of Art as part of SURREALITY, a major art and technology exhibition developed by the Center for Metaverse and Computational Creativity (MC²) at HKUST (Guangzhou), led by Professor Pan Hui.

Working at architectural scale, Las Awichas brings together body, space, and ancestral memory through mixed reality. The installation creates a dialogue between AI-generated Andean grandmothers and visitors, exploring how emerging technologies can coexist with Indigenous epistemologies without extraction or translation into dominant frameworks.


The project began in 2020 as a deeply personal work. “AI-collaborative art is giving me a possibility to imagine my culture in a different light,” Ayala wrote at the time. “As a Quechua creator and filmmaker whose civilization was destroyed by colonizers, I really cherished the opportunity to imagine my ancestors.”


What started as digital portraits of female ancestors—rooted in Ayala’s grandmother Herminia Soto Montaño—evolved into something larger: robotic animals inspired by Nazca lines, carrying forward ancestral dialogue into new forms. The work operates through ch’ixi logic (Silvia Rivera Cusicanqui): different ways of knowing held in productive tension, not synthesis.

This installation continues UNF’s exploration of Neo Andean Futurism, following Prison X and other projects that challenge extractive approaches to Indigenous knowledge and technology.


Las Awichas is now on view at the Guangdong Museum of Art from 15/2 to 30/3.

Cochabamba/22/Jan/2026


NEWS + EXPERIENCE

On New Year’s Eve we launched our own streaming release inside The Bolivian Case website.

theboliviancase.com

A TV series dropped in Norway and suddenly the story we documented was circulating in a new shape, missing key elements. The full story needs to be told properly, with context, from the people who still live it.


We didn’t wait for distribution. We built it.


Everything is made in-house: platform, player, hosting, payments, delivery pipeline. Decentralized, resilient, and coded by us. Built so the film could live on its own terms.


Next: Sala.video

A curated platform releasing 12 films a year, starting with our United Notions Film catalogue. We’re also opening up the infrastructure so other filmmakers can host directly on their own sites using our tech, keeping control of audience, pricing, territories, and release strategy.


In two weeks, it proved the model.


And this is only the beginning. Sala.red will expand into XR and spatial computing across AR and VR.

Watch now: theboliviancase.com

Sala.video soon.


Built by award-winning filmmakers and technologists: Violeta Ayala and Dan Fallshaw



Cochabamba/29/Nov/2025

Two weeks of code, streets, and questions about the future


At United Notions Film the last two weeks have unfolded across two fronts that keep feeding each other, technology in motion and documentary in real time.


In the lab we expanded motion systems inside Blender, developed hand–command interaction through MediaPipe and TensorFlow, and brought our IoT pixel surfaces closer to a nervous system of their own. The panels respond to movement like organisms made of light. Not fully sentient yet, but no longer passive. They react. They learn. They almost feel.


On the documentary front the atmosphere remains sharp and alive. We filmed the abuelas y abuelos (grandmothers and grandfathers) holding their vigil for more than 93 days. We recorded the Comteco worker streaming on TikTok and revealing how speech fractures when a city is tense. Protesters continue to update the system from the street and you can sense the collective brain pulsing: human logic, cellular networks, shared footage circulating like neurons.


Social platforms have become part of the film itself. Not only a window but a circulation system. Vi’s TikTok surpassed one million views and her Instagram reels keep carrying testimonies further and faster than traditional distribution ever could. The documentary is no longer waiting for the edit. It is happening with us and through us. Alive, unstable, unfiltered, collaborative.


Maybe this is the evolution of nonfiction. A documentary that acts like cognition in real time. Cameras as nodes. Crowds as processors. Stories breathing mutating responding like systems made of people and pixels.


We return to one question again and again.


What happens when documentary begins to think?

When it responds?

When it becomes sentient before we notice?


For now UNF continues in two rhythms.

One hand in code teaching machines to listen.

The other in the street listening to the people who refuse to disappear.


The work moves. The world moves with it.


Cochabamba/17/Nov/2025


Technologies of distribution, Bolivia opening a new political chapter, and the urgent fight for transparency in our telecom cooperative Comteco.


While there’s growing hype around Starlink entering the country, Bolivia still doesn’t have a data protection law, digital rights remain undefined, and our existing infrastructure lacks transparency. Satellite internet isn’t a substitute for governance, accountability, or sovereignty.


For 88 days, senior citizens, the OG (original) shareholders who built this cooperative, have been holding vigil, demanding elections, audits, and answers. Much of the media remains silent, so I’ve been using TikTok as a counterbalance, turning citizen journalism into a living archive of this moment.


Bolivia is shifting.

Technology is shifting.

And we must build systems, political, digital, and narrative, that honour truth, memory, and the communities that carry them.

Cochabamba/15/Nov/2025

We continue our work in affective computing, exploring embodied and collective intelligence with our colleagues between Montreal and La Paz, and building new worlds for Huk — thinking about how emotion, movement, and machine perception can shape the future of storytelling.


Cochabamba/20/Oct/2025

Cochabamba/27/Sep/2025

Cochabamba/20/Sep/2025

Weaving the economic, the sensory, the political, and the elemental. We’re seeing that public subsidies, music interfaces, electoral data, and aqueducts are all technologies of distribution: of food, of sound, of representation, of water.

Guangzhou/26/June/2025


Las Awichas opens in the Strand in London as part of GLoW: Illuminating Innovation · King's Strand Campus ·

7 March - 20 April 2024

The exhibition showcases groundbreaking artworks by leading women artists using cutting-edge technologies.

Chicken & Egg Pictures announces a research and development grant, supported by Netflix.

La Lucha and PrisonX OZ premiere at SXSW Sydney


Sep 28, 2023
