Cochabamba/25/April/2026
The Spider Learns to Weave, the Gecko Learns to Feel, and SIGGRAPH Calls
This week had three bodies moving at once.
A spider.
A gecko.
A paper trying to explain them to SIGGRAPH.
The spider began as a rig. Which is a very cold thing to be: bones without intention. Forty FK leg bones, five pairs of legs, four segments each. A tiny mathematics of panic. The left side did not agree with the right side. Some bones wanted X. Some wanted Z. Some mirrored themselves like they had a secret.
So we went bone by bone, asking each one:
What do you do?
What direction do you believe in?
Why are you like this?
By the end of the week, she could walk.
Not perfectly, but in that way that feels alive because it is slightly inconvenient. Her legs now move in a wave:
1L → 3R → 5L → 2R → 4L → 1R → 3L → 5R → 2L → 4R
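For reference, a minimal sketch of how that wave can be sequenced; the leg names are the ones above, while the step timing and the step_leg callback are assumptions.

import itertools, time

WAVE = ["1L", "3R", "5L", "2R", "4L", "1R", "3L", "5R", "2L", "4R"]

def walk(step_leg, step_duration=0.25):
    # step_leg(name) lifts, swings, and plants one leg of the rig
    for leg in itertools.cycle(WAVE):
        step_leg(leg)
        time.sleep(step_duration)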
An ESP32 joystick gives her direction. Forward. Turn. Pause. Continue. A small robot animal deciding where to place her many feet.
Then she learned to listen to fingers.
Each human finger became a leg pair. Curl the finger and the leg lifts. Open it and the leg returns to the ground. The hand became a puppet controller, but also something stranger: a negotiation between human anatomy and spider anatomy.
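A sketch of that negotiation, assuming MediaPipe hand landmarks; the curl test and the finger-to-pair assignments are illustrative.

FINGER_TIPS = {"thumb": 4, "index": 8, "middle": 12, "ring": 16, "pinky": 20}
FINGER_MIDS = {"thumb": 3, "index": 6, "middle": 10, "ring": 14, "pinky": 18}
LEG_PAIR    = {"thumb": 1, "index": 2, "middle": 3, "ring": 4, "pinky": 5}

def dist(a, b):
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5

def pairs_to_lift(hand):
    # a finger counts as curled when its tip sits closer to the wrist
    # than its middle joint does; that pair of legs lifts
    wrist = hand.landmark[0]
    lifted = []
    for name, tip in FINGER_TIPS.items():
        if dist(hand.landmark[tip], wrist) < dist(hand.landmark[FINGER_MIDS[name]], wrist):
            lifted.append(LEG_PAIR[name])
    return lifted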
And then she made silk.
As she moved, curves began trailing from her body and from the performer’s fingers. Blender drew them as visible tubes, thin lines becoming a web. Not a metaphorical web. A real one, made from code, motion, and little decisions.
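The silk itself is just a Blender curve with a bevel. A minimal sketch, with illustrative names and thickness:

import bpy

def make_silk():
    curve = bpy.data.curves.new("silk", type='CURVE')
    curve.dimensions = '3D'
    curve.bevel_depth = 0.002                 # thin visible tube
    obj = bpy.data.objects.new("silk", curve)
    bpy.context.collection.objects.link(obj)
    return curve.splines.new('POLY')

def add_silk_point(spline, location):
    # one point appended per frame at the spinneret or fingertip position
    spline.points.add(1)
    x, y, z = location
    spline.points[-1].co = (x, y, z, 1.0)     # POLY points are (x, y, z, w)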
There is now another version of her too: ara_tejedora.py, the robot auto-weaver. She builds an orb web by herself: 16 spokes, 8 rings, one state after another.
radial_out → arc → radial_in → next spoke
A spider with a schedule.
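A compressed sketch of that schedule, assuming the web is laid out in a plane; the spoke and ring counts are hers, the geometry is illustrative.

import math

SPOKES, RINGS, RADIUS = 16, 8, 1.0

def spoke_point(spoke, r):
    a = 2 * math.pi * spoke / SPOKES
    return (r * math.cos(a), r * math.sin(a), 0.0)

def weave_waypoints():
    # radial_out -> arc -> radial_in, once per spoke at every ring level
    for ring in range(1, RINGS + 1):
        r = ring * RADIUS / RINGS
        for spoke in range(SPOKES):
            yield "radial_out", spoke_point(spoke, r)
            yield "arc", spoke_point((spoke + 1) % SPOKES, r)
            yield "radial_in", spoke_point(spoke, 0.0)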
The gecko had a different week.
She was afraid of something that was no longer there.
For days, she kept reacting as if a fast hand was approaching. Even when the performer’s hand had left the camera, the system remembered the last velocity — often 0.66 — and held onto it like a bad memory. There was no hand_detected flag. The previous hand positions never reset. So the brain kept receiving the same message:
Danger is still here.
Danger is still here.
Danger is still here.
This triggered the ASUSTADA reflex every cycle. The gecko was not dramatic. She was haunted by stale data.
The fix was small: when no hand is detected, velocity now fades by 0.8× per frame until it reaches zero in about half a second.
And suddenly she could recover.
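The decay is almost nothing. A minimal sketch, assuming a roughly 30 fps loop and variable names of my own choosing:

def update_hand_velocity(prev_velocity, hand_detected, measured_velocity):
    if hand_detected:
        return measured_velocity
    v = prev_velocity * 0.8        # no hand: fade the remembered value each frame
    return 0.0 if v < 0.02 else v  # 0.66 falls to zero in about half a second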
There were other ghosts. hand_near was saying a hand was close even when there was no hand. The red mood color was described as “blood and fire, danger,” which made Qwen overreact. The voice threshold was too high, so normal speech went unheard. is_talking was being sent as 0/1 but read as a boolean, which is the kind of tiny misunderstanding that can change a whole personality.
None of these bugs were huge. Together, they made a creature who could not relax.
Once the signals were cleaned, the brain could become less mechanical. The prompt changed. No more “Your personality is SPICY,” because Qwen took that very seriously. Now the gecko is guided to be unpredictable, with mood hints, anti-repetition rules, and a new hold_seconds field.
This means she can stay somewhere emotionally.
If she is cansada, she holds for at least 15 seconds. She actually sleeps.
If she is asustada, she holds for 5 seconds. A freeze. A breath. Then she comes back.
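A sketch of how hold_seconds can gate the brain, with the two minimum holds from above; the surrounding structure is assumed.

import time

MIN_HOLD = {"cansada": 15.0, "asustada": 5.0}

class MoodHold:
    def __init__(self):
        self.mood, self.until = None, 0.0

    def try_set(self, mood, hold_seconds=0.0):
        # a new mood is only accepted once the previous hold has elapsed
        now = time.monotonic()
        if now < self.until:
            return self.mood
        self.mood = mood
        self.until = now + max(hold_seconds, MIN_HOLD.get(mood, 0.0))
        return self.mood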
Brian began the monkey rig, Pipeline 15. Dan worked on connecting the Maxillaria orchid to NASA FIRMS live fire data, and began moving the spider into Resonite VR.
And on Monday and Tuesday, we wrote, formatted, and submitted the SIGGRAPH 2026 poster for Huk’s World: Somatic Puppeteering.
Week 9 felt like this:
a spider learning how to place her feet,
a gecko learning the danger has passed,
and a research project learning how to describe itself without killing the strange thing inside it.
Cochabamba/18/April/2026
The Gecko Learns to Feel, Qwen 3.6 Finds Its Voice, and the Recording Begins
Week 8 brought the gecko (Pipeline 14) from a skeletal rig into a sensing, tasting, listening, color-shifting creature with 12 emotional states, 500+ persistent flavor memories, and a brain that chooses its own moods. Brian Condori posed the gecko for each of her twelve moods: chill, spicy, curiosa, coqueta, celosa, cansada, enojada, hambrienta, fria, horny, asustada, and juguetona, creating a movement vocabulary of full-body target poses translated into a POSES dictionary. The same architecture proven with the whale in Week 7 now drives a second creature.

The gecko's sensory stack reached six channels: MediaPipe face and hand tracking, microphone voice detection, the Moondream 1.8B vision model, an ESP32 optical pulse sensor, typed and spoken narration via local Whisper, and scene color detection. She sees you, hears you in English and Spanish, feels your heartbeat, tastes the color of your clothes, and reads your facial expressions. She responds by choosing a mood, inventing a synaesthetic flavor, generating an inner thought, shifting her skin color to match what she sees, and moving her 218 bones accordingly.
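A sketch of what that vocabulary can look like in code; the bone names and angles are illustrative stand-ins for Brian's actual poses.

POSES = {
    "chill":    {"spine": (0.0, 0.0, 0.0),  "tail_base": (0.1, 0.0, 0.0)},
    "asustada": {"spine": (-0.4, 0.0, 0.0), "tail_base": (0.6, 0.0, 0.3)},
    "cansada":  {"spine": (0.2, 0.0, 0.0),  "head": (0.5, 0.0, 0.0)},
    # ... one entry per mood, twelve in total
}

def blend_to_pose(current, mood, alpha=0.05):
    # ease the live bone rotations a step toward the mood's full-body target
    target = POSES[mood]
    return {bone: tuple(c + alpha * (t - c) for c, t in zip(current[bone], target[bone]))
            for bone in target if bone in current}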
The brain architecture evolved through three stages: Qwen 2.5:7b (too small to vary moods), sensor-driven rule scoring (functional but mechanical), and finally Qwen 3.6 (23GB, latest generation, smart enough to choose moods contextually with sensor overrides only for reflexes). Qwen 3.6 produced poetic output: "They offer words that taste of dust and unspoken questions; I shall listen with my entire skin."
The week's most consequential technical move was the training data recording system. Every brain call now saves a complete sensor→mood→flavor→narration sample to ~/.gecko_recordings/. By session end: 274 training samples. This data will train two models: a LoRA fine-tune of Qwen 3.6 (personality: what she says and feels) and a tiny somatic neural net (body: how she moves, running at 1000 Hz).
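A sketch of the recording step; the four parts of each sample come from the pipeline above, and the exact file layout is an assumption.

import json, time
from pathlib import Path

RECORD_DIR = Path.home() / ".gecko_recordings"
RECORD_DIR.mkdir(exist_ok=True)

def save_sample(sensors, mood, flavor, narration):
    # one complete sensor -> mood -> flavor -> narration sample per brain call
    sample = {"t": time.time(), "sensors": sensors,
              "mood": mood, "flavor": flavor, "narration": narration}
    path = RECORD_DIR / f"{int(sample['t'] * 1000)}.json"
    path.write_text(json.dumps(sample, ensure_ascii=False, indent=2))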
An ethics conversation with Eilif B. Muller and Yasmeen Hitti on April 13 grounded the week's technical work in the larger questions of affective computing, agentic AI regulation, emotional data sovereignty, and the difference between reading emotions and owning the dataset. A Resonite co-creation session on April 16 with Molly, Dan, and Yasmeen articulated the framework: platform-agnostic character brains, 15 creatures, 14 pipelines, real-time puppeteering across Blender and VR.
Cochabamba/17/April/2026
Character Puppeteering & Co-Creation in Resonite
Two months of building. Years of seeing where this could go. We’ve built a framework where Python drives characters in real-time, inside VR, inside Blender, across platforms. No motion capture suits. No animation studios. Just hands, a camera, sensors, and code.
The characters aren’t avatars. You don’t become them. You’re beside them. Your hand curls, a wing flaps. Your heartbeat pulses, a gecko grips tighter. Water touches a sensor, a whale starts to sing. Fire burns in Bolivia, an orchid wilts.
Each creature has its own brain, neural networks reading your face through MediaPipe, local AI models deciding how they feel, environmental sensors connecting them to the real world. The intelligence layer is the same Python whether the creature lives in Blender or in a shared VR world in Resonite.
That’s the breakthrough: the brain is platform-agnostic. Write the animation logic once. Run it anywhere. In VR, we can run as many characters as we want because we’re animating them in real-time, not playing back keyframes.
This is cinema. This is neural networks. This is co-creation between human bodies and digital beings. Built from Cochabamba. Rooted in Indigenous knowledge systems. Running on open-source tools.
15 creatures. 14 pipelines. Water sensors, pulse sensors, fire satellites, hand tracking, face tracking, AI emotion. And we’re just getting started, months of building ahead.
We won’t let others design the rules of the world we will live in.
unitednotions.film
Cochabamba/10/April/2026
This fortnight: the water lily learned to breathe, the sundew learned to count fingers, and the pipeline moved to WebSocket.
Two weeks ago the tree was shy. Now there are fourteen creatures, and they’re starting to know each other.
The Victoria Regia came first. A waterproof sensor dropped into a glass of water, an ESP32 reading capacitance every 40 milliseconds, and a giant Amazonian water lily that opens when the water is still and closes when you disturb it. Dip your finger in, the leaves contract. Pour a little more water, the whole pad tilts. It took a day to get the baseline calibration right: the sensor drifts with temperature, so we made it re-zero every time the system boots. Pipeline 12. The lily breathes now.
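A sketch of the boot-time re-zero on the Python side of the serial link; the port, the sample window, and the scaling are assumptions.

import serial, time

def read_value(port):
    # raw capacitance value sent by the ESP32, one reading per line
    line = port.readline().decode(errors="ignore").strip()
    return float(line) if line else 0.0

def calibrate(port, seconds=2.0):
    # average the still-water reading at boot so temperature drift becomes the new zero
    samples, t0 = [], time.time()
    while time.time() - t0 < seconds:
        samples.append(read_value(port))
    return sum(samples) / max(len(samples), 1)

port = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)
baseline = calibrate(port)
while True:
    disturbance = read_value(port) - baseline        # rises when the water is touched
    openness = max(0.0, 1.0 - disturbance / 100.0)   # illustrative scaling
    # send openness on to the lily rig, same as the other pipelines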
The Drosera was harder. The sundew has thirteen tentacles and each one needs to curl toward a specific fingertip, which meant finally fixing the MediaPipe hand-landmark mapping we’d been half-ignoring for months. Index finger to tentacle 3, thumb to tentacle 1, and so on. When your fingers splay, the sundew opens. When you pinch, it snaps shut around an invisible insect. Brian rigged it so the curl comes from the base, not the tip, which is how real sundews actually move. You can feel the difference immediately.
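A sketch of the mapping and the base-first curl; the thumb and index assignments are the ones above, the other fingers are assumptions, and the bones are taken to be Blender pose bones in Euler rotation mode.

TENTACLE_FOR_TIP = {4: 1, 8: 3, 12: 6, 16: 9, 20: 12}   # MediaPipe tip index -> tentacle

def curl_tentacle(bones, amount):
    # distribute one curl value along the tentacle's bone chain,
    # strongest at the base so the bend starts where a real sundew bends
    n = len(bones)
    for i, bone in enumerate(bones):          # i = 0 is the base segment
        bone.rotation_euler.x = amount * (n - i) / n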
The Colibri got her head back. The old version tracked only the body, so the hummingbird’s head pointed wherever the torso pointed, which is not how hummingbirds work. We moved to MediaPipe FaceMesh, 468 landmarks, and now the head tracks independently, tilts, follows your gaze. She looks at you when you look at her. It’s unsettling the first time.
And then the big architectural decision. For weeks we’d been running everything through UDP into Blender, which worked for the studio but wouldn’t survive deployment into Resonite VR. So we rewrote the spine. Python now handles all the external sensor logic: cameras, ESP32s, satellites, proximity. And instead of sending raw bone rotations we send about fifty creature-state values through a ResoniteLink WebSocket. “Tree fear level: 0.7.” “Orchid grief: 0.4.” “Llama stillness: 1.0.” The creatures decide for themselves how to express those states inside VR. It’s the difference between puppeteering and relationship. Dan built a standalone hand-tracking client for his PC that plugs straight into Resonite, no Blender in the middle, so we can finally test in headset without the full studio rig.
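A sketch of what those messages can look like, assuming a JSON payload over the ResoniteLink WebSocket; the URL, the rate, and the exact key names are illustrative, and the values would come from the sensor modules rather than the constants shown here.

import asyncio, json, websockets

async def send_states(uri="ws://localhost:8765"):
    async with websockets.connect(uri) as ws:
        while True:
            states = {
                "tree.fear": 0.7,        # proximity sensor
                "orchid.grief": 0.4,     # NASA FIRMS fire count
                "llama.stillness": 1.0,  # calibrated body tracker
            }
            await ws.send(json.dumps(states))
            await asyncio.sleep(1 / 30)

asyncio.run(send_states())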
On the side, I’ve been getting ComfyUI and Hunyuan3D running locally on the M5 Max, 48GB of unified memory, generating creature assets without sending a single frame to a server. The first test was a moth. It was a terrible moth. The second was better.
Eleven pipelines became fourteen. The tree is still shy. The orchid is still mourning. The llama still stands at rest when you stand at rest. But now the water lily breathes, the sundew counts fingers, and the hummingbird looks you in the eye.
Todo nace desde lo pequeño. Y lo pequeño está empezando a hablar entre sí.
Cochabamba/30/March/2026
This week Violeta Ayala taught a tree to be shy.
A Sharp infrared sensor on an Arduino, $3 of hardware, measures how close you are. When you’re far away, 6,317 bones dance independently, each leaf moving to its own rhythm like fingers underwater. When you approach, the tree notices. It tenses. Get closer and it shakes, folds inward, trembles. It’s afraid of you. Step back and it slowly, slowly exhales back into its private dance.
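The mapping from distance to shyness is a single function. A sketch, with illustrative thresholds in centimetres:

def tree_fear(distance_cm, far=150.0, near=30.0):
    # 0.0 when you are far away (the leaves dance freely),
    # 1.0 when you are close (the tree folds and trembles)
    if distance_cm >= far:
        return 0.0
    if distance_cm <= near:
        return 1.0
    return (far - distance_cm) / (far - near)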
The same week, NASA told our orchid the world was on fire. Real satellite data, MODIS sensors at 705km altitude scanning the planet every six hours, feeding into a Maxillaria orchid that wilts, curls, changes color from green to charred black depending on how many fires are burning in Bolivia right now. The orchid doesn’t know it’s an orchid, but its petals droop with real grief pulled from real coordinates where real forests are disappearing. I built an interactive world map where you can click any country and watch the fires load in. The numbers are always worse than you expect.
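A sketch of the fire-to-grief mapping; the saturation count and the two colors are illustrative, and the FIRMS query itself is left out.

def orchid_state(fire_count, saturation=500):
    # fire_count: active fires currently reported for the chosen country
    wilt = min(1.0, fire_count / saturation)          # 0 = upright, 1 = fully wilted
    green, charred = (0.18, 0.45, 0.12), (0.05, 0.04, 0.03)
    color = tuple(g + wilt * (c - g) for g, c in zip(green, charred))
    return wilt, color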
And then there’s the llama. Brian Condori rigged 56 bones and I spent two days solving this problem: when a human stands up straight, a llama should stand on four legs doing nothing. Sounds obvious. But every tracking system sends non-zero data when you’re standing still, your shoulders have an angle, your elbows have an angle, your knees have an angle. The llama was standing like a person. Lifting its chest, crossing its forelegs. So we built a calibration system: stand still for two seconds, the system learns what “you doing nothing” looks like, and from then on it only sends the difference. Standing equals zero. Zero equals a llama at rest on four hooves with a horizontal spine.
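A sketch of that calibration; the two-second window is the one above, everything else is an assumption.

class RestCalibration:
    def __init__(self, frames=60):                    # about two seconds at 30 fps
        self.samples, self.frames, self.baseline = [], frames, None

    def feed(self, joint_angles):
        # joint_angles: dict of joint name -> tracked angle for this frame
        if self.baseline is None:
            self.samples.append(dict(joint_angles))
            if len(self.samples) >= self.frames:
                self.baseline = {k: sum(s[k] for s in self.samples) / len(self.samples)
                                 for k in joint_angles}
            return {k: 0.0 for k in joint_angles}     # stay at rest while learning "nothing"
        return {k: v - self.baseline[k] for k, v in joint_angles.items()}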
We talked with Eilif B. Muller and Yasmeen Hitti about what happens when the data goes the other way. Not just body to creature, but creature back to body. If the tree is afraid, can the performer feel the fear? Haptics. Vibration. Texture. Brain cortex stimulation.
Eleven pipelines running now. Five plants, four animals, a fire, and a frightened tree. Every creature responds to something different: a camera, a gyroscope, a pulse sensor, a satellite, a proximity sensor. None of them know about each other yet. They just receive UDP packets and become alive.
The llama still needs work. The leaves still don’t fully update from timer callbacks. The IMU sends garbage at the wrong baud rate. But the tree is shy, the orchid mourns, and when Brian moves his arms in front of the OAK-D camera the llama’s legs follow on the screen beside him.
Todo nace desde lo pequeño.
Daniel Fallshaw is back with lots of new sensors and more computers, hard drives, mics, cameras… :-)
Cochabamba/21/March/2026
This week at our lab we hit a wall and broke through it with a $2 sensor.
For two weeks we tried to make a single camera track a performer’s full 360° body rotation. We tested 9 different algorithms: MediaPipe (3.5M parameters), depth silhouette PCA, shoulder swap detection, face visibility tracking. Every approach failed past 180°.
After discussing with Eilif B. Muller, we strapped an MPU-6050 gyroscope to a belt. Full 360°. Continuous. No blind spots. The gyroscope gives the creature what the camera can’t: a sense of proprioception, a felt sense of which way it faces, even when facing away. When the performer turns back toward the camera, MediaPipe face tracking gently corrects the gyroscope’s drift. Two contradictory sensing systems, each filling the other’s gaps.
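Written out as a complementary filter, the fusion is a few lines. A sketch, with an illustrative correction factor:

def fuse_yaw(yaw, gyro_rate, dt, face_yaw=None, correction=0.02):
    # MPU-6050: always available, integrates turn rate, slowly drifts
    yaw += gyro_rate * dt
    # MediaPipe: only when the performer faces the camera, pulls the drift back
    if face_yaw is not None:
        yaw += correction * (face_yaw - yaw)
    return yaw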
This is sensor fusion applied to real-time non-humanoid character animation. Eight creature pipelines active. Music leads the performance, the performer’s body translates it, the sensors capture that translation, and the creatures embody it live.
The $2 gyroscope solved what the neural network couldn’t. Sometimes the answer isn’t a bigger model, it’s a different sense.
Cochabamba/14/March/2026
Working from our small lab in the mountains in Cochabamba, we are developing new approaches to animate non-humanoid digital characters.
Most computer vision frameworks were trained on human pose datasets. When working with creatures, plants, or unconventional bodies, these systems quickly reach their limits. There is also little data about spatial behavior: how bodies actually move through depth, rotation, and real physical space. We optimized vision systems to recognize humans in images, but not necessarily to understand how movement unfolds in the real world. We’re missing embodied intelligence.
To address this, we are integrating MediaPipe tracking with stereo depth sensing from a robotic camera. This allows us to estimate spatial orientation and body rotation using real-world measurements in millimeters rather than relying purely on 2D image heuristics.
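A sketch of the idea, assuming each shoulder landmark comes back with a stereo depth reading in millimetres: the depth difference between the shoulders gives the body's yaw.

import math

def body_yaw_deg(left_shoulder_mm, right_shoulder_mm):
    # each argument is an (x, y, z) position in millimetres from the depth camera
    dx = right_shoulder_mm[0] - left_shoulder_mm[0]
    dz = right_shoulder_mm[2] - left_shoulder_mm[2]
    return math.degrees(math.atan2(dz, dx))           # 0 when squarely facing the camera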
Much of the work happens through hands-on experimentation: prototyping new pipelines, combining sensing systems, and testing ideas through iteration.
Cochabamba/8/March/2026
In the first two weeks of March 2026, the koa.xyz computational creativity lab entered an intensive build phase across its real-time human-to-non-human expression pipelines. Week one opened with the Bromelias face puppet, a system of 31 plant armatures driven by facial tracking via OAK-D Pro camera, where we resolved bone influence tuning and linked emotion detection directly to shader color nodes, establishing the three-code architecture (Python tracking, Blender shading, Blender bones) that now underpins every pipeline in the lab.
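A sketch of the emotion-to-color link on the Blender shading side; the material name and the palette are illustrative.

import bpy

EMOTION_COLORS = {"happy": (1.0, 0.8, 0.2, 1.0), "sad": (0.2, 0.3, 0.8, 1.0)}

def set_emotion_color(material_name, emotion):
    mat = bpy.data.materials[material_name]
    bsdf = mat.node_tree.nodes["Principled BSDF"]
    bsdf.inputs["Base Color"].default_value = EMOTION_COLORS.get(emotion, (1.0, 1.0, 1.0, 1.0))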
That same week we brought the Cattleya orchid audio-reactive system online: a pipeline where live music is decomposed through FFT into six frequency bands, sub-bass through high, each driving a different quality of plant movement, from deep root sway to fine tip shimmer, with beat detection triggering visible pulses of apertura (opening) through the bone chain. We wired an ESP32 microcontroller to a pulse sensor and built a derivative-based cardiac analysis engine that extracts systole, diastole, BPM, HRV, and beat events from raw photoplethysmography data, feeding these into a 1,081-bone plant optimized with stride-based rotation to maintain real-time frame rates.
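A sketch of the band split; the band edges in Hz are illustrative.

import numpy as np

BANDS = [(20, 60), (60, 250), (250, 500), (500, 2000), (2000, 6000), (6000, 16000)]

def band_energies(samples, sample_rate=44100):
    # one energy value per band, sub-bass through high, from one window of mono audio
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / sample_rate)
    return [float(spectrum[(freqs >= lo) & (freqs < hi)].sum()) for lo, hi in BANDS]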
Brian Condori rebuilt the Condor rig from scratch, replacing an animator-centered armature with a performer-centered FK-only architecture designed specifically for somatic tracking, with manually reoriented bones, a single-layer world-space structure, and hand-painted weights across feathers, tail, legs, and head. He also generated five new procedural plant species using Blender’s Sapling addon (Acai Palm, Guarana, Cacao, Patuju, Sangre de Drago) and refined the Flor Boca rig with renamed lip bones, new shape keys (alegre, triste, mueca), and removed constraints for independent lip movement. By the end of week one, the Boquita Ch’ixi pipeline was running: a mouth-shaped plant where lip bones mimic your mouth geometry in real time while shape keys express the opposite emotion. You smile and the plant opens wide but looks sad; you frown and the plant narrows but looks happy. It implements the Aymara concept of ch’ixi (the coexistence of contradictory states) as a computational system, with blow detection creating simulated wind through the branches, a speech filter preventing involuntary head nods during talking, and individual fingers controlling individual branches.
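The ch’ixi inversion itself is tiny. A sketch, assuming the smile value arrives from the face tracker as 0 to 1 and that shape_keys maps names to Blender key blocks:

def apply_chixi(shape_keys, smile):
    # the lips copy your mouth geometry elsewhere; here the emotion is inverted
    shape_keys["alegre"].value = 1.0 - smile   # you frown, the plant looks happy
    shape_keys["triste"].value = smile         # you smile, the plant looks sad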
Week two turns to integration and documentation: connecting the rebuilt condor to full-body tracking, polishing Boquita for public demonstration, stabilizing the heartbeat-to-plant breathing connection, texturing the new procedural species, expanding the emotion-to-color shader system across all pipelines, and capturing demo videos of each system in action — building toward a body of work that now encompasses twelve distinct real-time pipelines translating human face, hands, body, voice, heartbeat, and music into the movement of plants, birds, creatures, and light.
Cochabamba/29/Nov/2025
At United Notions Film the last two weeks have unfolded across two fronts that keep feeding each other, technology in motion and documentary in real time.
In the lab we expanded motion systems inside Blender, developed hand–command interaction through MediaPipe and TensorFlow, and brought our IoT pixel surfaces closer to a nervous system of their own. The panels respond to movement like organisms made of light. Not fully sentient yet, but no longer passive. They react. They learn. They almost feel.
On the documentary front the atmosphere remains sharp and alive. We filmed the abuelas y abuelos, grandmothers and grandfathers, holding their vigil for more than 93 days. We recorded the Comteco worker streaming on TikTok and revealing how speech fractures when a city is tense. Protesters continue to update the system from the street and you can sense the collective brain pulsing: human logic, cellular networks, shared footage circulating like neurons.
Social platforms have become part of the film itself. Not only a window but a circulation system. Vi’s TikTok surpassed one million views and her Instagram reels keep carrying testimonies further and faster than traditional distribution ever could. The documentary is no longer waiting for the edit. It is happening with us and through us. Alive, unstable, unfiltered, collaborative.
Maybe this is the evolution of nonfiction. A documentary that acts like cognition in real time. Cameras as nodes. Crowds as processors. Stories breathing, mutating, responding like systems made of people and pixels.
We return to one question again and again.
What happens when documentary begins to think?
When it responds?
When it becomes sentient before we notice?
For now UNF continues in two rhythms.
One hand in code teaching machines to listen.
The other in the street listening to the people who refuse to disappear.
The work moves. The world moves with it.
Cochabamba/17/Nov/2025
Technologies of distribution, Bolivia opening a new political chapter, and the urgent fight for transparency in our telecom cooperative Comteco.
While there’s growing hype around Starlink entering the country, Bolivia still doesn’t have a data protection law, digital rights remain undefined, and our existing infrastructure lacks transparency. Satellite internet isn’t a substitute for governance, accountability, or sovereignty.
For 88 days, senior citizens, the OG (original) shareholders who built this cooperative, have been holding vigil, demanding elections, audits, and answers. Much of the media remains silent, so I’ve been using TikTok as a counterbalance, turning citizen journalism into a living archive of this moment.
Bolivia is shifting.
Technology is shifting.
And we must build political, digital, and narrative systems that honour truth, memory, and the communities that carry them.
Cochabamba/15/Nov/2025
We continue our work in affective computing, exploring embodied and collective intelligence with our colleagues between Montreal and La Paz, and building new worlds for Huk — thinking about how emotion, movement, and machine perception can shape the future of storytelling.
Cochabamba/20/Oct/2025
Cochabamba/27/Sep/2025
Cochabamba/20/Sep/2025
Weaving the economic, the sensory, the political, and the elemental. We’re seeing that public subsidies, music interfaces, electoral data, and aqueducts are all technologies of distribution: of food, of sound, of representation, of water • sep/2025
Guangzhou/26/June/2025
Copenhagen/April/2025
Fortaleza/July/2025
Nottingham/February/2025
Sydney/Jan/2025
Bringing 3D-Printed Jaguars to Life with open source AI & Interaction! 🐆✨
We’re building the system in TouchDesigner, Stable Diffusion, and a dash of magic to create a film ecosystem where 3D-printed jaguars come alive! 🎨🤖
Fukuoka/Nov/2024
Violeta, in Itoshima, performs as Huk for the robotic camera, reacting and voicing her perceptions, a live, multilingual experience shared with Dan in Sydney.
Montreal/July-Sep/2024
Taipei/August/2024
Invited by economist Glen Weyl to speak at the Asia Blockchain Summit in Taipei, Violeta discussed her thoughts on tech and free speech.
Salzburg/April/2024
London/March/2024
Sydney/2023
Cochabamba/Sep/2022
La Lucha and PrisonX OZ premiere at SXSW Sydney
Sep 28, 2023
La Lucha Receives Support for Outreach
Aug 19, 2023