Week 1

The Danger of a Single Story

When Chimamanda Ngozi Adichie was a child she could only tell stories about white, blue-eyed people, because she had only read British and American storybooks. That changed when she encountered the works of Chinua Achebe and Camara Laye; these works caused a mental shift in her, because she found out that Black girls could exist in literature. The discovery of these Black African authors saved her from the single story of Africa: a single story based on catastrophe, in which there is no room for feelings more complex than pity.

How does a story relate to the power structure in which it is distributed?

Adichie presents the concept of nkali as "to be greater than another": how stories are told, who tells them, when they are told, and how many stories are told all depend on power.

What is the significance of narrative in constructing reality?

Pauline Oliveros

Interesting Quotes:

  • Recording was an afterthought.

  • Play and learn. Respect the sound that was being made.

  • Each space is different and couples with instruments and voices to sound differently.

  • Acoustic space is where time and space merge as they are articulated by sound.

  • Humans have developed consensual agreements on the interpretation of the soundwave delivered to the brain by the ears.

Pauline Oliveros says that the practice of deep listening explores the difference between hearing and listening. The ear hears, the brain listens, the body senses vibration.

She argues that we know more about hearing than about listening. For her, "deep" has to do with complexity, and with boundaries or edges beyond ordinary or habitual understanding. As a consequence, the level of awareness of the soundscape brought about by deep listening can lead to the possibility of shaping the chaotic sounds of technology, urban environments, and machinery.

I admire Pauline's call for us to develop new contexts for sonic experience. At the moment, I am not that interested in the awareness aspect of sound, but rather in the different contexts that sound can help create. The balance between changing the actual sonic experience and changing the way we relate to it is a delicate one, and often ignored by sonic creators. The idea of a one-size-fits-all form of musical experience or attitude is complex. Of course the pure awareness that Pauline points to is unbiased, but how accessible is that? Wouldn't it, for most, narrow down the complex play of intentionality in the listening process to the apparent surface of sound?

Response to Everyday Listening

http://www.everydaylistening.com/

I think the work of Fedde ten Berge is very interesting.

It combines an interest in the surfaces of sound (liquid, wooden, synthetic, organic, mechanical, metallic, resonating, non-resonating) with the physicality of tactile interfaces.

In De Stronk, ocean samples are connected to small metal containers of water that also serve as buttons.

In The Shroom, the seamless integration of raw captured sound with digital processing is interesting, as is the way the system's interaction logically transitions between physical and digital controls.

For me, one of the powers of sound is how repetition-friendly it can be: being able to go back to the same sound through memory, through humming, with a non-electric object, or with an electric sound source. That repetition can be imbued with different types of intentionality, and through that process the sound can become a force. I am also interested in the connection between sound and breath, and in sound as a generic form of organizing time and space.

Week 2

Self-Portrait

Informed by the Significant Locations feature of iOS 15, the duo decided on five places each at which to field-record sounds.

The interactive system allows the user to select one of the 10 different looping soundscapes.

Interactive Web System:

Tap on a location to start the journey.
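The selection logic behind the web piece can be sketched as a small state model. This is a hedged sketch only: the location names and audio file paths below are hypothetical stand-ins, and the real site's implementation may differ.

```python
# Hypothetical mapping of the ten recorded locations to their loop files.
SOUNDSCAPES = {f"location_{i}": f"audio/loop_{i:02d}.mp3" for i in range(1, 11)}

class SoundscapePlayer:
    """Tracks which of the ten looping soundscapes is active."""

    def __init__(self, soundscapes):
        self.soundscapes = soundscapes
        self.active = None  # nothing plays until a location is tapped

    def tap(self, location):
        """Tapping a location starts its loop; tapping it again stops it."""
        if self.active == location:
            self.active = None
        else:
            if location not in self.soundscapes:
                raise KeyError(f"unknown location: {location}")
            self.active = location
        return self.soundscapes.get(self.active)

player = SoundscapePlayer(SOUNDSCAPES)
print(player.tap("location_3"))  # starts loop 3
print(player.tap("location_3"))  # tapping again stops it
```

In a browser the same toggle would drive a looping audio element, but the underlying state machine is the same.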


Week 3

Research at NVIDIA: The First Interactive AI Rendered Virtual World

I personally don't think there is any intrinsic value in how technologies are going to be shaped in the future, but at the same time, thinking about the transition from our current framework to a new, groundbreaking one is interesting.

We have experienced solid years of computer-generated 3D graphics coming from meshes. I understand our current framework of 3D rendering as coordinates connected into faces and drawn with shaders, but what is beyond it?

Can we imagine an interactive experience/game that looks like a video, but where we have control over the character and can interact with the world? Is the 3D-mesh look something that will be associated with the genre of arcade games forever? Is changing that element the same as changing what we consider a game to be?

Week 4


Runaway Machine Learning

I am extremely afraid of bugs. From an early age I would feel bad just thinking about them, to the point that I was afraid to attend certain classes at school that might involve the topic. As the years went by, I became able to manage this fear much better.

Insects are such interesting beings; the variety of shapes and colors these species come in might be the closest thing we have to fantastic or alien creatures. I would be really interested in using this kind of infinite generation, built on the same bug principles, to develop an intuition for what the core elements of a beetle are.

I grew up on a heavy Pokémon diet: books, anime, Game Boy games, narration-based RPGs with friends, the card games, physical artefacts of all sorts. The ubiquity of the experience of relating to these characters is something that amazes me to this day.

Thinking about generating our own Animal Crossing villagers somehow made me think about how that process could integrate into a child's life: how children now create avatars and other types of representations of themselves and their identities, and how AI-generated images could bring that to another level.

The limit between a direct representation of yourself and some other type of self-representation or self-augmentation process, like an interlocutor for an inner dialogue, is something that might relate to this.

We are not the villagers in Animal Crossing, but we are the main character that we play with. Would personally generated AI characters blur the line of this dualistic perspective?

Training my own model is something that sounds amazing.

Maybe building a Ruben Valentim generator. Or going through the known process of converting musical notation into images to be able to generate Afro-Brazilian rhythms.
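One common way to convert notation into images for a model is a piano-roll grid: rows for instruments, columns for time steps, with 1 marking an onset. This is only a sketch under that assumption; the pattern below is the 3-2 son clave, a rhythm related to the Afro-Brazilian material mentioned above.

```python
def pattern_to_grid(patterns, steps=16):
    """Turn named onset lists into a binary grid (one row per instrument,
    one column per sixteenth-note step, 1 = onset)."""
    grid = []
    for name, onsets in patterns.items():
        row = [1 if step in onsets else 0 for step in range(steps)]
        grid.append(row)
    return grid

# 3-2 son clave onsets on a 16-step grid (steps are 0-indexed)
clave = {"clave": [0, 3, 6, 10, 12]}
grid = pattern_to_grid(clave)
print(grid[0])  # → [1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0]
```

Grids like this can be rendered as tiny black-and-white images and fed to an image-generation model, which then "draws" new rhythms to be decoded back into sound.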

The relationship between AI and memory is something that fascinates me. The lack of representation of Black people and Black culture in databases and pre-trained models is something that worries me.

Week 5

“The Coin Lost In The River Is Found In The River” - Zen Koan + Speech to Text + Runway + AttnGAN

 

For this exercise, I made a video capture of myself reciting a Zen Buddhist koan, using the Windows dictation tool to type the text into Runway while an AttnGAN text-to-image algorithm generates images in real time from the transcribed text.
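The real-time loop of this setup can be sketched as follows. `generate_image` is a hypothetical stub standing in for the AttnGAN text-to-image call inside Runway, and the dictation tool is simulated by a stream of partial transcripts; only the control flow is meant to match the piece.

```python
def generate_image(text):
    """Stub for the text-to-image step; returns a label instead of pixels."""
    return f"<image for: {text!r}>"

def recite(partial_transcripts):
    """Re-render whenever the recognized portion of the speech changes."""
    frames = []
    last = None
    for transcript in partial_transcripts:
        if transcript != last:          # only regenerate on new text
            frames.append(generate_image(transcript))
            last = transcript
    return frames

# Simulated dictation output: the transcript grows as the koan is recited.
koan = ["the coin", "the coin lost", "the coin lost in the river",
        "the coin lost in the river", "the coin lost in the river is found"]
frames = recite(koan)
print(len(frames))  # a repeated transcript produces no extra frame
```

The images therefore drift with the recognizer's partial guesses, which is part of what makes the result unpredictable.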

A little bit about Koans:

In his Lion's Roar article titled "How to Practice Zen Koans", John Tarrant defines koans as:

“A koan is a little healing story, a conversation, an image, a fragment of a song. It’s something to keep you company, whatever you are doing. There’s a tradition of koan study to transform your heart and the way you move in the world.”

He also lists eight key points for working with the koan that was picked for this exercise:

1. First of all, don’t try too hard.

2. You show up.

3. Trust what you don’t know.

4. Experiment.

5. The koan can be your friend.

6. Any part of the koan is all of the koan.

7. You don’t need a special state of mind.

8. Have confidence in yourself.

Some of my thoughts:

I thought that the unpredictable and ever-changing nature of the AttnGAN text-to-image generator was a good match for what I understand a koan practice to be.

Since I am currently not a Zen Buddhism practitioner, I decided to come up with a couple of questions as a way to approach the exercise.

  • How can we design contemplative practices that are more compatible with our current ergonomics?

  • In which ways has the use of smartphones and personal computers made access to contemplative practices more democratic?

  • How much of the architecture of our computer-mediated experiences invites the user to contemplation?

  • Is it possible to design generic contemplative experiences?

  • Do computer-mediated experiences have to be generic?

  • In which ways is the outer shape of contemplative practices linked to the actual act of contemplating reality?

  • As we recite a koan and have a 2021 text-to-image algorithm transform the recognized portion of the speech into a synthetic image in real time, does that amplify or attenuate our contemplative practice?

Week 6

Exploring Emotions and Breathing in Stop Motion

Melting Clay Blob + Paper Eyes + Yellow Sand

Mohamed and I started with the idea of portraying different emotions related to our childhood and school life, using materials that related to those emotions (anxiety > test paper and pencils).

We found clay on the floor and cut the eyes from a Finding Nemo book. We then recorded the GIF on a green-screen desk and background, and added the moving sand loop in the background.

Guided Breathing Gif - Colorful Paper + Cardboard

Thinking about the ways GIFs are shared to express emotion or convey a certain attitude, we decided to create something that could be used by people who want to share care with one another.

After iterating, we arrived at an animation of a tree that gains and loses its leaves as a way to graphically guide the user's breath.
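The timing idea behind the breathing GIF can be sketched as a mapping from time to leaf count. The numbers here are assumptions for illustration: a hypothetical 4-second inhale / 4-second exhale cycle and a tree with 12 leaves, where leaves appear during the inhale and fall during the exhale.

```python
def leaf_count(t, inhale=4.0, exhale=4.0, leaves=12):
    """Number of visible leaves at time t (seconds) within the breath cycle."""
    period = inhale + exhale
    t = t % period                       # the animation loops
    if t < inhale:                       # inhale: leaves fill in
        return round(leaves * t / inhale)
    return round(leaves * (1 - (t - inhale) / exhale))  # exhale: leaves fall

print([leaf_count(t) for t in range(0, 9)])
```

Each frame of the GIF would then draw the tree with `leaf_count(t)` leaves, so watching the loop paces an 8-second breath.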

Week 7

Storyboard - AR Altar - Desire Spirit

This was one of three AR altar characters my team designed.