Seeing / Listening to Metadata

It’s strange how statistics can represent something like a relationship. I analyzed text messages with my girlfriend and saw stories in the data.

I first used a program called iMessage Analyzer to poke around the data set. Here are our total texts over time.

You can see a clear upward trend leading to when we started dating at the end of December. Days in the last month when we sent almost no texts were days we spent together. I also looked at how it broke down by sender; green is her and white is me.

I wanted to dig into this further, so I wrote my own script to compute net texts per day and plotted the results in a spreadsheet program. Positive numbers mean I was sending more texts that day; negative numbers mean she was.
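The script itself isn't reproduced here, but the idea is simple enough to sketch: count my messages minus hers for each day. The sketch below assumes the standard macOS chat.db location and a hypothetical contact identifier, and that dates are stored the newer way (nanoseconds since 2001-01-01); older systems store plain seconds.

```python
# Sketch: daily "net texts" (my messages minus hers) from the macOS iMessage database.
# HER_HANDLE is a hypothetical contact identifier, not the real one.
import sqlite3
from collections import defaultdict
from datetime import datetime, timedelta
from pathlib import Path

APPLE_EPOCH = datetime(2001, 1, 1)
HER_HANDLE = "+15551234567"  # hypothetical phone number / Apple ID

db = sqlite3.connect(str(Path.home() / "Library/Messages/chat.db"))
rows = db.execute("""
    SELECT m.date, m.is_from_me
    FROM message m
    JOIN handle h ON m.handle_id = h.ROWID
    WHERE h.id = ?
""", (HER_HANDLE,))

net = defaultdict(int)
for raw_date, is_from_me in rows:
    # Newer macOS stores nanoseconds since 2001-01-01; older versions store seconds.
    seconds = raw_date / 1e9 if raw_date > 1e12 else raw_date
    day = (APPLE_EPOCH + timedelta(seconds=seconds)).date()
    net[day] += 1 if is_from_me else -1

for day in sorted(net):
    print(f"{day}\t{net[day]}")  # paste-friendly for a spreadsheet
```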

It’s pretty even overall, though I could probably invent a story about some of the peaks and valleys.

This analysis was interesting on its own, and it got me thinking about what the stats might say about the health of a relationship, so I pulled up a few old conversations I could still find. Here's my sent/received breakdown with a girl I met online who was not into me.

And one for a relationship that was fairly steady until we decided to break it off.

I don’t know if I’ve found a real signal here, but this kind of analysis, done on a broader and more automatic scale, could produce a strange kind of statistical prosthesis for relationships.

The analysis above is a pretty typical ‘quantified self’ perspective on data. When you present the data visually, you unfold time and see it all at once. This is nice for getting insight into trends, but it feels too abstracted from experience. Sound is bound to time, so I want to see what it might be like to encounter me by listening to my data: hearing the rhythm of me, via my metadata.

So I wanted to turn all my metadata into a song. Covering 9/5/17 (the first day of ITP) to now, I set out to convert my iMessages and Google search activity into tones. Searches would get one tone, texts from other people would be assigned a unique tone per person, and I would get my own tone. The resulting track would be ~233,000 minutes long. Using an old programming language called Csound, you can render songs in the terminal in a fraction of the time the audio takes to play.
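Roughly, the mapping looks like this. The specific frequencies, the one-second note length, and the instrument number below are placeholders rather than the values I actually used; the only fixed ideas are that each source gets a single tone and each event becomes a note at its timestamp.

```python
# Sketch of the source-to-tone mapping (all specific values are placeholders):
# searches share one pitch, I get my own, and every other sender gets a unique pitch.
import itertools

SEARCH_HZ = 220.0   # one tone for every Google search
ME_HZ = 330.0       # my own tone for texts I sent
contact_hz = {}     # unique tone per other sender, assigned on first sight
next_hz = itertools.count(440, 20)  # placeholder spacing between contacts

def tone_for(source):
    """Return a fixed frequency for an event source ('search', 'me', or a sender id)."""
    if source == "search":
        return SEARCH_HZ
    if source == "me":
        return ME_HZ
    if source not in contact_hz:
        contact_hz[source] = float(next(next_hz))
    return contact_hz[source]

def csound_note(seconds_since_start, source, duration=1.0):
    """One Csound score statement: instrument 1, start time, duration, frequency."""
    return f"i1 {seconds_since_start:.3f} {duration} {tone_for(source)}"

# e.g. a search 12 seconds in, then a text from sender 42 a minute later:
print(csound_note(12, "search"))
print(csound_note(72, 42))
```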

I began by getting the timestamps of all my searches into a list. Alden wrote some Python to extract the necessary info from my Google Takeout HTML page. I then wrote some JavaScript that used p5 to import the text file of dates and process them accordingly. Each search became a note in a Csound score, with no manipulation of the timing. The track was supposed to be 159 days long, and I successfully created a 2.5 TB file, but the audio in it wouldn’t render properly.
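Alden’s extraction script isn’t reproduced here, but the job was basically to scrape the timestamps out of the Takeout page and dump them to a text file. A rough Python equivalent might look like this; the filename, the date format, and the regex are guesses at how Google formats the export, which changes over time.

```python
# Rough stand-in for the Takeout extraction step: scan the activity HTML for
# timestamp-looking strings and convert them to seconds since the start date.
# The "MyActivity.html" name and "Sep 5, 2017, 9:41:07 AM" format are assumptions.
import re
from datetime import datetime

START = datetime(2017, 9, 5)
STAMP = re.compile(r"[A-Z][a-z]{2} \d{1,2}, \d{4}, \d{1,2}:\d{2}:\d{2} (?:AM|PM)")

with open("MyActivity.html", encoding="utf-8") as f:
    html = f.read()

offsets = []
for match in STAMP.findall(html):
    when = datetime.strptime(match, "%b %d, %Y, %I:%M:%S %p")
    if when >= START:
        offsets.append((when - START).total_seconds())

# One line per search, ready to be turned into Csound notes.
with open("search_times.txt", "w") as out:
    out.write("\n".join(str(int(s)) for s in sorted(offsets)))
```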

I kept going and added my text messages to the mix. Using some SQL, I pulled a timestamp and a sender ID for every text message going back several years. I trimmed the data so it would start in September 2017 and imported each message as a note in my ‘song.’ Again, the audio wouldn’t render, so this didn’t really work. I tried rendering some smaller segments, but it’s hard to know where in the track the action might be.
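The query itself is short; something along these lines, assuming the same chat.db as before and the newer layout where dates are nanoseconds since 2001-01-01 (older macOS versions store plain seconds).

```python
# Sketch of the SQL step: one (timestamp, sender) pair per message, limited to
# messages on or after Sep 5, 2017. Assumes message.date is in nanoseconds.
import sqlite3
from datetime import datetime
from pathlib import Path

APPLE_EPOCH = datetime(2001, 1, 1)
cutoff_ns = int((datetime(2017, 9, 5) - APPLE_EPOCH).total_seconds() * 1e9)

db = sqlite3.connect(str(Path.home() / "Library/Messages/chat.db"))
rows = db.execute("""
    SELECT date, is_from_me, handle_id
    FROM message
    WHERE date >= ?
    ORDER BY date
""", (cutoff_ns,)).fetchall()

for raw_date, is_from_me, handle_id in rows:
    seconds_since_start = (raw_date - cutoff_ns) / 1e9
    source = "me" if is_from_me else handle_id
    print(seconds_since_start, source)  # feeds the note-writing sketch above
```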

The rendering problem is likely solvable, and I hope to crack it and add my footsteps (via Apple Health) to the data set.
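For reference, the Health app’s “Export All Health Data” produces a single export.xml full of Record elements, so pulling step timestamps out of it would probably look something like this. The record type string is Apple’s; the rest is a guess at how I’d wire it in.

```python
# Sketch: pull step-count record start times out of an Apple Health export.
# Steps appear as <Record type="HKQuantityTypeIdentifierStepCount" startDate="..."/>.
import xml.etree.ElementTree as ET
from datetime import datetime

STEP_TYPE = "HKQuantityTypeIdentifierStepCount"
START = datetime(2017, 9, 5)

step_times = []
for _, record in ET.iterparse("export.xml"):
    if record.tag == "Record" and record.get("type") == STEP_TYPE:
        # startDate looks like "2017-09-05 09:41:07 -0400"; ignore the timezone here.
        when = datetime.strptime(record.get("startDate")[:19], "%Y-%m-%d %H:%M:%S")
        if when >= START:
            step_times.append((when - START).total_seconds())
    record.clear()  # keep memory bounded; the export can be huge

print(len(step_times), "step-count records since the start date")
```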
