Scrape Elegy

Scrape Elegy is practice-based art research conducted by Gabby Bush, Willoh Weiland, Monica Lim and Misha Mikho.

A lament for what we give over to the bots. A mourning poem for the late capitalist hell that makes even the worst of us valuable. A cringe tour of the digital graveyard we make day by day. A sweet little drown in the doom scroll. A comedic monologue starring you and only you. All you need to hand over is your handle. All you will leave with is the OMG echo.

As part of the CAIDE research stream Art, AI and Digital Ethics, Willoh S. Weiland, Gabby Bush (CAIDE), Monica Lim and Misha Mikho (CAIDE) have created an interactive installation in the Science Gallery’s SWARM exhibition. The exhibit is designed to provide a comedic audio journey into your online presence: a private journey through the SWARM of data we create for ourselves.

Scrape Elegy in the Media

This Intimate Melbourne Artwork Airs the Skeletons in Your Digital Closet. Sabrina Caires, Broadsheet, 19 October 2022.

The sound of your internet history. Gabby Bush, Dr Ryan Jefferies and Willoh Weiland, Pursuit, 23 November 2022.

Weird, unique and unusual things to do in Melbourne. What's On, 9 September 2022.

The Object

The work is a large 2.5m by 2.5m cubicle clad in pink acoustic fabric from Autex. The cubicle hosts an iPad for inputting information at the entrance and, inside, a pink toilet with a dome speaker above it.

Users are taken through a series of prompts on the iPad. Once their information has been shared, users are invited into the structure, where they find the pink toilet. The installation is designed so that the user sits on the toilet while listening to their audio journey. The work is private, a subversion of the Instagram experience: you experience it on your own.

A pink toilet in a pink cubicle with soft, warm lighting

Privacy Statement

This algorithm is designed to scrape the media data from a user's Instagram account using the information the user provides before entry into the exhibit. The exhibit cannot access any Instagram account without the provision of a handle, and the data won't be accessed after this process. The media data collected will be used only for the purpose of providing a personalised soundtrack within the exhibition and will not be used again. No record of handles will be kept, and the data used for the purposes of this exhibit will be deleted from the server at the conclusion of the exhibition.

Project Team

With Lauren Steller (Design), Will Loft and Loft Studios (Fabrication), and Sullivan Patten (Voice Artist).

Thank you also to the team of Niels Wouters, Kobi Leins and Marc Cheong, who contributed to the original proposal in 2020.

Ceiling of pink toilet cubicle, with circle cutouts through which the ceiling of the gallery can be seen

How it Works

The exhibit's iPad interface displays the privacy disclaimer to obtain your consent to use your Instagram data. Upon giving your consent, you are prompted to enter your Instagram handle. This handle is then sent to the 'backend' server of the exhibit application. The server then sends your handle to the Instagram server, using the Instagram API (an API, or 'application programming interface', provides a set of software tools that a developer may use in their own code to interact with another application).
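The handle-to-media flow described above can be sketched as follows. This is a minimal approximation, not the exhibit's actual code: the endpoint and field names follow Instagram's public Basic Display API, which works from an OAuth access token rather than a typed handle, so the token here stands in for however the exhibit resolves a handle to an account.

```python
import json
import urllib.parse
import urllib.request

# Instagram Basic Display API media endpoint (the real exhibit's access
# method is not documented here; the access token is a placeholder).
GRAPH_URL = "https://graph.instagram.com/me/media"

def extract_captions(payload: dict) -> list[str]:
    """Keep only the caption text from an Instagram media response.
    Media without captions omit the "caption" key entirely."""
    return [item["caption"] for item in payload.get("data", []) if "caption" in item]

def fetch_media_captions(access_token: str) -> list[str]:
    """Fetch the user's media list and return its captions."""
    query = urllib.parse.urlencode({"fields": "id,caption", "access_token": access_token})
    with urllib.request.urlopen(f"{GRAPH_URL}?{query}", timeout=10) as resp:
        return extract_captions(json.load(resp))
```

The parsing step is split out so the network call stays a thin wrapper; only the text the exhibit actually needs survives past this point.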

The Instagram server responds with your Instagram media data (comments and photo descriptions) that the exhibit intends to use. The exhibit stores these in a simple database that exists on the iPad. It then randomly selects several of these media items and sends them to an Azure server, which performs the task of converting them into synthesised speech. The Azure server then sends back an .mp3 file, which is also saved locally to the iPad.
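The random-selection-and-synthesis step might look roughly like this, using the Azure Cognitive Services Speech REST API. The region, subscription key and voice name are placeholders, not the exhibit's real configuration:

```python
import random
import urllib.request
from xml.sax.saxutils import escape

# Placeholders: the exhibit's actual Azure region, key and voice are unknown.
AZURE_REGION = "australiaeast"
AZURE_KEY = "<subscription-key>"
VOICE = "en-AU-NatashaNeural"

def build_ssml(text: str, voice: str = VOICE) -> str:
    """Wrap one piece of text in the SSML envelope the Speech REST API expects."""
    return (
        f"<speak version='1.0' xml:lang='en-AU'>"
        f"<voice name='{voice}'>{escape(text)}</voice></speak>"
    )

def synthesise(captions: list[str], n: int = 5) -> bytes:
    """Pick a few captions at random and ask Azure to speak them as one clip."""
    chosen = random.sample(captions, min(n, len(captions)))
    req = urllib.request.Request(
        f"https://{AZURE_REGION}.tts.speech.microsoft.com/cognitiveservices/v1",
        data=build_ssml(". ".join(chosen)).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": AZURE_KEY,
            "Content-Type": "application/ssml+xml",
            "X-Microsoft-OutputFormat": "audio-16khz-128kbitrate-mono-mp3",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.read()  # raw .mp3 bytes, saved locally for playback
```

Requesting an mp3 output format means the response body can be written straight to disk for the audio software to pick up.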

The audio software is then told where to find this .mp3 file in the iPad's file directory, so that it can play it back to you. Your task is then to enjoy the exhibit!

The technology stack and architecture

The application uses Docker and has five containers:

  1. Our frontend (which just builds static files during docker-compose build using webpack and shoots them off into a volume to be picked up by Nginx, exiting immediately during docker-compose up);
  2. Our backend (which runs Daphne, an ASGI Django server with support for Channels to facilitate our use of websockets);
  3. Our task queue (Huey, which is similar to Celery);
  4. Redis (an in-memory database used both by Django Channels to facilitate websocket connections and by our task queue Huey); and
  5. Our web server (Nginx, which serves all our static files, namely: a) our optimised frontend (React) production build, b) our backend static files, e.g. for the Django admin site, and c) our "static" (not actually very static) generated audio clips, which are produced by our Huey task queue and passed on to Nginx).

Note the omission of a dedicated database container. Instead the app uses SQLite, which is read from and written to only by containers 2 and 3 above. A Postgres instance could have been used, but there were no discernible advantages to doing so.
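The five containers described above might be wired together in a docker-compose file roughly like this. Service names, image tags, paths and commands are illustrative guesses, not the exhibit's actual configuration:

```yaml
services:
  frontend:            # 1. builds the static React bundle, then exits
    build: ./frontend
    volumes:
      - frontend_build:/app/build
  backend:             # 2. Daphne, the ASGI Django server (Channels/websockets)
    build: ./backend
    command: daphne -b 0.0.0.0 -p 8000 project.asgi:application
    volumes:
      - sqlite_data:/app/db
    depends_on:
      - redis
  worker:              # 3. Huey task queue (shares the backend image)
    build: ./backend
    command: python manage.py run_huey
    volumes:
      - sqlite_data:/app/db
      - audio_clips:/app/audio
    depends_on:
      - redis
  redis:               # 4. in-memory store used by Channels and Huey
    image: redis:7-alpine
  nginx:               # 5. serves the frontend build, Django statics and audio
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - frontend_build:/usr/share/nginx/html:ro
      - audio_clips:/srv/audio:ro
    depends_on:
      - backend

volumes:
  frontend_build:
  sqlite_data:
  audio_clips:
```

Sharing the `sqlite_data` volume between the backend and worker services reflects the note above that only containers 2 and 3 touch the SQLite database.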