Making the Visible Invisible
  • 4 May 2022, 1–2pm
  • Mark Andrejevic contributes expertise in the social and cultural implications of data mining and online monitoring. He writes about monitoring and data mining from a socio-cultural perspective, and is the author of three monographs and more than sixty academic articles and book chapters. He was the Chief Investigator for an ARC QEII Fellowship investigating public attitudes toward the collection of personal information online (2010–14). Mark is particularly interested in social forms of sorting and automated decision making associated with the online economy. He believes regulations for controlling commercial and state access to and use of personal information are becoming an increasingly important topic, and that the academy has an important role to play in finding new ways to take advantage of new technologies while preserving a commitment to democratic values and social justice. Mark is Professor, School of Media, Film, and Journalism, Monash University.

    Mishka Henner is a visual artist born in Belgium and living in Manchester, UK. His varied practice navigates through the digital terrain to focus on key subjects of cultural and geo-political interest. He produces books, films, photographic and sculptural works that reflect on cultural and industrial infrastructures in a process involving extensive documentary research combined with the meticulous reconstruction of imagery from materials sourced online. Mishka’s work has featured in group shows at the Museum of Modern Art and Metropolitan Museum of Art, both New York; Centre Pompidou, Paris and Centre Pompidou Metz; Victoria & Albert Museum, London; Pinakothek der Moderne, Munich; Hasselblad Foundation, Gothenburg; Ullens Center for Contemporary Art, Beijing; FOAM Amsterdam; and Turner Contemporary, Margate. He holds a Masters degree from Goldsmiths College in London and in 2013, was awarded the Infinity Award for Art by the International Center of Photography. He was shortlisted for the Deutsche Börse Photography Prize in the same year and in 2014, was on the shortlist for the Prix Pictet for his large-scale works focusing on landscapes carved by the oil and beef industries of America.

    Jahkarli Romanis is an artist and researcher based in Narrm/Melbourne. Raised on Wadawurrung Country in Torquay, Jahkarli moved to Melbourne to continue her tertiary studies in 2018. After completing an honours degree in photography at RMIT University in 2020, she commenced a PhD at Monash in 2021 through the Wominjeka Djeembana Research Lab. Jahkarli’s work is inextricably intertwined with her identity as a Pitta Pitta woman and explores the complexities of her lived experience and the continuing negative impacts of colonisation in Australia. Jahkarli’s practice aims to subvert and disrupt colonial ways of thinking and image making, obtaining agency over her representation as a Pitta Pitta woman. She utilises her research and artwork as tools for investigating biases encoded within imaging technologies. Her PhD research explores ways in which visual systems of cartography omit Indigenous knowledges of place, sustaining colonial narratives within Australia and the myth of ‘Terra Nullius’.

  • Online here and Monash Caulfield Big Screen

Machinic vision systems have long carried the potential to expose the unseen—or unnoticed. For Walter Benjamin, film, for example, transformed our experience of the world through the ‘dynamite of the tenth of a second’. Automated image systems rehabilitate and reconfigure seeing in a range of ways that promise to augment, surpass and displace human vision, thanks to their ability to capture and process visual data at unprecedented speed, scale and resolution. This presentation focuses on sensing and imaging systems ranging from mapping, surveillance and AI-derived imagery to biometric data capture. In conversation with Professor of Communications and Media Studies at Monash, Mark Andrejevic, artists Jahkarli Romanis and Mishka Henner discuss how such systems contribute to the prospect of automated forms of governance, and the transformation of physical spaces and territories. Once physical territories can be platformised, they can be rendered malleable and customisable, facilitating new strategies for control and governance.

Presented as part of PHOTO2022.

Spiros Panigirakis: Hi, my name is Spiros Panigirakis, and I'm the Interim Head of Fine Art at Monash University. I'd first like to acknowledge that I'm Zooming from Bunurong and Wurundjeri Country of the Kulin Nation, and I'd like to pay respects to Elders past, present, and future. And with that, also acknowledge Indigenous friends and colleagues here today.

At Monash University, at MUMA and at MADA, we consider the art that we make, that we write about, that we present here on this land is always contextualised along a very rich tradition that is 60,000 years old. On behalf of PHOTO 2022, MUMA, Monash University Museum of Art, and MADA, Art Design and Architecture at Monash University, I've got the great pleasure of introducing our audience to today's Form x Content talk, "Making The Visible Invisible" with the artists Jahkarli Romanis, Mishka Henner in conversation with Mark Andrejevic. Mark is a professor in communication and media studies at Monash University. Jahkarli Romanis is a Pitta Pitta woman, artist, and researcher, and PhD candidate at Fine Art Monash University, and a researcher at the Wominjeka Djeembana Lab. And Mishka Henner is a visual artist based in Manchester, England, and in 2015 was an artist-in-residence at the Mildura Arts Centre with Daniel Crooks, Julie Gough, and a series of other really great artists. Over to you, Mark.

Mark Andrejevic: Thanks so much. I'd also like to acknowledge and pay respects to the traditional custodians of the unceded lands where I'm speaking to you today, the Bunurong people of the Kulin Nations. I pay my respects to their Elders past and present, and acknowledge the Indigenous people gathered with us today who may be viewing and, of course, participating in this discussion. As we consider how to build a better and more just society, may we honour and pay respect to the knowledge embedded forever within the Aboriginal custodianship of Country.

It's a great pleasure for me to have the opportunity to be in discussion with today's artists. I'm a media researcher, and I suppose in a gesture that's, I hope, not too expansive, I think of art as, of course, participating in the media. I'm incredibly interested in the ways in which the folks who are working in fine arts anticipate and raise all of the issues that we're engaging with today around questions of visibility and invisibility.

I might preface this by saying we had some discussions as we were preparing for this event around how to frame it. Initially, we'd started with this notion of making the invisible visible, and we ended up deciding to flip that around. I think it's a really interesting decision given the contemporary media environment. We live amid a proliferation of screens and images. It seems as if everything is being made visible all the time. But, of course, at the same time, the very proliferation of those screens serves as a form of screening. What we don't see very often is what takes place behind the screens and all of the processes for sorting and shaping the mediated world that seems to be increasingly immediate. At the same time, I'm also a scholar of surveillance, and I'm really interested in the ways in which surveillance moves in some respects from being a spectacular event where we see the mechanisms of surveillance to becoming embedded around us ubiquitously, to the extent that monitoring almost disappears through its very proliferation.

I was looking at some of the discussions of augmented reality and the idea of what it would take to be able to make an augmented reality system work. And one of the claims made by the founding editor of Wired Magazine was that we would need to have cameras everywhere recording all the time. That's a fantasy of total exposure, of everything being captured all the time. But at the same time, of course, what he was envisioning was a platform that would mediate and sort and make sense of those images in ways that are really beyond the capacity of humans to absorb that much imagery. And that fantasy of total capture seems to characterise so many of the media formations arising around us: think of framelessness, the disappearance of the frame, of the border of what's captured, associated with things like virtual reality, augmented reality, 360-degree imagery; in all of these ways, this ambition of total capture.

I encountered a few years ago an advertisement for some technology that was designed to address the shortcomings of humans. It was technology that advertised itself by saying that human memory is flawed. It asked this question, "How much of your life do you remember?" The answer is statistically around 0.001%, and that's if you have a good memory. And that was meant to frame us as flawed. We can't capture everything. We can't see everything. The product they were selling was called Life Vlogger. It's a camera that you wear that captures your life in its entirety. The idea is your own flaws, your own limits as a human who can only process so much information would be overcome by this technology, which they described as the next big thing in the high tech industry.

And, of course, what that overlooks is the very function of our existence as finite beings is based on not capturing everything. It's based on what we leave out as much as what we experience or remember. In fact, memory isn't anything if it's everything. And the same might be said of describing or narrating an event. It's always constructed in part by what you leave out. And so, I think this question of making the visible invisible is a really nice way to think about what are all of the ways in which the avalanche of images and the ambition of total capture raises the question of what these leave out and how to think about what gets left out and also what takes place behind the screen.

So, I thought that was a really interesting and provocative way to set things up. I think I'll ask Mishka to kick things off because he was the one who originally proposed flipping it around. And so, Mishka, if I could get you to talk a little bit about why you made that move of going from making the invisible visible, which is what we often think of as a critical work or investigative work to making the visible invisible, and how that connects with the projects that you're working on.

Mishka Henner: Sure. Yeah. Thanks. And thanks for inviting me, everyone, by the way. I didn't mention that before, but it's a real pleasure to be able to meet you all and talk to you across such a vast distance. I worked as a photographer, as a documentary photographer for many years before I became a visual artist. I think one of the great frustrations for me with photography was the power imbalance and the power relationship between the photographer and the photographed, so the photographer and the subject. I think I became much more interested in flipping it around and thinking much more about the viewer but also about the act of capturing and the complicity in that act.

And so, when I started working like this, it was the beginning of the Facebook social media era. Google Earth had been going just a few years, and it seemed to me that suddenly it was possible to really flip that relationship around in some way, that the observed could become the observer, if you like, that it was possible to take the optical tools of the military industrial complex which had been made available in the civilian domain through the internet, through tools like Google Earth and so on. It was possible to use those tools to look at the very instruments of that military industrial complex.

As soon as you do that, obviously, you start to realise that there's a lot of effort that goes into hiding stuff, right? So there's lots of things out there that people don't want you to see. I think that the idea of making the invisible visible as being a function of art has become almost a cliché because in parallel to that, there is this whole other infrastructure, this whole other paradigm, which is very much about, yeah, this seeing everything but making that eye, making those optics invisible, ubiquitous almost in a way.

Like I said, when I started working like this in around 2009, 2010, I really wasn't sure how long this window of opportunity would last, that you would be able to use screenshots, for example, to save visual material from Google Earth that appeared on your screen. On your keyboard, there's a print screen button, which I always thought was really fascinating. It's a relic of early computer technology which has remained. It survived on our keyboards, and it does allow us to capture anything that appears on our screens. And so, I thought that was really a fascinating tool that I should try to use.

I guess I can start by sharing some of the earliest projects that I started to work with in that way. One of the earliest ones was inspired by the work of Ed Ruscha really in America in the 1960s. I started out making artist books, print-on-demand artist books, which I thought was always a great structure within which to work. I made a work called "Fifty-One US Military Outposts," which used Google Earth and other satellite imaging providers actually, like Bing Maps at the time and a few others. I can't remember which others. But these are platforms that basically aggregate satellite imagery, often from the US military or from civilian satellite imagery providers, commercial satellite providers. And it's a remarkable tool, really, because it is a total vision, if you like, of the world. I sometimes refer to the world as an image of infinite detail now and Google Earth as an example of that. You can zoom in and zoom in and zoom in and zoom in. In fact, you can go so far, you can reach a limit, and then it turns into Google Street View.

But basically, you have this totality of the globe represented in visual form. But there are different degrees of visibility in relation to high resolution imagery and low resolution imagery. What I found really interesting was that if you could pinpoint US military outposts across the world, almost all of them were in really good high resolution imagery, which signals the relationship between the optics and the military in the first place. So what I did was I collected all these different locations. So there are 51 US military outposts in 51 different countries in this book, and you can see them here. They're usually airfields or storage depots or military bases. This is commander fleet activities in Japan over here. This is the air expeditionary group in Iraq, another air base. And this is the Ronald Reagan Missile Defense Test Site in the Marshall Islands.

So just working from home, it was remarkable to me that I could create this map just as a civilian, this global map of military empire. The one on the left here is an interesting one. This is the CIA Predator drone launch site in Pakistan. Now, at the time, as would be the case now, there was great sensitivity about whether the US military were involved or were based in any way on Pakistani soil. Somebody had actually spotted the outlines of a US Predator drone on this airfield in Pakistan. And so, the very act of Google Earth itself, the imagery from Google Earth, which had been created in the first place by military satellites which had then been publicly released years later, that very act of photographing the world had revealed to the world the existence of US military hardware on the soil of a country that was extremely politically sensitive to the presence of the US military on its soil. I really loved those contradictions. Is there anything you want me to add to this?

Mark Andrejevic: I'm really interested in the fact that the resolution was so high on the military installations. Did you find that since these were military satellites, they were actually focusing on getting high resolution images? Did it drop off if you went to other spaces or was it just uniformly high resolution?

Mishka Henner: No. Well, those areas were high resolution, and I was always surprised about that. I mean, there's a different project that I worked on, a couple actually, which are a kind of counterpoint to that in a sense or reveal a different level of visibility and invisibility, if you like. I'll share those with you now. I think I worked on this in 2010 actually, maybe even before Fifty-One US Military Outposts, but I released it in 2011. This is called Libyan Oil Fields, and it was basically the time that NATO... I could sense that the narrative was beginning to form around NATO forces invading Libya. I'm a child of 2003 with the invasion of Iraq. I marched against that invasion in London, as did millions of other Brits. I sensed that actually we were preparing again to invade another country, and nobody was talking about the natural water resources in Libya or the oil reserves. Libya was the only remaining country in the world that had a nationalised oil infrastructure, and it also happened to have the largest supply of natural water reserves in Africa.

I was interested in using these tools to do an almost sort of preemptive photojournalism, if you like, by trying to get in the heads of the military strategists and trying to think about how they would look at Libya in their preparation for war. So I turned to Google Earth, which obviously was the tool that I was using at the time. Interestingly, I would find on Google Books these giant textbooks about oil fields in Libya and across Africa. And so, what I would do is I would superimpose the maps from those textbooks onto Google Earth, and the maps in those books would show you where the oil fields were and what the names of those oil fields were. And then I would zoom into the Libyan landscape. What was really fascinating is 99% of the Libyan landscape is desert. I mean, it's just desert. There are very few urban areas. So almost the entire country is low resolution. There's very little high resolution imagery of Libya in Google Earth to this day, actually.

But if you superimposed those oil fields onto the landscape in Google Earth and zoomed in, you would find that the oil fields were these little pinpricks of higher resolution imagery. These are some of the examples here. You can see all of these on my website, by the way. I found that really fascinating because, on the one hand, almost the entire country was invisible really, but these tiny little gold mines, if you like, of capital-generating resources were in really good resolution. There was really good quality imagery of all these locations, which again was a signal to the interests that lay behind this kind of totalitarian vision of the world. I found that really, really fascinating.

And then finally, a separate counterpoint was after working on "Fifty-One US Military Outposts," I couldn't believe how much was visible, and I turned my attention to the censorship of satellite imagery. Obviously, there's lots of censored imagery on Google Earth, but basically different intelligence agencies in different countries use different techniques. Usually they're not consistent at all. The Russians, for example, might simply white out areas in the country or pixelate them, or the imagery might just be missing. You might just have streets of black across an image. And so it's missing. But I found that in the Netherlands, of all places, there was this remarkably consistent aesthetic choice in how to censor the landscapes. I made a book called "Dutch Landscapes," again printed on demand. The interesting thing for me at the time was, yes, the censorship is really fascinating, but also the kind of echo between the digital artifacting of the landscape and the actual man-made alteration of the landscape to protect the Netherlands from the natural enemy of flooding.

I found these patterns between the digital imposition of Photoshop-induced shapes and filters onto these sensitive sites. I found a really lovely echo with the actual... yeah, the entire manufacturing of the landscape to control irrigation and for flood defenses. So most of these locations that were censored were royal palaces, fuel depots, and military barracks, and so on. But obviously, there's a fantastic contradiction in these visual examples. On the one hand, you've got this very clear urban landscape photographed in really high quality, great detail, and that is just completely punctured by the imposition of these Photoshop filters. This is an effect called crystallise. It reduces detailed imagery into just a series of polygons of colours that kind of... Yeah, they're equivalent to the palette of the image underneath. And that collision for me I thought was really fascinating because it signified that post-9/11 paranoia: on the one hand, everything being visible, but on the other, terrorists could use that visibility against the very system that created it. And then on the other, the transition from an analog age into a digital age, and I found this to be an aesthetic equivalent of that. Yeah.

Mark Andrejevic: Thanks so much. That's so fascinating, and it nicely highlights the ongoing tensions between, on the one hand, total exposure and, on the other hand, what gets hidden or backgrounded or low resolutioned. I know that Jahkarli has also been interrogating some of the ways in which Google while on the one hand offering total information capture is at the same time leaving many things out, rendering other things invisible. And so, maybe we'll bring Jahkarli into the conversation and if you could speak a little bit about your thoughts in relation to that tension between visibility and invisibility.

Jahkarli Romanis: Yeah, for sure. Thanks for having me. I'll also acknowledge that I'm on Wurundjeri, Bunurong land. It's a privilege to be on Wurundjeri Country as a Pitta Pitta woman. I extend my respects to Elders past, present, and future.

Mishka, that last series you were talking about, "Dutch Landscapes," was the first body of work of yours that I came across and I think for me a nice segue, I suppose, into my work is thinking about censorship as well, but again, through making the visible invisible. So I'm coming at my practice and looking at Google Earth technologies from an Indigenous standpoint. I'm thinking about the politics of mapping, but also the politics of imaging and how we think about place when it's being imaged in the way that Google Earth is.

I'll just share my screen. Cool. Initially, my practice was centred around imaging self and subverting the colonial gaze through portraiture and layering myself into images of Country in that way as an exploration of disconnect from Country itself. I grew up about two hours south of Melbourne, and Pitta Pitta Country for me is in the Western inland region of Queensland, not far from the Simpson Desert.

But my introduction to Google Earth was during my honours year. I really needed to go back to Country and make work. This was during the beginning of the pandemic, and so I couldn't travel, and, of course, decided that I'd utilise Google Earth as a tool to go there. When I went there, what I saw was quite fascinating in terms of, as we were talking about, this discrepancy between high and low resolution, but also what information was actually available about place. The catalyst for getting this train of thought going was I went into Google Earth to find this particular tree, which is a Waddi tree, and it's an important gathering tree for my people.

The image itself or the way that it had been represented within Google Earth was reduced to this dark shadow of pixels. So this work here is highlighting that. I've superimposed an image of the tree that I'd made on Country a couple of years prior on top of this Google Earth representation of the same tree.

But yeah, I suppose thinking about mapping as historically being used as a form of colonial control and thinking about historically within Australia the myth of terra nullius. I think with my work I'm really trying to highlight that Google Earth itself is a form of colonialism and it is further ingraining this myth of terra nullius within Australia by not including Indigenous knowledges of place. Yeah, thinking about acknowledgement of the many different Countries that make up Australia, but also thinking about how we follow or lean towards this Western way of mapping because ultimately there are many ways of understanding place and mapping. So yeah, just considering all of the different hierarchies that Google Earth is upholding within the technology.

But I guess I'm also interested in how the technology itself malfunctions and why it doesn't work. These images that are scrolling by are a transition between the Google satellite view and street view and what happens when the technology degrades and disintegrates within itself. Pitta Pitta is a rural area, and a lot of these images haven't been updated for the last 10 to 12 years. So again, coming back to that idea of, yeah, placing value on certain spaces and places within Google Earth, in a nutshell.

Mark Andrejevic: Thank you so much. That historical set of connections is so interesting. I think about the European arrivals in new territories, one of the first things they always did was mapping and, of course, other forms of extraction, but it's really interesting to see those Google cars driving around and imagining the digital forms of extraction and capture that they're engaged in. I'm curious if either of you have some reflections about taking what Google's doing and finding the types of creative uses that you're putting it to. How do you think about what it means to work on that platform that they've created, or work with it? What are the, I don't know, potential hazards or opportunities that it creates? If you've got any thoughts on that.

Jahkarli Romanis: I think for my work I'm slowly starting to consider copyright and permission. So through making this work, obviously, Google Earth has a set of permission guidelines and copyright guidelines. I find it interesting, of course, that I'm on a technicality not really meant to be using these images in the way that I am, but they are images of stolen land. And therefore, I am taking back Country in this sense or basically doing what Google Earth is doing but just subverting it. So I think creatively it's got a lot of potential and I love that artists are utilising it as a tool to bring things to the fore and highlight things that people can access at their fingertips but just maybe haven't thought about. Yeah.

Mishka Henner: I mean, Google is a trillion-dollar company that has basically taken over our lives. I think it's a fair social contract that artists, or anyone really, use their services in a way that can suit them as individuals in exchange for Google seeping into every pore of our lives and trading our information and commodifying our personal data. I never doubted or questioned for a second my right as a civilian and artist to make use of their material to make work. And up till now, I've never had any issues really.

One of the things for me is I think of myself as a critical viewer really. I'm not a very good consumer, right, to start off with. I'm not good at buying stuff. I don't need much. I'm much more interested in looking at the world and reflecting on it and critiquing it really. I think I love the possibility that these engineers and designers and these blue sky thinkers at Google never thought that somebody like me or Jahk might use their tools in these ways. I mean, obviously, they have enormous blind spots. It's really fascinating to see how Jahk's using the technology, because obviously these are huge cultural blind spots that expose power dynamics and ways of thinking about the world that are very specific to those engineers that design the software. It's become so ubiquitous, and it's become such a dominant paradigm that people don't think about those things, people don't consider those things.

Even when I refer to the world as an image of infinite detail, I guess I'm drinking the Kool-Aid, if you like. That would be the dream of these Google engineers. But actually I think what I'm also really interested in is these blind spots, these glitches and breaks that reveal a deeper ideological underpinning. And probably for me, one of the most key aspects really is the rampant capitalistic urge to commodify and exploit, and the imagery is just one more tool for that.

Mark Andrejevic: What I found really profoundly moving and also really just socially and philosophically important about both of your work is the way in which it highlights and thwarts... let's put it this way, highlights the impossibility of the fantasy of total information capture. There's a kind of smoothness to these digital platforms that's underwritten by the ideology of the tech proselytizers, which is, yes, we can capture everything. We'll get it all. And it's so impossible. Even Kevin Kelly's vision of cameras distributed everywhere that would capture everything, you'd need then a camera for every camera to capture the camera, right? There's some kind of impossibility to this fantasy, and yet it persists, right? This idea of total information capture is so key to these online information economies that imagine they can gain certain types of control over what's taking place if they can just get everything. And the solution is, in a sense, always more data to the point of total information. I think what all these projects that we've looked at, they highlight the seams and the blind spots and the deadlocks in that fantasy. That seems super important to me in this data world that we're entering to highlight that.

Mishka Henner: Yeah, I just want to add, there's this really fascinating story. It's another glitch of sorts really. There's this guy called Jim Gray who was one of the key architects of Google Earth. He was an amazing engineer who found a way to basically aggregate all of these different satellite imagery systems into a whole, which eventually resulted in Google Earth. Actually, there was a platform that preceded Google Earth, but that was basically what Google Earth was built on.

He set off in a little schooner, in a little boat to go and scatter his mother's ashes off the coast of San Francisco. This is a set of islands, I forgot what they're called now, about 30 miles from San Francisco. It was actually a really simple journey to make for an experienced sailor, which he was. So he was traveling on his own and his boat was equipped with the most sophisticated, advanced beacon tracking systems and so on. He set off on an absolutely beautiful blue sky, still waters day for these islands. And he disappeared, he was never seen again. There was this extraordinary effort by friends and colleagues who worked at Google Earth, who worked for lots of different satellite imaging companies to... It was in the early days of Amazon Turk actually. Is it Amazon Turk? Is that-

Mark Andrejevic: Mechanical Turk.

Mishka Henner: Mechanical Turk, that's right. It was in the early days of Mechanical Turk, and there was this extraordinary project to basically see if they could find Jim Gray's boat in the ocean, right, using all of this different satellite imagery. So what they did is they literally recalibrated satellites to make passes over the space between San Francisco and these islands. They studied hundreds of thousands of images looking for a single white pixel, which represented Jim Gray's boat. And they never found his boat. They never found it even though it was equipped with the most sophisticated tracking systems. He had a family. There was never any sign that he wanted to disappear, but he completely disappeared. He vanished, his boat was never found, and nor was Jim Gray.

I find that to be a really fascinating kind of, I don't know, moment that points to the limitations of this technology by the disappearance of one of the very architects of the technology and Silicon Valley's attempts to find him using these technologies. But actually, the ocean swallowed him up. I find that really pertinent in some way. I don't know, I thought I'd share that story with you.

Mark Andrejevic: Yeah, no, it's very interesting and, again, highlights the blind spots. The anxiety, of course, is that the response is, "Well, we just need more satellites recording all the time, and we could have reverse-engineered it."

There was a city, I think it was Baltimore, that hired a company called Total Information Systems. Actually, I think the technique was pioneered during the Iraq war: this guy claimed to be able to fly a plane over the city 24 hours a day and capture the entire city in high resolution. This was an anti-crime move, with the idea that you'd have a complete record that could be rewound and fast-forwarded. So if something had happened the night before, you could rewind to that night, rewind further to see what led up to that event, and then fast-forward to see where everybody went. It was not just a spatial capture but a temporal capture, right, because you could capture the sequence as well as the space.

And it was, of course, some millionaire who was funding this to imagine the possibility of total capture. But your story, again, I think highlights the omissions and the deadlocks. Jahk, what you were talking about, alternative forms of mapping, Indigenous forms of knowledge, are different ways of thinking about the relationship between time and space. I think, against that background, they also highlight that those fantasies envision a very particular model of information capture, a very particular way of knowing space and knowing time. It would be great to have your thoughts about what those alternatives could be, because I think we need alternatives, right? The direction they're headed doesn't seem to be a constructive one.

Jahkarli Romanis: Definitely. I think, to touch on what Mishka was saying before, it's about constantly critiquing, not just accepting these technologies as neutral tools. There's always an agenda. They are subjectively made. And so, I think a lot of people take these at face value and think of them as scientific and neutral, but ultimately, yeah, everything has been designed and engineered; there's a particular reason why something has been done the way that it has. So, yeah, I think through my work I'm bringing that awareness of Indigenous histories in Australia, but also considering, if a collaboration was to happen between Google Earth and different Indigenous nations, is that possible? I don't think it is, largely because of issues like data sovereignty and who would have the ultimate ownership and control over these technologies if Indigenous knowledges were incorporated. Yeah, very interesting, all the data stuff. Yeah.

Mishka Henner: Jahk, can I ask you? What you just said there, the issue of ownership seems to be a real key thing around which the whole subject would pivot, is that right? I'm just interested.

Jahkarli Romanis: Yeah. I mean, there's different layers of ownership. There's ownership of intellectual property, there's ownership of data, ownership of land. I think ultimately all of this stems from just acknowledging the histories and acknowledging that Indigenous peoples are the traditional custodians, that we are the owners, in a sense. And through these technologies like Google Earth, this is constantly reiterated as false or there's no opportunity for that to be acknowledged. So I'd say it definitely is an issue of ownership on many levels. Yeah.

Mark Andrejevic: I think that question of how it's put to use is also... So, ownership of the systems of sense-making that absorb and digest these huge volumes of images. One of the things, again, that I really appreciate about the work that you're doing is that, in a sense, it's a counterpoint to the automated processing of the images, or the treating of the images as operational in the way that Trevor Paglen describes them. What if you de-operationalised them, even the ones that are difficult to read or the ones that jam the system?

I remember reading about a researcher who became quite well known I think subsequently, who was doing research on using Google Earth to make inferences about socioeconomic levels of neighbourhoods by looking at the types of cars that were in the neighbourhoods. It was very much a marketing logic, right? There are ways in which you could automatically process these images to make inferences that might have, again, monetary value or some instrumental use. I think she eventually got hired by Google and then turned back against them and got fired. But that question of the difference between creative, artistic, human sense-making and the machinic processing of these images seems to be a really interesting point of struggle and critique. I don't know, I thought I'd see if you had any thoughts about how these images can be used in automated ways.

Mishka Henner: Yeah. I mean, I guess the human interpretation gets in the way maybe of the smooth, automated machinery. Yeah, it's interesting, isn't it? Because all the issues that Jahk's discussing really go to the crux of that, which is, what else is going on? What other ideological forms exist that run completely counter to that Silicon Valley paradigm? And there are so many. I mean, in a way, those blind spots that Jahk mentioned, you wonder whether actually there should be real efforts to maintain those as blind spots in that infrastructure, right? Because actually, so long as it remains low resolution, hidden, not well documented, there are great opportunities and possibilities for alternative things to grow maybe. I don't know, I don't know what you think about that, Jahk.

Jahkarli Romanis: Yeah, no-

Mishka Henner: Hoping to see-

Jahkarli Romanis: Yeah, I think definitely there's agency in withholding details, and so I think it's important also to acknowledge that even if this out-of-this-world collaboration between Google and Indigenous peoples was to happen, there could be, or very likely would be, a case of pretty much saying, "No, I don't want Country imaged. I don't want it published. I don't want it shared or accessed by everybody." There are pros and cons, or a lot of cons actually, between having all of this accessible and leaving it as it is. Yeah, I think these images will exist within a context, and I think to not acknowledge that context is damaging, I suppose. Yeah.

Mishka Henner: You know what would be fascinating? I'm wondering about everything that would expose. That collaboration between what you're describing and Google would explode all of those taken-for-granted, hidden assumptions about ownership and about rights. It would be really fascinating because it would be two very different ways of thinking about the world colliding, and the failure of that collaboration would reveal so much about the dominant paradigm. You know what I mean?

Jahkarli Romanis: Mm-hmm (affirmative).

Mishka Henner: I'm sure people would be fascinated in the failure of that collaboration because of everything it would teach us about what Google expects from its social contract with its users. You know what I mean?

Jahkarli Romanis: Yeah, absolutely. I feel like for Indigenous knowledges to be correctly shared, I suppose it would not be a collaboration. Collaboration would not be possible, I don't think. Yeah, I'd love to see it fail because ultimately that teaches us a lot about the way Indigenous knowledges actually work. Yeah, they are just so opposite. We cannot comply with Western ways of knowing or Western ways of understanding place. Yeah, they can't really coexist.

Mishka Henner: Well, I would love to be a fly on the wall in that meeting, though, between the Google executive and someone like yourself, and to hear the negotiation over what Google would expect. Because, on the one hand, you can totally imagine the Google mindset would be, "Oh, this is fantastic. This is amazing. This is a way for us to broaden our appeal, to think out of the box, to think differently." Right? So you take all the aspirational stuff about Silicon Valley culture and apply it to a scenario like this, where ultimately it would completely fail based on that kind of aggressive, capitalistic, colonial impulse, right, which would still be at the heart of everything they do. But it's completely unconscious, taken for granted, yet expressed absolutely concretely in terms and conditions. I think that would be really fascinating. Yeah, it would show the chasm between what they're doing and alternative visions of the world.

Jahkarli Romanis: Absolutely. I'll give them a call tomorrow. We'll set up a meeting and I'll have you on the phone somewhere hidden so you can overhear what's happening.

Mark Andrejevic: It sounds, in a way, coming full circle, that the failure would be more than a glitch; it would be, as you say, a chasm. It would reveal, make visible, something that I think was already visible to some, but it would make it more widely visible and perhaps visible to Google. It would be interesting to see how they tried to fold that back in. But it's a really interesting point. There's no third position from which these two positions can be mediated. They're incompatible.

I'd like to give a huge thanks to both Jahk and Mishka for sharing their work with us and for the open conversation that you've had about it. It's been fascinating speaking with you. And thank you also for the work that you're doing. This type of work is one of the loci of hope in an era that looks pretty bereft in some respects. So thank you for your work.

Jahkarli Romanis: Thanks so much for having me. I appreciate being able to chat with Mishka. I'll be honest and say that I admire his work so much, so to have the opportunity to speak with someone who's been involved in this work for the better part of 10 years, I suppose, is very, very cool. So thank you.

Mishka Henner: Oh, thanks, Jahk. Well, I'm thrilled that you can find something in it, and equally, I'm really interested in what you're doing. I hope you'll keep in touch because I'd love to see more of what you're doing and see how you progress.

Spiros Panigirakis: I'm just going to pop in. I'd like to thank Jahkarli and Mishka. What was interesting about the talk was, I guess, the insidious control mechanisms of the state, and the paradox that the images come out so aesthetic. When looking at your respective artworks, I'm thinking about those systems of control and surveillance, but also about what is being produced; the outcomes are so aesthetic and so beautiful, and these are very complex artworks. So thank you very much, Mishka and Jahkarli, and Mark for facilitating this really fantastic conversation.

Mark Andrejevic: Thanks to MUMA and MADA for organising and hosting it. Really appreciate the work that went into that. It's been fascinating for me. I really am inspired by the creative work that you're doing. For the things I think about, it connects with all of them, so it's been wonderful for me. Thank you for that.

Semester 1: On Connection

Form x Content is a program of online and on-campus talks delivered during Monash’s teaching semesters. Thematically driven, the series features the voices of renowned First Nations, Australian and international artists, designers, architects, curators and academics, and aims to stimulate new thinking and encourage debate and discussion around contemporary ideas. The program is delivered every Wednesday lunchtime during Monash University teaching semesters, both online and broadcast on the Big Screen at Monash Caulfield.

In 2022, Monash Form x Content considers ways in which individuals and organisations are changing and adapting in response to the disconnection and alienation experienced as a part of the pandemic.

The Semester 1 theme, ‘On Connection’, considers the importance of relationships and the ways in which these sustain us, with several talks presented in partnership with PHOTO2022 and Melbourne Design Week.

In Semester 2, the program theme, ‘On Care’, explores how the disciplines of art, design and architecture can engender and embed principles of caring, inclusivity, safety and wellbeing through research and practice.

Form x Content is free and accessible to all.

Join us Wednesday lunchtimes at 1pm—online and on the Big Screen, Caulfield campus.

Form x Content Presented by Monash Art, Design and Architecture, programmed by Monash University Museum of Art | MUMA.