[00:00:00] Ceasar: This is The Move.
Ayushi: I'm Ayushi Roy.
Ceasar: I'm Ceasar McDowell. You're listening to us. (both laugh over background music)
So with us today is Sasha Costanza-Chock, associate professor at MIT in Civic Media and also co-founder of the Design Justice Network.
Sasha: Design Justice explicitly says: Well, not only do we want to pay attention to the values in the objects and systems that we're designing, but here's the set of values that we believe in.
Ceasar: So Sasha, Hi. Welcome.
Sasha: Hi. How are you?
Ceasar: It's good to have you here. When we were starting out this series, I was telling Ayushi that we needed to invite you here. We were making a whole list of people we wanted to have on our initial series, and you were at the top of the list.
Sasha: An honor.
Ayushi: It's true. (crosstalk and laughter) You still are at the top of our Excel spreadsheet.
Sasha: Well now I'm embarrassed. (all laugh)
Ayushi: Well part of it [00:01:00] is because, you know, you may not remember this. But one of the ways I came to work with you is because of, uh, the platform you had developed called Vojo, which we used for this campaign in Cambridge on domestic violence. And that whole Vojo platform is, you know, really a voice-based media platform that allows people to call in to a number or send text messages and organize information and stuff.
But what I loved about it is where it came from, right. Because it came from your work in LA. Uh, you had built this software called VozMob that you actually did through a participatory design project with day laborers and household workers and the Institute for Popular Education. And just that whole frame that you have of working with the end users around the design, and creating systems that really bring, you know, digital and analog together, is really right on point for what we're trying to think about here today.
Sasha: And what was really interesting about that project, in addition to that whole process, was the core group, which was called the Popular [00:02:00] Communication Team, would take stories that were coming in and being posted to the platform, review them, edit them, find the most powerful ones, download them and then follow up with the creators to expand them into stories that would go into the print newspaper.
Ayushi: Oh cool.
Sasha: That would then be printed and circulated throughout Los Angeles at the day labor centers, at community events and on buses. And then the print newspaper, of course, had the number that you could call to post news stories or comment on the stories in the print publication.
So it was really a cross-platform project that was deeply linked with a community-based organizing process that had deep roots in this particular neighborhood in Los Angeles. It's not just a software project.
Ceasar: This is really, really important because we talk a lot about the importance of working and designing for both the analog and the digital world - what we call "cross-platform."
I'm curious just even from doing that and from where you are now, do you see a lot of people really working that way? Trying to bridge [00:03:00] purposely those two worlds together?
Sasha: Not as much as we really need. I think that a lot of the energy and excitement around design work, and certainly software and technology development, remains very lab-based. Remains very linked to an imaginary or discourse of the sort of brilliant creators who are going to come up with a wonderful idea and implement it, then get venture capital backing and roll it out upon the world. Which is not to say that processes can't work that way. Sometimes they do; that is a widespread type of design practice.
But I think that there is an interesting shift happening over time where even in the private sector and even in the Silicon Valley spaces, you do have an increasing turn to people thinking about human-centered design, lean design and development processes. The idea that you really need to get products in front of the intended end users as early and as frequently in the process as possible to validate the assumptions that you have about what people are going to want to do.
That's good. And I see that as a positive turn [00:04:00] - so, away from waterfall to agile. That's important.
The challenge is that that process is still often used in an extractive design structure. So, communities may be included earlier on and more frequently in design processes, but they still become spaces to gather raw material for potential product ideas that then will be sold back to them, rather than thinking about the full spectrum of what would it mean to not only get ideas from end users, but actually figure out: how are they going to materially and symbolically benefit from the products that we develop together?
So what would it mean to share credit in terms of who's an innovator or a creator with quote, unquote "end users" or with citizens or with people - with human beings. Or what would it mean to share the material benefit? So is there going to be a profit-sharing arrangement with this community or community-based organization that you've included in a design process to come up with a new [00:05:00] product idea.
Myself and a growing network of people who talked about this in terms of Design Justice are interested in pushing that idea forward. And growing a community of people who would say, it's not only that we want human-centered design and lean development processes because it will produce better, more usable products - although that's true.
It's also that we believe that we have an ethical commitment to share the discursive and material benefits and ownership over products and projects and processes that come from design more broadly.
Ayushi: Wow. I really like the idea of graying the line between the creator and the user. Or the designer and the designee, if that's - I don't think that's a real word. But (crosstalk and laughter) I really like that, because something we've been talking about in a previous episode was the relationship between public servants and residents of that community. And really thinking about the way in which sometimes, or often, the people who [00:06:00] work in those municipal government agencies are members of the community, and so are both residents and those serving.
And to create this dichotomy really simplifies the degree to which a lot of social issues permeate communities, or how they impact communities and how they should be treated accordingly. And I think the way that you're getting to that from the design perspective is just so powerful.
I'm from Silicon Valley. Um, a San Jose native from before it was called Silicon Valley. And it's frustrating to watch a lot of the ways in which designers or engineers or coders now see themselves - kind of like you said - as the master of the rollout, right? As opposed to a co-creator. Even though human-centered design is becoming a thing, and design thinking is becoming a thing, and systems thinking, there's no ethical component to that necessarily.
Sasha: Yeah, absolutely and to some degree, even in - and then this is an interesting conversation - so even in values driven design or in the values in design movement. On the positive side, you have more people starting to explicitly talk about [00:07:00] what are the values that we're encoding in the affordances of the objects that we're creating? Or in the interface design? And a little bit people talking about who's going to benefit from the designed object or system.
At the same time, many people who use that framework stop at saying: what's important here is that you make the values explicit. In other words, it's a descriptive framework more than a normative framework. It doesn't say these are the values that we should hold as designers.
And so one of the differences between Design Justice as a framework and the values in design conversation is that Design Justice explicitly says: well, not only do we want to pay attention to the values in objects and systems that we're designing, but here's the set of values that we believe in. And so people using the Design Justice framework are talking explicitly about the structures of oppression and marginalization that shape people's life chances. [00:08:00] And access to resources, power, visibility, livelihood, health outcomes and so on.
So in other words, we're talking explicitly about what Patricia Hill Collins who wrote Black Feminist Thought names as the Matrix of Domination, which is the intersecting lines of white supremacy, patriarchy, capitalism and settler colonialism - and to which we could also add ableism, discrimination based on citizenship status. And so on and so forth.
The idea is we know a lot from sociology and from a number of other fields. There are a number of large structures that intersect, that shape people's lives. And we've talked about that a lot in terms of how that shapes people's access to education, how it shapes the way different people might pass through or not pass through the criminal justice system or the prison system, and so on and so forth. But I think what's happening now is a lot of people are beginning to talk about, well, what does this mean in terms of [00:09:00] design? As process and also as outcome.
So, how does the Matrix of Domination structure both design processes and products? And how could we rethink processes and products so that they would actively challenge rather than reproduce - usually unintentionally - white supremacy, hetero-patriarchy, capitalism, settler colonialism and so on and so forth.
Ceasar: Yeah, I really do too. And I was just gonna say, about your last comment that sometimes people create this unintentionally - I think there's a lot of intentionality around it now. Unfortunately, you know, people really are trying to create these systems of separation in our society. I mean, there's purposeful movements around it now. And I think as we get clear about what the other side of that story is, these other ways of looking at things, that rears its head more, because I guess we've created the context for the dialogue. Is what we've done, right?
Sasha: Absolutely. That's so [00:10:00] interesting. So yeah, I always try and nod to the way that like most of the design processes that reproduce these structures - it often is unintentional. But I think that you're absolutely right to pause that for a moment and say well, we can talk about ways that people are intentionally engaging in design to reproduce inequality. We could talk about, I don't know, let's talk about the construction of the border wall, right? (group laughter)
So here's a situation where the Trump Administration and the Department of Homeland Security put out a bid with requirements for firms. They had a design showcase where - I forget the exact number, but 8 to 10, or maybe it was a little more - firms created demo design segments of the border wall.
And each one actually built a segment and then they had a review - they had a crit, right? (group laughter)
Sasha: They had a conversation about -
Ayushi: Urban Design skills coming back.
Sasha: - about which of these is the best. And it's the mix of the price point with the features of the object that's been constructed. And so here's the place where you could even say, let's do [00:11:00] this border wall construction from a human-centered design approach, right? And so what would that look like?
You would say, well, let's map out the ecosystem of stakeholders. We have, like, border patrol agents who are tasked with doing a certain thing. We have migrants who are trying to cross. We have a whole set of stakeholders - you could imagine running human-centered design workshops to create a border wall that would be human-centered, and that would probably produce a bunch of features that would make it more usable by the border patrol agents, who have a difficult, grueling job to be out there tasked with implementing their work.
Again, so Design Justice as a framework is invested in saying: how can we build an approach to design that wouldn't let you do that? That would say: if you're using this approach and following this set of steps, it will force you to step out and question, what is the larger structure that this thing that you're building is going to be reinforcing?
Now, in this particular case, that would be to say - well, you know, maybe you would say, given that there's going to be a border wall [00:12:00] built, we want to build the wall with long-distance sensors that can see people's body heat and can tell when someone's about to die, and would then send an alert and would thereby somehow save lives, right?
But that would be narrowing your scope. Instead you would say, well, what is the function of the border wall in the first place? It's political theater that pushes people's crossing points further out from cities and more easily accessible locations into more dangerous crossing locations, and increases death rates as migrants come seeking both asylum and economic opportunity.
And so you would say, your design solution here is to not build a wall. Not to build a more human-centered wall. That's an extreme example, right? But that's the framework of saying: to what degree is the scope limitation not allowing the design team to understand how the thing they're working on is reproducing systems whose values we shouldn't be reinforcing and extending?
That's something that I'm [00:13:00] very interested in developing. And so is this whole sort of Design Justice Network.
Ayushi: That's gonna stick with me. The human-centered border wall. That's amazing as - as a catch phrase alone. (crosstalk and giggles)
Ceasar: That's a catch phrase alone.
Sasha: The place where you see this popping up is: we could talk about prisons.
Sasha: And detention facilities.
So there are people talking about applying human-centered design approaches to prisons.
Ayushi: That's right.
Sasha: Better prisons. There's a whole conversation about building trans pods for people in immigration detention facilities. Where, so you know, trans femme people, especially, have higher rates of assault across the prison system. And by prison system I'm including the detention facilities here, many of which are now increasingly built and operated by the same for-profit private companies that have built, uh, prisons.
And so there's this conversation that they're having about: how do we reduce the rate of assault against trans people? Let's build what basically amounts to solitary confinement pods within the detention facilities.
Now, their [00:14:00] argument is this will reduce the rates of harm for trans people. To which - so a human-centered designer might say, okay, I'll take that contract, we'll build the friendliest pod that we can given this difficult condition, and that will reduce harm rates, you know, among this population.
A Design Justice approach would say: why are we holding people in cells in the first place? Why are there detention facilities at all? Why are we locking trans people up, or locking anybody up? And our goal should be not to invest more money, time and resources in building more human-centered prisons, detention facilities, and trans-specific quote-unquote "pink prisons."
It should be to reduce as much as possible the number of people that are locked up in these cages.
Ceasar: Yeah, I wish we could have that same frame of conversation and critique - really taking this Design Justice approach and applying it to the way people are actually designing online civic engagement tools. [00:15:00] Because I think a lot of those things just - they don't question at all what they're doing. And they kind of lift up certain voices without paying attention to others, and they convey a lot of mobilized power and stuff without paying attention to, like, well, what are we reinforcing here?
You know, I think that's part of this challenge that we're in right now in society: if we're really trying to build a society that works for everyone, where people actually have voice and authority in their own lived experience.
We have to be bringing this kind of Design Justice perspective to what we actually invest our time in designing. And I think that's part of what this comes down to. It's like: what is worth doing in the world, given the situation, and what do we have to undo? Where do we need to put our talents and our energy and our time, and work with people on? And what do we not do? Because right now it's a free-for-all - like, if you can make it happen, you might as well do it.
That's kind of where I feel like we are in the world right now.
Sasha: I mean, one thing we need to not do is continually reproduce these [00:16:00] processes where we have even really well-meaning and well-intentioned designers who can just, like, make something for a particular population without having them at the table. Right?
And so I'm very inspired by the slogan that was popularized by the Disability Justice movement of, "Nothing About Us Without Us." Which I think, if there's one takeaway from this interview or one takeaway from everything that the Design Justice Network is trying to do, I think that that probably encapsulates it best.
But in terms of the question about online spaces, right? So as we're sitting here talking, you know, Mark Zuckerberg is (group laughter) either right now or is about to be testifying to Congress about how did Facebook get to the space that it's at? How did it become a space where Cambridge Analytica could scrape millions of people's data and use it to run targeted ads to promote the Trump candidacy?
How did it become a space that was and continues to be rife with white supremacist and white nationalist and explicitly fascist and [00:17:00] KKK people on there? How did it become such a breeding ground for misogyny and transmisogyny and misogynoir and all of the - yeah, exactly what we were just talking about. It's a discursive space for the reproduction of the Matrix of Domination.
Ceasar: That's exactly it.
Ayushi: Ugh. Just thinking about that makes my hair - I don't know. I have so many feelings. (laughs)
Ceasar: Well, what are you feeling?
Ayushi: Ah - just I don't ... "unsafe" doesn't even get to the start of the kind of feeling when I'm thinking about him testifying or thinking about how material about me is being used without me. I think that's so powerful.
"Nothing About Us Without Us." And somehow in the way that we're educated - not by popular education - we completely lose that. I mean, I'm still within the education system and this might be my last degree, so it might be my last touch with it. And it's just so interesting to think about, starting from all of my California public school education, K through 12, and then college and now grad school.
I've been so fortunate to have an education. And yet I [00:18:00] think a lot about the Koch brothers funding my textbooks in fourth grade. Or things I didn't know when I was a fourth grader. And just - it scares me to think about my nieces going through that now and what they're being taught and what social media is teaching them.
And - ugh! If you can, they will. (laughs uncomfortably) it's just a lot of feelings. (whispered) A lot of feelings.
Ceasar: Well, I mean, yeah. I concur with you on a lot of those stories. I have a lot of them, too. And I went to school a lot longer than you and the same things are still happening.
What it calls up for me is this issue of how we keep going in spite of knowing all of that, right? Because we do.
Ceasar: Right? We keep going, we keep fighting, we keep trying to create a different kind of world, a different kind of reality out there.
And so for me, we do that not just because we believe we're right, but because we move out of love.
Ceasar: Right? If you move out of love, you know, you have a place to stand and say, those other things over there aren't right, because they don't come out of love. They may have different approaches, you know - people who honor love and really use love as a guiding force can have different ways of doing things.
[00:19:00] But people who come out of stuff from hatred and from not having a kind of way of thinking about others in terms of love, create really bad things. And they could be named. And you can see them and they should be called on.
Sasha: I think the Facebook conversation is so interesting to me right now and maybe we could all get into it a little bit more.
So, part of Facebook's response - and also Twitter's response, and a lot of the platforms' creators and maintainers - so first of all, most of the ownership and decision-making power at most of these companies is held by white, cisgender, straight men, for the most part.
I mean, there's a mix of some other people in leadership at these firms, but that profile sort of dominates. And one of the things that I've really noticed is how that often ends up limiting their understanding of what's happening on their platforms. And they create this discourse of the filter bubble conversation. The echo chamber conversation.
Sasha: I think that we really need to think about that more carefully. Because to me a lot of times the technical solutions that are [00:20:00] proposed to deal with, say, hate speech on a platform like Facebook come from an idea that the problem is you have radicalizing groups of people who are just talking to each other. And so the framing of the problem is this echo chamber one.
Sasha: And so that's important. But one of the challenges with that framing is that it creates what I believe is a false equivalency between all subgroups of people on a platform.
Sasha: And it means the types of solutions that get proposed technically are about disrupting the bubbles of all the users on the platform. But there's a counterpoint from political theory - for example, from Nancy Fraser's critique of the idea that there is one universal public sphere.
So, Nancy Fraser - the feminist theorist who developed this idea of subaltern counter-publics - said, there's never been just one big public sphere. There's always been a lot of different discursive spaces. So, even at the time that [00:21:00] the public sphere that Habermas was talking about - in coffee shops - was a space where white, male, middle-class, propertied people were coming into their own as political actors through public conversations, where they put aside their emotions and articulated their political views and the best views would win.
So Fraser said, well, while that was happening there was also the space of the salon, where middle-class white women were organizing these in-home spaces to discuss early women's rights and feminist ideas and feminist theory.
And simultaneously there were working-class public spheres, and so on and so forth. So we could talk about subaltern counter-publics as being the spaces where people who don't occupy the dominant positions in society gather to form ideas, to build community, to come up with strategies for: how can we build power and shift this broader unequal system?
So, fast forward to now. Zuck is in front of Congress talking about what solutions he'll have to [00:22:00] this problem of echo chambers and radicalization. The problem is, whatever solution you come up with has to take into account that all of the bubbles are not equal. That some of the bubbles are occupied by people who have more privilege and positions of structural power in society.
Those bubbles need to be burst. But the bubbles of, say, trans women - our bubbles don't need to be burst. Because we already know what people who don't think like us think of us, right? So if I go on Twitter and I post something about trans visibility, I will unfortunately often get, you know, a bunch of people in my mentions telling me that I don't exist. Telling me that trans people aren't a real thing. Telling me - on a bad day - that they'll commit sexual violence upon me, rape me, murder me, etc.
So I have a pretty good idea of what the people who don't think like me think. Whereas the people who occupy the positions of power never have to learn, right, what the subaltern [00:23:00] thinks. So, classically this is the idea from the black radical tradition of double consciousness, right? So we know this from Frederick Douglass and from black radical thinkers throughout the ages.
This is the thing, right? So, people in subordinate positions have to understand what the oppressor thinks and believes.
So we don't need our bubbles burst, right? Or maybe there are certain bubbles we might be in that we do want to be exposed. Because of course it's complicated and it's intersectional, and all of us occupy a particular subject position. But a universalizing, technical solution to the quote-unquote "problem" of the filter bubble is always going to systemically disadvantage people who are already in positions of structural marginalization. Because it's going to treat their bubbles the same as the bubbles of the powerful.
And just even saying that and having that conversation - it seems so obvious when you talk about it, but none of these conversations - and I'm sure none of the testimony that'll happen in [00:24:00] front of Congress today - is going to take this into account.
So the solutions that get proposed are like, we'll tweak the algorithm of the news feed so that people will see stuff by people who don't think like them. But they're never gonna say, well, what we need to do is make sure that we expose people with racist ideas to the ideas of people who can counter those racist ideas. Or expose cisgender people's feeds to, like, opinions and visibility and ideas from trans people so they can get a better sense of what that's like. Or expose white people to more content from POC, because they may not have been following people of color.
So to me, a Design Justice approach would take those things into account. And that would literally change the algorithm for busting the bubble.
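The asymmetric bubble-bursting Sasha describes can be sketched in code. This is purely a toy illustration, not any real platform's ranking system: the `rerank` function, the scoring scheme, and the bubble labels are all hypothetical, invented here to show the idea of treating dominant and marginalized bubbles differently.

```python
# Toy sketch (hypothetical scoring): a feed re-ranker that treats bubbles
# asymmetrically, boosting cross-bubble exposure for users in structurally
# dominant groups while leaving marginalized users' in-group spaces intact.

def rerank(feed, user_bubble, dominant_bubbles, boost=2.0):
    """Return the feed sorted by adjusted score. Each item is a dict with
    'score' (base relevance) and 'bubble' (the community it comes from)."""
    def adjusted(item):
        cross_bubble = item["bubble"] != user_bubble
        if user_bubble in dominant_bubbles and cross_bubble:
            return item["score"] * boost   # burst the dominant bubble
        return item["score"]               # leave other feeds untouched
    return sorted(feed, key=adjusted, reverse=True)

feed = [
    {"id": "a", "score": 0.9, "bubble": "in-group"},
    {"id": "b", "score": 0.6, "bubble": "counter-public"},
]

# Same feed, two users: the dominant-group user sees the cross-bubble item
# boosted to the top; the marginalized user's ranking is left unchanged.
print([item["id"] for item in rerank(feed, "in-group", {"in-group"})])
print([item["id"] for item in rerank(feed, "counter-public", {"in-group"})])
```

The design choice the sketch makes concrete is that "who gets exposed to whom" is itself a value judgment: the `dominant_bubbles` set encodes exactly the normative commitment that a neutrality-claiming platform refuses to make.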
Ceasar: That would actually be really great. You know, I can imagine now that your whole ad feed would actually depend on what bubble you're in.
Ceasar: And opening it up to others, so that when you're in the dominant role, you can't get away from the fact that there are these others. And you have to see it and you have to contend with it.
[00:25:00] Sasha: But these platform owners will never do that. And the reason why is because that would require them to articulate what values they hold.
Sasha: Whereas what they want to do is be as profitable as possible, which means that they want to maintain a position of platform neutrality.
Ceasar: Therein is the issue, right?
Ceasar: Can you really, actually -
Ceasar: In a space like the one we're talking about, if we talk about this issue of democracy and civic engagement - at some point in time, you can't be value-neutral.
Ayushi: Right. And it makes me think a lot about what we've formerly been thinking about regarding bias. And especially unconscious bias. Because, Ceasar, I think something you were saying earlier was about wanting to come to organizing out of love. And come to a lot of this work out of love.
I think a lot about the way that I hold myself accountable is questioning my own personal biases. And that's how I feel like I'm trying to show love to those around me, right, and be ever-inclusive.
But I'm thinking about when you start thinking about platform neutrality or other sorts of things like this, how would [00:26:00] you, Sasha, recommend someone else be more conscious of the bias they might have in dealing with these platforms? Either from the creator's perspective or the user's perspective.
Sasha: Oh, this is such an interesting question. I mean, the whole conversation about bias and equity - I think we also need to, like, expand the framework, both along the lines of looking at larger structures of inequality, and also we need to talk about history and time scale. And what's the time scale across which you're trying to produce equity?
It's also going to lead you to do a whole bunch of things in terms of the decision-making algorithm. Or probably the decision support system, [00:27:00] right? So presumably you're not building an algorithm that's just going to make the decisions completely; it's going to, like, surface the best candidates and recommend them to some committee of humans who are going to make those admissions decisions.
So, the time scale of the data set upon which you're trying to produce a non-biased outcome - or upon which you're trying to produce equitable admissions decisions - is going to dramatically impact the outcome in any given year.
So for example: one way to approach that socio-technical problem is to say, what I want to do is make sure that my admissions next year to MIT's undergraduate class mirror as closely as possible the demographics of the United States. Or you could say the world - actually, that's another decision point, right? What geographic scale are you trying to seek equity across?
So both time scale and geographic scale are going to deeply influence what targets you're trying to reach with the algorithm. So for now, let's just say we're talking US.
Sasha: So you're saying, among [00:28:00] US applicants to MIT, we want them to mirror according to a number of - actually, let's keep it really easy.
So, we'll use the federally protected non-discrimination categories, and we want to then mirror that. So for example, since gender is a federally protected category, we would say we want it to mirror the US population: roughly 51 percent of the admitted students should be women.
So, one structure is to say well, my algorithm is not biased if it recommends 51% women because we know that there's going to be a pool of excellent candidates that's greater than the number of available spots. So within that we're going to try and have this target.
Another way to think about bias is to say, well, among candidates with equal test scores and a bunch of other factors, we want to make sure that the males are not recommended more frequently than the females, right? Or the men more than the women.
Another way to think about it is to say, well, our data set that we're comparing it to - [00:29:00] let's say our current class of undergraduates across all four years - is heavily skewed towards men. And so what we want to do is correct that as quickly as possible. So we're gonna have to actually admit 70 percent women this year so that the entire four years of classes is going to reach that 51% parity point.
And another way to think about it is to say: well, if our data set is the entire data set of the institutional history, you would say, well, what proportion of undergraduates throughout MIT's lifetime were men versus women? Well, in that case, you might end up with a hundred percent (group laughter), you know, female admittance rate, because you're trying to reach parity as quickly as possible across the data set of the institutional lifetime.
And you could mirror that across all the other federally-protected categories. In that case you'd probably end up with a class that was, like, nothing but black women.
Now I say that not because I'm advocating that we should only admit black women in next year's class. Although I don't think that would necessarily be a [00:30:00] terrible outcome. I say that to point out that the designers of any algorithmic decision support system have to make decisions about geographic and historical scale in terms of what data you're even including as you try and train an algorithm to be quote-unquote, "unbiased."
And that implies that you're making some decisions that are values-based and historically informed, rather than the hegemonic version of what an unbiased algorithm is, which just says: no matter what demographic categories someone belongs to, the algorithm will assign them to A or B without taking that into account.
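The time-scale decision Sasha walks through can be made concrete with a small sketch. All numbers below are hypothetical (the class size, the historical gender shares), and `admit_share_for_parity` is an invented helper, used only to show how the same parity goal yields very different this-year targets depending on which data set you choose to balance over.

```python
# Toy sketch of how the choice of time scale changes an "unbiased"
# admissions target. All numbers are hypothetical, purely illustrative.

def admit_share_for_parity(past_women, past_total, incoming_class_size,
                           target_share=0.51):
    """Share of the incoming class that must be women so that the combined
    data set (past + incoming) reaches target_share, capped to [0, 1]."""
    combined_total = past_total + incoming_class_size
    women_needed = target_share * combined_total - past_women
    return min(max(women_needed / incoming_class_size, 0.0), 1.0)

CLASS_SIZE = 1100  # hypothetical incoming class

# Framing 1: ignore history, just mirror the general population this year.
mirror = 0.51

# Framing 2: correct the current four-year cohort
# (hypothetically 40% women out of 3,300 enrolled students).
cohort = admit_share_for_parity(past_women=0.40 * 3300, past_total=3300,
                                incoming_class_size=CLASS_SIZE)

# Framing 3: parity across the whole institutional history
# (hypothetically 30% women over 130,000 alumni).
lifetime = admit_share_for_parity(past_women=0.30 * 130_000,
                                  past_total=130_000,
                                  incoming_class_size=CLASS_SIZE)

print(f"mirror population: {mirror:.0%}")
print(f"fix 4-year cohort: {cohort:.0%}")
print(f"lifetime parity:   {lifetime:.0%}")
```

With these made-up numbers, the three framings give roughly 51%, 84%, and 100% (the cap kicks in), which mirrors Sasha's point: the "unbiased" target is not a property of the algorithm alone, but of the historical scope the designers decide to include.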
Ceasar: There's a design challenge.
Ayushi: That's a design challenge.
Ceasar: That's a design challenge that really is at the forefront of thinking about how we do things in this digital world as we go forward, right?
I mean, it is not something that people consider at all, right? It's just not at the forefront.
Sasha: Absolutely. So what do algorithmic reparations look like?
Sasha: No, really.
Ceasar: That's a good question.
Sasha: Or if you want to answer it in a [00:31:00] less contentious way - I mean, again, as undergrads are learning how to develop and train algorithms in their machine learning classes, I think that they need to have problem sets that ask them to - for example, you could say you're doing an admissions algorithm.
How do you build it and train it if your assumption is this version of a non-biased algorithm where it's just - it's blind to any demographic factor? How do you train it if your version is this version, which says you're trying to reach parity with the general population? How would you have to train it if your goal includes the data set across this larger time scale or geographic scale?
I mean, it would be a very powerful thing to require people who are learning how to do machine learning and how to develop and train algorithms to do that with a number of different fundamental assumptions or constraints in the process. Even having to think about that would be a big advance.
Ceasar: Sasha, I know I told you only 30 minutes. We could stay for another [00:32:00] four hours and talk with you. I know I can.
Ayushi: You're incredible. I'm so glad I met you.
Ceasar: Yeah, this has just been really, really exciting and fun and I really want to thank you. You know, one of the things we always end the show with, we ask people - because we're calling this show The Move. And it's about a move to change things in society.
We always say that people have moves that they're making, you know, out there in the world. So that's the question we ask people: what's your move?
Sasha: What's the move. Well, for me the move is I'm trying to finish the manuscript for my second book, which is about Design Justice. I have a lot of it written, and hopefully that'll be coming out in early 2019. And the other move - and I encourage everyone to make this move - is to go to the Allied Media Conference in Detroit.
This is gonna be the 20th anniversary of Allied Media Conference. It's in late June. And it's this incredible space where people who do different types of media and design and technology work with different social movements gather every year and learn from each other. And advance both practice and [00:33:00] theory of what it means to develop media work in allyship across many different intersections of movements and movement networks.
And all based on love, and all with a vision or an eye towards, well, one: building a planet that we can survive upon, and one that we can not only survive but also thrive within - I mean the full pluralism of human experience.
Ceasar: So nice.
Thank you so much for joining.
Ayushi: Thank you.
Sasha: Thank you for inviting me.
Ceasar: One thing I got out of that interview with Sasha was a really great conversation about why we have this notion of design for the margins instead of human-centered design.
Ayushi: We're in a time when our cities and environments are becoming increasingly complex and increasingly diverse, and are in need of catering to a lot more voices and experiences than has [00:34:00] ever been the case historically.
Ceasar: Yeah. And one of the things we have to pay attention to in doing that - I mean, this whole concept of design for the margins is: as our cities become more complex, people, you know, kind of inadvertently think that, well, if I just design for those folks in the middle - which will be complex in and of itself - then we've taken care of the issue. What we're saying here is that in real places, where you have to move on design for being inclusive, you actually design for those folks who are at the margins first. If you do that, you take care of the people in the middle.
Ceasar: And this is the principle, you know, that we think should guide all community engagement processes.
Ayushi: Thank you all for joining us today. We hope you enjoyed it and learned as much as we did. Ceasar and I will be back again next week with a new person and a new voice.
In the meantime, please catch us online on Facebook, on Twitter and on our website: [00:35:00] the move.mit.edu.