More Than the Sum of the Parts - Episode 7 of Unbuffered


In this episode of Unbuffered, Chris is joined by Neil Chilson, head of AI policy at the Abundance Institute and author of Getting Out of Control: Emergent Leadership in a Complex World, for a conversation about AI policy, technology, systems, and complexity.

Chris and Neil begin by discussing technology, law, and policy, and how people who work in tech policy often reach very different conclusions while still trying to grapple with the same underlying issues. From there, the conversation turns to “bottom-up complex systems,” emergent order, and the idea that systems are often “more than the sum of the parts.”

They also explore the role of markets, manufacturing, and distributed knowledge, including the idea that “nobody controls that process,” even as systems continue adjusting to changing conditions and choices. Along the way, they discuss Hayek, metis, ant colonies, “I, Pencil,” and why the things that are easiest to measure “might not be the right things to measure.”

The episode closes with a broader conversation about education policy, tax policy, AI, and what it means to build systems that can respond to complexity rather than pretend it does not exist.

This show is 54 minutes long and can be played on this page or via Apple Podcasts or the tool of your choice using this feed.

You can also check out the video version via YouTube.

Transcript below.

We want your feedback and suggestions for the show. Please e-mail us or leave a comment below.

Listen to other episodes (formerly Community Broadband Bits) or view all episodes in our index. See other podcasts from the Institute for Local Self-Reliance.

Thanks to Riverside for the music. The song is Caveman and is licensed under a Creative Commons Attribution (3.0) license.

Transcript

Christopher Mitchell (00:15)
Welcome to another episode of Unbuffered. I'm Christopher Mitchell at the Institute for Local Self-Reliance. I'm in St. Paul, Minnesota. And today I'm excited to be talking with Neil Chilson, who is head of AI policy at the Abundance Institute. And also the reason that I asked you to come on, Neil, is you published a great book in 2021 called Getting Out of Control: Emergent Leadership in a Complex World. So the first question is, and I didn't ask you beforehand, did I just butcher your name?

Neil Chilson (00:45)
No, you got it. Perfect. Nailed it.

Christopher Mitchell (00:47)
Excellent. Do you want to share anything about, like, your background, what you've been up to, before we dive into kind of the power of distributive thinking and solutions?

Neil Chilson (00:56)
Sure. So I am the head of AI policy at Abundance Institute. As you mentioned, AI policy has swallowed the tech policy world. And I've been in the tech policy world for about 20 years, right out of law school doing telecom law. Initially spent some time at the Federal Trade Commission, including as the Chief Technologist, and have then moved into the think tank world since then. And in all of these jobs, I sort of bridge the

technology and law policy spaces. I have a background in computer science, a master's degree, and a law degree as well. And so I try to blend those two realms to say interesting and useful things about tech policy to people who make decisions about how technology reaches us in the everyday world.

Christopher Mitchell (01:50)
And I always try to figure out how to address these sorts of things. I think you've often worked for Koch brothers, Koch-related enterprises. I've read Charles Koch's book, Believe in People: Bottom-Up Solutions for a Top-Down World, and I loved it. And I feel like you and I, I think, are gonna agree on a bunch of stuff, disagree on a few other things, but then often you're at odds with me and my allies on a number of things. So that's what makes this fun, I hope.

Neil Chilson (01:57)
Mm-hmm.

Christopher Mitchell (02:18)
But just so people have a sense, I think even if you and I end up agreeing through most of this, we often disagree on other policies throughout the tech world and things like that.

Neil Chilson (02:27)
That's one of the great things about tech policy. I actually find it very refreshing. Honestly, the people who grok the issues that you and I, I think, are going to spend a lot of time talking about today, they're not that common, honestly. And even though people can reach different decisions, maybe different policy approaches, from the ideas of emergent order and bottom-up complex systems,

I think it really shows some real strong insight into understanding the world. And so I like people who are willing to grapple with complexity rather than ignore it and pretend it doesn't exist.

Christopher Mitchell (03:10)
Yeah, I agree. And I think it's a sign for folks that are interested in trying to figure out where they can really dig in. There's a lot of work to be done in this area. This book that you wrote, Getting Out of Control, which is, I think, a great title, it's not a policy book. You're not, like, you know, sort of setting it up that way, and you ended somewhat humorously with, I think, realizing in a talk that you were perhaps not

Neil Chilson (03:18)
Absolutely.

Christopher Mitchell (03:35)
connecting with the audience in the way that we always hope we do when we're in front of a podium. But I think it's fair to say that perhaps there's too much policy going on because we don't recognize how things work. Like we're trying to impose too many of the wrong solutions in areas because we're not appreciating kind of like how systems really work. Is that kind of the theme, you think?

Neil Chilson (04:00)
I think that's generally the theme, it's one of the themes of the book. I might put it more descriptively: there's lots of ways for us to try to affect the world. And sometimes we try to do it through a design of a sort of comprehensive system from the top. And that can really miss a lot of nuance and the practical problems that people face. Even if it's simpler for the people who are

administering the system. And the other alternative is more bottom up. It's much more emergent. It's allowing norms and rules to emerge. And that looks really messy. And so I can totally appreciate that from the people who are trying to solve big problems, it lacks a certain appeal. But it does take into account often the nuances, the individual

needs of the people who are interacting with the system in a way that's more flexible and more durable over time. And so I think of it in terms of trade-offs. It's not that one system is always better than the other, but I think as humans, we do tend to gravitate towards the designed system because we are problem-solving individuals. And that's sort of what we need to do in our own lives. And so I would say my book is trying to

correct maybe for that, not to say that that's never the right approach, and not to say that planning is always wrong. In fact, planning's important, but plans are kind of useless. I think that's Eisenhower's quip.

Christopher Mitchell (05:39)
Yeah, planning.

I think plans are useless. Planning is essential. Something along those lines.

Neil Chilson (05:43)
planning is essential,

yeah. And so maybe my book is a corrective in the direction of: think about other approaches, think about other systems. More importantly, look around you and see what exists before you try to raze it and put a new structure on.

Christopher Mitchell (06:03)
I think that's right, and I think one of the issues is time, in that a lot of times there's some issue, it's kind of percolating for a while, and by the time it actually rises to that level of salience where a state government or a federal government solution is coming, it's been around so long people can't think about waiting five more years to solve it. They're trying to figure out how do we do this quickly, and I think a bottom-up solution like this is inherently slower and more iterative

in ways that don't suit the way we make rules right now in many cases.

Neil Chilson (06:34)
Yeah, there's obviously, there's very much a different cadence to it. The question is, it's actually interesting, because the top-down solutions that we can have can have lots of different forms. And some of those actually work worse with fast-moving systems and some of them work better with fast-moving systems. And so I think of things like common law, which is an emergent process, obviously, but a single case can be resolved

relatively quickly if the rules are clear and you just need to get at the facts. And so maybe for any one individual, you can get quick solutions through a system like that, but you don't get an overall solution. On the other hand, if you put a big structure in place, like some big telecom law, and then the technology moves on,

you don't gain that much, right? Like you may have a lot of uncertainty in the future because the tech doesn't match the rules anymore. And so these cadences of the systems are in part about how fast they can be changed, but also how rigid they are. Like how often do they need to be changed, right? Like something like tort law, the actual basic principles don't have to be changed that often, because they're flexible enough and they're applied, you know, to facts.

Whereas something that's very detailed and specific, it might be very clear at the time it's adopted, but it might need to be changed much more often because it's so detailed and specific to a specific technology.

Christopher Mitchell (08:10)
And an example, I think, perhaps, is the way the Federal Communications Commission has buckets of authority for different technologies. And even as those technologies shift, we have these massive arguments about which bucket are we in, because the buckets are set there and they are immovable, more or less.

Neil Chilson (08:25)
Yeah, it's a great

example. The convergence of communications technology has made, you know, even the various titles of the Communications Act seem sort of strange and silly and a little bit arbitrary. It's no surprise that lawyers and advocates on all sides, you know, try to take advantage of whichever bucket they think advances their position the most. Yeah, it might be time to update those buckets.

Christopher Mitchell (08:53)
So I wanna talk a little bit about what this actually is, in terms of where we draw the inspiration for thinking in an emergent manner. And as I was reviewing my notes, I was thinking that it would help to talk about this term, this European term, which I think I say European just because it has a letter I don't recognize. It's like a Latin alphabet, yet it's got this long e: metis. And so...

You write, and I think I may have clipped this directly: much relevant knowledge in an emergent system is latent, meaning it cannot be described to another party. And you use the example of riding a bike, which is, like, you know, something you learn to do, and yet if you tried to tell someone how to do it, it's kind of pointless. Like, you just gotta learn it by doing.

Neil Chilson (09:46)
Yeah. Yeah. I think it's actually a Greek term. And it's used a lot by James C. Scott, who is an inspiration for big chunks of my book and my thinking. He wrote a book called Seeing Like a State, which is about big grand schemes to do big things, usually driven by government, and how they've fallen apart or not worked well or what their side effects have been. And

Christopher Mitchell (09:49)
Okay, of course.

Neil Chilson (10:15)
Metis, I think, is really important, because when we are trying to, say, capture or we're trying to solve a problem, as James C. Scott puts it, people at the top, they see a problem that they are trying to solve. They don't see all of the problems that all the people below them are trying to solve, nor can they really collect all the information about what those problems are, in part because of this idea of

metis. Which is, you know, there's a general knowledge problem in solving problems, which is, if I'm solving problems for a bunch of different people, how do I get all the knowledge that they have about how they solve the problem or what they're experiencing? That's very difficult by itself, even just on the logistics of it. How do you go and poll all these people? We might think that information technology might help a little bit with that.

It may, it may not, it can make things complicated. It can make things even more complicated in some ways. But for sure, the types of knowledge where people don't even know how to describe it themselves, you cannot collect that way. And in fact, often that type of knowledge has to be revealed through actually running the simulation. I guess that's, you know, you actually have to watch the person do the job

Christopher Mitchell (11:14)
Nope. I don't think so.

Neil Chilson (11:39)
before you can see what choices they would make in different situations. And I use the example of riding a bike, but I think there's giant chunks of all of our jobs where it's actually hard. You couldn't really put it on a resume. All those parts of your job where you're like, I can't really put this on a resume or I couldn't really write it down and pass on a list of instructions to the person who follows me.

That's the type of thing that James is talking about. That's the type of thing that I'm talking about in this book. And it poses a real challenge for designing systems that try to solve a wide range of problems. I should say, anecdotally, maybe not anecdotally, but as an aside, it's also an area that I think has really come back

to the forefront of industrial organization and design because of artificial intelligence. Because these types of metis tasks are, they're quite challenging, honestly, to get an AI system to do, because you can't really describe what it is. And so, there's different types of tasks. So maybe some of them you can test the output, but

If you can't describe what you're trying to do, it can be very hard to get an AI to replace it. So when we're thinking about the types of tasks and jobs that can be automated, these types of metis tasks can actually be quite challenging, often, for AI to automate.

Christopher Mitchell (13:11)
So that's a piece that I hope the audience gets. And what's funny is, if we move to the next piece that I think is important, it's actually kind of not metis-related. Maybe you'll correct me, but that's the ant, and the idea that the emergent system is more than the sum of the parts, and you can't figure out how an ant colony works by studying a couple of ants. And that is, I think, really helpful. And this is something that I individually came to,

to a greater extent, from reading Hayek. And I want to come back to him a little bit, but I think it's this idea of, like, we keep using the word systems like it's a simple thing, but people should appreciate that the term actually means it's something that is so complex, because of both metis but then also other qualities as well.

Neil Chilson (14:02)
Yeah, so metis is sort of the ingrained knowledge of an individual, but the knowledge of the system can actually be quite a lot bigger and even harder to identify. And it's about the interactions between systems, or between individuals, or individual components of a system. And so, the ant is a great example. The ant itself is not a very complicated creature

compared to lots of things. It certainly isn't, like, cognitively complicated, but the ant colony can exhibit extremely complex behaviors because of the simple rules that each ant is following. It's the interactions across all the ants that create the complicated results. And the kind of classic example of this in

Christopher Mitchell (14:34)
Right.

Neil Chilson (14:59)
human society is markets or manufacturing. So, you know, there's the "I, Pencil" essay that talks about how no one person actually knows, can write down, all the things that it takes to make a single pencil, because there are so many millions of humans who contribute some little piece to the graphite, to the paint, to the wood, to the shipping.

And so if you tried to gather all that information, it would be hard to do. The emergent process that happens in markets, that is way more complicated than any one individual, is the balances that happen when one of those resources becomes more scarce or more expensive relative to something else. Nobody really knows exactly why the prices are changing, often, in almost all the cases,

but they can react to it, and their reaction means that across the whole economy, you get this ripple effect of price changes, so that people and their revealed preferences about what they would choose, how much do they want pencils versus cars, might determine where aluminum goes. Does it go to the pencil industry to make the little thing that holds the eraser in, or does it go to the cars? Or

does it change relative to that? And so that's the sort of human society example of nobody controls that process, but it is very sophisticated and it solves a really complicated set of problems for a wide range of people, in a sort of process that's ongoing. So when I say solve, it solves it for this particular choice, but tomorrow there will be another set of choices that everybody's faced with.

Christopher Mitchell (16:29)
Mm-hmm.

Neil Chilson (16:50)
And again, the system will adjust.

Christopher Mitchell (16:53)
Right. This for me is one of the things that really leads to my extremely powerful negative emotional response when someone declares that they are a socialist or a capitalist or anything in between. Because my first response, and it's very much from reading Hayek, when someone says they're a socialist, I'm like, how much wheat should we plant next year? Which is, like, to me the most ridiculous question, because no human knows that. And in a hundred years, the most powerful computers

would not be able to come back and tell us today how much wheat we should plant, because no centralized entity can know that. And now obviously there are people that call themselves socialists that would never dream of dictating the price of wheat or how much to plant. But for me, it just gets at that question and appreciating the power of the system.

Neil Chilson (17:38)
Yeah, and on the flip side, there's a stereotype, sometimes leaned into by capitalists, that all actors in a market are rationalists, right? Most economists, if you push them on that, they would say, like, that's actually not what we think. We think in the abstract you can think of them that way, but yeah, it's like a good model. But honestly, it-

Christopher Mitchell (18:01)
It's convenient.

Neil Chilson (18:06)
it erases a lot of the mystery and the complexity of markets if you assume that. And so I think it's actually a model that maybe it's mathematically convenient, and Hayek has a lot to say about mathematically convenient things that aren't true. It's mathematically convenient, maybe it helps you prove some certain points, but I think it really does hide the mystery and the complexity and, I should say, the sort of organic, bigger-than-human

nature of some of these systems that we interact with. I think there's a sense in which the very rationality that people claim about markets oversells what happens in them and also undersells how awesome they are. And so I think

getting away from that sort of rational-actor assumption is very useful when I talk to people who understand sort of intuitively why an ecosystem in the natural world is complicated and why interventions into it would be fraught and unpredictable, but maybe don't understand why interventions into markets might also be fraught and risky,

even if you must do them, you know, occasionally.

Christopher Mitchell (19:26)
Mm-hmm.

And I think it's

also, there's this feedback that is complicated, because in a healthy market or a healthy ecosystem, most of your interventions don't have much of an impact, because they're absorbed. And then at a certain point, if you've ripped out enough stuff, then you have a flood in Thailand and no one can get a hard drive. And you're like, wait a minute, what was the one thing we did? And it's like, well, no, it's an accumulation of a whole bunch of things that went on. And that's a piece that I just feel like people are just starting to

only begin to appreciate. But I want to focus on a question, which is, I don't know if I read it wrong, but it seemed like Scott, in talking about centralization, was opposing the centralization of weights and measures. And this is one of those areas where I feel like I really like having strong centralization of certain things that we can then build on, almost as though they were like the biological

and physical facts that ants rely on and how they act.

Neil Chilson (20:32)
Yeah, so Scott is using weights and measures, I think he looks at a lot of European history, right? And so I think that the example that he's talking about is, you know, if you were just trading in a small town, the weights and measures that your local baker had for flour, like everybody kind of knew what that standard was, because the guy was always putting the same weight on the other side of the scale. It didn't really matter

what its absolute weight was. What mattered was, like, you knew, I need 1.2 of those for my daily bread needs or whatever. That works fine as long as you're local. It works terribly if you're trying to export grain, for example, because somebody on the other side of France needs to know exactly how much they're getting, and

they might have a different size rock that they're putting on the other side of the scale. So I think Scott's point is that local knowledge works fine for local problems. In fact, it can work better. I love his road example a little bit better. He has this example where he talks about, you know, let's just say here in Washington, DC, there's a road that goes from Washington, DC to, say, I don't know, Richmond, Virginia.

In traditional times, the road that went from DC to Richmond was just called the Richmond Road. And then, you know, the road in Richmond that went to DC was called the DC Road. And that's actually great. It's actually way clearer than, say, like, you know, 395 or 95, because you know which way to go. It's the Richmond Road.

Christopher Mitchell (22:15)
If

you know the rules, you know that it's north and south.

Neil Chilson (22:18)
Yeah, but the

problem is, if you're trying to draw a map, what do you call that road? Because that map might be sold in DC and it might be sold in Richmond and it might be sold lots of other places. And so his example is, if you wanna get a synoptic, what he calls a synoptic view, if you wanna understand the system as a whole, you have to simplify. And I think you're right that Scott generally has a sort of knee-jerk desire to not simplify.

But I think mostly what he's doing is sort of correcting. He's warning people that like, your simplification is useful for your tasks and it may not be useful for other people's tasks. And so you need to understand that you are giving up something and you may be taking something from other people when you impose this simplified version or this consistent version across a range of different communities.

Christopher Mitchell (23:18)
I think that's right. And I'm at the Institute for Local Self-Reliance, where we are always trying to figure out, how do we actually encourage this stuff? What is the appropriate scale for making decisions? Our rules are that generally decisions should be as simple as possible, made as close to the people as possible. For me, it seems like, when I look at things where, for instance, if I was trying to figure out how are we going to have really good markets, I feel like

prices should always be published and publicly available. And it feels like there's a bunch of folks that are active in trying to oppose that. I'm always like, what is the rationale for this? And I feel like there hasn't been one that I've heard. Generally, it's sort of like, well, it's inconvenient for certain companies that have a lot of political power to share their prices, even though technically we all kind of know what they are.

Neil Chilson (24:06)
Yeah, I think people need to be able to know what trade-offs they're making. Transparency is not costless. I mean, that is true, right? It's not like transparency has zero cost. There is a cost. There's a cost both to the producer or the seller, and there's also a cost to the buyer, frankly, who has to sort of understand what the components are.

Christopher Mitchell (24:19)
Mm.

Neil Chilson (24:35)
And for some of these complicated, for example, broadband products, sometimes you just throw up your hands. You're like, I actually am not sure which of these is a better product. Mostly I'm gonna figure it out by using it. And then I'll try to figure out if the price is appropriate. But all of this is made much messier by the fact that there are a bunch of, you know, taxes and rules, and the companies fight about whether or not they can break out the taxes as a separate line item, so people

see that, or if they have to have an all-inclusive price. I think you're right that some of this is just about compliance costs and fights over that. But we know from my work at the FTC that transparency is not a panacea either. There's some good research on nutrition labels that says that they basically don't add any information for consumers.

Christopher Mitchell (25:28)
Yeah, I can't believe it. I mean, I've tried to read them and it's sort of like...

Neil Chilson (25:33)
What they do

add, what they can add information to, is advocacy groups who are spending time reviewing them. And so I am with you, you know, more information is usually better. But again, the types of information that we're getting from price, it might be useful, but it might not be the only information we would want. And it might not even be the information that's the primary thing that somebody wants

when they're shopping for something. Sometimes they might want something that's really intangible that they can really only test. I still find it very difficult to buy clothes online, because I feel like, am I gonna like this thing or not? And I don't really wanna deal with that, and price is not my primary measure on that. It's something about the clothes themselves. And so I'd rather go in person still, even though it's more convenient, maybe cheaper, to go online. And so I think, yeah.

Christopher Mitchell (26:31)
Yeah, my grandparents

were, they lived during the Depression, and the idea of getting clothes that I would just be returning if I didn't like them, there's just something about the culture that I grew up in. It strikes me as wrong. I can't handle it. But let's talk about the labels for a second, because I think, taking the lessons of your book, one of the things that I feel like I would draw away is, like, all right, let's not mess with the label too much. Let's have a label that people can sort of come to understand and predict.

Neil Chilson (26:40)
Yeah. Yeah.

Christopher Mitchell (27:00)
And we're not gonna deal with the federal government deciding if corn syrup is a sugar, you know, 'cause this is where we get into the trouble, right? Is that some companies are like, well, I don't have sugar, I use corn syrup, and I use high fructose. And, like, people are like, I don't know what any of this stuff is. And, like, at a certain point, I feel like the government has to, can be the one that says everyone must follow these rules, but they should keep them simple.

And then others, like nonprofits, everyone, anyone from, like, your lefty, you know, health food store to RFK on the other side, you know, coming around a long ways, can tell you what that means. And that's okay, right? Is that, like, to some extent there is a role for civil society to play. And I think where you and I would definitely draw different lines, but I think we'd agree, is that, like, there is a line for civil society to step in, and government's not going to be the one to just give us everything that we need.

Neil Chilson (27:39)
Absolutely

Absolutely. I mean, there are all types of institutions that help solve problems that are not government, but they also don't look like companies. And I think civil society is one of those very useful institutions that can help, you know, there's lots of different institutions in civil society. They can help surface important issues and certain important information

for people who are trying to make good decisions. I really like Yuval Levin's work on some of the civil society institutions, the need for those institutions, and the value that they create outside of rulemaking, just basic problem solving for communities and for individuals. So totally agree.

Christopher Mitchell (28:45)
One of the things where our philosophy has broken down, or, I mean, I wouldn't say necessarily broken down, but it's an area that we have to be cognizant of, is that widespread racism, for instance, you say some terrible phenomena such as widespread racism are the result of emergent order. And that's right. We're a huge fan of cooperatives. Cooperatives are a way of magnifying local power, whether that's commercial power or individual, you know, like purchasing power for people in a buyer's co-op or

whatever. And sometimes people use that power in terrible ways. So I feel like people that are encouraging the kind of mindset that you and I have here kind of often skate over this and try to ignore that. Like, there are horrible things that can come out of it, and we need to figure out solutions to it. I am curious if we think about this. So

I think you got Rosa Parks wrong. I don't know if you've, when you publish something like that, I feel like there's a bunch of groups that exist just to come at you and tell you that over and over again. So nonetheless, like, Rosa Parks, you know, had gone through training, was an activist, and did this as part of a coherent strategy, you know, sort of. But at the same time, like, no one really was in charge of a boycott like that, right? Like, I can tell you that there are, like,

Neil Chilson (29:44)
Yes. Yeah. Yeah. Yeah. Yeah.

Christopher Mitchell (30:04)
a hundred people that try to start a boycott for every one that's like the Montgomery boycott that works, right? And there's an emergent nature of that where it does work, I think. Anyway, I'm just curious if you want to share any, like, thoughts you have about how we deal with those kinds of problem areas when we're trying to sort of have this humility. And I haven't said that word enough yet, like, embracing this does require humility, that you don't know everything and you cannot know everything.

Neil Chilson (30:27)
Well, I think you're right that the Rosa Parks story is complicated. It has so many layers, just like any complex system. And so I am 100% confident my book oversimplifies that story and maybe pulls the wrong object lesson from it. Maybe the right object lesson to pull from it, and maybe the one that applies to what you're talking about right now, is that individual action is not

optional, right? None of this, emergent order does not work if individuals do not act. There is a sort of nihilistic tendency that maybe you could fall into if you're just like, it's all a complicated system, I don't know what to do, it will work itself out. And that is sort of true on the margin, like for you as an individual, maybe, you know.

There are plenty of people who, you know, because of where they started in life, if they just sort of coast, they'll probably be fine, right? And there are plenty of people who will have to fight every day because of where they started and may never be fine. But coasting is not really an option if emergent order is to work. The anthill does not work if the ants are like, let somebody else do it, right? If everybody does that. And so,

The principle I maybe should have drawn from the Rosa Parks story is that her individual action mattered. Even if she did not know, you know, she planned, but she could not predict the effect. Without that individual action, things would have been different in this world. And so,

Christopher Mitchell (32:13)
Well, in fact, if I recall correctly,

I actually think this was the second time she'd made that refusal, and the first time did not have the impact. It may actually be. Yes.

Neil Chilson (32:20)
And there were several other people, there was a community of other people who had done the same sort of refusal. And some of this is, I talk a little bit about it in the book, but I've certainly talked about it more since: the sort of historicism idea. For complex phenomena, there is a tendency to want to tell a simple story. And so history can never capture all the details, right? It can never capture all the nuance. And so we look backwards and we pick a compelling story to tell. And that story is true in maybe the capital-T sense, but it might not be true in all the details. And that's...

Christopher Mitchell (33:03)
yeah, no. So

I'm a photographer, and I think it was the famous photo from the Edmund Pettus Bridge, but it may have been a different situation. The Atlantic wrote a story about this years ago. It's an iconic photo that was printed in basically every paper in the country, with a police officer and a dog. The dog looks like it's attacking a young man. The young man turns out not to have been protesting; the officer was restraining the dog, which was reacting to something else

and was not trying to harm this young man. But it looks terrifying and we know that that happened thousands of times. It just happens that the best photo of it was not that. And that is what happens in life from what I can tell.

Neil Chilson (33:38)
Yeah. Right. Right.

Yeah. And that doesn't mean we should, you know, intentionally do such things. I mean, my personal preference would be, you know, getting truthful information out there is important, in the details as well as in the capital T. But it does mean that we can look at past events and, you know, with humility say,

Christopher Mitchell (33:54)
intentionally lie or like engage in propaganda.

Neil Chilson (34:17)
We don't know exactly what happened there, but let me try to draw what lessons I can for how I make decisions going forward. And I think that's more than fine. That's admirable. That's something we should all struggle to do. But the humility is a really important part of this, of understanding, hey, the world is still more complicated than my simple narrative. Even if my simple narrative matches my principles and matches what I wanna believe about the world.

We should be careful, because the facts are all undoubtedly more complicated than you think.

Christopher Mitchell (34:55)
Now, the Internet is emergent, right? In general, I mean, it strikes me that the things I love about the Internet include that anyone can write at fairly low cost, anyone can read, you know, interact; there are no gatekeepers, really. I mean, I'm certainly concerned that we've made it more difficult than it needs to be for many people to access the Internet, but as far as historical norms go, it's an unprecedented tool.

Neil Chilson (35:20)
Right?

Christopher Mitchell (35:22)
And so I'm curious, you know, and actually one of the things I feel like is emerging from it is this sense of, I'd rather curse the darkness than light a lamp, or however that goes. There's that sense, which doesn't get at the point you just made, that the individual has to act. But I'm curious, when we talk about trying to preserve this, as we're thinking about rules for the Internet, and I mean mostly within the U.S., obviously if I could write the rules for China it would look a lot different, but like,

Within the United States, what are the threats to this amazing thing that we should look out for in terms of you feel like we're trying to exert too much control over it?

Neil Chilson (36:01)
Wow, that's a really good question. The Internet is amazing because it's emergent, in part because it has some simple rules at the core. Some of those rules are in code. It's actually hard to write something that's more explicitly rule-like than code. And obviously those standards, TCP/IP, et cetera, have changed over time. But it is the sort of layered

structure of it that gives it a lot of flexibility. And when I worry about, you know, things that are threats to that, I think about, there are so many different kinds, but you could characterize many of the threats as, you know, trying to tie different parts of that stack together and not allow other types of freedom. And so when I think about the biggest threats right now to the Internet,

a lot of them are driven by good motives, around safety, around anti-fraud, around concerns about the information environment that we have in the US and around the world. But they threaten that ability, that emergent ability of ours to make sense of the world through conflict and communication. And they risk, like all centralization does,

they risk having not just unrepresentative, that's not even the right word, but even incorrect central decision makers about what we should hear and learn. That's scary. I think a lot of people think that's scary because they might not agree with what the central person thinks. I would say that is scary even if you agree with what the central person thinks right now, because the world is a complicated place, and

Christopher Mitchell (37:52)
Mm-hmm.

Neil Chilson (38:04)
that person will not be able to reflect all of the possibilities of information that are coming in the future. So centralization stops the discovery process of humanity, I think, or it slows it or restricts it, it gatekeeps it in a way that makes it harder for us to be adaptive to the next wave of things that are coming. So undoubtedly the

decentralized nature of the Internet is a real mess, and there are people who get hurt and there are people who say terrible things. But the benefit of that is that there are lots of powerful, otherwise unrepresented voices who can get their message out, who would not be able to in a more centralized environment. So I would characterize all the threats as

Christopher Mitchell (38:53)
Mm-hmm.

Neil Chilson (38:59)
trending towards centralization. You can just list these off, like sort of child protection laws, things like repealing... Exactly, yeah, yeah, do it for the kids. Things like Section 230 reform; Section 230 itself, I think, has been largely a decentralizing force on the Internet. And then there's plenty of technical...

Christopher Mitchell (39:08)
Well, that's how things are usually claimed, right?

Neil Chilson (39:26)
You know, there's the possibility that there are types of technical choices that would also push towards centralization. And especially if there are policy thumbs on the scale towards those, I would be worried about them as well. And so I think about things that make it harder to do open source technology, types of policy that make it harder to produce or,

you know, share open source technology. I worry about those types of things as well, because open source is a very good back channel around sources of centralization and often a good relief valve. Even though, you know, I would say much of what we do on the Internet is going through, you know, a few

central bottlenecks in many cases, it's essential to have those outside pressures to keep those central bottlenecks honest. And, yeah.

Christopher Mitchell (40:28)
Well, I

think where I agree with you is on the danger of centralization. And I think we both agree on being concerned about government there. Part of the humility that I have is living in Minnesota for the past six months and never imagining that in the United States we'd have checkpoints, or federal agents knocking down doors saying, I have my own warrant, not a judicial warrant. And so I do feel like my idea of what

the government can do in terms of overreach has expanded in recent months. And I'm quite afraid of a pendulum in which someone from the left decides to be that exuberant, having seen how many safeguards failed in the current administration. So I don't want to rule anything out. That said, I still think the greatest threat that we face right now is more like Google and DoubleClick, you know, having merged, and Meta, and the way in which they can de-anonymize everything.

And there's this sense, my view of the Telecommunications Act, this gets tied together in a second, is that it's mostly focused on how we remove barriers to competition to allow for more competition among Internet access technologies. But at a certain point, I've come to the conclusion, there's no amount of government removal of barriers that can create the level of competition we'd like to see.

We actually have to have some level of perhaps centralizing government policy to encourage and actually, you know, spur the competition that's needed. And that's where I would expect you to disagree more. So I'm just curious how you react: I feel like we need proactive government steps to deal with the threat of private centralization.

Neil Chilson (42:16)
Yeah, I think some of it depends on the time frame that you're looking at. Antitrust in many ways is not a centralizing, it's not a regulatory approach, right? So antitrust law, at least the way we do it in the US, is not typically a regulatory structure. The Europeans have competition policy that looks a little bit different. They have laws like the DMA that are regulatory and are about competition.

But here, we're looking at outcomes and saying, how did this behavior affect those outcomes? And I think we can be concerned about practices and want to monitor them, but I still think we have to judge them by the outcomes, and the outcomes to consumers. And so, I...

You know, I see companies like Google and online tracking. And I think generally

people have different, very different preferences around the privacy aspects of that. But generally speaking, I think it's created a ton of value. And so the question there is, what is the harm we're trying to prevent? Why are we worried about this? If we can't articulate an existing harm that's happening to people, the fact that it is a large company that has

a lot of information is something we should pay attention to. But, I don't know, we need to know what exact harm we're trying to stop before we would know what kinds of solutions will work. The harm I'm most worried about in those situations ties back to government again. We have very weak precedent around the Fourth Amendment, and it means that corporate information that's gathered is

very accessible to the government in a way that I think we have to fix. Yeah, they can buy that data. They need a warrant to follow me, but they can just buy the information and learn a lot about what I've been doing. That's a real problem. And it is contributed to by the information that these companies are collecting. But also I think, you know,

Christopher Mitchell (44:29)
Right, government can't track us, but they can buy that someone else is tracking us.

Neil Chilson (44:51)
again, I use, and kind of overload in my work, James C. Scott's term legibility. Legibility, increased information about what we're doing, is extremely useful. And it can be extremely valuable. The way that we solve problems in the world is not by pretending or ignoring information, but by gathering more of it and using it well. Now, the using-it-well part, I think, is the part that people worry about, right?

But privacy has trade-offs on that side as well, and we should be cognizant of that. And that's why, honestly, I'm not that worried about online digital advertising. There are worst practices in privacy that I would be worried about, but yeah.

Christopher Mitchell (45:42)
Well, it's not so much the

advertising as it is the database that I worry about. You may actually know this better than me. I've said this multiple times because I vaguely remember reading it somewhere and I've not bothered to do the work to figure out if it's true or not. But my sense was that after East Germany was unified with West Germany and became Germany, for people who aren't familiar with the history, they looked at what the Stasi had been doing.

Neil Chilson (46:05)
Yeah.

Christopher Mitchell (46:08)
And they found this just incredible amount of information that was horrifying. And there was this sense that no one, public or private, no company, no government agency, no one should have a database that detailed about humans, that it's just not consistent with an open society. And that's the kind of rule that I would love. I feel like it's very centralizing, and it's frankly more arbitrary than I can imagine you would agree with. But I like that idea, that there's just a limit on how much knowledge of human beings a database can contain.

Neil Chilson (46:38)
You know, I run into this problem where I look at a database like that and I think it could be used for so much harm. But it could be, maybe not that particular version. I mean, I'm not sure how useful a database of everybody's Internet traffic could be in the aggregate. Like, obviously it's good for targeting ads. And I think I'm a person who will stand, I'm like,

Christopher Mitchell (47:03)
Evidence suggests the evidence

is mixed.

Neil Chilson (47:05)
I will, I will do the meme,

I will do the meme that stands up and says that I think targeted ads are great. But in other situations, it's obvious that there are benefits we're giving up. So if you look at most of the genetic research in the world, it's based around two sets of databases, one from South Korea and one from the UK, both of which obviously are not representative of the general world population, but they are

large data sets in part because those countries collected that data, right? And that type of database is very hard to assemble in the U.S. I understand why that's scary. That could be very concerning data. But on the other hand, having a representative database of DNA sequences in the U.S. would be incredible for solving all sorts of health problems. And so,

Those are real trade-offs that we're making. And it's not like policymakers, when they made the HIPAA rules, really made that trade-off consciously, right? So...

Christopher Mitchell (48:15)
Well, when the HIPAA rules

were made, they presumably couldn't have even imagined that database.

Neil Chilson (48:19)
Exactly. And so those rules are in place now, and they are there for good reason, but they are stopping some of the potential benefits, especially now that we have tools that could help us very quickly analyze some of that data in a way that, you know, could save lives, frankly. And how do we solve that problem? We're in a bit of a pickle on that front.

Christopher Mitchell (48:45)
Well, as we're definitely past the time of the show, I feel like this is really helpful. I would love to come back and talk about specific issues with you in depth, because I do feel like, for the person who's encountering this for the first time, they might be like, well, what are some more specific examples? Like, you know, what's a terrible idea? And we're out of time to talk about that. But what I would like people to leave with, and to encourage them to check out your book for, is this sense of

Neil Chilson (49:01)
Yeah

Christopher Mitchell (49:15)
this bottom-up approach, in that you have to take time and make rules that recognize what we don't know, trying to have that humility, and then to iterate. And that's not the way our government or rulemakings are set up. They're set up in this ancient approach of, let's get it right and then never change it absent another massive effort.

Neil Chilson (49:36)
Yeah, I mean, certainly legislation is set up that way. You know, Hayek talks about different types of law. Common law is more iterative; it is very case-by-case, fact-driven. Things like the FTC's Section 5, antitrust, those actually are more iterative because they're general principles that are applied case by case. So there is a whole range of even...

even within government solutions, there's a range from very detailed, schematic-style legislation to almost guidance and norms that are much more fluid, much more flexible, and have faster feedback loops. And they all have trade-offs. But yes, I love that takeaway: hey, there are more options than detailed rules for every situation.

There are lots of ways we can solve problems that are not just about writing laws in Washington. And in fact, many of those might help us solve not just problems today, but problems in the future, much better.

Christopher Mitchell (50:45)
Right, I'll say, David French actually just reminded me of this recently, but like, for the love of God, get involved with local politics, but don't get involved in local politics to bring national issues into them.

Neil Chilson (50:56)
Absolutely,

it's a great point.

Christopher Mitchell (50:59)
One of the last things I'll just note: I don't know if you ever heard of the book The Tyranny of Metrics by Jerry Z. Muller. I'm a fan. And I think for people who are trying to get a sense of some of this, the sense that some things are unmeasurable is important to understand when you're making these decisions and having that humility.

Neil Chilson (51:06)
I have heard of it, yes. Okay.

You know, I am

struggling to remember the name of the book, but there is also a recent book that checked so many boxes for me.

Neil Chilson (51:31)
called The Score: How to Stop Playing Someone Else's Game, and it's by a philosophy professor at the University of Utah. His name I will very much butcher; his last name is spelled N-G-U-Y-E-N. I think that's pronounced Nguyen, yeah. The podcast I listened to with him was amazing. I have not read the book yet, but I am very much looking forward to it.

Christopher Mitchell (51:47)
When

Neil Chilson (51:56)
But what he talked about is that in games, and games are the purest form of this, you get what you measure. And there are all sorts of lessons from game design about how you might design education policy or how you might design tax policy

that I think resonate with some of the themes that, as I understand it, The Tyranny of Metrics talks about. And it was just a fascinating conversation, because, back to Hayek's original point, the things that are easy to measure might not be the right things to measure. And we might need to get past our metrics and find new proxies for achieving the goals that we have.

Christopher Mitchell (52:46)
Wonderful. Thank you, Neil, for joining us. I hope people are checking out your book. I look forward to whatever you have next, if you have the courage to write another one.

Neil Chilson (52:56)
Someday, someday I will.

Jordan Pittman (52:58)
Thanks for listening to this episode of the Unbuffered Podcast. We have transcripts for this and other episodes available at ILSR.org/podcast. While you're there, check out our other podcasts from ILSR, including Building Local Power, Local Energy Rules, and the Composting for Community Podcasts. Email us at [email protected] with your ideas for the show. Follow us on Bluesky. Our handle is @communitynets.

You can catch the latest research from all of our initiatives by subscribing to our monthly newsletter at ILSR.org. While you're there, please take a moment to donate. Your support in any amount helps keep us going. Unbuffered is produced by Christopher Mitchell with editing provided by me, Jordan Pittman. Special thanks to Riverside for providing the song Caveman. Until next time, thanks for listening.