Change yourself, change Wikipedia.

"Hey, I can edit! Amazing!" "Omg. How dare they undo my edit!"

I'm an empiricist. I believe in data. But data without context does not result in understanding. After years of reading much of the "research" published about Wikipedia I've reached the disappointing conclusion that no one really knows how to study Wikipedia, at least not as a whole— people jump straight to trying to present exciting results. As a consequence they're usually wrong about their conclusions, often laughably so. This is terrible because there are clear issues, especially ones related to growing and maintaining the contributor base, and we don't understand how to fix them.

So, because I believe that empiricism is currently hopeless for these kinds of large issues, I'm going to do something uncharacteristic and offer some pure opinion, though carefully considered. While it may all be rubbish, I hope that it at least provides some food for thought.

I was puzzled by a fundamental discrepancy between my one-on-one interactions with Wikipedians and my interactions with Wikipedia. Almost universally, the Wikipedians I've encountered have been a pleasure to deal with— thoughtful, caring, insightful, brilliant people. And yet interacting with the project is sometimes frustrating beyond expression... and the unfortunate interactions can't be explained by the tiny minority of Wikipedians who are actually hard to work with. This difficulty in interacting with the project isn't something just a few people experience— it's something we hear loud complaints from the public about, and I believe we can empirically observe its effects in our low retention rates of long-time editors (though this isn't an empirical talk!).

There are many reasons why editors depart— sometimes their work is done: they came to work on some niche and they finished the work, or the niche was obsoleted by work done by other contributors. Sometimes they simply get lives... or lose them. In my years on the projects, I've watched not a few of my treasured friends become disenchanted and burn out— most fading away, a smaller number reaching a more spectacular conclusion. I've been pained to watch people who I worked with for years, who were greatly influential on my thinking, whose every action at one time seemed so thoughtfully considered, eventually end up at a point where their only interaction with the projects was to periodically show up and hurl vicious insults at people.

At some point I realized that my involvement was harming my emotional well-being— the activities I once enjoyed were making me upset instead. More of my time was being spent on overhead activities rather than on obvious direct contributions to the movement. Even though I still strongly supported the project's mission, it seemed like I was going to end up a hot ember like many of the people I had respected in the projects. In my judgment, the people who had become constantly unhappy in their interactions were hardly contributing anymore no matter how hard they worked— they still did good things, but they also created a lot of problems.

So I decided that I would immediately stop any interaction with the project which was making me in any way unhappy or angry. As a burnout I'd be no use to anyone. After a short while of applying this approach I found that my editing pattern had become:

* Edit exclusively anonymously.
* Make edits, and don't bother checking if they stick.
* On possibly controversial subjects, leave talk page notes, but don't pay attention to responses.
* Ignore all messages (though don't continue editing some subject if someone is yelling, either).
* Make any edit (or revert) only once.
* Don't bother reading the rules— most editing can't violate them anyway, and someone else will fix it if it does.

(I should point out: this works better if you're already pretty familiar with the project and culture!)

Sometimes I'll log in to participate in some project discussion I bump into where I feel my view would be discounted as an anonymous contributor, but those occasions are few and far between.

This works exceptionally well for me, and I've been able to be very productive this way. There have been some bouts where people were reverting me for no good reason— but it's been a while since I've noticed that (though I admit I'm not especially sensitive to it anymore!). So this works for me, and I can contribute a lot indefinitely without compromising my emotional well-being. But I didn't know why this worked— the pattern was just the result of taking away everything that caused trouble. It wasn't engineered, it was evolved. I've spent the last two years thinking about this, and I've come up with a few thoughts.

Humans are very flexible. We can interact with other people, with animals, with computers, and with more abstract things like hard mathematics or natural law, and in every case these interactions can be productive and rewarding. I don't say this to suggest that it isn't important for Wikipedia to be very easy to interact with— because it is. Rather, being 'easy' is neither necessary nor sufficient for people to survive prolonged contact.

If you imagine, for a moment, walking into a store and attempting to purchase some candy, only to have the clerk respond "Syntax error," it's easy to imagine the situation becoming very stressful very quickly. Likewise, if you were to attempt to order candy via a machine and it were to deflect your attempts with idle chatter, suggestions of alternatives, or by complaining that you're always ordering it around, you would also probably come away unsatisfied. In these cases the unhappiness arises purely out of failed expectations. Although we're perfectly capable of productive interactions with both people and mechanical devices, we tend not to do well when our expectations are violated. I believe correct expectations are necessary and sufficient for people to survive their interactions. The absolute ease of interaction is important for broad inclusiveness, but what good is inclusiveness if you burn everyone out?

I think the structure of Wikipedia encourages us to think that we're interacting with people— the Wikipedians— and to some extent we are, but we are also interacting with the Wikipedia superorganism. This results in mistaken expectations, which create the risk of stressful interactions. My revised editing approach was effective for me because it basically expected nothing from Wikipedia. When you expect nothing you can't be disappointed.

There are, of course, other superorganisms that we interact with— corporations, governments, beehives. But each is different from the others and from Wikipedia in some key ways. When we think about how we interact with something we can identify certain properties that suggest ways of interacting: computers don't care if you're having a bad day, they just demand exact input, while if you're not sensitive to a person having a bad day they may well give you one too!
By having more ways to understand WikipediaTheSuperorganism we can have more realistic expectations and, as a result, less risk of stressful interactions. To these ends, I think the most important characteristics of Wikipedia are:

* Made from people, but isn't people.
* Evolved, not designed.
* Radically decentralized.
* Incapable of making decisions, and doesn't approximate them well.

Made from people. The part of Wikipedia that you interact with is made from people, but so is Soylent Green. On the projects, Wikipedians exist within a complicated structure of social pressures, policies, informational biases, community interactions, etc. The bee is not the hive. At the same time— unlike a retail job, where there is a clear motivation for "the customer is always right"— Wikipedians don't tend to check their humanity at the door. If your interactions fail on a simple human basis— if you treat people with disrespect— you'll do poorly. Successful interactions on Wikipedia are successful interactions with _both_ people and the system. In some contexts Wikipedia is more human-like (dealing with a small number of editors on a single article) or more machine-like (image copyright policy), and these areas result in fewer problems, at least for experienced editors who understand what to expect, simply because there are fewer things you must get right concurrently.

Evolved. Certainly Wikipedians have designed aspects of the project, but this has proceeded in an iterative process with no piece of design covering more than a tiny part. As a result the project is rich with complex interdependence. Why do we have WP:NOTABILITY? One reason is that we don't have WP:NOHOAXARTICLES. Why don't we have one? It's really hard to tell if an article is a (good) hoax or not, but much easier to say that there isn't much evidence one way or another. But even a single person's ideal evidence criteria for 'worth including / maintaining' are probably different from their ideal criteria for 'probably not someone playing a prank'. So an editor trying to improve the notability criteria may have an irreconcilable conflict with another editor trying to prevent Wikipedia from being flooded by hoaxes. Both are worthy goals, and reasonable people can disagree about how they should be weighed against each other. The complex interdependencies also cause people to imagine dependencies that don't exist. As a result, most non-trivial changes risk upsetting some other careful balance (real or imagined), and so changes to how Wikipedia itself works usually happen fairly slowly.

Radically decentralized. It's sometimes observed that on Wikipedia every contributor sees themselves as CEO (or at least as second in command). Often this draws comments from Wikipedians that we couldn't do it any other way— after all, we depend on volunteers. The conclusion is probably right; the reasoning, however, is rubbish. If you show up at the Red Cross and volunteer, telling them you're good at playing flight simulator, and they ask you to sweep the floor but you insist that you're going to fly their rescue helicopters, you'll find yourself on the street quickly. Rather, we can say that the radical decentralization is important because it has the lowest overhead possible (none) for our most common and important activity: making a boring improvement to an article. We can also observe that other projects have attempted to do what we're doing with less decentralization (such as CZ) and none have been successful (at least not compared to the English Wikipedia).
There may be other reasons for their failures, but in the end the fact remains that Wikipedia is this way, that it's fundamental to how it works, and that it's not changing anytime soon, so it's pointless to argue with it.

Does not make decisions. One fundamental consequence of true decentralization is that decentralized systems can't actually make decisions (if you care and don't believe this, I can present an easily understood proof of it later). I don't mean that they're indecisive, that they make a decision and change it— but rather that they really don't make a decision the way a person or a hierarchical superorganism can. They can, however, approximate decisions through a process I call convergence.

Consider some mathematical formula— x^2 == 0. I can ask you what x is and you can decide your answer exactly (it might even be right!)... but sometimes solving a formula directly is too hard. Instead you can use a process to approximate a decision: you can guess and test values and try over and over to refine your solution. If the process of guessing you follow tends to shrink the range of possible outcomes the more you run it, then the process is _convergent_. Convergent processes tend to change their solutions less the longer they run. That doesn't necessarily mean that the result becomes more accurate over time (though if the process is good it will!). When the amount of change goes below some threshold, you say it has converged.

Depending on where you draw the line on what constitutes a decision, you might also argue that some systems of governance (like democracies) have this property— even though a vote makes a 'decision', you can always have another vote. But even to the extent that this is true, when we interact with the government we're almost always interacting with an administrative agency which is run like a typical hierarchical superorganism and which does make real decisions. The Wikipedia superorganism also has some embedded hierarchical parts— the Wikimedia Foundation itself and the English Wikipedia ArbCom are the most obvious examples... but project contributors interact with these bodies fairly infrequently.

The processes used for 'decisions' on Wikipedia happen to be convergent for a fairly simple reason: whenever we have a process which is _divergent_ (the amount of change increases over time) it eventually creates a big dispute that pisses people off and causes them to change the process. So the project's evolution selects for convergence. Wikipedia's evolution doesn't, apparently, select for fast or direct convergence, and so it doesn't approximate real decision making especially well. This is a consequence of having to operate with unreliable parts— if you _never_ consider the same option twice you'll be more decisive in less time, but a single error could have devastating results.
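To make the convergence idea a little more concrete, here is a minimal sketch, in Python and purely illustrative (nothing Wikipedia actually runs), of a guess-and-test process: it approximates a solution to x*x == 2 by repeatedly shrinking the range of possible answers, and declares itself "converged" once successive guesses stop changing by more than a threshold.

    # Illustrative only: approximate an "answer" to x*x == target by guess and test.
    # Each pass shrinks the range of possible answers, so the process is convergent:
    # successive guesses change less and less the longer it runs.
    def converge(target=2.0, threshold=1e-6):
        low, high = 0.0, max(target, 1.0)    # a range known to contain the answer
        guess = (low + high) / 2.0
        while True:
            if guess * guess < target:
                low = guess                   # the answer lies above this guess
            else:
                high = guess                  # the answer lies at or below this guess
            new_guess = (low + high) / 2.0
            change = abs(new_guess - guess)
            guess = new_guess
            if change < threshold:            # the amount of change goes below some threshold
                return guess                  # ...and we say the process has converged

    print(converge())   # ~1.41421, a good approximation but never a final, exact "decision"

The analogy is loose, of course: Wikipedia's processes are carried out by people rather than arithmetic, and nothing guarantees that they converge quickly, or toward the best available answer.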
I believe these traits suggest some useful ideas for interacting with Wikipedia:

At any moment, the Wikipedia system is what it is. You may hope to change it— and you may someday be successful— but it isn't changed yet. Getting upset at it for simply being itself is not productive, will not change it, and will ultimately only make you unhappy.

Wikipedia doesn't have to make the best choice. Compromise is key in many relationships, but most relationships that depend on compromise involve few people. In workplaces we can resolve disputes by appealing to the boss, but on Wikipedia we must compromise, and compromise often, on project-wide actions because they involve so many people. Yes, compromise can be avoided until an issue escalates to ArbCom— but that's like settling an inability to compromise in a marriage via a divorce court. No one would call that a success.

Your contribution to Wikipedia is your participation in the process, not the result. A lottery ticket with 50% odds of being worth a million dollars or zero is still very valuable even though it isn't a sure thing. Likewise, a contribution which may be reverted is still valuable. Obviously we should try to make the system as unlikely as possible to undo good changes, but because Wikipedia can't make decisions it can never actually be guaranteed to do so. We have incredibly powerful proof— Wikipedia itself— that the _process of Wikipedia_, if followed, produces good results, and does so even though it sometimes undoes good changes. When you contribute to the process, at that instant you have made a real contribution to the world which can never be undone. If you tie your satisfaction to this you will never be disappointed; if you tie it to your change lasting, you often will be.

Finally, I'll close on a point which I consider more speculative, but also more frightening. If you accept the premise I offered earlier— that Wikipedia is a system which evolves towards more stable configurations through changes made by its contributors— one logically inevitable ramification is that Wikipedia would continue to evolve until behavior emerged which successfully drove away the contributors who would choose to change it. If Wikipedia is to be a system which serves people, rather than people who serve a system, it is not enough to say that the system should change to be more friendly. To whatever extent this fear is true, Wikipedia cannot be changed unless we first immunize ourselves through the ideas presented here, or ones like them.