Wikinews:Water cooler/proposals/archives/2013/November



Paper trail - supporting Wikinews community development

{{flag}}

I'd like community input on an idea that's been developing slowly in the back of my mind for an age or so, and that got stirred up yesterday by an off-wiki conversation on the scalability of Wikinews.

The problem
As en.wn has shaped up, we have two fundamental differences from the "traditional" wiki model, entangled so it'd be hard to call either "more fundamental" than the other: the review process, and accumulated reputation of individual users. But while we have massive technical infrastructure to support review (over the life of an individual article), we have almost no technical infrastructure to support accumulated reputation (over the project career of an individual user).
We keep track of individual reputations mainly by knowing everyone. Reputation is, and must be, grounded in perception of individuals by other individuals, but as the project scales up (and the elements of long-term growth are visibly converging, for those who know where to look) beyond the everybody-knows-everybody stage, we'll need better ways to keep track of individuals' history on the project.
An approach
Keep a software-supported "paper trail" of some sort for each user, conveniently conjurable for an individual user. It should be succinct so it doesn't suffer overly from tldr, robust against an individual's record being "poisoned" from the outside, and of course trivially easy to maintain.
Succinctness and robustness would both be served by either separating entries in a user's record by reviewers from entries by non-reviewers, or (more simply, though with some obvious loss of breadth) by only allowing entries by reviewers. Presenting an index of entries could also be a useful tactic.
A very simple approach would be to enter each review into the reporter's record; this might also somewhat alleviate the long-standing problem that article feedback tends to fall into a black hole if the article doesn't reach publication. (Userspacing articles isn't an altogether satisfactory solution because we don't want to keep every failed article lying around, both to prevent project abuse as a web host, and to avoid hanging on to things that might be copyvio or libel.)
Making the record easy to conjure, and easy to grok (succinctness again), should make it useful when dealing with the individual day-to-day as part of the review process, as well as when considering nominations for privileges.
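To make the shape of such a record concrete, here is a minimal sketch in Python; the entry fields, class names, and the reviewer/non-reviewer flag are illustrative assumptions for discussion, not a settled design:

 from dataclasses import dataclass, field
 from datetime import datetime, timezone
 
 @dataclass
 class Entry:
     """One line in a user's paper trail, e.g. a completed review."""
     when: datetime
     article: str         # article the entry concerns
     entered_by: str      # who made the entry
     by_reviewer: bool    # separates reviewer entries from others (robustness)
     outcome: str         # e.g. "published", "not ready: verification"
     comments: str = ""   # review feedback, preserved even if the article is deleted
 
 @dataclass
 class PaperTrail:
     """Succinct per-user record, conjurable on demand."""
     user: str
     entries: list[Entry] = field(default_factory=list)
 
     def add_review(self, article, reviewer, outcome, comments=""):
         """The 'very simple approach': enter each review into the reporter's record."""
         self.entries.append(Entry(datetime.now(timezone.utc), article,
                                   reviewer, True, outcome, comments))
 
     def index(self):
         """One line per reviewer entry, keeping the record easy to grok."""
         return [f"{e.when:%Y-%m-%d} {e.article}: {e.outcome}"
                 for e in self.entries if e.by_reviewer]

Under a sketch like this, a failed review would still leave a durable, succinct trace in the reporter's record (an add_review call with a "not ready" outcome), surviving even after the article itself is deleted.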

Right now this is still in the brainstorming phase. Although it's fair to ask what the available software can support (or the soon-to-be-available, as my interactive tools are now tantalizingly close to critical mass), it's also fair to ignore what can be done and speculate on what we'd like to be able to do.

So I'm putting this out here, and inviting thoughts/suggestions/alternatives/whatever. --Pi zero (talk) 12:03, 28 September 2013 (UTC)

If I am thinking of ways to monitor this, the metrics that offer a somewhat objective measure of accumulated reputation are:
  1. Total articles published:
    1. Average number of reviews an article goes through before it gets published.
    2. Average number of times a specific criterion is not met when a submitted article is not readied.
  2. Total types of articles published:
    1. Total synthesis stories published.
    2. Total original reporting event stories published.
    3. Total original reporting based on online coverage published.
    4. Total original interviews published.
    5. Total original photo essays published.
    6. Total events a person participated in (i.e., helping with pre-planned events with large outputs, either as reporter, photographer, copy editor or reviewer).
    7. Total number of reviews a person left.
    8. Size of the review commentary left for passing and not passing reviews.
  3. Total number of instructional materials created.
    1. Number of pages these materials are linked on.
    2. Number of times these are mentioned in TWG newsletters.
    3. Number of times these are printed and used at workshops.
  4. Total articles translated and successfully published.
    1. Number of articles in English to another language.
    2. Number of non-English to English language articles published.
  5. Total participation in outreach activities.
    1. Number of events helped organize and participate in.
    2. Number of events attended.
    3. Number of events spoken at about Wikinews.
Those would be the metrics that spring to mind for me immediately; a rough sketch of how the first couple could be computed follows below. --LauraHale (talk) 12:20, 28 September 2013 (UTC)
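As a rough, purely illustrative sketch of how the more objective of these (metrics 1 and 1.1) might be computed, assuming each review event were available as a simple record; the record shape here is hypothetical, and in practice the data would have to be extracted from review templates and page histories:

 from collections import defaultdict
 
 # Hypothetical record shape: one dict per review event.
 reviews = [
     {"reporter": "Alice", "article": "A", "passed": False},
     {"reporter": "Alice", "article": "A", "passed": True},
     {"reporter": "Alice", "article": "B", "passed": True},
 ]
 
 def reputation_metrics(reviews):
     # Group review outcomes by (reporter, article).
     per_article = defaultdict(list)
     for r in reviews:
         per_article[(r["reporter"], r["article"])].append(r["passed"])
 
     published = defaultdict(int)        # metric 1: total articles published
     review_counts = defaultdict(list)   # metric 1.1: reviews per published article
     for (reporter, _), outcomes in per_article.items():
         if any(outcomes):
             published[reporter] += 1
             review_counts[reporter].append(len(outcomes))  # counts the passing review too
 
     return {rep: {"total_published": published[rep],
                   "avg_reviews_to_publish": sum(counts) / len(counts)}
             for rep, counts in review_counts.items()}
 
 print(reputation_metrics(reviews))
 # {'Alice': {'total_published': 2, 'avg_reviews_to_publish': 1.5}}

The same grouping-by-article approach would extend to metric 1.2 if each failed review also recorded which criterion was not met.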
  • I've taken the excess indentation off the above, so they're numbered and can thus be commented on. I'm afraid I find some a little self-serving in their selection criteria. The priority for the project is, and always should be, published news.
I would strike the following altogether:
2.6 This is double-counting, or crediting for no output (anyone at an event should produce, or review, content; so no extra credit needed).
All points under 3; the project's policies are of more import than any materials I've seen produced. If you wish to apply that metric, I'd claim {{Howdy}}, which is on more user talk pages than you can shake a tree of sticks at.
All points under 5. We are nowhere near being in a position to assess any Wikinews-specific outreach.
Items under points 1 and 2 are the only ones where I have any confidence the measurement will be viewed as impartial. --Brian McNeil / talk 21:22, 19 October 2013 (UTC)
Not quite so sure about #2, since the classification is somewhat subjective. Also, en.WN deletes articles which are begun and developed to submission stage but do not pass review, yet which clearly represent both positive and negative reputation. (and, of course, we have that fine homogeneous review score based on purely objective measures...) - Amgine | t 14:04, 20 October 2013 (UTC)
I love everything about the above....but let me ask this (and I think you'll give some salient replies).....what value is there in supporting accumulated reputation (and its supposed development)? I have some of my own answers to that question, but I'd love to hear others' answers. --Bddpaux (talk) 17:39, 24 November 2013 (UTC)
In terms of future scalability, having an idea of how to measure things, both to see relative activity in the community and as a way of trying to articulate existing community trust, could be useful. It can remove some of the political type decision-making that happens on en.wp, where people seek roles less for assistance in sharing content freely than for the sake of hat collecting. It is also a potential metric that could allow greater understanding of activities in non-English communities, because it allows for potential replication. --LauraHale (talk) 18:36, 24 November 2013 (UTC)
Concerning the role of accumulated reputation on the project: accumulated reputation is what makes it possible for us as a project to produce quality output. News requires this, but honestly other projects would benefit from it too, once we've demonstrated a sufficiently all-around-successful approach that they'll want to emulate (some problems that we strive to mitigate, other sisters would want mitigated before they adopt the approach). If you try to treat everyone the same, you can't produce quality output; the alternative is to treat each contributor as an individual. But if one is to treat many people as individuals, which we certainly mean to do as we expand (and which a big project like Wikipedia would have to do on a truly vast scale), one has to have means to carry an individual's track record along with them, so that others who might not know the individual personally can readily find out about them.
I notice Amgine was mentioning our practice of deleting information about failed articles. I recently had an aspiring new contributor ask about how to preserve review comments after the article gets deleted; and this was iirc the subject of my first-ever post to the water cooler, something like three and a half years ago. The need to provide infrastructure for tracking accumulated reputation is implicit in our whole reputation-based approach to quality. --Pi zero (talk) 20:43, 24 November 2013 (UTC)
I have a partial list of total failed reviews. (I also have a list of failed reviews by reviewers.) I know we have a list of total volume of original reporting. I could provide this data, or try to chug out something to suggest potential output ideas. --LauraHale (talk) 21:12, 24 November 2013 (UTC)
Pretty much along the lines of everything I was thinking (and, er, a few things I wasn't thinking). By profession, I'm a social worker and I worked a long time as a Behavior Specialist for a big school system. I say that to indicate that I think a lot about how we do/don't reinforce behavior(s) here. A big part of what keeps people coming back here (or back to anything, to be exact) is about REINFORCEMENT. I always hate it when one person's work gets failed and they just run away, never to be heard from again. Mind you, that's going to happen.....to be credible, we have to maintain standards; but, I think we must ALWAYS TRY to churn the waters to keep new/somewhat new people here as we can/when we can. --Bddpaux (talk) 00:45, 25 November 2013 (UTC)