User:Pi zero/essays/The review process

From Wikinews, the free news source you can write!

Here is what I typically do when I review an article; fwiw, it may be of interest to other reviewers or to writers. I expect to meditate on it myself when considering how to provide semi-automated assistance for review.

Different features of the process shift around in both order and form based on my understanding (both conscious and unconscious) of what's needed. This page has grown long because it attempts to cover a wide range of different paths a review can take; and even so, the actual range of variation cannot be fully captured. Reviews come along regularly that have some feature I've never seen before. If the discussion here seems heavier on synthesis than OR, that may be because synthesis processing has less scope for its possible variations, and is thus easier to describe in detail.

I do most of review in a web browser that allows me to turn off javascript for all but a whitelist of sites; I've found that news sites often contain aggressive javascript that can do bad things to my platform if allowed to run.

Before source-check

I tend to fix problems with headlines early, so as not to forget later.

Rapid not-ready

When an article review leads to a not-ready, one would like the reviewer's time to have been used as efficiently as possible. This does not always mean minimizing time spent, but some kinds of reviewer time and effort are not well spent. Review time on a not-ready review may benefit the article, and may benefit the writer. If the article isn't ever going to get published, extra review time doesn't benefit the article. If the article does later get published, but the extra review time on an earlier, not-ready review goes into addressing details that will get wiped out by revision to address bigger problems, then, again, the extra review time doesn't benefit the article. The writer benefits from knowing sooner about major problems — if, say, there are three major problems with an article, but they only come out one at a time in not-ready reviews, the writer has the extraordinarily frustrating experience of being told there's a problem, fixing it, then being told there's another problem, and fixing that, only to be told there's still another problem. Even if the writer is only going to fix one problem after each review, it's far better that they are told on the first review about all three problems. The writer may benefit slightly from seeing some smaller things fixed early on, but overwhelming the writer with lesser factors may just be distracting and confusing when they need to absorb some major principles first.

I suspect, btw, that warning the first-time writer of all the major problems on the first review may be the single biggest thing that can be done to mitigate how discouraged they are later if their first submission, after multiple rounds of not-ready's, ultimately isn't published. Which is a common pattern. I have dabbled in gently warning people up-front that this can happen, noting that many veteran Wikinewsies lost their first article, but I've never been good at it. Another thing that I think can be very helpful is the note of personal interest demonstrated by leaving a note on the reporter's user talk telling them the review has happened; providing links to the review comments, unless perhaps they say nothing but that it passed, and to the article revision history if there's anything of interest there; and perhaps making some other encouraging personalized remarks. The first such note to a user seems likely the best place to mention the bit about veteran Wikinewsies losing their first article, if one is going to mention it. We have wanted for years to have a review gadget/assistant that helps the reviewer to leave user-talk notes, keeping in mind one really wants some personal touch to each such note rather than a purely automated message. I try to take time to write such notes, though alas by the time I reach the end of a review I tend to be a bit mentally drained and it really would be helpful to have some semi-automation. A generic example might be something like

== [[Mumbai shop owner accused of stealing kangaroo]] ==

Hi. I took a look at this; see my '''[[Talk:Mumbai shop owner accused of stealing kangaroo#{{anchorencode:Review of revision 1234567 [Not ready]}}|review comments]]''', and '''{{plainlinks|{{fullurl:Mumbai shop owner accused of stealing kangaroo|action=history}}|detailed history of edits during review}}'''. --~~~~

Published. Congrats! See '''[[Talk:Mumbai shop owner accused of stealing kangaroo#{{anchorencode:Review of revision 1234987 [Passed]}}|review comments]]''', '''{{plainlinks|{{fullurl:Mumbai shop owner accused of stealing kangaroo|action=history}}|detailed history of edits during review}}'''. --~~~~
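The semi-automation wished for above could start with something as simple as a note generator that fills in the generic templates, leaving the personal touch to the reviewer. A minimal Python sketch, in which the function name, parameter names, and outcome strings are my own inventions rather than anything existing on Wikinews:

```python
def review_note(title: str, revision: int, outcome: str) -> str:
    """Draft a user-talk note announcing a review of an article.

    Produces wikitext matching the generic examples above; the reviewer
    is expected to add personalized remarks by hand before saving.
    """
    anchor = f"Review of revision {revision} [{outcome}]"
    comments = f"'''[[Talk:{title}#{{{{anchorencode:{anchor}}}}}|review comments]]'''"
    history = (f"'''{{{{plainlinks|{{{{fullurl:{title}|action=history}}}}|"
               f"detailed history of edits during review}}}}'''")
    if outcome == "Passed":
        body = f"Published. Congrats! See {comments}, {history}. --~~~~"
    else:
        body = f"Hi. I took a look at this; see my {comments}, and {history}. --~~~~"
    return f"== [[{title}]] ==\n\n{body}"
```

A gadget built around something like this would still want to prompt the reviewer for a free-form personalized sentence or two, for the reasons given above.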

Some classes of major problems that I try to pick up on first review (of a sincere article):

  • Copyvio. If that's happening, the writer needs to know about it ASAP, because it's going to poison anything they write until they become aware of the issue; even starting a whole new article isn't going to help if they don't know to avoid copyvio. So even if I can instantly see another reason to not-ready, if I can take the time to check for major copyvio, I do.
  • Failure to demonstrate newsworthiness: failure of the lede to present a well-defined focal event and a sufficiently recent "day" word, and occasionally failure to explain relevance. (It's unusual for a serious contributor to submit an article about something for which relevance cannot be shown.)
  • Single source. I don't recall any good exposition of the full nuance of this point. We ask that a synthesis article have two mutually independent trustworthy sources corroborating the focal event; but then there are some circumstances where one of the two sources may anticipate the event beforehand, while the other bears witness that the event actually happened. This sort of before/after split is such a rare phenomenon that our treatment of it hasn't had much chance to evolve. The viability of the before/after arrangement gets harder to defend the more unanticipated detail there is about the event, especially if there's something really surprising about it; easier to defend if it would be difficult to provide a second source from after the fact. A notable case was this article about a court case in Iran, which was refreshed via a single source; as I recall, the outcome was expected though the exact date was not, and there weren't going to be droves of independent sources out of Iran.
  • Large-scale violations of neutrality, especially editorializing. Inadequate attribution isn't usually something I'd not-ready for unless it's quite extensive or intractable, or combined with a bunch of other things. A problem of that sort, if small enough for the reviewer to fix without disqualifying themself, would either be fixed, if it seemed the article might pass with the repair (keeping in mind that if one knows about a problem and leaves it for later, there's some risk of forgetting to take care of it), or would be mentioned when not-ready'ing for other things.

Preliminary copyvio check

At some point before the in-depth source-check, I may use automated tools to help me check for copyvio. (Clarification: copyvio is a term of art, covering both actual copyright violation, and plagiarism; this preliminary check is low-level, for passages that are effectively copied from sources rather than presenting the information in an original way.) There are two forms this might take.

  • Sometimes I'll choose a passage (perhaps one or two dozen words) and do a google search on it (a general search of the Internet). The point is to try to determine whether the material was copied from a probably-undeclared source. Failure to find such a match isn't decisive.
Just when to do this is a matter of intuition. It's much more likely to be appropriate if there are no sources provided, but if something feels suspicious it may be a good idea to do this anyway. In my experience most copying is done without malice; however, on rare occasions I've encountered astoundingly malicious individuals, and reviewers should always indulge their suspicions, because the ability to intuit that something doesn't feel right is part of the value of having human reviewers. A common pattern in cases of copying is that a first-time contributor submits very clean news copy that doesn't follow idiosyncrasies of English Wikinews house style; though of course this pattern also occurs with experienced writers, especially but not exclusively experienced journalists. Sometimes a desire to do this sort of check comes up later in the review, during source-check; information in an article that oddly mismatches the cited sources may suggest a source was left off the list (practically always either accidentally or by misunderstanding), or that a source changed after the writer consulted it (sadly, commercial news sites often don't limit their changes to increasing how much they inform their readers), or that the writer drew from Wikipedia (requiring a copyvio check and explanations).
Which passage to use for this sort of search is also something of an art. It's a lot of trouble to go to if it's not going to turn up a match, so one really wants to choose a passage especially likely to reveal an existing problem. The lede may be more likely to be customized for the Wikinews submission, so a match may be more likely elsewhere; maybe. Certain kinds of phrases may be particularly suspicious. After failing to get a match on one passage I may rethink how I chose the passage and try again.
  • Often I'll use toollabs:dupdet to compare the submitted article to each listed source. (Rarely I have also used dupdet to compare listed sources to each other, to help understand to what degree the sources have a common source behind them.)
This is a labor-intensive process, because the matches found by the tool include both false positives and false negatives. Direct quotes can produce long verbatim matches that aren't significant (if it's something that was direct-quoted by the source); but then a direct quote that's copied along with its surrounding unquoted context may be a problem; and of course putting too much or too little of the text within quotation marks further complicates things. A relatively generic passage that's just a bit over-length (the rule of thumb is fewer than four consecutive words) may be nothing if the surrounding context is unrelated, while a series of related passages separated by synonym substitutions could be a major copyvio. Occasionally I've discovered an entire lengthy sentence copied from source, with superficial scuffing up that doesn't mitigate the plagiarism problem, based on the clue of a two- or three-word sequence noted by dupdet; a situation where a human being can recognize a major problem that automation is oblivious to because the human being knows what the text means.
I do this sort of check a large fraction of the time, these days (except, of course, when dupdet has crashed). It may be useful with experienced writers as well as unknowns, since it covers flubs as well as misunderstandings — though there are some experienced writers who are known to be very unlikely to make this kind of flub, for whom it may be a poor use of reviewer time. It is, in any case, not a reliable way to pick up all such problems, and I not-infrequently notice similarity-to-source problems later, by unassisted human pattern-recognition during the in-depth source check — so that this automation-assisted preliminary check doesn't catch everything, and there's a likelihood of catching problems later even without it. Advantages of the pre-check include catching such problems earlier, before investing a great deal of effort into a review that fails for copyvio further downstream in the process; and giving me greater confidence because I have two different ways to catch copyvio (assisted and manual) instead of just one (manual).
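The choose-a-passage step of the google-search probe could itself be semi-automated by suggesting a distinctive sentence to search for. A minimal Python sketch of one possible heuristic; the function name, the crude stop-word list (a stand-in for real word-frequency data), and the length bounds are all my own assumptions:

```python
import re

# Crude stop-word list; a real tool would use proper word-frequency data.
COMMON = {"the", "a", "an", "of", "to", "in", "and", "on", "for", "that",
          "with", "is", "was", "said", "by", "at", "as", "it", "he", "she",
          "they", "has", "have", "had", "from"}

def candidate_passage(article: str, min_len: int = 8, max_len: int = 24) -> str:
    """Pick a sentence likely to be useful as a search-engine probe:
    long enough to be distinctive, short enough to paste into a search
    box, and rich in uncommon words."""
    sentences = re.split(r"(?<=[.!?])\s+", article)
    def score(s):
        words = re.findall(r"[a-z']+", s.lower())
        if not (min_len <= len(words) <= max_len):
            return -1.0
        return sum(1 for w in words if w not in COMMON) / len(words)
    return max(sentences, key=score)
```

As noted above, choosing the passage is an art; a tool like this could at best offer candidates for the reviewer to accept or reject.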
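The core of a dupdet-style comparison can be approximated with word n-gram "shingles". This Python sketch is my own rough approximation, not dupdet's actual algorithm; it reports runs of four or more consecutive shared words, since the rule of thumb draws the line at fewer than four, and as discussed above a human still has to judge which matches matter:

```python
import re

def shared_ngrams(article: str, source: str, n: int = 4) -> list:
    """Report maximal runs of n or more consecutive words that appear
    in both texts (case- and punctuation-insensitive)."""
    def words(t):
        return re.findall(r"[a-z']+", t.lower())
    a, s = words(article), words(source)
    src_grams = {tuple(s[i:i + n]) for i in range(len(s) - n + 1)}
    hits, i = [], 0
    while i <= len(a) - n:
        if tuple(a[i:i + n]) in src_grams:
            j = i + n  # extend the match as far as it continues in the source
            while j < len(a) and tuple(a[j - n + 1:j + 1]) in src_grams:
                j += 1
            hits.append(" ".join(a[i:j]))
            i = j
        else:
            i += 1
    return hits
```

Note this mirrors the tool's limitations described above: a direct quote produces a long "match" that may be fine, while synonym-substituted copying slips through as a series of short runs; the human reviewer still supplies the judgment.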

Set up scratch copy

I almost always set up a copy of the article that I can mark up to keep track of what I have and haven't done in the review.

When I first started reviewing, I used Special:ExpandTemplates for this. The interface was clumsy, but I found it could be made to work, at least to keep track of which text passages had not yet been examined. Later, User:Bawolff provided a simple gadget that allowed the superficial appearance of a page to be modified by editing text and by shading particular passages with different background colors; the main disadvantage of that was that if one ever left the page for any reason, such as accidentally clicking on a link, the entire state of the page was instantly lost; but I used that for years, until tinkering with the interface by the devs caused the gadget to cease working. By the time the gadget failed, I'd advanced far enough with the dialog tools to construct a breadboard, which has a more ergonomic interface (for my purposes) than Special:ExpandTemplates, and holds its state better than the gadget had. Since I'd learned from the gadget how useful shading can be during review, nowadays I use {{background color}}, which is tolerably usable.

If the copyvio pre-check turns up so big a problem that there's no need to keep detailed track of the problem passages, I might not bother to set up the scratch copy. If the pre-check turns up nothing, or almost nothing so that I can just do a small copyedit and eliminate the problems, I might wait till after the pre-check to set up the scratch copy. Typically, though, I'll set them both up at once, and shade passages too close to source in different colors to show which source they come from, or do them all in yellow if there are too many different sources to give each a color; I reserve lightblue for passages that aren't a problem and have been verified successfully, and orange for fragments that are superficially different within a longer copyvio passage (so I can show the size of a problem passage and also how much it's been "scuffed up"). If there's more than a very small amount of copyvio to mark up, I'll rarely take the time to find and mark it all (my thinking about this has evolved over time).
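Generating the shading markup is mechanical enough to sketch. In this Python fragment the color scheme mirrors the one described above; the positional parameter order of {{background color}} (color first, then text) is my assumption about the template, and the names are mine:

```python
# Color scheme from the description above (assumed, not authoritative):
# lightblue = verified, orange = scuffed-up fragment inside a copied
# passage, yellow = too close to source (generic, when sources are many).
VERIFIED, SCUFFED, GENERIC = "lightblue", "orange", "yellow"

def shade(passage: str, color: str) -> str:
    """Wrap a passage in {{background color}} markup for the scratch copy."""
    return f"{{{{background color|{color}|{passage}}}}}"
```

A breadboard-based review assistant could apply something like this to a selected span rather than requiring the markup to be typed by hand.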

Preliminary read-through and copyedit

At some point before undertaking the in-depth source-check, I'll generally make sure that, at a minimum, I've read the entire article through carefully from beginning to end, and done basic copyediting. (I find it hard to read through without copyediting, as it bothers me at the time and it's very annoying if, having noticed a problem and not acted on it earlier, I forget to act on it later.) If I'm not going to move on to the source-check on the current review, how closely I look at the article body (beyond the lede) may vary; if there's likelihood of neutrality problems, for example, that would call for closer study. The dialog of an interview, I might only skim before source-check, studying the text in-depth only at the time of in-depth verification.

Whether to do the read-through/copyedit before or after copyvio pre-check is a decision for each article; sometimes fixing the mechanical problems makes it much easier to see the closeness-to-source, while other times it's already very close and copyedits only make the pre-check slightly harder.

Basic copyediting includes, amongst other things,

  • grammar and spelling, ascifying quotation marks and apostrophes, spelling out small numbers, italics, categories. Maybe an infobox.
  • wikilinks. There are two schools of thought; either create hard local links when possible, so as to minimize local links via {{w}}; or use {{w}} for everything when first writing the article, trusting that the local ones will later be processed carefully. In favor of using {{w}} for everything, it is the simplest thing to do when writing, when there's plenty of other things to worry about. It is also the scenario that was envisioned when planning for the creation of template {{w}} (before gaining any practical experience with it): start out with all links using {{w}}, then later on, as a maintenance task, consider each individual local link by comparing it to the list of categories for the article, decide whether or not to add some category based on that link, then change the link to a hard local link so it is removed from the list of such links still to be considered. (The list of such local links still to be considered is Category:Pages with categorizable local links.) When a link via {{w}} is non-local, one still has to worry about, first, whether it should be local, which would involve either retargeting it, or creating a local redirect, or in some cases creating a local disambiguation page; and, second, if it still remains non-local, whether its non-local target exists — because, as a news site, we don't want links that don't lead to an existing article. Optional parameters to {{w}} that are sometimes useful include foreign=suppress, sister, and anchor; but anchor prevents local linking so it's usually preferable to target a non-local redirect to the intended section.
  • date formats. When using "today" or "yesterday" it's a good idea to follow that word with an embedded html comment<!--like this--> specifying the day of the week, which helps keep track of things for verification during review and, if the word occurs in the lede (as it often does), also helps anyone using the WN:Make lead device to remember to replace today/yesterday with the day of the week.
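The weekday comment is easy to generate mechanically once the article's date is known. A small Python sketch (the function name and interface are my own):

```python
from datetime import date, timedelta

def day_comment(word: str, article_date: date) -> str:
    """Return 'today' or 'yesterday' followed by an embedded HTML comment
    naming the weekday, computed relative to the article's date."""
    offset = {"today": 0, "yesterday": 1}[word.lower()]
    day = article_date - timedelta(days=offset)
    return f"{word}<!--{day.strftime('%A')}-->"
```

The same computation could serve the reviewer in the opposite direction, checking that an already-embedded weekday comment agrees with the article's date.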

Source-check

Synthesis sources

For a review with a small-to-moderate number of sources, I open each source in a browser tab. This, especially, is where I use a browser that won't execute javascript on the source pages; though I sometimes have to open some sources in a browser that runs the javascript, because important aspects of the source needed for the review, such as menus, aren't there without the javascript.

I typically start by checking all the source citations, which, in addition to getting that information right and uncovering/highlighting some problems related to source dates, also helps ease me into familiarity with the array of sources cited.

The two basic modes for conducting the in-depth source-check are, (1) read each source through, marking on the scratch copy what is verified (by lightblue shading, or by deletion); or (2) choose a particular item in our article and track it down in the sources. Under ordinary conditions — which are not so usual — I'll do the first, then use the second to deal with whatever I didn't verify on the first pass. Sometimes that doesn't work — just for example, a blow-by-blow live-blogging account of an entire football (soccer) match does not really lend itself to reading straight through. Even if I resort partly or entirely to using the second method rather than the first, there's a good chance by the end of the review I'll have read through much, or all, of the sources because some facts will not lend themselves to string-search.
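The second mode, tracking a particular item down across the sources, can be roughly mechanized when the sources' text is available. A Python sketch (names mine), with crude normalization so that case and minor punctuation differences don't block a match; it of course cannot help with facts that, as noted above, don't lend themselves to string-search:

```python
import re

def normalize(text: str) -> str:
    """Lowercase, collapse whitespace, and drop punctuation for looser matching."""
    text = re.sub(r"\s+", " ", text.lower())
    return re.sub(r"[^a-z0-9 ]", "", text)

def locate_fact(fact: str, sources: dict) -> list:
    """Return the names of sources whose text contains the given fact string."""
    needle = normalize(fact)
    return [name for name, text in sources.items() if needle in normalize(text)]
```

An empty result doesn't mean the fact is unverifiable; it means the reviewer falls back on reading, since the source may state the same fact in entirely different words.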

This phase of the review isn't just verification; often neutrality, style, and of course copyvio issues can only be resolved in consultation with the sources.

Occasionally articles have valid reasons for huge numbers of sources; On the campaign trail articles have not uncommonly had forty or fifty. Fortunately the reporter on that series of articles goes to a great deal of effort to embed many, many html comments to document just which facts come from which sources. The articles are complicated enough that even all those comments can't cover everything; in those articles I'm likely to open the sources one-at-a-time, string-search to find all the embedded comments citing that source, and try to be familiar enough with all the materials in the Wikinews article to read through the source and find everything it verifies before moving on. When something isn't verified that way, figuring out which of the many sources to look in is an art.
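Given embedded comments like those, gathering everything attributed to one source is mechanical. A Python sketch; the comment format shown in the test is hypothetical, as real articles may phrase their annotations however the reporter chooses:

```python
import re

def comments_citing(wikitext: str, source_name: str) -> list:
    """Collect embedded HTML comments that mention the named source."""
    comments = re.findall(r"<!--(.*?)-->", wikitext, flags=re.S)
    return [c.strip() for c in comments if source_name.lower() in c.lower()]
```

This automates only the string-search half of the process described above; deciding that a source verifies a fact, and figuring out where to look when no comment points anywhere, remain the art.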

Original reporting

Whereas synthesis is apt to be pure synthesis, OR isn't pure (at least, I can't recall ever seeing such a thing on Wikinews, and have difficulty imagining it). Even interviews, about as close to pure OR as can be, have introductory sections providing background information backed by synthesis sources. I'll typically process the synthesis sources first, before tackling the OR material (which is typically on scoop); amongst advantages of this approach, it tends to provide familiarity with the background that can help in understanding the original documentation. In reviewing interview introductions I've had moments of confusion before realizing that, although it's generally desirable to draw most of the introductory background from independent sources, there may be some points that are, and safely can be, taken from the interview content.

In checking the original documentation, three factors are in play: the bona fides of the documentary evidence, the meaning of the documentary evidence, and the relationship between the documentary evidence and the material presented in the Wikinews article.

We use, of course, accumulated reputation to inform the question of bona fides; this is one of the various subtly pervasive differences between Wikinews and Wikipedia: whereas Wikipedia tries to achieve general cooperation through uniform treatment of others, every aspect of Wikinews is geared toward seeing users as individuals. Trying to process OR without knowing the reporter (in an interactive sense; knowing someone's name is likely to be useless) is extraordinarily tricky. Although the option exists to contact an interviewee directly to verify the interview, doing so might not be as useful as it sounds. I recall one occasion where I was presented by an unknown user with an interview whose subject was fairly prominent and could be independently contacted — but as I looked into their background, I found that the interviewee was famous for perpetrating hoaxes on the news media and later taunting them over their gullibility, and that they had a partner they had historically worked with... and I began to suspect that the interviewer might be the interviewee's partner. In which case, contacting the interviewee to verify the interview would be of no use at all. I could see no possible way of verifying the interview, and said so. Although there are many situations in which we're happy to userspace unpublished OR (and some where we'd userspace unpublished synthesis), material under a cloud like that should be allowed to fall through abandonment to deletion.

What the documentary evidence means is a question of what one actually knows by looking at the evidence, even given its bona fides. This is why we want notes taken during an event rather than after, and why we want an audio recording (with the great hassle for a reviewer to check it against its transcription) rather than an uncheckable transcript. This isn't fully separable from the relationship to the Wikinews article. Given an audio recording together with the transcript in the Wikinews article, I'm often able to improve the transcript, because two sets of ears is better than one. Sometimes I've been able (with difficulty, admittedly) to verify article text by carefully comparing it to field notes and recognizing how the reporter who took the notes was reliably reminded of what the article says by what the notes say, even though the notes in themselves would be cryptic to an outsider.

A simple, general principle is that if there isn't enough documentation for the reviewer to check the article text against it, there isn't enough documentation. We encourage, of course, original reporters to provide way more documentation than is needed; redundant documentation gives better perspective on both the reporting process and the article content.

Spoken text always needs some neatifying during transcription; as linguists are well aware, human speech is full of ums and ers (or variants of those sounds, depending on dialect/culture/native language/etc.), hesitations, and retries, and could be nearly incomprehensible if transcribed exactly. Our style guide acknowledges reasonable polishing (WN:SG#Quotes). One also tries not to do too much, not to wash out the speaker's individual character. Editorial marks in square brackets, such as ellipsis ("[...]"), missing first or last names, words whose transcription is uncertain, explanations, and even rarely "[unintelligible]", can be powerful tools to help the reader.

Final sanity check

If the article has survived to the end of the source-check, I emerge from wading through the source-check's details and try to look at the overall shape of the article. Does it read smoothly? Is there systemic bias? (By this time I've not only read through all the details, I've dwelled in them.) Is there a problem with the title?

Am I forgetting something really important?

Always, once I've called up the review gadget with intent to publish, I then call up the article history in a separate browser panel. By calling them up in that order I can be certain that I'm seeing the complete history of the page revision that the review gadget is looking at; so if somebody other than me has edited the article since I started my review, I'll know it. If they have, I have to study what happened (and possibly leave someone a note about the meaning of the {{under review}} template); and in any case I use a diff, or if necessary more than one, to get a clear picture of what I did during review, the better to judge whether something has gone seriously wrong and/or whether I've really gotten myself too involved.

There is a moment when I've got the review gadget set up for a passing review, and I'm hovering over the submit button — and it's rather scary. All that work, and it's so easy to miss things — it's impossible to avoid missing things — my worry is whether I've missed anything really important. I've spoken with other reviewers who describe the same experience, of the moment before publishing. If you understand the responsibility, you should feel that.