Thursday, February 21, 2013

Tech related IEG proposals

The deadline for IEG proposals has now passed. IEGs (Individual Engagement Grants) are a new pilot program by the Wikimedia Foundation to give small amounts of money to people who promise to do cool things with it. OK, the criteria are a bit more complicated than that, but that is the gist of it.


I thought I'd take some time to look through the technical proposals. To be honest, I was hoping to see more programming proposals, something like Google Summer of Code but for already experienced devs. By and large that did not seem to happen. This may be due to some mixed messages on technical projects - contrast this mailing list post with this late addition to the rules I just discovered today. It may also simply be because it's a new program, and developers weren't its primary audience. Perhaps it has to do with the timing, which would interfere with students going to school (as opposed to Google Summer of Code, which coincides with summer break), resulting in fewer student programmers participating. Who knows.

Additionally, of the technical proposals made, only one actually consulted with the developer community at large (by which I mean wikitech-l) :( I should note that some of the proposals I'm listing below as technical are more of the form "develop a vision statement", which doesn't really require consulting with the dev community. However, I still expected more people to be chatting up the devs in relation to IEG proposals.


tl;dr: My favourites are Elaborate Wikisource strategic vision and The Wikipedia Adventure. My runner-up favourite is backlog pages for WikiProjects (that one is a runner-up as it's too vague about what will actually be accomplished).

Anyhow, here's my take on the technical proposals that were submitted. Note that I have mostly just read through each proposal once, so if I've misunderstood anything, I apologize in advance.

Backlog pages for WikiProjects

This is an interesting proposal. Basically, the author notes that Wikipedia has categories and organizational pages for its various backlogs. However, individual WikiProjects do not have such per-project backlog pages, or if they do, they're very limited.

The actual proposal for what to do is rather vague. It sounds slightly like figuring out what to do is part of the proposal. From what I've gathered, the proposal breaks down into two related wants:

  • (Efficient) Category intersection - The ability to get all pages that are in the intersection of a set of categories. There are some tools that do this already - like DPL [not enabled on Wikipedia, but it is on other wikis like Meta] and CATSCAN. Neither scales well once things get big.
  • A snazzy interface for showing the backlog - The authors point to WikiHow's CommunityDashboard as an example to potentially emulate. This is the first time I've heard of WikiHow's tool, and while I only gave it a brief glance, it is very cool looking.
Category intersection is an interesting problem, one that has been wanted for quite a long time by many people. I'm currently the maintainer of the DynamicPageList extension (however I mostly ignore it, and simply fix the rare bug that pops up). DynamicPageList does category intersection in the naive way, which simply does not scale to Wikipedia-size wikis (or even wikis significantly smaller than the English Wikipedia). By the naive method, I mean doing a bunch of self-joins on the categorylinks table. Some people have suggested that it may be possible to implement this efficiently using full-text indexes and a program like Lucene. There's even a proof-of-concept extension written using this approach. Adapting DynamicPageList to use this type of method is certainly something I would personally like to investigate if I ever had a large swath of free time.
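For the curious, here is a rough sketch of what that naive approach boils down to, using MediaWiki's database abstraction (the table and column names are real schema; the category names and surrounding code are just illustrative):

```php
<?php
// A minimal sketch of the "naive" intersection: one self-join on the
// categorylinks table per category.
$dbr = wfGetDB( DB_SLAVE );

$res = $dbr->select(
	array( 'c1' => 'categorylinks', 'c2' => 'categorylinks', 'page' ),
	array( 'page_namespace', 'page_title' ),
	array(
		'c1.cl_to' => 'Living_people',            // first category (example)
		'c2.cl_to' => 'Articles_lacking_sources', // second category (example)
	),
	__METHOD__,
	array( 'LIMIT' => 50 ),
	array(
		'c2'   => array( 'INNER JOIN', 'c1.cl_from = c2.cl_from' ),
		'page' => array( 'INNER JOIN', 'page_id = c1.cl_from' ),
	)
);

foreach ( $res as $row ) {
	// Every additional category in the intersection means another self-join,
	// which is why this falls over on Wikipedia-sized category tables.
	echo "{$row->page_namespace}:{$row->page_title}\n";
}
```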

The authors of this proposal suggest $1000 to hire a developer to implement their feature requests. While it's hard to be certain, as the actual project requirements of this proposal are basically not defined, that seems like way too low a number given the amount of work wanted for the project, particularly if efficient category intersection is a requirement.

Elaborate Wikisource strategic vision

I really like this proposal. Wikisource has always been a bit of a mystery to me. I know it has something to do with digitizing documents, and proofreading the resulting text, but I don't know much beyond that. In particular, I have almost no knowledge about how their main tool, ProofreadPage, actually works.

Having a strategic vision for Wikisource would help more people understand, and thus appreciate, the work of Wikisource. In turn, this may result in more people using Wikisource.

One of the things I noticed about the proposal is that they make it very clear they want to concentrate, in the short term, on things that do not need Wikimedia technical staff attention. This is probably a reaction to how Wikisource has been ignored by both the foundation and the larger developer community. The vast majority of work done on Wikisource-related extensions has been done by volunteer developers who come from the Wikisource project.

Personally, I would caution this proposal against ignoring potential WMF tech resources too much. While it's important to consider what is doable and what is not, it is also good to first decide what is wanted, and then figure out how to do it (where there's a will there's a way). Wikisource may even find that once there is a clear picture of what is needed, many more resources are available to them. WMF employees aren't the only developers; there are also (non-Wikisource) volunteers. Who knows, perhaps these people would be willing to help if they knew what needed doing. (More generally, if you want some new feature for your wiki, a good first step is always to produce a good design document of precisely what is wanted. Developers aren't mind readers, and would much rather code than try to figure out what the user wants. Having a clear statement about what you need may be half the battle to getting it.) Also, just because the WMF isn't willing to devote tech resources to Wikisource doesn't mean that employees might not help. Employees do have 20% time, and occasionally even commit code unrelated to foundation goals in their free time.

I really wish this project luck, and should it be accepted, I look forward to reading the final report.
Edit: I wrote this section before I saw the new part of the rules where nothing involving WMF-tech resources is allowed. With that in mind, the no-wmf-tech parts of this proposal make much more sense.

Mapping History: Revision History Visualizer and Improvement Suggester using Geo-Spatial Technologies

This one gets points for being the only tech proposal to actually talk to the developer community.

Basically, what they want to do is create a map from an article's edit history, to highlight which regions are editing the article the most. Afterwards they want to do some fancy machine learning stuff to see if any automatic inferences can be made from this geo-spatial data (for example, if only one country edits an article, maybe it's POV).

Unfortunately the proposal has several problems. First of all, the privacy policy. The authors want to get the IP addresses of logged-in users, in order to find out roughly where they live, so they can be plotted on a map. That's not going to happen, for privacy reasons, end of story. Hence the visualizations will be a lot less complete (if they only use anon locations). The proposal could perhaps parse user pages for location-based infoboxes, but not everyone specifies that sort of information.
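To illustrate what is actually available without touching private data, here's a rough sketch of pulling anonymous editors' IPs - the only IPs that are public - out of the revision history via the API (the article title and limits are just examples):

```php
<?php
// Sketch only: fetch an article's revision history and keep the anonymous
// editors, whose "user" field is their (public) IP address. Registered
// users are skipped, since their IPs are private.
$url = 'https://en.wikipedia.org/w/api.php?action=query&prop=revisions'
	. '&titles=' . urlencode( 'Glanum' )          // example article
	. '&rvprop=' . urlencode( 'user|timestamp' )
	. '&rvlimit=500&format=json';

$data = json_decode( file_get_contents( $url ), true );

$anonIps = array();
foreach ( $data['query']['pages'] as $page ) {
	foreach ( $page['revisions'] as $rev ) {
		if ( isset( $rev['user'] ) && filter_var( $rev['user'], FILTER_VALIDATE_IP ) ) {
			$anonIps[] = $rev['user'];
		}
	}
}
// $anonIps could then be run through a geolocation database and plotted,
// which is a much thinner dataset than what the proposal assumes.
```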

Beyond not sufficiently researching the privacy policy, the authors seem not to understand what sort of access different technical projects (extensions vs gadgets vs third-party hosted thingies) have, along with what data the API provides. I would expect someone making such a proposal to understand the limitations of the technology they intend to use.

Last of all, the $30,000 budget request seems a little high relative to the amount of work (that I believe would be required) and the impact the project would make.

MediaWiki and Javanese script

This is an interesting one. It would be interesting to see what someone from the WMF's i18n team thought of it.

As far as I understand, the main points are:

  • There are no input methods generally available for the Javanese script except in MediaWiki (which sounds odd to me)
  • People should be able to type in their own script easily
  • Therefore we should distribute MediaWiki-on-a-stick (a wiki on a USB stick, so you can take the wiki with you).
First of all, it would be kind of cool if wiki-on-a-stick were supported for MediaWiki. The author mentions XAMPP, but there might be simpler options (using PHP's built-in webserver, combined with SQLite - something like the sketch below). However, the wiki-on-a-stick part seems to be a means to an end, not the main goal of this project.
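For what it's worth, a hypothetical LocalSettings.php fragment for that kind of setup might look something like this (the paths and port are placeholders):

```php
<?php
// "Wiki on a stick" sketch: SQLite keeps the whole database in one file on
// the USB key, and PHP 5.4+'s built-in webserver avoids bundling Apache.
$wgDBtype        = 'sqlite';
$wgSQLiteDataDir = __DIR__ . '/data';    // db file lives next to the code
$wgServer        = 'http://localhost:8080';

// The wiki could then be started with something like:
//   php -S localhost:8080 -t /path/to/mediawiki
// i.e. no separate webserver or database daemon to install.
```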

For the actual project, I'm not sure what the end goal is - have Javanese-speaking people start using MediaWiki as a personal word processor? It seems like making an input method for X11 (or wherever input methods go in general for various operating systems) would be much more effective at accomplishing the author's goals. It's also unclear how this benefits Wikimedia, other than that wiki-on-a-stick support would benefit MediaWiki. Having more Javanese speakers familiar with MediaWiki might make them more likely to contribute to a Javanese project, but that seems like a rather indirect benefit.

Replay Edits

This is an interesting proposal.

From what I understand, what is being proposed is that you could replay the history of an article, having text added and removed in front of your eyes. Somewhat similar to how edits happen in front of your eyes in Etherpad (?) (but replaying the past, not real-time editing). This would allow a cool visualization of how articles change with time.

I'm unclear whether this proposal is meant to operate on the wikitext or on the rendered page. I'm also unclear whether, as it moves forward in time, it highlights the changes or just shows the new page. Some of the comments on the talk page, and this mockup, suggest it would work on rendered pages. Having diffs that highlight what changed, but on the rendered page instead of the wikitext source (a so-called visual diff), is a feature that would be awesome in and of itself. (There was, once upon a time, some experimental support in MW for this, but it was removed due to being incomplete.)
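Just to make the idea concrete, one way a replay tool could walk the history is to pull the revision ids in order and ask the API for the diff between each consecutive pair - which notably gives you wikitext diffs, not rendered ones. A sketch (article title and limits are examples):

```php
<?php
// Sketch: list revisions oldest-first, then fetch each consecutive diff.
$api = 'https://en.wikipedia.org/w/api.php';

$list = json_decode( file_get_contents(
	$api . '?action=query&prop=revisions&titles=' . urlencode( 'Glanum' )
	. '&rvprop=' . urlencode( 'ids|timestamp' )
	. '&rvlimit=50&rvdir=newer&format=json'
), true );

$page = reset( $list['query']['pages'] );
$revs = $page['revisions'];

for ( $i = 1; $i < count( $revs ); $i++ ) {
	// action=compare returns an HTML diff table between two revisions.
	$diff = json_decode( file_get_contents(
		$api . '?action=compare&fromrev=' . $revs[$i - 1]['revid']
		. '&torev=' . $revs[$i]['revid'] . '&format=json'
	), true );
	$html = $diff['compare']['*']; // diff markup a replay UI could animate
	// A true "visual diff" of the rendered page would be a much bigger job.
}
```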

If it is indeed the author's intention to provide visual diffs, then this project becomes quite exciting. It also becomes quite a bit more difficult, and I would be hesitant to support it unless the author stated his implementation plans in much more detail, in order to verify he understands the issues involved. If this is more just a visualization of how articles change over time, it is a much lower-impact project, but still an interesting one. I would support it, especially because the proposer is only asking for $200 to do this.

MediaWiki data browser

This proposal is from Yaron, who is (among other things) a very prominent developer of Semantic MediaWiki. This is by far the most ambitious technical project of any proposed and could potentially have a huge impact.

At the same time, it is a little unclear what is actually being proposed. The author describes a framework for creating drill-down interfaces. Perhaps my confusion stems from only having a vague idea of what a drill-down interface is. A picture of an example interface would really be worth a thousand words.

With that said, the idea seems to be creating an interface where the user could filter or select pages by some criteria based on information in an infobox. This all sounds really cool, but it also sounds very hand-wavy; to really evaluate this proposal, I think I would need to better understand what is actually being proposed. A concrete example of what an app designed with the framework would look like, including what sort of scope in terms of data processing a potential app could have, would be helpful.

An interesting part of this proposal is that all the processing is done on the client side. The author mentions that (obviously) only a small portion of Wikipedia's data would be downloaded. I would be interested to know more about how much data would be downloaded, what data would be downloaded (is it the wikitext of the relevant pages?), and how the framework would find the relevant information it needs to download (this is part of my confusion over what information the framework would actually be working on).

Certainly an interesting proposal, and one with much potential.

TapAMap

Basically, the author has an Apple iPhone app that gives you a map. You click somewhere on the map, and it takes you to the article nearest to where you clicked. This is the opposite of most geo-location efforts, although it is somewhat similar to WikiMiniAtlas-type things that provide wikilinks to various places at their location on the map. It appears this one tries to be different by not showing textual links on the map, but instead concentrating on the geographical location only.
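For what it's worth, the server side of "tap a point, get the nearest article" is already roughly covered by the GeoData extension's geosearch API, assuming it's enabled on the wiki in question. A sketch with made-up coordinates:

```php
<?php
// Sketch: find the article nearest to a tapped coordinate via list=geosearch.
// The coordinates and radius are arbitrary examples.
$url = 'https://en.wikipedia.org/w/api.php?action=query&list=geosearch'
	. '&gscoord=' . urlencode( '43.7725|4.8339' )
	. '&gsradius=10000&gslimit=1&format=json';

$result = json_decode( file_get_contents( $url ), true );
if ( !empty( $result['query']['geosearch'] ) ) {
	$nearest = $result['query']['geosearch'][0];
	echo $nearest['title'] . ' (' . $nearest['dist'] . " m away)\n";
}
```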

The developer wants grant money to port his app to Android. Apparently the iPhone version is fairly popular.

My main concern with this proposal is that while it is different from other article-mapping things, it's similar enough to make it relatively low impact. Additionally, it seems the author is reluctant to open source his app, or is possibly only willing to open source the Android port that would be funded by the grant. I feel this would be a show stopper. Anything funded using Wikimedia money should be Free Software, no ifs, ands, or buts. I would not support this proposal unless the entire thing (including the existing iPhone app and the proposed Android port) were GPL'd (or licensed under another OSI-approved license).

Wiki Makes Video

I'm only going to briefly mention this, as it's mostly non-technical, but it does include implementing a video capture/upload? app for phones. Making videos easier is certainly a useful thing, and something we could do much better at. I would probably want to check that the Mobile and TMH tech teams aren't already doing anything in this direction (I don't think they are, but it's worth checking).

The Wikipedia Adventure

Last but certainly not least (whew, there were actually a lot more of these than I originally thought) comes The Wikipedia Adventure. This is a proposal to continue some work originally started as a fellowship: to create an educational game to show people how to edit Wikipedia.

This is an interesting approach to help break down barriers. While I am personally a fan of manuals and what not, I understand that most people aren't, and this could serve as a very effective introduction to editing Wikipedia.

I (very) briefly tried the prototype, and I must say it's pretty cool. I would be interested to see where people can go with this, if given the proper opportunities to pursue it. This is definitely a proposal I would support.

And that is the end. There were actually quite a few more tech proposals than I thought, and it took a lot longer to read through them than I thought it would. If you've stuck with reading this blog post for this long, thanks for reading :)

Tuesday, August 21, 2012

On git and gerrit

We've now been using git and gerrit for MediaWiki for quite some time. I must say the software has grown on me quite a bit. When we first switched, I hated both of them. Now that I've had some time to adjust, I've discovered that I really like git, and I don't hate gerrit quite as much as I used to.

First of all, git. Git is quite cool from a technology standpoint. For my use case, I like the ability to work on multiple different features at once, since you can have multiple local branches. With SVN, when I wanted to do that, I had to use workflows like svn diff > somefile.patch; svn revert. With git I can switch between different branches easily, and it's all still contained in the version control system.

The ability with git to easily work offline is also very nice. I currently don't have an internet connection at home (and no neighbour's wifi to steal either; I don't know what this world is coming to ;). Git makes it much easier to develop features without having to go somewhere like a library. [Note: the no-internet thing is by choice, and is a way to force myself to waste less time on the internet.] I can work on things, save (commit) them to a branch, work on more things, commit that, all while being in isolation from the rest of the inter-connected world.

The only thing I do slightly miss from SVN is incremental revision numbers. In SVN, each version of the code had an id number, and they went in order. With git it is a totally random-looking sha1 hash (e.g. 6176d71256aa94a25c471c8696f28820f0b4e8e7). This is less annoying than it might seem at first glance; however, it means when I do git blame I get these sha1 things rather than monotonically increasing revision numbers. This makes it harder to tell what version something was introduced in, because I can't just look up the revision on the [[mw:Branch points]] page. (To be fair, git also provides a date, which helps somewhat.)

Now on to gerrit. Gerrit has certainly grown on me. I think this is a combination of getting used to it, and the new skin we started to use. However, I still think the interface is horrid, and I miss [[mw:Special:CodeReview]].

The interface to gerrit is pretty confusing, especially at first. Almost everyone fails to save an inline comment correctly the first time around (hint: you also have to save a non-inline comment for it to go through). I don't like the fact that I can't use wildcards when searching for projects (i.e. I cannot do project:mediawiki/* to get mediawiki/core as well as everything in mediawiki/extensions/...), since I mostly don't care what happens in ops (although I am very happy to see how ops is becoming more and more open - good job ops folks!). You also cannot do a search for everything that matches a certain path (AFAIK - although you can set up email alerts based on paths), which was easy to do in Special:CodeReview. Free-form tagging of revisions would also be nice (another feature missed from Special:CodeReview). Last of all, the gerrit user interface gets really clunky when a patchset has been amended multiple times (also, I wish git-review would ask for a patch-set message if it's not the first version; it is a real kludge to modify the commit message with "Patchset 6: rebase"-type messages). With that said, I do understand from the gerrit alternatives discussion that gerrit is the only system that even remotely meets our needs, so I am by no means advocating switching.

I suppose one of the things I like least about gerrit (but understand the reasons for, and am not advocating changing) is the gated trunk model. For the non-technical audience (although to be honest, I'd be surprised if anyone is actually reading this, and if they are and are non-technical, that they got this far), the gated trunk model is roughly a spiffier version of FlaggedRevisions/PendingChanges, but for computer programs instead of wikis. I've found it has some of the drawbacks that FlaggedRevs detractors were all talking about — namely, less instant gratification. In the SVN days, if you had commit access, you coded your feature or bug fix, hit commit, and that was the end of that. Sure, someone would eventually come along and review it (in a similar way to how edit patrol works on wikis), and reviewers were not afraid to revert something if there was something wrong with it. However, you still had to do something wrong in order for it to be reverted. With gerrit, someone has to approve your commit, as opposed to merely not finding an issue with it. Thus if nobody cares, your commit could sit in limbo for weeks or even months before anyone approves it.

So all in all our great glorious git future is growing on me more and more. There are still things I miss from the old system, but with time, perhaps that will no longer be the case.

Tuesday, December 27, 2011

Extension:PageInCat

I haven't blogged recently (or really ever), so I thought I'd make a rambley post about an extension I'm currently writing.


Recently, at Amgine's suggestion, I have been working on a new MediaWiki extension called PageInCat. What it does is add a new parser function: {{#incat:Some category|text if current page is in "Some category"|text if it is not}}. At first glance I thought it'd be fairly straightforward, but it turned out to be tricky to get right.


It's fairly easy to determine if a page is in a specific category: just query the database. The problem is that when we're reading the {{#incat, it is before the page is saved, so the db would have the categories for the previous version of the page, not the version we are in the process of saving. Thus it would work fine if no categories were added or removed in this revision; however, if categories did change, the result wouldn't reflect the new changes.
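The easy part looks roughly like this (a sketch, not the extension's actual code - the helper name is made up):

```php
<?php
// Check the *saved* category membership of a page via the categorylinks table.
function pageIsInCategory( Title $title, $categoryName ) {
	$dbr = wfGetDB( DB_SLAVE );
	$found = $dbr->selectField(
		'categorylinks',
		'cl_from',
		array(
			'cl_from' => $title->getArticleID(),
			'cl_to'   => str_replace( ' ', '_', $categoryName ),
		),
		__METHOD__
	);
	// Note the catch described above: during a save this reflects the
	// categories of the *previous* revision, not the text being saved.
	return $found !== false;
}
```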


The solution I used was to mark the page as vary-revision. This is a signal to MediaWiki that the page varies with how it's saved to the database. The original purpose of vary-revision was the {{REVISIONID}} magic word, which inserts the page's revision number, something that can only be determined once the page is saved to the db. With the page marked as such, MediaWiki will only serve users versions of the page rendered after it is saved in the DB.
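In code, the parser function handler ends up looking something like this sketch (the names are illustrative, and it reuses the helper sketched above):

```php
<?php
// Illustrative #incat handler: flag the output as vary-revision so MediaWiki
// re-renders the page after the save actually hits the database.
function incatRender( Parser $parser, $category, $ifIn = '', $ifNotIn = '' ) {
	$parser->getOutput()->setFlag( 'vary-revision' );

	return pageIsInCategory( $parser->getTitle(), $category )
		? $ifIn
		: $ifNotIn;
}
```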


vary-revision fixes the problem for saving the page; however, previews still don't work, because previews never get saved, so they are never inserted into the db. So what the extension does is hook into EditPageGetPreviewText, which is run right before the preview is generated. It takes the edit box text, parses it, and stores the resulting categories. Next, once MediaWiki does the actual preview, it hooks into ParserBeforeInternalParse, which is a hook run very early in the parse process. At this point it checks if we already have the categories for this text stored, and if so uses those when calculating the #incat's for the preview.
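Roughly, the hook wiring looks like this (the hook names are the real ones mentioned above; the signatures are abbreviated and the handler bodies are stand-ins):

```php
<?php
$wgHooks['EditPageGetPreviewText'][] = function ( $editPage, &$text ) {
	// First pass: parse the edit-box text just to collect the categories it
	// would end up in, and stash them where the next hook can see them.
	// ... parse $text, remember the resulting category list ...
	return true;
};

$wgHooks['ParserBeforeInternalParse'][] = function ( &$parser, &$text, &$stripState ) {
	// Second pass (the real preview): if we stashed categories for this text,
	// answer #incat from that list instead of from the database.
	return true;
};
```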


This makes the preview give the correct result, albeit at the price of parsing the preview text twice, slowing down the preview process.

However, there's one more situation where the extension could give wrong results during preview (or saving, for that matter). What if someone does something like {{#incat:Foo||[[category:Foo]]}} (read: the page is in category Foo only if it is not in category Foo)? There's really no correct answer for whether the page is in category Foo or not (as it is self-contradictory), so #incat can't choose the right result. A less pathological case would be #incat's that depend on each other - if the page is in Foo add category Bar, if it's in Bar add category Baz, if it's in Baz add category Fred, and so on. The category memberships can't be determined in this case by the two-stage approach (figure out which categories the article is in, then base the #incat's on that), as each category would only be determined to be included once the previous category in the chain was determined to be included.


Really there's not much we can do in these cases. Thus instead of trying to prevent it, the extension tries to warn the user. What it does is keep track of what response #incat gave, and then at the end of the parse (during ParserAfterTidy) it checks whether the #incat responses match the actual categories of the page. If they don't match, it presents a warning at the top of the page during preview, via $parser->getOutput()->addWarning(), which is similar to what happens if someone exceeds the expensive parser function limit. (It doesn't add a tracking category like the expensive parser function warning does, but it certainly could if that would be useful.)
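That final check is conceptually something like this (simplified; the real extension tracks every individual #incat answer, and the message key below is hypothetical):

```php
<?php
$wgHooks['ParserAfterTidy'][] = function ( &$parser, &$text ) {
	// Categories the page actually ended up in during this parse.
	$actualCats = array_keys( $parser->getOutput()->getCategories() );

	// ... compare $actualCats against the answers #incat gave earlier ...
	$mismatch = false; // placeholder for that comparison

	if ( $mismatch ) {
		// 'pageincat-mismatch' is a hypothetical message key.
		$parser->getOutput()->addWarning( wfMessage( 'pageincat-mismatch' )->text() );
	}
	return true;
};
```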


Anyways, hopefully the extension is useful to someone :)

Tuesday, July 20, 2010

image metadata

I thought I'd write a blog post about my Google Summer of Code project. I've never been much of a blogger, but I see lots of my fellow GSoC'ers blogging, so I thought I'd write a post. My project is to try to improve MediaWiki's support for image metadata. Currently MediaWiki will extract metadata from an image and put a little table at the bottom of the image page detailing all the metadata (for example, see http://commons.wikimedia.org/wiki/File:%C3%89cole_militaire_2545x809.jpg#metadata ).

However, this is far from all the metadata embedded in an image. In fact, MediaWiki currently only extracts Exif metadata. Exif metadata is arguably the most popular form of metadata, so if you're going to extract only one, Exif is a good choice. Every time you take a picture with your digital camera, it adds Exif data to the picture. Most of this type of data is technical - f-number, shutter speed, camera model, etc. You can also encode things like artist, copyright, and image description in Exif, however that is much rarer.
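To give a flavour of where this data comes from, here is what reading it with plain PHP looks like (MediaWiki has its own Exif reader; the file name is an example, and which tags are present depends entirely on the camera):

```php
<?php
// exif_read_data() returns the raw Exif sections, grouped by IFD.
$exif = exif_read_data( 'photo.jpg', null, true );
if ( $exif !== false ) {
	echo $exif['IFD0']['Model'], "\n";        // e.g. "EX-Z55"
	echo $exif['EXIF']['FNumber'], "\n";      // e.g. "43/10", i.e. f/4.3
	echo $exif['EXIF']['ExposureTime'], "\n"; // e.g. "1/800"
}
// Descriptive tags like Artist or ImageDescription live here too,
// but cameras rarely fill them in.
```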

What I'm doing is, first of all, fixing up the Exif support a little bit. Currently some of the Exif tags are not supported (Bug 13172). Most of these are fairly obscure tags no one really cares about, but there are some exceptions like GPSLatitude, GPSLongitude, and UserComment.

I'm also (among other things) adding support for IPTC-IIM tags. IPTC-IIM is a very old format for transmitting news stories between news agencies. Adobe adopted parts of this format for embedding metadata in JPEG files with Photoshop. Nowadays it's slowly being replaced by XMP, but many photos still use it. IPTC metadata tends to be more descriptive in nature (stuff like title, author, etc.), compared to how Exif metadata is technical (aperture, shutter speed) in nature.
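Again just as a flavour of the format (not MediaWiki's actual reader): the IIM block sits in the JPEG's APP13 segment, which plain PHP can already get at:

```php
<?php
// getimagesize() exposes the APP13 segment; iptcparse() decodes the IIM
// records inside it, addressed by record:dataset numbers.
getimagesize( 'photo.jpg', $info );
if ( isset( $info['APP13'] ) ) {
	$iptc = iptcparse( $info['APP13'] );
	echo $iptc['2#105'][0], "\n"; // headline
	echo $iptc['2#120'][0], "\n"; // caption / description
	echo $iptc['2#080'][0], "\n"; // by-line (author)
}
```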

My code will also try to sort out conflicts. Sometimes there are conflicting values in the different metadata formats. If an image has two different descriptions in the Exif and IPTC data, which should be displayed? Exif, IPTC, or both? Luckily for me, several companies involved in images got together and thought long and hard about that issue. They then produced a standard for how to act if there is a conflict [1]. For example, if the IPTC and Exif data conflict on the image description, then the Exif data wins.
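Conceptually, the conflict handling is just a fixed precedence per field. A toy version (the function and keys are illustrative only; the actual rules come from the standard):

```php
<?php
// Toy precedence rule for one field: per the guideline mentioned above,
// the Exif description wins over the IPTC-IIM caption.
function resolveDescription( array $exif, array $iptc ) {
	if ( !empty( $exif['ImageDescription'] ) ) {
		return $exif['ImageDescription'];
	}
	if ( !empty( $iptc['2#120'] ) ) {
		return $iptc['2#120'][0]; // IIM Caption/Abstract
	}
	return null;
}
```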



Consider [[File:2005-09-17 10-01 Provence 641 St Rémy-de-Provence - Glanum.jpg]]

On commons the metadata table looks like:



But on my test wiki the table looks like:

Camera manufacturer: CASIO COMPUTER CO.,LTD
Camera model: EX-Z55
Exposure time: 1/800 sec (0.00125)
F Number: f/4.3
Date and time of data generation: 14:21, 28 September 2005
Lens focal length: 5.8 mm
Latitude: 43° 46′ 21.35″ N
Longitude: 4° 50′ 1.34″ E
Orientation: Normal
Horizontal resolution: 72 dpi
Vertical resolution: 72 dpi
Software used: Microsoft Pro Photo Tools
File change date and time: 14:21, 28 September 2005
Y and C positioning: Centered
Exposure Program: Normal program
Exif version: 2.21
Date and time of digitizing: 14:21, 28 September 2005
Meaning of each component: Y, Cb, Cr, does not exist
Image compression mode: 3.66666666667
Exposure bias: 0
Maximum land aperture: 2.8
Metering mode: Pattern
Light source: Unknown
Flash: Flash did not fire, compulsory flash suppression
Supported Flashpix version: 0,100
Color space: sRGB
File source: DSC
Custom image processing: Normal process
Exposure mode: Auto exposure
White balance: Auto white balance
Focal length in 35 mm film: 35
Scene capture type: Standard
Scene control: None
Contrast: Normal
Saturation: Normal
Sharpness: Normal


Most notably, GPS information is now supported. As a note, the Wikipedia links for the camera model are a Commons customization, which is why they don't appear in my test output.

As another example, consider [[file:Pöstlingbahn TFXV.jpg]]. On Commons, it has no metadata extracted. (It does have some information about the image on the page, but this was all hand-entered by a human.) On my test wiki, the following metadata table is generated:



I'm almost done with IIM metadata, and plan to start working on XMP metadata soon. If you're curious, all the code is currently in the img_metadata branch. You can also look at the status page, which I will try to update occasionally.

Cheers,
Bawolff

Sunday, June 6, 2010

drama defined

Drama-defined:


This is what the rc has looked like all day... :(

Wednesday, April 14, 2010

restyling the reader feedback

Recently at Wikinews we've been trying to give a more inviting look to the reader feedback extension. This extension adds a little box at the bottom of an article inviting readers to rate it. Many people felt that it could do with a little more snazz. The extension renders the form as a bunch of boring old HTML <select>s:



Some people wanted something more like the typical rating systems you find on websites nowadays (YouTube and NewsTrust were two prominent examples given of what people were looking for). So we tried experimenting with some custom JavaScript to give it a new look. First we experimented with Unicode stars (considering the nine billion different types of stars in Unicode, it's amazing how few have filled-in and non-filled-in variants). Then we moved to different star images:



Eventually we chose to use red stars. On hovering, it gives users help text to describe the rating they are giving (you know, for the stupid people who think one star means excellent). Here's the final result:




You'll also notice in the images a "Comment on this article" box. This was the only part that was a little ugly (since it strays into stuff that should be done at the PHP level). It still doesn't handle captchas all that well for anons who include links (it redirects them to an edit page for the moment; eventually it will just prompt people to answer the captcha, when I get around to doing that. For the moment the fallback is OK, as not too many people [read: no one as of yet other than me during testing] post links).

However, if our comment pages are any indication, this feature seems to be quite widely used. There is some nonsense posted, but there are also some nice comments posted via the form. What's really surprising is people commenting on our older articles [1]. Often at Wikinews we assume articles have a shelf life of at most a week, and that after that almost no one reads them. That appears not to be the case. We were considering having a comment form on the Main Page, but weren't sure where we'd want the comments to go (Talk:Main Page, Opinions:Main Page, Wikinews:Water cooler/assistance, Wikinews:Guest book, etc. - none of them really seem to fit), so currently there is only the rating part on the Main Page.

With the adoption of LiquidThreads, our comment pages have really been taking off. I think the Special:NewMessages notification in the top right corner brings commenters back to the comment page to respond to new comments. For example, [[Comments:Large Hadron Collider reaches milestone]] has, while not the most intelligent of conversations, still quite the conversation going. Before, somebody would post something and then forget; now they have a reminder that they have new messages, and thus respond to those who reply to them, and so on.

After a couple of days, it really does appear that these changes made a difference. Here is the graph of how people rated the Main Page over the last month. The green/blue line is how they rated us (in reliability), and the red line is how many people rated on a 1:6 scale. Notice how the number of raters per day increased almost 10 times!



Source: http://en.wikinews.org/w/index.php?title=Special:RatingHistory&target=Main_Page

Sunday, April 11, 2010

mediawiki update brings new goodies for wikinews

Now that Wikimedia got updated, we get to have all the cool new features, which is exciting! I always love software updates. Furthermore, almost none of the JavaScript broke (well, one minor thing we stole from Commons did, but otherwise all is well; none of *my* JS broke ;). Well, actually, one thing I wrote broke due to the MediaWiki and user namespaces on Wiktionary becoming first-letter case insensitive, but other than that nothing broke. On the bright side, due to the software update, my WiktLookup gadget now works in IE (or should anyway, I haven't tested).

The one feature we [at Wikinews] were waiting for was a change to DynamicPageList that allows us to put our developing articles on the Main Page without them being picked up by Google News. (Google News assumes any article on our main page with a number in the URL that does not have nofollow is a published news article. Since we allow anyone to create an article, we don't want our articles in progress being picked up by Google.) Thus {{main devel}} is back on the main page after a long absence.

Speaking of DynamicPageList (to clarify, the Wikimedia one, not DPL2), it has a number of cool new features for us at Wikinews, and for other wikis that use it. (I'm especially happy about this, as I contributed a patch for it, and it's really cool to see something I've done go live.) Among other things, it can now list articles alphabetically (a feature request from Wikibooks), and you can specify the format of the date the article was added to the category (before it was just a boolean on/off switch). However, one of the coolest new features (imho) is the ability to use image galleries as an output mode. One can now use DynamicPageList to make a <gallery> of, say, the first 20 images that are in both Category X and Category Y but not in Category Z.

Here's to all the devs for continuing to do an excellent job with Mediawiki.

Cheers,
Bawolff