Flare to MediaWiki to Flare (part 1, getting started)

I asked for suggestions for posts in the Users of MadCap Flare group on LinkedIn. The first one was to see content flow freely between Flare and a wiki. I’m not promising anything, but the suggestion is a good one and worth a try. I saw some posts on the MadCap Software forums about this. It sounds like Flare to wiki to Flare has been done with varying degrees of success.

I’m going to try this with MediaWiki. Click a few links from a Google search and you’ll see that MediaWiki is not necessarily the preferred platform for wiki customization. But Wikipedia is powered by MediaWiki, and that is a strong enough argument for me. The idea is to blog this as it happens. If I misstep, I’m going to blog it.

If you are reading this, you probably already have Flare installed. I have never implemented a MediaWiki wiki, so I am not going to assume everyone else has. The download is appropriately posted on the MediaWiki wiki. I downloaded 1.19.1. It is a .tar.gz file, and the MediaWiki wiki notes that you can use 7-Zip to extract the files. Use the file manager application that comes with 7-Zip. You will have to drill into the archive twice (the .tar.gz wraps a .tar) to get to the actual folder.
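
If you would rather script the extraction than click through 7-Zip, a small Python sketch like this should also work (the file name matches my download; adjust for yours):

    import tarfile

    # Extract the downloaded archive; tarfile handles both the .gz
    # compression and the .tar container in one pass.
    with tarfile.open("mediawiki-1.19.1.tar.gz", "r:gz") as archive:
        archive.extractall(".")  # yields the mediawiki-1.19.1 folder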

I’m testing this on my computer, a Windows 7 laptop. I copied the mediawiki-1.19.1 folder from the downloaded archive to C:\inetpub\wwwroot. Then, in a web browser, I entered this URL: http://localhost/mediawiki-1.19.1/index.php, which brought up a webpage with a link to install:

Please set up the wiki first.

I clicked that and followed the wizard. For my setup, I re-installed MySQL separately. Remember to run these items as an administrator. I had previously installed MySQL for another stack and couldn’t remember the password, so I had to reset it. Instructions for doing that on Windows are here: How to Reset the Root Password. After that, I mostly selected the defaults in the wizard. The point is just to get a test wiki up and running.

A freshly installed MediaWiki wiki already has a page. I didn’t add anything else right away. The big question is how to retrieve the wiki content. The solution should be programmatic and repeatable. MediaWiki has an API. There is also a bulk export. Without committing to either one, I explored the API first.

One requirement for this round trip is to get pages from the wiki and convert them to Flare topics. The API is a web service, and if you browse to its endpoint on your installation of MediaWiki, you can see a generated documentation page. Again, I installed the wiki locally. My installation’s API endpoint is http://localhost/mediawiki-1.19.1/api.php.

The MediaWiki wiki has an introductory example that describes a GET of the content of the main page on the English version of Wikipedia. The equivalent call for my installation is:

http://localhost/mediawiki-1.19.1/api.php?format=xml&action=query&titles=Main%20Page&prop=revisions&rvprop=content
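
If you want to make the same call from a script instead of a browser, a minimal Python 3 sketch might look like this (same endpoint and parameters as above):

    import urllib.request

    # Endpoint for my local installation; adjust as needed.
    api = "http://localhost/mediawiki-1.19.1/api.php"
    url = (api + "?format=xml&action=query&titles=Main%20Page"
           "&prop=revisions&rvprop=content")

    # Print the raw XML response, exactly what the browser showed.
    with urllib.request.urlopen(url) as response:
        print(response.read().decode("utf-8"))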

When I viewed the call in a web browser, the browser displayed the XML returned from the call. I could have specified another format such as JSON or PHP, but I thought XML would be a safe place to start. Unfortunately, the returned XML wraps another kind of markup, which is the actual content. Parsing XML would be great, since that is what Flare wants. Instead, it is necessary to parse Wikitext. It has been done before, and MediaWiki provides a repository of links at Alternative parsers.
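
To make the wrapping concrete, here is a rough sketch that walks the XML envelope and prints whatever the <rev> elements contain; the element name reflects the response shape of my 1.19.1 installation, so treat it as an assumption:

    import urllib.request
    import xml.etree.ElementTree as ET

    api = "http://localhost/mediawiki-1.19.1/api.php"
    url = (api + "?format=xml&action=query&titles=Main%20Page"
           "&prop=revisions&rvprop=content")

    with urllib.request.urlopen(url) as response:
        root = ET.parse(response).getroot()

    # The envelope is XML, but the text inside each <rev> element is
    # Wikitext, which still needs a parser of its own.
    for rev in root.iter("rev"):
        print(rev.text)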

A few side notes: it may be worth looking into direct queries of the database that houses the wiki content. It is also tempting to grab HTML instead. But I have a feeling that would greatly complicate the round trip back into the wiki.

The upside of the API is that the other information is nicely organized in the XML. The content itself would be difficult to use without a Wikitext parser. Although it would be interesting to build one, I won’t. But the rest of the metadata, and probably the navigation, seem easier to handle. Here is a call to get all pages:

http://localhost/mediawiki-1.19.1/api.php?action=query&generator=allpages
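
And a rough sketch of reading that response from a script. Note that I added format=xml, since the default rendering is meant for a browser, and that a real script would also need to handle continuation for wikis with more pages than fit in one response:

    import urllib.request
    import xml.etree.ElementTree as ET

    api = "http://localhost/mediawiki-1.19.1/api.php"
    url = api + "?format=xml&action=query&generator=allpages"

    with urllib.request.urlopen(url) as response:
        root = ET.parse(response).getroot()

    # Each <page> element carries its title as an attribute.
    for page in root.iter("page"):
        print(page.get("title"))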

To recap this initial exploration: installing MediaWiki isn’t perfectly straightforward, but it isn’t terrible. MediaWiki has an API which, among other things, can be used to retrieve content. The content is not in XML but in Wikitext. There are many Wikitext parsers out there already, and there is no reason to reinvent the wheel.

6 comments

  1. I’m following this series of posts with great interest. Getting tech comms content into and out of a Wiki is an emerging challenge.

    As this series progresses, I’ll be especially interested to see how a given Wiki topic or page is addressed by the exchange mechanism. That is, how does one access a particular “target” in a Wiki?

    To be continued…

  2. I am looking for information on the opposite end of the spectrum. That is, how to convert Confluence wiki pages to PDF using Flare. Any tips?

    1. I haven’t done it. Out of curiosity, where does the Confluence PDF export fall short? You can export Confluence pages to Word and import Word into Flare. You can also import DITA into Flare, but I don’t know if there is a native DITA export from Confluence. There is an outside Confluence2DITA project, I think. The version of Confluence also matters because the markup language changed.

  3. Thomas, are you saying that PDFs created from the Confluence wiki are good? But my task is to convert wiki pages to PDF or help files using Flare. Any tips?

    1. I would stick with whatever level of export from Confluence and import into Flare you feel comfortable with. If you don’t have the resources to pull the markup, build a transformation, and apply it, I would go with an export option native to Confluence that has a corresponding import option in Flare, such as Word.
