Experiments: Reference

The vision of MDN’s Reference product is to use the power of MDN to build the most accessible, authoritative source of information about web standard technologies for web developers. Accomplishing this vision means optimizing and improving on the product’s present success.

Optimization requires measurement, and MDN’s current measurements need improvement. Below I describe two measurement improvements underway, plus a few optimization experiments:

1. Helpfulness Ratings

Information quality is the essential feature of any reference, but MDN currently has no direct measures of quality. Bug 1032455 hypothesizes that MDN’s audience would provide qualitative feedback that would help measure and improve MDN’s content quality. But qualitative feedback is new to MDN, and we need to explore how best to implement it. Comment 37 on that bug suggests that we use a 3rd-party “micro-survey” widget to help us understand how to get the most from this mechanism before we implement it in our own codebase. The widget will help us answer these critical questions:

  • How can we convince readers to rate content? (We can experiment with different calls to action in the widget.)
  • How do we make sense of ratings? (We can tune the questions in the widget until their responses give us actionable information.)
  • How can we use those ratings to improve content? (We can design a process that turns good information gleaned from the widget into a set of content improvement opportunities; we can solicit contributor help with those opportunities.)
  • How will we know it is working? (We can review revisions before and after the widget’s introduction; our own qualitative assessment should be enough to validate whether a qualitative feedback mechanism is worth more investment.)

If the 3rd-party widget and the lightweight processes we build around it yield measurable improvements, we may wish to invest more heavily in…

  • a proprietary micro-survey tool
  • dashboards for content improvement opportunities
  • integration with MDN analytics tools

Status of this experiment: MDN’s product council has agreed to the proposal, and vendor-review bugs for the 3rd-party tool have been filed.

2. Metrics Dashboard

In an earlier post I depicted the state of MDN’s metrics with this illustration:

[metrics_status illustration]

In short, MDN has not implemented enough measures to make good data-driven decisions, and it has no place to house most of those measurements. Bug 1133071 hypothesizes that creating a place to visualize metrics will help us identify new opportunities for improvement. With a metrics dashboard we can answer these questions:

  • What metrics should be on a metrics dashboard?
  • Who should have access to it?
  • What metrics are most valuable for measuring the success of our products?
  • How can we directly affect the metrics we care about?

Status of this experiment: At the 2015 Hack on MDN meetup, this idea was pitched and undertaken. A pull request attached to bug 973612 includes code to extract data from the MDN platform and add it to Elasticsearch. Upcoming bugs will create periodic jobs to populate the Elasticsearch index, create a Kibana dashboard for the data, and add it (via iframe) to a page on MDN.
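The extraction step above boils down to shaping platform data into documents Elasticsearch can index. A minimal sketch of that shaping, assuming hypothetical revision fields (`slug`, `editor`, `created`) and the 2015-era bulk API’s action/source line format:

```python
import json

def to_bulk_lines(docs, index="mdn-metrics", doc_type="revision"):
    """Serialize metric documents into Elasticsearch bulk-API lines.

    Each document becomes an action line followed by its source line.
    (doc_type reflects 2015-era Elasticsearch; later versions drop types.)
    """
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index, "_type": doc_type}}))
        lines.append(json.dumps(doc))
    # The bulk API requires a trailing newline after the last line.
    return "\n".join(lines) + "\n"

# Hypothetical revision metrics extracted from the MDN platform
revisions = [
    {"slug": "Web/API/Fetch", "editor": "alice", "created": "2015-04-01"},
    {"slug": "Web/CSS/flex", "editor": "bob", "created": "2015-04-02"},
]

payload = to_bulk_lines(revisions)
print(payload)
```

A periodic job would regenerate this payload and POST it to the cluster’s `/_bulk` endpoint, keeping the index the Kibana dashboard reads from up to date.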

3. Social Sharing

For user-generated content sites like MDN, social media is an essential driver of traffic. People who visit a page may share it with their social networks, and those shares drive more traffic back to MDN. But MDN lacks a social sharing widget (among other things common to user-generated content sites):

[feature_status illustration]

Bug 875062 hypothesizes that adding a social sharing widget to MDN’s reference pages could generate 20 times more social sharing than MDN’s current average. Since that bug was filed, MDN has seen some validation of this via the Fellowship page: a social sharing link at the bottom of that page generated 10 times as many shares as MDN’s average. This experiment will test social sharing and answer questions such as…

  • What placement/design is the most powerful?
  • What pages get the most shares and which shares get the most interaction?
  • Can we derive anything meaningful from the things people say when they share MDN links?

Status of this experiment: The code for social sharing has been integrated into the MDN platform behind a feature flag. Bug 1145630 proposes to split-test placement and design to determine the optimal location before final implementation.
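Split-testing placement and design requires bucketing visitors into variants consistently. One common approach, sketched here with hypothetical experiment and variant names, hashes a stable user identifier so each visitor always sees the same placement:

```python
import hashlib

def assign_variant(user_id, experiment="share-widget-placement",
                   variants=("top", "bottom", "sidebar")):
    """Deterministically bucket a user into one split-test variant.

    Hashing the experiment name together with the user id keeps
    assignments stable across visits and independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same visitor always lands in the same bucket:
print(assign_variant("user-42"))
```

Per-variant share counts can then be compared to pick the winning placement before the feature flag is removed.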

4. Interactive Code Samples

Popular online code sandboxes like Codepen.io and JSFiddle let users quickly experiment with code and see its effects. Some of MDN’s competitors also implement such a feature. Surveys indicate that MDN’s audience considers this a gap in MDN’s features. Anecdotes indicate that learners consider this feature essential to learning. Contributors also might benefit from using a code sandbox for composing examples since such tools provide validation and testing opportunities.

These factors suggest that MDN should implement interactive code samples, but they imply a multitude of use cases that do not completely overlap. Bug 1148743 proposes to start with a lightweight implementation serving one use case and expand to more use cases as we learn. It will create a way for viewers of a code sample on MDN to open the sample in JSFiddle. This experiment will answer these questions:

  • Do people use the feature?
  • Who uses it?
  • How long do they spend tinkering with code in the sandbox?
  • Is it helpful to them?
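The “open in JSFiddle” flow can be as lightweight as a prefilled HTML form, since JSFiddle accepts fiddle content via its POST API. A sketch assuming that endpoint and hypothetical sample content:

```python
from html import escape

# JSFiddle's documented "display fiddle from POST" endpoint (no framework)
JSFIDDLE_ENDPOINT = "https://jsfiddle.net/api/post/library/pure/"

def open_in_jsfiddle_form(html_src, css_src, js_src, title):
    """Render a hidden form that posts a code sample to JSFiddle.

    Submitting the form opens the sample as a new fiddle in a new tab,
    which is the lightweight "open in sandbox" flow the bug describes.
    """
    fields = {"html": html_src, "css": css_src, "js": js_src, "title": title}
    inputs = "\n".join(
        f'<input type="hidden" name="{name}" value="{escape(value, quote=True)}">'
        for name, value in fields.items()
    )
    return (f'<form method="post" action="{JSFIDDLE_ENDPOINT}" target="_blank">\n'
            f"{inputs}\n"
            '<button type="submit">Open in JSFiddle</button>\n'
            "</form>")

print(open_in_jsfiddle_form("<p>Hello</p>", "p { color: red; }",
                            "console.log('hi');", "MDN sample"))
```

Clicks on the rendered button can be counted to answer the “do people use it?” question above.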

The 3rd party widget required for the Helpfulness Ratings experiment can power the qualitative assessment necessary to know how this feature performs with MDN’s various audiences. If it is successful, future investment in this specific approach (or another similar approach) could…

  • Allow editors of a page to open samples in JSFiddle from the editing interface
  • Allow editors of a sample to save it to an MDN page
  • Create learning exercises that implement the sandbox

Status of this experiment: A pull request attached to Bug 1148743 will make this available for testing by administrators.

5. Akismet spam integration

Since late 2014, MDN has been the victim of a persistent spam attack. Triaging this spam demands constant vigilance from MDN contributors and staff. Most of the spam is blatant: it seems likely that a heuristic spam-detection application could spare the human triage team some work. Bug 1124358 hypothesizes that Akismet, a popular spam prevention tool, might be up to the task. Implementing this bug will answer just one question:

  • Can Akismet accurately flag spam posts like the ones MDN’s triage team handles, without improperly flagging valid content?
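Akismet’s comment-check call is a simple HTTPS POST: the API key forms the subdomain, and the response body is the literal string “true” (spam) or “false” (ham). A sketch of building that request, with hypothetical field values standing in for an MDN wiki revision:

```python
def akismet_comment_check_request(api_key, blog_url, revision):
    """Build the endpoint and form payload for an Akismet comment-check call.

    A real client would POST `payload` to `endpoint` and treat a response
    body of "true" as spam. The comment_type label here is a hypothetical
    choice for MDN wiki edits.
    """
    endpoint = f"https://{api_key}.rest.akismet.com/1.1/comment-check"
    payload = {
        "blog": blog_url,                      # required by Akismet
        "user_ip": revision["ip"],             # required by Akismet
        "user_agent": revision["user_agent"],  # required by Akismet
        "comment_type": "wiki-revision",
        "comment_author": revision["author"],
        "comment_content": revision["content"],
    }
    return endpoint, payload

# Hypothetical revision pulled from the triage queue
endpoint, payload = akismet_comment_check_request(
    "abc123", "https://developer.mozilla.org",
    {"ip": "203.0.113.7", "user_agent": "Mozilla/5.0",
     "author": "spammer", "content": "cheap pills here"})
print(endpoint)
```

False positives matter as much as catch rate here, so flagged revisions would still go through human review while the experiment runs.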

Status of this experiment: Proposed. MDN fans and contributors with API development experience are encouraged to reach out!

MDN Product Talk: The Series

  1. Introduction
  2. Business Context
  3. The Case for Experiments
  4. Product Vision
  5. Reference Experiments
  6. Learning Experiments
  7. Services Experiments