Part 4: Testing an In-Flight Entertainment Solution from the Ground Up


In the first three installments of this blog series, my teammates Ted, Paul and Bill gave some great insights into the strategy, architecture and development efforts that went into the In-Flight Entertainment (IFE) solution that Ratio built for SkyCast Solutions and Alaska Airlines (in partnership with Microsoft and others). For the final installment of the series, I will share my insights on what made this effort a success from the quality assurance/testing point of view.

Please accept my apologies in advance, as I’m going to use baseball analogies liberally for this occasion.  Ratio headquarters is located in the heart of Pioneer Square in Seattle, a stone’s throw away from Safeco Field where the Seattle Mariners just kicked off their 2015 season.  Baseball is in the air right now.  Go Mariners!

 

The QA Role

Like a pitcher on a baseball team, QA Engineers lead the defensive effort on a software implementation team. We throw fastballs, change-ups, sliders and your occasional knuckleball to try to flummox the product under test and see how it responds to adverse circumstances. It's not the most glamorous job, though. In fact, it is not unusual to have a bit of competitive tension with other members of the team. In the course of a day's work, we expose others' mistakes and oversights, and in some cases we may seem to prolong projects by bringing issues to light. This may delay releases and push project budgets into the red.

Not unlike hitters coming up to the plate to face Mariners' ace Felix Hernandez, those who are good at what I do can bring a look of fear and desperation to an experienced developer's face when they see us approaching.

Although a little healthy competition never hurts, we all play for the same team here. Unlike a baseball pitcher, we ultimately want to see the balls get knocked out of the park. Software testers take pride in the bugs we find, perhaps a little too much sometimes. But what we really live for is to see those issues get fixed and come together in the form of a solid product.

 

The In-Flight Entertainment Mission

The Alaska/SkyCast IFE project was one of the more challenging projects I have worked on and ultimately one of the most rewarding. As was described in the previous entries in this blog series, one of the challenges our team faced was working with numerous partners to pull a large number of components into a cohesive solution. The things we had on our plate to test included: the “Movies & TV” application that plays back protected video content from a memory card in the device, a “Music” application built to play curated song playlists, a magazine viewing application, six different Xbox game titles that Microsoft customized for this solution, three different Windows services to collect and transmit analytics data, another Windows service to shut down the device after inactivity, and an app called “Settings” that shows users their remaining battery life and lets them adjust the screen brightness. We're already at more than a dozen different components, and that's not even counting the content management system (CMS) we built to manage the multimedia content that gets deployed to these devices!

All of the above components (except the CMS) needed to run on devices with a customized, “locked down” Windows 8.1 configuration and needed to function regardless of whether an Internet connection was available. We also needed to ensure that our solution was compatible with the in-flight, network-based video service that would be part of the Alaska Beyond initiative. That development effort was occurring in parallel with ours, and the only way to properly test the integration was to go to a specially equipped lab in Chicago!

 

Test Planning and Strategy

With all of the moving parts involved in this effort, having a strong plan for test execution was mission critical. By the time I joined the team, other team members had already created a number of “epics” and several dozen “user stories” defining the various components of the solution we would be building. I reviewed the stories and asked questions to get a better understanding of what we were building. Before I received my first application build, I started creating a “Test Plan,” which in this case was a 19-page document outlining the approach that would be used to test the product. This document details the testing strategy, identifies the resources needed to complete the effort, and calls out the risks and dependencies that might get in the way of execution. The test plan is peer reviewed so that we can align on the expectations the solution will be held to during the test effort.

But the test plan is a high-level document, and as the saying goes, “the devil is in the details.” Since we use an agile methodology, many of the requirements were defined and refined after the development effort was in flight. We start building right away and emphasize failing early so we can course correct as we go along. Requirements are not all collected up front at the beginning of the project; many evolve over its course through team collaboration.

Due to the fast pace at which we work, many tests are created at the time of execution, so testers must also be skilled test writers. As agile team members, we're much like jazz musicians. Improvisation is a core skill, honed over hours of practice. We don't just read notes from a page; we follow our instincts and keep our eyes and ears open so we can carefully monitor and react to what's going on around us. It's hard to rationalize, but good software testers have a sixth sense of sorts that seems to magnetically draw us to problems. We also tend to be thorough, detail-oriented individuals.

Before any given feature is implemented, team members from each discipline can review it and provide input. In our culture, input from the test team is highly valued. Most of our test team members are not proficient coders, but we add value by having a strong understanding of the end user experience, among other things. We're able to articulate recommendations for feature implementation through the lens of a typical user, without the bias that comes from having to worry about how to architect the solution. In some cases, we can prevent hours or days of work by making a simple suggestion before a feature is implemented.

With so many moving parts, this project would require a balanced and thorough test effort to be successful. We would need to leverage team collaboration and communication, create and document detailed requirements-based tests over the course of the project, and use creativity to drive effective exploratory testing.

 

Putting The Puzzle Together

Though I didn't give it much thought at the time, our strategist and project management team did a great job of organizing the timing of our development efforts so that everything could come together at the end of the project. We started off by creating the Movies & TV and Music applications that would run on the system. Though essential components of the system, these were low-hanging fruit, because we have a great team of experienced Windows developers on staff and video apps in particular are a core competency. We weren't able to implement the DRM (Digital Rights Management) components of the Movies & TV app right away due to a third-party dependency, but after about one month of development we had all of the other major functionality for those applications in place.

One essential ingredient that was missing when I received the application, though, was content. We wouldn't receive anything resembling “final” video content until months later, at which time the solution needed to be complete and ready to deploy. With “real” content so far off on the horizon, I knew that if we stood idly by, things would be rough going later on. Though our development team handed off a set of test files with a few images and test videos, that content was not representative of the load the application would be expected to carry later on. So I downloaded numerous movie trailers and poster art and assembled a manifest file that would more accurately represent the content end users would see when the product shipped. This effort was laborious, but it exposed some issues that otherwise might not have become evident until it was too late. Also, as it turned out, we later needed to put a prototype application in front of passengers at Sea-Tac Airport for usability testing; at that point, having a realistic set of content paid dividends. The collected test content was used yet again when the content management system we built to support the solution needed to be tested, so it was well worth the time investment.
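For readers who want a flavor of that kind of scaffolding, here is a minimal sketch of the idea in PowerShell. The folder layout, file naming convention and manifest format below are hypothetical placeholders for illustration, not the actual format our apps consumed.

    # Sketch only: assemble a test content manifest from a folder of downloaded trailers.
    # The paths, naming convention and manifest shape are invented for illustration.
    $contentRoot = 'D:\TestContent'   # e.g. the device's memory card or a local test folder

    $titles = Get-ChildItem -Path $contentRoot -Filter '*.mp4' | ForEach-Object {
        [PSCustomObject]@{
            Title     = $_.BaseName
            VideoFile = $_.Name
            PosterArt = "$($_.BaseName).jpg"   # assumes a matching poster art file
        }
    }

    # Drop the manifest next to the content so the app under test can load it.
    $manifest = ConvertTo-Json -InputObject @($titles)
    Set-Content -Path (Join-Path $contentRoot 'manifest.json') -Value $manifest -Encoding UTF8
    Write-Host "Manifest written for $(@($titles).Count) titles."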

I went through a similar effort to test our Music application. It is here that the project's most powerful test case bore its fruit. Here was the test: start playing a music playlist and then turn the device's screen off. Playback should continue for up to one hour, or until the end of the playlist, whichever came sooner. This simple test haunted us for months on end. When all was said and done, it exposed three defects: one in the device's audio driver, one in the device's firmware and a third in our application itself!
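A check along the following lines can keep a test like that honest without a tester having to babysit the device for the full hour. This is a simplified sketch rather than a script we actually ran, and the process name and log format are hypothetical placeholders.

    # Sketch: after the tester turns the screen off, confirm audio playback survives.
    # The process name and playback log below are hypothetical placeholders.
    $appProcess   = 'MusicApp'
    $playbackLog  = 'C:\IFE\Logs\playback.log'
    $checkMinutes = 60

    for ($i = 1; $i -le $checkMinutes; $i++) {
        Start-Sleep -Seconds 60
        if (-not (Get-Process -Name $appProcess -ErrorAction SilentlyContinue)) {
            Write-Host "FAIL: $appProcess exited after roughly $i minute(s)."
            exit 1
        }
    }

    # Cross-check the app's own log for completed tracks (hypothetical log format).
    $tracksPlayed = (Select-String -Path $playbackLog -Pattern 'TrackComplete' -ErrorAction SilentlyContinue).Count
    Write-Host "PASS: process stayed alive for $checkMinutes minutes; $tracksPlayed tracks logged."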

As the team blazed through efforts to assemble the smaller pieces of the puzzle, there was an elephant in the room that could not be ignored. The biggest task at hand was optimizing the custom Windows 8.1 configuration. This process ultimately entailed a great amount of exploration and testing. Deploying a build took about thirty minutes, and sometimes we had to take several builds per day to work through issues. Answers were not always straightforward, but thankfully we had a team of great folks at Microsoft to assist with that effort when we got stuck.

Once the solution became completely locked down, another problem emerged: system access was so restrictive that it became hard to collect information to diagnose issues, or even to determine whether tests had passed or failed! On the positive side, I learned to write batch scripts and PowerShell commands; in many cases that was the only way I could determine test outcomes. (Note to hackers planning on flying Alaska: don't get any crazy ideas; the mode that allowed me to run my test scripts is disabled on the devices in service.)
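To give a sense of what those scripts did, here is a simplified sketch of the general idea: confirm that a background service is alive and producing output, then copy the evidence somewhere a human can actually read it. The service name, output folder and results path are made-up placeholders, not the real ones from the project.

    # Sketch: gather pass/fail evidence on a locked-down device and copy it off-box.
    # Service name, output folder and results drive are hypothetical placeholders.
    $serviceName  = 'IfeAnalyticsService'
    $outputDir    = 'C:\ProgramData\IFE\Analytics'
    $resultsDrive = 'E:\TestResults'

    $service = Get-Service -Name $serviceName -ErrorAction SilentlyContinue
    $files   = Get-ChildItem -Path $outputDir -File -ErrorAction SilentlyContinue

    $serviceStatus = if ($service) { $service.Status.ToString() } else { 'NotFound' }
    $result = [PSCustomObject]@{
        Timestamp       = Get-Date -Format 'o'
        ServiceStatus   = $serviceStatus
        OutputFileCount = @($files).Count
    }

    # Write a small summary plus the raw output files to removable media for review.
    New-Item -ItemType Directory -Path $resultsDrive -Force | Out-Null
    Set-Content -Path (Join-Path $resultsDrive 'analytics-check.json') -Value (ConvertTo-Json -InputObject $result)
    if ($files) { Copy-Item -Path $files.FullName -Destination $resultsDrive }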

While we were chiseling out the custom Windows build, all of the other pieces of the project being built in parallel slowly started to fall into place. The “Settings” app was created to adjust screen brightness and monitor battery status. The services to collect all of the analytics data from the devices came together. Microsoft delivered games that met the offline requirements of the solution. The Next Issue magazine application was repurposed to run offline and accept content from the device's memory card. The DRM-enabled video player was eventually integrated into the Movies & TV application successfully, an essential achievement for ensuring that the product delivered the early release window movies that are its key selling point.

None of these things came about easily. Each component needed to be tested rigorously, and most had their own unique challenges and problems to be solved along the way. On a typical project, an unfulfilled external dependency can quickly grind activity to a halt. At least in this case, we had so many things to do that it always seemed like there was something else waiting to be done.

 

Teamwork

The planning and strategy that went into this effort were key to its success. But the real secret sauce that made the solution come together was great collaboration. Everyone was dedicated to each other's success, even if it meant going outside of our typical roles.

Though I'm not sure that dev lead Paul Cullin always looked forward to my visits to his desk, he would always hear out my point of view. In fact, he would even come by my desk to solicit my opinion from time to time. When it came time to test the in-flight, server-based video integration, I was not able to travel to Chicago to participate in the integration testing, so I armed Paul with a set of test cases, and he did a first-class job of running them and reporting back the results. And he had to fix the issues, too!

I also enjoyed collaborating with architect Bill Bronske, who taught me how to use some tools I hadn't used before and made sure that the analytics services he coded were highly configurable, setting me up for success when it came time to test them. These are just two examples among many of things that worked well. We also had a weekly cross-team conference call with representatives from Alaska, SkyCast, Microsoft and Ratio, which was essential for keeping the collective team in sync.
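That configurability matters more than it might sound. As a purely illustrative sketch (the config path, property names and service name below are invented, not the actual settings Bill exposed), being able to shorten an upload interval and point a service at a local listener turns an hours-long verification into a minutes-long one:

    # Sketch: override a hypothetical analytics config so a test run produces data quickly.
    # The path, property names and service name are invented placeholders.
    $configPath = 'C:\ProgramData\IFE\Analytics\settings.json'

    $config = Get-Content -Path $configPath -Raw | ConvertFrom-Json
    $config.UploadIntervalMinutes = 1                          # production default might be much longer
    $config.UploadEndpoint = 'http://localhost:8080/ingest'    # point at a local test listener
    Set-Content -Path $configPath -Value (ConvertTo-Json -InputObject $config -Depth 10)

    Restart-Service -Name 'IfeAnalyticsService'                # hypothetical service name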

Of course, there were times when things didn’t go perfectly either.  Some problems seemed insurmountable until breakthroughs were made, but with a lot of persistence, hard work and team collaboration, we pulled out a win.

A Final Note from Microsoft