We are looking to improve our service on Prep Air, so we want to calculate NPS from a survey and see how we compare to other airlines.
Step 1 - Combine Data
First we want to combine the data from both of the files. We could use the Union tool for this, but instead I have used the Wildcard Union option in the Input tool. You don't need to add any matching pattern, as we want to bring all of the files through:
We should now have both of our inputs in a single table:
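Outside of Tableau Prep, the wildcard union amounts to reading every file that matches a pattern into one table. A minimal Python sketch, where the file names, folder and columns are invented for illustration:

```python
import csv
import glob
import os
import tempfile

# Write two sample survey files to a temp folder (stand-ins for the
# real Prep Air and competitor extracts; the names here are made up).
tmp = tempfile.mkdtemp()
samples = {
    "prep_air.csv": [{"Airline": "Prep Air", "Customer ID": "1"}],
    "other_air.csv": [{"Airline": "Other Air", "Customer ID": "2"}],
}
for name, rows in samples.items():
    with open(os.path.join(tmp, name), "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["Airline", "Customer ID"])
        writer.writeheader()
        writer.writerows(rows)

# Wildcard union: read every file matching the pattern into one table.
combined = []
for path in glob.glob(os.path.join(tmp, "*.csv")):
    with open(path, newline="") as f:
        combined.extend(csv.DictReader(f))

print(len(combined))  # both files' rows in a single table
```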
Step 2 - Classify Customers
Next we want to classify the responses, so the first step is to count the total number of customers for each Airline. To calculate this we can use a fixed LOD:
Number of Customers
After counting the customers, we can then filter to only those airlines with more than 50 customers, using a range filter on Number of Customers:
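A rough Python equivalent of the fixed LOD count and the range filter, with invented sample sizes (the 50-customer threshold is from the challenge):

```python
from collections import Counter

# One survey response per row: (airline, customer_id) - sample sizes invented.
responses = [("Prep Air", i) for i in range(60)] + [("Tiny Air", i) for i in range(10)]

# Fixed LOD equivalent: number of customers per airline.
customers_per_airline = Counter(airline for airline, _ in responses)

# Range filter: keep only airlines with more than 50 customers.
kept = {a: n for a, n in customers_per_airline.items() if n > 50}
print(kept)
```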
Finally, to classify the customers' responses we can use the following IF statement:
Classification
IF [How likely are you to recommend this airline?] < 7
THEN "Detractor"
ELSEIF [How likely are you to recommend this airline?] < 9
THEN "Passive"
ELSE "Promoter"
END
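The same logic as a small Python function, mirroring the standard NPS bands (0–6 Detractor, 7–8 Passive, 9–10 Promoter):

```python
def classify(score: int) -> str:
    """Mirror of the IF statement above: 0-6 Detractor, 7-8 Passive, 9-10 Promoter."""
    if score < 7:
        return "Detractor"
    elif score < 9:
        return "Passive"
    else:
        return "Promoter"

print(classify(6), classify(8), classify(10))
```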
After the classification our data should look like this:
Step 3 - Calculate NPS
We can now turn to the main part of the challenge, calculating the NPS for each Airline.
First we need to count the number of customers for each classification and airline by using an aggregate tool:
After the aggregation we only want to focus on Detractors and Promoters, so we can exclude the Passive classification. Also, to make things easier, we can rename Number of Customers to Total Customers and Customer ID to Number of Customers.
Now we can calculate the % Total in each Airline & Classification using this calculation:
% Total
100*[Number of Customers] / [Total Customers]
Then make this a whole number and remove the Total Customers and Number of Customers fields.
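In Python terms, the % Total step (including rounding to a whole number) looks like this; the counts are invented for illustration:

```python
number_of_customers = 30   # e.g. Promoters for one airline (invented)
total_customers = 120      # all surveyed customers for that airline (invented)

# Percentage of the airline's customers in this classification,
# rounded to a whole number as in the challenge.
pct_total = round(100 * number_of_customers / total_customers)
print(pct_total)  # 25
```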
Finally we are ready to pivot our table so that we have the Detractor and Promoter scores on a single row. The rows to columns pivot setup looks like this:
Now we are ready to calculate the NPS:
NPS
[Promoter] - [Detractor]
Our table looks like this:
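The rows-to-columns pivot and the NPS subtraction can be sketched together in Python; the percentages below are invented sample values:

```python
# Long table after the aggregation: one row per airline & classification.
rows = [
    {"Airline": "Prep Air", "Classification": "Promoter", "% Total": 45},
    {"Airline": "Prep Air", "Classification": "Detractor", "% Total": 20},
]

# Rows-to-columns pivot: one row per airline, with Promoter and
# Detractor as columns.
pivoted = {}
for row in rows:
    pivoted.setdefault(row["Airline"], {})[row["Classification"]] = row["% Total"]

# NPS = Promoter % - Detractor %
nps = {a: v["Promoter"] - v["Detractor"] for a, v in pivoted.items()}
print(nps)  # {'Prep Air': 25}
```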
Step 4 - Calculate Z-Score
The last step is to calculate the Z-Score for each airline, using the following calculation:
([NPS] - [Average]) / [Standard Deviation]
Before we get to this stage, we first need to calculate the overall Average and Standard Dev across the data set. We can use fixed LODs for these:
Average
Standard Dev
Now we're ready to calculate the Z-Score:
Z-Score
ROUND(([NPS] - [Average]) / [Standard Deviation], 2)
Our data should now look like this:
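The Average, Standard Dev and Z-Score steps can be checked with Python's statistics module. This sketch assumes the sample standard deviation (matching Tableau's STDEV function), and the NPS values are invented:

```python
import statistics

nps_by_airline = {"Prep Air": 30, "Air A": 20, "Air B": 10}  # invented scores

avg = statistics.mean(nps_by_airline.values())   # fixed-LOD Average
std = statistics.stdev(nps_by_airline.values())  # fixed-LOD Standard Dev (sample)

# Z-Score per airline, rounded to 2 decimal places as in the challenge.
z_scores = {a: round((nps - avg) / std, 2) for a, nps in nps_by_airline.items()}
print(z_scores["Prep Air"])  # 1.0
```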
Then finally we can filter for just Prep Air (filter for selected values) so our output will look like this:
You can also post your solution on the Tableau Forum where we have a Preppin' Data community page. Post your solutions and ask questions if you need any help!