Budgets and/or timelines won't allow for me to test all of my products, what should I do?

Best practice is to test all products, but if budgets and/or timelines won't allow for that, then some testing is better than no testing.  Please keep in mind that this method of extending testing knowledge to other, non-tested items is intended to provide the product manufacturer with best practices for applying the learnings from testing to the broadest number of products possible.  Part of that process is to monitor shipments and ensure that the assumptions made are holding true.  It is not intended to be a blanket ‘approval’ of all products; instead, it gives the product manufacturer an understanding and confidence that if damage does occur for a particular product, it will have a high likelihood of passing a follow-up test.


Below are some recommendations on how to approach this method.


Each variable should be isolated and evaluated independently.  We have seen situations where even small variables, such as a color change, have led to different testing outcomes.  The extrapolation of testing data to other products or similar packaging configurations is a risk-based decision that only the product manufacturer can make and one that ISTA cannot endorse, as we lack many of the key decision elements.


A best practice for staying informed before bridging knowledge gaps in packaged-product testing is to create a detailed matrix of product and packaging attributes, i.e. all packaging and product specifications.  This enables a clear understanding of each packaged-product's attributes so that those with similar attributes can be grouped together.  It also allows for a testing plan that captures the full range of packaged-products, i.e. small, medium, large, extra-large.  By having the full range defined, it becomes easier to understand which packaged-products fall within your testing gaps.
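As an illustration, the grouping step described above could be sketched in code.  The SKUs, attribute names, and values below are hypothetical examples chosen for the sketch, not an ISTA-defined schema; the attributes that actually drive test outcomes are a judgment only the product manufacturer can make.

```python
# Sketch: grouping packaged-products by shared attributes so that one
# representative per group can be tested, while the rest rely on a
# documented, risk-based extrapolation.
from collections import defaultdict

# Hypothetical attribute matrix: one row per packaged-product.
products = [
    {"sku": "A-100", "size": "small",  "cushion": "foam",   "box": "RSC"},
    {"sku": "A-200", "size": "small",  "cushion": "foam",   "box": "RSC"},
    {"sku": "B-100", "size": "medium", "cushion": "airbag", "box": "RSC"},
    {"sku": "C-100", "size": "large",  "cushion": "foam",   "box": "FOL"},
]

# Group on the attributes judged to influence test outcomes.
group_keys = ("size", "cushion", "box")
groups = defaultdict(list)
for p in products:
    groups[tuple(p[k] for k in group_keys)].append(p["sku"])

# One candidate test article per group; untested members of the group
# inherit its results under the stated assumptions.
test_plan = {key: skus[0] for key, skus in groups.items()}
for key, representative in test_plan.items():
    print(dict(zip(group_keys, key)), "-> test", representative)
```

Here the ungrouped SKUs make the testing gaps explicit: any group whose representative has not been tested is a gap you are consciously accepting.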


Post testing, detailed shipping records for the entire product line should be maintained, including the number of units shipped and the number damaged in distribution.  If the damage level for those items not tested is similar to those that were tested, then you should have greater confidence in the risk-based assumptions made.  If, however, any particular packaged-product shows damage trends that are deemed excessive compared to those items that were tested, that packaged-product should be tested.
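A minimal sketch of this monitoring step follows.  The shipping records and the review threshold are assumptions for illustration only; ISTA does not prescribe a specific "excessive" damage level, so what counts as a trigger for follow-up testing is the manufacturer's call.

```python
# Sketch: comparing post-shipment damage rates of untested items
# against the baseline rate observed for tested items.
shipping_records = {
    # sku: (units_shipped, units_damaged, was_tested)  -- hypothetical data
    "A-100": (10_000, 12, True),
    "A-200": (8_000, 11, False),
    "C-100": (5_000, 40, False),
}

# Baseline: mean damage rate across the items that were tested.
tested_rates = [d / s for s, d, tested in shipping_records.values() if tested]
baseline = sum(tested_rates) / len(tested_rates)

# Flag untested items whose damage rate notably exceeds the baseline.
EXCESS_FACTOR = 2.0  # an assumed review trigger, not an ISTA requirement
flagged = [
    sku
    for sku, (s, d, tested) in shipping_records.items()
    if not tested and d / s > EXCESS_FACTOR * baseline
]
print("Recommend follow-up testing for:", flagged)
```

In this illustration, the untested item whose damage rate tracks the tested baseline supports the extrapolation, while the outlier is flagged for its own test.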

Lastly, fully documenting your testing rationale will provide you with a valuable reference moving forward should damage occur and someone within your organization want to understand why something was not tested.  It also provides a stronger case when dealing with carrier damage issues, as you have a clear rationale for the work done.