Meta-attributes in sensory descriptive analysis

Sara King/ August 23, 2015/ Poster/ 0 comments

Descriptive analysis was conducted by a trained panel on potato varieties: forty in 2010 and forty-four in 2014. The panel evaluated 52 well-defined sensory attributes on line scales anchored at 0 and 100. Our objective was to determine whether there were groups of homogeneous attributes, or “meta-attributes.”
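A minimal sketch of one way such groupings could be explored (not necessarily the poster's analysis): cluster the attributes by how similarly they score across samples. The sample and attribute counts below mirror the study, but the scores are randomly generated and the distance measure, linkage method, and cut height are assumptions.

```python
# Hypothetical illustration: hierarchical clustering of attributes by score similarity.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
# Stand-in panel means: 44 samples x 52 attributes on a 0-100 line scale
scores = rng.uniform(0, 100, size=(44, 52))

# Correlation between attributes across samples; 1 - |r| used as a dissimilarity
corr = np.corrcoef(scores, rowvar=False)
dist = 1.0 - np.abs(corr)
np.fill_diagonal(dist, 0.0)

# Average-linkage clustering on the condensed distance matrix
Z = linkage(squareform(dist, checks=False), method="average")
meta_attribute = fcluster(Z, t=0.5, criterion="distance")
print(meta_attribute)  # one cluster label per attribute
```

Attributes sharing a cluster label would be candidates for a meta-attribute; the cut height (0.5 here) is arbitrary and would need to be validated against the actual panel data.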

Best practice recommendations for attribute order in Check-All-That-Apply (CATA) and related test methodologies

Sara King/ August 22, 2015/ Poster/ 0 comments

It is well documented that the position of attributes in a Check-All-That-Apply (CATA) question can bias responses. As positional biases cannot be eliminated, they are balanced across products via experimental designs, ensuring each attribute appears with equal frequency in each position for each product. But what is the best way to allocate attribute list orders?
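As a hedged illustration of one common balancing scheme (an assumption, not necessarily a design evaluated in the poster): a cyclic Latin square yields as many list orders as there are attributes, with each attribute appearing exactly once in every position; a Williams design would additionally balance which attribute immediately precedes which.

```python
# Hypothetical CATA terms; real attribute lists are longer and product-specific.
attrs = ["sweet", "salty", "bitter", "crunchy", "earthy"]

def latin_square_orders(attributes):
    """Cyclic Latin square: k list orders in which each of k attributes
    appears exactly once in every position."""
    k = len(attributes)
    return [[attributes[(start + pos) % k] for pos in range(k)]
            for start in range(k)]

for order in latin_square_orders(attrs):
    print(order)  # assign these orders evenly across panelists and products
```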

Immediate feedback training for difference from control panels

Sara King/ August 22, 2015/ Poster/ 0 comments

Panelist training is essential for successful analytical sensory methods such as Difference from Control (DFC). Panelist performance requires feedback, calibration and motivation. Typically, panelists are recruited, screened, trained and qualified before becoming part of an ongoing DFC quality panel. This study compared the impact of training on two groups of vodka quality panelists. A pool of panelists, inexperienced in vodka


Sara King/ August 22, 2015/ Poster/ 0 comments

Data collection methods and devices have changed over the years. Research facilities often compare historical data sets to newly obtained ones; however, the question remains: can data be compared when different data collection devices were used? This study determined that descriptive analysis test results are comparable across three different devices: an iPod, an iPad, and a computer monitor.
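A minimal sketch of how such a comparability check could be framed, with hypothetical data and model (the study's actual analysis is not reproduced here): fit a linear model with device as a factor and examine the size of the device main effect and the device-by-product interaction.

```python
# Hypothetical long-format ratings: one row per panelist x product x device.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score":   [72, 70, 68, 71, 69, 67, 35, 33, 36, 34, 32, 37],
    "device":  ["iPod", "iPod", "iPad", "iPad", "monitor", "monitor"] * 2,
    "product": ["A"] * 6 + ["B"] * 6,
})

# Two-way ANOVA: a negligible device effect (and interaction) is consistent
# with results being comparable across collection devices.
model = smf.ols("score ~ C(device) * C(product)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```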