Results & Findings:
For user testing I created two groups of six people each, with an equal number of females and males in both groups. One group was moderated, performing the tasks in a university computer lab; the other was unmoderated, with users performing the tasks remotely from home. I recorded each user's performance, then analyzed and measured it by two factors: task completion speed and error rate. The findings were quite interesting. Most interesting was that, even though users of variation A performed better (faster, with fewer errors), they rated the interface's appeal and enjoyment lower than users of variation B, who performed worse.
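The two measurements could be derived from raw session recordings roughly as sketched below. This is an illustrative sketch only: the session data and field layout are made up for the example, not taken from the actual study.

```python
from statistics import mean

# Hypothetical session records: (variation, completion_time_s, wrong_clicks, total_clicks)
sessions = [
    ("A", 95, 1, 40),
    ("A", 110, 0, 38),
    ("B", 130, 4, 45),
    ("B", 121, 3, 42),
]

def summarize(variation):
    """Average completion time and error rate for one variation."""
    rows = [s for s in sessions if s[0] == variation]
    avg_time = mean(r[1] for r in rows)
    # Error rate: wrong clicks as a share of all clicks across sessions
    error_rate = sum(r[2] for r in rows) / sum(r[3] for r in rows)
    return avg_time, error_rate

for v in ("A", "B"):
    avg_time, error_rate = summarize(v)
    print(f"Variation {v}: avg time {avg_time:.1f}s, error rate {error_rate:.1%}")
```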
Males in general performed much faster on all tasks except task 4, which required more concentration and focus.
Users in the moderated group performed worse than users in the unmoderated group doing the tasks remotely from home.
The minimalist style of navigation on variation B's welcome screen seemed to confuse users the most, and they performed worst on this screen.
On average, users performed tasks faster on variation A. The average time to complete all of the tasks on variation B was 28 seconds higher.
The error rate (users making mistakes and clicking the wrong elements) was much higher on variation B.
Feedback results show that although users of variation A performed better overall, variation B users enjoyed the interface and the tasks more.