My project compares undergraduate student self-confidence before and after participation in a contest called the UTA Libraries Pitch of the Week (POW!) Contest. In this report I describe how I used the contest to meet my goal of measuring how student self-confidence in teamwork, proposal writing, technical communication, creative thinking, and critical thinking is affected by participating in a fun, low-risk competitive environment. I have described the contest at the above website and on my PLC Mini-conference website; please refer to those resources for in-depth information about the POW! contest itself.
To meet my objectives, I gathered and analyzed self-assessment data to determine how the contest affected self-confidence in the skills listed above. The questions I sought to answer were:
- Did the contest appear to have an effect on student self-confidence, and if so, which of the five skills did it have the greatest impact on?
- Is there a difference in effect between the team members that won their preliminary rounds compared to team members who did not win preliminary rounds?
There were four rounds of the POW! contest: three preliminary rounds and a final round in which the winning teams from the three preliminary rounds competed against each other. Each student who registered to compete completed a self-evaluation of their confidence levels in the five skills listed above (teamwork, proposal writing, technical communication, creative thinking, and critical thinking). After the competition, students completed a post-contest survey in which they answered the same confidence questions again. The winners of each preliminary round did not take the post-contest survey until after the final round, so every student took the post-contest survey only once.
The most important tool used in this study was the Qualtrics survey system, which was used to gather self-assessment data. Every contestant pre-registered and completed a self-evaluation of their confidence levels in the five specified skills (teamwork, proposal writing, technical communication, creative thinking, and critical thinking). Qualtrics was used again to gather post-contest self-evaluations for comparison.
Many other tools were used throughout the planning and implementation of this contest. Basecamp served as a project management tool to keep Experience @ UTA Libraries Planning Committee members on task. A variety of technology was used in the contest rounds themselves, including laptops, audience-facing displays, whiteboards, a PA system with a wireless microphone, streaming video capture technology, and a popcorn machine. Audience members were able to vote using the Poll Everywhere platform.
In total, 26 students participated in the competition, yielding 25 matched pairs of pre- and post-self-evaluations. I initially believed that exposure to the competition would boost student self-confidence and that confidence would increase in each of the five categories. When comparing all pre- and post-evaluation averages, this holds true for every category except teamwork, which decreased. Teamwork also showed the largest change in student self-confidence, dropping from an average of 4.44 to 4, a 9.5% change. The largest gain in confidence was in proposal writing, rising from an average of 3.8 to 4, a 5.1% change.
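As an illustration of how these comparisons were computed, here is a minimal sketch using made-up scores rather than the actual survey data: average the matched pre- and post-contest ratings for one skill, then compute the percent change relative to the pre-contest average.

```python
# Illustrative sketch only -- the pairs below are hypothetical, not the
# real POW! survey responses. Ratings are on a 1-5 scale.

def pct_change(pre_avg, post_avg):
    """Percent change relative to the pre-contest average."""
    return (post_avg - pre_avg) / pre_avg * 100

# Hypothetical matched pairs (pre, post) for one skill, one tuple per student.
pairs = [(4, 5), (3, 4), (5, 4), (4, 4), (3, 3)]

pre_avg = sum(pre for pre, _ in pairs) / len(pairs)
post_avg = sum(post for _, post in pairs) / len(pairs)

print(pre_avg, post_avg, round(pct_change(pre_avg, post_avg), 1))
```

The same calculation, applied per skill across all 25 matched pairs, produces the averages and percent changes reported above.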
I then wanted to see how the comparisons differed between teams that won their preliminary rounds and those that did not. My prediction was that winning teams would show a greater increase in confidence across all categories than non-winning teams, but that even the teams that did not win would, on average, show an increase in confidence. This turned out to be wrong. The winning teams gained confidence in every category, but the non-winning teams lost confidence in all but two categories, critical thinking and teamwork, where the pre- and post-evaluation averages were identical (no increase or decrease).
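The winner/non-winner comparison amounts to grouping the matched pairs by preliminary-round outcome and averaging the per-student change within each group. A small sketch, again with hypothetical records rather than the actual data:

```python
# Illustrative sketch: comparing average confidence change for winning
# vs. non-winning teams. Records are hypothetical, not the real survey data.
records = [
    # (won_preliminary, skill, pre, post)
    (True,  "teamwork", 4, 5),
    (True,  "teamwork", 5, 5),
    (False, "teamwork", 4, 3),
    (False, "teamwork", 3, 4),
]

def avg_change(records, won, skill):
    """Mean post-minus-pre change for one group and one skill."""
    deltas = [post - pre for w, s, pre, post in records if w == won and s == skill]
    return sum(deltas) / len(deltas)

print(avg_change(records, True, "teamwork"))   # winners' average change
print(avg_change(records, False, "teamwork"))  # non-winners' average change
```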
Though it was not part of my initial research plan or hypothesis, it is interesting to note that winning team pre-contest confidence averages were higher than non-winning team pre-contest averages in all but one category, proposal writing. Could pre-contest confidence levels be an indicator for winning?
Due to the low sample size and a couple of problems with data collection, this data is not completely reliable; it should be seen as a pilot with much room for improvement. Two problems come to mind. First, a subset of respondents most likely selected values hastily to finish the surveys rather than putting thought into their selections; second, a few of the competitors did not complete their pre-contest self-evaluation until after their preliminary rounds, which makes that data suspect (more about this in Future Direction, below). However, I believe that if the process were scaled up to larger sample sizes, the data would become more reliable. For example, if we host the contest every semester, we would accumulate more data over the years, and the aggregated data would be more reliable than any individual contest's data.
Something that occurred to me as I was conducting my data analysis was that the way teams were formed might have affected the outcomes of the preliminary rounds in unintended ways. In this first-ever series of contests, we had a very small pool of registrants. I initially intended to diversify teams based on major, minor, classification, and the five confidence levels. Because there were so few registrants, however, teams ended up being formed with only a mix of majors and classifications considered; no thought was put into diversifying teams based on participants' confidence levels. When I noticed that higher average pre-contest confidence levels correlated with winning the preliminary rounds, I realized that the winning teams may have been unfairly stacked with higher-confidence students. In future iterations I would like either to test this theory by intentionally grouping higher-confidence students together and seeing whether those teams always, or almost always, win, or to distribute pre-contest confidence levels more evenly across teams. If we conduct this contest twice a year, in fall and spring, I might try both.
The above observation is directly related to the biggest obstacle we faced in implementing the POW! contest: very few students registered to play. Even though we invested a lot of time and resources in marketing the events, we were still barely able to fill the teams. On top of that, some of the students who registered in advance did not show up, and we had to recruit volunteers from the audience to take their places. These no-shows and their last-second stand-ins undermined our pre-diversification of the teams, because we placed the volunteers in the empty spots without regard to their major, minor, classification, or confidence levels. As mentioned above, some of the volunteers recruited from the audience did not complete their pre-contest self-evaluation until after their preliminary rounds, which makes that data suspect.