The aim of this project is to attempt a factorial analysis of the congruence among three layers of assessment of oral presentations: Expert, Self, and Peer. Participants were graduate and undergraduate students of Asian background studying Research Writing at a technological university. The research instrument was a set of assessment checklists with weighted, specified criteria for Expert, Self, and Peer assessment of research presentations. The broader criteria were grouped under four factors: Use of Spoken Media, Use of Visuals, Interrelationships, and Communicative Effect. Before undertaking the assessment tasks, participants received training and took part in a practice session on self and peer assessment. While this pilot study cannot conclusively establish congruence among the modes of assessment, results from a wider database are expected to clarify what well-defined assessment criteria look like and how they can be communicated effectively to assessors. The present study suggests that the differences between peer and expert ratings are due not to a lack of capability on the part of students, but to a lack of orientation and training. Well-documented criteria need to be developed so that students can evaluate their own performance, and that of their peers, as effectively as experts do.