As differing evaluation instruments are adopted in learning object repositories serving specialized communities of users, what methods can be used to translate evaluative data across instruments so that it can be shared among repositories? How can evaluations from different reviewers be properly integrated? How can explicit and implicit measures of preference and quality be combined to recommend objects to users? In this research we studied the application of Bayesian Belief Networks (BBNs) to the problems of insufficient and incomplete reviews during learning object evaluation, and of translating and integrating data among different quality evaluation instruments and measures. Two BBNs were constructed to probabilistically model relationships among the different roles of reviewers as well as among the items of different evaluation instruments. Initial testing using hypothetical data showed that the model was able to make potentially useful inferences about different dimensions of learning object quality. We further extended the model over geographic distances, assuming that the reviewers would be distributed and that each reviewer would adapt the underlying BBN (to a certain extent) to suit his or her expertise. We highlight issues that arise when a highly distributed and personalized BBN is used to make valid inferences about learning object quality.
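To illustrate the kind of inference such a model performs, the following is a minimal sketch, not the paper's actual networks: a single latent quality dimension generates a reviewer's rating, with a conditional probability table that depends on the reviewer's role. All probabilities, role names, and rating values here are hypothetical placeholders.

```python
# Minimal sketch of BBN-style inference via Bayes' rule (hypothetical numbers):
# a latent quality dimension Q ("high"/"low") generates a reviewer's rating R
# ("positive"/"negative"), with a CPT that depends on the reviewer's role.

# Prior belief over the latent quality dimension.
prior = {"high": 0.5, "low": 0.5}

# P(rating | quality) for two hypothetical reviewer roles; experts are
# assumed to discriminate quality more sharply than novices.
likelihood = {
    "expert": {"high": {"positive": 0.9, "negative": 0.1},
               "low":  {"positive": 0.2, "negative": 0.8}},
    "novice": {"high": {"positive": 0.7, "negative": 0.3},
               "low":  {"positive": 0.4, "negative": 0.6}},
}

def posterior_quality(role, rating):
    """P(quality | rating, role) by Bayes' rule with normalization."""
    unnorm = {q: prior[q] * likelihood[role][q][rating] for q in prior}
    z = sum(unnorm.values())
    return {q: p / z for q, p in unnorm.items()}

# A positive rating from an expert shifts belief toward high quality
# more strongly than the same rating from a novice.
print(posterior_quality("expert", "positive"))
print(posterior_quality("novice", "positive"))
```

Because the role-specific likelihoods encode how informative each reviewer type is, incomplete review sets can still yield graded posterior beliefs; the full networks in this work extend the same mechanism to multiple reviewer roles and multiple instrument items.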