ISSN : 1598-7248 (Print)
ISSN : 2234-6473 (Online)
Industrial Engineering & Management Systems Vol.16 No.4 pp.632-646

A Hierarchical Model of Service Quality in Higher Education Institutions

Marc Immanuel G. Isip*, Richard C. Li
Department of Industrial Engineering, University of the Philippines, Los Baños, Laguna, Philippines
Department of Industrial Engineering, De La Salle University, Manila, Philippines
Corresponding author: Marc Immanuel G. Isip
Received February 5, 2017; Revised June 2, 2017; Accepted August 17, 2017


A need has been identified to develop a multi-perspective assessment tool that will measure the institutional service quality of public HEIs in the Philippines. Thus, this study proposes a hierarchical conceptual model of service quality in public HEIs. Moreover, this study offers an industry-specific model of service quality, which can provide a foundation for further research on perceived service quality and a practical tool for assessing quality in HEIs. The model underwent three phases of development. Analysis of data from 872 respondents from three state universities indicates that the proposed model passed the tests for sample adequacy, reliability, and discriminant/convergent validity. Moreover, the model showed strong absolute and incremental fit indices. The model is composed of three levels, with 71 variables representing 6 quality dimensions (Teaching Achievement, Research Capability, Delivery, Student Competence, Continuous Improvement, and Content) and 19 sub-dimensions.



    Resources are scarce, especially in Philippine public higher education institutions (HEIs); because of this, quality assessment is important to tell whether HEIs are using their resources efficiently and productively. Existing quality models focus largely on how services affect customers/students or on whether students/graduates meet international standards and employer expectations. While students are considered the primary customer of higher education (Hill, 1995) and the biggest stakeholder in the provision of quality education in HEIs, there are certain weaknesses in considering only a single perspective (the student’s) in evaluating the quality of education. A single-perspective evaluation excludes the views of other concerned stakeholders, such as concerns about the internal, technical, and academic elements of quality education.

    According to Joseph and Joseph (1997), owing to resource restrictions, it is not possible for institutions to provide everything solely for the benefit of the students. The need for a multi-perspective approach to quality model development becomes significant, as it considers the different perspectives on quality of the other cluster groups who have a stake in the delivery of quality education in HEIs (i.e., students, parents, academic and non-academic personnel, administration). A multi-perspective assessment tool considers each concerned stakeholder’s perspective, resulting in an acceptable trade-off among the stakeholders whereby a common ground for satisfying each stakeholder is achieved. Joseph and Joseph (1997) recommended considering other customer groups in the determination of service quality. Moreover, student-perspective assessments by Ford et al. (1999), Angell et al. (2008), and Joseph and Joseph (1997) disclosed that the perceptions of other concerned stakeholders must be considered for HEIs to arrive at strategic directions towards the delivery of quality education.

    A review of the literature shows that existing tools for school evaluation are neither appropriate nor sufficient to measure the service quality of higher education institutions (HEIs), for the following reasons:

    • The difference between the two classifications of HEIs (public and private) in the Philippine setting makes the IQuAME tool, used by the Commission on Higher Education (CHED), inappropriate, as supported by Ruiz and Junio-Sabio (2012), who recommended that a separate assessment tool be developed for public HEIs.

    • Existing service quality assessment tools such as SERVQUAL (Parasuraman et al., 1985), SERVPERF (Cronin and Taylor, 1992), and HEDPERF (Abdullah, 2006), if adopted, are of limited use for measuring the service quality of HEIs because only the customers’ perspective was considered in their development. Abdullah admitted that the higher education system has different groups of customers with diverse needs. Lehtinen and Lehtinen (1991) state that in a service context, customers often participate in the process of service delivery, and their participation may vary depending on the kind of service. Higher education has different groups of customers, and these customers are quite different from those in manufacturing or general services; different levels of participation by customer groups result in varying levels of importance of quality dimensions for different customer groups (Owlia and Aspinwall, 1996). An assessment tool that considers multiple perspectives will provide a different result (Abdullah, 2006).

    • SERVQUAL and SERVPERF were deemed inappropriate for assessing service quality in higher education because the tools were developed from industries that are very different from higher education. Modifying the tools to suit the higher education setting is regarded as insufficient because there are certain technical requirements that institutions must achieve. Moreover, any modification would have to be severe, making the reliability of the resulting tool questionable.

    • HEDPERF lacks specific, important technical or academic requirements that institutions need in order to be competitive. One example missing from HEDPERF is the institution’s research-related activities; HEIs around the world are gearing towards becoming research universities. The absence of this aspect in the tool is attributed to the single perspective considered in its development.

    • HEDPERF was developed using a combination of student respondents from both private and public tertiary education institutions in Malaysia. In the Philippines, a huge disparity in nature is observed between private and public HEIs. Similar to the criticism of IQuAME, if an assessment tool is to be developed, there should be one specifically for public HEIs and a separate one for private HEIs.

    • HEDPERF was developed in 2006 in Malaysia; more than ten years have passed, and many developments in the Philippine education system have occurred, so some parts of the HEDPERF tool may be moot and irrelevant.

    Because of the aforementioned reasons, there is a need to develop a multi-perspective assessment tool that will measure the institutional quality of service of public HEIs in the Philippines.

    The purposes of this study are: (a) to propose a conceptual model of service quality in public HEIs, and (b) to test the proposed model by developing a tool for assessing service quality of public HEIs. Moreover, this study offers an industry-specific model of service quality, which can provide a foundation for further research regarding perceived service quality and a practical assessment tool for assessing quality in HEIs.


    A hierarchical conceptual model was first developed, and from this model a new service quality assessment tool was created. The following steps were taken to achieve the main objective of developing an operational assessment tool in public higher education that an HEI can use for self-evaluation of service quality.

    2.1.Literature Review

    A literature review of existing service quality tools was conducted. The exploratory research and analysis of journal articles revealed that there is no formal published study within the Philippines concerning service quality assessment tool development or application that also considers the perspectives of the key stakeholders (not just the students).

    The literature review also yielded service quality dimensions based on the results of previous research on higher education. It was decided to arrive at a pool of quality dimensions the stakeholders can identify with. Thirty (30) service quality dimensions were compiled, each with a definition based on related literature (see Table 1). The definitions of the service quality dimensions were subjected to expert review. Experts were approached to comment on the final listing of the quality dimensions; they were also asked to indicate any other quality dimensions that needed to be included but were not yet in the initial listing.

    2.2.First Wave Survey

    The thirty dimensions and their definitions were then transferred to a draft survey questionnaire to let the stakeholders (students, administrators, faculty, and parents) determine which are important. The first section pertains to the respondent profile. The second section contained the thirty service quality dimensions and their respective definitions. The respondents were asked to identify which dimensions are important and should be present for them to declare that a higher education institution is of quality. With the same rating scale used throughout, the items were measured on a 5-point Likert scale. Pilot testing was done to measure internal consistency, as Abdullah (2006) did in developing HEDPERF.

    The Cronbach alpha obtained in the pilot testing was 0.90094 for the parents, 0.861945 for the students, 0.878141 for the faculty, and 0.849187 for the administrators. Given the high alpha scores obtained per stakeholder in the pilot testing, the survey questionnaires were considered suitable for administering the full-scale survey. This means that the constructs in Section 2 are indeed stated clearly enough to be understood correctly by the different groups of stakeholders.
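    The internal-consistency statistic reported above can be reproduced from raw item scores. The following Python sketch (not the authors' actual computation, which was presumably performed in a statistical package) implements the standard Cronbach's alpha formula:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

Values above 0.8, such as those obtained here per stakeholder group, are conventionally read as good internal consistency.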

    Survey questionnaires were distributed to the four key stakeholders: parents, students, administrators, and teachers. Data were collected from one state university in the Philippines for the period February to March 2015, using the ‘personal contact’ approach, whereby ‘contact persons’ were approached personally and the survey was explained in detail. The final questionnaire, together with a covering letter, was subsequently handed personally to the ‘contact persons’, who in turn distributed it to the key stakeholders. This data-gathering approach is the same methodology used by Abdullah (2006).

    A total of 120 students from four Humanities recitation classes attended by mixed students from varying colleges in the university were surveyed, from whom 51 correctly completed questionnaires were obtained, yielding a response rate of 42.5%. Of the 51 students who responded to the questionnaire, 64.1% were female. These 120 students were also given a survey questionnaire to pass on to their parents. Of the 120 questionnaires given to parents, a total of 51 correctly completed questionnaires were obtained, yielding a response rate of 42.5%. It was observed that those students who answered and returned the survey questionnaires also returned the questionnaires answered by their parents. Of the 51 questionnaires answered by parents, 68.63% of the respondents were mothers/female.

    For the faculty, a total of six colleges were approached and two departments each were selected. Five faculty members in every department were given the survey questionnaire; thus a total of 60 questionnaires were handed out. Only 51 questionnaires were accomplished and deemed useful. Of the 51 respondents, 28 (54.9%) were female. For the administrators, all nine colleges in the state university were approached, including the overall administrative offices that house the executive committee. Since only a few individuals hold administrative assignments, almost all administrators were approached. Only 37 responses were gathered, which already included those who participated in the pilot testing; this yielded a response rate of 25.52%. Table 2 shows the summarized demographics/response statistics of the first wave survey.

    This part of the methodology aims to determine the degree of importance of each of the service dimensions obtained from the literature review. The results of the 1st wave survey provided a shortlist of six (6) important quality dimensions, based on a cut-off score of 70%. These six quality dimensions became the 1st-level dimensions that compose the hierarchical quality model. The 2nd- and 3rd-level dimensions are explored in the succeeding methodology, discussed in section 2.3.
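    The 70% cut-off rule can be stated compactly: a dimension survives the shortlist when at least 70% of respondents marked it important. A minimal sketch (the dimension names and fractions below are illustrative, not the study's data):

```python
def shortlist_dimensions(importance_fractions, cutoff=0.70):
    """Keep the dimensions marked important by at least `cutoff` of respondents.

    importance_fractions: dict mapping dimension name -> fraction (0..1)
    of respondents who rated the dimension as important.
    """
    return sorted(d for d, frac in importance_fractions.items() if frac >= cutoff)
```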

    2.3.Focus Group Discussion/Workshop

    The quality attributes (sub-dimensions) and quality variables (sub-attributes) were identified to add depth to the conceptual model. Four sessions of FGD were conducted in June 2015 to arrive at the initial draft. Ten participants attended the FGD: five faculty members, three students, and two parents.

    These participants have the following profiles, and they were chosen for the following reasons:

    • 1) Parents: the two parent participants, both fathers, have children studying in a state university. Both are active in research and work in a research institute. Their wives are faculty members of a state university; thus they have a background in the terms to be discussed.

    • 2) Students: three students were chosen from the current roster of enrolled students in a state university. One is a student leader serving in the university student council; another is a cum laude-standing engineering student; and the third is the president of a student organization. These students were chosen for the insights they can give, especially on the quality of education a student must receive, and were believed to be able to provide a broad range of experience that would be fruitful during the discussion.

    • 3) Teachers: five faculty members were chosen from two prominent colleges of a state university. All five are full-time employees; three are already tenured. They are active not just in teaching instruction but also in research and extension services. All five already have publications in international refereed journals and have participated in international conferences. Their committee work spans Socials and Sports, Curriculum, Research and Extension, Building Administration, Library, and Internal and External Public Affairs. They were selected for their individuality as faculty members, including their expertise and experience based on their committee work and research contributions.

    In summary, diverse participation from the stakeholders was obtained in order to generate discussions with a wider view and understanding of different service quality expectations. Considering their backgrounds, work experience, knowledge and expertise, interests, and capabilities, these chosen participants were believed to be a strong group to help the researcher of this study arrive at an initial draft of the assessment tool. Kumarapperuma (2014) used a similar approach of choosing a diverse mix of participants for a focus group discussion, with the intention of getting representation from different backgrounds.

    The objective of this FGD/workshop was to determine the key service quality attributes (2nd level) and variables (3rd level) under each of the six 1st-level service quality dimensions.

    The workshop was a creative type of focus group discussion whereby a group activity was provided for the participants to engage in. Each session began by setting the intention of completing the work for two quality dimensions. Since there were six quality dimensions to work on, two were assigned per session, with the last session serving as the culminating activity of finalizing the initial draft of the assessment tool.

    The first activity of every session was to approve the definition of the quality dimensions as stated in the workshop guideline. This was to make sure that every participant agreed with the given definition of the quality dimension; if not, a participant was welcome to provide a modification or a totally different definition. The participants were also encouraged to provide inputs and suggestions should they find the definition lacking, unclear, unspecific, or unacceptable.

    Once the definition of the quality dimension was settled, the quality attributes and the sub-level attributes (quality variables) were initially provided by the researcher/facilitator. The role of the participants was to identify appropriate quality attributes that would fall under the specified quality dimension; defining the quality attributes was another task. Situations per stakeholder were also given by the facilitator as a guide. Those situations were noted beforehand based on readings from newspapers, magazines, journals, testimonies, etc. Another major guide was the CHED institutional assessment tool, together with the AACCUP institutional assessment tool.

    After a service quality attribute was finalized, the participants were provided with an initial list of sub-level attributes, termed quality variables. The same process of identifying, defining, and grouping was followed per variable. The participants were instructed to also provide other quality attributes and variables not found in the initial choices provided by the facilitator, should they think those attributes and variables needed to be included. The participants thus had to reach an agreement among themselves on an accepted definition of each quality attribute or variable.

    The participants were also encouraged to speak up and contribute whenever a quality attribute or variable did not fall appropriately within a group or did not have an acceptable and clear definition. Participants were highly welcome to say what they felt and thought about what had been discussed, whether or not in agreement.

    2.4.Expert Panel Consultation

    Inputs from the perspectives of administrators were also considered. They cautioned that if the questionnaire is to be answered by parents and students, there might be a tendency for those respondents not to understand some of the items. It was then recommended that the researcher always be available for consultation whenever a respondent needs reinterpretation/restatement/rephrasing of an item.

    The output provided by the preceding FGD/workshop was then approved by the expert panel; and this finalized the conceptual model of the service quality of public higher education institutions of the Philippines. The conceptual model was then subjected to further confirmatory testing and analysis.

    2.5.Pilot Testing

    The survey questionnaire was first released for pilot testing, following the same procedure as in the 1st wave survey discussed previously: a contact person was approached to distribute and collect the questionnaires. The test yielded high Cronbach alpha values of 0.973 for students, 0.948 for parents, 0.965 for faculty, and 0.953 for administrators. Given the high alpha values across stakeholders, the internal reliability of the questionnaire was assured.

    2.6.Second Wave Survey

    The survey questionnaire was distributed to the four key stakeholders: parents, students, administrators, and teachers. Data were collected from three state universities in the Philippines for the period July to August 2015. The subject HEIs were chosen based on the following criteria: 1) public HEI; 2) categorized as a “University” by CHED; 3) diverse student population with different cultural backgrounds/ethnicities; 4) diverse academic offerings, including graduate, undergraduate, and diploma courses; and 5) majority of its students in the middle-to-low income segment.

    A total of 780 students were given survey questionnaires, of which 334 completed questionnaires were obtained, yielding a response rate of 42.82%. The 780 students of the three state universities were also given questionnaires to be answered by one of their parents. A total of 304 were returned and deemed usable, yielding a response rate of 38.97%.

    A total of 450 faculty members of the three state universities were given questionnaires, with 170 returned and deemed usable, yielding a response rate of 37.78%. A total of 190 administrators were given survey questionnaires; 64 completed questionnaires were deemed usable, yielding a response rate of 33.68%. Table 3 shows the graphical summary of the demographics and response statistics of the second wave survey.

    As with the First Wave Survey, this aims to determine the degree of importance of each of the service attributes and variables that fall under the six service dimensions. The survey responses were used for the empirical model formulation, which is discussed in the next section.

    2.7.Empirical Model Formulation

    After the results were taken in, the encoded data were subjected to a confirmatory factor analysis (CFA) through structural equation modeling (SEM) with the use of the SPSS Amos software.

    2.8.Statistical Analysis

    The sample adequacy test was performed using the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy. Other statistical analyses were also performed: the uni-dimensionality of the constructs was tested through the goodness-of-fit indices values generated through SEM. The reliability of the model was then tested through Cronbach alpha.
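    For reference, the overall KMO statistic used in the sample adequacy test can be computed directly from the item correlation matrix and its anti-image (partial) correlations. The sketch below is a generic implementation, not the statistical-package routine used in the study:

```python
import numpy as np

def kmo_overall(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy.

    data: (n_respondents x n_items) matrix of item scores.
    KMO = sum(r^2) / (sum(r^2) + sum(p^2)) over off-diagonal elements,
    where r are simple and p are partial (anti-image) correlations.
    """
    r = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(r)
    scale = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / scale                      # anti-image correlations
    np.fill_diagonal(r, 0.0)
    np.fill_diagonal(partial, 0.0)
    r2, p2 = (r ** 2).sum(), (partial ** 2).sum()
    return r2 / (r2 + p2)
```

Values above 0.9, such as the 0.931 obtained in this study, are interpreted as “superb” (Hutcheson and Sofroniou, 1999).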

    For validity, content validity was secured by using experts. While the second wave survey questionnaire was being developed, aside from the stakeholder representatives who participated in the FGD, a panel of experts was consulted before the pilot testing of the questionnaire; their inputs, even though minor, were considered. This makes the model formulated in this study valid based on content, as reviewed by the experts before the full-scale survey was distributed.

    Another validity test, construct validity, was also conducted. To assess construct validity, convergent validity and discriminant validity were tested. Convergent validity was tested by inspecting the factor loadings of the quality dimensions and attributes; moreover, the composite reliability (CR) was computed and observed. Discriminant validity was tested by comparing the computed square root of the average variance extracted (AVE) against the correlations of the quality dimensions with each other. See Figure 1 for the flow chart of the statistical analysis, which is discussed in section 3.2.


    The six quality dimensions that made the cut are Teaching Achievement, Research Capability, Delivery, Student Competence, Continuous Improvement, and Content.

    The overall Cronbach alpha for the 1st wave survey is 0.850484, which translates to good internal reliability considering that the value is greater than 0.8.

    The output of the series of FGD serves as the initial framework or conceptual model to be confirmed in a factor analysis through the SEM technique.

    3.1.Second Wave Survey Results

    The assessment tool can explain large percentages of the variability in the different main dimensions, ranging from approximately 79% to 96%. Quality as teaching achievement (88%), research capability (89%), delivery (90%), student competence (79%), continuous improvement (95%), and content (96%) can be significantly explained by the evaluation tool. The conceptual model, presented in Figure 2, is composed of six first-level quality dimensions and 19 second-level dimensions.

    The standardized regression weights can be interpreted as the correlation between the observed quality dimension and the corresponding tool (see Table 4). Except for Tenure, all standardized estimates are above 0.6, which indicates positive correlation between the dimensions and attributes, and between the dimensions and the tool (Hooper et al., 2008).

    The overall root mean square error of approximation (RMSEA = 0.037), goodness-of-fit index (GFI = 0.961), normed fit index (NFI = 0.965), comparative fit index (CFI = 0.978), and ratio of chi-square to degrees of freedom (chi-square/df = 2.663) indicate that the overall model fits the data well.
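    The two chi-square-based indices reported above follow simple closed forms, sketched below. The figures passed in the test are illustrative, not the model's actual chi-square and degrees of freedom, which the paper does not report:

```python
import math

def normed_chi2(chi2, df):
    """Relative chi-square (chi-square/df); values below 5 (or, more
    strictly, 3) are commonly read as acceptable fit."""
    return chi2 / df

def rmsea(chi2, df, n):
    """Root mean square error of approximation for a model estimated on
    n observations; values below 0.05 indicate close fit."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
```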

    According to Lei and Wu (2007), if a model fits the data well and the estimation solution is deemed proper, individual parameter estimates can be interpreted and examined for statistical significance. As seen in Table 4, the critical ratio (CR) values are greater than 1.96 for each regression weight, meaning each path is significant at the .05 level or better. All regression weight estimates are positive, with very low standard errors. This shows strong positive correlations between the dimensions, attributes, and the tool.

    3.2.Statistical Test Results

    Figure 1 shows the process flow of how the statistical analysis was carried out in this study. The succeeding sections discuss the results of the tests conducted and their interpretation.

    3.2.1.Sample Adequacy Test

    A KMO value of 0.931 was obtained, with an approximate chi-square value of 6854.218. The KMO value of 0.931 is above 0.9, which has an interpretation of “superb” sample-size adequacy (Hutcheson and Sofroniou, 1999: 224-225).

    3.2.2.Test for Uni-Dimensionality

    Table 5 shows the measures of model fit per dimension; the results indicate an acceptable fit for the model, because the indices show values very near 1.0. The relative likelihood ratio values (χ2/d.f.) were all below 5, which also translates to a good fit. The overall assessment tool from the combined perspectives of the stakeholders has a close fit, since the RMSEA value is below 0.05. Given the good showing of the index values, there is strong evidence of uni-dimensionality; therefore, it can be concluded that the model fits well and represents a reasonably close approximation in the population.

    3.2.3.Test for Reliability

    The Cronbach alpha values computed in the 1st and 2nd wave survey were already discussed in the previous sections. The alpha values were greater than the required prerequisite of 0.8, which indicates good reliability (Abdullah, 2006). It can be concluded that the constructs/ dimensions are internally consistent and have satisfactory reliability values in their original form. There is reasonable confidence that the assessment tool reliably measures the concept of service quality of HEIs.

    3.2.4.Test for Content Validity

    Content validity depends on the judgment of experts examining the contents to ensure that the construct is adequately covered (Kimberlin and Winterstein, 2008). In this study, experts commented during the development phases of the 1st and 2nd wave survey questionnaires. In the 1st wave, the expert panel specifically looked at the collection of quality dimensions and their corresponding definitions, which were based on the review of literature. In the 2nd wave, the expert panel specifically looked at the outcomes of the FGD series and compared them with their own expert views. The participants of the FGD series were well-chosen individuals/stakeholders with a good understanding and knowledge of quality in public HEIs. All these measures were taken to maximize the content validity of the survey instrument by eliminating all possible options for misinterpretation of questions and avoiding any key omissions of established theories.

    3.2.5.Test for Construct Validity

    According to Abdullah (2006), in assessing construct validity, it is recommended to test convergent validity and discriminant validity. Looking at the results of the CFA, the overall model (see Table 4) passed the convergent validity test, since the factor loadings are greater than 0.5, which is acceptable (Janssens et al., 2008); almost all of the factor loadings exceed 0.7, which Janssens et al. (2008) consider the recommended threshold.

    The square root of the AVE value must be obtained for the discriminant validity analysis and must be greater than the correlation coefficient values (Bertea and Zait, 2011). Table 6 shows the comparison of the overall square root of the AVE against the correlation coefficient for each quality dimension. It can be seen that the value of 0.773 is greater than the correlation coefficient for each quality dimension; thus, discriminant validity is supported.
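    The quantities involved in this construct validity check can be computed from the standardized factor loadings. The sketch below (with illustrative loadings and correlations, not the study's values) computes the AVE, the composite reliability, and the Fornell-Larcker comparison of the square root of the AVE against inter-construct correlations:

```python
import math

def ave(loadings):
    """Average variance extracted from standardized factor loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """Composite reliability (CR) from standardized factor loadings."""
    s = sum(loadings)
    err = sum(1.0 - l ** 2 for l in loadings)   # error variances
    return s ** 2 / (s ** 2 + err)

def discriminant_ok(loadings, correlations_with_others):
    """Fornell-Larcker criterion: sqrt(AVE) must exceed every correlation
    of the construct with the other constructs."""
    return math.sqrt(ave(loadings)) > max(correlations_with_others)
```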

    Moreover, this indicates that each latent variable is able to account for most of the variance that comes from its own group of quality variables, and not mainly from other groups or other latent variables, and most importantly, not from unmeasured influences or unknown errors (Farrell and Rudd, 2009).


    Aside from soliciting views from parents and administrators, the multi-perspective assessment tool (MPAT) caters not only to the needs of students as the customer of focus, but also to the faculty members, who are considered the main internal customers and are recognized to be significant for maintaining internal service quality.

    The MPAT provides items that explicitly recognize the technical requirements (i.e., Teaching Achievement, Research Capability, Content, and Student Competence) needed by a competitive HEI, specifically a research university. These technical requirements are not explicitly stated in HEDPERF, which makes the MPAT more appropriate and more balanced.

    The MPAT, a 71-item tool, is hierarchical, which makes it suitable for three levels of development and management: strategic, tactical, and operational. Administrators can look at the quality dimensions, which provide inputs and understanding at a macro level for strategic planning; the quality attributes (the sub-level of quality dimensions) serve tactical planning; and the quality variables provide specificity that can be helpful at the operational level.


    The model developed was further put into practical use as an assessment tool in the evaluation of service quality of public higher education institutions in the Philippines.

    The assessment tool is divided into four (4) parts; please see Appendix E. The first part asks the respondents about the PERFORMANCE of the institution by rating how much they agree (or disagree) with statements about the performance of the services provided by the institution. The 2nd and 3rd parts ask the respondents how much they agree (or disagree) with statements about the IMPORTANCE of the service attributes and dimensions; these correspond to the 19-item and 6-item parts of the survey questionnaire, respectively. Lastly, part four asks the respondents to provide an OVERALL PERFORMANCE rating of the service quality of the institution.

    The importance and performance ratings can be further used in the Importance-Performance Analysis as developed by Martilla and James (1977). The data points are plotted in an IP matrix; the x-axis is for the Performance, while the y-axis is for Importance. The data points that fall in a specific quadrant have a specific interpretation.

    • Quadrant I (Concentrate Here): The quality dimensions/ attributes are perceived to have high importance, but the performance is low. Improvement efforts should be concentrated on those that fall in this quadrant. This quadrant is located in the top left.

    • Quadrant II (Keep up the Good Work): The quality dimensions/attributes are perceived to have high importance and high performance. Therefore, the institution should maintain its efforts. This quadrant is located in the top right.

    • Quadrant III (Low Priority): The quality dimensions/ attributes are perceived to have low importance and low performance ratings. Even though these attributes/dimensions have low performance ratings, the institution should be careful about spending resources to improve them because of their low importance. This quadrant is located in the bottom left.

    • Quadrant IV (Possible Overkill): The quality dimensions/attributes are perceived to have a high performance rating but low importance or priority. Institutions should consider diverting resources or effort away from this quadrant toward the attributes/dimensions located in Quadrant I. This quadrant is located in the bottom right.
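The four-quadrant rule above can be expressed as a small classifier. A minimal sketch, with the crosshairs set at the grand means of the performance and importance ratings; the ratings below are illustrative, not the study's results:

```python
def ipa_quadrant(performance, importance, perf_mid, imp_mid):
    """Classify one dimension/attribute into a Martilla-James IPA quadrant.

    The crosshairs (perf_mid, imp_mid) are commonly the grand means of
    the performance and importance ratings."""
    if importance >= imp_mid:
        return ("I: Concentrate Here" if performance < perf_mid
                else "II: Keep up the Good Work")
    return ("III: Low Priority" if performance < perf_mid
            else "IV: Possible Overkill")

# Illustrative (performance, importance) means per dimension.
ratings = {"Delivery": (3.1, 4.6), "Content": (4.4, 4.5),
           "Research Capability": (2.9, 3.0), "Student Competence": (4.2, 3.1)}
perf_mid = sum(p for p, _ in ratings.values()) / len(ratings)  # 3.65
imp_mid = sum(i for _, i in ratings.values()) / len(ratings)   # 3.8
for dim, (p, i) in ratings.items():
    print(dim, "->", ipa_quadrant(p, i, perf_mid, imp_mid))
```

With these numbers, Delivery lands in Quadrant I (important but underperforming) and Student Competence in Quadrant IV (performing well on a low-importance dimension).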

    The general guideline on how to use the assessment tool is presented in Table 7. The guideline can be used to identify improvement areas per customer group/stakeholder, so each customer group can have its own summarized results presented in its own IP matrix. Moreover, the IP matrix can be created for quality dimensions or for sub-level dimensions (attributes). Note that an IP matrix presenting sub-level dimensions (attributes) must use the importance ratings of the quality attributes, and likewise an IP matrix of quality dimensions must use the importance ratings of the quality dimensions, not those of the quality attributes.
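Summarizing results per customer group reduces to averaging each group's performance and importance ratings per dimension before plotting. A minimal sketch with made-up responses; the tuple layout and group names are assumptions for illustration, not the tool's actual data format:

```python
from collections import defaultdict

# Hypothetical responses: (stakeholder group, dimension, performance, importance)
responses = [
    ("student", "Delivery", 3, 5), ("student", "Delivery", 4, 4),
    ("faculty", "Delivery", 2, 5), ("faculty", "Content", 5, 4),
    ("student", "Content", 4, 4),
]

def ip_points(responses, group):
    """Mean (performance, importance) per dimension for one stakeholder
    group, i.e. the points to plot on that group's own IP matrix."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for g, dim, perf, imp in responses:
        if g == group:
            s = sums[dim]
            s[0] += perf; s[1] += imp; s[2] += 1
    return {dim: (s[0] / s[2], s[1] / s[2]) for dim, s in sums.items()}

print(ip_points(responses, "student"))
# {'Delivery': (3.5, 4.5), 'Content': (4.0, 4.0)}
```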


    There are two (2) major areas for further study:

    The model can be streamlined further by eliminating quality attributes or variables that fall outside the acceptable thresholds. Such elimination was not done in this study because the overall model already yields values within the thresholds dictated by the literature; the overall model is therefore already statistically acceptable.

    Given the significant differences in nature, activities, resources, and focus, it is recommended to develop a corresponding model for private higher education institutions. This study focused on Philippine public higher education institutions; the MPAT can thus serve as a starting framework, with further quality dimensions and other factors included (or excluded) and then tested in a private institutional setting.


    The following were done to achieve the objectives of this study.

    • 1) Key service quality dimensions relevant to HEIs in the Philippines were identified through a review of the literature, including journal articles, textbooks, web sources, and existing published works from CHED and accrediting agencies.

    • 2) The degree of importance of each of the initial 30 service quality dimensions in enhancing service quality was determined. The survey questionnaire was subjected to expert review and comments, and pilot testing was conducted to ensure its content validity.

    • 3) The key service quality attributes/variables under the identified important service quality dimensions were determined through a series of focus groups attended by selected stakeholder representatives (students, parents, and faculty). The output of the focus groups was reviewed by a panel of experts composed of top-level administrators. The finalized output became the conceptual model.

    • 4) The degree of importance of each individual service quality attribute/variable under each service quality dimension was determined through a second-wave survey. The questionnaire was validated by experts beforehand, and its reliability was tested through pilot testing before the full-scale survey was carried out.

    • 5) Confirmatory factor analysis (CFA) through the SEM technique was used to validate the model. The model was further tested for, and passed, uni-dimensionality, reliability, and theoretical validity (content, convergent, and discriminant).

    • 6) The developed model, named MPAT, translates into a 71-item assessment tool that public higher education institutions can use to assess their service quality.
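The convergent-validity check in step 5 can be illustrated with a short computation: the average variance extracted (AVE) of a dimension is the mean of the squared standardized loadings of its items, with AVE ≥ 0.5 as the commonly used threshold. The loadings below are illustrative, not the study's estimates:

```python
def ave(standardized_loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    return sum(l * l for l in standardized_loadings) / len(standardized_loadings)

# Illustrative standardized loadings for one quality dimension.
loadings = [0.72, 0.81, 0.68, 0.75]
print(round(ave(loadings), 2), ave(loadings) >= 0.5)  # 0.55 True
```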

    Given the abovementioned procedures and corresponding outputs, it is concluded that this study has achieved its objectives. A summarized flow chart of how the hierarchical model was developed, together with the corresponding objectives, is shown in Figure 3. Figure 3 also shows how the assessment tool was derived from the developed model.


    The authors would like to express their gratitude to Prof. Jazmin C. Tangsoc, Prof. Bryan O. Gobaco, and Dr. Eppie E. Clark, who served as the guidance committee members for this project. The authors also thank Dr. George A. Lu for his assistance with structural equation modeling using the SPSS-AMOS software.



    Process flow of the statistical analysis.


    Hierarchical model of Philippine public HEI service quality.


    Flow Chart of the development of the hierarchical model of service quality of Philippine public HEIs.


    Review of literature of the thirty quality dimensions (1/2)

    First wave survey demographics and response rate per stakeholder

    Second wave survey demographics and response rate per stakeholder

    Standardized estimates of parameters

    Uni-dimensionality of the model per quality dimensions

    AVE analysis

    **Correlation is significant at the 0.01 level (2-tailed).

    General Guide in using MPAT


    1. AbdullahF. (2006) Measuring service quality in higher education: HEdPERF versus SERVPERF. , Mark. Intell. Plann., Vol.24 (1) ; pp.31-47
    2. AngellR.J. HeffernanT.W. MegicksP. (2008) Service quality in postgraduate education. , Qual. Assur. Educ., Vol.16 (3) ; pp.236-254
    3. AthiyamanA. (1997) Linking student satisfaction and service quality perceptions: The case of university education. , Eur. J. Mark., Vol.31 (7) ; pp.528-540
    4. BerteaP. ZaitA. (2011) Methods for testing discriminant validity. , Management & Marketing-Craiova, Vol.2 ; pp.217-224
    5. ChenS.H. YangC.C. ShiauJ.Y. (2006) The application of balanced scorecard in the performance evaluation of higher education. , TQM Mag., Vol.18 (2) ; pp.190-205
    6. CroninJ.J. TaylorS.A. (1992) Measuring service quality: A reexamination and extension. , J. Mark., Vol.56 (3) ; pp.55-68
    7. CrosbyP.B. (1984) Quality without Tears: The Art of Hassle Free Management., McGraw-Hill,
    8. DemingW.E. (1995) Out of Crisis., MIT Press,
    9. DewJ.R. (2009) Quality issues in higher education. , J. Qual. Particip., Vol.32 (1) ; pp.4-9
    10. FarrellA.M. RuddJ.M. (2009) http://
    11. FordJ.B. JosephM. JosephB. (1999) Importanceperformance analysis as a strategic tool for service marketers: The case of service quality perceptions of business students in New Zealand and the USA. , J. Serv. Mark., Vol.13 (2) ; pp.171-186
    12. GallifaJ. BatalléP. (2010) Student perceptions of service quality in a multi-campus higher education system in Spain. , Qual. Assur. Educ., Vol.18 (2) ; pp.156-170
    13. GarvinD.A. (1988) Managing Quality: The Strategic and Competitive Edge, Free Press,
    14. GreenD. (1988) What Is Quality in Higher Education?, The Society for Research into Higher Education & Open University Press,
    15. HillF.M. (1995) Managing service quality in higher education: The role of the student as primary consumer. , Qual. Assur. Educ., Vol.3 (3) ; pp.10-21
    16. HooperD. CoughlanJ. MullenM.R. (2008) Structural equation modelling: Guidelines for determining model fit. , Electron. J. Bus. Res. Methods, Vol.6 (1) ; pp.53-60
    17. HutchesonG.D. SofroniouN. (1999) The multivariate social scientist: Introductory statistics using generalized linear models., Sage Publications,
    18. JanssensW. WijnenK. De PelsmackerP. Van KenhoveP. (2008) Marketing research with SPSS., Pearson,
    19. JosephM. JosephB. (1997) Service quality in education: A student perspective. , Qual. Assur. Educ., Vol.5 (1) ; pp.15-21
    20. KimberlinC.L. WintersteinA.G. (2008) Validity and reliability of measurement instruments used in research. , Am. J. Health Syst. Pharm., Vol.65 ; pp.2276-2284
    21. KumarapperumaN. (2014) Development of the service quality and performance model for independent colleges in the UK,
    22. Le BlancG. NguyenN. (1997) Searching for excellence in business education: An exploratory study of customer impressions of service quality. , Int. J. Educ. Manag., Vol.11 (2) ; pp.72-79
    23. LehtinenU. LehtinenJ.R. (1991) Two approaches to service quality dimensions. , Serv. Ind. J., Vol.11 (3) ; pp.287-303
    24. LeiP.W. WuQ. (2007) Introduction to structural equation modeling: Issues and practical considerations. , Educ. Meas., Vol.26 (3) ; pp.33-43
    25. LingK.C. ChaiL.T. PiewT.H. (2010) The ‘insideout’and ‘outside-in’approaches on students’ perceived service quality: An empirical evaluation. , Manag. Sci. Eng., Vol.4 (2) ; pp.1-26
    26. MartillaJ.A. JamesJ.C. (1977) Importanceperformance analysis. , J. Mark., Vol.41 (1) ; pp.77-79
    27. MavondoF. ZamanM. AbubakarB. (2000) Student satisfaction with tertiary institution and recommending it to prospective students , Proceedings of the Conference de ANZMAC-Gold Coast Australia, ; pp.787-792
    28. MoralesM. CalderónL.F. (1999) Assessing service quality in schools of business: Dimensions of service quality in continuing professional education (CPE). , J. Econ. Finance Adm. Sci., Vol.5 (9-10) ; pp.125-141
    29. OaklandJ.S. (2003) Total quality management: Text with cases., Routledge,
    30. OwliaM.S. AspinwallE.M. (1996) A framework for the dimensions of quality in higher education. , Qual. Assur. Educ., Vol.4 (2) ; pp.12-20
    31. ParasuramanA. ZeithamlV.A. BerryL.L. (1988) SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. , J. Retailing, Vol.64 ; pp.12-40
    32. ParasuramanA. ZeithamlV.A. BerryL.L. (1985) A conceptual model of service quality and its implications for future research. , J. Mark., Vol.49 (4) ; pp.41-50
    33. RodieA.R. KleineS.S. (2000) Handbook of Services Marketing and Management., Sage Publication, ; pp.205-213
    34. RischR.A. KleineS.S. (2000) Handbook of Services Marketing and Management., Sage Publication, ; pp.111-125
    35. RuizA.J. Junio-SabioC. (2012) Quality Assurance in Higher Education in the Philippines. , Asian Journal of Distance Education, Vol.10 (2) ; pp.63-70
    36. RussellM. (2005) Marketing education: A review of service quality perceptions among international students. , Int. J. Contemp. Hosp. Manag., Vol.17 (1) ; pp.65-77
    37. SadiqS.M. ShaikhN.M. (2004) Quest for excellence in business education: A study of student impressions of service quality. , Int. J. Educ. Manag., Vol.18 (1) ; pp.58-65
    38. ŞandruI.M. (2008) Dimensions of quality in higher education-some insights into quality-based performance measurement. , Synergy, Vol.4 (2) ; pp.31-40
    39. SoutarG. McNeilM. (1996) Measuring service quality in a tertiary institution. , J. Educ. Adm., Vol.34 (1) ; pp.72-82
    40. SumaediS. BaktiG.M. MetasariN. (2012) An empirical study of state university students’ perceived service quality. , Qual. Assur. Educ., Vol.20 (2) ; pp.164-183