ISSN : 1598-7248 (Print)
ISSN : 2234-6473 (Online)
Industrial Engineering & Management Systems Vol.18 No.4 pp.676-684
DOI : https://doi.org/10.7232/iems.2019.18.4.676

Usability Evaluation and Design of Student Information System Prototype to Increase Student’s Satisfaction (Case Study: X University)

Studiyanti*, Saraswati Azmi, Abror
Department of Industrial Engineering, Faculty of Industrial Technology, Universitas Trisakti, West Jakarta, Indonesia
Graduated from Department of Industrial Engineering, Faculty of Industrial Technology, Universitas Trisakti, West Jakarta, Indonesia
Corresponding Author, E-mail: linda.studiyanti@gmail.com
Received May 7, 2019; Revised September 15, 2019; Accepted October 18, 2019

ABSTRACT


A preliminary survey of students using the Student Information System (SIS) at X University showed that the satisfaction level with the SIS was low. The main purpose of this study is to increase student satisfaction by evaluating the usability of the SIS. An experiment was conducted with thirty students of X University, categorized as expert users (ages 17 to 23), following the usability testing method. The effectiveness, efficiency, and satisfaction of the SIS were measured by the Task Successful Rate, mouse-clicking count, and System Usability Scale (SUS) score. During the experiment, the participants were given several scenarios. The results proved that the SIS did not work well enough to satisfy the students/participants, and improvements needed to be tested on the same participants. The testing was conducted using a prototype designed with the HHS Usability Guidelines. Through the usability testing, effectiveness increased from 58% to 85%, efficiency increased from 66% to 92%, and the satisfaction score increased from 53.83 to 70.67. Specific matters that influence the subjectivity of usability testing are noted and analyzed so that this study can serve as a reference for further usability evaluations of similar student information systems.





    1. INTRODUCTION

    The Student Information System (SIS), one of the most important learning systems in University X, functions as an interface between lecturers and their students in academic communication. Implemented as an online website, University X's SIS facilitates college information, student grade information, course and practicum registration, and student-lecturer consultations.

    Students, as the customers of the University, have every right to be satisfied with an effective and efficient SIS, so that academic communication can be facilitated appropriately. Pre-interviews with students in January 2018 showed that students were generally not satisfied with the SIS. This became the foundation of this study's purpose: to evaluate University X's current SIS to confirm the early hypothesis of low student satisfaction, and to improve the SIS so as to increase student satisfaction as users and customers.

    Satisfaction, effectiveness, and efficiency are highly related to the usability aspect (ISO, 2018). According to Nielsen (1993), usability is a quality attribute that judges how easily an interface can be used and how much user convenience can be raised when using it. This corresponds to the early symptoms, namely student complaints about inconvenience and dissatisfaction with the SIS, which makes the problem highly relevant to evaluating and improving the SIS's usability.

    2. METHODS

    The usability testing method is used in this study because of its compatibility with the objective usability parameters of ISO (2018), which are also evaluated here. Moreover, the advantage of usability testing compared to other methods is its ability to evaluate how well a product fulfills specific criteria (Rubin and Chisnell, 2008) through the symptoms indicated by early studies. In this experiment, usability testing was conducted three times: pilot usability testing, early usability testing, and final usability testing. A prototyping step, run on an offline server, was carried out between the early and final usability testing. This study is limited to evaluating SIS usability accessed through a PC or laptop; SIS usage on mobile phones is not discussed, because the SIS window does not scale proportionally on a phone screen, so it can already be concluded that it would not be usable there.

    2.1 Participants

    The students of the Industrial Technology Faculty, the faculty with the most students in University X, were chosen as participants to represent the other faculties. The faculty consists of four majors: Industrial Engineering, Mechanical Engineering, Information System, and Electrical Engineering. The participants chosen are expert users who have used the SIS for at least one semester, are proficient in using a laptop or computer, and do not have any significant eye problems. There were 30 participants, 15 males and 15 females, aged 17-23 years, following Macefield's (2009) consideration that 30 people would find 95% of usability problems. The number of participants from each major is proportional to that major's share of the faculty's total student count, as illustrated in the sketch below: Industrial Engineering, 15 people (50%); Mechanical Engineering, 8 people (27%); Information System, 3 people (10%); and Electrical Engineering, 4 people (13%). The participants filled in an identity form before usability testing began.
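    As an illustration of this proportional allocation, the following minimal Python sketch computes quotas with a largest-remainder rounding step so they sum to 30. The enrollment figures are hypothetical, chosen only so the output matches the split reported above.

```python
from math import floor

def proportional_quota(enrollment: dict[str, int], total_participants: int) -> dict[str, int]:
    """Allocate participant quotas proportionally to enrollment,
    handing leftover seats to the largest fractional remainders."""
    total = sum(enrollment.values())
    raw = {m: total_participants * n / total for m, n in enrollment.items()}
    quota = {m: floor(v) for m, v in raw.items()}
    leftover = total_participants - sum(quota.values())
    for m in sorted(raw, key=lambda k: raw[k] - quota[k], reverse=True)[:leftover]:
        quota[m] += 1
    return quota

# Hypothetical enrollment figures (not from the paper)
enrollment = {"Industrial Engineering": 500, "Mechanical Engineering": 270,
              "Information System": 100, "Electrical Engineering": 130}
print(proportional_quota(enrollment, 30))
# {'Industrial Engineering': 15, 'Mechanical Engineering': 8,
#  'Information System': 3, 'Electrical Engineering': 4}
```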

    2.2 Sampling Technique

    The sampling technique used in this study is non-probability sampling, specifically purposive sampling, matching the expert-user participant criteria. Participants are Industrial Technology Faculty students who have attended lectures for at least one semester, so they are proficient in using the SIS.

    2.3 Objective Performance and Measuring Instrument

    Effectiveness, efficiency, and satisfaction were chosen as the objective performance measures and are measured using three instruments. Table 1 shows the relationship between the objective usability parameters according to ISO (2018) and the instruments used to measure them.

    2.4 Usability Testing Protocol

    Usability testing in this study used protocols from previous studies to reduce the subjectivity of moderators and participants related to scenario design and the learning effect (Studiyanti and Yassierli, 2016), the language-adapted SUS questionnaire (Sharfina and Santoso, 2017), and the testing environment (Kim, 2016).

    2.4.1 Task, Scenario, and Learning Effect

    A task is an activity given to the participants in usability testing, while a scenario is a combination of several tasks. All of the SIS's functions were mapped into a function tree, and every function was then formulated into one task matching the usability problems found in pilot testing. The functions facilitated by the SIS can be seen in Table 2.

    Scenarios for the early and final usability testing were played from a recording of the moderator's voice, which also reduces comprehension subjectivity due to intonation. Moreover, scenarios are better read to the participants than read by the participants themselves, so that the moderator can control the duration of the usability testing (Rubin and Chisnell, 2008). The scenarios remain realistic and feasible and contain no clue about the order in which tasks should be finished (Nielsen, 1993). Every time a task finished being read, the participant did what was asked while the moderator noted the number of mouse clicks and the participant's comments. Table 3 shows the optimal mouse-clicking per task for the early testing.

    The order of tasks in each scenario was compiled with the learning effect in mind. Following Studiyanti and Yassierli (2016), the final usability testing scenario has a different task completion order from the early usability testing. Through that difference, the participants' memorability would not play a dominant role in determining task completion efficiency.

    2.4.2 System Usability Scale

    The System Usability Scale (SUS) is a usability measuring instrument whose purpose is to measure a person's subjective satisfaction with the usability of a product through 10 questions graded on a Likert scale (Brooke, 2013). In this study, after the participants finished working on the scenario, they filled in the SUS questionnaire that had been translated into Bahasa Indonesia by Sharfina and Santoso (2017) using cross-cultural adaptation, so that misinterpretation in answering the SUS could be avoided.
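    For reference, Brooke's standard SUS scoring scheme maps the 10 Likert responses to a 0-100 score: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5. A minimal sketch (the sample responses are hypothetical):

```python
def sus_score(responses: list[int]) -> float:
    """Score one SUS questionnaire of 10 Likert items (1-5).
    Odd items contribute (r - 1), even items (5 - r); the sum
    is scaled by 2.5 to a 0-100 range (Brooke's scheme)."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly 10 responses in the range 1-5")
    return 2.5 * sum((r - 1) if i % 2 == 0 else (5 - r)  # i = 0 is item 1
                     for i, r in enumerate(responses))

# Hypothetical responses from one participant
print(sus_score([3, 2, 4, 2, 3, 3, 4, 2, 3, 2]))  # 65.0
```

    Averaging such per-participant scores across the 30 participants yields the study's reported satisfaction figures (53.83 before and 70.67 after improvement).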

    2.4.3 Testing Environment

    Usability testing for each participant was done at a different time and place, but each setting was kept comfortable and quiet to reduce distractions during the experiments. This also avoids significant differences in usability problem findings compared to laboratory experiments, as shown by previous studies (Horvath et al., 2007).

    2.5 Prototyping

    The evaluation and usability problem findings from early usability testing became input on how to improve the SIS. Improvements could not be implemented directly in the SIS, since that would disrupt academic activities at University X. Therefore, an SIS prototype was made in low-fidelity form, accessible through a Local Area Network (LAN), with the HHS Usability Guidelines as the website improvement guide (Shneiderman and Leavitt, 2003).

    3. RESULTS

    The compiled data consist of pilot testing, early usability testing, and final usability testing data. Pilot testing surfaced several usability problems put forward by students as users: the visual design of the SIS was not user-friendly and unattractive, 'Academic Information' and 'Home' had the same function, font sizes were not proportional to the window width as a whole, and the language switched inconsistently between Bahasa Indonesia and English. The pilot testing input became the foundation for the task and scenario formulation described in Section 2.4.1.

    3.1 Early Usability Testing

    Early usability testing of the participants was guided by a moderator using the usability testing protocol explained in Section 2.4, to measure the objective performance in Table 1. The Successful Rate represents the current level of effectiveness. Task outcomes are categorized into three classes: (1) Successful (S), (2) Partial Success (P), and (3) Failure (F). Successful and Failure indicate that the participant did or did not complete the task, while Partial Success indicates that the task was completed but with a few errors in the process. The Successful Rate recapitulation for early usability testing can be seen in Table 5. On average, 58% of attempts were Successful without error, 26% were Partial Successes, and 16% were Failures. These data indicate that the effectiveness of SIS usage is very low, as reflected by the 58% Successful-without-error score. Moreover, the rate for Task 4 (uploading a TOEFL test result) is 3% and for Task 10 (looking for next semester's course information) is 0%, which means not a single participant was able to complete Task 10; the SIS must be improved to be more effective.
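    A minimal sketch of how such per-task outcome rates can be tallied (the outcome list below is hypothetical; the paper reports only aggregated percentages):

```python
from collections import Counter

def outcome_rates(outcomes: list[str]) -> dict[str, float]:
    """Percentage of Successful (S), Partial Success (P), and
    Failure (F) outcomes for one task across all participants."""
    counts = Counter(outcomes)
    return {cat: round(100 * counts.get(cat, 0) / len(outcomes), 1)
            for cat in ("S", "P", "F")}

# Hypothetical outcomes for one task across 30 participants
print(outcome_rates(["S"] * 17 + ["P"] * 8 + ["F"] * 5))
# {'S': 56.7, 'P': 26.7, 'F': 16.7}
```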

    To measure the SIS's efficiency, the number of mouse clicks is compared to the optimal mouse-clicking for each task. Optimal mouse-clicking was obtained through task completion simulation according to steps set beforehand, as seen in Table 3. If a participant did not finish the task within 3 clicks, following the 3-click rule (U.S. Department of Health & Human Services, 2019), the participant was considered to have failed to complete the task. The closer the number of mouse clicks to the optimal mouse-clicking, the more efficient the system. Table 6 shows that the lowest mouse-clicking percentage scores belong to Task 10 (looking for next semester's course information), Task 4 (uploading a TOEFL test result), and Task 5 (requesting a meeting with an assigned lecturer). On average, the SIS efficiency score across tasks is 66%, in other words, not efficient enough.
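    The paper does not spell out the exact per-task efficiency formula, so the sketch below is one plausible reading, assuming the score is the share of participants whose click count stays within the optimal count plus the 3-click allowance; the click counts are hypothetical.

```python
def efficiency_rate(clicks: list[int], optimal: int, slack: int = 3) -> float:
    """Share of participants whose click count stays within the
    optimal count plus an allowed slack (tied here to the 3-click
    rule). This formula is an assumption, not the paper's own."""
    within = sum(1 for c in clicks if c <= optimal + slack)
    return round(100 * within / len(clicks), 1)

# Hypothetical click counts for one task whose optimal path is 2 clicks
print(efficiency_rate([2, 2, 3, 4, 6, 2, 5, 3, 2, 8], optimal=2))  # 80.0
```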

    After the participants finished completing the scenario, they were given the SUS questionnaire to fill in. Across the 30 participants, the average SUS score is 53.83 (Figure 1), in zone E, which is in the marginal-low acceptability range, so participants' satisfaction still had to be increased.

    The effectiveness, efficiency, and satisfaction evaluation results for University X's SIS show that all three scores are still below target, so the SIS is not yet usable and needs to be improved. While early usability testing was in progress, participants were given a chance to think aloud about things they felt needed to be fixed or improved. The moderator then took notes of these usability problems and categorized them (Table 4).

    3.2 Prototyping

    The usability problems categorized in Table 4 became the input for the SIS prototype, whose design improvements are listed in Table 7; examples of the improved screens are shown in Figure 2 (improvement for UP6) and Figure 3 (improvement for UP9). This is one of the advantages of the usability testing method: recurring problems can be evaluated and fixed the way the end users want.

    3.3 Final Usability Testing

    The improved prototype was tested on the same participants. To reduce the learning effect from the task completion order of the early usability testing scenario, the task completion order was randomized using a Latin square, so that each of the 30 participants performed the scenario with a different task completion order. The Successful Rate in final usability testing is 85%, and on average only 1% of the participants failed to complete a task. In conclusion, the prototype's effectiveness increased.
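    A minimal sketch of Latin-square counterbalancing, using a simple cyclic construction (the paper does not describe its exact randomization, and a cyclic square of order 10 yields only 10 distinct orders, so treat this purely as an illustration):

```python
def latin_square_orders(n_tasks: int, n_participants: int) -> list[list[int]]:
    """Cyclic Latin-square task orders: participant p starts at
    task p+1 and wraps around, so each task appears once in each
    position across any n_tasks consecutive participants."""
    return [[(p + t) % n_tasks + 1 for t in range(n_tasks)]
            for p in range(n_participants)]

# First three of 30 participants, 10 tasks each
for order in latin_square_orders(10, 30)[:3]:
    print(order)
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# [2, 3, 4, 5, 6, 7, 8, 9, 10, 1]
# [3, 4, 5, 6, 7, 8, 9, 10, 1, 2]
```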

    The task completion steps changed according to the prototype improvements explained in Table 7, so the optimal mouse-clicking counts also changed. The lowest percentage of participants who stayed within the maximum limit of the 3-click rule is 70%, for Task 10, followed by Task 4 with a mouse-clicking score of 87%. After averaging all scores (Table 9), the mouse-clicking percentage across all tasks is 92.09%. Thus, efficiency also increased.

    For final usability testing, the SUS score was calculated after the testing finished. The resulting score is 70.67, which falls in the 'Acceptable' range with a 'Good' adjective rating on grade scale C. It can be concluded that the improvements made in the prototype design succeeded in increasing student satisfaction as users/participants.

    4. DISCUSSION

    4.1 Comparison of Existing Condition and SIS Improvement

    Table 10 shows that every objective performance measure of ISO (2018) measured in the early and final usability testing increased, except for Task 9 (downloading the examination card), due to a participant's technical failure; the overall average still increased. The participants' satisfaction increased as well, from 53.83 to 70.67. The effectiveness and efficiency evaluation results of the early usability testing (Table 5 and Table 6) agree in marking Task 4 (uploading a TOEFL test result) and Task 10 (looking for next semester's course information) as the tasks with the lowest scores, indicating they were the most difficult tasks for participants to complete. After the prototype was designed and tested in the final usability testing, no participant failed to complete Task 4 (Table 8), but the mouse-clicking score for Task 4 is still 87% (Table 9). In other words, the improvements applied to the prototype are already effective in helping participants avoid work errors, but efficiency still needs to be improved so that participants can complete the task more quickly.

    The Failure percentage for Task 10 in the final usability testing is 10% (Table 8), meaning 3 participants still failed to complete Task 10. So, although the Failure percentage decreased, improvements still need to be made to increase effectiveness. Task 10's completion efficiency increased from 0% to 70%, but this score is still the lowest among the task efficiency scores (Table 9).

    5. CONCLUSION

    The usability testing experiments proved that the SIS quality was insufficient, as measured through the objective performance of effectiveness, efficiency, and satisfaction. With the improvements applied, effectiveness increased from 58% to 85%, efficiency increased from 66% to 92%, and satisfaction increased from 53.83 to 70.67; in conclusion, the study's purpose has been achieved.

    ACKNOWLEDGMENTS

    The authors are grateful to Trisakti University's Research Faculty, which supported this study financially.

    Figure

    Figure 1. SUS score (early testing) (Kaikkonen et al., 2005).

    Figure 2. Improvement prototype for UP6.

    Figure 3. Improvement prototype for UP9.

    Table

    Table 1. Objective parameters and their measuring instruments

    Table 2. SIS functions

    Table 3. Optimal mouse-clicking per task (early testing)

    Table 4. Usability testing problems (early testing)

    Table 5. Successful rate (early testing)

    Table 6. Mouse-clicking score (early testing)

    Table 7. SIS improvement on prototype

    Table 8. Successful rate (final testing)

    Table 9. Mouse-clicking score (final testing)

    Table 10. Comparison of existing condition and SIS improvement

    REFERENCES

    1. Brooke, J. (2013), SUS: A retrospective, Journal of Usability Studies, 8(2), 29-40.
    2. Horvath, G., Moss, G., Gunn, R., and Vass, E. (2007), Gender and web design software, Journal of Systemics, Cybernetics, and Informatics, 5(6), 22-27.
    3. ISO 9241-11 (2018), Ergonomics of human-system interaction - Part 11: Usability: Definitions and concepts [Online], https://www.iso.org/obp/ui/#iso:std:iso:9241:-11:en.
    4. Kaikkonen, A., Kekäläinen, A., Cankar, M., Kallio, T., and Kankainen, A. (2005), Usability testing of mobile applications: A comparison between laboratory and field testing, Journal of Usability Studies, 1(1), 4-16.
    5. Kim, I. (2016), Cognitive ergonomics and its role for industry safety enhancements, Journal of Ergonomics, 6(4), 17-19.
    6. Macefield, R. (2009), How to specify the participant group size for usability studies: A practitioner's guide, Journal of Usability Studies, 5(1), 34-45.
    7. Nielsen, J. (1993), Usability Engineering, Academic Press, Boston, MA.
    8. Rubin, J. and Chisnell, D. (2008), Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests (2nd Ed.), Wiley Publishing, New York.
    9. Sharfina, Z. and Santoso, H. B. (2017), An Indonesian adaptation of the system usability scale (SUS), Proceedings of the 8th International Conference on Advanced Computer Science and Information Systems, IEEE, Malang, Indonesia, 145-148.
    10. Shneiderman, B. and Leavitt, M. (2003), HHS Usability Guidelines, U.S. Department of Health & Human Services, Available from: https://usability.gov/sites/default/files/documents/guidelines_book.pdf.
    11. Studiyanti, L. and Yassierli (2016), Analyzing low vision's accessibility and usability problems focusing on the use of Koran application in touch screen devices, Proceedings of Quality in Research (QIR), Lombok, Indonesia, 1138-1143.
    12. U.S. Department of Health & Human Services (2019), 3-click rule [Online], Available from: https://usability.gov/what-and-why/glossary/3-click-rule.html.