A Visualisation Dashboard for Contested Collective Intelligence Learning Analytics to Improve Sensemaking of Group Discussion
DOI: https://doi.org/10.5944/ried.22.1.22294
Keywords: learning analytics, collective intelligence, argumentation, online discussion, information visualisations, online deliberation, sensemaking, dashboard
Abstract
The skill to take part in and contribute to debates is important for both informal and formal learning. Especially when addressing highly complex issues, it can be difficult to support learners in participating in effective group discussion and in staying abreast of all the information collectively generated during the discussion. Technology can help with engagement in and sensemaking of such large debates, for example by monitoring how healthy a debate is and by providing indicators of the distribution of participation. Contested Collective Intelligence (CCI) is a framework that aims to harness the intelligence of groups, from small to very large, with the support of structured discourse and argumentation tools. CCI tools provide a rich source of semantic data that, if appropriately processed, can generate powerful analytics of the online discourse. This study presents a visualisation dashboard with several visual analytics that show important aspects of online debates facilitated by CCI discussion tools. The dashboard was designed to improve sensemaking and participation in online debates, and it has been evaluated in two studies, a lab experiment and a field study, in the context of two Higher Education institutions. The paper reports the findings of a usability evaluation of the visualisation dashboard. The descriptive findings suggest that participants with little experience in using analytics visualisations were able to perform well on the given tasks. This is a promising result for the application of such visualisation technologies, as discourse-centric learning analytics interfaces can help to support learners' engagement with and sensemaking of complex online debates.
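The abstract refers to indicators of the distribution of participation without specifying how such an indicator might be computed. As an illustrative sketch only, and not the dashboard's actual analytic, the following Python function computes one plausible indicator, the normalised Shannon entropy of per-participant post counts; the function name and the toy data are hypothetical.

```python
import math
from collections import Counter

def participation_evenness(post_authors):
    """Normalised Shannon entropy of post counts per participant.

    Returns a value in [0, 1]: 1.0 means all participants contributed
    equally, while values near 0 indicate a debate dominated by few voices.
    """
    counts = Counter(post_authors)
    n = len(counts)
    if n <= 1:
        return 1.0  # zero or one participant: trivially "even"
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    return entropy / math.log(n)  # normalise by the maximum entropy, log(n)

# One dominant poster versus a perfectly balanced discussion
print(participation_evenness(["ana"] * 8 + ["ben", "eva"]))  # ~0.58
print(participation_evenness(["ana", "ben", "eva"] * 3))     # 1.0
```

The abstract also mentions a usability evaluation without naming the instrument here. Assuming the study used the widely adopted System Usability Scale (SUS), an assumption on our part, its standard scoring rule is straightforward to implement:

```python
def sus_score(responses):
    """Standard SUS score from ten 1-5 Likert responses.

    Odd-numbered (positively worded) items contribute response - 1,
    even-numbered (negatively worded) items contribute 5 - response;
    the summed contributions are scaled by 2.5 onto a 0-100 range.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0
```

On Bangor, Kortum, and Miller's adjective rating scale, scores in the mid-80s fall in the "excellent" range.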
License
Copyright (c) 2018 RIED. Revista Iberoamericana de Educación a Distancia

This work is licensed under a Creative Commons Attribution 4.0 International License.
The articles published in this journal are subject to the following terms:
1. The authors grant the exploitation rights of the work accepted for publication to RIED, guarantee the journal the right to be the first to publish the research undertaken, and permit the journal to distribute the published work under the license indicated in point 2.
2. The articles are published in the electronic edition of the journal under a Creative Commons Attribution 4.0 International (CC BY 4.0) license. You can copy and redistribute the material in any medium or format, adapt, remix, transform, and build upon the material for any purpose, even commercially. You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
3. Conditions for self-archiving. Authors are encouraged to disseminate electronically the OnlineFirst version (the version assessed and accepted for publication) of their articles before publication, always with reference to its publication by RIED, as this favours earlier circulation and dissemination and, with it, a possible increase in citation and reach among the academic community.

