A Visualisation Dashboard for Contested Collective Intelligence Learning Analytics to Improve Sensemaking of Group Discussion
DOI: https://doi.org/10.5944/ried.22.1.22294

Keywords: learning analytics, collective intelligence, argumentation, online discussion, information visualisations, online deliberation, sensemaking, dashboard.

Abstract
The ability to take part in and contribute to debates is important for both informal and formal learning. Especially when addressing highly complex issues, it can be difficult to support learners in participating in effective group discussion and in staying abreast of all the information collectively generated during the discussion. Technology can help with engagement in, and sensemaking of, such large debates: for example, it can monitor how healthy a debate is and provide indicators of how participation is distributed. Contested Collective Intelligence (CCI) is a framework that aims to harness the intelligence of small to very large groups with the support of structured discourse and argumentation tools. CCI tools provide a rich source of semantic data that, if appropriately processed, can generate powerful analytics of the online discourse. This study presents a visualisation dashboard with several visual analytics that show important aspects of online debates facilitated by CCI discussion tools. The dashboard was designed to improve sensemaking and participation in online debates and has been evaluated in two studies: a lab experiment and a field study conducted at two Higher Education institutions. The paper reports the findings of a usability evaluation of the visualisation dashboard. The descriptive findings suggest that participants with little experience in using analytics visualisations were able to perform well on the given tasks. This is a promising result for the application of such visualisation technologies, as discourse-centric learning analytics interfaces can help support learners' engagement with, and sensemaking of, complex online debates.
License
Copyright (c) 2018 RIED. Revista Iberoamericana de Educación a Distancia

This work is published under a Creative Commons Attribution 4.0 International License.
Works published in this journal are subject to the following terms:
1. The authors grant "RIED. Revista Iberoamericana de Educación a Distancia" non-exclusive exploitation rights over the works accepted for publication, guarantee the journal the right of first publication, and allow the journal to distribute the published works under the licence indicated in point 2.
2. Works are published in the electronic edition of the journal under a Creative Commons Attribution 4.0 International licence (CC BY 4.0). Users may copy and redistribute the material in any medium or format, and may adapt, remix, transform, and build upon the material for any purpose, even commercially. Appropriate credit must be given, a link to the licence provided, and any changes indicated. This may be done in any reasonable manner, but not in any way that suggests the licensor endorses the user or their use.
3. Self-archiving conditions. Authors are permitted and encouraged to electronically disseminate the OnlineFirst version (the version reviewed and accepted for publication) of their work before its publication, always with a reference to its publication in RIED, as this favours its earlier circulation and dissemination and thereby a possible increase in its citation and reach within the academic community. RoMEO colour: green.

