About
This service provides an interactive dialogue with the user (learner) using semantically augmented content collected with the Intelligent Content Assembly Workbench (I-CAW, presented in deliverable D.3.2). The aim of the dialogue is twofold:
- To build a user model representing the user's skills related to selected concepts from the activity (represented in an activity model ontology, AMOn).
- To facilitate meta-cognitive processes, such as reflection and awareness (revisiting past experiences and linking them to key activity aspects), and goal setting (setting personal targets to practice/acquire in a simulated environment for learning).
The interactive user modelling dialogue is planned to be used at the forethought stage (i.e. before interacting with a simulator), as described in Deliverable D2.2. The dialogue will prompt the learner to link his/her previous experience to examples of experiences contributed by others. During the dialogue, the user will be presented with example content from the collective repository created with I-CAW. Currently, the I-CAW repository includes two types of content:
- YouTube videos with example content on the activity, accompanied by comments, either extracted from YouTube or contributed by I-CAW users.
- Stories with personal experiences contributed by I-CAW users.
The user modelling dialogue in ImREAL addresses the following research questions:
- How can an activity model ontology and semantically augmented content be utilised to structure interaction episodes with a user in order to extract a user model?
- Can interaction episodes for learner modelling be used to promote meta-cognition at the forethought stage, before a user interacts with a simulated environment?
Objective
Example scenarios
Service
The dialogue uses the semantic query service developed in WP3, which takes an XML input with a list of keywords and returns heterogeneous content (comments, videos, and stories) from I-CAW linked to AMOn concepts related to the input keywords.
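The exact service interface is specified in WP3; the Python sketch below is only a minimal illustration of how a client might build the XML keyword list and read back the returned content items. The endpoint URL and the XML element names (query, keyword, item, text) are assumptions made for illustration, not part of the actual service.

# Illustrative sketch of calling the WP3 semantic query service.
# Endpoint URL and XML element names are hypothetical.
import urllib.request
import xml.etree.ElementTree as ET

def build_query(keywords):
    """Build the XML input listing the keywords to query for."""
    root = ET.Element("query")
    for kw in keywords:
        ET.SubElement(root, "keyword").text = kw
    return ET.tostring(root, encoding="unicode")

def query_icaw(endpoint, keywords):
    """Send the keyword list and return (type, text) pairs of matched content."""
    request = urllib.request.Request(
        endpoint,
        data=build_query(keywords).encode("utf-8"),
        headers={"Content-Type": "application/xml"},
    )
    with urllib.request.urlopen(request) as response:
        result = ET.fromstring(response.read())
    # Each <item> is assumed to be a comment, video, or story linked to a related AMOn concept.
    return [(item.get("type"), item.findtext("text", default=""))
            for item in result.findall("item")]

# Example use (placeholder endpoint):
# content = query_icaw("http://example.org/icaw/query", ["eye contact", "hand shaking"])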
The dialogue manager, which is the key component in the dialogue service, follows the model from OWL-OLM [Aroyo et al, 2006]. The dialogue consists of episodes, which include turns (speech acts) by the system and the user. Each dialogue episode is implemented as a dialogue game, which includes the following components (a sketch of a possible representation is given after the list):
- Type - the type of the game; at present, three game types are considered: diagnosing user competences, promoting reflection, and goal setting;
- Focus - a list of concepts from AMOn which represent what the dialogue is talking about (e.g. BODY LANGUAGE, HAND SHAKING, EYE CONTACT);
- Goal - a concept from the cognitive dimensions taxonomy which represents the target skill to be diagnosed (e.g. RECOGNISING - the user can recognise a target concept in an example presented to them; or COMPARING - the user can compare target concepts);
- Trigger - a condition indicating when the dialogue game can be invoked (e.g. a diagnostic game is invoked when the user model lacks information about skills related to the target concepts; a reflection game is triggered when the focus concepts have already been discussed and illustrated with examples during the dialogue; a goal-setting game is triggered at the end of the dialogue and serves as a transition towards the interaction with the simulator);
- Plan - a sequence of speech acts that realise the dialogue.
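As a minimal sketch, assuming a Python representation, the components above could be captured in a structure such as the following. The class and field names, the enumerated game types, and the example trigger are illustrative assumptions rather than the actual implementation.

# Illustrative sketch of a dialogue game structure; names are hypothetical.
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, Dict, List

class GameType(Enum):
    DIAGNOSE_COMPETENCES = "diagnosing user competences"
    PROMOTE_REFLECTION = "promoting reflection"
    GOAL_SETTING = "goal setting"

@dataclass
class UserModel:
    # Skill level per AMOn concept, filled in as diagnostic games are played.
    skills: Dict[str, str] = field(default_factory=dict)

@dataclass
class SpeechAct:
    speaker: str   # "system" or "user"
    act: str       # e.g. "prompt", "inform", "agree"
    content: str = ""

@dataclass
class DialogueGame:
    game_type: GameType                    # Type of the game
    focus: List[str]                       # AMOn concepts, e.g. ["EYE CONTACT"]
    goal: str                              # cognitive dimension, e.g. "RECOGNISING"
    trigger: Callable[[UserModel], bool]   # when the game can be invoked
    plan: List[SpeechAct] = field(default_factory=list)  # speech acts realising the game

# Example trigger: a diagnostic game fires when the user model lacks
# information about the focus concepts.
def needs_diagnosis(focus):
    return lambda user_model: any(c not in user_model.skills for c in focus)

game = DialogueGame(
    game_type=GameType.DIAGNOSE_COMPETENCES,
    focus=["BODY LANGUAGE", "EYE CONTACT"],
    goal="RECOGNISING",
    trigger=needs_diagnosis(["BODY LANGUAGE", "EYE CONTACT"]),
)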
Publication
L. Aroyo, R. Denaux, V. Dimitrova and M. Pye. Interactive Ontology-Based User Knowledge Acquisition: A Case Study. In Proceedings of the European Semantic Web Conference (ESWC 2006), LNCS, Springer, 2006, pp. 560-574.