Index

Human Task Modeling

Introduction
What is a Task and a Task-model?
PMI-based Clustering for Task modeling
Example of Task model learned

Task-based content recommendation and navigation

Task based recommendation
Task based content navigation

Relationship between Role and task model

Human role governs the selection of task

Making Robot Search and Rearrange Objects With Unknown Locations and Shapes

Acquisition of Intermediate Goals for an Agent Executing Multiple Tasks
Controlling A Mobile Robot That Searches for and Rearranges Objects With Unknown Locations and Shapes
Region Exploration Path Planning for a Mobile Robot Expressing Working Environment By Grid Points

Introduction

In everyday life people must deal with various kinds of tasks and problems. For example, people feel thirsty, get lost, need to find a restroom, or miss their train. Fortunately, various kinds of contents and services available on the mobile Internet can solve these problems. Among such mobile services are “Transit-Guide”, “Tokyo Restroom Map”, and “Gourmet-Navi”[1]. They are useful to some extent, but the user must actively find the desired service from among the large number available, and this is very difficult for ordinary users. If appropriate contents and services that could solve the user’s problem were identified and presented automatically, users could perform their daily activities much more comfortably. Towards this aim, we have proposed a task modeling method and a task-oriented service navigation system[2][3] that supports the user in finding appropriate services, as follows.

What is a Task and a Task-model?

Task

A task represents a user’s real-world activity and consists of an action and an object. Actions are represented by verbs and objects by nouns. For example, the task “buy movie ticket” consists of the action “buy” and the object “movie ticket”.
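As a minimal illustration of this verb–noun decomposition (the `Task` type and `parse_task` helper are ours, not part of the proposed system, and a real system would use POS tagging rather than a first-word split):

```python
from collections import namedtuple

# A task pairs an action (verb) with an object (noun phrase).
Task = namedtuple("Task", ["action", "object"])

def parse_task(phrase):
    """Naive split: the first word is the action, the rest is the object.
    Real task extraction would rely on part-of-speech tagging instead."""
    verb, _, noun = phrase.partition(" ")
    return Task(action=verb, object=noun)
```

For example, `parse_task("buy movie ticket")` yields the action "buy" and the object "movie ticket".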

Task-model

A task-model represents the taxonomy of tasks. The model consists of domains and tasks. The top node of the model represents a domain; second-level nodes indicate the tasks extracted by the processes associated with the domain. The structure of the hierarchical task-model is shown in Fig. 1.4. To help users better identify their problem or request, the upper nodes of the task-model hold generic tasks, while the lower nodes hold more concrete tasks.

Fig. 1.4: Structure of the hierarchical task-model, which represents the taxonomy of tasks. This model is an excerpt from [Fukazawa, 2006] and was created manually. A domain is the highest-level concept of the task-model and defines the range of semantically similar tasks. A task represents a user’s real-world activity and consists of an action and an object; actions are represented by verbs and objects by nouns.


PMI-based Clustering for Task modeling

We propose PMI-based clustering of tasks for learning the task-model. As described, past clustering approaches need a feature representation of each task. If there is no additional information for a task, we must manually add appropriate features, which is not cost effective. To solve the problem of tasks with too few features, we extend the idea of PMI (pointwise mutual information)[24], which uses a search engine for the calculation without any concept features. This co-occurrence measure has been used extensively to evaluate the relevance of a set of candidates[6][21]. Fig.4 shows the overall steps of PMI-based clustering:
1. Select parent nodes from the set of tasks, based on the number of search results.
2. Associate each remaining task, not selected as a parent, with the most related parent task. This is done by pointwise mutual information, which measures the similarity between tasks.

Figure 4: Overall steps of PMI based clustering.

We show the procedure of step 1 in Fig.5.

Figure 5: Step 1 of PMI based clustering. Select parent node from the set of tasks. This is based on the number of search results.

We show the procedure of step 2 in Fig.6. As can be seen from the figure, each child task is associated with the parent task that has the highest PMI value among those calculated between the child task and every parent task. Here, to avoid the problem of tasks returning too few search results, we proposed an extended PMI calculation in [9].

Figure 6: Step 2 of PMI-based clustering. Associate child tasks, which are not selected as parent tasks, with the most related parent task. This is done by pointwise mutual information, which measures the similarity between tasks.
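The two steps can be sketched as follows. This is an illustrative outline only: `search_hits` stands in for querying a search engine for a task’s result count, and `pmi` for the pointwise-mutual-information similarity; both are passed in as functions because their real implementations depend on the search engine used.

```python
def cluster_tasks(tasks, search_hits, pmi, n_parents):
    """PMI-based clustering sketch.
    Step 1: pick the n_parents tasks with the most search results as parents.
    Step 2: attach each remaining task to the parent with the highest PMI."""
    ranked = sorted(tasks, key=search_hits, reverse=True)
    parents, children = ranked[:n_parents], ranked[n_parents:]
    clusters = {p: [] for p in parents}
    for child in children:
        best = max(parents, key=lambda p: pmi(child, p))
        clusters[best].append(child)
    return clusters
```

The result is a two-level hierarchy: parent tasks as cluster heads, each with its most related child tasks attached.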

By using the extended calculation, we give pmi(x1, x2) as follows, where x1verb and x1noun represent the verb part and the noun part of task x1, and the function f(x, y) is defined in terms of hits(key), the number of results the search engine returns for the query “key”.
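The exact extended formula appears in [9]; the standard search-based PMI it builds on can be sketched as below. The corpus-size constant `n` is an assumption of this sketch, and `hits` is a stand-in for the search-engine query described above.

```python
import math

def basic_pmi(hits, x, y, n=1e10):
    """Search-based pointwise mutual information: the log of how much more
    often x and y co-occur than independence would predict. hits(q) returns
    the number of search results for query q; n approximates corpus size."""
    joint, hx, hy = hits(f"{x} {y}"), hits(x), hits(y)
    if 0 in (joint, hx, hy):
        return float("-inf")  # no evidence of co-occurrence
    return math.log((joint * n) / (hx * hy))
```

The extended calculation of [9] applies this idea to the verb and noun parts of the two tasks separately, so that tasks with few whole-phrase search results can still be compared.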

Example of Task model learned

Following are links to the task models learned by four methods: Bi-Section KMeans, Bottom-Up Clustering, Formal Concept Analysis, and Pointwise Mutual Information based Clustering. The ground truth created by two researchers is also shown.
Authority related goals
Drug-related goals
Fitness related goals
Baby related goals
Beauty-related goals
Body disease related goals
Brain disease related goals
daily health-care related goals
Face-care related goals
Healthy Diet related goals
Mental Health related goals
Pregnancy related goals
sex related goals
Weight-control related goals

Fig.5.11 shows the ground truth model of the Baby domain, and Fig.5.12 shows the model learned by PMI-based Clustering in the Baby domain. Comparing Fig.5.11 and Fig.5.12, it can be seen that both task-models share some common hierarchical relationships between parent and child tasks, such as “find infant”→“sterilize baby bottle” and “find infant”→“cry during feeding”. Because the proposed PMI-based Clustering does not use token-based features but instead uses a search-engine-based PMI calculation to acquire relationships between tasks, it can relate the parent task “find infant” to the child tasks “sterilize baby bottle” and “cry during feeding” even though their task descriptions share no common token.

Fig. 5.11: Ground truth model of the Baby domain. Tasks marked by a star are not derived from queries but were created manually to help readers understand the clusters. Note that these tasks are not considered in the experiment and are not shown in the learned model.


Fig. 5.12: Results of the model learned by PMI-based Clustering in Baby domain.


Task based recommendation

Fig.2.1 shows the conceptual difference between our approach and the past approach. In the past approach, the system considers the space of terms because it uses term-based features. We, on the other hand, take a different perspective: the task-based perspective. Our new approach allows the user to refer to the space of tasks, and so is much more intuitive.

Fig. 2.1: Conceptual difference between our approach and the past approach. In the past approach, the system considers the space of terms because it uses term-based features. We, on the other hand, take a different perspective: the task-based perspective. Our new approach allows the user to refer to the space of tasks, and so is much more intuitive.

Fig.2.2 shows the procedure of task-based content recommendation, a content-based recommendation that uses tasks as content features. In content-based recommendation, the contents to recommend are decided by selecting contents similar to the user’s profile. The user’s profile is calculated from the actions the user has taken, such as what the user is viewing or purchasing, or has viewed or purchased. To calculate similarity, both the user’s profile and the content profiles are represented by vectors over the same set of features. In content-based recommendation, we must solve the following three issues: 1) define features for profile representation, 2) represent each content’s profile by a bag of features, and 3) represent the user’s interest in tasks by a bag of features based on the user’s history.

Fig. 2.2: Procedure of task-based personalized recommendation. We have an offline process and an online process. In the offline process, we define task-based features for profile representation. In the online process, we have the following two steps: 1) represent each content’s profile by a bag of tasks and 2) represent the user’s task interests by a bag of tasks based on the user’s history.
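Once both profiles are bags of tasks, the matching step reduces to vector similarity. The sketch below uses cosine similarity over sparse dict-based vectors; the function names and the choice of cosine (rather than another similarity measure) are our illustrative assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse bag-of-tasks vectors (dicts
    mapping task -> weight)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = (math.sqrt(sum(w * w for w in u.values()))
            * math.sqrt(sum(w * w for w in v.values())))
    return dot / norm if norm else 0.0

def recommend(user_profile, content_profiles, k=3):
    """Rank contents by similarity of their task profile to the user's
    task profile, and return the top k content names."""
    scored = sorted(content_profiles.items(),
                    key=lambda item: cosine(user_profile, item[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]
```

A user whose history is dominated by the task “buy movie ticket” would thus be recommended contents whose profiles also contain that task.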


Task based content navigation

The number of mobile video services is increasing dramatically, and they are expected to be the next big business opportunity. To satisfy these needs, we have developed a task-based sightseeing-spot representation on a map interface for mobile video navigation called Task Guide Road (TGR); it promotes video consumption outdoors. TGR allows the user to find videos associated with tasks connected to the user’s current location. A task represents the user’s real-world activity and consists of an action and an object. Actions are represented by verbs and objects by nouns. For example, the task “buy movie ticket” consists of the action “buy” and the object “movie ticket”. In this paper, tasks are expressed by combining the sightseeing spot name as the object with 3,300 verbs extracted from blogs, which can be retrieved by a search engine. Because TGR provides task-related videos, the possibility that the user will watch videos of favorite tasks appropriate to the current location is high even if the user has never been to the sightseeing spot before. A screen image of TGR is shown in Fig.2. For instance, Kyoto Uji is the source of a special blend of Japanese tea, but it is difficult to hold the user’s interest if the user does not know the name. The description “have a tea party at Kyoto Uji”, on the other hand, is far more attractive.

Fig. 2 TGR’s main screen image.


Human role governs the selection of task

Humans play several roles in the real world, such as “PassengerRole” when the user rides a train and “FamilyRole” when the user is with family. What kind of role the user plays significantly affects the user’s selection of task, and vice versa. We define two types of roles: the task-role, which depends on the task the user selected and is changeable during the task-selection process, and the social-role, which depends on the human relationships with surrounding people and is constant during the task-selection process. By associating task-roles with the tasks defined in the task-ontology, we can acquire the user’s current task-role during the task-selection process, and can recommend the services of only the end task-nodes associated with the same task-role. We construct the role-ontology using these two roles, task-role and social-role, as the top-level role concepts. The constructed role-ontology is shown in Fig.2. The concept “role” has two top-level role concepts: “social-role” and “task-role”. As mentioned in the previous section, the role concepts are modeled in a tree structure using the “is-parent-of” relation. For example, task-role has role concepts such as “PassengerRole”, “AudienceRole”, “ShoppingCustomerRole” and “DiningCustomerRole” as its child nodes. Social-role has role concepts such as “FamilyRole”, “FriendRole” and “ColleagueRole” as its child nodes.

Fig. 2. Role-ontology for task-based service navigation system
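The tree structure described above can be sketched as a parent-to-children mapping. The role names are taken from the text; the dict encoding and the `descendants` traversal are our illustrative choices.

```python
# Role-ontology sketch: "is-parent-of" edges from each concept to its children.
ROLE_ONTOLOGY = {
    "role": ["social-role", "task-role"],
    "task-role": ["PassengerRole", "AudienceRole",
                  "ShoppingCustomerRole", "DiningCustomerRole"],
    "social-role": ["FamilyRole", "FriendRole", "ColleagueRole"],
}

def descendants(concept, ontology=ROLE_ONTOLOGY):
    """All role concepts below the given concept in the tree."""
    out = []
    for child in ontology.get(concept, []):
        out.append(child)
        out.extend(descendants(child, ontology))
    return out
```

Such a traversal lets the system check, for instance, whether a given role is a kind of task-role or a kind of social-role.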

Fig.3 shows an example of relating a task-role to a task node. The figure shows the task-ontology whose top node is “Go to watch movie”; the relationship between tasks is expressed by the is-achieved-by relation. To express the relations between role concepts and task nodes in the task-ontology, we use the is-played-by relation described in Section 2.1. The relation is-played-by means that the task is performed by a user who plays the designated task-role. An example using the is-played-by relation, shown in Fig.3, is as follows. “Go to watch movie” is connected to the role concept “MovieAudienceRole”, defined in the role-ontology, using the is-played-by relation. This means that if the user selects the task “Go to watch movie”, the user’s task-role changes to “MovieAudienceRole”. In the same manner, if the user selects “Move to movie theater”, the user’s task-role changes to “PassengerRole”, which is also defined in the role-ontology.

Fig. 3. Enhancement of task-ontology using role-concept defined in role-ontology
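The role switch triggered by task selection can be sketched as a lookup over is-played-by links. The two links shown are those named in the text; the function and dict names are ours.

```python
# is-played-by links from task nodes to the task-roles that play them.
IS_PLAYED_BY = {
    "Go to watch movie": "MovieAudienceRole",
    "Move to movie theater": "PassengerRole",
}

def select_task(current_role, task, links=IS_PLAYED_BY):
    """Selecting a task switches the user's task-role to the role linked
    by is-played-by; tasks without a link leave the role unchanged."""
    return links.get(task, current_role)
```

The social-role, by contrast, would stay fixed throughout this process, since it depends on the surrounding people rather than on the selected task.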


Acquisition of Intermediate Goals for an Agent Executing Multiple Tasks

The fact that an autonomous agent performs multiple tasks has attracted a great deal of attention because of the technological advances this represents and the potential for reducing personnel expenses. For an agent executing multiple tasks, there are two steps to completing the whole set of tasks. The first step is selecting a task from multiple task candidates. The second step is selecting the action the robot should take so as to complete the task selected in the first step. Concerning action selection, reinforcement learning such as Q-learning [1] has been used to learn the action for a single task. However, the learned action is not always appropriate for completing multiple tasks, even if the agent learns the appropriate action for each single task.

An example of the rearrangement of multiple objects [2] is shown in Fig. 1. The agent transfers these objects to their goal positions as soon as possible. There are three tasks: transferring object 1 to the goal state, transferring object 2 to the goal state, and transferring object 3 to the goal state. However, if the agent learns the action of transferring object 1 to its goal configuration without considering the existence of objects 2 and 3, the learned action is not appropriate for transferring all of objects 1–3 to their goals. Therefore, task selection is a significant issue for an agent executing multiple tasks. When a robot has multiple tasks, there are usually order restrictions among them. In the case of the rearrangement of multiple objects shown in Fig. 1, the agent must determine the order in which each object is to be transported for effective completion of the task.

The objective of this study is to propose an algorithm for learning intermediate goals between the initial and goal states for an agent executing multiple tasks. The algorithm described in this paper can restrict the executable tasks between intermediate goals. As a result, the order restrictions among the tasks can be resolved.
me pic

Fig. 1. Example of multiple tasks (rearrangement of multiple objects). The robot has multiple tasks: transferring objects 1–3 from their (a) initial state to their corresponding (b) goal state.

The task is defined for each object (Table I). The state space consists of three state variables (x, y, θ), which represent the location of a single object: its X and Y coordinates and its angle to the X- and Y-axes.

Table 1: Definition of the tasks in the rearrangement problem of multiple objects

Here, the algorithm to acquire intermediate goals and the algorithm to use them are stated. The flowchart of the acquisition of intermediate goals, shown in Fig. 2, is described in the following. First, the time series of state transitions between the initial and goal states is acquired through trial and error. The term “trial and error” means that the agent selects a task at random and then selects an action for the selected task at random. The agent backtracks the time series of state transitions and acquires the state in which each task is accomplished as an intermediate goal. These intermediate goals are referred to as the first group of intermediate goals. More details are provided in Section II-A. The second group of intermediate goals is acquired by considering the last acquired intermediate goal as the goal state of the entire task (problem) and repeating the acquisition of intermediate goals. Similarly, the third, fourth, and further groups of intermediate goals can be acquired.

Fig. 2. Flowchart of acquiring intermediate goals by backtracking the time series of state transitions of trial and error.
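A much-simplified sketch of the core idea: scan a recorded trial-and-error trace and record, for each task, the state at which it is first accomplished. The paper’s full algorithm additionally backtracks from the goal and repeats the process with earlier intermediate goals as new goal states; the data representation here (a list of states and per-task completion predicates) is our assumption.

```python
def intermediate_goals(trace, task_done):
    """For each task, find the state in the trace at which it is first
    accomplished; these states serve as intermediate goals.
    trace: list of states from initial to goal, recorded by trial and error.
    task_done: dict mapping task name -> predicate on a state."""
    goals = {}
    for task, done in task_done.items():
        for state in trace:
            if done(state):
                goals[task] = state  # earliest state where this task holds
                break
    return goals
```

Ordering the acquired goals by their position in the trace then reveals the order restrictions among tasks, as in the rearrangement example of Fig. 4.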

Fig. 4(a) is the initial state, and Fig. 4(b)–(n) shows the acquired intermediate goals. The first group of intermediate goals [Fig. 4(l), (m), (n)] is acquired first; then, the state of Fig. 4(l) is set as a new goal state, and the second group of intermediate goals [Fig. 4(j), (k), (l)] is acquired to reach the new goal state [Fig. 4(l)]. After repeating a similar process, all of the intermediate goals are acquired. In this problem, the objects must be rearranged in the order of objects 1, 2, and 3 to reach the goal state shown in Fig. 4(n). Fig. 4(l), (m), (n) shows this order of rearranging the objects. In addition, objects 1–3 must be transferred to configurations where the robot can hold them in the order of 1, 2, and 3 so as to follow the order of rearrangement. The configuration of objects shown in Fig. 4(k) shows that the objects have been transferred to a configuration where the robot can hold them in the order of 1, 2, and 3. From the above, the appropriate intermediate goals for rearranging objects 1–3 are considered to have been acquired.

Fig. 4: Series of intermediate goals of a rearrangement problem with 3 objects






Controlling A Mobile Robot That Searches for and Rearranges Objects With Unknown Locations and Shapes

Motion planning for a mobile robot that searches for and carries objects with unknown shapes and positions is challenging. This kind of task has multiple applications, such as the conveyance of objects in warehouses and factories as well as homes. The concept of the task is shown in Fig. 1. The robot uses sensors to find objects and then carries them to their goal positions. The significant point is that this task must be realized not in simulation but by a real mobile robot within a limited time. This paper proposes an algorithm for controlling a mobile robot that searches for and rearranges objects with unknown locations and shapes. We divide the task into two parts: an exploration task and a rearrangement task. The algorithms for each part are presented with respect to path length and computational cost. Additionally, an integration algorithm that effectively combines exploration and rearrangement is presented. Experiments with a real robot are conducted to demonstrate the effectiveness of the proposed algorithm.




Region Exploration Path Planning for a Mobile Robot Expressing Working Environment By Grid Points

Planning a path so that a mobile robot can perform an exploratory task is a fundamental challenge in mobile robotics. Such tasks have various applications, for example searching for unknown obstacles in a working area, conducting security operations, and cleaning or painting floors. When a robot performs such a task, it does so in an environment that may have an intricate shape or curved boundaries. In addition, when a robot encounters unknown obstacles, new paths must be calculated while considering the computational expense and the quality of the new path. In this paper, a region exploration path planning algorithm is proposed. For a mobile robot to perform this task, a representation appropriate to the shape of the working environment, which may be intricate or curved, is necessary. In addition, the robot must be able to react flexibly when confronted with obstacles. With this algorithm, these challenges are met by approximately expressing the working environment as grid points and regenerating the path from one planned beforehand. Simulations demonstrate the proposed exploration path planning and re-planning algorithm.
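To make the grid-point idea concrete, here is a minimal coverage sketch over a boolean grid. The boustrophedon (lawn-mower) sweep is our illustrative choice, not necessarily the paper’s algorithm, and it ignores obstacles between grid points.

```python
def boustrophedon_path(grid):
    """Visit every free grid point row by row, alternating sweep direction
    (a lawn-mower pattern). grid[r][c] is True where the point is free;
    returns the visiting order as a list of (row, col) pairs."""
    path = []
    for r, row in enumerate(grid):
        cols = range(len(row)) if r % 2 == 0 else reversed(range(len(row)))
        path.extend((r, c) for c in cols if row[c])
    return path
```

A re-planning step, as described above, would rerun such a sweep over the remaining unvisited grid points whenever a newly detected obstacle invalidates part of the precomputed path.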