IPSJ Digital Courier
What is IPSJ-DC
Editorial policy
Editorial board
Historical background
Back Number Volume 1 (2005)
Back Number Volume 2 (2006)
Back Number Volume 3 (2007)
Back Number Volume 4 (2008)
Table of Contents
Contact us


Both bibliographic data and PDF files can be accessed free of charge until December 2008. All rights reserved.

Table of Contents

Message from the IPSJ president
Editorial board for 2008

Volume 3 (2007)

Operating System
DiscNice: User-level Regulation of Disk Bandwidth

Hiroshi Yamada, Kenji Kono
A recent trend in computing has been to leverage dormant PC resources. To achieve this, background applications such as peer-to-peer applications and PC Grid run on ordinary users' PCs, sharing their computing resources. If not properly managed, the background applications intrude on the PC user's active jobs. In particular, contention over disk bandwidth severely degrades performance. In this paper, we present DiscNice, a novel scheme for disk bandwidth management that can host background applications unobtrusively. Its novelty lies in the fact that it throttles disk I/O entirely at the user level. The user-level approach is attractive for heterogeneous environments such as differently configured PCs around the world; portability is enhanced and deployment is easier than with kernel-level approaches. Experimental results suggest that our prototype of DiscNice running on Linux 2.4.27 incurs 12% or less overhead and gracefully ensures the unobtrusiveness of background applications.
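The user-level throttling idea can be illustrated with a minimal sketch (an illustration of the general approach, not DiscNice's actual mechanism): a wrapper that sleeps after each read so that the observed rate never exceeds a target bandwidth.

```python
import io
import time

class ThrottledReader:
    """Illustrative user-level bandwidth throttle: sleep so that the
    observed read rate never exceeds `bytes_per_sec`."""
    def __init__(self, fileobj, bytes_per_sec):
        self.fileobj = fileobj
        self.bytes_per_sec = bytes_per_sec
        self.start = time.monotonic()
        self.consumed = 0

    def read(self, size):
        data = self.fileobj.read(size)
        self.consumed += len(data)
        # Earliest time at which `consumed` bytes are allowed to have been read.
        allowed_at = self.start + self.consumed / self.bytes_per_sec
        delay = allowed_at - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        return data

# Reading 4096 bytes at a cap of 8192 B/s should take about 0.5 s.
reader = ThrottledReader(io.BytesIO(b"x" * 4096), bytes_per_sec=8192)
t0 = time.monotonic()
total = 0
while True:
    chunk = reader.read(1024)
    if not chunk:
        break
    total += len(chunk)
elapsed = time.monotonic() - t0
print(total, elapsed)
```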

Group Interaction Support and Groupware
Effects of Room-sized Sharing on Remote Collaboration on Physical Tasks

Naomi Yamashita, Keiji Hirata, Toshihiro Takada, Yasunori Harada, Yoshinari Shirai, Shigemi Aoyagi
This paper outlines some of the benefits of providing remote users with consistent spatial referencing across sites when collaborating on physical tasks. Two video-mediated technologies are introduced: room-sized sharing, which enables remote users to see things similar to what they would actually see if they were in the same room, and a snapshot function, which enables users to gesture at remote objects. We examine the impact of these technologies by comparing remote collaboration on physical tasks using a regular video conferencing system with a handy camera, a room duplication system, and a room duplication system with a snapshot function. Results indicate that room-sized sharing facilitates remote collaborators' sense of co-presence and supports remote gesturing that is closely aligned with normal co-present gesturing. Although such benefits did not translate directly into improved overall task performance, room-sized sharing and the snapshot function helped remote collaborators construct appropriate messages, efficiently establish joint focus, and monitor each other's comprehension when conducting complicated physical tasks.

Distributed Systems Operation and Management
Seamless Failure Recovery for Real-time Applications in MPLS Networks

Mitsuo Hayasaka, Tetsuya Miki
The QoS provided by the current best-effort Internet is not good enough for real-time multimedia applications that are categorized as premium traffic. It is believed that QoS guarantees could be better provided by connection-oriented networks. Multi-Protocol Label Switching (MPLS) is one such technology, but connection-oriented networks are inherently more prone to network failures. Re-routing is a solution to cope with them. However, re-routing always causes packet losses and results in service outages, so the QoS of real-time premium traffic is highly degraded. The seamless failure recovery proposed in this paper can be used for real-time premium traffic that needs guaranteed QoS. It applies an FEC technique to conventional re-routing-based path protection and seamlessly recovers the packet losses due to re-routing by way of an FEC recovery technique. The numerical and simulation results show that the proposed method can provide a network architecture without service outages for real-time premium traffic while minimizing service costs such as redundant traffic and additional buffers at edge routers.
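The FEC principle behind such loss recovery can be sketched with the simplest possible code, a single XOR parity packet. The paper's actual FEC scheme is not specified here; this only illustrates how one lost packet can be rebuilt without retransmission.

```python
def xor_parity(packets):
    """Compute a single XOR parity packet over equal-length packets."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def recover(received, parity):
    """Recover the one missing packet (the None entry) from the parity."""
    missing = parity
    for p in received:
        if p is not None:
            missing = bytes(a ^ b for a, b in zip(missing, p))
    return missing

packets = [b"abcd", b"efgh", b"ijkl"]
parity = xor_parity(packets)
# Packet 1 is lost during re-routing; the receiver rebuilds it locally.
rebuilt = recover([packets[0], None, packets[2]], parity)
print(rebuilt)  # b'efgh'
```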

Human-Interface Basics
ZWPS and Pressure Scroll: Two Pressure-based Techniques in Pen-based Interfaces

Jibin Yin, Xiangshi Ren
This paper investigates the interaction ability gained by introducing pressure into current basic interaction techniques, developing two novel techniques. A zoom-based technique with pressure (hereafter referred to as ZWPS) is proposed to improve pixel-target selection. In this technique, pressure is used as a mode switch to couple a standard Point Cursor and a zoomable technique together. Pressure Scroll is also presented with a view to improving scrolling performance by employing arc or line strokes to scroll documents. In this technique, pressure is used as an additional control factor to widen the adjustable range of the scrolling velocity. We conducted two experiments to examine the effectiveness of ZWPS and Pressure Scroll. The experimental results indicate that both bring significant benefits to users.

Database/Software Paper
Free Energy Landscape Analysis System Based on Parallel Molecular Dynamics Simulation

Masakazu Sekijima, Jun Doi, Shinya Honda, Tamotsu Noguchi, Shigenori Shimizu, Yutaka Akiyama
We created a Free Energy Landscape Analysis System based on a parallelized molecular dynamics (MD) simulation adapted for the IBM Blue Gene/L supercomputer. We begin with an outline of our Free Energy Landscape Analysis system. Next we discuss how Parallel MD was tuned for Blue Gene/L. We then show the results for some test targets run on Blue Gene/L, including their efficiency. Finally, we mention some future directions for extension of this project.

Original Papers
Stochastic Regular Approximation of Tree Grammars and Its Application to Faster ncRNA Family Annotation

Kazuya Ogasawara, Satoshi Kobayashi
Tree Adjoining Grammar (TAG) is a useful grammatical tool for modeling RNA secondary structures containing pseudoknots, but its time complexity for parsing is too large for practical use. Recently, Weinberg and Ruzzo proposed a method of approximating a stochastic context-free grammar by a stochastic regular grammar and applied it to faster genome annotation of non-coding RNA families. This paper proposes a method that extends their idea to the stochastic approximation of TAGs by regular grammars. We also report some preliminary experimental results on how well non-candidate parts of genome sequences can be filtered out using the obtained approximate regular grammars.

Original Papers
Metabolic Pathway Alignment Based on Similarity between Chemical Structures

Yukako Tohsato, Yu Nishimura
In many of the chemical reactions in living cells, enzymes act as catalysts in the conversion of certain compounds (substrates) into other compounds (products). Metabolic pathways are formed as the products of these reactions are used as the substrates of other reactions. Comparative analyses of the metabolic pathways among species provide important information on both evolution and potential pharmacological targets. Here, we propose a method to align the metabolic pathways based on similarities between chemical structures. To measure the degree of chemical similarity, we formalized a scoring system using the MACCS keys and the Tanimoto/Jaccard coefficients. To determine the effectiveness of our method, it was applied to analyses of metabolic pathways in Escherichia coli. The results revealed compound similarities between fructose and mannose biosynthesis and galactose biosynthesis pathways.
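The Tanimoto/Jaccard scoring named above is a standard similarity measure on binary fingerprints and can be sketched directly; the on-bit indices below are hypothetical, not actual MACCS key positions.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto/Jaccard coefficient of two binary fingerprints,
    represented as sets of on-bit indices."""
    a, b = set(fp_a), set(fp_b)
    union = len(a | b)
    return len(a & b) / union if union else 1.0

# Hypothetical on-bit indices of MACCS-like structural keys for two compounds.
fructose_keys = {12, 45, 77, 102, 130}
mannose_keys = {12, 45, 77, 102, 155}
print(tanimoto(fructose_keys, mannose_keys))  # 4 shared keys / 6 total = 0.666...
```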

Original Papers
High-throughput Automated Image Processing System for Cell Array Observations

Daisuke Tominaga, Fukumi Iguchi, Katsuhisa Horimoto, Yutaka Akiyama
In many cases of biological observation, such as cell arrays, DNA microarrays, or tissue microscopy, primary data are obtained as photographs. Specialized processing methods are needed for each kind of photograph because of their wide variety, and automated systems are often required for modern high-throughput observations. We developed a fully automated image processing system for cell arrays, a high-throughput time-series observation system for living cells, to evaluate gene expression levels and phenotype changes over time for each cell.

Human-Interface Basics
A Study on Approximate and Fine Adjustments by Hand Motion in an Immersive Environment

Noritaka Osawa, Xiangshi Ren
Immersive virtual reality (VR) has long been considered an excellent environment in which to manipulate 3D virtual objects. However, currently used immersive VR user interfaces have limitations. For example, while direct manipulation by hand is easy to understand and to use for approximate positioning, it is not suitable for making fine adjustments to virtual objects in an immersive environment because it is difficult to hold an unsupported hand in midair and then release an object at a fixed point. We therefore propose a method that combines direct 3D manipulation by hand with a virtual 3D gearbox widget that we recently designed. Using this method, hand manipulation is used first to move virtual objects and place them in an approximate position, and then the widget is used to move them into a precise position. The experimental evaluation showed that this combination of direct manipulation by hand and the proposed gearbox is the best of five tested methods in terms of task completion ratio and subjective preference.

Regular Papers
Controlling Dominance Area of Solutions in Multiobjective Evolutionary Algorithms and Performance Analysis on Multiobjective 0/1 Knapsack Problems

Hiroyuki Sato, Hernán Aguirre, Kiyoshi Tanaka
This work proposes a method to control the dominance area of solutions in order to induce an appropriate ranking of solutions for the problem at hand, enhance selection, and improve the performance of MOEAs on combinatorial optimization problems. The proposed method can control the degree of expansion or contraction of the dominance area of solutions using a user-defined parameter S. Modifying the dominance area of solutions changes their dominance relation, inducing a ranking of solutions that is different from that of conventional dominance. In this work we use multiobjective 0/1 knapsack problems to analyze the effects on solution ranking caused by contracting and expanding the dominance area of solutions, and its impact on the search performance of a multiobjective optimizer when the number of objectives, the size of the search space, and the feasibility of the problems vary. We show that either convergence or diversity can be emphasized by contracting or expanding the dominance area. We also show that the optimal degree of expansion or contraction depends strongly on all the factors analyzed here: the number of objectives, the size of the search space, and the feasibility of the problems.
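For reference, conventional Pareto dominance, the baseline relation that the proposed method expands or contracts, can be stated in a few lines; the paper's S-controlled modification of the dominance area is not reproduced here.

```python
def dominates(a, b):
    """Conventional Pareto dominance for maximization: a dominates b if a is
    no worse in every objective and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Two knapsack solutions described by (profit_1, profit_2) objective vectors.
print(dominates((10, 8), (9, 8)))   # True
print(dominates((10, 5), (9, 8)))   # False: the solutions are incomparable
```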

Algorithm Theory
Optimal Balanced Semi-Matchings for Weighted Bipartite Graphs

Yuta Harada, Hirotaka Ono, Kunihiko Sadakane, Masafumi Yamashita
Matching in a bipartite graph is a structure that can be seen in various assignment problems and has long been studied. The semi-matching is an extension of the matching for a bipartite graph G = (U ∪ V, E). It is defined as a set of edges M ⊆ E such that each vertex in U is an endpoint of exactly one edge in M. The load-balancing problem is the problem of finding a semi-matching such that the degrees of the vertices in V are balanced. This problem has been studied in the context of task scheduling to find a "balanced" assignment of tasks to machines, and an O(|E||U|)-time algorithm has been proposed. On the other hand, in some practical problems, balanced assignments alone are not sufficient, e.g., the assignment of wireless stations (users) to access points (APs) in wireless networks. In wireless networks, the quality of the transmission depends on the distance between a user and its AP; shorter distances are more desirable. In this paper, we formulate the min-weight load-balancing problem of finding a balanced semi-matching that minimizes the total weight for weighted bipartite graphs. We then give an optimality condition for weighted semi-matchings and propose an O(|E||U||V|)-time algorithm.
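A naive greedy heuristic (not the paper's optimal algorithm) illustrates what a balanced semi-matching looks like: each vertex in U is assigned to its currently least-loaded neighbor in V.

```python
def greedy_semi_matching(adj, num_machines):
    """Naive greedy sketch of load balancing via a semi-matching: assign each
    task in U to its currently least-loaded adjacent machine in V."""
    load = [0] * num_machines
    assignment = {}
    for task, machines in adj.items():
        target = min(machines, key=lambda v: load[v])
        assignment[task] = target
        load[target] += 1
    return assignment, load

# 4 tasks, 2 machines; every task can run on either machine.
adj = {"t1": [0, 1], "t2": [0, 1], "t3": [0, 1], "t4": [0, 1]}
assignment, load = greedy_semi_matching(adj, 2)
print(load)  # [2, 2]: the degrees of the vertices in V are balanced
```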

High-performance Computer Architectures
Architecture and Performance of Dynamic Offloading Mechanism for Maestro2 Cluster Network

Keiichi Aoki, Koichi Wada, Hiroki Maruoka, Masaaki Ono
In this paper, an architecture of software environment to offload user-defined software modules to Maestro2 cluster network, named Maestro dynamic offloading mechanism (MDO), is described. Maestro2 is a high-performance network for clusters. The network interface and the switch of Maestro2 have a general-purpose processor tightly coupled with a dedicated communication hardware. MDO enables the users to offload software modules to both the network interface and the switch. MDO includes a wrapper library with which offload modules can be executed on a host machine without rewriting the program. The overhead and the effectiveness of MDO are evaluated by offloading collective communications.

Research Papers
Using a Partial Geometric Feature for Similarity Search of 3D Objects

Yingliang Lu, Kunihiko Kaneko, Akifumi Makinouchi
Searching a spatial database for 3D objects that are similar to a given object is an important task that arises in a number of database applications, for example, in the medical and CAD fields. Most existing similarity search methods are based on global features of 3D objects. Developing a feature set or a feature vector for 3D objects using their partial features is challenging. In this paper, we propose a novel segment weight vector for matching 3D objects rapidly. We also describe a solution, based on partial and geometrical similarity, to the problem of searching for similar 3D objects. As the first step, we split each 3D object into parts according to its topology. Next, we introduce a new method to extract the thickness feature of each part of every 3D object to generate its feature vector, and a novel search algorithm using the new feature vector. Finally, we present a novel solution for improving the accuracy of similarity queries. We also present a performance evaluation of our strategy. The experimental results and discussion indicate that the proposed approach offers a significant performance improvement over the existing approach. Since the proposed method is based on partial features, it is particularly suited to searching for objects having distinct part structures and is invariant to part architecture.

Research Papers
Flexible Interface Mapping for System Cooperation and Its Evaluation

Makoto Nakatsuji, Yu Miyoshi, Yoshihiro Otsuka, Miki Hirano
We model the relationships between the message formats of a business system and their semantics in a machine-processable knowledge base. We describe a message-mapping technique that extracts the relationships between the message formats of several systems semiautomatically by using the class characteristics of the semantics and stores these relationships as past system design knowledge. In addition, we propose process-mapping, which is a technique that discovers suitable software components for system cooperation. We evaluate these methods using the interface specifications of actual service management systems and show that the frequency of interface adjustment can be reduced.

Research Papers
On the Properties of Evaluation Metrics for Finding One Highly Relevant Document

Tetsuya Sakai
Traditional information retrieval evaluation relies on both precision and recall. However, modern search environments such as the Web, in which recall is either unimportant or immeasurable, require precision-oriented evaluation. In particular, finding one highly relevant document is very important for practical tasks such as known-item search and suspected-item search. This paper compares the properties of five evaluation metrics that are applicable to the task of finding one highly relevant document in terms of the underlying assumptions, how the system rankings they produce resemble each other, and discriminative power. We employ two existing methods for comparing the discriminative power of these metrics: the Swap Method proposed by Voorhees and Buckley at ACM SIGIR 2002, and the Bootstrap Sensitivity Method proposed by Sakai at SIGIR 2006. We use four data sets from NTCIR to show that, while P(+)-measure, O-measure and NWRR (Normalised Weighted Reciprocal Rank) are reasonably highly correlated with one another, P(+)-measure and O-measure are more discriminative than NWRR, which in turn is more discriminative than Reciprocal Rank. We therefore conclude that P(+)-measure and O-measure, each modelling a different user behaviour, are the most useful evaluation metrics for the task of finding one highly relevant document.
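Of the metrics compared, plain Reciprocal Rank is the simplest and can be sketched directly; P(+)-measure, O-measure, and NWRR are not reproduced here.

```python
def reciprocal_rank(ranked_docs, relevant):
    """Reciprocal Rank: 1/r, where r is the rank (1-based) of the first
    relevant document in the ranked list; 0 if none is retrieved."""
    for rank, doc in enumerate(ranked_docs, start=1):
        if doc in relevant:
            return 1.0 / rank
    return 0.0

# Hypothetical run: the first relevant document appears at rank 3.
run = ["d3", "d7", "d1", "d9"]
print(reciprocal_rank(run, {"d1"}))  # 1/3 = 0.333...
```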

Research Papers
Evaluating Information Retrieval Metrics Based on Bootstrap Hypothesis Tests

Tetsuya Sakai
This paper describes how the bootstrap approach to statistics can be applied to the evaluation of IR effectiveness metrics. More specifically, we describe straightforward methods for comparing the discriminative power of IR metrics based on Bootstrap Hypothesis Tests. Unlike the somewhat ad hoc Swap Method proposed by Voorhees and Buckley, our Bootstrap Sensitivity Methods estimate the overall performance difference required to achieve a given confidence level directly from Bootstrap Hypothesis Test results. We demonstrate the usefulness of our methods using four different data sets (i.e., test collections and submitted runs) from the NTCIR CLIR track series for comparing seven IR metrics, including those that can handle graded relevance and those based on the Geometric Mean. We also show that the Bootstrap Sensitivity results are generally consistent with those based on the more ad hoc methods.
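A minimal sketch of a paired bootstrap significance test, assuming hypothetical per-topic scores for two runs, conveys the idea; this is an illustration of the general technique, not the paper's exact sensitivity procedure.

```python
import random

def paired_bootstrap_pvalue(scores_x, scores_y, b=2000, seed=0):
    """One-sided paired bootstrap test sketch: H0 is 'mean per-topic score
    difference <= 0'. Resamples the differences with replacement."""
    rng = random.Random(seed)
    diffs = [x - y for x, y in zip(scores_x, scores_y)]
    observed = sum(diffs) / len(diffs)
    # Shift the differences so they satisfy the null hypothesis (mean 0).
    shifted = [d - observed for d in diffs]
    count = 0
    for _ in range(b):
        sample = [rng.choice(shifted) for _ in diffs]
        if sum(sample) / len(sample) >= observed:
            count += 1
    return count / b

# Hypothetical per-topic scores of two runs on eight topics.
x = [0.9, 0.8, 0.85, 0.7, 0.95, 0.88, 0.8, 0.9]
y = [0.5, 0.6, 0.55, 0.4, 0.65, 0.58, 0.5, 0.6]
p = paired_bootstrap_pvalue(x, y)
print(p < 0.05)  # a large, consistent difference is judged significant
```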

Security Infrastructure
Ancestor Excludable Hierarchical ID-based Encryption and Its Application to Broadcast Encryption

Atsuko Miyaji
An ID-based encryption (IBE) scheme is a public-key cryptosystem in which a user's public key is given by the user's ID. In IBE, a single center generates all user secret keys, which may place a burdensome workload on the center. A hierarchical ID-based encryption (HIBE) scheme is a kind of IBE that overcomes this problem by delegating user secret-key generation to lower-level centers, which form a hierarchical structure. However, all ancestor nodes in HIBE act as centers. That is, any ancestor as well as the root can generate a secret key for any descendant node, and thus a ciphertext sent to a node can be decrypted by any ancestor node even if the ancestor does not have the same secret key as the target node. In this paper, we propose the concept of ancestor-excludable HIBE, in which ancestors with a level less than the designated one can be excluded from the set of privileged ancestors with the right to decrypt a ciphertext sent to a target node. We also give the functional definition together with the security definition. This notion is denoted simply by AE-HIBE. We present a concrete example of AE-HIBE, which works with constant-size ciphertexts and decryption time, independent of the hierarchy level. We prove that our AE-HIBE is selective-ID-CPA secure in the standard model, and it can be converted to be selective-ID-CCA secure by applying a general conversion method. Furthermore, AE-HIBE can be naturally applied to broadcast encryption to realize an efficient public-key version with a user-key size of O(log² N) and a transmission rate of O(r) for N users and r revoked users. The user-key size is the smallest to date at a transmission rate of O(r).

Security Infrastructure
The Security of RC6 against Asymmetric Chi-square Test Attack

Tomohiko Hinoue, Atsuko Miyaji, Takatomi Wada
Knudsen and Meier applied the χ2-attack to RC6. The χ2-attack recovers a key by using the high correlations measured by the χ2-value. The best χ2-attack on RC6 whose security is theoretically guaranteed works on 16-round RC6 with 192- and 256-bit keys but only 8-round RC6 with a 128-bit key, because it recovers keys of RC6 symmetrically, which requires a time complexity of #plaintexts × 2^54 and a memory complexity of 2^80 for recovering one key. In this paper, we improve the χ2-attack to reduce the time complexity. We give a theorem that evaluates the success probability of the χ2-attack on RC6 without using any experimental results. Our key recovery attack recovers keys asymmetrically, which requires a time complexity of #plaintexts × 2^31 and a memory complexity of 2^52 for recovering one key. As a result, our key recovery attack works on 16-round RC6 with 192- and 256-bit keys and 12-round RC6 with a 128-bit key. In the cases of both 192- and 256-bit keys, our attack surprisingly reduces the time and memory complexity compared with the previous attack. We also demonstrate our theorem on RC6-8/4/8 and confirm its accuracy by comparing our approximation with the experimental results.
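The χ2-value at the heart of the attack is Pearson's chi-square statistic over observed bucket counts, which is easy to sketch: a uniform distribution scores 0, while a biased one scores high.

```python
def chi_square(observed):
    """Pearson's chi-square statistic against a uniform expectation, as used
    to measure non-randomness of cipher output buckets."""
    total = sum(observed)
    expected = total / len(observed)
    return sum((o - expected) ** 2 / expected for o in observed)

uniform = [250, 250, 250, 250]
biased = [400, 200, 200, 200]
print(chi_square(uniform))  # 0.0
print(chi_square(biased))   # 120.0: a high value signals correlation
```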

Security and Society
Empirical-Analysis Methodology for Information-Security Investment and Its Application to Reliable Survey of Japanese Firms

Wei Liu, Hideyuki Tanaka, Kanta Matsuura
This paper presents a series of empirical analyses of information-security investment based on a reliable survey of Japanese enterprises. To begin with, after showing our methodology for representing the vulnerability level regarding the threat of computer viruses, we verify the relation between the vulnerability level and the effects of information-security investment. Although in the first section there is only weak empirical support for the investment model, one can see that the representation methodology is worth attempting in empirical analyses in this research field. In the second section, we verify the relations between the probability of computer-virus incidents and the adoption of a set of information-security countermeasures. It is shown that "Defense Measure" associated with "Information Security Policy" and "Human Cultivation" has remarkable effects on virus incidents. Finally, we analyze the effect of continuous investment in the three security countermeasures. The empirical results suggest that virus incidents were significantly reduced in those enterprises that adopted the three countermeasures in both 2002 and 2003.

Security Infrastructure
Ring Signatures: Universally Composable Definitions and Constructions

Kazuki Yoneyama, Kazuo Ohta
Though the anonymity of ring signature schemes has been studied in many publications, these papers gave different definitions and there has been no consensus. Recently, Bender et al. proposed two new anonymity definitions for ring signature schemes that are stronger than the previous definitions, called anonymity against attribution attacks and against full key exposure. In addition, ring signature schemes have two levels of unforgeability definitions, i.e., existential unforgeability and strong existential unforgeability. In this paper, we redefine these anonymities and unforgeabilities within the universally composable (UC) security framework. First, we give new ideal functionalities of ring signature schemes for each security level separately. Next, we show the relations between game-based security definitions and our UC definitions. Finally, we give another proof of the security of Bender et al.'s ring signature scheme within the UC framework. The simulator we construct in this proof can easily simulate an adversary against existential unforgeability, and it can be adapted to the case of strong existential unforgeability if we assume the exploited signature scheme is a standard single strong existentially unforgeable signature scheme.

Network Security
Universally Composable Client-to-Client General Authenticated Key Exchange

Haruki Ota, Kazuki Yoneyama, Shinsaku Kiyomoto, Toshiaki Tanaka, Kazuo Ohta
In large-scale networks, users want to be able to communicate securely with each other over a channel that is unreliable. When the existing 2- and 3-party protocols are realized in this situation, there are several problems: a client must hold many passwords and the load on the server concerning password management is heavy. In this paper, we define a new ideal client-to-client general authenticated key exchange functionality, where arbitrary 2-party key exchange protocols are applicable to protocols between the client and server and between servers. We also propose a client-to-client general authenticated key exchange protocol C2C-GAKE as a general form of the client-to-client model, and a client-to-client hybrid authenticated key exchange protocol C2C-HAKE as an example protocol of C2C-GAKE to solve the above problems. In C2C-HAKE, a server shares passwords only with clients in the same realm respectively, public/private keys are used between respective servers, and two clients between different realms share a final session key via the respective servers. Thus, with regard to password management in C2C-HAKE, the load on the server can be distributed to several servers. In addition, we prove that C2C-HAKE securely realizes the above functionality. C2C-HAKE is the first client-to-client hybrid authenticated key exchange protocol that is secure in a universally composable framework with a security-preserving composition property.

Wireless/Mobile Networks
Performance Analysis of Optimized Link State Routing-based Localization

Tomoya Takenaka, Hiroshi Mineno, Yuichi Tokunaga, Naoto Miyauchi, Tadanori Mizuno
Node localization, which estimates node positions, is an essential technique for wireless multi-hop networks. In this paper, we present ROULA, an optimized link state routing (OLSR)-based localization that satisfies the following key design requirements: (i) independence from anchor nodes, (ii) robustness for non-convex network topologies, and (iii) compatibility with the network protocol. ROULA is independent of anchor nodes and can obtain correct node positions in non-convex network topologies. In addition, ROULA is compatible with the OLSR network protocol and uses the inherent distance characteristic of multipoint relay (MPR) nodes. We reveal the characteristics of MPR selection and the farthest 2-hop node selection used in ROULA, and describe how these node selections contribute to reducing the distance error of a localization scheme without using ranging devices. We used simulation to determine an appropriate MPR_COVERAGE, which is defined to control the number of MPR nodes in OLSR, and give a comparative performance evaluation of ROULA for various scenarios, including non-convex network topologies and various deployment radii of anchor nodes. Our evaluation shows that ROULA achieves desirable performance in various network scenarios.

Regular Papers
A Low-stretch Object Migration Scheme for Wide-area Environments

Ken Hironaka, Kenjiro Taura, Takashi Chikayama
We propose a low-stretch scheme for locating mobile objects in wide-area computing environments. Locating mobile objects in distributed computing systems is a non-trivial problem and has been investigated for decades. The forwarding address algorithm, perhaps the most popular, requires the previous holder of the object to point to the successive holder and to forward all requests along this pointer. However, this approach cannot provide any access stretch bounds in wide-area settings and can incur unlimited communication overhead. This is unacceptable when a large number of objects move simultaneously or when numerous referencers attempt to access an object that has moved. We propose an active update method in which nodes in the vicinity of the object's location are notified of its new location via localized update messages. Moreover, we utilize overlay topology information to minimize these messages. Referencers beyond the scope of the update are still able to safely access the object. We demonstrate that these updates keep access stretches low even in wide-area settings.

Development Environments and Automated Technologies
Mining Quantitative Rules in a Software Project Data Set

Shuji Morisaki, Akito Monden, Haruaki Tamada, Tomoko Matsumura, Ken-ichi Matsumoto
This paper proposes a method to mine rules from a software project data set that contains a number of quantitative attributes such as staff months and SLOC. The proposed method extends conventional association analysis methods to treat quantitative variables in two ways: (1) the distribution of a given quantitative variable is described in the consequent part of a rule by its mean value and standard deviation so that conditions producing the distinctive distributions can be discovered. To discover optimized conditions, (2) quantitative values appearing in the antecedent part of a rule are divided into contiguous fine-grained partitions in preprocessing, then rules are merged after mining so that adjacent partitions are combined. The paper also describes a case study using the proposed method on a software project data set collected by Nihon Unisys Ltd. In this case, the method mined rules that can be used for better planning and estimation of the integration and system testing phases, along with criteria or standards that help with planning of outsourcing resources.
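The quantitative-consequent idea, summarizing a numeric attribute by its mean and standard deviation under an antecedent condition, can be sketched as follows; the project records and attribute names are hypothetical, not from the Nihon Unisys data set.

```python
import statistics

def quantitative_rule(projects, condition, target):
    """Sketch of a rule with a quantitative consequent: for projects matching
    the antecedent `condition`, summarize `target` by mean and stdev."""
    values = [p[target] for p in projects if condition(p)]
    return statistics.mean(values), statistics.stdev(values)

# Hypothetical project records (staff_months, test_effort in person-days).
projects = [
    {"staff_months": 5, "test_effort": 20},
    {"staff_months": 6, "test_effort": 24},
    {"staff_months": 30, "test_effort": 90},
    {"staff_months": 28, "test_effort": 110},
]
# Rule: IF staff_months >= 20 THEN test_effort ~ (mean, stdev)
mean, sd = quantitative_rule(projects, lambda p: p["staff_months"] >= 20, "test_effort")
print(mean)  # 100.0
```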

Mobile Computing
Applying Ontology and Probabilistic Model to Human Activity Recognition from Surrounding Things

Naoharu Yamada, Kenji Sakamoto, Goro Kunito, Yoshinori Isoda, Kenichi Yamazaki, Satoshi Tanaka
This paper proposes human activity recognition based on the actual semantics of the human's current location. Since no predefined semantics of a location can adequately identify human activity, we automatically identify the semantics from things, focusing on the association between things and the human activities performed with them. An ontology is used to deal with the various possible representations (terms) of each thing, identified by an RFID tag, and a multi-class naive Bayesian approach is applied to detect multiple actual semantics from the terms. Our approach is suitable for automatically detecting possible activities even given a variety of object characteristics, including multiple representations and variability. Simulations with actual thing datasets and experiments in an actual environment demonstrate its noise tolerance and its ability to rapidly detect multiple actual semantics from existing things.
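A toy multi-class naive Bayes classifier over surrounding things conveys the probabilistic step; this is a generic sketch with hypothetical things and activities, not the paper's trained model.

```python
import math
from collections import defaultdict

def train(examples):
    """Tiny multinomial naive Bayes sketch: class priors and per-class thing
    counts with add-one smoothing, from (things, activity) examples."""
    prior = defaultdict(int)
    counts = defaultdict(lambda: defaultdict(int))
    vocab = set()
    for things, activity in examples:
        prior[activity] += 1
        for t in things:
            counts[activity][t] += 1
            vocab.add(t)
    return prior, counts, vocab

def classify(things, prior, counts, vocab):
    """Return the activity maximizing log P(activity) + sum log P(thing|activity)."""
    total = sum(prior.values())
    best, best_lp = None, -math.inf
    for activity, n in prior.items():
        denom = sum(counts[activity].values()) + len(vocab)
        lp = math.log(n / total)
        for t in things:
            lp += math.log((counts[activity][t] + 1) / denom)
        if lp > best_lp:
            best, best_lp = activity, lp
    return best

# Hypothetical training data: things observed around two activities.
examples = [
    (["pan", "knife", "plate"], "cooking"),
    (["pan", "kettle"], "cooking"),
    (["book", "desk", "lamp"], "studying"),
    (["book", "lamp"], "studying"),
]
prior, counts, vocab = train(examples)
print(classify(["pan", "plate"], prior, counts, vocab))  # cooking
```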

Applications in Humanities
On Content Distribution Model and Analyzing Distribution Effectiveness

Pao Sriprasertsuk, Akiko Seki, Wataru Kameyama
Almost all traditional methods of advertisement distribution have been concerned only with primary information distribution via certain kinds of media. However, the rapid growth of the Internet and interactive media has demonstrated the power and efficiency of secondary distribution of information by consumers, such as word of mouth and free e-mail. An advertisement distribution model that can be used to analyze and measure the effectiveness of such secondary distribution has never been discussed, however. Therefore, in this paper, we propose an advertisement distribution model and show how to use it to analyze both primary and secondary information distribution. Our experimental and analytical results are also discussed. The experimental results show that the proposed model can be used to measure and analyze the effectiveness of advertisement distribution.

Adhoc networks
Design and Implementation of Ad Hoc Communication and Applications on Mobile Phone Terminals

Yujin Noishiki, Hidetoshi Yokota, Akira Idoue
Recent advances in wireless communication technologies have drawn attention to ad hoc networking, which enables mobile users to communicate with each other without any infrastructure. While many ad hoc routing protocols have been proposed, ad hoc communication and its applications are not widespread. The handiest wireless devices, namely mobile phone terminals, are expected to expedite the widespread use of ad hoc communication. More and more mobile phones are equipped with short-distance wireless interfaces such as wireless LAN and Bluetooth. When ad hoc communication is realized on such mobile phones, it will be available anytime, anywhere, which we believe will become a driving force for ad hoc networking technology. However, although the performance of mobile phones has improved greatly over the past several years, their resources are still limited compared to laptop PCs. In this paper, we first present design principles for mobile ad hoc communication based on assumed use cases. Following these design principles, we implement ad hoc networking protocols as well as applications on mobile phones to verify their effectiveness. Based on the results of our evaluations, we discuss the performance characteristics and potential application areas.

Ad Hoc Networks
Performance Evaluation of Directional MAC Protocols for Deafness Problem in Ad Hoc Networks

Masanori Takata, Masaki Bandai, Takashi Watanabe
Several directional MAC protocols for ad hoc networks using directional antennas have been proposed recently. Although directional antennas have great potential, such as high spatial reuse and range extension, directional MAC protocols inherently introduce new problems arising from directivity. Deafness, caused by a lack of state information from neighbor nodes, is one of the major problems and reduces performance. This paper presents two directional MAC protocols that handle the deafness problem, DMAC/DA (Directional MAC with Deafness Avoidance) and RI-DMAC (Receiver-Initiated Directional MAC), and evaluates them through an extensive simulation study. DMAC/DA handles deafness proactively: WTS (Wait To Send) frames are transmitted to notify potential transmitters that may experience deafness of the on-going communication. In this paper, DMAC/DA is enhanced by next-packet notification, called DMAC/DA with NPN, to distinguish transmitters from other neighbors. RI-DMAC, on the other hand, handles deafness reactively, using a combination of sender-initiated and receiver-initiated operations: each node polls a potentially deaf node using RTR (Ready To Receive) after the completion of every dialog. The experimental results show that our proposed protocols outperform existing directional MAC protocols in terms of throughput, control overhead, and packet drop ratio.

Network Protocols
A Proposal for Service Differentiation by a Link Layer Protocol Based on SR ARQ and Its Evaluation

Toshihiro Shikama, Tetsushi Matsuda, Yoshiaki Terashima, Takashi Watanabe, Tadanori Mizuno
This paper studies the situation where multiple IP flows are aggregated over a single wireless channel and error recovery by retransmission is performed by Selective-Repeat (SR) ARQ. We propose MQ-PFRS (Multi-QoS Per-Flow Resequencing) ARQ, which provides a differentiated service for each IP flow depending on its QoS class, such as real-time or non-real-time. MQ-PFRS ARQ eliminates interference among IP flows by resequencing received packets independently for each flow. It also bounds the maximum packet delay by limiting the persistency of retransmissions and the maximum resequencing time for each packet. This paper also presents an analysis of the probability distribution of real-time packet delays. Simulation results show that the delay variation of real-time traffic is effectively controlled by the proposed MQ-PFRS ARQ and that the packet delay distribution is quite consistent with the results of the analysis. This means that MQ-PFRS is effective for handling multiple QoS classes of IP flows and that the quality of real-time traffic transferred by the scheme can be predicted by the analysis.
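The per-flow resequencing idea is easy to visualize; the sketch below is our own minimal illustration (the class and method names are invented, and the paper's retransmission-persistency and resequencing-time limits are omitted):

```python
from collections import defaultdict

class PerFlowResequencer:
    """Sketch of per-flow resequencing: packets are reordered
    independently for each IP flow, so a loss in one flow does not
    stall in-order delivery in another.  Illustrative only."""

    def __init__(self):
        self.next_seq = defaultdict(int)   # next expected sequence number per flow
        self.buffers = defaultdict(dict)   # flow_id -> {seq: payload}

    def receive(self, flow_id, seq, payload):
        """Accept a (possibly out-of-order) packet; return the payloads
        that can now be delivered in order for this flow."""
        self.buffers[flow_id][seq] = payload
        delivered = []
        buf = self.buffers[flow_id]
        while self.next_seq[flow_id] in buf:
            delivered.append(buf.pop(self.next_seq[flow_id]))
            self.next_seq[flow_id] += 1
        return delivered

r = PerFlowResequencer()
r.receive("flow-A", 1, "a1")          # buffered: flow-A still waits for seq 0
out_b = r.receive("flow-B", 0, "b0")  # flow-B delivers at once: no cross-flow blocking
out_a = r.receive("flow-A", 0, "a0")  # releases a0 and then the buffered a1
```

With a shared (non-per-flow) buffer, the arrival on flow-B would have had to wait behind flow-A's missing packet; keeping one buffer per flow removes exactly that interference.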

Research Papers
A High Collusion-Resistant Approach to Distributed Privacy-preserving Data Mining

Shintaro Urabe, Jiahong Wang, Eiichiro Kodama, Toyoo Takata
Data mining across different companies, organizations, online shops, and the like (hereafter called sites) is necessary to discover valuable shared patterns, associations, trends, or dependencies in their shared data. Privacy, however, is a concern: in many situations data mining must be conducted without violating any privacy. In response to this requirement, this paper proposes an effective distributed privacy-preserving data mining approach called CRDM (Collusion-Resistant Data Mining). CRDM is characterized by its ability to resist collusion: unless all sites other than the victim collude, the privacy of a site cannot be violated. Considering that, for applications that do not need so high a level of security, excess security assurance would incur extra communication costs, an extension scheme is also presented so that communication cost can be restrained while a user-specified level of security is still maintained. The results of both analytical and experimental performance studies demonstrate the effectiveness of CRDM.

Acquisition and Rectification of Shape Data Obtained by a Moving Range Sensor

Atsuhiko Banno, Katsushi Ikeuchi
“Modeling from Reality” techniques are making great progress because of the availability of accurate geometric data from three-dimensional digitizers. These techniques contribute to numerous applications in many areas. Among them, one of the most important and comprehensive applications is modeling cultural heritage objects. For a large object, scanning from the air is one of the most efficient methods for obtaining 3D data. We developed a novel 3D measurement system, the Floating Laser Range Sensor (FLRS), in which a range sensor is suspended beneath a balloon. The obtained data, however, have some distortions due to movement of the system during the scanning process. We propose two novel methods to rectify the shape data obtained by the moving range sensor. One method rectifies the data by using image sequences; the other rectifies the data without images. To test these methods, we have conducted a digital archiving project of a large-scale heritage object, in which our algorithms are applied. The results show the effectiveness of our methods. Moreover, both methods are applicable not only to our FLRS, but also to moving range sensors in general.

Registration and Deformation of 3D Shape Data through Parameterized Formulation

Tomohito Masuda, Katsushi Ikeuchi
In this paper, we investigate a conventional registration implementation, consisting of rotation and translation, to design the most precise registration and thus accurately restore the 3D shape of an object. To achieve the most accurate registration, our implementation needs robustness against data noise and against the initial pose and position of the data. To verify its accuracy, we compare its registration behavior with that of conventional methods and evaluate the numerical accuracy of the transformation parameters it obtains. However, registration by rigid-body transformation is not enough for modeling and shape comparison: registration with deformation is needed. In this paper, we therefore extend our robust registration to estimate the shape parameters simultaneously with the rigid-body transformation parameters. This extension assumes that the deformation is formulated strictly from the deformation mechanism. We additionally introduce applications of the extended method.

Regular Paper
Sub-Computation Based Transition Predicate Abstraction

Carl Christian Frederiksen, Masami Hagiya
The transition predicate abstraction framework developed by Podelski, et al. (2005) captures size relations over state transitions, which can be used to show the infeasibility of certain program computations. In particular, general liveness properties (i.e., properties of infinite computations) can be verified by reducing the verification problem to one of fair termination and then proving that all (infinite) fair computations are infeasible. We present an extension of the algorithm by Podelski, et al. that can be used to improve the precision of transition predicate abstraction as well as to speed up analysis time for programs with well-structured control flow. The key is to identify sub-computations that can be evaluated independently of their context. Efficiency is then readily improved by analyzing each sub-computation in turn, thus avoiding reanalysis of the effect of a given sub-computation for different contexts. Further, precision can be improved by using stronger methods for extracting summary information about a given sub-computation. We present two versions of the sub-computation based analysis: one for a non-parallel imperative language with loops and recursive procedures, serving as an introduction, and one for the extension of that language to a parallel language with synchronous communication via statically named channels.

Regular Paper
An Equivalence Relation for the Typed Ambient Calculus

Toru Kato
The ambient calculus is a process algebra designed for describing mobile processes. It has a layered structure of ambients that enables us to describe not only mobile processes but also the world in which the processes move around, such as computer networks and freight systems. When we describe such a system with ambients, however, malicious processes can destroy nodes of the network or alter the construction of the system. Thus, several mobility types for the ambient calculus have been proposed that let us give each node desirable characteristics preventing malicious processes from acting harmfully. Gordon and Cardelli, the originators of the ambient calculus, also defined an equivalence relation for the untyped ambient calculus. Our previous work pointed out that processes identified by this relation can nevertheless have different properties, and it refined the relation so that those processes can be discriminated. This paper shows that neither the original relation nor our previous relation is applicable to the typed ambient calculus, and it presents another relation that is suitable.

Regular Paper
On-the-fly Model Checking of Security Protocols and Its Implementation by Maude

Guoqiang Li, Mizuhito Ogawa
Trace analysis for a security protocol represents every possible run as a trace and analyzes whether any insecure run is reachable. The number of traces will be infinite due to (1) infinitely many sessions of a protocol, (2) infinitely many principals in the network, and (3) infinitely many messages that intruders can generate. This paper presents an on-the-fly model checking method that restricts and abstracts these infinite factors to a finite model. First, we restrict a typed process calculus to avoid recursive operations, so that only finitely many sessions are considered. Next, a bound variable is introduced as an index of a message to represent its intended destination, so that an unbounded number of principals are finitely described. Then, messages whose irrelevant parts are reduced in a protocol are unified to a parametric message based on the type information. We implement the on-the-fly model checking method using Maude, and automatically detect the flaws of several security protocols, such as the NSPK protocol and the Woo-Lam protocol.

Special Issue on Selected Papers from ICMU2006
A Heterogeneous Multihop Wireless Access Network for Multipoint Streaming: A Detailed Performance Analysis

Mariann Hauge, Øivind Kure
The current 3G-Cellular radio access network cannot support many concurrent high data rate unicast or multicast flows due to limited radio resources. We have proposed a heterogeneous wireless network architecture intended for point-to-multipoint services, to improve the availability of such services to mobile users. The architecture consists of a 3G-Cellular network, supported by a number of local ad hoc networks that are established on demand. In this framework the 3G multipoint-channel range is reduced while the unicast and signalling connections are maintained. Local ad hoc networks are used to forward the multicast data onto users located outside the shortened 3G multicast-channel range. In this paper we present a performance analysis of multicast streaming on the heterogeneous network architecture. The simulation results are complemented with a sensitivity analysis identifying the impact that parameters like node mobility and traffic patterns will have. The results verify that the architecture and the routing protocol are able to provide multicast services with acceptable quality to the multicast subscribers, while conserving 3G-Cellular radio resources.

Special Issue on Selected Papers from ICMU2006
Design, Analysis, and Evaluation of Mobile Agent based Privacy Protection Scheme for Multi-party Computation Problems

Md. Nurul Huda, Eiji Kamioka, Shigeki Yamada
Existing cryptography-based algorithms for multi-party computation problems (MPCPs) are either too complex to be used practically or applicable only to the specific applications for which they were developed. In addition, traditional (non-cryptography-based) algorithms do not provide good privacy protection for MPCPs. This paper presents the security system design, intuitive analysis, and empirical evaluation of an agent server platform for protecting privacy when solving multi-party computation problems with traditional algorithms. We have devised security policies for an agent server platform and shown how the enforcement of those policies can effectively protect privacy while solving many MPCPs. Experimental analysis shows that the presented agent platform security system can significantly enhance privacy protection while solving such problems with traditional algorithms.

Special Issue on Selected Papers from ICMU2006
Sensor Cube: A Modular, Ultra-Compact, Power-Aware Platform for Sensor Networks

Hamadoun Diall, Kishore Raja, Ioannis Daskalopoulos, Stephen Hailes, George Roussos, Tom Torfs, Chris Van Hoof
The Sensor Cube platform is an ultra-compact, modular, and power-aware way of building sensor networks, based on a stackable hardware design supported by a Tiny OS based operating environment. The Sensor Cube hardware measures 14 × 14 × 18 mm³ and features an integrated coplanar antenna, a design that results in an ultra-compact footprint. A core characteristic of the system is that its modular design allows each of the radio, processor, sensing, and power management layers to be interchanged in a Lego-like manner. Moreover, its low-power radio (based on the 2.4 GHz Nordic nRF2401 design) and microcontroller (based on the Texas Instruments MSP430) allow very efficient operation. The Sensor Cube operating and software development environment is derived from Tiny OS, which has been modified to meet the hardware requirements, in particular by introducing a power-aware and reliable ALOHA-type MAC protocol. In this paper we present our experience with the Sensor Cube platform and, in particular, the implications of its ultra-compact design for system performance, specifically as it relates to the characteristics and limitations of the radio unit.

Special Issue on Selected Papers from ICMU2006
Performance Improvement of Ad Hoc Networks by Deployment of Directional Antenna

Kagenari Yamamoto, Miki Yamamoto
Ad hoc networks, which require no infrastructure equipment such as access points, have received great attention in recent years. In an ad hoc network, each host controls its wireless access in a distributed fashion. Generally, carrier sensing is used to detect channel availability: using RTS and CTS control frames, the MAC sub-layer predicts channel use and resolves the hidden-terminal problem. The broadcast nature of RTS and CTS control frames, however, leads to inefficient spatial reuse. One promising solution is the use of directional antennas. Heterogeneous configurations are common in ad hoc networks, meaning that each host may be equipped with different network facilities; in particular, it will be common for only some hosts to be equipped with directional antennas. This paper evaluates the throughput performance of ad hoc networks in this realistic situation and shows how much performance improvement the deployment of directional antennas brings to the network.

Special Issue on Selected Papers from ICMU2006
Decentralised Discovery of Mobile Objects

Richard Glassey, Graeme Stevenson, Robert Ian Ferguson
The partially connected nature of mobile and ubiquitous computing environments presents software developers with hard challenges. Mobile code has been suggested as a natural fit for simplifying software development for these environments. However, existing strategies for discovering mobile code assume an underlying fixed, stable network. An alternative approach is required for mobile environments, where network size may be unknown and reliability cannot be guaranteed. This paper introduces AMOS, a mobile object platform augmented with a structured overlay network that provides a fully decentralised approach to the discovery of mobile objects. We demonstrate how this technique has better reliability and scalability properties than existing strategies, with minimal communication overhead. Building upon this novel discovery strategy, we show how load balancing of mobile objects in an AMOS network can be achieved through probabilistic means.

Special Issue on Selected Papers from ICMU2006
AD LOC: Collaborative Location-based Annotation

Derek J. Corbett, Daniel Cutting
AD LOC is a novel system for mobile devices to collaboratively tie persistent virtual notes to physical locations. Notes that are relevant to particular locations can be created and then cached using serendipitously formed one-hop wireless ad hoc network connections. The location provides an address to which the information is relevant and devices attempt to keep the information stored at devices which remain close to this address. In simulation experiments, even under conditions of nodal flux, it is possible to retrieve over 90% of the notes that have been stored.

Network Security
Anomaly Detection Using Integration Model of Vector Space and Network Representation

Mizuki Oka, Kazuhiko Kato
We propose the Eigen Co-occurrence Matrix (ECM) method, which is a modeling method for tracking the behaviors of an individual, system, or network in terms of event sequences of discrete data. Our method uses the correlation between events in a sequence to extract distinct characteristics. A key idea behind the ECM method is to regard a sequence as a serialized sequence that originally had structural relations and to extract the embedded dependencies of the events. To test its retrieval performance, we applied the ECM method to the problem of anomaly detection in intrusion detection systems. Specifically, we used the method to model a UNIX command sequence and attempted to detect intruders masquerading as valid users. The experimental results reveal that the ECM method offers distinct characteristic models for analyzing event sequences.
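As a rough illustration of the kind of event-correlation features the ECM method starts from, the sketch below builds a windowed co-occurrence matrix over a toy UNIX command sequence; the function name and window size are our own choices, and the eigen decomposition and matching steps of the actual method are omitted:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_matrix(events, window=6):
    """Count, for each ordered pair of event types, how often the first
    precedes the second within a sliding window over the sequence.
    This is the kind of matrix the ECM method would decompose; the
    decomposition itself is not shown here."""
    vocab = sorted(set(events))
    index = {e: i for i, e in enumerate(vocab)}
    m = [[0] * len(vocab) for _ in vocab]
    for start in range(len(events) - window + 1):
        w = events[start:start + window]
        for a, b in combinations(w, 2):   # a precedes b within the window
            m[index[a]][index[b]] += 1
    return vocab, m

cmds = ["ls", "cd", "ls", "cat", "cd", "ls"]
vocab, m = cooccurrence_matrix(cmds, window=3)
```

Each row/column pair records how strongly one command tends to be followed by another, which is exactly the "embedded dependency" information a flat frequency count would lose.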

Processor Architecture
An Efficient Analysis of Worst Case Flush Timings for Branch Predictors

Masahiro Konishi, Takashi Nakada, Tomoaki Tsumura, Hiroshi Nakashima, Hiroaki Takada
This paper proposes an efficient algorithm to find the worst case flush timings for a given program with respect to the number of branch mispredictions. We first give a basic algorithm based on dynamic programming that takes O(N²F) computation time for a program with N conditional branches and F flush timings. We then show that it can be improved to achieve a computation time of approximately O(NF) for practical programs, with a proof, and confirm this through an evaluation with SPEC CPU95 benchmarks.

Computer Graphics
Physical Model to Achieve Virtual Mezzotint

Daisuke Tasaki, Megumi Katou, Shinji Mizuno, Minoru Okada
The authors propose a method of producing virtual mezzotint using a physics-based rendering approach. Mezzotint is a traditional copperplate printing technique. An important characteristic is its gradations from black to white. This is acquired in three phases during the plate-making process, i.e., by roughening, scraping, and burnishing. Numerous dots and burrs are created with a rocker in the roughening phase over the entire surface of the copper plate to obtain black areas in the print. The burrs are removed in the second phase with a scraper to yield halftones. The plate surface is finally polished with a burnisher in the last phase to produce the white areas. A method of simulating these phases and physical phenomena is discussed in this paper to make mezzotint enjoyable even for beginners and children. Zigzag-stroke patterns and paraboloidal-dot models are applied to the rocker to simulate the roughening phase. Reducing and smoothing models are applied to the scraper and burnisher to simulate the scraping and burnishing phases. The feasibility of the proposed method is demonstrated by observing and comparing actual and virtual plate surfaces with determined patterns and actual pieces of handcrafted work.

Theory of Programs
Duality between Call-by-value Reductions and Call-by-name Reductions

Daisuke Kimura
Wadler proposed the dual calculus, which corresponds to classical sequent calculus LK, and studied the relationship between the λμ-calculus and the dual calculus as equational systems to explain the duality between call-by-value and call-by-name in a purely syntactical way. Wadler left an open question whether one can obtain similar results by replacing the equations with reductions. This paper gives one answer to his question. We first refine the λμ-calculus as reduction systems by reformulating sum types and omitting problematic reduction rules that are not simulated by reductions of the dual calculus. Secondly, we give translations between the call-by-name λμ-calculus and the call-by-name dual calculus, and show that they preserve the call-by-name reductions. We also show that the compositions of these translations become identity maps up to the call-by-name reductions. We also give translations for the call-by-value systems, and show that they satisfy properties similar to the call-by-name translations. Thirdly, we introduce translations between the call-by-value λμ-calculus and the call-by-name one by composing the above translations with duality on the dual calculus. We finally obtain results corresponding to Wadler's, but our results are based on reductions.

Intellectual Creative Tasks
A Multi-Resolution Grid Snapping Technique Based on Fuzzy Theory

Qamar Uddin Khand, Sumudu Dematapitiya, Sato Saga, Junji Maeda
This paper presents a grid snapping technique, called multi-resolution fuzzy grid snapping (MFGS), that enables automatic mouse cursor snapping for a multi-resolution grid system. Quick and frequent switching between high- and low-resolution grid snapping is essential to make geometrical drawings that include both fine and coarse structures when using CAD systems with ordinary single-resolution grid systems. MFGS is intended to relieve users of this tedious manual switching. MFGS dynamically selects an appropriate snapping resolution level from a multi-resolution grid system according to the pointing behavior of the user. MFGS even supports an extremely fine grid resolution level, which is referred to as the no-snapping level. We show experimental results which demonstrate that MFGS is an effective grid snapping technique that speeds up low-resolution grid snapping while retaining the ability to snap to high-resolution grids. Furthermore, we examine the role of fuzziness in MFGS and its effect on snapping performance.
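A toy flavor of multi-resolution snapping can be given in a few lines; this sketch is our own invention, not MFGS itself: it snaps to the coarsest grid whose nearest line falls within a fixed tolerance (standing in for the user's pointing fuzziness), falling back to finer grids otherwise:

```python
def fuzzy_snap(x, levels=(32, 8, 1), spread=4.0):
    """Try grids from coarse to fine and snap to the coarsest level
    whose nearest grid line is within `spread` of the cursor.  The
    level list, tolerance rule, and fallback are illustrative only;
    MFGS selects the level from the user's pointing behavior."""
    for pitch in levels:
        nearest = round(x / pitch) * pitch
        if abs(x - nearest) <= spread:
            return nearest, pitch
    return x, None   # the "no-snapping" case

pos, pitch = fuzzy_snap(30.5)    # close to a 32-pitch line: coarse snap to 32
pos2, pitch2 = fuzzy_snap(19.7)  # far from 32-pitch lines: snaps on the 8-pitch grid
```

The point of the real technique is that this level selection happens automatically and continuously, so the user never toggles between coarse and fine grids by hand.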

Database/Software Paper
Self-organizing Clustering: Non-hierarchical Clustering for Large Scale DNA Sequence Data

Kou Amano, Hiroaki Ichikawa, Hidemitsu Nakamura, Hisataka Numa, Kaoru Fukami-Kobayashi, Yoshiaki Nagamura, Natsuo Onodera
Recently, clustering has been recognized as an important and fundamental method for analyzing and classifying large-scale sequence data to provide useful information. We developed a novel clustering method, designated Self-organizing clustering (SOC), that uses oligonucleotide frequencies to cluster large-scale DNA sequence data. We implemented SOC as a command-line program package and developed a server that provides access to it and enables visualization of the results. SOC effectively and quickly classifies many sequences that have low or no homology to each other. The command-line program is downloadable at http://rgp.nias.affrc.go.jp/programs/. The on-line web site is publicly accessible at http://rgp.nias.affrc.go.jp/SOC/. The common gateway interface (CGI) for the server is also provided within the package.
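The oligonucleotide-frequency representation that SOC clusters in can be sketched as follows; the function below is an illustration with an invented name and normalization, and it builds only the feature vector for one sequence, leaving the self-organizing clustering step out:

```python
from itertools import product

def kmer_frequencies(seq, k=2):
    """Represent a DNA sequence by its oligonucleotide (k-mer)
    frequency vector.  Homology-free methods like SOC compare such
    vectors instead of aligning the sequences themselves."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        km = seq[i:i + k]
        if km in counts:       # skips windows containing non-ACGT symbols
            counts[km] += 1
    total = max(1, len(seq) - k + 1)
    return [counts[km] / total for km in kmers]

v = kmer_frequencies("ACGTACGT", k=2)   # 16-dimensional frequency vector
```

Because the vector length depends only on k, sequences with no homology at all still live in the same feature space and can be clustered together.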

Original Papers
A Biclustering Method for Gene Expression Module Discovery Using a Closed Itemset Enumeration Algorithm

Yoshifumi Okada, Wataru Fujibuchi, Paul Horton
A gene expression module (module for short) is a set of genes with shared expression behavior under certain experimental conditions. Discovery of modules enables us to uncover the function of uncharacterized genes or genetic networks. In recent years, several biclustering methods have been proposed to discover modules from gene expression data matrices, where a bicluster is defined as a subset of genes that exhibit a highly correlated expression pattern over a subset of conditions. Biclustering, however, involves combinatorial optimization in selecting the rows and columns composing modules. Hence most existing algorithms are based on heuristic or stochastic approaches and produce possibly sub-optimal solutions. In this paper, we propose a novel biclustering method, BiModule, based on a closed itemset enumeration algorithm. By exhaustively enumerating biclusters, it is possible to select only those satisfying certain criteria, such as a user-specified bicluster size or an enrichment of functional annotation terms. We performed comparative experiments with existing salient biclustering methods to test the validity of biclusters extracted by BiModule, using synthetic data and real expression data. We show that BiModule provides high performance compared with the other methods in extracting artificially embedded modules as well as modules strongly related to GO annotations, protein-protein interactions, and metabolic pathways.
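The correspondence between closed itemsets and biclusters can be shown on a toy binarized gene-by-condition matrix; the brute-force enumeration below is illustrative only (BiModule uses an efficient closed itemset enumeration algorithm, which this sketch does not reproduce):

```python
from itertools import combinations

def closed_itemsets(matrix, min_rows=2):
    """Enumerate closed itemsets over a binarized gene x condition
    matrix: each closed condition set, together with the genes that
    support it, is one maximal bicluster.  Exhaustive and exponential;
    for illustration only."""
    n_cols = len(matrix[0])
    found = set()
    for r in range(1, n_cols + 1):
        for cols in combinations(range(n_cols), r):
            rows = [i for i, row in enumerate(matrix)
                    if all(row[c] for c in cols)]
            if len(rows) < min_rows:
                continue
            # closure: every condition shared by ALL supporting genes
            closure = tuple(c for c in range(n_cols)
                            if all(matrix[i][c] for i in rows))
            found.add((tuple(rows), closure))
    return sorted(found)

m = [[1, 1, 0],    # gene 0 active under conditions 0, 1
     [1, 1, 1],    # gene 1 active everywhere
     [0, 1, 1]]    # gene 2 active under conditions 1, 2
biclusters = closed_itemsets(m)
```

Closedness is what makes the enumeration finite and non-redundant: many condition subsets collapse to the same maximal bicluster, so only three distinct biclusters survive here.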

Original Papers
Hardness Results on Local Multiple Alignment of Biological Sequences

Tatsuya Akutsu, Hiroki Arimura, Shinichi Shimozono
This paper studies the local multiple alignment problem: given protein or DNA sequences, locate a region (i.e., a substring) of fixed length from each sequence so that the score determined from the set of regions is optimized. We consider the following scoring schemes: the relative entropy score (i.e., average information content), the sum-of-pairs score, and a relative entropy-like score introduced by Li, et al. We prove that local multiple alignment is NP-hard under each of these scoring schemes. In particular, we prove that local multiple alignment is APX-hard under relative entropy scoring, which implies that unless P = NP there is no polynomial time algorithm whose worst case approximation error can be made arbitrarily small (precisely, no polynomial time approximation scheme). Several related theoretical results are also provided.
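The relative entropy score (average information content) mentioned above can be made concrete with a short example; the implementation choices below (uniform background by default, base-2 logarithm, averaging over columns) are our reading of the standard definition, not necessarily the paper's exact formulation:

```python
import math

def relative_entropy_score(regions, background=None):
    """Average information content of a set of equal-length regions:
    for each column, sum p(a) * log2(p(a)/q(a)) over observed symbols
    a, where p is the column frequency and q the background
    distribution (uniform over the alphabet by default)."""
    alphabet = sorted(set("".join(regions)))
    if background is None:
        background = {a: 1.0 / len(alphabet) for a in alphabet}
    L, n = len(regions[0]), len(regions)
    score = 0.0
    for col in range(L):
        counts = {}
        for r in regions:
            counts[r[col]] = counts.get(r[col], 0) + 1
        for a, c in counts.items():
            p = c / n
            score += p * math.log2(p / background[a])
    return score / L   # average over columns

# A perfectly conserved first column scores 2 bits; the mixed second
# column scores less, so the average reflects motif strength.
s = relative_entropy_score(["AC", "AC", "AG", "AT"])
```

The hardness result says that choosing one region per sequence to maximize this quantity cannot be approximated arbitrarily well in polynomial time unless P = NP.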

Original Papers
Retrieving Functionally Similar Bioinformatics Workflows Using TF-IDF Filtering

Junya Seo, Shigeto Seno, Yoichi Takenaka, Hideo Matsuda
In bioinformatics, it is becoming important to manage the many tools used to analyze biological data. These tools are provided by various institutions, and their number is rapidly increasing. Recently, many institutions have been offering such tools, and access to databases, through Web service technologies. Workflow technology is one way to manage these tools, and it is becoming available in bioinformatics: several research groups develop and provide workflow composition tools. Consequently, the concept of "workflow reuse" has arisen to help workflow composition. Nevertheless, it is still difficult to find reusable workflows in a workflow repository. In this paper, we propose a method to extract reusable workflows from a repository using currently available information. We were able to extract functionally similar workflows as reusable ones. By extracting reusable workflows efficiently, researchers can compose their workflows more easily.
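The TF-IDF filtering idea in the title can be sketched directly: treat each workflow as a bag of the services it invokes, weight the services by TF-IDF, and rank candidates by cosine similarity. The service names and weighting details below are illustrative, not taken from the paper:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF weighting over bags of terms (here, the services each
    workflow uses).  Rare services get high weight; ubiquitous ones
    are filtered down toward zero."""
    df = Counter()
    for d in docs:
        df.update(set(d))
    n = len(docs)
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({t: (c / len(d)) * math.log(n / df[t])
                     for t, c in tf.items()})
    return vecs

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

workflows = [
    ["blast", "fetch_sequence", "align"],   # hypothetical service names
    ["blast", "fetch_sequence", "plot"],
    ["normalize", "cluster", "plot"],
]
vecs = tfidf_vectors(workflows)
```

Under this weighting the first two workflows, which share sequence-analysis services, rank as more similar to each other than either does to the clustering workflow.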

Original Papers
Function Approximation Approach to the Inference of Neural Network Models of Genetic Networks

Shuhei Kimura, Katsuki Sonoda, Soichiro Yamane, Koki Matsumura, Mariko Hatakeyama
A model based on a set of differential equations can effectively capture various dynamics. This type of model is therefore ideal for describing genetic networks, and several genetic network inference algorithms based on such models have been proposed. Most of these inference methods describe genetic networks with differential equations of a fixed form. In this study, we propose a new method for the inference of genetic networks that does not use models of a fixed form, but instead uses neural network models. To interpret the obtained neural network models, we also propose a method based on sensitivity analysis. The effectiveness of the proposed methods is verified through a series of artificial genetic network inference problems.

Original Papers
GroupAdaBoost: Accurate Prediction and Selection of Important Genes

Takashi Takenouchi, Masaru Ushijima, Shinto Eguchi
In this paper, we propose GroupAdaBoost, a variant of AdaBoost for statistical pattern recognition. The objective of the proposed algorithm is to solve the "p ≫ n" problem that arises in bioinformatics. In a microarray experiment, gene expressions are observed in order to extract any specific pattern of gene expression related to a disease status; typically, p is the number of investigated genes and n is the number of individuals. Because of the "p ≫ n" problem, ordinary methods for predicting the genetic causes of diseases are apt to over-learn from any particular training dataset. We observed that GroupAdaBoost gave robust performance for cases with an excess number p of genes. On several real datasets that are publicly available on the web, we compared the results of the proposed method with those of other methods, and we conducted a small-scale simulation study to confirm the validity of the proposed method. Additionally, the proposed method worked effectively for the identification of important genes.
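For readers unfamiliar with the base algorithm, here is a plain AdaBoost skeleton with one-dimensional threshold stumps; it fixes notation only and does not reproduce GroupAdaBoost's group-wise treatment of the p ≫ n setting:

```python
import math

def adaboost(xs, ys, rounds=5):
    """Plain AdaBoost: repeatedly fit the weighted-error-minimizing
    threshold stump, weight it by alpha = 0.5*ln((1-err)/err), and
    boost the weights of misclassified examples.  Labels are +1/-1."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []   # list of (alpha, threshold, sign)
    for _ in range(rounds):
        best = None
        for thr in sorted(set(xs)):
            for sign in (1, -1):
                err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                          if (sign if xi >= thr else -sign) != yi)
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # guard log of 0
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, sign))
        # reweight: misclassified examples gain weight
        w = [wi * math.exp(-alpha * yi * (sign if xi >= thr else -sign))
             for xi, yi, wi in zip(xs, ys, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    def predict(x):
        s = sum(alpha * (sign if x >= thr else -sign)
                for alpha, thr, sign in ensemble)
        return 1 if s >= 0 else -1
    return predict

predict = adaboost([1, 2, 3, 4, 5, 6], [-1, -1, -1, 1, 1, 1])
```

With p ≫ n, the stump search above would range over thousands of gene features for a handful of samples, which is exactly where the plain algorithm over-learns and a grouped variant becomes useful.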

Music Interfaces
Drumix: An Audio Player with Real-time Drum-part Rearrangement Functions for Active Music Listening

Kazuyoshi Yoshii, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno
This paper presents a highly functional audio player, called Drumix, that allows a listener to control the volume, timbre, and rhythmic patterns (drum patterns) of bass and snare drums within existing audio recordings in real time. A demand for active music listening has recently emerged. If the drum parts of popular songs could be manipulated, listeners could have new musical experiences by freely changing their impressions of the pieces (e.g., making the drum performance more energetic) instead of passively listening to them. To achieve this, Drumix provides three functions for rearranging drum parts, i.e., a volume control function that enables users to cut or boost the volume of each drum with a drum-specific volume slider, a timbre change function that allows them to replace the original timbre of each drum with another selected from a drop-down list, and a drum-pattern editing function that enables them to edit repetitive patterns of drum onsets on a graphical representation of their scores. Special musical skills are not required to use these functions. Subjective experiments revealed that Drumix could add a new dimension to the way listeners experience music.

Group Interaction Support and Groupware
Estimating Interruptibility in the Home for Remote Communication Based on Audio-Visual Tracking

Yoshinao Takemae, Takehiko Ohno, Ikuo Yoda, Shinji Ozawa
This paper presents a method for automatically estimating human interruptibility in the home environment. To make online remote communication smoother, determining whether it is appropriate to interrupt the remote partner is critical. As a first step toward this goal, several audio-visual features, extracted from data streams captured by a camera and a microphone, are correlated with human assessments of their own interruptibility. Based on these features, the level of interruptibility is estimated using the trained Support Vector Regression (SVR) technique. Finally, we discuss the potential of our method based on the results of several experiments.

Regular Papers
Simulation of Compositional Variation in Liquid Phase Epitaxy InGaP Using a Two Dimensional Model

Hiromoto Susawa, Toshihiro Tsuji, Kazumasa Hiramatsu, Takashi Jimbo, Tetsuo Soga
Compositional variation during initial growth was observed in Liquid Phase Epitaxy (LPE) experiments. The cause is thought to be a flow in the melt induced when the entire melt slides from a region without a substrate onto the substrate before growth. The phenomenon was previously simulated for InGaP growth using a one-dimensional model, with a convection term added to a diffusion-limited model; the flow velocity in the melt was approximated by the solution to Stokes's first problem. It was shown that the In composition of the grown solid is larger with the flow than without it. In this paper, InGaP LPE growth is considered using a two-dimensional model of the flow. For solute transport, a two-dimensional model is also adopted, except at the growth interface. A result similar to that of the one-dimensional model was obtained. In the two-dimensional model, the In composition is greater than in the case without flow for the greater part of the growth time, but decreases when the flow transports dilute solution. This decrease does not appear in the one-dimensional model. The phenomenon can be explained by considering the influence of the flow on the mole fraction near the boundary layer.
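The abstract does not give the governing equations, but the model class it describes (diffusion-limited solute transport with an added convection term, and melt velocity approximated by Stokes's first problem) is commonly written as follows; the symbols (C, D, u, U_0, \nu) are standard choices, not taken from the paper:

```latex
% Convection-diffusion equation for solute concentration C,
% diffusion coefficient D, melt velocity field u:
\frac{\partial C}{\partial t} = D\,\nabla^2 C - \mathbf{u}\cdot\nabla C

% Stokes's first problem: fluid set in motion by a plate impulsively
% started at speed U_0; y is distance from the plate, \nu the
% kinematic viscosity:
u(y,t) = U_0\,\operatorname{erfc}\!\left(\frac{y}{2\sqrt{\nu t}}\right)
```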

Regular Papers
Enhancing Multiobjective Evolutionary Algorithms by Local Dominance and Local Recombination: Performance Verification in Multiobjective 0/1 Knapsack Problems

Hiroyuki Sato, Hernán Aguirre, Kiyoshi Tanaka
This paper proposes a method to enhance single-population multiobjective evolutionary algorithms (MOEAs) by searching based on local dominance and local recombination. In this method, first, all fitness vectors of individuals are transformed to polar coordinate vectors in objective function space. Then, the population is iteratively divided into several sub-populations by using declination angles. As a result, each sub-population covers a sub-region of the multiobjective space, with its individuals located around the same search direction. Next, local dominance is calculated separately for each sub-population after its principal search direction is aligned by rotation. Selection, recombination, and mutation are applied to individuals within each sub-population. The proposed method can improve the performance of MOEAs that use dominance-based selection, and can also reduce the overall computational cost of calculating dominance among solutions. In this paper we verify the effectiveness of the proposed method in obtaining Pareto-optimal solutions in two representative MOEAs, i.e., NSGA-II and SPEA2, on Multiobjective 0/1 Knapsack Problems.
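The division step described above can be sketched as follows. This is an illustrative toy, not the authors' code: fitness vectors are mapped to declination angles and the population is binned into angular sectors, so each sub-population covers one search direction. Two objectives are shown; the paper's method generalizes to more.

```python
import math

def divide_by_angle(fitness_vectors, n_subpops):
    """Bin individuals (by index) into n_subpops angular sectors of objective space."""
    subpops = [[] for _ in range(n_subpops)]
    for i, (f1, f2) in enumerate(fitness_vectors):
        # Declination angle lies in [0, pi/2] for non-negative objectives.
        angle = math.atan2(f2, f1)
        sector = min(int(angle / (math.pi / 2) * n_subpops), n_subpops - 1)
        subpops[sector].append(i)
    return subpops

# Four individuals' fitness vectors; indices grouped by search direction.
pop = [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9), (0.8, 0.2)]
print(divide_by_angle(pop, 2))  # [[0, 3], [1, 2]]
```

Within each such sub-population, dominance and recombination would then be applied locally, as the abstract describes.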

Wireless/Mobile Networks
Design and Implementation of Socket-level Bandwidth Aggregation Mechanism for Mobile Networking Environments

Hiroshi Sakakibara, Masato Saito, Hideyuki Tokuda
This paper presents a mechanism that aggregates the bandwidths of multiple network interfaces on computers in mobile environments. Research and development on wireless technologies has recently been extremely active. As a result, various kinds of wireless media have been emerging and have been installed on computers. However, the demand for broader bandwidths has not yet been met. We designed and developed a socket-level bandwidth aggregation mechanism (SBAM) that offers broader bandwidths using multiple wireless interfaces on computers. The most distinctive feature of SBAM is that it is implemented as a feature of the operating system. This has two benefits. First, SBAM can be deployed gradually across computers on the Internet. If a bandwidth aggregation mechanism (BAM) were implemented in the network stack, it would have to be implemented on all computers on the Internet; SBAM does not require this, since it can recognize its own existence on peer hosts and determine whether or not to use BAM. Second, it does not require modifications to most existing software. For example, if the mechanism were implemented as a feature of a transport protocol such as TCP, all applications using TCP would have to be rewritten to adapt to the new transport layer protocol. SBAM does not require this, since the socket hides the existence of the mechanism. We evaluated SBAM in an actual wireless network environment and achieved an increase in throughput by a factor of 1.6. SBAM offers broader bandwidth to applications by utilizing the various wireless devices on a computer while avoiding modifications to existing environments.
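The core idea, striping one socket's outgoing data across several interfaces below the application, can be illustrated with a toy sketch. This is not SBAM's actual implementation: real per-interface sockets are modeled here as plain byte buffers, and the chunk size and round-robin policy are invented for illustration.

```python
def stripe(data, n_ifaces, chunk=4):
    """Split a byte string into fixed-size chunks and assign them
    round-robin to per-interface send queues."""
    queues = [bytearray() for _ in range(n_ifaces)]
    for i in range(0, len(data), chunk):
        queues[(i // chunk) % n_ifaces].extend(data[i:i + chunk])
    return queues

# One application write, transparently split over two "interfaces";
# the receiver would reassemble chunks in order.
parts = stripe(b"abcdefghijkl", 2)
print([bytes(q) for q in parts])  # [b'abcdijkl', b'efgh']
```

The point of the socket-level placement is visible even in this toy: the caller hands over a single buffer and never sees the multiple paths underneath.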

Wireless/Mobile Networks
Efficient Channel Utilization Schemes for IEEE 802.11 DCF over MANET

Naoki Nakamura, Debasish Chakraborty, Apichet Chayabejara, Gen Kitagata, Takuo Suganuma, Goutam Chakraborty, Norio Shiratori
The MAC protocol of IEEE 802.11 reduces the hidden terminal problem using the RTS/CTS handshaking mechanism. However, it lacks the ability to release or reallocate a channel that was reserved but not used. For Mobile Ad hoc NETworks (MANET), this often keeps the channel unnecessarily inaccessible, resulting in inefficient channel utilization. In this paper, we propose schemes for releasing and reallocating unused channels in MANETs deployed over IEEE 802.11. We introduce a Network Allocation Vector (NAV) updating method that can improve the performance of both the existing RTS Validation scheme and our proposed Extra Frame Transmission scheme. We then combine these schemes for further improvement. Through simulations of different scenarios with varying network load and node density, we show that combining these schemes leads to a throughput improvement of up to 40%. In addition, our proposed mechanisms have no compatibility problems.

Learning Support
An Educational Schoolbag System for Providing an Object Reminder Service

Lei Jing, Zixue Cheng, Mizuo Kansen, Tongjun Huang, Shengguo Sun
Embedding educational functions into devices in the everyday environment is an important task for advocates of ubiquitous learning. In this paper, we discuss how to add a reminder service to a schoolbag: an educational function is added to the device to help pupils remember their belongings. Remembering one's belongings is a difficult task, and reminder services have long been an important subject of computer applications. However, most reminding tools are designed for business, not education. Most such services use PDAs as terminals and require users to create the reminder list themselves, making them too complex for some pupils to use. These systems also seldom pay attention to helping users learn how to avoid forgetting. In this research, a ubiquitous learning support system built around the schoolbag is presented to assist pupils in managing their personal items. Using RFID and infrared sensors, a microcontroller embedded in the schoolbag can monitor what has been put into or taken out of the bag and automatically maintain a list of its contents. The bag also enables teachers to create a schedule specifying the items required on given days. The microcontroller then compares the schedule with the item list in the schoolbag and provides a reminder service for the pupil. Beyond the reminder service, which is based on principles of behavior modification, the paper also proposes a series of methods to help pupils form good personal management habits and reduce their dependence on outside machines.
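The reminder check at the heart of the system reduces to a comparison between the teacher's schedule and the sensed bag contents. A minimal sketch, with invented item names (the real system maintains the bag list from RFID/infrared events):

```python
def missing_items(schedule, bag_contents):
    """Return the scheduled items not currently detected in the bag,
    sorted for a stable reminder message."""
    return sorted(set(schedule) - set(bag_contents))

# Schedule for the day vs. items the sensors have seen in the bag.
schedule = ["math textbook", "recorder", "gym clothes"]
bag = ["math textbook", "pencil case"]
print(missing_items(schedule, bag))  # ['gym clothes', 'recorder']
```

Extra, unscheduled items (here, the pencil case) are deliberately ignored; only what is missing triggers a reminder.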

Network Security
NAL Level Stream Authentication for H.264/AVC

Shintaro Ueda, Hiroshi Shigeno, Ken-ichi Okada
The new video coding standard H.264/AVC offers major improvements in coding efficiency and flexible mapping to transport layers. It consists of a video coding layer (VCL) and a network abstraction layer (NAL). The VCL carries out the coding, and the NAL encapsulates data from the VCL in a manner that readily enables transmission over a broad variety of transport layers. Since no security features are offered, a scheme to authenticate the sender and the integrity of the data is needed. In this paper we propose SANAL, a stream authentication scheme for H.264/AVC. Unlike existing schemes, which carry out authentication at the packet level, authentication procedures in SANAL are carried out at the NAL level. This makes it possible to assign priorities to H.264/AVC-specific data without interfering with H.264/AVC features. We implemented a SANAL prototype and carried out comparative evaluations of playout rate, communication overhead, and processing load. The evaluation results show that the playout rate is improved by 40% compared to existing schemes.
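The key distinction, authenticating each NAL unit rather than each packet, can be sketched as follows. This is only a stand-in to show the per-unit granularity: SANAL's actual construction (signatures, chaining, and priority handling) is more elaborate than a plain HMAC, and the key and NAL payloads below are illustrative.

```python
import hashlib
import hmac

KEY = b"shared-session-key"  # illustrative key, not part of the paper

def tag_nal_units(nal_units):
    """Attach an authentication tag to each NAL unit independently,
    so units can be verified and prioritized regardless of packetization."""
    return [(u, hmac.new(KEY, u, hashlib.sha256).digest()) for u in nal_units]

def verify(unit, tag):
    return hmac.compare_digest(hmac.new(KEY, unit, hashlib.sha256).digest(), tag)

# Illustrative NAL units (SPS, PPS, IDR slice payloads).
units = [b"\x67sps...", b"\x68pps...", b"\x65idr-slice..."]
tagged = tag_nal_units(units)
print(all(verify(u, t) for u, t in tagged))  # True
```

Because the tag travels with the NAL unit, a lost or reordered packet invalidates only the units it carried, which is what lets the scheme respect H.264/AVC's own priority structure.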

Network Quality and Control
Congestion Control Using Multilevel Explicit Congestion Notification

Arjan Durresi, Leonard Barolli, Raj Jain, Makoto Takizawa
Congestion remains one of the main obstacles to Quality of Service (QoS) on the Internet. We believe that a good solution to Internet congestion should optimally combine congestion signaling from the network with source reaction, with the following main goals: minimum losses and delays, maximum network utilization, fairness among flows, and, last but not least, scalability. The solution should not significantly increase the complexity of router operations. In this paper, we present a new traffic management scheme based on an enhanced Explicit Congestion Notification (ECN) mechanism. Our Multilevel ECN (MECN) conveys more accurate feedback about the network's congestion status than current ECN. We have designed a TCP source reaction that takes advantage of this extra congestion information. MECN therefore responds better to congestion, allowing the system to reach the stability point faster, which results in better network performance. We use control-theoretical tools, verified by ns-2 simulations, to show that MECN can outperform the de facto standard RED/ECN by up to a factor of twenty in terms of throughput.
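The intuition behind a multilevel source reaction can be sketched as below. This is not the paper's control law: the congestion levels and decrease factors are invented for illustration. The point is only that multilevel feedback lets the sender scale its window reduction to the severity of congestion, instead of the single halving triggered by one-bit ECN.

```python
def react(cwnd, level):
    """Adjust a congestion window given a 2-bit congestion level (0-3).
    Factors are illustrative; 0 means no congestion signaled."""
    factors = {0: 1.0, 1: 0.9, 2: 0.7, 3: 0.5}
    return max(1.0, cwnd * factors[level])

# A sender receiving mild, then severe, congestion feedback:
w = 32.0
for lvl in [0, 1, 3]:
    w = react(w, lvl)
print(round(w, 1))  # 14.4
```

With one-bit ECN the same three signals would have forced two full halvings; graded feedback allows a gentler response to mild congestion, which is the behavior the abstract credits for faster convergence to the stability point.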

Database Systems
Cooperation Strategy for Broadcast Scheduling and Base Station Caching in Hybrid Wireless Broadcast Environment

Jing Cai, Tsutomu Terada, Takahiro Hara, Shojiro Nishio
Recent advances in computer and wireless communication technologies have increased interest in combinations of various techniques such as pull-based data dissemination, data broadcasting, and data caching, as well as in data management for hybrid data delivery systems. This paper proposes cooperative data management based on the Hybrid Wireless Broadcast (HWB) model to provide more efficient data delivery to mobile clients. Our cooperative strategy integrates broadcast scheduling with cache management at the base station, taking clients' heterogeneous and homogeneous access patterns into consideration, by making effective use of HWB data dissemination, i.e., push-based broadcast, pull-based broadcast, and pull-based point-to-point wireless communication. Our simulation results confirmed that the proposed strategy can further improve system performance.

Network Quality and Control
Evaluations of Web Server Performance with Heavy-tailedness

Takuo Nakashima
Providing quality of service (QoS) in a business environment requires accurate estimation of Internet traffic, especially HTTP traffic. HTTP traffic, consisting mainly of file transfers, depends on file sizes, which follow a heavy-tailed distribution. Analyzing the performance of specific Web servers and comparing server performance is important in QoS provisioning for user applications. In this paper, we present an experimental analysis of Web server performance using our active measurement. We capture activity data from 1,984 diverse Web servers by sending measurement packets 20 times to evaluate the spatial properties, and we select 14 Web servers to which we send packets 2,000 times at 5-second intervals to evaluate the temporal properties. The main contribution of our study is to provide methods for evaluating the temporal and spatial properties of any Web server by measuring from a remote observation host, and to illustrate the current temporal and spatial properties in a series of figures. Crovella's well-known work describes only the self-similar properties of the total transmission time generated by a heavy-tailed file size distribution. We divide the total transmission time into network-dependent and server-system-dependent elements, and capture the heavy-tailed properties of each. We found that the temporal properties consist of two factors: stability and activity in the Poisson process. The robustness of heavy-tailedness across the constituent elements of total transmission time was evident from the analysis of the spatial properties. Finally, we observed the upper boundary and classified groups in a mean-variance plot of the primitive elements of the Web server.

Networked Services in Contextual and Real-world supports
Exploratory Session Analysis in the Mobile Clickstream

Toshihiko Yamakami
As a sub-day-scale behavior analysis, we examined the length of sessions in multiple mobile Internet services. Using mobile clickstreams with user identifiers, two analyses were performed: a preparatory study of timeout values for session identification in 2000, and a long-term observation of session lengths and clicks per session in 2001. The first study showed that 10 minutes is a suitable timeout value for the observed mobile web. The second produced inter-service comparisons and showed the effects of different mobile-Internet-specific factors. The limitations of, and challenges for, mobile-clickstream-based session identification are also discussed.
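Timeout-based session identification of the kind used here can be sketched in a few lines: sort a user's clicks by time and start a new session whenever the gap exceeds the timeout (10 minutes, per the first study). Timestamps below are in seconds and invented for illustration.

```python
def sessionize(click_times, timeout=600):
    """Split one user's click timestamps into sessions, cutting
    wherever the inter-click gap exceeds the timeout (in seconds)."""
    sessions, current = [], []
    for t in sorted(click_times):
        if current and t - current[-1] > timeout:
            sessions.append(current)
            current = []
        current.append(t)
    if current:
        sessions.append(current)
    return sessions

clicks = [0, 120, 300, 1500, 1600, 4000]
print(sessionize(clicks))  # [[0, 120, 300], [1500, 1600], [4000]]
```

Session length and clicks per session, the two quantities tracked in the 2001 observation, then fall out directly as `s[-1] - s[0]` and `len(s)` for each session `s`.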

Musical Sound Analysis
Instrogram: Probabilistic Representation of Instrument Existence for Polyphonic Music

Tetsuro Kitahara, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno
This paper presents a new technique for recognizing musical instruments in polyphonic music. Since conventional musical instrument recognition in polyphonic music is performed notewise, i.e., for each note, accurate estimation of the onset time and fundamental frequency (F0) of each note is required. However, these estimations are generally not easy in polyphonic music, and estimation errors severely degrade recognition performance. Our technique requires neither of these estimations; instead, it calculates the temporal trajectory of instrument existence probabilities for every possible F0. The instrument existence probability is defined as the product of a nonspecific instrument existence probability calculated using PreFEst and a conditional instrument existence probability calculated using hidden Markov models. The instrument existence probability is visualized as a spectrogram-like graphical representation called the instrogram and is applied to MPEG-7 annotation and instrumentation-similarity-based music information retrieval. Experimental results from both synthesized music and real performance recordings show that instrograms achieved MPEG-7 annotation (instrument identification) with average precision rates of 87.5% for synthesized music and 69.4% for real performances, and that the instrumentation similarity measure reflected the actual instrumentation better than an MFCC-based measure.
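The probability combination described above is schematically simple: for each (time, F0) cell, multiply the nonspecific existence probability (from PreFEst) by the conditional instrument probability (from the HMMs). The numbers below are invented for illustration; the real quantities come from audio analysis.

```python
def instrogram_cell(p_nonspecific, p_conditional):
    """p(instrument, F0, t) = p(F0, t) * p(instrument | F0, t)."""
    return p_nonspecific * p_conditional

# One time frame, two F0 candidates, probabilities for one instrument:
p_f0 = {220.0: 0.6, 440.0: 0.3}                # nonspecific, from PreFEst
p_instr_given_f0 = {220.0: 0.8, 440.0: 0.1}    # conditional, from HMMs

cells = {f0: instrogram_cell(p_f0[f0], p_instr_given_f0[f0]) for f0 in p_f0}
print({f0: round(p, 2) for f0, p in cells.items()})  # {220.0: 0.48, 440.0: 0.03}
```

Stacking such cells over time and F0, one grid per instrument, yields the spectrogram-like instrogram representation, with no note onset or F0 decisions ever committed to.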